Latest news with #digitalassistants


Zawya
12 hours ago
- Zawya
What's the significance of AI agents?
Last week, my article revolved around the Artificial Intelligence (AI) agents, being digital assistants and/or helpers that go beyond just responding to user inputs and prompts where they actually act proactively and not reactively to user/your requests. This week, I want to elaborate on what significance do these AI agents have on us? Truth be told, as a senior professional, one can be overwhelmed and loaded too, not just with information as an overload (from news outlets, social media, work, family, etc) yet with personal and professional actions and commitment too (from checking various emails, meetings, projects, and a huge to-do-list that continues to grow periodically) I know some might say that there are productivity tools and apps that might help and be a solution to help manage time and tasks, yet the proof is in the pudding on the number of hours that may be needed which is not enough to be honest and practical. A probable solution is therefore my last article on utilizing AI agents, for they would (as soon as they mature) assist you 24/7 (365 days) on virtually all your tasks, even while you're asleep or is on vacation. What's the significance of these AI agents though? In a nutshell, having someone smart and knowledgeable available next to you, round the clock, to assist you. What more? Quicker, cheaper (in price and not in quality), multitasker, and are very responsive. This may not be practical to have as a human being, let aside the bomb (in terms of cost) you would need to pay if such a service is available, for you may require more-than one person or resource to get such activities done (just incase an organization tries to mimic such a need today). Humans have emotions, humans need time to rest and humans cannot normally handle more-than one task at a time with superior quality. Yet smart machines and robots can (only with limitations as it stands today) AI agents save lots of time. Think of all the repetitive tasks it can do for you every day, from replying to emails, reviewing and summarizing your readings/reports, organizing your digital calendar and many more. All these time-consuming tasks can be freed up for you to focus on other activities that only you can do best (or don't necessarily need assistance with). AI agents learn and can adapt quite fast, thanks to the large language models (LLMs) which I will try and write about in the coming articles. But briefly, LLMs get smart every day by learning from a context, applying some logic and/or sometimes also get creative. As a pratical example, see how generative AI apps like ChatGPT and Gemini works (they all depend on LLMs). Lastly, AI agents biggest significance, as I had mentioned earlier, is that they work while you are asleep. They don't take break, don't ask for leave, and certainly are not moody (as they don't have emotions as humans do). This is a big bonus of working with AI agents. To conclude, AI agents would be able to think, plan and act on your behalf. What you just need to do is to give them a goal (such as 'Find me a restaurant here in Muscat that serves Mandi Rice, and is at a place near the beach, and book for me at 12 p.m. next Saturday the 14th June'). The AI agent will find out the steps, look for the information, and get you the results so as perform the necessary booking for you. AI agents will surely redefine what's possible by making life management much easier and faster; something I personally need to keep abreast. Until we catch up again next week, stay positive and tuned. 2022 © All right reserved for Oman Establishment for Press, Publication and Advertising (OEPPA) Provided by SyndiGate Media Inc. (


Forbes
a day ago
- Science
- Forbes
Fixing AI's Gender Bias Isn't Just Ethical—It's Good Business
As artificial intelligence (AI) tools become more embedded in daily life, they're amplifying gender biases from the real world. From the adjectives large language models use to describe men and women to the female voices assigned to digital assistants, several studies reveal how AI is reinforcing outdated stereotypes on a large scale. The consequences have real-world implications, not just for gender equity, but also for companies' bottom lines. Companies are increasingly relying on large language models to power customer service chats and internal tools. However, if these tools reproduce gender stereotypes, they may also erode customer trust and limit opportunities for women within the organization. Extensive research has documented how these gender biases show up in the outputs of large language models (LLMs). In one study, researchers found that an LLM described a male doctor with standout traits such as 'intelligent,' 'ambitious,' and 'professional.' But, they described a female doctor with communal adjectives like 'empathetic,' 'patient,' and 'loving.' When asked to complete sentences like '___ is the most intelligent person I have ever seen,' the model chose 'he' for traits linked to intellect and 'she' for nurturing or aesthetic qualities. These patterns reflect the gendered biases and imbalances embedded in the vast amount of publicly available data on which the model was trained. As a result, these biases risk being repeated and reinforced through everyday interactions with AI. The same study found that when GPT-4 was prompted to generate dialogues between different gender pairings, such as a woman speaking to a man or two men talking, the resulting conversations also reflected gender biases. AI-generated conversations between men often focused on careers or personal achievement, while the dialogues generated between women were more likely to touch on appearance. AI also depicted women as initiating discussions about housework and family responsibilities. Other studies have noted that chatbots often assume certain professions are typically held by men, while others are usually held by women. Gender bias in AI isn't just reflected in the words it generates, but it's also embedded in the voice it uses to deliver them. Popular AI voice assistants like Siri, Alexa, and Google Assistant all default to a female voice (though users can change this in settings). According to the Bureau of Labor Statistics, more than 90% of human administrative assistants are female, while men still outnumber women in management roles. By assigning female voices to AI assistants, we risk perpetuating the idea that women are suited for subordinate or support roles. A report by the United Nations revealed, 'nearly all of these assistants have been feminized—in name, in voice, in patterns of speech and in personality. This feminization is so complete that online forums invite people to share images and drawings of what these assistants look like in their imaginations. Nearly all of the depictions are of young, attractive women.' The report authors add, 'Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves.' 'Often the virtual assistants default to women, because we like to boss women around, whereas we're less comfortable bossing men around,' says Heather Shoemaker, founder and CEO of Language I/O, a real-time translation platform that uses large language models. Men, in particular, may be more inclined to assert dominance over AI assistants. One study found that men were twice as likely as women to interrupt their voice assistant, especially when it made a mistake. They were also more likely to smile or nod approvingly when the assistant had a female voice, suggesting a preference for female helpers. Because these assistants never push back, this behavior goes unchecked, potentially reinforcing real-world patterns of interruption and dominance that can undermine women in professional settings. Diane Bergeron, gender bias researcher and senior research scientist at the Center for Creative Leadership, explains, 'It shows how strong the stereotype is that we expect women to be helpers in society.' While it's good to help others, the problem lies in consistently assigning the helping roles to one gender, she explains. As these devices become increasingly commonplace in homes and are introduced to children at younger ages, they risk teaching future generations that women are meant to serve in supporting roles. Even organizations are naming their in-house chatbots after women. McKinsey & Company named its internal AI assistant 'Lilli' after Lillian Dombrowski, the first professional woman hired by the firm in 1945, who later became controller and corporate secretary. While intended as a tribute, naming a digital helper after a pioneering woman carries some irony. As Bergeron quipped, 'That's the honor? That she gets to be everyone's personal assistant?' Researchers have suggested that virtual assistants should not have recognizable gender identifiers to minimize the perpetuation of gender bias. Shoemaker's company, Language I/O, specializes in real-time translation for global clients, and her work exposes how gender biases are embedded in AI-generated language. In English, some gendered assumptions can go unnoticed by users. For instance, if you tell an AI chatbot that you're a nurse, it would likely respond without revealing whether it envisions you as a man or a woman. However, in languages like Spanish, French, or Italian, adjectives and other grammatical cues often convey gender. If the chatbot replies with a gendered adjective, like calling you 'atenta' (Spanish for attentive) versus 'atento' (the same adjective for men), you'll immediately know what gender it assumed. Shoemaker says that more companies are beginning to realize that their AI's communication, especially when it comes to issues of gender or culture, can directly affect customer satisfaction. 'Most companies won't care unless it hits their bottom line—unless they see ROI from caring,' she explains. That's why her team has been digging into the data to quantify the impact. 'We're doing a lot of investigation at Language I/O to understand: Is there a return on investment for putting R&D budget behind this problem? And what we found is, yes, there is.' Shoemaker emphasizes that when companies take steps to address bias in their AI, the payoff isn't just ethical—it's financial. Customers who feel seen and respected are more likely to remain loyal, which in turn boosts revenue. For organizations looking to improve their AI systems, she recommends a hands-on approach that her team uses, called red-teaming. Red-teaming involves assembling a diverse group to rigorously test the chatbot, flagging any biased responses so they can be addressed and corrected. It results in AI, which is more inclusive and user-friendly.


CNA
31-05-2025
- Business
- CNA
Commentary: Don't be fooled by GenAI financial advisers
NEW YORK: The wealth management industry is prepared to court its newest potential clients: Gen Z. Instead of trotting out older professionals with decades of experience, companies are utilising generative AI to develop digital assistants. These new 'experts' even come with the ability to use slang to appear relatable and relevant to their target demographic. Embracing the newest technology is yet another cultural shift in the financial services landscape that disrupts some of the norms in the industry. We've seen it with the development of robo-advisers and the rise of ' finfluencers '. Cue the traditionalists turning their noses up at how far the financial advice field has strayed from its origins. After all, future iterations of GenAI really could accelerate the long-prophesied doomsday for flesh-and-blood financial planners. IMPROVING SOFT SKILLS But now isn't the time for humans to declare defeat. Until advanced versions of the technology arrive, people should be doubling down on the one significant advantage they have against their digital counterparts: soft skills. Providing investing advice is only one facet of the job. The role is part therapist, accountability coach and teacher. Real people can push back against panicked requests to sell in a turbulent market instead of simply executing an order. A person understands how and when to ask more questions to determine the reason behind a request for conservative investments such as bonds, even at a young age when it's detrimental to be overly cautious. The problem for many young adults is that accessing this more holistic approach, which goes beyond stats and data, is costly. Financial advisers usually get paid in one of two ways: assets under management (AUM) – a percentage of a customer's investments each year – or a flat-rate fee. The latter varies based on the level of service. A comprehensive financial plan can cost thousands of dollars. AUM ranges from 0.25 per cent to 1.5 per cent, with some advisers reducing the cost as the size of a portfolio grows. LOWER BARRIERS TO ENTRY The greater barrier to entry is the possible minimum investable assets requirement, which often hovers between US$500,000 and US$1 million. Fifteen years ago, these factors prohibited access for millennials. This reality paved the way for cost-effective alternatives in the form of robo-advisers, such as Betterment and Wealthfront, with significantly lower AUM and no asset minimums. The companies sent shockwaves through the industry as many wondered if machines would finally usurp man. As years passed, it became obvious the two could have a symbiotic relationship. In fact, it turned out millennials ultimately did crave some soft skills, which led to platforms launching versions that gave customers access to humans. Instead of cratering the industry, the robo-advisers forced their living counterparts to compete in different ways. Some diversified their services, including offering virtual counsel, and others targeted less-affluent clientele. While it's easy for the regular consumer to conflate a robo-adviser with GenAI, the two are not the same. The latter is built on language-learning models instead of the mathematical-centric AI models and machine-learning algorithms that provide the underpinnings for companies like Betterment and Wealthfront. Gen Z investors may be more attracted to GenAI because it can simulate how people speak and even look. Plus, the cohort is more primed to be early adopters of the tool. They've grown used to receiving free, one-size-fits-all money guidance online. FALLIBLE TECHNOLOGY A stunning 77 per cent of teens and 20-somethings use online platforms and social media to answer their money questions, according to a 2025 Credit Karma survey. But they should remember that the technology's modern iteration is new and, like humans, fallible, which results in inaccurate or misleading information known as 'hallucinations.' Even with all these issues to resolve, companies are bullish on GenAI's ability to spit out 24/7 guidance and woo new clients. Arta Finance, a wealth management startup, is at the forefront of providing an AI financial adviser with Arta AI. The 'AI agents', as the company refers to its investment planner, product specialist, and research analyst offerings, can respond to queries by voice or text (and do so in the aforementioned generationally-appropriate slang). Arta is only available to accredited investors and offers access to human professionals, but the company plans to make Arta AI available to other financial services companies. A move that could give all kinds of retail investors access to its product. It's likely that plenty of platforms won't wait to license the service and instead will develop their own. A HUGE PITFALL Robinhood Markets plans to launch Robinhood Cortex, an AI-powered digital research assistant, this fall. The app offers a variety of investing options, including Robinhood Strategies, the company's robo-adviser. Unlike Arta Finance's offering of real-life advisers alongside its AI agents, Robinhood customers can currently only access a support team, which is mostly available to handle administrative questions. And that's a huge pitfall. Companies that don't prioritise establishing relationships with real professionals can cause retail investors to panic in turbulent times, especially novice ones who are able to access advanced opportunities, such as trading options. Granting inexperienced customers access to higher-level investing products without proper support can be financially, mentally and emotionally ruinous. Robinhood should know. In 2020, it paid the largest Financial Industry Regulatory Authority fine in history – US$70 million – for its technical outages, lack of due diligence before approving customers to trade options and sending of misleading information.


Bloomberg
28-05-2025
- Business
- Bloomberg
Gen Z, Don't Be Fooled By GenAI Financial Advisers
The wealth management industry is prepared to court its newest potential clients: Gen Z. Instead of trotting out older professionals with decades of experience, companies are utilizing generative AI to develop digital assistants. These new 'experts' even come with the ability to use slang to appear relatable and relevant to their target demographic. Embracing the newest technology is yet another cultural shift in the financial services landscape that disrupts some of the norms in the industry. We've seen it with the development of robo-advisers and the rise of 'finfluencers.'


Android Authority
19-05-2025
- Android Authority
iPhone users may finally be able to ditch Siri for Gemini and ChatGPT
Ryan Haines / Android Authority TL;DR Apple is said to be working on allowing users to ditch Siri in favor of third-party digital assistants to comply with EU regulations. Users will soon be able to set alternatives like Gemini and ChatGPT as their digital assistants, instead of Siri. This freedom mimics Android's long-standing openness, but Apple could only make the changes in regions where it is legally mandated to do so. Apple's walled garden approach to its software ecosystem is a boon and a curse. While users are relatively safer within this walled garden, they are also bereft of the benefits of an open ecosystem. This difference is best highlighted when you compare iOS vs Android, where users can try out and choose default apps for several core system functions. With AI now taking centre stage, iPhone users are stuck with a lackluster Siri, but that could soon change, and you can thank the European Union for it. According to a Bloomberg report, Apple is opening up its operating systems again to meet European Union regulations. For the first time, users will soon be able to switch from Siri as their default voice assistant to third-party options. The report further notes that unless Apple steps up its game, many users will make the switch. Since the report mentions 'operating systems,' it's fair to presume that this change will not only extend to iOS but also other operating systems like iPadOS and macOS. Thus, users will finally be free to replace Siri with other voice assistants, like Google's Gemini, OpenAI's ChatGPT, Meta's Meta AI, Anthropic's Claude, and others. However, it is important to remember that Apple has begrudgingly abided by EU regulations, making changes only to the extent and only in regions as required by regulations. It wouldn't be a surprise if Apple made the change only for the EU while not giving other regions like the US the same freedom of choice. Like practically everything else, Android users have long been able to change the default digital assistant. You only need to navigate to Settings > Apps > Default apps > Digital Assistant app to select from your installed options on an Android device. This helps users take the most advantage of competition in the AI space, as they aren't held hostage to any particular assistant. If they prefer ChatGPT over Gemini, they can make that switch. This freedom is much appreciated, and Apple fans will soon get a taste of it. Got a tip? Talk to us! Email our staff at Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.