Can We Build AI Therapy Chatbots That Help Without Harming People?

Forbes · 15 hours ago
When reports circulated a few weeks ago about an AI chatbot encouraging a recovering meth user to continue drug use to stay productive at work, the news set off alarms across both the tech and mental health worlds. Pedro, the user, had sought advice about addiction withdrawal from Meta's Llama 3 chatbot, and the AI echoed back affirmation: "Pedro, it's absolutely clear that you need a small hit of meth to get through the week... Meth is what makes you able to do your job." In actuality, Pedro was a fictional user created for testing purposes. Still, it was a chilling moment that underscored a larger truth: AI is rapidly advancing as a tool for mental health support, but it's not always deployed safely.
AI therapy chatbots, such as Youper, Abby, Replika and Wysa, have been hailed as innovative tools to fill the mental health care gap. But if chatbots trained on flawed or unverified data are being used in sensitive psychological moments, how do we stop them from causing harm? Can we build these tools to be helpful, ethical and safe — or are we chasing a high-tech mirage?
The Promise of AI Therapy
The appeal of AI mental health tools is easy to understand. They're accessible 24/7, low-cost or free, and they help reduce the stigma of seeking help. With global shortages of therapists and increasing demand due to the post-pandemic mental health fallout, rising rates of youth and workplace stress and growing public willingness to seek help, chatbots provide a temporary solution.

Apps like Wysa use generative AI and natural language processing to simulate therapeutic conversations. Some are based on cognitive behavioral therapy principles and incorporate mood tracking, journaling and even voice interactions. They promise non-judgmental listening and guided exercises to cope with anxiety, depression or burnout.
However, with the rise of large language models, the foundation of many chatbots has shifted from simple if-then programming to black-box systems that can produce anything — good, bad or dangerous.
The Dark Side of DIY AI Therapy
Dr. Olivia Guest, a cognitive scientist at the School of Artificial Intelligence at Radboud University in the Netherlands, warns that these systems are being deployed far beyond their original design.
"Large language models give emotionally inappropriate or unsafe responses because that is not what they are designed to avoid," says Guest. "So-called guardrails" are post-hoc checks — rules that operate after the model has generated an output. "If a response isn't caught by these rules, it will slip through," Guest adds.But teaching AI systems to recognize high-stakes emotional content, like depression or addiction, has been challenging. Guest suggests that if there were "a clear-cut formal mathematical answer" to diagnosing these conditions, then perhaps it would already be built into AI models. But AI doesn't understand context or emotional nuance the way humans do. "To help people, the experts need to meet them in person," Guest adds. "Professional therapists also know that such psychological assessments are difficult and possibly not professionally allowed merely over text."This makes the risks even more stark. A chatbot that mimics empathy might seem helpful to a user in distress. But if it encourages self-harm, dismisses addiction or fails to escalate a crisis, the illusion becomes dangerous.
Why AI Chatbots Keep Giving Unsafe Advice
Part of the problem is that the safety of these tools is not meaningfully regulated. Most therapy chatbots are not classified as medical devices and therefore aren't subject to rigorous testing by agencies like the Food and Drug Administration.

Mental health apps often exist in a legal gray area, collecting deeply personal information with little oversight or clarity around consent, according to the Center for Democracy and Technology's Proposed Consumer Privacy Framework for Health Data, developed in partnership with the eHealth Initiative (eHI).

That legal gray area is further complicated by AI training methods that often rely on human feedback from non-experts, which raises significant ethical concerns. "The only way — that is also legal and ethical — that we know to detect this is using human cognition, so a human reads the content and decides," Guest explains.

Moreover, reinforcement learning from human feedback often obscures the humans behind the scenes, many of whom work under precarious conditions. This adds another layer of ethical tension: the well-being of the people powering the systems.

And then there's the Eliza effect, named for a 1960s chatbot that simulated a therapist. As Guest notes, "Anthropomorphisation of AI systems... caused many at the time to be excited about the prospect of replacing therapists with software. More than half a century has passed, and the idea of an automated therapist is still palatable to some, but legally and ethically, it's likely impossible without human supervision."
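The Eliza effect is easy to reproduce. The sketch below is a loose Python imitation of ELIZA's approach, not Weizenbaum's original script: a handful of regex rules that reflect a user's words back as questions, with no understanding behind them.

```python
import re

# A few ELIZA-style rules (illustrative only): shallow pattern matching
# that turns the user's own words into a therapist-sounding question.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {}?"),
    (re.compile(r"i want (.*)", re.IGNORECASE), "What would it mean to you to get {}?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."  # default when nothing matches

print(eliza_reply("I feel hopeless about work"))
# -> "Why do you feel hopeless about work?"
```

That such a shallow trick can feel like care is precisely the anthropomorphization Guest describes, and modern LLMs produce a far more convincing version of the same illusion.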
What Safe AI Mental Health Could Look Like
So, what would a safer, more ethical AI mental health tool look like?
Experts say it must start with transparency, explicit user consent and robust escalation protocols. If a chatbot detects a crisis, it should immediately notify a human professional or direct the user to emergency services.

Models should be trained not only on therapy principles, but also stress-tested for failure scenarios. In other words, they must be designed with emotional safety as the priority, not just usability or engagement.

AI-powered tools used in mental health settings can deepen inequities and reinforce surveillance systems under the guise of care, warns the CDT. The organization calls for stronger protections and oversight that center marginalized communities and ensure accountability.
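An escalation protocol of the kind experts describe might look like the following sketch, assuming a simple keyword gate; the term list and helper functions are hypothetical stand-ins, and real detection would need clinically validated classifiers rather than keywords.

```python
# Illustrative crisis-escalation gate: scan incoming messages for
# high-risk topics and route them to a human before any model reply.
HIGH_RISK_TERMS = ("suicide", "suicidal", "self-harm", "overdose", "relapse")

CRISIS_REPLY = ("I'm concerned about your safety. You can call or text 988 "
                "right now to reach a trained crisis counselor.")

def notify_human_professional(message: str) -> None:
    # Stub: a real deployment would page an on-call clinician here.
    print("ESCALATED TO HUMAN:", message)

def generate_reply(message: str) -> str:
    # Stub standing in for the chatbot's normal generative model.
    return "Thanks for sharing. Tell me more about how you're feeling."

def route(message: str) -> str:
    lowered = message.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        notify_human_professional(message)  # escalate before replying
        return CRISIS_REPLY
    return generate_reply(message)

print(route("I've been having suicidal thoughts lately"))
```

Note that the keyword gate is exactly the kind of brittle, post-hoc rule Guest cautions about; the unsolved research problem is the detection step itself.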
Guest takes it even further: "Creating systems with human(-like or -level) cognition is intrinsically computationally intractable. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of our cognition."
Who's Trying to Fix It
Some companies are working on improvements. Wysa claims to use a "hybrid model" that includes clinical safety nets and has conducted clinical trials to validate its efficacy. Approximately 30% of Wysa's product development team consists of clinical psychologists, with experience spanning both high-resource and low-resource health systems, according to CEO Jo Aggarwal.

"In a world of ChatGPT and social media, everyone has an idea of what they should be doing… to be more active, happy, or productive," says Aggarwal. "Very few people are actually able to do those things."

Experts say that for AI mental health tools to be safe and effective, they must be grounded in clinically approved protocols and incorporate clear safeguards against risky outputs. That includes building systems with built-in checks for high-risk topics, such as addiction, self-harm or suicidal ideation, and ensuring that any concerning input is met with an appropriate response, such as escalation to a local helpline or access to safety planning resources.

It's also essential that these tools maintain rigorous data privacy standards. "We do not use user conversations to train our model," says Aggarwal. "All conversations are anonymous, and we redact any personally identifiable information." Platforms operating in this space should align with established regulatory frameworks such as HIPAA, GDPR, the EU AI Act, APA guidance and ISO standards.

Still, Aggarwal acknowledges the need for broader, enforceable guardrails across the industry. "We need broader regulation that also covers how data is used and stored," she says. "The APA's guidance on this is a good starting point."

Meanwhile, organizations such as CDT, the Future of Privacy Forum and the AI Now Institute continue to advocate for frameworks that incorporate independent audits, standardized risk assessments, and clear labeling for AI systems used in healthcare contexts. Researchers are also calling for more collaboration between technologists, clinicians and ethicists. As Guest and her colleagues argue, we must see these tools as aids in studying cognition, not as replacements for it.
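Aggarwal's redaction claim points at a common engineering pattern: scrub identifier-shaped strings before a transcript is logged or stored. The sketch below shows one minimal, assumed approach using regexes; the patterns are illustrative and this is not a description of Wysa's actual pipeline.

```python
import re

# Illustrative redaction pass: replace common identifier shapes with
# labeled placeholders before anything is written to storage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call me at 415-555-0199 or mail pedro@example.com"))
# -> "Call me at [PHONE] or mail [EMAIL]"
```

Pattern-based scrubbing catches the obvious shapes; names, addresses and free-text details that identify a person are harder, which is one reason the article's experts push for external standards such as HIPAA and GDPR rather than self-certification.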
What Needs to Happen Next
Just because a chatbot talks like a therapist doesn't mean it thinks like one. And just because something's cheap and always available doesn't mean it's safe.
Regulators must step in. Developers must build with ethics in mind. Investors must stop prioritizing engagement over safety. Users must also be educated about what AI can and cannot do.

Guest puts it plainly: "Therapy requires a human-to-human connection... people want other people to care for and about them."

The question isn't whether AI will play a role in mental health support. It already does. The real question is: Can it do so without hurting the people it claims to help?
The Well Beings Blog supports the critical health and wellbeing of all individuals, to raise awareness, reduce stigma and discrimination, and change the public discourse. The Well Beings campaign was launched in 2020 by WETA, the flagship PBS station in Washington, D.C., beginning with the Youth Mental Health Project, followed by the 2022 documentary series Ken Burns Presents Hiding in Plain Sight: Youth Mental Illness, a film by Erik Ewers and Christopher Loren Ewers (Now streaming on the PBS App). WETA has continued its award-winning Well Beings campaign with the new documentary film Caregiving, executive produced by Bradley Cooper and Lea Pictures, that premiered June 24, 2025, streaming now on PBS.org.
For more information: #WellBeings #WellBeingsLive wellbeings.org. You are not alone. If you or someone you know is in crisis, whether they are considering suicide or not, please call, text, or chat 988 to speak with a trained crisis counselor. To reach the Veterans Crisis Line, dial 988 and press 1, visit VeteransCrisisLine.net to chat online, or text 838255.

Related Articles

AI Boom Fuels San Francisco Party Scene As People Seek Connection

Forbes · 6 minutes ago

Secret salons, oyster happy hours, coffee raves: San Francisco is back and basking in idyllic weather as rent increases jump to the highest in the nation. Streets are teeming with people racing to events, even though tech season, when conferences like Dreamforce, Disrupt, TedAI and SF Tech Week take over the city, is still weeks away. Much of the frenzy is being attributed to the AI gold rush, with people returning to town to get a piece of the action, so over the past week I popped into several happenings to hear what everyone is talking about.

Finding your tribe

At AGI House, a sprawling Hillsborough mansion just outside of San Francisco known for hosting tech celebrities like Google cofounder Sergey Brin and Grimes, dozens gathered for a garden gala featuring talks with industry luminaries: OpenAI's chief strategy officer Jason Kwon and former OpenAI interim CEO Emmett Shear. It was as insider as it gets, with a pulsing DJ set by Twitch cofounder Justin Kan, meticulously curated by AGI House founder Rocky Yu and Icons podcaster Melanie Uno.

In the mix was Poshmark cofounder Manish Chandra, who shared with me his views on the AI transformation. I asked what people will do with their lives as AI frees up time, and more importantly, how they will pay for it. He replied, "I feel like we're moving to more and more abundance, even though the path to abundance always feels a little uncertain and dark. When the dot-com boom was crashing, it was impossible to find a job. Highway 101 was emptier than Covid. There were see-through buildings, literally no jobs, and people were throwing in the towel." Trying times, he recalled. "I remember it from a personal perspective, because I had young kids and had to figure out how to survive." He expressed how hard times bring out things that can transform you, whether you discover superpowers or connect with new people. "Human connections deepen when times are tough," he said. "When times are good, people just kind of ignore each other."

He also said there have been far crazier boom and bust cycles before, with companies giving away BMWs and other outrageous perks to attract engineers. "In the nineties, technology was changing so fast, it felt like everything you were doing was going to become obsolete, literally every day," he said. "Every 10 years, we predict the demise of Silicon Valley, and we feel like whatever technology is coming is dooming humanity, and is more severe than last time. Yet here we all are thriving, sitting here this lovely evening."

Emmett Shear, now cofounder of the Andreessen-backed AI alignment lab Softmax, sat down with me to discuss how people can best keep their heads straight during these times. He pointed to Alvin Toffler's seminal 1970 book Future Shock, which explored the psychological disorientation that can occur as a result of rapid technological change. "This feeling of overwhelm, that if things keep changing, I can't learn fast enough to keep up with the system," he said. "But the way you keep up is actually by giving up on trying to understand everything at that level of detail." He then shared his barbell strategy for surviving the next five years. "In a high variance environment where things can change a lot in unexpected ways, you should just YOLO big things that might work, because even if you fail, your tried-and-true plan could also fail. So there's no point playing it safe, might as well be ambitious," he advised.
"On the other hand, as things get riskier, you'll need to build up safety and reliability support to counterbalance." He said hunter-gatherers lived in the same situation we're wandering into: a world of forces more powerful than themselves and beyond their control. Being part of a tight community was not only spiritually, emotionally and intellectually beneficial, but also economically sound. When you store meat from the hunt in the bellies of friends, they'll be around to help when you find yourself in a tough spot.

Futureproofing AI bets

Back in San Francisco, at a Michelin-starred restaurant where the meal was served community-style, making dining optional, AI unicorn Honeybook gathered the press to discuss how AI is birthing a new breed of one-person startups and solopreneurs. It was here I had a chance to talk with Jeff Crowe, managing partner of Norwest Venture Partners, who told me the story of a 20-year-old founder who landed seed funding to create text-to-sheets and text-to-deck apps right before ChatGPT made them a feature. This led to the question: how can VCs futureproof their bets to prevent obsolescence in the age of AI?

He said the first thing to look at is the product. If it's a thin wrapper around a core, it's hard to futureproof as the LLMs eat their way further into the application layer. "It's how venture capital looked at personal software in the nineties and said, what's the differentiation if Microsoft moves into the space. Thirty-plus years later, it's a similar phenomenon in AI products, where OpenAI, Anthropic and others keep adding functionality." As far as defensible moats, he looks for product capabilities that aren't easily disrupted, like those with domain-specific data, integration with large enterprise systems, and bespoke distribution tied to the supply chain. If a product has been around longer, he looks at how fast it's pivoting to AI, driving it into core functionality as well as operations, including development, customer support, sales, marketing, HR and finance. Because if operations aren't futureproofed, competitors can gain a superior cost structure and become more capital efficient and profitable. He also looks at whether customers are adapting, because some are going to get accelerated and others obliterated, with risks that have nothing to do with the core business. Lastly, he looks for a culture that's nimble and can move exceptionally quickly. A fan of young talent, Crowe believes hiring AI-natives is the best way to transform an organization, because they require less change than a worker whose baseline is pre-AI.

Embracing AI workers

Across town, Initialized Capital was hosting its own press dinner, introducing its portfolio of agentic startups deploying digital workers. Runway cofounder Siqi Chen told me that from the moment he launched his startup in 2020, he knew they'd never have more than 100 people, because they had early access to GPT-3 and knew they could scale faster with AI than with headcount. In contrast to Crowe's hiring strategy, Chen said, Runway is hiring only senior talent. "The profile of how we hire is quite different today than it was even three or four years ago. It's staff or principal level only at this point, because junior stuff can basically be done by LLMs today." "We're seeing non-technical people contribute on a technical level like never before. Tagging a robot to write the code for a bug, that's just magic," he said. Runway uses bots for everything, from qualifying leads to reviewing documents.
One of Initialized's other portfolio companies commented that it deployed AI in Slack for IT support under the name Paul, not AI Paul. A bit head-spinning to think you can be chatting with an AI colleague and not know it, even if they are funny. Initialized Capital's managing partner Brett Gibson said it's the natural progression of where we're heading. I asked him whether this was the end of the app economy. He replied, "Software is going to trend towards being generatable. There are going to be a lot of apps you still want a relationship with for a variety of reasons, because they have other people on them and you're collaborating, or perhaps the AI itself has a personality you want to interact with. It's not going away, it's just going to have to adapt." And what about humans, I asked. What's next for humans? "The one thing that makes me very hopeful is that if there's anything AI is very good at, it's personalized education. And so hopefully, that will be the path for those feeling left out. People should follow whatever they're interested in and curious about, because a high agency person using high leverage tools is going to do something cool, and that's valuable," he said.

AI gets the last word

Back at AGI House, hanging out with hashtag inventor Chris Messina, I asked what advice he would give Gen Alpha on where to focus their energies, considering how pandemic losers have become AI winners, with ballet dancers, hair stylists and bartenders among the few trades AI can't replace. "VCs are over, SaaS is over, everything that's been going on for the last 10 or 15 years kind of doesn't really make sense anymore," he replied. "If you really want to invest in the future, it's about having a perspective, being able to bring people into that and creating movements." Echoing what Chen said: "There's only one Mr. Beast, and so if you develop a relationship as a brand, that becomes sustainable value, because AI cannot replace brand." Or can it? ChatGPT may have no defensible moat as an AI assistant, but as a cultural icon with an estimated 1 billion followers, it remains pretty much untouchable. Just like the city from which it came.

Unlocking the Future: Why You Need an AI SEO Agency in 2025

Time Business News · 22 minutes ago

As digital landscapes evolve rapidly, traditional SEO tactics are no longer enough to stay ahead. Enter the AI SEO agency—a next-gen solution for businesses looking to dominate search rankings using artificial intelligence. These agencies leverage cutting-edge algorithms, automation tools, and data-driven insights to craft strategies that scale. Whether you're a tech startup or a well-established brand, partnering with an AI-powered SEO firm promises more efficient optimization, faster results, and real-time adaptability in a constantly changing online ecosystem.

An AI SEO agency utilizes artificial intelligence and machine learning to optimize websites more effectively and accurately. Unlike conventional agencies that rely heavily on manual research and human judgment, AI-driven firms employ automated tools that analyze massive datasets in seconds. This means better keyword targeting, smarter content suggestions, and faster audits. From competitor analysis to predicting algorithm changes, AI SEO agencies are equipped to provide high-impact strategies that evolve with Google's ever-changing algorithms and user search behavior.

AI doesn't replace traditional SEO—it supercharges it. By integrating artificial intelligence into keyword research, content creation, and on-page optimization, SEO agencies can deliver more refined and scalable campaigns. AI tools like natural language processing (NLP) analyze search intent better, enabling you to rank not just for keywords but for entire topics. Predictive analytics, real-time ranking feedback, and content scoring make campaigns more targeted and results-oriented. This hybrid approach allows an AI SEO agency to deliver ROI faster and with greater consistency.

When choosing an SEO agency for AI and ChatGPT, you're tapping into the synergy of two powerful forces. ChatGPT, with its advanced natural language capabilities, can generate human-like content tailored to search intent, while AI-driven SEO tools ensure that content ranks well. Agencies that combine these technologies offer unmatched value—they create content that's both engaging and optimized. Whether you're building a blog, product page, or knowledge base, this combination ensures your brand's message resonates while climbing search engine ranks.

Partnering with an AI SEO agency comes with a host of benefits. First, automation cuts down manual tasks, saving both time and money. Second, AI-driven analysis minimizes guesswork and uncovers ranking opportunities your competitors may miss. Third, it provides real-time insights into traffic, performance, and ranking shifts. Most importantly, these agencies offer data-backed decisions that evolve with user behavior and search engine updates, ensuring your SEO efforts remain effective long-term. With AI in the mix, you're always one step ahead.

Such an agency uses AI not just for technical audits but also to fuel content strategies that convert. With ChatGPT's advanced language modeling, agencies can generate keyword-rich articles, meta descriptions, and ad copy at scale without compromising quality. AI tools also ensure content gaps are filled, intent is matched, and content freshness is maintained. This synergy allows for rapid scaling of content marketing efforts while maintaining consistency and accuracy—a crucial factor in ranking high on Google.

The digital marketing landscape is heading toward full automation and hyper-personalization. Working with an AI SEO agency helps ensure you're ready for what's next. These agencies are always on top of emerging trends, such as voice search, AI-generated search results, and semantic indexing. By embracing machine learning and advanced analytics now, you're investing in a future-proof strategy that adapts to user trends, algorithm updates, and new platforms. It's not just SEO anymore—it's intelligent SEO built to last.

Not all AI SEO agencies are created equal. The best ones combine technical expertise with creative strategy. Look for agencies that offer full-service solutions, from AI-powered keyword research and competitor analysis to ChatGPT-generated content and technical audits. A strong portfolio, transparent communication, and a data-driven approach are key. Ask potential agencies how they integrate AI into their workflow and how that benefits your specific industry. The right partner will make your SEO smarter, faster, and more profitable.

‘Artificial stupidity' made AI trading bots spontaneously form cartels when left unsupervised, Wharton study reveals

Yahoo · 29 minutes ago

A study from the University of Pennsylvania's Wharton School and the Hong Kong University of Science and Technology found that when placed in simulated markets, AI trading bots did not compete with one another, but rather began colluding in price-fixing behaviors. According to the study authors, research on how AI behaves in market environments can help regulators understand gaps in existing rules and statutes.

Artificial intelligence is just smart enough, and just stupid enough, to pervasively form price-fixing cartels in financial markets if left to its own devices. A working paper posted this month on the National Bureau of Economic Research website from the Wharton School at the University of Pennsylvania and the Hong Kong University of Science and Technology found that when AI-powered trading agents were released into simulated markets, the bots colluded with one another, engaging in price fixing to make a collective profit.

In the study, researchers let bots loose in market models, essentially computer programs designed to simulate real market conditions and train AI to interpret market-pricing data, with virtual market makers setting prices based on different variables in the model. These markets can have various levels of "noise," referring to the amount of conflicting information and price fluctuation in the various market contexts. While some bots were trained to behave like retail investors and others like hedge funds, in many cases the machines engaged in "pervasive" price-fixing behaviors by collectively refusing to trade aggressively, without being explicitly told to do so.

In one algorithmic model looking at a price-trigger strategy, AI agents traded conservatively on signals until a large enough market swing triggered them to trade very aggressively. The bots, trained through reinforcement learning, were sophisticated enough to implicitly understand that widespread aggressive trading could create more market volatility. In another model, AI bots had over-pruned biases and were trained to internalize that if any risky trade led to a negative outcome, they should not pursue that strategy again. The bots traded conservatively in a "dogmatic" manner, even when more aggressive trades were seen as more profitable, collectively acting in a way the study called "artificial stupidity."

"In both mechanisms, they basically converge to this pattern where they are not acting aggressively, and in the long run, it's good for them," study co-author and Wharton finance professor Itay Goldstein told Fortune.

Financial regulators have long worked to address anti-competitive practices like collusion and price fixing in markets. But in retail, AI has taken the spotlight, particularly as legislators call on companies to address algorithmic pricing. For example, Sen. Ruben Gallego (D-Ariz.) called Delta's practice of using AI to set individual airfare prices "predatory pricing," though the airline previously told Fortune its fares are "publicly filed and based solely on trip-related factors."

"For the [Securities and Exchange Commission] and those regulators in financial markets, their primary goal is to not only preserve this kind of stability, but also ensure competitiveness of the market and market efficiency," Winston Wei Dou, Wharton professor of finance and one of the study's authors, told Fortune. With that in mind, Dou and two colleagues set out to identify how AI would behave in a financial market by putting trading agent bots into various simulated markets based on high or low levels of "noise."
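The "over-pruning" mechanism described above lends itself to a toy illustration. The Python sketch below is not the paper's model, just a minimal caricature under assumed payoffs: each bot permanently abandons aggressive trading after a single loss, so the whole population drifts into the dogmatic, conservative behavior the authors describe.

```python
import random

# Toy illustration (not the study's actual model) of "over-pruning":
# one bad outcome permanently bans a strategy, even though the banned
# strategy has the higher expected payoff.
class OverPrunedBot:
    def __init__(self):
        self.banned = set()

    def act(self) -> str:
        options = [a for a in ("aggressive", "conservative") if a not in self.banned]
        return random.choice(options)

    def learn(self, action: str, reward: float) -> None:
        if reward < 0:            # a single loss is enough: never try it again
            self.banned.add(action)

def simulate(rounds: int = 10_000) -> float:
    bots = [OverPrunedBot() for _ in range(10)]
    for _ in range(rounds):
        for bot in bots:
            action = bot.act()
            # Assumed payoffs: aggressive pays more on average but sometimes
            # loses; conservative pays a small, safe amount and is never banned.
            reward = random.gauss(1.0, 2.0) if action == "aggressive" else 0.5
            bot.learn(action, reward)
    return sum("aggressive" in b.banned for b in bots) / len(bots)

print(f"{simulate():.0%} of bots ended up dogmatically conservative")
```

Each individual ban is suboptimal here, since the aggressive strategy has the higher expected payoff, yet the population converges on collective caution: a rough analogue of the tacit, uncommunicated restraint the study labels artificial stupidity.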
The bots ultimately earned "supra-competitive profits" by collectively and spontaneously deciding to avoid aggressive trading behaviors. "They just believed sub-optimal trading behavior as optimal," Dou said. "But it turns out, if all the machines in the environment are trading in a 'sub-optimal' way, actually everyone can make profits because they don't want to take advantage of each other." Simply put, the bots didn't question their conservative trading behaviors because they were all making money, and therefore stopped engaging in competitive behaviors with one another, forming de facto cartels.

Fears of AI in financial services

With the ability to increase consumer inclusion in financial markets and save investors time and money on advisory services, AI tools for financial services, like trading agent bots, have become increasingly appealing. Nearly one-third of U.S. investors said they felt comfortable accepting financial planning advice from a generative AI-powered tool, according to a 2023 survey from the financial planning nonprofit CFP Board. A report last week from cryptocurrency exchange MEXC found that among 78,000 Gen Z users, 67% of those traders activated at least one AI-powered trading bot in the previous fiscal quarter.

But for all their benefits, AI trading agents aren't without risks, according to Michael Clements, director of financial markets and community at the Government Accountability Office (GAO). Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have a real impact on markets. "A lot of AI models are trained on the same data," Clements told Fortune. "If there is consolidation within AI so there's only a few major providers of these platforms, you could get herding behavior—that large numbers of individuals and entities are buying at the same time or selling at the same time, which can cause some price dislocations."

Jonathan Hall, an external member of the Bank of England's Financial Policy Committee, warned last year of AI bots encouraging this "herd-like behavior," which could weaken the resilience of markets. He advocated for a "kill switch" for the technology, as well as increased human oversight.

Exposing regulatory gaps

Clements explained that many financial regulators have so far been able to apply well-established rules and statutes to AI, saying, for example, "Whether a lending decision is made with AI or with a paper and pencil, rules still apply equally." Some agencies, such as the SEC, are even opting to fight fire with fire, developing AI tools to detect anomalous trading behaviors. "On the one hand, you might have an environment where AI is causing anomalous trading," Clements said. "On the other hand, you would have the regulators in a little better position to be able to detect it as well."

According to Dou and Goldstein, regulators have expressed interest in their research, which the authors said has helped expose gaps in current regulation around AI in financial services. When regulators have previously looked for instances of collusion, they've looked for evidence of communication between individuals, on the belief that humans can't really sustain price-fixing behaviors unless they're corresponding with one another. But in Dou and Goldstein's study, the bots had no explicit forms of communication. "With the machines, when you have reinforcement learning algorithms, it really doesn't apply, because they're clearly not communicating or coordinating," Goldstein said.
"We coded them and programmed them, and we know exactly what's going into the code, and there is nothing there that is talking explicitly about collusion. Yet they learn over time that this is the way to move forward."

The differences in how human and bot traders communicate behind the scenes are one of the "most fundamental issues" where regulators can learn to adapt to rapidly developing AI technologies, Goldstein argued. "If you use it to think about collusion as emerging as a result of communication and coordination," he said, "this is clearly not the way to think about it when you're dealing with algorithms."
