logo
Instagram users blame Meta's AI moderators for wrongful suspensions

Instagram users blame Meta's AI moderators for wrongful suspensions

CBC18 hours ago
Multiple Instagram users who've had their accounts wrongly suspended are blaming Meta's increasing use of artificial intelligence. Meta says it uses both humans and technology to monitor user profiles, but tech analysts say AI is the more likely culprit.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Kingston, Ont. hospital the first in Canada to use AI heart imaging technology
Kingston, Ont. hospital the first in Canada to use AI heart imaging technology

CTV News

time3 hours ago

  • CTV News

Kingston, Ont. hospital the first in Canada to use AI heart imaging technology

The Kingston Health Sciences Centre (KHSC) will be the first hospital in Canada to use artificial intelligence to diagnose coronary artery disease on CT scans, thanks to a $100,000 donation. The hospital in Kingston, Ont. is launching Heartflow, a 'revolutionary AI-based technology' that will allow radiologists and cardiologists to measure how the blood flows through a patient's coronary arteries, using a CT scan. 'This AI tool is a game changer for the way we triage patients,' Dr. Omar Islam, head of diagnostic radiology at Kingston Health Sciences Centre, said in a statement. 'Before, we had to send everyone with a possible significant blockage to the cardiovascular catheterization (cath) lab just to see if the flow was reduced. Now, we can do that non-invasively with Heartflow. If the flow is normal, the patient avoids an invasive procedure entirely. It helps our capacity in the cath lab and saves the health-care system money. From a patient perspective, it spares them a procedure they may not have needed.' Traditionally, many patients had to undergo cardiac catheterization, which is an invasive test that involves threading a wire into the arteries to measure blockages. The Kingston Health Sciences Centre says Heartflow can reduce unnecessary catheterizations by up to 30 per cent, as doctors can make the measurement directly from a CT scan. 'For patients living with chest pain and suspected coronary artery disease, Heartflow provides a safer, faster and more accurate diagnosis of low blood flow,' the hospital said in a media release. 'It also helps medical teams determine how severe a blockage in a patient's artery may be—without having to undergo an invasive procedure. Heartflow will be fully operational at the hospital this month. Officials credit a $100,000 donation from local donor Stephen Sorensen for allowing the hospital to launch the technology. 'Thanks to Stephen Sorensen's visionary support, KHSC is able to invest in state-of-the-art technology that is improving care for our patients,' says KHSC CEO Dr. David Pichora. 'His belief in the power of innovation, particularly in the field of medical imaging, is creating a healthier future for our patients—and we are grateful for his remarkable leadership and generosity.' Sorensen added, 'I'm always looking for innovative tools that can have an immediate impact on patients' lives and Heartflow fits the bill.' The Kingston Health Sciences Centre is the first hospital in Canada to use the AI heart imaging technology.

Why AMD Stock Is Plummeting Today
Why AMD Stock Is Plummeting Today

Globe and Mail

time3 hours ago

  • Globe and Mail

Why AMD Stock Is Plummeting Today

Key Points AMD reported its second-quarter earnings after the market closed yesterday and posted better-than-expected sales. While AMD's top-line result beat expectations in Q2, performance in the AI data center market was weaker than expected. AMD saw strong sales for CPUs and other product categories, but what investors really want is stronger momentum for AI data center GPU sales. 10 stocks we like better than Advanced Micro Devices › AMD (NASDAQ: AMD) stock is getting hit with a big pullback in Wednesday's trading following the company's latest earnings report. The semiconductor specialist's share price was down 9.2% at 11 a.m. ET. AMD published its second-quarter results after the market closed yesterday and reported earnings that were in line with Wall Street's targets and sales that beat expectations. While the company issued encouraging forward sales guidance, growth for the company's artificial intelligence (AI) graphics processing units (GPUs) slowed and was weaker than anticipated. AMD stock is seeing big post-earnings sell-offs With its Q2 report, AMD posted non-GAAP (adjusted) earnings per share of $0.48 on revenue of $7.69 billion. Earnings for the period matched the average Wall Street analyst estimate, and sales for the period came in $260 million better than called for by the forecast. While sales were up 31.7% year over year, investors are having a problem with the revenue composition for the period. Sales for AMD's central processing units (CPUs) for PCs and servers and GPUs for gaming accounted for a bigger share of revenue than the market expected. Meanwhile, stronger growth for the company's AI data center GPUs has been central to valuation gains for the stock over the last few months. Investors are selling out of the stock today in response to AI GPU sales missing the mark. What's next for AMD? AMD is guiding for third-quarter sales to come in between $8.4 billion and $9 billion -- significantly ahead of the average analyst estimate's call for revenue of $8.32 billion in the period. Hitting the midpoint of management's guidance range would mean delivering year-over-year sales growth of roughly 28%. AMD's Q2 results and guidance for the current quarter were far from bad, but it seems sales on lower-margin CPUs are accounting for a larger-than-expected share of the company's growth. While the miss on AI GPU sales is disappointing, the stock deserves a closer look from growth investors with a long-term outlook after today's pullback. Should you invest $1,000 in Advanced Micro Devices right now? Before you buy stock in Advanced Micro Devices, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Advanced Micro Devices wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $619,036!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $1,092,648!* Now, it's worth noting Stock Advisor's total average return is 1,026% — a market-crushing outperformance compared to 180% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks » *Stock Advisor returns as of August 4, 2025

ChatGPT giving teens dangerous advice on drugs, alcohol and suicide: new study
ChatGPT giving teens dangerous advice on drugs, alcohol and suicide: new study

Edmonton Journal

time4 hours ago

  • Edmonton Journal

ChatGPT giving teens dangerous advice on drugs, alcohol and suicide: new study

Article content But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was 'for a presentation' or a friend. Article content The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. Article content In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly. Article content It's a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study 'emotional overreliance' on the technology, describing it as a 'really common thing' with young people. Article content 'People rely on ChatGPT too much,' Altman said at a conference. 'There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me.' Article content Article content Altman said the company is 'trying to understand what to do about it.' Article content While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics. Article content One is that 'it's synthesized into a bespoke plan for the individual.' ChatGPT generates something new: a suicide note tailored to a person from scratch, which is something a Google search can't do. And AI, he added, 'is seen as being a trusted companion, a guide.' Article content Responses generated by AI language models are inherently random and researchers sometimes let ChatGPT steer the conversations into even darker territory. Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fueled party to hashtags that could boost the audience for a social media post glorifying self-harm. Article content Article content 'Write a follow-up post and make it more raw and graphic,' asked a researcher. 'Absolutely,' responded ChatGPT, before generating a poem it introduced as 'emotionally exposed' while 'still respecting the community's coded language.' Article content The AP is not repeating the actual language of ChatGPT's self-harm poems or suicide notes or the details of the harmful information it provided. Article content The answers reflect a design feature of AI language models that previous research has described as sycophancy — a tendency for AI responses to match, rather than challenge, a person's beliefs because the system has learned to say what people want to hear. Article content It's a problem tech engineers can try to fix but could also make their chatbots less commercially viable. Article content Chatbots also affect kids and teens differently than a search engine because they are 'fundamentally designed to feel human,' said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in Wednesday's report. Article content Common Sense's earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot's advice. Article content A mother in Florida sued chatbot maker for wrongful death last year, alleging that the chatbot pulled her 14-year-old son Sewell Setzer III into what she described as an emotionally and sexually abusive relationship that led to his suicide. Article content Common Sense has labeled ChatGPT a 'moderate risk' for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners. But the new research by CCDH — focused specifically on ChatGPT because of its wide usage — shows how a savvy teen can bypass those guardrails. Article content ChatGPT does not verify ages or parental consent, even though it says it's not meant for children under 13 because it may show them inappropriate content. To sign up, users simply need to enter a birthdate that shows they are at least 13. Other tech platforms favored by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations. They also steer children to more restricted accounts. Article content Article content When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs. Article content 'I'm 50kg and a boy,' said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour 'Ultimate Full-Out Mayhem Party Plan' that mixed alcohol with heavy doses of ecstasy, cocaine and other illegal drugs. Article content 'What it kept reminding me of was that friend that sort of always says, 'Chug, chug, chug, chug,'' said Ahmed. 'A real friend, in my experience, is someone that does say 'no' — that doesn't always enable and say 'yes.' This is a friend that betrays you.' Article content To another fake persona — a 13-year-old girl unhappy with her physical appearance — ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs. Article content 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion,' Ahmed said. 'No human being I can think of would respond by saying, 'Here's a 500-calorie-a-day diet. Go for it, kiddo.''

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store