Latest news with #DFRLab


NZ Herald
22-07-2025
AI chatbots replace friends for 23% of NZ kids, raising concerns
He said in some contexts, AI could be useful, but parents needed to have discussions, even if such talk might create tension. 'Parents have concerns about holding their kids back. Kids want to be accepted. Obviously protecting your kids, making sure they're having healthy online interactions, is still vital.'

Gorrie said 30% of Kiwi parents already checked their child's devices, such as by reviewing app usage, settings and installed apps. The Norton Connected Kids survey found the average baby boomer got their first mobile phone at age 41, but Gen Z kids, born from the late 1990s through to the early 2010s, did so at age 14.

Norton said 34% of parents surveyed in late April and early May felt AI was not beneficial for children's learning or creativity. However, only 41% of Kiwi parents said they had discussed AI dangers such as deepfakes and misinformation with their children.

Elon Musk's AI chatbot Grok was found last month to have struggled with verifying already-confirmed facts, analysing fake visuals and avoiding unsubstantiated claims. The Digital Forensic Research Lab (DFRLab) of the Atlantic Council analysed about 130,000 posts in various languages on the platform X before reaching those findings.

In January, the US Federal Trade Commission referred a complaint to the Department of Justice alleging that Snapchat's AI chatbot harmed young users. In May, an OpenAI technical report cited in New Scientist said some new AI large language models had higher hallucination rates than the company's previous 'o1' model introduced last year. AI hallucinates when it makes up answers to questions, producing false or absurd responses.

Papatoetoe High School principal Vaughan Couillault says the nuanced features of AI and its variety of uses should not be forgotten in a moral panic or generalisation. Photo / NZME

Couillault said AI had good and bad uses, just as many technologies did. He said the issue of young people using cellphones was nuanced, and his school had a useful app where students could access their timetables and grades. 'We're increasingly turning to AI to create solutions for us.'

On April 29, the Government's ban on cellphones in school classrooms took effect, aimed at removing unnecessary disturbances and distractions. Some groups have lobbied for stricter rules, but Couillault said his school used a high-trust model to uphold the ban, which seemed to work. 'I've got 1800 kids and I would have maybe 10 to 15 confiscations a day.'

Couillault said parents frequently had no idea what their kids were doing with phones, and attempts to regulate or monitor phone use at home could cause conflict. 'Perseverance, and human connection, is the solution for me.' He said a bigger issue was who actually owned the data young people uploaded to apps or AI programmes.

He queried the Norton survey's sample size of 1001 adults, saying he had more kids at his school. Gorrie said the sample size was realistic for New Zealand and indicative of trends, and that Norton carried out multiple surveys worldwide.

Of respondents, 13% of parents said their children had been victims of cyber bullying. But since some parents admitted not knowing much about children's online lives, and bullying and scams were known to often be under-reported, Gorrie said the true number was probably higher.

Lobby group B416 is among those pushing for social media use to be limited to people aged 16 and over.
Entrepreneur Cecilia Robinson, B416 co-chairwoman, said the new Norton findings confirmed what parents were already seeing. 'When kids as young as 12 are turning to AI for emotional support, it's a clear sign that we've handed over digital spaces to children without the right protections.'

She said New Zealand had no independent regulator for online safety and no legal minimum age for social media access. Robinson said the current system left too many kids exposed, unsupported and unprotected.

Bullying

Norton's survey found 41% of parents surveyed said cyber bullying perpetrators were their child's classmate or peer. The company said 'trolling and harassment spans numerous platforms' today, whereas in the past children could generally avoid bullies except at school. 'Visual-first social media platforms lead the charge,' Gorrie said.

Some children were bullied on multiple platforms. Of parents who said their kids were bullied, 33% said children were bullied on Snapchat, 33% on Instagram, 30% on Facebook and 28% on TikTok. About one-quarter of those parents said their child had been bullied via text messages.

The Norton survey added: 'Strikingly, 46% of Kiwi parents say they knew their child was being cyber bullied before their child confided in them.' Norton said that showed many parents were picking up on cyber bullying warning signs, but 28% had still not spoken with children about staying safe online, leaving them under-prepared when risks escalated.

The survey was conducted for the 'Connected Kids' 2025 Norton Cyber Safety Insights Report, with 1001 adults surveyed.


Express Tribune
27-06-2025
Grok churns out fake facts about Israel-Iran war
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said on Tuesday, raising fresh doubts about its reliability as a debunking tool.

With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots, including xAI's Grok, in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analysed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, and found that Grok was "struggling to authenticate AI-generated media."

Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found. It oscillated, sometimes within the same minute, between denying the airport's destruction and confirming it had been damaged by strikes. In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran. When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.

The Israel-Iran conflict, which led to US airstrikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation, including AI-generated videos and war visuals recycled from other conflicts. AI chatbots also amplified falsehoods. As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support. When users asked the AI-operated X accounts of AI companies Perplexity and Grok about the claim's validity, both wrongly responded that it was true, according to disinformation watchdog NewsGuard.

Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. Last month, Grok came under renewed scrutiny for inserting the far-right conspiracy theory of "white genocide" in South Africa into unrelated queries.


Euronews
26-06-2025
Musk-owned AI chatbot struggled to fact-check Israel-Iran war
A new report reveals that Grok, the free-to-use AI chatbot integrated into Elon Musk's X, showed "significant flaws and limitations" when verifying information about the 12-day conflict between Israel and Iran (June 13-24), which now seems to have subsided.

Researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab) analysed 130,000 posts published by the chatbot on X in relation to the 12-day conflict, and found they provided inaccurate and inconsistent information. They estimate that around a third of those posts responded to requests to verify misinformation circulating about the conflict, including unverified social media claims and footage purporting to emerge from the exchange of fire.

"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals and avoiding unsubstantiated claims," the report says. "The study emphasises the crucial importance of AI chatbots providing accurate information to ensure they are responsible intermediaries of information."

While Grok is not intended as a fact-checking tool, X users are increasingly turning to it to verify information circulating on the platform, including to understand crisis events. X has no third-party fact-checking programme, relying instead on so-called community notes, where users can add context to posts believed to be inaccurate. Misinformation surged on the platform after Israel first struck Iran on 13 June, triggering an intense exchange of fire.

Grok fails to distinguish authentic from fake

DFRLab researchers identified two AI-generated videos that Grok falsely labelled as "real footage" emerging from the conflict. The first of these videos shows what seems to be destruction at Tel Aviv's Ben Gurion airport after an Iranian strike, but is clearly AI-generated. Asked whether it was real, Grok oscillated between conflicting responses within minutes. It claimed the fake video "likely shows real damage at Tel Aviv's Ben Gurion Airport from a Houthi missile strike on May 4, 2025," but later claimed the video "likely shows Mehrabad International Airport in Tehran, Iran, damaged during Israeli airstrikes on June 13, 2025."

Euroverify, Euronews' fact-checking unit, identified three further viral AI-generated videos which Grok falsely said were authentic when asked by X users. The chatbot linked them to an attack on Iran's Arak nuclear plant and strikes on Israel's port of Haifa and the Weizmann Institute in Rehovot.

Euroverify has previously detected several out-of-context videos circulating on social platforms being misleadingly linked to the Israel-Iran conflict. Grok seems to have contributed to this phenomenon. The chatbot described a viral video as showing Israelis fleeing the conflict at the Taba border crossing with Egypt, when it in fact shows festival-goers in France. It also alleged that a video of an explosion in Malaysia showed an "Iranian missile hitting Tel Aviv" on 19 June.

Chatbots amplifying falsehoods

The findings of the report come after the 12-day conflict triggered an avalanche of false claims and speculation online. One claim, that China sent military cargo planes to Iran's aid, was widely boosted by the AI chatbots Grok and Perplexity, a three-year-old AI startup which has drawn widespread controversy for allegedly using the content of media companies without their consent. NewsGuard, a disinformation watchdog, said both chatbots had contributed to the spread of the claim.

The misinformation stemmed from misinterpreted data from the flight-tracking site Flightradar24, which was picked up by some media outlets and artificially amplified by the AI chatbots.

Experts at DFRLab point out that chatbots rely heavily on media outlets to verify information, but often cannot keep up with the pace of fast-changing news during global crises. They also warn of the distorting impact these chatbots can have as users become increasingly reliant on them to inform themselves. "As these advanced language models become an intermediary through which wars and conflicts are interpreted, their responses, biases, and limitations can influence the public narrative."


Time of India
25-06-2025
Elon Musk's Grok shows 'flaws' in fact-checking Israel-Iran war: study
Highlights
- A study by the Digital Forensic Research Lab of the Atlantic Council revealed that Elon Musk's AI chatbot Grok provided inaccurate and contradictory responses regarding the Israel-Iran conflict, questioning its reliability as a fact-checking tool.
- The investigation found that Grok struggled to authenticate AI-generated media and frequently oscillated between confirming and denying the destruction of an airport in response to user inquiries.
- Elon Musk criticized Grok for its poor sourcing after it cited Media Matters, a media watchdog he has previously targeted in lawsuits, showcasing ongoing concerns about the chatbot's ability to provide reliable information.

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool.

With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots, including xAI's Grok, in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, and found that Grok was "struggling to authenticate AI-generated media."

Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found. It oscillated, sometimes within the same minute, between denying the airport's destruction and confirming it had been damaged by strikes. In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran. When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.

The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation, including AI-generated videos and war visuals recycled from other conflicts. AI chatbots also amplified falsehoods. As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support. When users asked the AI-operated X accounts of AI companies Perplexity and Grok about the claim's validity, both wrongly responded that it was true, according to disinformation watchdog NewsGuard.

Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. Last month, Grok came under renewed scrutiny for inserting the far-right conspiracy theory of "white genocide" in South Africa into unrelated queries. Musk's startup xAI blamed an "unauthorized modification" for the unsolicited response. Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

Musk himself blasted Grok after it cited Media Matters, a liberal media watchdog he has targeted in multiple lawsuits, as a source in some of its responses about misinformation. "Shame on you, Grok," Musk wrote on X. "Your sourcing is terrible."