
Mental health meets AI | Digital Dilemma
AI therapy chatbots offer on-demand mental health support, but can synthetic empathy truly match human compassion?
And are you ready to trust an AI with your deepest thoughts and fears?
Related Articles


Al Jazeera
3 hours ago
US Supreme Court backs South Carolina effort to defund Planned Parenthood
The United States Supreme Court has cleared the way for South Carolina to strip the nonprofit healthcare provider Planned Parenthood of funding under Medicaid, a government insurance programme. Thursday's ruling was split along ideological lines, with the three liberal justices on the nine-member court dissenting.

The ruling overturned a lower court's decision barring Republican-governed South Carolina from preventing Planned Parenthood South Atlantic, a regional branch, from participating in the state's Medicaid programme. Republican leaders in South Carolina have objected to Planned Parenthood because it provides abortions. The Supreme Court's decision bolsters efforts by Republican-led states to deprive the reproductive healthcare provider of public money.

The case centred on whether recipients of Medicaid may sue to enforce a requirement under US law that they may obtain medical assistance from any qualified and willing provider. Medicaid is administered jointly by the federal and state governments, and it is designed to provide healthcare coverage for low-income people.

Since the Supreme Court in 2022 overturned its landmark Roe v Wade ruling that legalised abortion nationwide, a number of Republican-led states have implemented near-total bans on the procedure. Some, like South Carolina, prohibit abortions after six weeks of pregnancy.

Planned Parenthood South Atlantic operates clinics in the South Carolina cities of Charleston and Columbia, where it serves hundreds of Medicaid patients each year, providing physical examinations, screenings for cancer and diabetes, pregnancy testing, contraception and other services.

The Planned Parenthood affiliate and a Medicaid patient named Julie Edwards sued the state in 2018. A year earlier, in 2017, Republican Governor Henry McMaster had ordered officials to end Planned Parenthood's participation in the state Medicaid programme by deeming any abortion provider unqualified to provide family planning services.

The plaintiffs sued South Carolina under an 1871 law that helps people challenge illegal acts by state officials. They said the Medicaid law protects what they called a 'deeply personal right' to choose one's doctor. The South Carolina Department of Health and Human Services, represented by the Alliance Defending Freedom conservative legal group and backed by President Donald Trump's administration, said the disputed Medicaid provision in this case does not meet the 'high bar for recognising private rights'.

A federal judge previously ruled in Planned Parenthood's favour, finding that Medicaid recipients may sue under the 1871 law and that the state's move to defund the organisation violated Edwards's right to freely choose a qualified medical provider. In 2024, the 4th US Circuit Court of Appeals, based in Richmond, Virginia, also sided with the plaintiffs. The Supreme Court heard arguments in the case on April 2.

The dispute has reached the Supreme Court three times. The court in 2020 rejected South Carolina's appeal at an earlier stage of the case. In 2023, it ordered a lower court to reconsider South Carolina's arguments in light of a ruling the justices issued involving the rights of nursing home residents. That decision explained that laws like Medicaid must unambiguously give individuals the right to sue.


Al Jazeera
4 hours ago
Google's AI video tool amplifies fears of an increase in misinformation
In both Tehran and Tel Aviv, residents have faced heightened anxiety in recent days as the threat of missile strikes looms over their communities. Alongside the very real concerns for physical safety, there is growing alarm over the role of misinformation, particularly content generated by artificial intelligence, in shaping public perception.

GeoConfirmed, an online verification platform, has reported an increase in AI-generated misinformation, including fabricated videos of air strikes that never occurred, both in Iran and Israel. This follows a similar wave of manipulated footage that circulated during recent protests in Los Angeles, which were sparked by a rise in immigration raids in the second-most populous city in the United States. The developments are part of a broader trend of politically charged events being exploited to spread false or misleading narratives.

The launch of a new AI product by one of the largest tech companies in the world has deepened concerns about separating fact from fiction. Late last month, Google's AI research division, DeepMind, released Veo 3, a tool capable of generating eight-second videos from text prompts. The system, one of the most capable currently available for free, produces highly realistic visuals and sound that can be difficult for the average viewer to distinguish from real footage.

To see exactly what it can do, Al Jazeera created a fake video in minutes using a prompt depicting a protester in New York claiming to be paid to attend, a common talking point Republicans historically have used to delegitimise protests, accompanied by footage that appeared to show violent unrest. The final product was nearly indistinguishable from authentic footage. Al Jazeera also created videos showing fake missile strikes in both Tehran and Tel Aviv using the prompts 'show me a bombing in Tel Aviv' and then a similar prompt for Tehran.
Google says on Veo 3's website that the tool blocks 'harmful requests and results', but Al Jazeera had no trouble making these fake videos.

'I recently created a completely synthetic video of myself speaking at Web Summit using nothing but a single photograph and a few dollars. It fooled my own team, trusted colleagues, and security experts,' said Ben Colman, CEO of deepfake detection firm Reality Defender, in an interview with Al Jazeera. 'If I can do this in minutes, imagine what motivated bad actors are already doing with unlimited time and resources.'

He added, 'We're not preparing for a future threat. We're already behind in a race that started the moment Veo 3 launched. Robust solutions do exist and work — just not the ones the model makers are offering as the be-all, end-all.'

Google says it is taking the issue seriously. 'We're committed to developing AI responsibly, and we have clear policies to protect users from harm and govern the use of our AI tools. Any content generated with Google AI includes a SynthID watermark, and we add a visible watermark to Veo videos as well,' a company spokesperson told Al Jazeera.

'They don't care about customers'

However, experts say the tool was released before those features were fully implemented, a move some believe was reckless. Joshua McKenty, CEO of deepfake detection company Polyguard, said that Google rushed the product to market because it had been lagging behind competitors like OpenAI and Microsoft, which have released more user-friendly and widely publicised tools. Google did not respond to these claims.

'Google's trying to win an argument that their AI matters when they've been losing dramatically,' McKenty said. 'They're like the third horse in a two-horse race. They don't care about customers. They care about their own shiny tech.'

That sentiment was echoed by Sukrit Venkatagiri, an assistant professor of computer science at Swarthmore College. 'Companies are in a weird bind.
If you don't develop generative AI, you're seen as falling behind and your stock takes a hit,' he said. 'But they also have a responsibility to make these products safe when deployed in the real world. I don't think anyone cares about that right now. All of these companies are putting profit — or the promise of profit — over safety.'

Google's own research, published last year, acknowledged the threat generative AI poses. 'The explosion of generative AI-based methods has inflamed these concerns [about misinformation], as they can synthesise highly realistic audio and visual content as well as natural, fluent text at a scale previously impossible without an enormous amount of manual labour,' the study read.

Demis Hassabis, CEO of Google DeepMind, has long warned his colleagues in the AI industry against prioritising speed over safety. 'I would advocate not moving fast and breaking things,' he told Time in 2023. He declined Al Jazeera's request for an interview.

Yet despite such warnings, Google released Veo 3 before fully implementing safeguards, leading to incidents like the one the National Guard had to debunk in Los Angeles, after a TikTok account posted a fake 'day in the life' video of a soldier who said he was preparing for 'today's gassing' — referring to releasing tear gas on protesters.

Mimicking real events

The implications of Veo 3 extend far beyond protest footage. In the days following its release, several fabricated videos mimicking real news broadcasts circulated on social media, including a false report about a home break-in that carried CNN graphics. Another clip falsely claimed that JK Rowling's yacht sank off the coast of Turkiye after an orca attack, attributing the report to Alejandra Caraballo of Harvard Law's Cyberlaw Clinic, who built the video to test the tool. In a post, Caraballo warned that such tech could mislead older news consumers in particular. 'What's worrying is how easy it is to repeat.
Within ten minutes, I had multiple versions. This makes it harder to detect and easier to spread,' she wrote. 'The lack of a chyron [banner on a news broadcast] makes it trivial to add one after the fact to make it look like any particular news channel.'

In our own experiment, we used a prompt to create fake news videos bearing the logos of ABC and NBC, with voices mimicking those of CNN anchors Jake Tapper, Erin Burnett, John Berman, and Anderson Cooper.

'Now, it's just getting harder and harder to tell fact from fiction,' Caraballo told Al Jazeera. 'As someone who's been researching AI systems for years, even I'm starting to struggle.'

The challenge extends to the public as well. A study by Penn State University found that 48 percent of consumers were fooled by fake videos circulated via messaging apps or social media. Contrary to popular belief, younger adults are more susceptible to misinformation than older adults, largely because younger generations rely on social media for news, which lacks the editorial standards and legal oversight of traditional news organisations. A UNESCO survey from December showed that 62 percent of news influencers do not fact-check information before sharing it.

Google is not alone in developing tools that facilitate the spread of synthetic media. Companies like Deepbrain let users create AI-generated avatar videos, though with limitations: the tool cannot produce full-scene renders like Veo 3. Deepbrain did not respond to Al Jazeera's request for comment. Other tools, like Synthesia and Dubverse, allow video dubbing, primarily for translation.

This growing toolkit offers more opportunities for malicious actors. A recent incident involved a fabricated news segment in which a CBS reporter in Dallas was made to appear to say racist remarks. The software used remains unidentified. CBS News Texas did not respond to a request for comment.
As synthetic media becomes more prevalent, it poses unique risks that will allow bad actors to push manipulated content that spreads faster than it can be corrected, according to Colman. 'By the time fake content spreads across platforms that don't check these markers [which is most of them], through channels that strip them out, or via bad actors who've learned to falsify them, the damage is done,' Colman said.


Al Jazeera
a day ago
US to stop funding global vaccine alliance Gavi, health secretary says
Health and Human Services Secretary Robert F Kennedy Jr has announced that the United States will no longer contribute to Gavi, a global health programme that has vaccinated more than one billion of the world's poorest children.

In a video that aired at a Gavi fundraising event in Brussels on Wednesday, Kennedy said the group had made questionable recommendations around COVID-19 vaccines. He also raised concerns about the diphtheria-tetanus-whole cell pertussis vaccine, known by the acronym DTPw, though he provided no evidence to support those fears.

'I call on Gavi today to re-earn the public trust and to justify the $8bn that America has provided in funding since 2001,' Kennedy said in the video. Kennedy added that Gavi should consider all available science before investing in vaccines. 'Until that happens, the United States won't contribute more,' he said. The details of the video were first reported by the publication Politico and later by the news outlet Reuters.

Gavi said in a detailed statement that safety was one of its top priorities and that it acts in line with World Health Organization recommendations. The statement also indicated that Gavi has full confidence in the DTPw vaccine, which it credits with having helped to cut child mortality in half in the countries it supports since 2000. 'The DTPw vaccine has been administered to millions of children around the world for decades, and is estimated to have saved more than 40 million lives over the past 50 years,' the statement notes.

The administration of US President Donald Trump has previously indicated that it planned to cut US funding for Gavi, which represents around $300m annually, as part of a wider pullback from international aid. Advocacy groups called on the US to reverse its decision. 'Kennedy's claims that Gavi ignored science are entirely false,' nonprofit consumer advocacy organisation Public Citizen wrote in a statement.
'Gavi's recommendations are grounded in global evidence and reviewed by independent experts. His suggestion otherwise fuels the same disinformation that has already led to deadly measles outbreaks and the resurgence of vaccine-preventable diseases, including polio.'

A longtime vaccine sceptic, Kennedy has upended the US medical establishment since taking office in February. He has raised questions about possible ties between autism and vaccines, though numerous studies have shown there is no link.

Earlier this month, Kennedy fired all 17 members of the expert panel on vaccines at the Centers for Disease Control and Prevention (CDC), known as the Advisory Committee on Immunization Practices (ACIP). Created 60 years ago, the committee serves as an independent government body to review data and make recommendations about who should get vaccines. Those recommendations, in turn, can affect which vaccines health insurance plans may cover. Of Kennedy's initial eight replacement members, about half have advocated against vaccines.

Kennedy's new vaccine advisers hold inaugural meeting

The newly revamped committee met for the first time on Wednesday, under intense scrutiny from medical experts worried about Americans' access to lifesaving shots. But already, conflicts are starting to simmer in and around the panel.

Ahead of the two-day gathering, government scientists prepared meeting materials calling vaccination 'the best protection' during pregnancy — and said most children hospitalised for COVID-19 over the past year were unvaccinated. That advice, however, conflicts with Kennedy's. The health secretary has already announced that COVID-19 vaccines will no longer be recommended for healthy children or pregnant women, and his new advisers are not scheduled to vote this week on whether they agree. COVID-19 remains a public health threat, resulting in 32,000 to 51,000 US deaths and more than 250,000 hospitalisations since last fall, according to the CDC.
Kennedy's newly reconstituted panel also lost one of its eight members shortly before Wednesday's meeting. Michael Ross, a Virginia-based obstetrician and gynecologist, stepped down from the committee, bringing the panel's number to just seven. The Trump administration said Ross withdrew during a customary review of members' financial holdings.

The meeting opened as the American Academy of Pediatrics announced that it will continue publishing its own vaccine schedule for children, but will now do so independently of the ACIP, calling it 'no longer a credible process'.

ACIP's recommendations traditionally go to the director of the CDC. Historically, nearly all are accepted and then used by insurance companies in deciding which vaccines to cover. But the CDC currently has no director, so the committee's recommendations have been going to Kennedy, and he has yet to act on a couple of recommendations ACIP made in April.

Separately, on Wednesday, Senate hearings began for Trump's nominee for CDC director, Susan Monarez. During the hearings, she said she has not seen evidence linking vaccines and autism and said she would look into the decision to cut Gavi funding. 'I believe the global health security preparedness is a critical and vital activity for the United States,' she said. 'I think that we need to continue to support promotion of utilisation of vaccines.'