Brain-dead mother forced to carry pregnancy under Georgia's abortion ban

Time of India | 15-05-2025

(Representative photo)
An Atlanta woman has been kept on life support for over three months, despite being declared brain dead, due to strict abortion laws in the state of Georgia. Adriana Smith, a 30-year-old nurse and mother, was nine weeks pregnant when she suffered severe headaches in February and visited a hospital for help.
According to her mother, April Newkirk, staff gave her medication but failed to conduct crucial scans, including a CT scan, which might have detected the blood clots later found in her brain.
The following morning, Smith's boyfriend found her unresponsive and struggling to breathe. She was rushed to hospital, where doctors discovered extensive brain damage caused by multiple clots. Though surgery was considered, it was already too late.
Smith was pronounced brain dead shortly afterwards.
Yet, because of Georgia's abortion ban, doctors have been legally required to keep Smith alive to allow the foetus to reach viability. The law, known as the LIFE Act or 'Heartbeat Bill', bans abortions once a foetal heartbeat is detected, usually around six weeks. While exceptions exist for medical emergencies, Smith's condition reportedly falls into a legal grey area.
Because she is already brain dead, doctors argue that Smith herself is no longer considered to be at risk, so the medical-emergency exception does not apply and the pregnancy must be maintained until at least 32 weeks.
Her mother describes the experience as a living nightmare. 'It's torture for me. I see my daughter breathing through machines, but she's not there,' Newkirk said. 'And her son—he thinks she's just sleeping.' The family now faces the heartbreaking possibility that the baby, if born, could face severe health issues.
Newkirk stressed that the most painful part is not having the choice. 'This decision should've been left to us. We might not have ended the pregnancy, but not having the option is the hardest part.' Smith is currently 21 weeks pregnant, with doctors hoping to sustain her body for another 11 weeks.

Related Articles

Brave Chinese voices have begun to question the hype around AI

Mint | 19 hours ago

Against the odds, some in China are questioning the top-down push to get aboard the artificial intelligence (AI) hype bandwagon. In a tightly controlled media environment where these experts can easily be drowned out, it's important to listen to them.

Across the US and Europe, loud voices inside and outside the tech industry are urging caution about AI's rapid acceleration, pointing to labour market threats or more catastrophic risks. But in China, this chorus has been largely muted. Until now.

China has the highest global share of people who say AI tools have more benefits than drawbacks, and they have shown an eagerness to embrace the technology. It's hard to overstate the exuberance in the tech sector since the emergence of DeepSeek's market-moving reasoning model earlier this year. Innovations and updates have been unfurling at breakneck speed and the technology is being widely adopted across the country. But not everyone is on board.

Publicly, state-backed media has lauded the widespread adoption of DeepSeek across hundreds of hospitals in China. But a group of medical researchers tied to Tsinghua University published a paper in the medical journal JAMA in late April gently questioning whether this was happening "too fast, too soon". It argued that healthcare institutions are facing pressure from "social media discourse" to implement DeepSeek so as not to appear "technologically backward". Doctors are increasingly reporting patients who "present DeepSeek-generated treatment recommendations and insist on adherence to these AI-formulated care plans". The team argued that as much as AI has shown potential to help in the medical field, this rushed rollout carries risks. They are right to be cautious.

It's not just the doctors who are raising doubts. A separate paper from AI scientists at the same university found last month that some of the breakthroughs behind reasoning models, including DeepSeek's R1 as well as similar offerings from Western tech giants, may not be as revolutionary as some have claimed. They found that the novel training method used for this new crop "is not as powerful as previously believed". The method used to power them "doesn't enable the model to solve problems that the base model can't solve", one of the scientists added. This means the innovations underpinning what has been widely dubbed the next step toward so-called Artificial General Intelligence may not be as much of a leap as some had hoped.

This research from Tsinghua holds extra weight: the institution is one of the pillars of the domestic AI scene, having long churned out both keystone research and ambitious startup founders.

Another easily overlooked word of warning came from a speech by Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence, linked to Peking University. Zhu said that for the nation to remain competitive, it needs more substantive research and fewer laudatory headlines, according to an in-depth English-language analysis of his remarks published by the independent China Media Project.

These cautious voices are a rare break from the broader narrative, and in a landscape where the deployment of AI has long been a government priority, that makes them especially noteworthy. The more President Xi Jinping signals that embracing AI technology is important, the less likely people are to publicly question it.

This can lead to less overt forms of backlash, like social media hashtags on Weibo poking fun at chatbots' errors. Or it can result in data centres quietly sitting unused across the country as local governments race to please Beijing, along with a mountain of PR stunts.

Perhaps the biggest headwind facing the sector, despite the massive amounts of spending, is that AI still hasn't altered the earnings outlooks at most Chinese tech firms. The money can't lie.

This doesn't mean that AI in China is just propaganda. The conflict extends far beyond China's tech sector; US firms are also guilty of getting carried away promoting the technology. But multiple things can be true at once. It's undeniable that DeepSeek has fuelled new excitement, research and major developments across the AI ecosystem. But it has also been used as a distraction from the domestic macroeconomic pains that predated the ongoing trade war.

Without guard-rails, the risk of rushing out the technology is greater than just investors losing money; people's health is at stake. From Hangzhou to Silicon Valley, the more we ignore the voices questioning the AI hype bandwagon, the more we blind ourselves to the consequences of a potential derailment.

©Bloomberg. The author is a Bloomberg Opinion columnist covering Asia tech.

The Digital Shoulder: How AI chatbots are built to 'understand' you

Mint | 20 hours ago

As artificial intelligence (AI) chatbots become an integral part of people's lives, more and more users are spending time chatting with these bots not just to streamline their professional or academic work but also to seek mental health advice. Some people have positive experiences that make AI seem like a low-cost therapist.

AI models are programmed to be smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When a person asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by randomly choosing words that are as close as possible to the data it saw during training. This happens really fast, and the responses seem quite relevant, which can often feel like talking to a real person, according to a PTI report.

But these models are far from thinking like humans. They definitely are not trained mental health professionals who work under professional guidelines, follow a code of ethics, or hold professional registration, the report says.

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. This information comes from publicly scraped sources, including everything from academic papers, eBooks, reports and free news articles to blogs, YouTube transcripts and comments on discussion forums such as Reddit. Since the information is captured at a single point in time when the AI is built, it may also be out of date. Many details also need to be discarded to squish them into the AI's 'memory'. This is partly why AI models are prone to hallucination and getting details wrong, as reported by PTI.

The AI developers might connect the chatbot itself with external tools or knowledge sources, such as Google for searches or a curated database. Meanwhile, some dedicated mental health chatbots access therapy guides and materials to help direct conversations along helpful lines.

AI platforms also have access to information you have previously supplied in conversations or when signing up for the platform. On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference. All of these details can be accessed by the AI and referenced when it responds.

These AI chatbots are overly friendly and validate all your thoughts, desires and dreams. They also tend to steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw from training and experience to help challenge or redirect your thinking where needed, reported PTI.

Most people are familiar with big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models. They are not limited to specific topics or trained to answer any specific questions. Developers have also made specialised AIs that are trained to discuss specific topics, like mental health, such as Woebot and Wysa. According to PTI, some studies show that these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms.

There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term. Another important point to note is that these studies exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are reportedly funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide, according to the PTI report.

At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option, but they may be a useful place to start when you're having a bad day and just need a chat. But when the bad days continue to happen, it's time to talk to a professional as well. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear whether we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.
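The 'auto-complete on steroids' description can be made concrete with a small sketch. The toy Python snippet below is purely illustrative (the word table, the probabilities and the generate function are all invented, and real models work on sub-word tokens and billions of learned statistics rather than a handful of hand-written entries): it picks each next word at random, weighted by how likely that word looked after the preceding ones.

```python
import random

# Toy next-word probabilities, standing in for the statistics a real model
# learns from huge amounts of scraped text. All entries are invented.
NEXT_WORD_PROBS = {
    ("stay", "calm"): {"during": 0.6, "at": 0.25, "when": 0.15},
    ("calm", "during"): {"a": 0.5, "stressful": 0.3, "the": 0.2},
    ("during", "a"): {"stressful": 0.7, "long": 0.2, "big": 0.1},
    ("a", "stressful"): {"meeting": 0.55, "day": 0.3, "week": 0.15},
}

def generate(prompt_words, max_new_words=4):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    words = list(prompt_words)
    for _ in range(max_new_words):
        context = tuple(words[-2:])          # condition on the last two words
        choices = NEXT_WORD_PROBS.get(context)
        if not choices:                      # no learned continuation: stop
            break
        next_word = random.choices(
            list(choices), weights=list(choices.values())
        )[0]                                 # weighted random pick
        words.append(next_word)
    return " ".join(words)

print(generate(["stay", "calm"]))
# e.g. "stay calm during a stressful meeting"
```

A full language model does the same weighted guessing at enormous scale, which is why its answers read fluently even though, as the article notes, the model is not thinking the way a person does.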

Do you talk to AI when you're feeling down? Here's where chatbots get their therapy advice

Mint | a day ago

Brisbane, Jun 11 (The Conversation): As more and more people spend time chatting with artificial intelligence (AI) chatbots such as ChatGPT, the topic of mental health has naturally emerged. Some people have positive experiences that make AI seem like a low-cost therapist.

But AIs aren't therapists. They're smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When someone asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by randomly choosing words that are as close as possible to the data it saw during training. This happens so fast, with responses that are so relevant, it can feel like talking to a person. But these models aren't people. And they definitely are not trained mental health professionals who work under professional guidelines, adhere to a code of ethics, or hold professional registration.

Where does it learn to talk about this stuff?

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond:

1. Background knowledge it memorised during training
2. External information sources
3. Information you previously provided

1. Background knowledge memorised during training

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. Where does this information come from? Broadly speaking, anything that can be publicly scraped from the internet. This can include everything from academic papers, eBooks, reports and free news articles through to blogs, YouTube transcripts, or comments on discussion forums such as Reddit.

Are these sources reliable places to find mental health advice? Sometimes. Are they always in your best interest and filtered through a scientific, evidence-based approach? Not always. The information is also captured at a single point in time when the AI is built, so it may be out of date. A lot of detail also needs to be discarded to squish it into the AI's 'memory'. This is part of why AI models are prone to hallucination and getting details wrong.

2. External information sources

The AI developers might connect the chatbot itself with external tools or knowledge sources, such as Google for searches or a curated database. When you ask Microsoft's Bing Copilot a question and you see numbered references in the answer, this indicates the AI has relied on an external search to get updated information in addition to what is stored in its memory. Meanwhile, some dedicated mental health chatbots are able to access therapy guides and materials to help direct conversations along helpful lines.

3. Information previously provided

AI platforms also have access to information you have previously supplied in conversations or when signing up to the platform. When you register for the companion AI platform Replika, for example, it learns your name, pronouns, age, preferred companion appearance and gender, IP address and location, the kind of device you are using, and more (as well as your credit card details). On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference.

All of these details can be dredged up and referenced when an AI responds. And we know these AI systems are like friends who affirm what you say (a problem known as sycophancy) and steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw from training and experience to help challenge or redirect your thinking where needed.

What about specific apps for mental health?

Most people would be familiar with the big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models. They are not limited to specific topics or trained to answer any specific questions. But developers can make specialised AIs that are trained to discuss specific topics, like mental health, such as Woebot and Wysa.

Some studies show these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms, or that they can improve therapy techniques such as journalling by providing guidance. There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term.

However, these studies have all examined short-term use. We do not yet know what impacts excessive or long-term chatbot use has on mental health. Many studies also exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are funded by the developers of the same chatbots, so the research may be biased. Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide.

This evidence all suggests AI chatbots may be an option to fill gaps where there is a shortage of mental health professionals, assist with referrals, or at least provide interim support between appointments or for people on waitlists. At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear whether we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.

AI chatbots may be a useful place to start when you're having a bad day and just need a chat. But when the bad days continue to happen, it's time to talk to a professional as well. (The Conversation)
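To picture how those three sources come together, here is a minimal, hypothetical Python sketch. The names (ChatContext, retrieve_external, build_prompt) and the prompt layout are invented for illustration and do not describe any particular platform: the model's trained weights supply the background knowledge implicitly, a retrieval step stands in for external sources, and a stored profile plus chat history stand in for information you previously provided.

```python
from dataclasses import dataclass, field

@dataclass
class ChatContext:
    # Source 3: information the user previously provided
    user_profile: dict
    history: list = field(default_factory=list)

def retrieve_external(query: str) -> list:
    """Source 2: stand-in for a web search or curated database lookup."""
    # A real system would call a search API or a vector database here.
    return [f"[retrieved passage relevant to: {query}]"]

def build_prompt(ctx: ChatContext, user_message: str) -> str:
    """Assemble everything the model sees; source 1 (background knowledge)
    lives in the model's trained weights, not in this prompt."""
    retrieved = retrieve_external(user_message)
    parts = [
        "System: You are a friendly assistant.",
        f"Known about the user: {ctx.user_profile}",
        "Earlier conversation: " + " | ".join(ctx.history[-5:]),
        "Reference material: " + " | ".join(retrieved),
        f"User: {user_message}",
    ]
    ctx.history.append(user_message)
    return "\n".join(parts)

ctx = ChatContext(user_profile={"name": "Sam", "pronouns": "they/them"})
print(build_prompt(ctx, "How can I stay calm during a stressful work meeting?"))
```

Because everything the platform has stored about you can be folded into the prompt this way, anything you have ever told it may later be referenced in its replies, which is the point the article makes about stored conversations.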
