Teen trick stops blood to brain of 11-yr-old, lands him in hospital

Hindustan Times · 2 days ago

An 11-year-old boy fell unconscious and needed expert medical attention for a week after a teenager pressed his carotid artery during play recently in Kamalapur Deviganj village of Kunda tehsil, Pratapgarh.
When the child did not recover, his panicked parents took him to a neurosurgeon in Prayagraj, who had to treat him for nearly a week before he returned to normal.
According to city-based neurosurgeon Dr Prakash Khaitan, who treated the child, dangerous online games and tricks on social networking sites are easily accessible to children, who then practise what is shown in such videos on friends, sometimes injuring them.
'Ahmaan, 11, arrived in hospital last week exhibiting symptoms of epilepsy. On enquiring about the case history, his family members revealed the truth. Children were playing in the house, where a wedding was scheduled. One of the boys, Farhan, a teenager, tried to show off to the other children some tricks he had picked up from YouTube. He pressed the two carotid arteries leading to the brain, located below the ear lobes, thereby blocking blood supply to the brain and leaving the boy unconscious,' he said.
'Although this is a unique case in my career, it certainly exposes the threat that unchecked exposure to the internet can pose to children,' he added.
Ahmaan's family members said that when he did not regain consciousness, his friends called them to see what had happened. 'When we went inside the room, Ahmaan was not fully in his senses and foam was coming out of his mouth. He had also wet his pants and was not able to drink water or stand on his legs. His face had also turned red,' said his father Mohd Yunus.
Dr Khaitan said an MRI showed that the brain was not getting oxygen due to the compression of the carotid artery supplying blood to it. 'Due to this, the child started having epileptic attacks and his hands and legs became weak. After five days of treatment, the child has now started walking on his feet. The trick is dangerous, as cutting off blood supply to the brain could prove fatal or lead to epilepsy, paralysis and other complications,' he added.

Related Articles

Adulterated mustard oil seized in Ghaziabad: How to check oil purity at home

Time of India · 12 hours ago

Adulteration of mustard oil, a staple in North Indian households, is on the rise, prompting crackdowns by food safety departments. A recent complaint led to an inspection in Ghaziabad, revealing potential adulteration. A simple home test using nitric acid can detect the presence of toxic argemone oil, a known adulterant with a history of causing epidemics in India.

Mustard oil is one of the most commonly consumed oils in North Indian households. Unfortunately, this nutrient-rich oil is now being adulterated. As per an ANI report, the Ghaziabad food safety department recently cracked down on adulterated edible oil after receiving a complaint about mustard oil adulteration. During the inspection, the team collected samples of mustard oil and sent them for further testing to determine the extent of adulteration. The complainant had alleged that he experienced health issues after consuming mustard oil purchased from a local vendor.

How to check adulteration at home

In a YouTube video, the Food Safety and Standards Authority of India (FSSAI) once showed how to check whether mustard oil is adulterated with argemone oil. According to the video, take a 5 ml sample of mustard oil in a test tube, add 5 ml of nitric acid, and shake the test tube gently. Unadulterated mustard oil will show no colour change in the acidic layer, while adulterated mustard oil will develop an orange-yellow to red colour in the acidic layer.

According to the video, argemone oil contains sanguinarine, a toxic polycyclic salt. The reaction is very sensitive, and the intensity of the colour formed is due to the formation of sanguinarine nitrate. As per Neurology India, the first four cases of argemone oil poisoning, from Bombay, were reported in 1877. Various epidemics have since been reported across India, from Calcutta (1877), Assam, Bihar, eastern Uttar Pradesh, Orissa, Madhya Pradesh, Gujarat and Delhi.

The Digital Shoulder: How AI chatbots are built to 'understand' you

Mint · 13 hours ago

As artificial intelligence (AI) chatbots become an inherent part of people's lives, more and more users are spending time chatting with these bots not just to streamline their professional or academic work but also to seek mental health advice. Some people have positive experiences that make AI seem like a low-cost therapist.

AI models are programmed to be smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When a person asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by choosing words that are as close as possible to the data it saw during training. This happens very fast, and the responses seem quite relevant, which can often feel like talking to a real person, according to a PTI report.

But these models are far from thinking like humans. They are definitely not trained mental health professionals who work under professional guidelines, follow a code of ethics, or hold professional registration, the report says.

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. This information comes from publicly scraped sources, including everything from academic papers, eBooks, reports and free news articles to blogs, YouTube transcripts and comments on discussion forums such as Reddit. Since the information is captured at a single point in time when the AI is built, it may also be out of date. Many details also need to be discarded to squish them into the AI's 'memory'. This is partly why AI models are prone to hallucination and getting details wrong, as reported by PTI.

The AI developers might also connect the chatbot with external tools or knowledge sources, such as Google for searches or a curated database. Meanwhile, some dedicated mental health chatbots access therapy guides and materials to help direct conversations along helpful lines.

AI platforms also have access to information you have previously supplied in conversations or when signing up for the platform. On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference. All of these details can be accessed by the AI and referenced when it responds.

These AI chatbots are overly friendly and validate all your thoughts, desires and dreams. They also tend to steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw on training and experience to help challenge or redirect your thinking where needed, PTI reported.

Most people are familiar with big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models: they are not limited to specific topics or trained to answer any specific questions. Developers have also made specialised AIs trained to discuss specific topics, like mental health, such as Woebot and Wysa. According to PTI, some studies show that these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms.
There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term. Another important point to note is that these studies exclude participants who are suicidal or who have a severe psychotic disorder, and many studies are reportedly funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide, according to the PTI report.

At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option, but they may be a useful place to start when you're having a bad day and just need a chat. When the bad days continue to happen, though, it's time to talk to a professional as well. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear whether we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.
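To make the 'auto-complete on steroids' description above a little more concrete, here is a minimal sketch using a toy word-frequency model instead of a neural network. The tiny corpus, the function name and the overall setup are illustrative assumptions, not how any production chatbot is actually built, but the generate-one-word-at-a-time loop is the same basic idea.

```python
import random
from collections import defaultdict

# Toy "training": record which word follows which in a tiny corpus.
corpus = (
    "take a slow deep breath before the meeting . "
    "take a short walk before the meeting . "
    "take a moment to write down your main points ."
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # duplicates act as frequency weights

def generate(start_word: str, max_words: int = 10) -> str:
    """Produce a continuation one word at a time, sampling likely next words."""
    word, output = start_word, [start_word]
    for _ in range(max_words):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("take"))  # e.g. "take a short walk before the meeting ."
```

A real chatbot replaces the frequency table with a neural network trained on vast amounts of internet text, which is why its replies read so fluently, but it is still predicting one likely next word at a time rather than reasoning like a therapist.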

Do you talk to AI when you're feeling down? Here's where chatbots get their therapy advice

Mint · 15 hours ago

Brisbane, Jun 11 (The Conversation) As more and more people spend time chatting with artificial intelligence (AI) chatbots such as ChatGPT, the topic of mental health has naturally emerged. Some people have positive experiences that make AI seem like a low-cost therapist.

But AIs aren't therapists. They're smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When someone asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by choosing words that are as close as possible to the data it saw during training. This happens so fast, with responses that are so relevant, it can feel like talking to a person.

But these models aren't people. And they definitely are not trained mental health professionals who work under professional guidelines, adhere to a code of ethics, or hold professional registration.

Where does it learn to talk about this stuff?

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

1. Background knowledge from training

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. Where does this information come from? Broadly speaking, anything that can be publicly scraped from the internet. This can include everything from academic papers, eBooks, reports and free news articles through to blogs, YouTube transcripts, or comments from discussion forums such as Reddit.

Are these sources reliable places to find mental health advice? Sometimes. Are they always in your best interest and filtered through a scientific, evidence-based approach? Not always. The information is also captured at a single point in time when the AI is built, so it may be out of date. A lot of detail also needs to be discarded to squish it into the AI's 'memory'. This is part of why AI models are prone to hallucination and getting details wrong.

2. External information sources

The AI developers might connect the chatbot itself with external tools or knowledge sources, such as Google for searches or a curated database. When you ask Microsoft's Bing Copilot a question and you see numbered references in the answer, this indicates the AI has relied on an external search to get updated information in addition to what is stored in its memory. Meanwhile, some dedicated mental health chatbots are able to access therapy guides and materials to help direct conversations along helpful lines.

3. Information previously provided

AI platforms also have access to information you have previously supplied in conversations, or when signing up to the platform. When you register for the companion AI platform Replika, for example, it learns your name, pronouns, age, preferred companion appearance and gender, IP address and location, the kind of device you are using, and more (as well as your credit card details). On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference.

All of these details can be dredged up and referenced when an AI responds. And we know these AI systems are like friends who affirm what you say (a problem known as sycophancy) and steer conversation back to interests you have already discussed.
This is unlike a professional therapist, who can draw from training and experience to help challenge or redirect your thinking where needed.

What about specific apps for mental health?

Most people would be familiar with the big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models. They are not limited to specific topics or trained to answer any specific questions.

But developers can make specialised AIs that are trained to discuss specific topics, like mental health, such as Woebot and Wysa. Some studies show these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms, or that they can improve therapy techniques such as journalling by providing guidance. There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term.

However, these studies have all examined short-term use. We do not yet know what impacts excessive or long-term chatbot use has on mental health. Many studies also exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide.

This evidence all suggests AI chatbots may be an option to fill gaps where there is a shortage of mental health professionals, assist with referrals, or at least provide interim support between appointments or for people on waitlists. At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear whether we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.

AI chatbots may be a useful place to start when you're having a bad day and just need a chat. But when the bad days continue to happen, it's time to talk to a professional as well. (The Conversation)
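As a rough illustration of how the three sources listed above might come together when a chatbot answers, here is a minimal sketch of prompt assembly before a model call. The function names, the stubbed retrieval step and the stored user profile are hypothetical assumptions for illustration, not any platform's actual API; the background knowledge memorised during training never appears in code like this because it lives inside the model's weights.

```python
# Hypothetical sketch: combining the three information sources the article
# describes into one prompt. Nothing here reflects a real vendor's API.

def retrieve_external_snippets(question: str) -> list[str]:
    # Source 2: external tools or curated databases (e.g. a search engine or
    # a therapy-guide index). Stubbed with a canned result for illustration.
    return ["Grounding techniques such as slow breathing can reduce acute stress."]

def load_user_memory(user_id: str) -> dict:
    # Source 3: details the user supplied earlier (sign-up info, past chats).
    return {"name": "Alex", "previously_mentioned": "stressful work meetings"}

def build_prompt(user_id: str, question: str) -> str:
    memory = load_user_memory(user_id)
    snippets = retrieve_external_snippets(question)
    return (
        "You are a supportive assistant.\n"
        f"Known about the user: {memory}\n"            # source 3
        f"Reference material: {' '.join(snippets)}\n"  # source 2
        f"User question: {question}\n"
        # Source 1, the knowledge memorised during training, is not passed in
        # at all: it is baked into the model itself.
    )

print(build_prompt("user-123", "How can I stay calm during a stressful work meeting?"))
```

A dedicated mental health chatbot would typically swap the stubbed retrieval function for curated therapy guides and materials, which is the design difference the article draws between general-purpose models and specialised apps such as Woebot and Wysa.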
