Gujarat HSC 2025 Supplementary exam registration underway: Check how to apply

Time of India | 14-05-2025
GSEB Supplementary exam registration window 2025: The Gujarat Secondary and Higher Secondary Education Board (GSEB) has officially opened the application window for the HSC (Class 12) Supplementary Examinations 2025 for both Science and General streams.
Students who have failed in one or more subjects, were absent, or wish to improve their scores can apply for the supplementary exams.
As per the latest GSEB notification, students can apply for the supplementary exams between May 12 and May 19, 2025, until 5 PM. Applications must be submitted exclusively through schools via the board's official websites — gseb.org or https://hscscipurakreg.gseb.org. No offline applications will be entertained.
The GSEB HSC 2025 Results were declared on May 5, with the Science stream recording a pass percentage of 83.51%, and the General stream achieving 93.07%. The board conducted the main HSC exams from February 27 to March 13, 2025.
The GSEB Supplementary exam registration window opened on May 12 and will close on May 19, 2025.
GSEB HSC Supplementary Exams 2025: Steps to apply
Students can follow the steps mentioned here to apply for the GSEB HSC Supplementary exams 2025:
Visit your school: Students cannot apply individually. Schools are authorized to submit applications on behalf of their students.
Prepare required information: Submit your seat number and the subject details for which you're appearing.
Visit the GSEB portal: Schools must visit either gseb.org or https://hscscipurakreg.gseb.org.
Complete online registration: Schools need to log in and fill out the student's details. Ensure all entries are correct, especially the seat number and exemption status, if applicable.
Pay the examination fee: The fee payment must be done online (except for exempted categories).
Submit and confirm: After verification, confirm the application. A printout or digital acknowledgment should be retained for future reference.
Concerned authorities of the school can click on the link provided here to apply for the GSEB HSC Supplementary exam 2025.

Related Articles

‘We'll be history': ‘Godfather of AI' says AI might destroy humanity - the one thing that could save us is…
Mint | 16 hours ago

Former Google executive Geoffrey Hinton, also known as the 'Godfather of AI', fears that artificial intelligence could wipe out humanity in the future and that 'tech bros' are taking the wrong approach to the technology. The Nobel Prize-winning computer scientist, in an interaction with CNN, said that there is a 10 to 20% chance that AI wipes out humans, and expressed doubts about how companies are trying to ensure that humans remain 'dominant' over 'submissive' AI systems.

Speaking at Ai4, an industry conference in Las Vegas, Hinton said, 'That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that.' Hinton went on to warn that, in the future, AI systems would be able to control humans just as easily as an adult can bribe a 3-year-old with candy.

In such a scenario, Hinton has offered an unusual solution to protect humanity: he suggests building 'maternal instincts' into AI models so that 'they really care about people' even when the technology becomes more powerful than humans. AI systems 'will very quickly develop two subgoals, if they're smart: One is to stay alive… (and) the other subgoal is to get more control.' 'There is good reason to believe that any kind of agentic AI will try to stay alive,' he added.

Speaking about the risks of AI to CNN, Hinton said, 'Most of the AI experts believe that sometime in the next five to 20 years, we'll make AIs that are smarter than people, and they'll probably end up much smarter than people, and there's very few examples we know of smarter things being controlled by less smart things.'

'In fact, pretty much the only example we know is a mother being controlled by her baby to make that happen. Evolution built maternal instincts into the mother, and if we don't do something like that with these alien beings we are creating, we'll be history,' Hinton added.

ChatGPT health scare: 60-year-old man hospitalized after following AI advice- Here's why
Time of India | a day ago

A 60-year-old man was hospitalized for three weeks after replacing table salt with sodium bromide following advice from the AI chatbot ChatGPT. The case was detailed in a report published this month in the Annals of Internal Medicine by three physicians from the University of Washington.

According to the report, the man had no prior psychiatric history when he arrived at the hospital "expressing concern that his neighbor was poisoning him." He reported that he had been distilling his own water at home and appeared paranoid about the water he was offered. After lab tests and consultation with poison control, doctors found high levels of bromide in his body, as reported by NBC News. "In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the case report said.

Once stabilized, the man revealed that he had conducted a "personal experiment" to eliminate table salt from his diet after reading about its potential health risks. He said he had consulted ChatGPT before making the change, which he followed for three months. The physicians did not have access to the man's exact ChatGPT conversation logs. However, when they asked ChatGPT 3.5 what chloride could be replaced with, the AI suggested bromide. "Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the report said.

In a prior statement to Fox News, OpenAI, the chatbot's parent company, emphasized that ChatGPT is not intended for treating any health condition. "We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance," the statement said.

The report noted that bromide toxicity was more common in the early 1900s, when bromide salts were found in over-the-counter medications and sedatives and accounted for about 8% of psychiatric admissions. Today, bromide is primarily used in veterinary medicine as an anti-epileptic treatment for cats and dogs, according to the National Library of Medicine. The report said the syndrome is rare, but cases have recently re-emerged because bromide-containing substances have become more widely available online.

Health professionals' skills to detect benign tumours drop after using AI for 3 months, study finds
Time of India | 2 days ago

New Delhi: Frequent reliance on artificial intelligence may lead to the risk of losing skills, as indicated by a study that found a 20 per cent decrease in the ability of experienced health professionals to detect benign tumour growths in colonoscopies when not using AI.

Researchers from Poland, Norway, Sweden, and other European nations examined more than 1,400 colonoscopies - approximately 800 conducted without AI assistance and around 650 with AI used during the procedure. A colonoscopy is used to inspect the large intestine, encompassing the colon and rectum, for disease. The study compared colonoscopies performed three months prior to and following the integration of AI.

Three months after becoming reliant on AI for support, the detection rate of adenomas -- a non-cancerous tumour -- during standard colonoscopy decreased significantly, from 28.4 per cent before to 22.4 per cent after exposure to AI, the authors stated in their study published in The Lancet Gastroenterology and Hepatology journal.

While studies have shown that using AI can help doctors and clinicians improve cancer detection, this study is the first to "suggest a negative impact of regular AI use on healthcare professionals' ability to complete a patient-relevant task in medicine of any kind," said author Marcin Romanczyk of the Academy of Silesia in Poland. "Our results are concerning, given that the adoption of AI in medicine is rapidly spreading. We urgently need more research into the impact of AI on health professionals' skills across different medical fields," Romanczyk said.

Author Yuichi Mori from the University of Oslo, Norway, said the results posed "an interesting question" related to previous trials, which found that AI-assisted colonoscopy allowed for higher tumour detection than colonoscopy without AI's help. "It could be the case that non-AI-assisted colonoscopy assessed in these trials is different from standard non-AI-assisted colonoscopy, as the endoscopists in the trials may have been negatively affected by continuous AI exposure," Mori said. The authors emphasised the need for additional research to understand the dynamics involved when healthcare professionals and AI systems are not effectively synchronised.

In a commentary article related to the research, Dr Omer Ahmad of University College London, who was not involved in the study, said the findings "temper the current enthusiasm for (a) rapid adoption of AI-based technologies." The results provide the "first real-world clinical evidence for the phenomenon of deskilling, potentially affecting patient-related outcomes" and "highlight the importance of carefully considering possible unintended clinical consequences," Dr Ahmad said. "Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy," he added.
