Latest news with #NitinAnand


Business Standard
2 days ago
Suraksha Se Samriddhi Summit 2025: Smart Technologies to Predict and Prevent Crime
PNN New Delhi [India], August 14: The prestigious Suraksha Se Samriddhi Summit 2025, organized by MBL Suraksha Security Systems, was held at the Radisson Blu Hotel in Paschim Vihar, attracting over 500 entrepreneurs, social workers, and concerned citizens. The summit showcased the latest advancements in smart technology designed to provide real-time alerts and early warnings of potential accidents, thefts, fires, and other emergencies.

Led by the company's Founder & Director, Jatinder Singh, and Director Nitin Anand, the summit emphasized the importance of adopting a proactive approach to safety and security. "We must now adopt security by being proactive rather than reactive," said Jatinder Singh during his address. "Our goal is to issue warnings before incidents occur and prevent crimes before they happen."

Mr. Jatinder Singh shared that the inspiration behind developing these technologies came from a personal incident: a theft at his own shop. With no existing solution available, he decided to develop the system himself. "This strategy has not only helped prevent thefts and violence but has also created employment opportunities for young people," he added.

Director Nitin Anand provided attendees with a detailed explanation of the technical aspects of the smart security systems. "These live monitoring services empower users with real-time vigilance, enabling them to respond before an emergency escalates," he noted.

Adding a motivational dimension to the event, renowned speaker Dr. Vivek Bindra addressed the audience, emphasizing the importance of safety, self-reliance, and technological awareness. "In today's world, it's not enough to stay safe; we must also contribute to the safety of others," he said.

The summit was widely praised as a major step forward in integrating smart technologies with community awareness. Attendees appreciated the initiative, recognizing it as a forward-thinking approach to public safety. More than just a showcase of innovation, the Suraksha Se Samriddhi Summit 2025 served as a powerful reminder of the need for vigilance and preparedness. By combining technology with education and awareness, it paved the way for a safer and more secure future for all.

(ADVERTORIAL DISCLAIMER: The above press release has been provided by PNN. ANI will not be responsible in any way for the content of the same)

Business Standard
27-06-2025
Thinking capped: How generative AI may be quietly dulling our brains
It has been barely three years since generative artificial intelligence (AI) chatbots such as ChatGPT appeared on the scene, and there is already concern over how they might be affecting the human brain. The early prognosis isn't good. The findings of a recent study by researchers from the Massachusetts Institute of Technology (MIT) Media Lab, Wellesley College, and MassArt indicate that tools such as ChatGPT negatively impact the neural, linguistic, and cognitive capabilities of humans. While this study is preliminary and limited in scope, involving barely 54 subjects aged 18 to 34, it found that those who used ChatGPT for writing essays (as part of the research experiment) showed measurably lower brain activity than their peers who didn't. 'Writing without (AI) assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing,' it found.

Various experts in India, too, reiterate the concerns of overdependence on AI, to the extent that people outsource even thinking to AI. Those who study the human brain describe this as 'cognitive offloading', which, they caution, can diminish critical thinking and reasoning capability while also building a sense of social isolation, in effect dragging humans into an 'idiot trap'.

Training the brain to be lazy

'We now rely on AI for tasks we used to do ourselves — writing essays, solving problems, even generating ideas,' says Nitin Anand, additional professor of clinical psychology at the National Institute of Mental Health and Neuro Sciences (Nimhans), Bengaluru. 'That means less practice in critical thinking, memory recall, and creative reasoning.' This dependence, he adds, is also weakening people's ability to delay gratification. 'AI tools are designed for speed. They answer instantly. But that trains people to expect quick solutions everywhere, reducing patience and long-term focus.'

Anand warns that this behavioural shift is feeding into a pattern of digital addiction, which he classifies as the 4Cs: craving, compulsion, loss of control, and consequences (see box). 'When someone cannot stop checking their phone, feels restless without it, and suffers in real life because of it — that's addiction,' he says, adding that the threat of addiction to technology has been multiplied by something as adaptive and customisable as AI.

Children and adolescents are particularly at risk, says Pankaj Kumar Verma, consultant psychiatrist and director of Rejuvenate Mind Neuropsychiatry Clinic, New Delhi. 'Their prefrontal cortex — the brain's centre for planning, attention, and impulse control — is still developing,' he explains. 'Constant exposure to fast-changing AI content overstimulates neural circuits, leading to short attention spans, poor impulse control, and difficulty with sustained focus.'

The effects don't stop at attention. 'We're seeing a decline in memory retention and critical thinking, simply because people don't engage deeply with information anymore,' Verma adds. Even basic tasks like asking for directions or speaking to others are being replaced by AI, increasing social isolation, he says.

Much of this harks back to the time when landlines came to be replaced by smartphones. Landline users rarely needed a phonebook — numbers of friends, family, and favourite shops were memorised by heart. But with mobile phones offering a convenient 'contacts' list, memory was outsourced. Today, most people can barely remember three-odd numbers unaided.
With AI, such cognitive shifts will likely become more pronounced, the experts say. What looks like convenience today might well be shaping a future where essential human skills quietly fade away.

Using AI without losing ourselves

Experts agree that the solution is not to reject AI, but to regulate its use with conscious boundaries and real-world grounding. Verma advocates structured rules around technology use, especially in homes with children and adolescents. 'Children, with underdeveloped self-regulation, need guidance,' he says. 'We must set clear boundaries and model balanced behaviour. Without regulation, we risk overstimulating developing brains.'

To prevent digital dependence, Anand recommends simple yet effective routines that can be extended to AI use. The 'phone basket ritual', for instance, involves setting aside all devices in a common space at a fixed hour each day — usually in the evening — to create a screen-free window for family time or rest. He also suggests 'digital fasting': unplugging from all screens for six to eight hours once a week to reset attention and reduce compulsive use. 'These habits help reclaim control from devices and re-train the brain to function independently,' he says. Perhaps digital fasting can be extended to 'AI fasting' during work and school assignments, allowing the brain to engage in cognitive activities.

Pratishtha Arora, chief executive officer of Social and Media Matters, a digital rights organisation, highlights the essential role of parental responsibility in shaping children's digital lives. 'Technology is inevitable, but how we introduce it matters,' she says. 'The foundation of a child's brain is laid early. If we outsource that to screens, the damage can be long-term.' She also emphasises the need to recognise children's innate skills and interests rather than plunging them into technology at an early age.

Shivani Mishra, AI researcher at the Indian Institute of Technology Kanpur, cautions against viewing AI as a replacement for human intelligence. 'AI can assist, but it cannot replace human creativity or emotional depth,' she says. Like most experts, she too advises that AI should be used to reduce repetitive workload, 'and free up space for thinking, not to avoid thinking altogether'.

The human cost

According to Mishra, the danger lies not in what AI can do, but in how much we delegate to it, often without reflection. Both Anand and Verma share concerns about how its unregulated use could stunt core human faculties. Anand reiterates that unchecked dependence could erode the brain's capacity to delay gratification, solve problems, and tolerate discomfort. 'We're at risk of creating a generation of young people who are highly stimulated but poorly equipped to deal with the complexities of real life,' Verma says.

The way forward, the experts agree, lies in responsible development: creating AI systems grounded in ethics, transparency, and human values. Research in AI ethics must be prioritised not just for safety, but also to preserve what makes us human in the first place, they advise. The question is not whether AI will shape the future; it is already doing so. It is whether humans will remain conscious architects of that future or passive participants in it.
- Writing without AI assistance leads to higher cognitive load engagement, stronger executive control, and deeper creative processing
- Writing with AI assistance reduces overall neural connectivity and shifts the dynamics of information flow
- Large language model (LLM) users noted a diminishing inclination to evaluate the output critically
- Participants in the brain-only group reported higher satisfaction and demonstrated higher brain connectivity compared to other groups
- Essays written with the help of an LLM carried less significance or value for the participants, as they spent less time on writing and mostly failed to provide a quote from their essays