Latest news with #Societies


Express Tribune
01-08-2025
- Politics
- Express Tribune
The wish guru
India today is not ruled by a prime minister but by a wish guru. A leader fuelled less by policy than by image, less by competence than by dominance. At the heart of this phenomenon lie two forces Francis Fukuyama identifies as isothymia and megalothymia: the yearning to be recognised as an equal, and the darker craving to dominate. The first is an innocent and universal human impulse. The second is what happens when recognition becomes a zero-sum game. Countries nursing old colonial wounds are particularly vulnerable to this transformation, mistaking raw power for self-respect. India's growing discord, both at home and abroad, is a textbook example of this shift.

The pattern is global. Dominance hierarchies are driving debates everywhere, from conspiracy theories to cultural skirmishes like Sydney Sweeney's "good jeans". We live in the age of pop psychology, where one-size-fits-all cures for complex social pathologies sell faster than careful scholarship. Jordan Peterson's 12 Rules for Life, with its lobsters and wrens, rehabilitates hierarchy as a survival mechanism, outselling more challenging works like Acemoglu and Robinson's The Narrow Corridor: States, Societies, and the Fate of Liberty, which connects the thirst for dominance to despotic leviathans. These influences seep into politics, legitimising strongmen who promise self-assertion on a national scale, even at the cost of liberty.

Last week, I wrote about India's monsoon session and the widening rift between the Modi government and the RSS. Some readers complained that I glossed over the international blowback against the diaspora and the policies that caused it. The oversight was due to space, but the point deserves elaboration, particularly when the ineptness of Indian discourse leaves these issues unexamined.

India is in the eye of a storm. Exogenous shocks and self-inflicted wounds buffet the country, yet its pundits seem blind to the forces shaking their world. This is no accident. In electing Modi over more capable leaders, India chose form over substance. Its media abandoned its role as everyman's watchdog and recast itself as the strongman's cheerleader and dirty tricks department. Journalism was reduced to laundering one man's image, living off state or elite-sponsored hand-me-downs. The so-called alternative media offered only reaction, not depth, sacrificing critical thought at the altar of expediency.

Consider this. Last week I noted ex-VP Jagdeep Dhankhar's age (74) as a key, though ignored, reason for his shock resignation. No one has gone near it with a barge pole since. Likewise, the media gawks at President Trump's tariff policy towards India without understanding the deeper dynamics, or Modi's failure to strike a trade deal with Washington. When you have a hammer, everything looks like a nail. Modi used repression to silence farmers' protests. Could he then open agriculture to foreign competition without risking chaos? His supporters blame Congress for forcing his hand in Operation Sindoor. But would this opposition back him in case of trade concessions?

Modi rose to power not as a policy craftsman but as a political sorcerer selling wishes. His rhetoric promised miracles: a self-reliant India, instant global respect, prosperity for all. Like a street-side fakir offering talismans for every ailment, he presented dominance as a cure for national insecurity. But wishes are not strategies, and magic cannot substitute for governance. The deeper the country's problems grew, the louder the promises became. India was not given a statesman but a wish guru, a leader who thrives on chants of devotion while evading the hard work of building lasting institutions.

The Indian diaspora played a crucial part in Modi's rise. It craved an image makeover abroad and better governance at home. After "putting Muslims in their place" during the 2002 riots, he built a doer's reputation as Gujarat's chief minister. Billionaire allies enriched under his rule amplified the myth. Eleven years later, many diaspora members are waking up to the reality that they were duped.

For years, the diaspora believed that power abroad would translate into prestige at home, and that Modi was the man to deliver it. They mistook fear for respect, thinking that browbeating minorities, silencing journalists, and projecting brute strength would make India admired on the world stage. But dominance is not dignity. As the cracks appear - H-1B visas under fire, overseas scrutiny rising, far-right links backfiring - the myth of Modi as a global strongman-turned-statesman is collapsing. Wishes, no matter how loudly sold, cannot override the long memory of democracies or the quiet contempt of the powerful.

This diaspora had flourished under western multicultural hospitality. Modi's natural allies, however, were not liberal democrats but far-right extremists. His obsessive image projection forced Indians abroad into the spotlight, inviting scrutiny and paranoia, while his minions empowered far-right groups, making life abroad harder. Manmohan Singh had mainstreamed Indians overseas. Modi weaponised them. Even coercion was acceptable if it polished his image. Had it not been for the ill-advised visit of far-right EU MEPs to Kashmir to end the post-Article 370 isolation, the world might never have heard of Srivastava Group's operations or India's links with Europe's far right.

When Nikki Haley was foisted on Trump's first administration as UN ambassador, it passed without comment. But when she was the last to exit the 2024 primaries, despite paltry votes, and after a failed attempt on Trump's life, it was too much for his base. Kamala Harris then became his main challenger. The same base that still seethes over Hillary's challenge to their leader began to question the Indian link. Focus shifted to Silicon Valley, feeding into the H-1B visa backlash. Rishi Sunak, similarly pushed to front a failing UK government, led his party to its worst defeat in living memory.

Meanwhile, Modi's obsession with image left him surrounded by yes-men. Governance atrophied. His lack of education and limited grasp of key issues left India with little more than a wish guru at the helm.

The RSS, under Dr Mohan Bhagwat, had banked early wins under Modi but plays a long game. It now sees the fallout of self-serving, blind policies. Listen to Bhagwat's speeches after enduring Modi's diatribes and you are pleasantly surprised. Unlike Modi, he is not made insecure by talent and genuine intellectual discourse. Darkness may be Modi's compulsion. The RSS wants to outgrow it and build genuine global outreach. The 2024 election results, state polls in Haryana and Maharashtra, and the Operation Sindoor debacle offer a chance to replace Modi. A hundred-year-old organisation with all the cards is unlikely to let that chance slip. Modi has none left to play.


Observer
07-05-2025
- Science
- Observer
When AI dominates, do minds fade?
During my second reading of the influential book Superintelligence: Paths, Dangers, Strategies, which explores the accelerating rise of artificial intelligence and the alarming possibilities of achieving superior synthetic cognition, I found myself reflecting on a more profound concern: not the ascent of machines, but the potential decline of human intelligence itself.

As algorithms increasingly integrate into every dimension of life - including, soon, our biological systems - the question of the future of human cognition becomes not only philosophical but existential. While technological advancement is often praised for enhancing productivity and improving quality of life, there is growing unease that excessive reliance on AI systems may lead to a gradual deterioration in human intellectual capacity. This is not merely about the erosion of practical skills, but about the very architecture of intelligence itself.

A 2023 report by the World Economic Forum projected that AI will directly impact around 83 million jobs by 2027, with algorithms replacing many roles once carried out by humans. Though this shift is often justified by gains in efficiency and reductions in error, a pressing question emerges: will this transformation result in the atrophy of human intellect due to increasing reliance on digital systems?

To address this question objectively, we turn to a 2025 study published in 'Societies', which found that prolonged use of AI tools - particularly generative AI models that now rival traditional search engines - correlates with a measurable decline in memory and critical thinking skills. Those who regularly rely on digital tools for quick problem-solving, the study noted, tend to demonstrate diminished creativity and struggle with complex decision-making.

This cognitive decline appears most pronounced in educational contexts. While AI-powered personalised learning platforms have been lauded for tailoring education to individual needs - something I've previously affirmed in academic articles - there's a darker undercurrent. Over time, this ease of access and consumption may produce addiction-like effects that dull the brain's analytical and reflective capabilities. Learning without effort, the study suggests, undermines the very mental muscles needed for critical and independent thinking.

A 2024 report from the UK Parliament reinforces this concern. It found that students who depend heavily on AI tools for research and writing assignments exhibit lower levels of logical reasoning and idea generation compared to peers who employ traditional study methods. The researchers recommend striking a balance between leveraging advanced technologies and cultivating independent cognitive skills.

The issue is not confined to educational outcomes; deeper consequences loom on the horizon. A 2023 study in 'Frontiers' warns that excessive dependence on AI may cause long-term changes in brain structure, particularly in areas responsible for memory and spatial reasoning. Reduced cognitive engagement can also impair the development of neural networks critical for innovation and analytical thought. From a genetic perspective, emerging hypotheses - though not yet definitive - suggest that prolonged mental inactivity could influence gene expression in neurons, ultimately impairing adaptability and mental growth across generations.

These concerns are not alarmist exaggerations but existential challenges that demand urgent reassessment of our relationship with technology. To protect the integrity of human cognition, we must redesign our educational systems and daily habits in ways that uphold mental resilience.

Among the actionable steps is a shift towards interactive and creative education models - those that stimulate critical thinking, encourage debate and maintain space for organic human engagement. Moderate, intentional use of AI must be emphasised, with conscious limits on digital immersion and reinforcement of non-digital experiences. I have personally experimented with hybrid teaching methods in university settings - integrating technology while preserving active discussion and inquiry - and witnessed clear improvements in student creativity and engagement.

As I also discussed in my book Thus We Evolve - Arabic version, the development of a human being is rooted not only in cognition but in moral, linguistic and logical dimensions. Humanity is inherently ethical, rational and expressive, but these innate faculties require nurturing. Overuse of AI, if unchecked, risks stunting the emergence of these traits over time. Therefore, we must invest in revitalising moral education - especially in the face of globalised digital values - while strengthening language, communication and logical reasoning in an age increasingly shaped by the cold rationality of algorithms.

To conclude, the challenge we face is not merely technological, but civilisational. Between the hammer of advancing AI and the anvil of intellectual complacency, we are forging the future of the human mind. We must ensure that what emerges is not an echo of machines, but a revitalised humanity worthy of the tools it has created.


Time of India
30-04-2025
- Business
- Time of India
Marketing Federation MD suspended over Rs 5.5cr fraud
Panaji: The Registrar of Cooperative Societies has suspended Kashinath Naik, managing director of the Goa State Cooperative Marketing and Supply Federation, till the inquiry into his alleged role in a Rs 5.5 crore fraud is over. The suspension order was issued based on the recommendations of the committee of administrators. The administrators have identified another transaction involving a staff member that allegedly resulted in a financial loss to the tune of Rs 1.95 crore. The federation bought 1,589 metric tonnes of onion from NAFED (National Agricultural Cooperative Marketing Federation of India) at Rs 35 per kg between Oct 22 and Nov 11. The onions were to be sold in Goa at a subsidised rate but were allegedly sold at a higher price for personal gain. The suspension comes months after a complaint was filed in Nashik for allegedly defrauding NAFED and other federations of Rs 5.5 crore. Officials with the Registrar of Cooperative Societies said that a detailed inquiry and disciplinary proceedings are being initiated. Local authorities will coordinate with the Nashik police's economic offences wing.
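As a rough, purely illustrative check (an assumption on this editor's part, not a claim made in the report), the consignment value implied by the figures above - 1,589 metric tonnes at Rs 35 per kg - works out to about Rs 5.6 crore, the same order as the headline fraud amount:

    # Hypothetical back-of-the-envelope check using the figures reported above.
    # Assumption (not stated in the article): the Rs 5.5 crore fraud figure
    # corresponds to the value of this onion consignment.
    tonnes = 1589                              # metric tonnes bought from NAFED
    rate_per_kg = 35                           # rupees per kg
    value_rupees = tonnes * 1000 * rate_per_kg
    value_crore = value_rupees / 10_000_000    # 1 crore = 10 million rupees
    print(f"Consignment value: Rs {value_crore:.2f} crore")  # roughly Rs 5.56 crore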

Wall Street Journal
03-04-2025
- Wall Street Journal
How I Realized AI Was Making Me Stupid—and What I Do Now
I first suspected artificial intelligence was eating my brain while writing an email about my son's basketball coach. I wanted to complain to the local rec center—in French—that the coach kept missing classes. As an American reporter living in Paris, I've come to speak French pretty well, but the task was still a pain. I described the situation, in English, to ChatGPT. Within seconds, the bot churned out a French email that sounded both resolute and polite. I changed a few words and sent it.

I soon tasked ChatGPT with drafting complex French emails to my kids' school. I asked it to summarize long French financial documents. I even began asking it to dash off casual-sounding WhatsApp messages to French friends, emojis and all. After years of building up my ability to articulate nuanced ideas in French, AI had made this work optional. I felt my brain get a little rusty. I was surprised to find myself grasping for the right words to ask a friend for a favor over text. But life is busy. Why not choose the easy path?

AI developers have promised their tools will liberate humans from the drudgery of repetitive brain labor. It will unshackle our minds to think big. It will give us space to be more creative. But what if freeing our minds actually ends up making them lazy and weak?

'With creativity, if you don't use it, it starts to go away,' Robert Sternberg, a Cornell University professor of psychology, told me. Sternberg, who studies human creativity and intelligence, argues that AI has already taken a toll on both.

Smartphones are already blamed for what some researchers call 'digital dementia.' In study after study, scientists have shown that people who regularly rely on digital help for some tasks can lose capacity to do them alone. The more we use GPS, the worse we become at finding our way on our own. The more we rely on our stored contacts, the less likely we are to know the phone numbers of close friends, or even our spouse's.

Most of us don't worry about not learning phone numbers anymore, if we're old enough to have ever learned them at all. But what happens when we start outsourcing core parts of our thinking to a machine? Such as understanding a text well enough to summarize it. Or finding the words that best express a thought. Is there a way to use these new AI tools without my brain becoming mush?

Like AI itself, research into its cognitive effects is in its infancy, but early results are inauspicious. A study published in January in the journal Societies found that frequent use of AI tools such as ChatGPT correlated with reduced critical thinking, particularly among younger users. In a new survey of knowledge workers, Microsoft researchers found that those with more confidence in generative AI engaged in less critical thinking when using it.

'Tools like GPS and generative AI make us cognitively lazy,' said Louisa Dahmani, a neuroscientist at Massachusetts General Hospital, who in 2020 showed that habitual use of GPS navigation reduces one's spatial memory. 'While it's possible to use these tools in a mindful manner, I think that most of us will take the path of least resistance,' she told me.

Adopting tools for brain work—a process called cognitive offloading—has been largely an engine of human progress. Ever since Sumerians scratched their debts into clay tablets, people have been using stone, papyrus and paper to outsource their memories and conceptions of everything from theorems to shopping lists. Opportunities for cognitive offloading have multiplied lately. Paper calendars have long kept appointments; digital ones send alerts when they are happening. Calculators add up numbers; Excel spreadsheets balance whole budgets.

Generative AI promises to boost our productivity further. Workers are increasingly using it to write emails, transcribe meetings or even—shhh—summarize those way-too-long documents your boss sends. By late last year, around a quarter of all corporate press releases were likely written with AI help, according to a preprint paper led by Stanford Ph.D. students. But these short-term gains may have long-term costs.

George Roche, co-founder of Bindbridge, an AI molecular-discovery startup, told me he uploads several scientific papers a day, on topics from botany to chemistry, to an AI chatbot. It has been a boon, allowing Roche to stay on top of far more research than he could before. Yet this ease has begun to trouble him. 'I'm outsourcing my synthesis of information,' Roche told me. 'Am I going to lose that ability? Am I going to get less sharp?'

Hemant Taneja, chief executive of Silicon Valley venture-capital firm General Catalyst, which has invested in AI companies including Anthropic and Mistral AI, concedes that while AI technology offers real benefits, it may also compromise our thinking skills. 'Our ability to ask the right questions is going to weaken if we don't practice,' Taneja said.

These risks could be greater for young people if they start offloading to AI cognitive skills that they haven't yet honed for themselves. Yes, some studies show that AI tutors can help students if used well. But a Wharton School study last year found that high-school math students who studied with an AI chatbot that was willing to provide answers to math problems trailed a group of bot-free students on the AI-free final exam.

'There is a possible cyberpunk dystopian future where we become stupid and computers do all the thinking,' Richard Heersmink, a philosopher of technology at Tilburg University in the Netherlands, told me.

Let's not panic just yet. Humans have a history of issuing dire predictions about new technologies that later prove to be misplaced. More than 2,400 years ago, Socrates reportedly suggested that writing itself would 'produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.' It would be hard to suggest, however, that the benefits of writing and reading don't outweigh the costs. Since then, new technologies, from the printing press to the knitting machine to the telegraph, have all provoked objections about their impact on individuals and society—with varying degrees of prescience. But there is no stopping progress.

With the AI future on our doorsteps, what do scientists say we ought to do to keep our minds spry? The basic principle is use it or lose it. Writing is a good way to practice thinking and reasoning precisely because it is hard.

'The question is what skills do we think are important and what skills do we want to relinquish to our tools,' said Hamsa Bastani, a professor at the Wharton School and an author of that study on the effects of AI on high-school math students. Bastani told me she uses AI to code, but makes sure to check its work and does some of her own coding too. 'It's like forcing yourself to take the stairs instead of taking the elevator.'

Mark Maitland, a senior partner at the consulting firm Simon-Kucher, said that although his staff now uses AI transcriptions of meetings, he asks his team to take handwritten notes, too, given research that taking notes leads to better recall. 'It's easy to become lazy if you think something else is doing it for you,' Maitland told me.

I'm now leaning into mental effort in my own life, too. That means I make myself turn off the GPS in unfamiliar places. I take handwritten notes when I want to remember something. I also resist my kids' demands to ask ChatGPT for a made-up story and encourage them to create their own instead. I've even started writing my own French-language emails and WhatsApp messages again. At least most of the time. I'm still busy after all.

Sam Schechner is a technology reporter in The Wall Street Journal's Paris bureau.