
To Gatekeep Or Let Loose? Parents Face Tough Choices On AI
When it comes to AI, many parents navigate between fear of the unknown and fear of their children missing out.
"It's really hard to predict anything over five years," said Adam Tal, an Israeli marketing executive and father of two boys aged seven and nine, when describing the post-generative AI world.
Tal is "very worried" about the future this technology holds for his children -- whether it's deepfakes, "the inability to distinguish between reality and AI," or "the thousands of possible new threats that I wasn't trained to detect."
Mike Brooks, a psychologist from Austin, Texas, who specialises in parenting and technology, worries that parents are keeping their heads in the sand, refusing to grapple with AI.
"They're already overwhelmed with parenting demands," he observed -- from online pornography and TikTok to video games and "just trying to get them out of their rooms and into the real world."
For Marc Watkins, a professor at the University of Mississippi who focuses on AI in teaching, "we've already gone too far" to shield children from AI past a certain age.
Yet some parents are still trying to remain gatekeepers to the technology.
"In my circle of friends and family, I'm the only one exploring AI with my child," remarked Melissa Franklin, a law student in Kentucky and mother of a 7-year-old boy.
"I don't understand the technology behind AI," she said, "but I know it's inevitable, and I'd rather give my son a head start than leave him overwhelmed."
Benefits and risks
The path is all the more difficult for parents given the lack of scientific research on AI's effects on users.
Several parents cite a study published in June by MIT, which found that brain activity and memory were more stimulated in individuals not using generative AI than in those who had access to it.
"I'm afraid it will become a shortcut," explained a father of three who preferred to remain anonymous. "After this MIT study, I want them to use it only to deepen their knowledge."
This caution shapes many parents' approaches. Tal prefers to wait before letting his sons use AI tools. Melissa Franklin only allows her son to use AI with her supervision to find information "we can't find in a book, through Google, or on YouTube."
For her, children must be encouraged to "think for themselves," with or without AI.
But one father -- a computer engineer with a 15-year-old -- doesn't believe kids will learn AI skills from their parents anyway.
"That would be like claiming that kids learn how to use TikTok from their parents," he said. It's usually "the other way around."
Watkins, himself a father, says he is "very concerned" about the new forms that generative AI is taking, but considers it necessary to read about the subject and "have in-depth conversations about it with our children."
"They're going to use artificial intelligence," he said, "so I want them to know the potential benefits and risks."
The CEO of AI chip giant Nvidia, Jensen Huang, often speaks of AI as "the greatest equalisation force that we have ever known," democratising learning and knowledge.
But Watkins fears a different reality: "Parents will view this as a technology that will be used if you can afford it, to get your kid ahead of everyone else."
The computer scientist father readily acknowledged this disparity, saying, "My son has an advantage because he has two parents with PhDs in computer science, but that's 90 percent due to the fact that we are more affluent than average" -- not their AI knowledge.
"That does have some pretty big implications," Watkins said.