Anxious parents face tough choices on AI


Eyewitness News, 6 days ago
NEW YORK - When it comes to AI, many parents navigate between fear of the unknown and fear of their children missing out.
"It's really hard to predict anything over five years," said Adam Tal, an Israeli marketing executive and father of two boys aged seven and nine, when describing the post-generative AI world.
Tal is "very worried" about the future this technology holds for his children -- whether it's deepfakes, "the inability to distinguish between reality and AI," or "the thousands of possible new threats that I wasn't trained to detect."
Mike Brooks, a psychologist from Austin, Texas, who specializes in parenting and technology, worries that parents are keeping their heads in the sand, refusing to grapple with AI.
"They're already overwhelmed with parenting demands," he observed -- from online pornography and TikTok to video games and "just trying to get them out of their rooms and into the real world."
For Marc Watkins, a professor at the University of Mississippi who focuses on AI in teaching, "we've already gone too far" to shield children from AI past a certain age.
Yet some parents are still trying to remain gatekeepers to the technology.
"In my circle of friends and family, I'm the only one exploring AI with my child," remarked Melissa Franklin, a law student in Kentucky and mother of a 7-year-old boy.
"I don't understand the technology behind AI," she said, "but I know it's inevitable, and I'd rather give my son a head start than leave him overwhelmed."
'BENEFITS AND RISKS'
The path is all the more difficult for parents given the lack of scientific research on AI's effects on users.
Several parents cite a study published in June by MIT, showing that brain activity and memory were more stimulated in individuals not using generative AI than in those who had access to it.
"I'm afraid it will become a shortcut," explained a father of three who preferred to remain anonymous. "After this MIT study, I want them to use it only to deepen their knowledge."
This caution shapes many parents' approaches. Tal prefers to wait before letting his sons use AI tools. Franklin allows her son to use AI only under her supervision, to find information "we can't find in a book, through Google, or on YouTube."
For her, children must be encouraged to "think for themselves," with or without AI.
But one father -- a computer engineer with a 15-year-old -- doesn't believe kids will learn AI skills from their parents anyway.
"That would be like claiming that kids learn how to use TikTok from their parents," he said. It's usually "the other way around."
Watkins, himself a father, says he is "very concerned" about the new forms that generative AI is taking, but considers it necessary to read about the subject and "have in-depth conversations about it with our children."
"They're going to use artificial intelligence," he said, "so I want them to know the potential benefits and risks."
The CEO of AI chip giant Nvidia, Jensen Huang, often speaks of AI as "the greatest equalization force that we have ever known," democratizing learning and knowledge.
But Watkins fears a different reality: "Parents will view this as a technology that will be used if you can afford it, to get your kid ahead of everyone else."
The computer engineer readily acknowledged this disparity: "My son has an advantage because he has two parents with PhDs in computer science, but that's 90 percent due to the fact that we are more affluent than average" -- not their AI knowledge.
"That does have some pretty big implications," Watkins said.

Related Articles

Turning off AI detection software is the right call for SA universities

Daily Maverick, 2 days ago

Universities across South Africa are abandoning problematic artificial intelligence detection tools that have created a climate of suspicion. The recently announced University of Cape Town decision to disable Turnitin's AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.

The problems with Turnitin's AI detector extend far beyond technical glitches. The software's notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty. Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.

A system built on flawed logic

As Rebecca Davis has pointed out in Daily Maverick, detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin's AI detector doesn't identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text. The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as 'delve into' or 'crucial' – a writing preference that has nothing to do with artificial intelligence.

This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated. One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero. Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.

The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context. The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.

The absurdity exposed

The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations. Even more telling, both systems flagged the professor's own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.

This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor's own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?

Gaming the system

While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning. A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin's terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.

The contrast with Turnitin's similarity detection tool is important. While that feature has its own problems – primarily academics assuming that the percentage similarity is an indicator of the amount of plagiarism – at least it provides transparent, visible comparisons that students can review and make sense of. The AI detection feature, by contrast, operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.

Undermining educational relationships

Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers. Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.

The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements. This violates basic principles of fairness and due process that should govern any academic integrity system.

A path forward

UCT's decision to disable Turnitin's AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions following suit demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.

This doesn't mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge. The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology. Detection should give way to education, suspicion to support and surveillance to guidance.

When we position students as already guilty, we shouldn't be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises. The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.

UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around. As more institutions grapple with AI's impact on higher education, South Africa's approach offers a valuable model; one that chooses trust over surveillance, education over detection and relationships over algorithms. In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty.

The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM

Sioux McKenna is professor of higher education studies at Rhodes University. Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.

Discover how AI-powered analytics are revolutionising retail forex trading in 2025

Mail & Guardian, 3 days ago

The foreign-exchange market has always rewarded traders who can process information faster than the competition. In 2025, that advantage increasingly belongs to those who deploy artificial-intelligence-driven analytics. South African retail traders, once limited to basic charting packages and delayed economic calendars, now enjoy real-time pattern detection, sentiment scoring, and predictive order-flow models piped straight into their trading dashboards. What was once the preserve of hedge-fund quants is being delivered through cloud-first platforms that cost less per month than a Saturday braai with friends.

The traditional technical-analysis toolkit still matters, but AI augments it in ways that speak directly to the needs of a retail audience. Deep-learning models trained on decades of market microstructure spot hidden liquidity pockets long before conventional indicators flash. Natural-language-processing engines scan every Monetary Policy Committee briefing out of Pretoria and every Treasury press release, tagging each sentence with a probability that it will move USD/ZAR by more than 0.5%. These probabilities appear as colour-coded prompts beside a trader's chart, trimming hours from the research cycle and reducing the urge to trade on gut feeling alone.

  • Predictive order-book heatmaps draw on Level-II data from Johannesburg-hosted servers, showing where institutional bots are clustering liquidity.
  • Sentiment-fusion models combine Twitter, YouTube, and local news feeds to score public mood about the rand, gold prices, and the broader BRICS narrative.
  • Reinforcement-learning trade managers adjust stop-loss and take-profit levels dynamically, factoring in S&P 500 futures, Bitcoin momentum, and commodity-index volatility.
  • Voice-activated assistants let traders ask, 'What is the probability of a USD/ZAR rally if Brent crude jumps three per cent?' and receive an instant, data-backed answer.

These tools lower the knowledge barrier for beginners while giving veterans sharper edges for prop-firm assessments.

AI is not merely about finding entries; it transforms risk control, too. Value-at-risk models that once refreshed daily now recalculate every minute, pulling live volatility surfaces from interbank venues. When sudden rand weakness lifts the three-month implied-volatility curve, the smart system can automatically cut leverage on open positions or recommend a counter-correlated hedge in AUD/USD. For South Africans trading from their phones between meetings, this auto-adjustment is a lifeline, preventing small mistakes from compounding into blown accounts.

Moreover, AI exposes hidden correlations that local traders may overlook. In 2024, an unscheduled power-station outage in Mpumalanga coincided with a spike in inland coal prices, which in turn pressured mining-linked equities and dragged the rand lower. An AI engine that maps energy-supply data to currency performance could have pre-emptively reduced exposure hours before the sell-off became obvious on charts.

Looking toward the second half of 2025, three trends are set to intensify. First, edge computing under the POPIA framework will keep more personal data within South Africa's borders while still feeding anonymised trade statistics to global AI clouds, balancing privacy and performance. Second, generative AI agents will start running full back-tests and generating narrative reports that explain their logic in plain English, helping traders satisfy both compliance and client-reporting requirements without manual spreadsheets. Third, broker-agnostic neural APIs will allow savvy coders to plug AI signals directly into MT5 or cTrader, opening the door to semi-autonomous portfolios that self-rebalance based on volatility clustering instead of fixed time intervals.

For retail traders, the message is clear: mastering the basics of forex remains essential, but harnessing AI analytics is quickly becoming the decisive differentiator. Those who embrace the new tools stand to capture moves earlier, size positions more intelligently, and protect capital during South Africa's trademark bouts of volatility. Traders who ignore the shift risk being outpaced by algorithms that never sleep. In a market where milliseconds matter and global flows can spin the rand from calm to chaos before dawn breaks over Table Mountain, AI-powered analytics offer a competitive edge that is both accessible and increasingly indispensable. Whether you are a newcomer placing your first micro-lot or an experienced day trader aiming for consistent withdrawals in 2025, integrating intelligent data feeds into your strategy could be the upgrade that turns potential into real, measurable performance.

Why UCT has stopped policing AI-generated student work

The South African, 3 days ago

In the global education debate on artificial intelligence (AI), the University of Cape Town (UCT) has officially adopted a university-wide framework. The roadmap – called the UCT AI in Education Framework – sets out how the technology should be integrated into teaching, learning and assessment. Central to the new policy is the decision to stop using AI detection tools, such as Turnitin's AI Score, from 1 October, citing concerns over their accuracy and fairness.

According to Sukaina Walji, director of UCT's Centre for Innovation in Learning and Teaching, the framework is the result of months of consultation and development. 'AI detectors are simply not reliable… there are no magic solutions,' said Walji, while the university noted that these tools risk undermining student trust and fairness. 'We're focusing instead on assessing the process of learning, not just the product, and developing assessment strategies that are AI-resilient.' These include oral exams, collaborative projects, and reflective assignments where students must disclose how they've used AI.

The framework also includes practical support – such as training, workshops, online guides, and a six-week short course for educators – to help staff and students navigate AI responsibly.

According to education professor Jonathan Jansen, South African universities are slow to adapt, and what UCT is doing now should have been done five years ago. AI analyst Arthur Goldstuck echoed that view, while welcoming the move away from 'flawed' detection software. 'Many students will get AI to write the entire paper,' added Goldstuck. 'Detection tools should only be used as a basic screening mechanism; they're too unreliable to be conclusive. The real danger is penalising students who've done nothing wrong.'
