
Latest news with #GenerationsZ

Growing number of teens turn to AI for friends, new study shows — here's why experts are alarmed

New York Post

5 days ago



It's not a glitch in the matrix: the youngest members of the iGeneration are turning to chatbot companions for everything from serious advice to simple entertainment. The age range for Generation Z is 13 to 28, while Generation Alpha spans ages 0 to 12.

In the past few years, AI technology has advanced so far that users have gone straight to machine models for just about anything, and Generations Z and Alpha are leading the trend.

Indeed, a May 2025 study by Common Sense Media looked into the social lives of 1,060 teens aged 13 to 17 and found that a startling 52% of adolescents across the country use chatbots at least once a month for social purposes.

Teens who used AI chatbots to exercise social skills said they practiced conversation starters, emotional expression, advice-giving, conflict resolution, romantic interactions and self-advocacy — and almost 40% of these users applied those skills in real conversations later on.

Many AI chatbots have been critiqued for being overly sycophantic toward their flesh-and-blood conversation partners. Younger teens tend to be more trusting of AI companions, while older teens are better educated about the dangers of oversharing with AI.

Despite some potentially beneficial skill developments, the study's authors see the cultivation of antisocial behaviors, exposure to age-inappropriate content and potentially harmful advice given to teens as reason enough to caution against underage use. "No one younger than 18 should use AI companions," the authors wrote in the paper's conclusion.

The real alarm bells began to ring when the data revealed that 33% of users prefer to turn to AI companions over real people for serious conversations, and 34% said a conversation with a chatbot had caused them discomfort, whether from the subject matter or their emotional response.
"Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits," the study authors warned. In fact, 100 or more of the teens surveyed said AI chats were better than IRL connections.

Though AI use is certainly spreading among younger generations — a recent survey showed that 97% of Gen Z has used the technology — the Common Sense Media study found that 80% of teens said they still spend more time with IRL friends than with online chatbots. Rest easy, parents: today's teens do still prioritize human connections, despite popular belief.

However, people of all generations are cautioned against consulting AI for certain purposes. As The Post previously reported, AI chatbots and large language models (LLMs) can be particularly harmful for those seeking therapy and tend to endanger those exhibiting suicidal thoughts.

"AI tools, no matter how sophisticated, rely on pre-programmed responses and large datasets," Niloufar Esmaeilpour, a clinical counselor in Toronto, previously told The Post. "They don't understand the 'why' behind someone's thoughts or behaviors."

Sharing personal medical information with AI chatbots also has drawbacks: the information they regurgitate isn't always accurate, and, perhaps more alarmingly, the chatbots are not HIPAA compliant. Uploading work documents to get a summary can likewise land you in hot water, as intellectual property, confidential data and other company secrets can be extracted and potentially leaked.

The big, beautiful AI disaster coming to a school near you

Yahoo

12-06-2025



Predictably, the Immigration and Customs Enforcement riots in Los Angeles have been viewed through a partisan lens. The left has suggested that the federalizing of the California National Guard is a mere dress rehearsal for a full military dictatorship by President Donald Trump, while the right has pounded the drum of pseudo-law and order with "[i]f they spit, we will hit."

Both of these viewpoints are irresponsible, ill-conceived, and lean into misinformation. Trump is within his lawful and precedented authority to federalize the National Guard given the current circumstances in and around Los Angeles. Spitting on a law enforcement officer is assault. However, Trump's remark about spitting directly leading to hitting overtly encourages a disproportionate use of force. Couple this with the speaker of the House saying that the governor of California "should be tarred and feathered."

Lost among the politics of this moment is something media analysts have been concerned about for quite some time: moments of social upheaval providing fertile ground for the rapid spread and uptake of misinformation via artificial intelligence. AI-generated images and chatbot responses about the Los Angeles situation have further exposed the startling lack of information and AI literacy among the American public.

It is one thing for adults to engage with potential AI-driven misinformation, but consider that the minds of Generations Z and Alpha are developing within this information environment. The pertinent question is this: How will this caustic environment shape their information consumption habits?

The information environment surrounding Los Angeles right now is not an outlier.
It once again reveals the scale of misinformation AI is capable of generating and the ease with which it can inflame public discourse. Yet, amid this crisis, Congress is pushing forward with what has been dubbed the "big, beautiful bill" — a federal budget package that includes a 10-year moratorium on any state-level AI regulation. This moratorium would prevent schools from teaching the vital skill of AI-driven information literacy.

Squirreled away in the 1,000-plus pages of the One Big Beautiful Bill Act is a clause which reads: "…no state or political subdivision may enforce, during the 10-year period beginning on the date of the enactment of this act, any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce."

This clause explicitly forbids states and their political subdivisions (i.e., school boards) from enacting or enforcing any law or policy that "limits, restricts, or otherwise regulates" artificial intelligence systems. This broad language would ban everything from AI plagiarism checkers to basic AI literacy curricula in public education. While proponents claim this is about ensuring regulatory uniformity, the effect is much more pervasive: it locks state governments and schools into inaction at a time of dynamic technological change.

When teaching students to assess the authenticity of information, there is a hard line to walk between healthy skepticism and corrosive cynicism, and AI makes this line ever harder to navigate. Developing minds are already traversing AI-saturated landscapes without the tools they need. Teaching AI literacy in schools is not a partisan demand; it is a civic necessity. Teachers should not demand that students be zealots or Luddites in regard to any topic.
But teachers should encourage them to become friendly critics of digital information and AI — capable of examining the tools they use with both curiosity and caution.

Supporters champion the 10-year moratorium as a win for unfettered innovation. In a purely capitalistic sense, this is true and could lead to widespread economic prosperity. However, it is worth recalling that similar rhetoric surrounded the onset of social media platforms. We now know what happened when those technologies were left unregulated and used as substitutes for the in-person socialization of developing minds: mental health spiraled, polarization deepened, and a generation came of age in an isolating echo chamber. We cannot afford to wait and see what AI will do on a larger, accelerated scale.

The R Street Institute argues that the moratorium is a way to avoid a patchwork of conflicting state laws. What it actually does is prevent local communities from crafting age-appropriate, culturally relevant responses to a fast-moving technological frontier.

This is not about red states or blue states, right or left. If Democrats controlled the legislature, the tech lobby would have courted them the same way. This is about whether any policymaker who claims to care about children or families will acknowledge the responsibility of schools to prepare their students for the rapidly changing world beyond their walls.

The next chapter of American life will be molded by the technologies developing minds use. This is precisely why a 21st-century, technology-focused curriculum needs to be at the forefront of every school board discussion this summer. Students should learn not only how to prompt AI responsibly but how to verify the accuracy of its output. They should study its biases, understand its limitations, and consider its societal impacts. In doing so, they will be less likely to fall for the kinds of digital falsehoods intended to exacerbate moments of social upheaval.
AI and information literacy are not passing educational trends like social-emotional learning or whole language instruction. They are part of a new baseline for civic competence. School districts should be empowered to integrate these skills into their curricula, not prohibited from doing so.

The technology shaping this generation will evolve faster than our ability to legislate solutions to its shortcomings. Education is the way to prepare young minds for a world awash in AI and digital information. Let us hope this answer does not get taken away from states and local school boards, leading students down the road to a big, beautiful AI disaster.

Cellphones in schools are a big problem. New Hampshire has a chance to lead on the solution.

Yahoo

13-02-2025



The legislative process is much like playing an accordion. At times the process is on a long, stretched-out timetable where institutional vigor hits a low note. Other times it moves swiftly, screeching as if compressed in response to an urgent issue. It is in the latter part of this metaphor that the New Hampshire General Court finds itself over the issue of cellphones in public schools. The General Court and Gov. Kelly Ayotte appear poised to swiftly pass legislation regulating student cellphone use in public schools.

Some may dismiss this as a flash-in-the-pan issue, political posturing, or a backdoor attempt to infringe upon freedom of speech. It may even appear to some as an issue that lies outside the purview of government intervention, a matter of self-control. These arguments are flat wrong. This issue is one of the most salient and concerning problems of the 21st century, a problem that has yet to bear the full fruit of its wide-ranging consequences.

At this point in the public debate, the brain-based science is quite clear about the harmful effects cellphones have on young minds. To be clear, current research examines the novel effect that smartphones, colloquially referred to as cellphones, have on developing minds. This technological innovation sits at the heart of the issue.

Social psychologists Jean Twenge and John Haidt are at the forefront of this research. They have extensively documented the devastating effects of smartphone use on adolescent mental health, impulsivity, focus, and social development. The tech environment in which Generations Z and Alpha have been raised has profoundly altered their cognitive and emotional landscapes, leaving them more anxious, distracted, and socially isolated than any previous generation.
Twenge's research further highlights how the rise of social media and smartphone accessibility has correlated with increased rates of depression, anxiety, and self-harm among adolescents. Haidt's work hints at broader civic implications: a generation raised on fleeting digital interactions will struggle with deeper, more meaningful engagement in both personal and political spheres.

Despite these startling conclusions, we have failed Generations Z and Alpha. Educators, parents, politicians, and community members have stood idly by as smartphones have eroded attention spans, exacerbated social toxicity, and reshaped childhood development. To suggest that even moderate smartphone use is harmless is akin to arguing that a few cigarettes are acceptable – an outmoded and insidious mindset.

It is not enough to suggest that students simply need better impulse control. The addictive design of social media apps exploits the psychological vulnerabilities of young minds, making self-regulation nearly impossible. Schools have increasingly become the front line of this issue, where educators struggle to maintain student focus in the face of an all-consuming digital presence. Without intervention, the next generation will face even greater hurdles in focus, intellectual perseverance, and meaningful human connection.

The primary objection of those who oppose a regulatory response is that it will act as an end run around free speech protections. Forcing tech and social media companies to develop products within a "duty of care" framework to mitigate harms is considered a slippery slope; critics argue that it could lead to stifling regulation and violations of creative rights. This argument might have some validity when it comes to the general public's interaction with tech and social media. After all, an adult can choose to consent to using specific pieces of technology. However, that is not what this issue is about.
This issue is about vulnerable minds having unregulated access to a developmentally harmful product within an environment that is supposed to be nurturing their intellectual growth. Just as harmful substances and products carry age and health restrictions, social media and smartphones should be held to the same standards.

The consequences of inaction extend far beyond academic performance. The erosion of deep focus and intellectual perseverance threatens not only individual success but also civic and economic stability. The ability to engage in sustained, reasoned discourse is essential to the health of a republic, yet we are raising a generation that struggles to sustain attention long enough to read a book, let alone deliberate on complex societal issues.

Moreover, the economic implications of a generation unable to sustain focus and persevere through challenges are staggering. Employers across industries already report difficulties in hiring young professionals who can engage in sustained critical thinking, manage complex work, and maintain professional interactions without digital distractions. If we do not address this crisis, we will see long-term damage to workforce productivity and innovation. These generational harms may be inadvertently setting the table for an aggressive incursion of artificial intelligence into the workforce.

This is not an issue that can be left to individual schools, teachers, or even parents alone. It is a collective action problem that demands systemic solutions, and a school-by-school or district-by-district approach is inadequate. A true solution will include efforts to educate parents on digital well-being, a rating and age-gating system for addictive content, increased outdoor playtime for all grades, and policies that hold social media companies accountable for user harms.
If we acknowledge the full scope of this crisis, broader legislative action must follow – addressing digital harm and ensuring future generations develop healthy digital habits. Political leadership is desperately needed at the state and federal levels. It is time for the legislative accordion to compress and bring about vigorous and swift results. The New Hampshire General Court must take a leadership role within the national movement to protect future generations from the harms of unregulated cellphone use in public schools.
