Latest news with #InfiniteFlow


New York Post
7 days ago
Growing number of teens turn to AI for friends, new study shows — here's why experts are alarmed
It's not a glitch in the matrix: the youngest members of the iGeneration are turning to chatbot companions for everything from serious advice to simple entertainment.

The age range for Generation Z is between 13 and 28, while Generation Alpha spans ages 0 to 12. In the past few years, AI technology has advanced so far that users go straight to machine models for just about anything, and Generations Z and Alpha are leading the trend.

Indeed, a May 2025 study by Common Sense Media looked into the social lives of 1,060 teens aged 13 to 17 and found that a startling 52% of adolescents across the country use chatbots at least once a month for social purposes.

Teens who used AI chatbots to exercise social skills said they practiced conversation starters, expressing emotions, giving advice, conflict resolution, romantic interactions and self-advocacy, and almost 40% of these users applied those skills in real conversations later on.

Many AI chatbots have also been critiqued for being overly sycophantic toward their flesh-and-blood conversation partners. According to the study, younger teens tend to be more trusting of AI companions, while older teens are better educated on the dangers of oversharing with AI.

Despite some potentially beneficial skill development, the study authors see the cultivation of antisocial behaviors, exposure to age-inappropriate content and potentially harmful advice given to teens as reason enough to caution against underage use.

'No one younger than 18 should use AI companions,' the study authors wrote in the paper's conclusion.

The real alarm bells began to ring when the data showed that 33% of users prefer to turn to AI companions over real people for serious conversations, and 34% said that a conversation with a chatbot had caused them discomfort, referring to both subject matter and emotional response. At least 100 of the teens surveyed told Common Sense Media that AI chats were better than IRL connections.

'Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits,' the study authors warned.

Though AI use is certainly spreading among younger generations (a recent survey showed that 97% of Gen-Z has used the technology), the Common Sense Media study found that 80% of teens said they still spend more time with IRL friends than with online chatbots. Rest easy, parents: today's teens still prioritize human connections, despite popular belief.

However, people of all generations are cautioned against consulting AI for certain purposes. As The Post previously reported, AI chatbots and large language models (LLMs) can be particularly harmful for those seeking therapy, and they tend to endanger those exhibiting suicidal thoughts.

'AI tools, no matter how sophisticated, rely on pre-programmed responses and large datasets,' Niloufar Esmaeilpour, a clinical counselor in Toronto, previously told The Post. 'They don't understand the 'why' behind someone's thoughts or behaviors.'

Sharing personal medical information with AI chatbots also has drawbacks, as the information they regurgitate isn't always accurate and, perhaps more alarmingly, they are not HIPAA compliant.
Uploading work documents to get a summary can also land you in hot water, as intellectual property agreements, confidential data and other company secrets can be extracted and potentially leaked.


New York Post
07-06-2025
Business
UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court
Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said, warning that attorneys could be prosecuted if they don't check the accuracy of their research.

High Court justice Victoria Sharp said the misuse of AI has 'serious implications for the administration of justice and public confidence in the justice system.'

In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday.

They were asked to rule after lower court judges raised concerns about 'suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked,' leading to false information being put before the court.

In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist.

The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor, Abid Hussain. But Sharp said it was 'extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around.'

In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had 'not provided to the court a coherent explanation for what happened.'

The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.

Sharp said providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison.

She said in the judgment that AI is a 'powerful technology' and a 'useful tool' for the law.

'Artificial intelligence is a tool that carries with it risks as well as opportunities,' the judge said. 'Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.'