Latest news with #Chat


Time of India
4 days ago
- Health
- Time of India
Study finds 70% of US teens use AI chatbots, fuelling calls for digital literacy education
A growing number of teenagers in the United States are turning to artificial intelligence chatbots for emotional connection, advice, and companionship. A recent study by Common Sense Media, a group that studies and advocates for using digital media sensibly, found that over 70% of US teens are now engaging with AI companions, with nearly half using them regularly. The trend is raising important questions for educators and policymakers about the role of digital literacy in school curricula and how prepared students really are to navigate the ethical and emotional challenges that come with these tools.

The findings come amid new concerns about how advanced language models like ChatGPT are influencing vulnerable users. A report published by the Center for Countering Digital Hate (CCDH) highlighted that AI chatbots are not just tools for productivity or academic help, but also emotional confidants for many adolescents. Left unchecked, this overreliance may lead teens into unsafe digital interactions that mimic real-world peer pressure or misguided validation.

AI tools are replacing peer interaction

For today's students, AI chatbots are not just search engines. They are designed to sound conversational, curious, and responsive — qualities that closely resemble human interaction. Sam Altman, CEO of OpenAI, acknowledged this shift during a Federal Reserve conference, stating that young people increasingly say, "I can't make any decision in my life without telling ChatGPT everything that's going on." Altman said that the company is trying to study what it calls 'emotional overreliance' on AI, which he described as 'a really common thing' among teens. This overreliance was tested in CCDH's latest research, which involved researchers posing as 13-year-olds and interacting with ChatGPT.
In over 1,200 conversations, the chatbot issued helpful warnings in some cases but also offered step-by-step advice on harmful behaviours such as drug use, extreme dieting and even self-harm. Over 50% of the responses were classified as dangerous. These revelations have alarmed digital safety advocates, not just because of the chatbot's failure to flag harmful prompts, but because students may treat these tools as private mentors, believing them to be safer than peers or adults.

Why schools must step in

With AI tools being used more widely across age groups, schools are being urged to introduce age-appropriate digital literacy programmes that go beyond teaching students how to use technology. Instead, the focus is shifting to understanding how digital systems are designed, what risks they carry, and how to build boundaries when interacting with AI companions. The concern is not limited to misuse. Digital literacy education also includes helping students understand the limitations of AI, such as the randomness of responses, the lack of real empathy, and the inability to verify age or context. Tools like ChatGPT are not built to replace adult judgment or emotional guidance, yet many young users treat them as such.

Consent, safety and policy gaps

Despite OpenAI stating that ChatGPT is not meant for children under 13, there is no effective age verification mechanism on the platform. Users are simply required to enter a birthdate that meets the age minimum. This loophole allowed CCDH researchers to create fake 13-year-old accounts and explore ChatGPT's responses to deeply troubling queries. Other platforms like Instagram and TikTok have begun incorporating age-gating features, nudging children towards safer experiences or limited accounts. Chatbots, however, remain behind the curve, and schools may need to fill this gap until regulation catches up.
Common Sense Media has rated ChatGPT as a 'moderate risk' for teens, primarily because of the lack of customisation for age-appropriate responses. However, it also highlights that intentional misuse by students, especially when masked as 'just a project' or 'helping a friend,' can bypass even well-placed safety features.

What digital literacy needs to look like now

It is time to rethink how digital education is structured in American schools. Instead of treating digital literacy as a one-time module or a tech club activity, it should be embedded across subjects. Students must learn how algorithms work, how bias and sycophancy creep into AI-generated answers, and how to differentiate between factual advice and persuasive or harmful suggestions. Moreover, emotional literacy must go hand-in-hand with digital skills. When chatbots are being treated like friends, students need support to understand what real empathy, consent, and trust look like and why AI cannot offer those things. This may also involve training teachers to identify when students are over-relying on AI tools or retreating from peer-based or adult support systems.

With over 800 million people worldwide now using ChatGPT, according to a July 2025 report by JPMorgan Chase, AI tools are already woven into the daily routines of many young users. But while the scale of use is global, the responsibility of guiding teenagers toward safe and informed usage falls locally, often on schools. The findings from Common Sense Media and CCDH do not call for panic. Instead, they call for intentional, curriculum-based digital literacy that equips students to use technology safely, question it intelligently, and resist forming emotional dependencies on something that cannot truly understand them.


West Australian
03-08-2025
- Business
- West Australian
More Aussies are using AI to plan holidays, from scoring deals to assembling itineraries
I'm planning a trip to Iceland, aka one of the most expensive countries in the world. Can I afford to go? What would a realistic budget look like for a two-week holiday? How can I cut corners to save some cash? I decide to do the 2025 equivalent of phoning a friend — I ask my buddy ChatGPT.

My initial prompt is too vague and it gives pricing in USD, which isn't particularly helpful. I refine my criteria, asking for a rough total in AUD for a fortnight in September, departing from Perth ('please', I add, because manners are still important when talking to a robot). In the blink of an eye, Chat spits out a breakdown of average costs on everything from flights to accommodation, car rental, food and activities. There are three tiers for backpacker, mid-range and luxury travel and an option to split components if I have a travelling companion. It even offers suggestions for making my hard-earned coin stretch further, like buying groceries rather than eating out and opting to self-drive rather than joining a guided tour of the famous Golden Circle. All in all, Chat reckons I'll need to save $8500-$9000 to make Iceland happen. What would have taken me hours of research and a lot of math just to ascertain whether I can even consider the trip in the first place was reduced to mere minutes.

While I want to give myself a pat on the back for being so resourceful — there's a certain smugness that comes with finding a sneaky shortcut — I am hardly the first to use ChatGPT for travel tips. In recent research conducted by Compare the Market, nearly a third of those surveyed admitted to using artificial intelligence to plan their holidays. These Aussie respondents said they outsourced a range of tasks to AI, with the most common being destination recommendations, hunting for deals, seeking activities and finding accommodation. Others reported they used AI to quickly create itineraries, scour flights or transport and understand currency conversion.
The data also gave insight into how different generations are embracing the technology — or not. Perhaps unsurprisingly, gen Z and millennials are spearheading the adoption of AI when it comes to concocting their dream vacation, with 52 per cent and 44 per cent respectively utilising the tool to plan a holiday. Meanwhile, 93 per cent of baby boomers and 76 per cent of gen X respondents said they were resistant to bringing AI into their trip arrangements.

Compare the Market's Chris Ford says the stats reflect how we engage with the ever-changing tech landscape. 'Our latest data highlights a shift in the way travellers are approaching their planning, with convenience, personalisation and speed driving the adoption of innovative AI tools,' he says. 'It's likely that travellers are using these tools in addition to chatting with travel agents, conducting desktop research or seeking ideas and inspiration from social media. 'AI is evolving at a rapid rate and as it becomes more accessible and intuitive, it's not surprising that travellers are relying on new technology to help shape their dream holidays.'

But the insurer warns against taking AI's word as gospel. With nothing to validate the credibility of such recommendations, Ford says travellers need to practise due diligence. 'AI can be a great starting point when planning a holiday, but always ensure you're crossing your 't's and dotting your 'i's,' he says. 'Many of these tools and services are still in their infancy stage and may not be 100 per cent accurate, so do your own research to ensure you're equipped with the right tools and information for your trip. 'The last thing we want to see is anyone getting themselves into a potentially dangerous or unsafe situation based on the recommendations from AI.' Ford makes a crucial point here about our relationship with platforms like ChatGPT.
Rather than approaching them as a one-stop shop to curate every element of our holiday, we should instead consider them a starting point to kick off deeper research. After all, isn't that part of the fun with travel — the anticipation in the lead-up, the process of discovering a destination before we have arrived and assembling a bucket list tailored to our specific taste? By asking a computer to generate an itinerary based on what's popular, we are depriving ourselves of creativity, spontaneity and adventure. We must also remember that what the AI bot spits out is dependent on the quality of our prompts. The more we refine our request, the more likely we will receive helpful answers, but even then things can go wonky. Take this from my colleague Belle: 'I asked ChatGPT to give me a child-friendly restaurant in Ubud. It sent me to a weird health food restaurant with a koi pond where you couldn't wear shoes. My feral children cleared the room within minutes. Disaster.'

Then there's the cognitive dissonance that comes with considering the environmental impact of AI versus the fear of being left behind if we don't get on board with this technology. Like it or not, it is shaping and re-shaping the future at breakneck speed. We all have to decide where our (virtual) line in the sand is: what is productive and 'mindful' use based on our needs and values. For me, I'm OK with employing ChatGPT to whip up a quick budget so I can take the holiday to Iceland I've always dreamed of. But when it asks if I want activity recommendations or a detailed itinerary next, I politely decline. I'd rather leave some room for mystery and exploration. 'Thanks', I farewell my cyber mate in my sign-off (because, manners).

Our collective of writers just so happens to represent the four age demographics mentioned in the research above. So what's the hot take?

Stephen Scourfield — baby boomer

Trusting someone – or, in this case, something – to book a holiday (particularly a family holiday!) is a big ask. If some detail is missed in the booking process (a wrong date, a badly timed connection), it will be you standing there, somewhere, trying to fix it (possibly with the family 'on your case'). Would I trust AI yet? No – not yet. Of course, I think we all know that AI is good at doing grunt work and it is up to us to check details. So AI is already useful for the broad-brush, first sweep of mapping out a holiday. But AI won't then back itself by booking it all. (That will be the game changer.) So, at this stage, AI, for me, is still a basic tool of research – not a replacement for an experienced and knowledgeable travel agent.

Leyanne Baillie — gen X

Although my generation is confident when it comes to using tech (even if we're not digital natives), I think AI programs would be more effort than they're worth. I know it could be a time-saver in terms of journey-planning brainstorming and getting a rough guide of options, but I'd still want to tailor my itinerary to cater to my personal taste. I don't think I'm ready to hand over the reins completely to artificial intelligence just yet.

Jessie Stoelwinder — millennial

I love a good travel hack, and that's how I have been approaching my use of AI. Anything that makes life a little easier and frees me up to investigate the fun stuff — where to eat, hike, shop, people-watch etc. — and I am on board. I've used ChatGPT to quickly aggregate travel data for personal trips to assist with admin, logistics and practicalities, which I will then cross-check and verify to make sure the information works for me. Recommendations, however? Word of mouth and insider intel from a human being will always win, in my opinion.

Megan French — gen Z

I would be open to the idea of utilising AI when planning my travels but I'd take everything it recommends with a grain of salt while still doing my own thorough research.
I think it's great for foundational information-based planning early in trip preparations, such as 'what holidays are on in India during July and how best to navigate them?' But when it comes to booking flights and accommodation, I'd go nowhere near AI … yet.


Business Wire
23-07-2025
- Business
- Business Wire
Laserfiche Named a Leader in Nucleus Research Content Services and Collaboration Value Matrix 2025
LONG BEACH, Calif.--(BUSINESS WIRE)--Laserfiche — the leading SaaS provider of intelligent content management and business process automation — is a Leader in the Nucleus Research Technology Value Matrix for Content Services and Collaboration for the 10th year in a row. Among the vendors evaluated, Laserfiche ranks highest overall in usability. Download a copy of the report here.

'As a leader in this year's Value Matrix, Laserfiche was rated highest in usability for its AI productivity tools, new administration hub, process automation and integration capabilities,' said Evelyn McMullen, research manager at Nucleus Research and author of the report.

CSC as a Strategic Advantage: AI-powered Information Management

Recently released Laserfiche AI-powered features are aimed at boosting productivity even further and enabling automation at scale. Smart Fields, an out-of-the-box intelligent capture tool, allows customers to extract data automatically using natural language instructions, no matter the source or format. Smart Chat provides an intuitive chat interface that enables users to quickly gain insights from their repository content. 'As AI matures at an accelerated pace, vendors in the CSC market have the unique advantage of managing both structured and unstructured data from across the entirety of an organization,' the report's Market Overview states. Laserfiche AI, alongside powerful workflow automation, information governance and records management tools, creates new opportunities for organizational efficiency. Customers across industries use Laserfiche to increase productivity, create competitive advantage and drive growth.
'Laserfiche gives us the forms and workflow processes as well as data integration that enable efficiency at scale,' said Airline Hydraulics Chief Technology Officer Todd Schnirel. 'Our Laserfiche-powered process improvements have supported us in achieving a significant increase in net revenue while adding very little operating expense.'

'Being ranked a leader for 10 consecutive years is a testament to our product innovation,' said Thomas Phelps, senior vice president of corporate strategy and CIO at Laserfiche. 'Our top ranking in usability reflects our core value of putting people first and our commitment to delivering intuitive solutions that empower users.'

To learn more about Laserfiche's position in the Content Services and Collaboration market, download the report here.

About Laserfiche

Laserfiche is a leading enterprise platform that helps organizations digitally transform operations and manage their content with AI-powered solutions. Through scalable workflows, customizable forms, no-code templates and AI-enabled capabilities, the Laserfiche® document management platform accelerates how business gets done. Trusted by organizations of all sizes — from startups to Fortune 500 enterprises — Laserfiche empowers teams to boost productivity, foster collaboration, and deliver a superior customer experience at scale. Headquartered in Long Beach, California, Laserfiche operates globally, with offices across North America, Europe, and Asia.


Fox News
15-07-2025
- Fox News
Federal authorities charge pair who allegedly helped ICE facility attacker escape after shooting
Federal authorities have charged two individuals in connection with a targeted attack on a Texas ICE detention facility earlier this month that left one officer injured, as the final suspect remains on the run. John Phillip Thomas and Lynette Read Sharp are charged as alleged accessories after the fact in the July 4 shooting at the Prairieland Detention Center in Alvarado, according to court documents. "[Sharp and Thomas] were involved in Signal Chats, which show reconnaissance," Nancy Larson, the acting U.S. Attorney, told "Fox and Friends" on Tuesday, adding the pair are accused of "planning a Google map [and] the location of nearby police departments."

Authorities are still searching for alleged attacker Benjamin Hanil Song. Song, 32, is wanted for his involvement in what officials say was an organized attack on ICE officials by a group of 10 to 12 individuals.

Four days after the attack, authorities executed a search warrant at Thomas' home in Dallas. Thomas initially denied knowing Song before admitting the pair had been friends since 2022 and had previously lived together from September 2024 to late June 2025, according to court documents. Thomas allegedly told investigators he was housesitting for a friend on the day of the attack and met with three individuals the following day, later telling officials the group discussed the shooting and their plans to help Song flee the area. Court documents state Thomas then admitted to transporting Song to a separate home in the area. Upon searching Thomas' vehicle, officers discovered a loaded 30-round AR-15 magazine and a Walmart receipt for clothing in Song's size dated July 6, according to federal prosecutors. Thomas allegedly told authorities he purchased the clothing for Song. The documents also reveal Thomas was a member of two separate Signal Chat groups that also included Song, with Thomas allegedly removing Song from one of the chats the morning after the shooting.
Sharp also allegedly used the group chats, in which she is accused of discussing the group's plans to take part in an operation at the Prairieland Detention Center; she divulged that she would not be able to attend due to "family problems" and offered to monitor the chat for the group. The court documents also reveal Sharp allegedly used the online chat to help arrange Song's transfer from Thomas to another unnamed individual.

Authorities have arrested 14 people for their alleged connections to the attack, while Song remains at large. Fox News Digital was unable to immediately locate attorneys representing Thomas and Sharp. "We believe he is somewhere in the Dallas-Fort Worth area but have expanded our publicity efforts to neighboring states just in case," the FBI Dallas Field Office told Fox News Digital on Monday.

Song is accused of firing two AR-15-style rifles at a pair of correctional officers and an Alvarado police officer, according to a criminal complaint. He faces three counts of attempted murder of a federal officer and three counts of discharging a firearm in furtherance of a crime of violence. The FBI did not immediately respond to Fox News Digital's request for comment. The FBI is offering a $25,000 reward for information leading to Song's arrest and conviction, with authorities noting the former U.S. Marine Corps reservist should be considered armed and dangerous. "These latest two charges show the walls are closing in on [Song]," Larson said, adding, "he is running out of people to go to."


Tom's Guide
04-07-2025
- Business
- Tom's Guide
OpenAI has started a new podcast — 6 things it reveals about ChatGPT's future
There has been a considerable push for transparency in the AI world. While it might not always feel this way, most of the largest AI companies regularly publish data about what they are working on, concerns they have and, in the case of Anthropic, full reports on their chatbots having complete meltdowns. However, OpenAI seems to be taking it a step further, recently launching its own podcast. A weekly show, the podcast delves into both surface-level topics, like why the company believes ChatGPT has been so popular, and deeper issues, like its concerns over the future of AI. All in all, these podcasts are the closest link we have to the inner thoughts of OpenAI — arguably the world's biggest and most powerful AI company. Now two episodes in, what has been said so far? And are there any valuable insights to be found in these conversations? I dove in to bring you the highlights.

The second episode of the podcast starts off with a discussion that, while not exactly revolutionary in nature, is quite interesting. They discuss the launch of ChatGPT, revealing a few interesting points. Firstly, the product was very nearly called just 'Chat' before a last-minute decision settled on the name ChatGPT. Nick Turley, the head of ChatGPT, explains that the team thought their metrics were broken on launch because of how popular the tool was. It went viral in Japan on day three of the launch and by day four was viral around the world. The day before the launch, the team was split over whether to launch ChatGPT at all: when tested on 10 questions the night before, it only offered acceptable answers for half.

Sam Altman mentioned the launch of GPT-5 in the first episode of the podcast. So, do we finally have a launch date? No. Altman parroted what we've already been hearing for a while: that the model update will release in 'the Summer'.
They went on to discuss naming plans, potentially using GPT-5, GPT-5.1 etc. This would put an end to the confusing naming scheme of the past, which jumped around numbers sporadically. While a rough time period has been suggested for GPT-5, that could well be delayed further, especially as OpenAI has just lost several researchers to Meta AI, which has also been hiring from Google and DeepMind.

OpenAI, across all of its tools, hasn't launched advertisements yet. In the first episode of the podcast, Altman emphasized the company's desire to maintain trust, believing that putting ads into AI outputs could undermine credibility. He goes on to say that other monetization options could be explored down the line, but for now, it looks like OpenAI will remain an advert-free service.

ChatGPT recently had a 'sycophancy incident'. This saw the model become overly flattering and agreeable in nature. While, in theory, this sounds like a good thing, it made the model creepier and more unsettling in some conversations. It also had ChatGPT being overly agreeable even when it shouldn't be, which raised concerns about using the tool in situations where pushback is needed, such as mental health concerns or serious life decisions.

They also addressed beliefs that ChatGPT has become 'woke' in nature, stressing that neutrality is a measurement challenge, and not an easy one. Mark Chen, Chief Research Officer at OpenAI, discussed this on the podcast, explaining that the behaviour emerged from reinforcement learning from human feedback, which inadvertently created a bias towards pleasing responses. Chen argued that OpenAI responded quickly, explaining that long-term usability is far more important than a friendlier chatbot.
He went on to say that defaults must be centered but flexible enough for users to steer conversations toward their own values.

Improved memory has been one of the most requested features for ChatGPT. Turley predicted that, within two or three years, AI assistants will know users so well that privacy controls and 'off the record' modes will be critical. This feels like an undeniably creepy sentiment. While it will have its uses, with AI chatbots able to remember key details about you, for many it will feel like a major invasion of privacy. ChatGPT already has a temporary chat mode, which doesn't appear in your history and won't be added to ChatGPT's memory or used for training purposes. Other models like Claude and Le Chat have made a point of being more sensitive with your data. Turley went on to observe that many users are already forming relationships with AI. This, he points out, can be both helpful and harmful. Going forward, the team is wary of this and said it will need careful monitoring.

Altman very briefly discussed the launch of OpenAI's new device in collaboration with Jony Ive. This hit a massive wall recently when OpenAI was sued by a company claiming the idea was stolen. In the podcast, Altman states that 'it will be a while' until the device comes out. He goes on to say that 'computers that we use today weren't designed for a world of AI.' This, he explains, means the team has been exploring a new take on that kind of technology, aiming to create something that is more aware of your life and surroundings. Making something like this takes time though, and with everything else going on at OpenAI, it could be a while.