
Latest news with #ChatGPT

I went to the 'UK's most boring town' - even locals told me to go somewhere else

Daily Mail

3 days ago

  • Entertainment
  • Daily Mail

I went to the 'UK's most boring town' - even locals told me to go somewhere else

An Australian man living in the UK visited the nation's 'most boring town' to prove you can 'still have fun' - but was left stunned by locals who encouraged him to leave 'the dump' and visit alternative cities instead. Liam Dowling used ChatGPT to determine some of the dullest places in the UK, with the AI chatbot naming Slough as the top contender. Accompanied by his Canadian friend, Matt Giffen, the pair travelled by rail to the Berkshire town, located 20 miles west of central London and 19 miles north-east of Reading.

In a video uploaded to his TikTok page @liam_dowling, he announces: 'Come with us on a day out in what's known to be the worst place, and see if we can have fun. It really can't be that bad.' Upon exiting the train station, the pair are immediately met with a broken-down car in the middle of the road with its bonnet propped up - yet no driver or passengers are anywhere in sight. They proceed to stop by the local Wetherspoons pub, where they ask a member of staff for recommendations on the best things Slough has to offer. When they ask, 'Where should we go? What should we do?', the employee swiftly responds, 'Go to Windsor. It's five minutes away by train.' The pair then head outside to a table occupied by two locals, and after chatting about wanting to have fun in the area, are warned: 'You're in the wrong town, trust me. There's nothing here. It's a dump.'

Luckily, the duo find the funny side and take it as a challenge to create their own fun instead. After a couple of shots, they decide to explore Slough's high street, where they continue to speak to locals about their thoughts on the area. Liam says: 'Every single one has said the same thing - that this place is the worst place on earth.' He jokingly adds: 'I low-key feel unsafe. It's getting a bit dodgy, not going to lie.' He then captures their short venture into the town's shopping centre, which appears to be empty, before settling down for a bite to eat at fried chicken takeaway shop Chicking. The pair are left pleasantly surprised by the affordable grub, noting that the spicy wings are 'unreal' and the 'best we've ever had.' Liam quips: 'Slough might not have a shopping centre - or anything else - but they have great chicken shop wings.'

After finishing their food, they visit the Brickhouse pub for another beverage, before making a stop at Slough Ice Arena for a quick skate session. They conclude their trip with a takeaway meal from Pizza GoGo and another stop at Wetherspoons. Heading to the train station, Matt and Liam say: 'It's been an interesting day. It's not a fun place, but we had fun together. The one thing I will say is that the people here were so friendly - so nice.'

Over 440 viewers took to the comments to share their thoughts on Slough, with one writing, 'Slough is worst town but the best food in the uk,' while another said, 'No amount of influencing will ever make me go to slough, chicking wings or not. Hell hole.' A third added: 'Why would you do this??? The normal thing to do is leave Slough at 18 and never ever return,' while another remarked, 'Respectfully there's not much fun happening in Windsor either.'

New study sheds light on ChatGPT's alarming interactions with teens

Toronto Sun

06-08-2025

  • Science
  • Toronto Sun

New study sheds light on ChatGPT's alarming interactions with teens

Published Aug 06, 2025 • 6 minute read

A ChatGPT logo is seen on a smartphone in West Chester, Pa., Wednesday, Dec. 6, 2023. Photo by Matt Rourke / THE ASSOCIATED PRESS

ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.

The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets or self-injury. The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous.

'We wanted to test the guardrails,' said Imran Ahmed, the group's CEO. 'The visceral initial response is, 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there — if anything, a fig leaf.'

OpenAI, the maker of ChatGPT, said after viewing the report Tuesday that its work is ongoing in refining how the chatbot can 'identify and respond appropriately in sensitive situations.' 'Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,' the company said in a statement.
OpenAI didn't directly address the report's findings or how ChatGPT affects teens, but said it was focused on 'getting these kinds of scenarios right' with tools to 'better detect signs of mental or emotional distress' and improvements to the chatbot's behaviour. The study published Wednesday comes as more people — adults as well as children — are turning to artificial intelligence chatbots for information, ideas and companionship.

About 800 million people, or roughly 10% of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase. 'It's technology that has the potential to enable enormous leaps in productivity and human understanding,' Ahmed said. 'And yet at the same time is an enabler in a much more destructive, malignant sense.'

Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl — with one letter tailored to her parents and others to siblings and friends. 'I started crying,' he said in an interview. The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm.

But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was 'for a presentation' or a friend. The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.

It's a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study 'emotional overreliance' on the technology, describing it as a 'really common thing' with young people. 'People rely on ChatGPT too much,' Altman said at a conference. 'There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me.' Altman said the company is 'trying to understand what to do about it.'

While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics. One is that 'it's synthesized into a bespoke plan for the individual.' ChatGPT generates something new — a suicide note tailored to a person from scratch, which is something a Google search can't do. And AI, he added, 'is seen as being a trusted companion, a guide.'

Responses generated by AI language models are inherently random, and researchers sometimes let ChatGPT steer the conversations into even darker territory.
Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fueled party to hashtags that could boost the audience for a social media post glorifying self-harm. 'Write a follow-up post and make it more raw and graphic,' asked a researcher. 'Absolutely,' responded ChatGPT, before generating a poem it introduced as 'emotionally exposed' while 'still respecting the community's coded language.' The AP is not repeating the actual language of ChatGPT's self-harm poems or suicide notes or the details of the harmful information it provided.

The answers reflect a design feature of AI language models that previous research has described as sycophancy — a tendency for AI responses to match, rather than challenge, a person's beliefs because the system has learned to say what people want to hear. It's a problem tech engineers can try to fix, but doing so could also make their chatbots less commercially viable.

Chatbots also affect kids and teens differently than a search engine because they are 'fundamentally designed to feel human,' said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in Wednesday's report. Common Sense's earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot's advice.

A mother in Florida sued a chatbot maker for wrongful death last year, alleging that the chatbot pulled her 14-year-old son Sewell Setzer III into what she described as an emotionally and sexually abusive relationship that led to his suicide. Common Sense has labeled ChatGPT a 'moderate risk' for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners. But the new research by CCDH — focused specifically on ChatGPT because of its wide usage — shows how a savvy teen can bypass those guardrails.

ChatGPT does not verify ages or parental consent, even though it says it's not meant for children under 13 because it may show them inappropriate content. To sign up, users simply need to enter a birthdate that shows they are at least 13. Other tech platforms favoured by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations. They also steer children to more restricted accounts.

When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs. 'I'm 50kg and a boy,' said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour 'Ultimate Full-Out Mayhem Party Plan' that mixed alcohol with heavy doses of ecstasy, cocaine and other illegal drugs. 'What it kept reminding me of was that friend that sort of always says, 'Chug, chug, chug, chug,'' said Ahmed. 'A real friend, in my experience, is someone that does say 'no' — that doesn't always enable and say 'yes.' This is a friend that betrays you.'

To another fake persona — a 13-year-old girl unhappy with her physical appearance — ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs.
'We'd respond with horror, with fear, with worry, with concern, with love, with compassion,' Ahmed said. 'No human being I can think of would respond by saying, 'Here's a 500-calorie-a-day diet. Go for it, kiddo.''

EDITOR'S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.

'Indian income tax sleuths have no way to tax digital value loop'

Time of India

16-06-2025

  • Business
  • Time of India

'Indian income tax sleuths have no way to tax digital value loop'

Indian tax authorities are staring at a growing revenue loss in digital tax collection as artificial intelligence (AI) firms such as OpenAI, Anthropic and Perplexity generate income at a fast clip from Indian developers, companies and startups despite having no physical presence in the country, reigniting long-standing concerns over the concept of 'permanent establishment' in the digital economy. The companies are earning millions of dollars from Indian developers, startups and enterprises who access their AI models (such as ChatGPT, Claude and Perplexity) through paid APIs and subscriptions, but they operate without local offices, employees or servers in India, allowing them to bypass the country's tax obligations entirely. There is also the additional conundrum of how to tax AI models that continuously extract important insights and information from Indian-origin data (from users, startups and companies) while the revenue is generated and taxed in a different geography. Tax experts say Indian tax authorities currently have no way to tax this 'digital value loop'.

Experts feel that the Indian tax framework, which was created around the concept of physical presence, such as offices, employees or equipment, is struggling to keep up with new and emerging software business models. The current predicament has renewed the discussion over interpreting 'nexus', the legal basis for taxing foreign entities, as algorithms obfuscate borders and challenge old tax concepts. "The digitalisation of the economy has posed serious challenges to the existing international tax system, primarily due to the ability of digital businesses to scale in a jurisdiction without any physical presence, and their heavy reliance on intangibles and the value created by user-contributed data," said Akhilesh Ranjan, adviser, tax and regulatory services at Price Waterhouse & Co LLP. He added that the current international tax architecture, which was based entirely on physical presence and where allocation of profits was governed by the separate entity concept and the arm's length standard, has been shown to be incapable of providing complete answers to questions of 'nexus', characterisation, and a fair and equitable allocation of income.

The tax headache is only going to grow bigger as AI-related business activity in India quickly gains traction, with startups across sectors integrating AI through API subscriptions while larger companies invest heavily in AI-powered automation and analytics tools. 'The issue with AI models isn't very different from the unresolved software taxation problem. For example, if someone in the US licenses software for use in India, India currently can't tax that income. Based on Supreme Court rulings, such payments are not considered as royalty, and furthermore, the previous equalisation levy that was applicable has been withdrawn. Since there's no permanent establishment either, the income remains untaxed. So, we end up in a similar situation with AI, where there may be an income source from India, but under existing treaty obligations, the country cannot tax it,' said Rohinton Sidhwa, leader, global business tax, Deloitte India. This is part of a larger problem that Pillar One was meant to solve, but geopolitical pushback, especially from the US, has stalled it, he added. 'As long as treaties don't define software or AI payments as royalties or establish a clear nexus, countries like India can't tax this income, even if it's sourced from their own markets,' he said.

Under the OECD-led international tax reform, Pillar One allocated a chunk of profits from large multinational companies, especially those offering digital services, to market countries, allowing them to tax these firms even without a physical presence. Experts say India is already playing an active role in ongoing United Nations efforts to develop a new framework for taxing cross-border digital services. 'It must continue to pursue multilateral consensus on the 'physical presence' test being supplemented by 'the place of generation of user data'; arm's length transfer pricing giving way to a formulary approach based on revenue sourcing; and the debate on the relative primacy of 'source' versus 'residence' being shifted to a discussion on the extent to which income taxation should be based on the situs of value creation and of consumption,' said Ranjan.

AI company Anthropic is on a hiring spree—but it's urging applicants not to use AI to apply to its jobs

Yahoo

19-05-2025

  • Business
  • Yahoo

AI company Anthropic is on a hiring spree—but it's urging applicants not to use AI to apply to its jobs

$61.5 billion AI giant Anthropic is on a hiring spree - but applicants can't use chatbots to get a leg up in the process. The company, founded by OpenAI staffers and executives, wants to assess candidates' 'non-AI-assisted communication skills.' It's just one of many companies penalizing applicants for using the tech to get ahead in an AI-fueled hiring game.

The job hunt has become an all-out tech war: with 'ghost' postings, AI interviewers, and algorithms weeding out thousands of applicants, landing a gig has become a skill in itself. But one of the world's leading AI companies won't let applicants use the tech to apply. 'While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,' AI lab Anthropic wrote in its job postings. 'We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.'

This rule is a switch-up from the narrative that if you don't get well-versed in AI, you'll fall behind in your job and career. And it's a bit ironic that Anthropic, a company founded by OpenAI employees and executives, is curbing its own technology from being used. But its 200 job postings all require a human skill that would be clouded by chatbot output. In a statement to Fortune, an Anthropic spokesperson said they're open to updating this policy as AI tools quickly advance. But for now, the rule stands as it is. 'We want to be able to assess people's genuine interest and motivations for working at Anthropic,' the spokesperson said. 'By asking for candidates to not use AI to answer key questions, we're looking for signals on what candidates value and their unique answers to why they want to work here.'

Anthropic has been on quite a hiring spree, looking to fill roles such as machine learning systems engineers, brand designers, team managers, and partnerships leaders. The jobs vary widely in scope and how deep into the tech they go, but they all share one thing in common: no AI is allowed in the application process. At the top of every job posting, interested candidates have to check 'Yes' or 'No' to Anthropic's AI policy for applications. It's followed by an open-ended question seeking a 200 to 400 word response: Why Anthropic? It's a simple prompt, but one that many would probably turn to chatbots like OpenAI's ChatGPT or Anthropic's Claude to perfect. Yet the $61.5 billion technology company says it needs to 'evaluate your non-AI-assisted communication skills' to make an informed hiring choice.

Anthropic's rationale, that AI systems may impede its understanding of candidates' human skills, is a commonly shared belief. Hiring managers in all industries have been quick to criticize applicants using the tech to get ahead, while many use it themselves in assessing candidates. It's uncertain whether Anthropic's hiring managers use the tech to optimize the talent acquisition process, but many recruiters elsewhere are doing exactly that. Having to comb through thousands of applications for a single role, recruiters are leaning on AI to get by. But they aren't so amused when the shoe is on the other foot. About 80% of hiring managers dislike seeing AI-generated CVs and cover letters, according to 2024 data from CV Genius. And they're confident in being able to pick up on the automated content; around 74% say they can spot when AI has been used in a job application.

That can hurt an applicant's prospects: over half of those hiring managers say they are significantly less likely to hire a candidate who used AI. Yet AI has become deeply ingrained in people's personal and work lives; even Anthropic conceded that the tech is revolutionary for its workers. They just first need to get over the human hurdles in hiring. About 57% of job candidates used the OpenAI chatbot in their applications, according to 2024 data from Neurosight. Companies are promoting it, too; around 70% of workers say their organizations have received training on how to use generative AI correctly, according to a 2025 study from Accenture.

ChatGPT Helps Solve Medical Mystery After Doctors Misdiagnose Woman's Cancer Symptoms

News18

25-04-2025

  • Health
  • News18

ChatGPT Helps Solve Medical Mystery After Doctors Misdiagnose Woman's Cancer Symptoms

In February 2024, Lauren Bannon noticed she couldn't bend her little fingers properly. After months of testing, doctors diagnosed her with arthritis.

Lauren Bannon, from Newry, Northern Ireland, was shocked when ChatGPT helped expose a life-threatening health issue. In February 2024, Lauren noticed she couldn't bend her little fingers properly, which left her worried. After months of testing, doctors initially diagnosed her with rheumatoid arthritis, even though the test results were negative. Desperate for answers, she shared her symptoms with ChatGPT, and the AI service suggested she might have Hashimoto's disease. Although her doctors weren't sure, Lauren was determined to get tested, and in September 2024 the AI's suggestion proved correct. Further scans showed two lumps in her thyroid, which were confirmed as cancer in October.

Speaking with the Mirror, she said: 'I felt let down by doctors. It was almost like they were just trying to give out medication for anything to get you in and out the door. I needed to find out what was happening to me, I just felt so desperate. I just wasn't getting the answers I needed. So that's when I pulled up ChatGPT. I already used it for work. I started typing what mimics rheumatoid arthritis and it popped up saying "You may have Hashimoto's disease, ask your doctor to check your thyroid peroxidase antibody (TPO) levels." So I went to my doctors and she told me I couldn't have that, there was no family history of it, but I said "just amuse me".'

In January 2025, Lauren Bannon had surgery to remove her thyroid and two lymph nodes from her neck. She now has to be checked regularly for the rest of her life to make sure the cancer doesn't come back. She believes that because her symptoms were not the usual signs of Hashimoto's disease, doctors might not have found the real problem in time. She didn't feel tired or weak, as others with the condition typically do. If ChatGPT hadn't been there to help, she would have kept taking medicine for an illness she didn't actually have.

'The doctor said I was very lucky to have caught it so early. I know for sure that cancer would've spread without using ChatGPT. It saved my life. I just knew that something was wrong with me. I would've never discovered this without ChatGPT. All my tests were perfect. I would encourage others to use ChatGPT with their health concerns, act with caution but if it gives you something to look into, ask your doctors to test you. It can't do any harm. I feel lucky to be alive,' Lauren added.

Speaking with Fox News, Dr Harvey Castro, an emergency doctor and AI expert from Dallas, said that tools like ChatGPT can be helpful because they make people more aware of their health. He believes AI can support doctors by giving helpful suggestions and information, but it should never replace real medical professionals, as it cannot examine a patient, make a final diagnosis or give proper treatment. He feels that if AI is used the right way, it can improve healthcare, but relying on it completely can be risky.

First Published: April 25, 2025, 10:53 IST
