
Latest news: #ChatGPT-generated

Spotting text written by ChatGPT is still a struggle

Straits Times

31-07-2025


Some signs may serve as tells, but none can definitively confirm that AI was used in a piece of writing.

People now routinely use chatbots to write computer code, summarise articles and books, or solicit advice. But these chatbots are also employed to generate text from scratch, with some users passing the words off as their own. This has, not surprisingly, created headaches for teachers tasked with evaluating their students' written work. It has also created issues for people seeking advice on forums like Reddit, or consulting product reviews before making a purchase.

Over the past few years, researchers have been exploring whether it is even possible to distinguish human writing from artificial intelligence-generated text. But the best strategies for telling the two apart may come from the chatbots themselves.

Too good to be human?

Several recent studies have highlighted just how difficult it is to determine whether text was generated by a human or a chatbot. Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and AI-generated stories, news articles and recipes. Language experts fare no better. In a 2023 study, editorial board members of top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94 per cent of undergraduate exams written by ChatGPT went undetected by graders at a British university. Clearly, humans are not very good at this.
A commonly held belief is that rare or unusual words can serve as tells regarding authorship, just as a poker player might somehow give away that they hold a winning hand. Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as 'delves' or 'crucial', in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. Whether this practice reflects wrongdoing is up for debate.

In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants' rate of AI detection was only marginally better than chance. Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it is because this form of punctuation is primarily employed by experienced writers.
In other words, people may believe that writing that is 'too good' must be artificially generated. But if people cannot intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.

Stylometry to the rescue?

Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors. I'm a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.

One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows' Delta, a computerised technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts. It may seem counterintuitive to think that someone's use of words like 'the', 'and' or 'to' can determine authorship, but the technique has been impressively effective. Burrows' Delta, for example, was used to establish that Ruth Plumly Thompson, the successor of L. Frank Baum, was the author of a disputed book in the Wizard Of Oz series. It was also used to determine that love letters attributed to Confederate General George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.

A major drawback of Burrows' Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, would not provide enough input for a statistical technique to work its attribution magic.

More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text.
The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: the best ones are between 80 per cent and 98 per cent accurate. However, these machine-learning models are 'black boxes' – that is, we do not really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it is not clear whether the models are detecting specific, reliable signals that humans can look for on their own.

A moving target

Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways. Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed 'amazing' or 'fantastic'. OpenAI addressed the issue by rolling back some changes it had made. Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.

At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: 'How can I tell if some prose was generated by ChatGPT? Does it have any tells, such as characteristic word choice or punctuation?' The bot admitted that distinguishing human from non-human prose 'can be tricky'. Nevertheless, it did provide me with a 10-item list, replete with examples. These included the use of hedges – words like 'often' and 'generally' – as well as redundancy, an overreliance on lists and a 'polished, neutral tone'. It also mentioned 'predictable vocabulary', which included certain adjectives such as 'significant' and 'notable', along with academic terms like 'implication' and 'complexity'. However, though it noted that these features of chatbot-generated text are common, it concluded that 'none are definitive on their own'.

Chatbots are known to hallucinate, or make factual errors.
But when it comes to talking about themselves, they appear to be surprisingly perceptive.

Roger J. Kreuz is associate dean and professor of psychology at the University of Memphis. This article first appeared in The Conversation.
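The Burrows' Delta technique described in the article is simple enough to sketch concretely. The snippet below is a minimal toy illustration, not the exact formulation used in the attribution studies mentioned, and the function names are hypothetical: it z-scores each common word's relative frequency against a small reference corpus and averages the absolute z-score differences between two texts.

```python
from collections import Counter
import statistics

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

def burrows_delta(corpus, text_a, text_b, top_n=30):
    """Toy Burrows' Delta: mean absolute difference of z-scored
    common-word frequencies between two texts."""
    # Vocabulary: the most frequent words across the reference corpus
    all_words = " ".join(corpus).lower().split()
    vocab = [w for w, _ in Counter(all_words).most_common(top_n)]
    # Per-word mean and standard deviation across the corpus texts
    profiles = [rel_freqs(t, vocab) for t in corpus]
    means = [statistics.mean(col) for col in zip(*profiles)]
    stds = [statistics.pstdev(col) or 1e-9 for col in zip(*profiles)]  # guard zero spread
    z_a = [(f - m) / s for f, m, s in zip(rel_freqs(text_a, vocab), means, stds)]
    z_b = [(f - m) / s for f, m, s in zip(rel_freqs(text_b, vocab), means, stds)]
    # Smaller Delta means more similar style
    return sum(abs(a - b) for a, b in zip(z_a, z_b)) / len(vocab)
```

Smaller Delta values indicate more similar styles. As the article notes, reliable attribution needs on the order of 1,000 words per author, far more than a toy example like this can demonstrate.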

I asked ChatGPT's study mode if I should buy a car. The questions it asked me back convinced me to stay car-free.

Business Insider

30-07-2025


For months, I've been paralyzed with indecision every time I walk past a car dealership. It all started earlier this summer when my partner bought his first car — a five-year-old black Ford Fusion — ahead of a move to Indiana for his Ph.D. program. Since then, we've both been using the car we named Raven, and I've become used to picking up groceries with ease and leaving home 10 minutes before a dinner party starts. But with him leaving in three weeks and taking Raven along for the ride, I've become overwhelmed with confusion every time I ask myself: Should I buy a car?

So I asked ChatGPT. And not just any ChatGPT: I chose study mode, a new version that has freshly appeared in the toolbar as a book icon. OpenAI says it "helps you work through problems step by step instead of just getting an answer." It can also quiz its users and prompt them to explain their reasoning. I did not have high hopes, having witnessed many disastrous ChatGPT-generated essays that friends in academia have had to grade. Still, the study mode asked me enough well-rounded questions to help me make the unexpected decision to remain car-free.

What I knew before turning to ChatGPT

I live in a co-housing community with a garage to store and charge cars. I also happen to live downtown in a Bay Area city, two blocks away from a Chinatown, where I can find just about any food I need. Still, having a car would mean not having to frequently turn to Instacart because shopping often overwhelms me. It would mean being able to access our regional park full of redwoods, where there is no cell signal and no chance to Uber back home. I prefer EVs because they emit less, and I like the peace of mind of knowing that the price of oil, which fluctuates with geopolitics, won't affect my budget as much. Most of my friends own cars and started sending me their hot takes. I started watching Instagram reels on car recommendations, until they had fully taken over my "For You" page.
Based on those criteria, I found options like a used Nissan Leaf and a pre-owned Tesla. A used 2017 Nissan Leaf would only have between 50 and 80 miles of range per charge, but it could cost as little as $7,000, not including taxes. Teslas are a more expensive option, but they do have decent range and technologies that compensate for my lack of skills, plus many people are looking to sell.

In my head, I felt like I spent plenty on rides and deliveries to match the convenience a car could provide. Fortunately, ChatGPT study mode explained to me that not only was I wrong, but there was so much more to consider.

Study mode asked me surprising questions and taught me new concepts

Study mode started by asking me what level of studies I'm at, to which I explained that I have already gone through grad school, but have other pressing life problems. I then gave a general description of my circumstances and asked whether I should buy a car. The AI commended me for making a "thoughtful, not impulsive" decision and explained the concept of being "car poor," meaning buying a fancier car than necessary or having monthly payments cut into other life expenses.

Since I mentioned grocery deliveries and Uber trips, study mode then prompted me to think about how many times I use these services a week. I get grocery deliveries about three times a month, plus about two Uber trips and two takeout deliveries a week. Study mode soon crunched the numbers for me. The conveniences I see as indulgences actually cost me around $3,000 a year, but a car would cost me between $6,000 and $8,000 a year, not including the down payment.

"Are the extra ~$3,000 to $4,000 per year worth the added freedom and independence? Do you feel anxious or limited without a car?" the AI asked. In bullet points, it asked me about factors I hadn't really thought about before, such as whether I like to go out often, if I enjoy driving, and if I have family who lives far away.
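Study mode's back-of-the-envelope math is easy to reproduce. The per-use prices below are my own illustrative assumptions, not figures from the chat; the annual car-ownership range is the one the AI quoted.

```python
# Rough annual cost of staying car-free, using assumed per-use prices.
# The $30/$12/$8 figures are illustrative guesses, not from the chat.
GROCERY_DELIVERY = 30   # per order, assumed
UBER_TRIP = 12          # per trip, assumed
TAKEOUT_DELIVERY = 8    # per delivery, assumed

car_free_annual = (
    3 * 12 * GROCERY_DELIVERY     # ~3 grocery deliveries a month
    + 2 * 52 * UBER_TRIP          # ~2 Uber trips a week
    + 2 * 52 * TAKEOUT_DELIVERY   # ~2 takeout deliveries a week
)

# The AI's estimate for owning a car, excluding the down payment
car_annual_low, car_annual_high = 6000, 8000

print(car_free_annual)                   # 3160, near the ~$3,000 figure
print(car_annual_low - car_free_annual)  # 2840 extra per year at the low end
```

At these assumed prices, the deliveries and rides land near the ~$3,000 figure, and even the low end of car ownership costs roughly $3,000 more per year.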
It also asked me if I wanted to see a cost breakdown of whether it would be worth it to live further from downtown to lower housing costs, but own a car instead. As an introvert with no family in this country who mostly spends her weekends with her cat and her next craft project, an answer was beginning to emerge.

But I pushed study mode further by asking about the benefits of an EV and whether it would actually save me money. The AI gave me a cost breakdown that compared a Chevy Bolt to a fully gas-powered Toyota Corolla, and while the former obviously emits less, it gives me only a marginal amount of savings. Based on my own investigation, the cost difference between the two cars appears accurate, but the AI vastly undercounts the cost of insuring both types of vehicles, by more than 50%: a quote on Geico for a 2022 Chevy Bolt and a Toyota Corolla of the same year would both amount to more than $450 a month in insurance.

As alternatives to an EV, the AI asked me to consider whether carpooling with friends and renting cars when I truly need them would be better options for my wallet and for the environment. It also said buying a three-year-old car is optimal because that is when depreciation slows down and before maintenance costs start to rise. Study mode said the bottom line was that if I really, really still wanted a car, it would suggest a 2022 Chevy Bolt.

Still, I think I'll pass, because to answer one of the AI's previous questions: no, I don't actually enjoy driving. I will save myself the fear of freeways and the panic of not being able to parallel park, and continue my car-free life. My next question to ChatGPT will be what kind of bike I should get.

Nick Cave admits his view of artificial intelligence as an artistic tool has begun to shift

Yahoo

29-07-2025


Nick Cave has admitted his view of artificial intelligence as an artistic tool has begun to shift. The 66-year-old singer-songwriter and frontman of The Bad Seeds has long been a vocal critic of artificial intelligence. He previously described the technology as 'unbelievably disturbing' and warned of a 'humiliating effect' on the creative industries. But he confessed his view has changed after seeing a new AI-powered video made to mark the 40th anniversary of his song Tupelo.

Writing on The Red Hand Files platform, he said: 'As I watched Andrew's surreal little film, I felt my view of AI as an artistic device soften. To some extent, my mind was changed.'

The video was created by 56-year-old filmmaker Andrew Dominik, who used AI to animate still archival images. Tupelo is a 1985 single Cave has performed 'at nearly every Bad Seeds concert since it was first written'. The track depicts the mythic birth of Elvis Presley during a storm in Tupelo, Mississippi. Cave added on The Red Hand Files that he was initially sceptical when he learned AI had been used to create the video. Dominik has previously collaborated with Cave on the 2016 documentary One More Time With Feeling and the 2022 film This Much I Know to Be True.

Cave added in his online post that he found the video 'an extraordinarily profound interpretation of the song – a soulful, moving, and entirely original retelling of Tupelo, rich in mythos and a touching tribute to the great Elvis Presley, as well as to the song itself'. He also said the AI-animated images of Elvis 'had an uncanny quality, as if he had been raised from the dead, and the crucifixion-resurrection images at the end were both shocking and deeply affecting'.

In January 2023, Cave publicly condemned ChatGPT, telling fans on his blog The Red Hand Files that it should 'f*** off and leave songwriting alone'.
He added at the time: 'I feel sad about it, disappointed that there are smart people out there that actually think the artistic act is so mundane that it can be replicated by a machine.' He also received multiple fan submissions of ChatGPT-generated lyrics written 'in the style of Nick Cave', which he viewed as 'bulls***' and 'a grotesque mockery of what it is to be human'.


People in the UK now have to take an age verification selfie to watch porn online

Engadget

26-07-2025


As of Friday, anyone trying to watch porn online in the UK will need to subject themselves to an awkward selfie or get their photo ID ready. The UK government announced it will start checking compliance with its Online Safety Act, which requires any website that publishes or displays pornographic content to implement a system for "highly effective age checks." Ofcom, the UK's media regulator, has outlined a few methods sites can use for more extensive age verification, including credit card checks, photo ID matching and even estimating a user's age from a submitted selfie.

Several popular porn sites, including the country's most-visited site PornHub, have agreed to comply with the new regulations. Beyond porn providers, social media sites like Reddit and Bluesky have already implemented age verification systems that comply with the Online Safety Act's rules.

Savvy internet users are already circumventing the age checks by using a VPN, providing a fake ChatGPT-generated photo ID, or taking a high-quality selfie of video game characters. However, the UK's new law also makes it illegal for websites to promote VPNs as a way around age verification. To enforce the Online Safety Act, Ofcom can fine companies caught breaking the rules up to £18 million or 10 percent of their global revenue.
