
How to spot AI writing — 5 telltale signs to look for
That email you just received, the product review you're reading, or the Reddit post that sounds oddly corporate might all be generated by AI chatbots like ChatGPT, Gemini or Claude.
The writing often appears polished, maybe too polished, hitting every point perfectly while maintaining an unnaturally enthusiastic tone throughout.
While AI detectors promise to catch machine-generated text, they're often unreliable and miss the subtler signs that reveal when algorithms have done the heavy lifting.
You don't need fancy software or expensive tools to spot it. The clues are right there in the writing itself.
There's nothing wrong with using AI to improve your writing. These tools excel at checking grammar, suggesting better word choices, and helping with tone—especially if English isn't your first language.
AI can help you brainstorm ideas, overcome writer's block, or polish rough drafts. The key difference is using AI to enhance your own knowledge and voice rather than having it generate everything from scratch.
The problems arise when people let AI do all the thinking and copy-paste whatever it produces without adding their own insights. That's when you start seeing the telltale signs below.
1. Formulaic openings
AI writing tools consistently rely on the same attention-grabbing formulas. You'll see openings like "Have you ever wondered...", "Are you struggling with..." or "What if I told you..." followed by grand promises.
This happens because AI models learn from countless blog posts and marketing copy that use these exact patterns. Real people mix it up more: they might jump straight into a story, share a fact, or just start talking about the topic without all the setup.
When you spot multiple rhetorical questions bunched together or openings that feel interchangeable across different topics, you're likely reading AI-generated content.
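If you'd rather automate this first pass, a few lines of Python are enough for a rough screen. The sketch below is a minimal illustration rather than a detector: the opener list and the two-question threshold are assumptions based on the examples above, and you'd want to tune both.

```python
import re

# Illustrative list drawn from the openers quoted above; extend with your own examples.
FORMULAIC_OPENERS = (
    "have you ever wondered",
    "are you struggling with",
    "what if i told you",
)

def flag_formulaic_opening(text: str) -> list[str]:
    """Return rough warnings about formulaic openers and bunched rhetorical questions."""
    warnings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    first_sentence = sentences[0].lower() if sentences else ""
    for opener in FORMULAIC_OPENERS:
        if first_sentence.startswith(opener):
            warnings.append(f"Formulaic opener: {opener!r}")
    # Two or more question marks in the first three sentences matches the
    # "rhetorical questions bunched together" pattern described above.
    opening = " ".join(sentences[:3])
    if opening.count("?") >= 2:
        warnings.append("Multiple rhetorical questions bunched together")
    return warnings

print(flag_formulaic_opening(
    "Have you ever wondered why nobody reads your emails? "
    "What if I told you one simple trick could change that?"
))
```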
2. Vague claims with no real sources
You'll see phrases like "many studies show", "experts agree", or "a recent survey found" without citing actual sources.
AI tends to speak in generalities like "a popular app" or "leading industry professionals" instead of naming specific companies or real people. Human writers naturally include concrete details, actual brand names, specific statistics, and references to particular events or experiences they've encountered.
When content lacks these specific, verifiable details, it's usually because AI doesn't have access to real, current information or personal experience.
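A similar phrase scan works for vague sourcing. This minimal sketch (the phrase list is an assumption built from the examples above) just surfaces suspect attributions so a human can check whether a real citation follows:

```python
VAGUE_ATTRIBUTIONS = (
    "many studies show",
    "experts agree",
    "a recent survey found",
    "leading industry professionals",
)

def find_vague_attributions(text: str) -> list[str]:
    """List each vague-sourcing phrase present; a human still judges the context."""
    lowered = text.lower()
    return [phrase for phrase in VAGUE_ATTRIBUTIONS if phrase in lowered]

print(find_vague_attributions(
    "Many studies show this works, and experts agree it's the future."
))
```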
3. Impressive-sounding but empty jargon
AI writing often sounds impressive at first glance but becomes hollow when you examine it closely.
You'll find excessive use of business jargon like "game-changing", "cutting-edge", "revolutionary", and "innovative" scattered throughout without explaining what these terms actually mean.
The writing might use sophisticated vocabulary but fail to communicate ideas clearly. A human expert will tell you exactly why one method works better than another, or admit when something is kind of a pain to use.
If the content feels like it was written to impress rather than inform, AI likely played a major role.
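Jargon density can be roughed out the same way. Here's a hedged sketch that scores buzzwords per 100 words; the word list and whatever cutoff you pick are assumptions, and a high score is a prompt to read closely, not a verdict:

```python
BUZZWORDS = {"game-changing", "cutting-edge", "revolutionary", "innovative"}

def buzzword_density(text: str) -> float:
    """Buzzwords per 100 words, as a rough prompt to read more closely."""
    words = [w.strip('.,;:!?"()').lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BUZZWORDS)
    return 100 * hits / len(words)

print(buzzword_density("This revolutionary, game-changing platform is truly innovative."))
```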
4. A relentlessly even, upbeat tone
AI writing maintains an unnaturally consistent, enthusiastic tone throughout entire pieces.
Every sentence flows smoothly into the next, problems are always simple to solve, and there's rarely any acknowledgment that things can be complicated or frustrating.
Real people get frustrated, go off on tangents, and have strong opinions. Human writing naturally varies in tone: sometimes confident, sometimes uncertain, occasionally annoyed or conversational.
When content sounds relentlessly positive and avoids any controversial takes, you're probably reading AI-generated material.
5. Missing real-world experience
This is where the lack of real experience shows up most clearly. AI might correctly explain the basics of complex topics, but it often misses the practical complications that anyone who's actually done the work knows about.
The advice sounds textbook-perfect but lacks the "yeah, but in reality..." insights that make content actually useful. Human experts naturally include caveats, mention common pitfalls, or explain why standard advice doesn't always work in practice.
When content presents complex topics as straightforward without acknowledging the messy realities, it's usually because real expertise is missing.
A note on em dashes
People love to point at em dashes as proof of AI writing, but that's unfair to a perfectly good punctuation mark. Writers have used em dashes for centuries—to add drama, create pauses or insert extra thoughts into sentences.
The real issue isn't that AI uses them; it's how AI misuses them. You'll often see AI throwing in em dashes where a semicolon would work better, or using them to create false drama in boring sentences.
Real writers use em dashes purposefully to enhance their meaning, while AI tends to sprinkle them in as a lazy way to make sentences sound more sophisticated.
Before you dismiss something as AI-written just because of punctuation, check whether those dashes actually serve a purpose or if they're just there for show.
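If you do want a quick sanity check on punctuation, counting is trivial. This sketch (the threshold is your own call) measures em dashes per sentence rather than treating any single dash as evidence:

```python
import re

def em_dash_rate(text: str) -> float:
    """Em dashes per sentence; frequency, not mere presence, is the signal."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if not sentences:
        return 0.0
    return text.count("\u2014") / len(sentences)

print(em_dash_rate(
    "AI loves drama \u2014 in every sentence \u2014 even when none is needed. "
    "Humans use it sparingly."
))
```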
Now that you've learned the telltale signs of AI-generated writing, why not take a look at our other useful guides?
Don't miss: This tool identifies AI-generated images, text and videos — here's how it works, and You can stop Gemini from training on your data — here's how.
And if you want to explore some lesser-known AI models, take a look at I write about AI for a living — here's my 7 favorite free AI tools to try now.
