
AI knows we shouldn't trust it for everything. I know because I asked it.
Since generative AI burst into the mainstream a few years ago, I've been torn between two sides of the ongoing debate: Is AI evil, or is it the next revolutionary advance in society?
On the one hand, I'm a typical AI skeptic. I worry that its prevalence is harming critical thinking skills and creativity, and I am very concerned about its environmental impact.
Conversely, I'm a child of the internet. I know this conversation has happened before, and I know when I'm getting left behind. I've heard enough friends discuss using ChatGPT in their daily lives to know that AI is here to stay, so I might as well get accustomed to it.
I had also been using AI in small doses already: Every time I search on Google, its AI technology, Gemini, summarizes an answer at the top of my results. I used to use AI to transcribe my interviews. My workplace uses Microsoft Teams, which has its own AI assistant, Copilot.
But I had yet to dive headfirst into the world of ChatGPT, OpenAI's chatbot that launched in 2022 and effectively changed the way AI is used by everyday people. With the blessing of my editor, I decided it was time to get familiar with the tool that's probably going to take my job one day.
I opened the app, created an account and introduced myself as a journalist.
'Hi Sara!' ChatGPT replied. 'Great to meet you. I'm ready for your questions – ask away whenever you're ready.'
Did ChatGPT immediately go woke, or was it just agreeing with me?
To start, I launched into a series of questions about Zohran Mamdani, the Democratic candidate for New York City mayor known for his progressive politics. I told ChatGPT that I generally agree with Mamdani's politics, and asked if the chatbot believed democratic socialism could work in New York City.
'Yes, democratic socialism can work in New York City – at least in specific, meaningful ways – but there are structural, political, and fiscal tensions that make its implementation uneven and often contentious,' the AI responded.
It then gave me a list of reasons why it could work (organizing leftists, public opinion and current elected officials) and a list of roadblocks to making that happen (state government, finances and strong opposition from bigwigs in the city).
This made me immediately skeptical of what ChatGPT was telling me. I had made it clear that I supported Mamdani's policies – was it just telling me what I wanted to hear?
I asked another question – this time about Mamdani's chances in the November election. In seconds, the chatbot pulled up a series of polls showing him in the lead. I asked what neighborhoods he'd need to win – something I know journalists have previously broken down.
To my surprise, ChatGPT linked to actual news articles that delved into these topics – research that would have taken me several hours on my own. On the other hand, it also cited Wikipedia multiple times – a practice I've been told to avoid since at least middle school.
Overall, I was surprised by how easy it was to get the answers I needed, and particularly stunned when the information was easier to digest than what I'd get from a basic Google search. AI seemed able to break down my questions into thoughtful answers in a way that standard search never could, unless I was very specific in what I searched for.
ChatGPT knows humans might not trust AI
Next, I moved on to why I was really there – to see if ChatGPT could actually be a useful tool for me.
'I'm writing a column about you, and the inevitability of AI becoming part of daily life,' I typed in the chat. 'Do you think this is where things are headed?'
It responded by calling my column idea 'fascinating.' Flattering.
'Yes, it does look increasingly inevitable that AI will become a normalized, embedded part of daily life,' the bot responded.
Of course it believes that. It also doesn't seem to be wrong.
'If I'm honest, I'm a little wary of using you ‒ it seems like AI can be a force for good, but relying on it too heavily can harm human critical thinking skills,' I wrote.
'You're absolutely right to be wary,' ChatGPT responded. 'AI is a tool, not a replacement for human judgment, and there is a real risk that relying on it too much can blunt critical thinking, creativity, and independent analysis ‒ especially in fields like journalism, where those skills are core.'
AI knows it shouldn't do my job. Also, AI wants to do my job.
It feels strange that this chatbot appears somewhat self-aware. It told me it wasn't a good idea for humans to rely on it, yet it kept offering to write an outline for my column or feed me short quips I could use. The line between 'tool' and 'crutch' seems awfully thin.
Aside from the fact that it would be unethical to have ChatGPT write this column, I also don't find that the program creates particularly compelling writing. The heavy reliance on dashes and italics got old quickly.
It also seemed to struggle with turns of phrase. I told it an old journalism idiom – "If your mother says she loves you, check it out" – which it regurgitated as "if (AI) says my mother loves me, I'll still check the birth certificate."
Another thing that stuck out to me was how complimentary ChatGPT was. It called my questions 'excellent'; it told me how important journalism is as a career. I appreciated the ego boost, noticing that it made me want to use the chatbot even more. After all, who doesn't like being told that they're intelligent and interesting?
I can't lie. I get it now. I understand the allure of AI.
I began thinking of all the ways I could use ChatGPT – replying to reader emails, synthesizing the week's important events, maybe even responding to people on Hinge.
In the end, I had to stop myself – I fear that becoming too reliant on AI would dull my senses in the long run, destroying my capacity for creativity and leading me to forget why I love writing in the first place.
When I declined to let it write my column and told it I'd be working with my editor, it told me that this was a good idea.
"Your readers will benefit from the fact that you're approaching this moment with curiosity and caution," it told me. "That's where good journalism lives."
I still have a healthy skepticism of ChatGPT and AI's newfound place in our culture. I believe we should all be cautious when using it – after all, there are plenty of instances of AI being wrong. At the same time, I do see the benefit: It's quick, thorough and conversational. I understand why so many people I know use it.
You don't have to use AI, the same way you don't have to use the internet. When you do use it, be skeptical of the information the program provides. Try to limit how often you use it to reduce its environmental impact. Just be aware that this is where the future is headed, whether we like it or not.
Follow USA TODAY columnist Sara Pequeño on X, formerly Twitter: @sara__pequeno
