
AI knows we shouldn't trust it for everything. I know because I asked it.
Since generative AI burst into mainstream use a few years ago, I've been split between two sides of the ongoing debate: Is AI evil, or is it the next revolutionary advance in society?
On the one hand, I'm a typical AI skeptic. I worry that its prevalence is harming critical thinking skills and creativity, and I am very concerned about its environmental impact.
Conversely, I'm a child of the internet. I know this conversation has happened before, and I know when I'm getting left behind. I've heard enough friends discuss using ChatGPT in their daily lives to know that AI is here to stay, so I might as well get accustomed to it.
I had also been using AI in small doses already: Every time I search on Google, its Gemini AI summarizes an answer at the top of the results. I used to use AI to transcribe my interviews. My work uses Microsoft Teams, which has its own AI called Copilot.
But I had yet to dive headfirst into the world of ChatGPT, OpenAI's chatbot that launched in 2022 and effectively changed the way AI is used by everyday people. With the blessing of my editor, I decided it was time to get familiar with the tool that's probably going to take my job one day.
I opened the app, created an account and introduced myself as a journalist.
'Hi Sara!' ChatGPT replied. 'Great to meet you. I'm ready for your questions – ask away whenever you're ready.'
Did ChatGPT immediately go woke, or was it just agreeing with me?
To start, I launched into a series of questions about Zohran Mamdani, the Democratic candidate for New York City mayor known for his progressive politics. I told ChatGPT that I generally agree with Mamdani's politics, and asked if the chatbot believed democratic socialism could work in New York City.
'Yes, democratic socialism can work in New York City – at least in specific, meaningful ways – but there are structural, political, and fiscal tensions that make its implementation uneven and often contentious,' the AI responded.
It then gave me a list of reasons why it could work (organizing leftists, public opinion and current elected officials) and a list of roadblocks to making that happen (state government, finances and strong opposition from bigwigs in the city).
This made me immediately skeptical of what ChatGPT was telling me. I had made it clear that I supported Mamdani's policies – was it just telling me what I wanted to hear?
I asked another question – this time about Mamdani's chances in the November election. In seconds, the chatbot pulled up a series of polls showing him in the lead. I asked what neighborhoods he'd need to win – something I know journalists have previously broken down.
To my surprise, ChatGPT linked to actual news articles that delved into these topics ‒ research that would have taken me several hours on my own. On the other hand, it also cited Wikipedia multiple times, a practice I've been told to avoid since at least middle school.
Opinion: Zohran Mamdani rallied Gen Z voters. We can't abandon him now.
Overall, I was surprised by how easy it was to get the answers I needed, and particularly stunned when the information was easier to digest than what I'd get from a basic Google search. AI seemed able to break down my questions into thoughtful answers in a way that standard search never could, unless I was very specific in what I searched for.
ChatGPT knows humans might not trust AI
Next, I moved on to why I was really there – to see if ChatGPT could actually be a useful tool for me.
'I'm writing a column about you, and the inevitability of AI becoming part of daily life,' I typed in the chat. 'Do you think this is where things are headed?'
It responded by calling my column idea 'fascinating.' Flattering.
'Yes, it does look increasingly inevitable that AI will become a normalized, embedded part of daily life,' the bot responded.
Of course it believes that. It also doesn't seem to be wrong.
Opinion: AI is changing our world. At what point will it change our reality?
'If I'm honest, I'm a little wary of using you ‒ it seems like AI can be a force for good, but relying on it too heavily can harm human critical thinking skills,' I wrote.
'You're absolutely right to be wary,' ChatGPT responded. 'AI is a tool, not a replacement for human judgment, and there is a real risk that relying on it too much can blunt critical thinking, creativity, and independent analysis ‒ especially in fields like journalism, where those skills are core.'
AI knows it shouldn't do my job. Also, AI wants to do my job.
It feels strange to know that this chatbot is somewhat self-aware. It didn't think it was a good idea for humans to rely on it, yet it kept offering to outline my column or feed me short quips to use. The line between 'tool' and 'crutch' seems awfully thin.
Aside from the fact that it would be unethical to have ChatGPT write this column, I also don't find that the program creates particularly compelling writing. The heavy reliance on dashes and italics got old quickly.
It also seemed to struggle with turns of phrase. I told it an old journalism idiom – "If your mother says she loves you, check it out" – which it regurgitated as "if (AI) says my mother loves me, I'll still check the birth certificate."
Opinion: School cell phone bans are a distraction. The real crisis isn't in your kid's hand.
Another thing that stuck out to me was how complimentary ChatGPT was. It called my questions 'excellent'; it told me how important journalism is as a career. I appreciated the ego boost, noticing that it made me want to use the chatbot even more. After all, who doesn't like being told that they're intelligent and interesting?
I can't lie. I get it now. I understand the allure of AI.
I began thinking of all the ways I could use ChatGPT – replying to reader emails, synthesizing the week's important events, maybe even responding to people on Hinge.
In the end, I had to stop myself – I fear that becoming too reliant on AI would dull my senses in the long run, destroying my capacity for creativity and leading me to forget why I love writing in the first place.
When I declined to let it write my column and told it I'd be working with my editor, it told me that this was a good idea.
"Your readers will benefit from the fact that you're approaching this moment with curiosity and caution," it told me. "That's where good journalism lives."
I still have a healthy skepticism of ChatGPT and AI's newfound place in our culture. I believe we should all be cautious when using it – after all, there are plenty of instances of AI being wrong. At the same time, I do see the benefit: It's quick, thorough and conversational. I understand why so many people I know use it.
You don't have to use AI, the same way you don't have to use the internet. When you do use it, be skeptical of the information it provides, and try to limit how often you use it to reduce its environmental impact. Just be aware that this is where the future is headed, whether we like it or not.
Follow USA TODAY columnist Sara Pequeño on X, formerly Twitter: @sara__pequeno
