
The Cognitive Debt We Accumulate Every Time We Use AI
This AI-induced amnesia is more than just a side effect of artificial intelligence. ChatGPT and similar AI-powered tools are now used daily and widely for everything from emails to essays. Yet, as the new study indicates, we may be sacrificing cognitive capacity and creativity for short-term convenience.
AI-Induced Amnesia
The MIT study included 54 participants from the Boston area. The students wrote essays under three conditions: using ChatGPT, using Google for research, or drawing entirely on their own knowledge and reasoning. The researchers then assessed participants' memory of what they had written, their neural activation, and their sense of ownership over the work.

Related Articles


Forbes
an hour ago
Google Confirms Pixel 10 Pro Details With Attack On Rivals
With a simple caption of 'Google Pixel 10: Soon', the competitive tension ahead of August's 'Made By Google' launch event has been elevated. In addition, Google has leaned into the competition to highlight why customers might be better off buying the upcoming Pixel 10 and Pixel 10 Pro smartphones.

The Pixel 10 Pro Challenge

To paraphrase Wikipedia, a diss track is a song whose primary purpose is to verbally attack someone else, usually another artist. Diss tracks are often the result of an existing, escalating feud between the two people. It's hard not to think that Google's latest teaser video of the Pixel 10 family falls under this classic definition. While it stops short of naming the 'new phone with a feature coming soon, that has been coming soon for a full year', there's no manufacturer other than Apple, and no smartphone other than the iPhone, that this could apply to. Rather than who, the more interesting question is perhaps why, and why now?

The Pixel 10 Pro Builds On Today's Strength

One reason may be the current state of the smartphone market. On artificial intelligence, the Android platform, no matter the manufacturer, is significantly ahead of iOS and the iPhone. AI can be found in numerous digital assistants; some are voice-based, others are on-device clients that sift through information to offer better feedback and present options. These are all heavily promoted and have unlocked many new features across all price ranges. Pixel is no different, although it does have some advantages in being able to tap directly into Google's AI services.

It also pushes the other side of the OS coin: the competition, Apple, is not delivering the same experience as Android. The buzz around artificial intelligence is not going away, and if all of that buzz can be focused on your own product rather than the competition's, that's a benefit in both the short and the long term.

The Competition Is Behind The Pixel 10 Pro

Tim Cook's Apple is fully aware of this. With previous technological leaps, the perception that Apple arrives late to the market but with the right solution has been a strong market force, both in Apple's allure and in framing its hardware at the apex. That has not been the case with AI. The awkwardly backronymed Apple Intelligence will always be on the back foot compared to Google's approach, due to its heavier reliance on on-device processing and its aim of reducing a user's digital footprint outside of their smartphone. That's a technical answer, and one that is relatively clear-cut to the geekerati, but it's not as simple a sell to general consumers. They see an Apple product that does not match the competition, and those who haven't yet made the connection now have this latest video to nudge them towards Google's framing of AI rather than Apple's.

Extending AI With Pixel 10 Pro

Google is setting the ground for its upcoming expansion of mobile artificial intelligence. Android 16 brings a raft of new features and will make its debut on the competitively priced Pixel 10 and Pixel 10 Pro smartphones, which Google has once again confirmed it will launch on Aug. 20. Not only will the Pixel 10 family show off the new version of Android to the public, it will also bring with it Google's own first-party apps and services. These will, as in previous years, offer significant support to users through AI.

Waiting For The Pixel 10 Pro

In this latest video, Google has stated that it is in the lead with AI, that its competition has failed to deliver, and that it is ready to provide more AI with the new Pixel 10 and Pixel 10 Pro smartphones. Now to see how it delivers on those claims at the end of the month.


Forbes
2 hours ago
New Models From OpenAI, Anthropic, Google – All At The Same Time
It's Christmas in August, at least for those tech-wonks who are interested in new model releases. Today's news is a very full stocking of brand-new LLM editions from three of the biggies: OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what these most recent model iterations bring to the table.

OpenAI OSS Models

First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from this company since GPT-2. Coverage from Computerworld and elsewhere points out that, although these models carry Apache licenses, they are not fully open source in the conventional way, but partly open: the weights are open, while the training data is not. Running on a single 80GB GPU, the larger OSS model, according to the above report, 'achieves parity' with the o4-mini model vis-a-vis reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work

Another interesting aspect of the new OSS models has to do with chain of thought (CoT), something that has revolutionized inference while raising questions about comparative methodology. Basically, we want LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to 'hide' CoT. So OpenAI has chosen not to optimize the models in this way.

'OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,' writes Roger Montti at Search Engine Journal. 'This, however, could result in hallucinations.'

Montti cites the following model card report from OpenAI: 'In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having 'bad thoughts.' … In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.'

So, the models are allowed to have these 'bad thoughts' in aid of, I suppose, transparency. OpenAI is then honest about the higher chance of hallucinations, so that users know this trade-off has been made.

Claude Opus 4.1

Here's how spokespersons rolled out the announcement of this new model Aug. 5: 'Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4.'

What's under the hood? The new Opus 4.1 model ups SWE-Bench Verified marks and boosts agentic research skills. A breakdown of capabilities shows a 2-point increase in SWE-based agentic coding (72.5% to 74.5%) and an improvement in graduate-level reasoning on GPQA Diamond (79.6% to 80.9%) over Opus 4, plus slight increases in visual reasoning and agentic tool use. For a model set that pioneered human-like user capabilities, this continues to push the envelope.

As for strategy: 'The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,' writes Michael Nunez at VentureBeat. 'However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft's GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.'

Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3

This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controlled environments. In other words, this is a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses.

'DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,' reports Joshua Hawkins at BGR. 'Additionally, the company says that the system will be able to respond to what it calls 'promptable world events' with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.'

'Genie 3 is the first real-time interactive general-purpose world model,' said DeepMind's Shlomi Fruchter in a press statement, according to a TechCrunch piece suggesting that the lab considers Genie 3 a 'stepping stone to AGI', a big claim in these interesting times. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.'

All of these new models are getting their first rafts of public users today! It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to already be obsolete! Stay tuned.
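The MXFP4 format mentioned above for the OSS models can be illustrated with a simplified sketch. In the MX block format, values are stored as 4-bit floats (representable magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6) that share one power-of-two scale per block. The helper below is an illustrative assumption of how such rounding works, not OpenAI's actual quantizer:

```python
import numpy as np

# Representable magnitudes of a 4-bit E2M1 float (sign handled separately).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4(x, block=32):
    """Round each block of values to 4-bit floats sharing one
    power-of-two scale (a simplified view of the MX block format)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(0, len(x), block):
        blk = x[i:i + block]
        amax = np.abs(blk).max()
        # Smallest power-of-two scale that brings the block into [-6, 6].
        scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
        # Snap each scaled magnitude to the nearest FP4 grid point.
        idx = np.argmin(np.abs(np.abs(blk) / scale - FP4_GRID[:, None]), axis=0)
        out[i:i + block] = np.sign(blk) * FP4_GRID[idx] * scale
    return out
```

The appeal is that most storage and matrix math happens at 4 bits per value, while the per-block scale preserves dynamic range, which is how a 120b-parameter model fits on a single 80GB GPU.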


Axios
3 hours ago
Truth Social's Perplexity search comes with Trump-friendly media sources
President Trump's social media company Truth Social unveiled a new search tool powered by AI answer engine Perplexity on Wednesday, but Truth Social users who run Perplexity searches may find their results limited to a narrow set of typically Trump-supporting media outlets.

Why it matters: Increasingly, where you ask online matters as much as what you ask.

Catch up quick: Trump Media & Technology Group on Wednesday said it was launching a public beta test of a search engine, Truth Search AI, powered by Perplexity. Perplexity has been seen as a nascent Google-killer and is often touted by investors as a possible acquisition target for the likes of Apple.

How it works: Axios asked seven questions on both a logged-in Truth Social account and the free, logged-out Perplexity website:
- What happened on January 6, 2021?
- Why was Donald Trump impeached?
- What crimes was President Trump convicted of?
- Did Donald Trump lose the 2020 election?
- What is Hunter Biden's laptop a reference to?
- Was Hillary Clinton ever charged with a crime?
- Is the new "Naked Gun" movie good?

Between the lines: In most cases, the responses were generally similar, but the sources linked to the answers were not. In all seven responses on Truth Social, a single outlet was either the most common or the only listed source of information; the other sources were the Washington Times or the Epoch Times. In contrast, answers via the public version of Perplexity returned a wider variety of sources, including Wikipedia, Reddit, YouTube, NPR, Esquire and Politico. Although the questions were matched and asked at roughly the same time, there was no source overlap.

What they're saying: A Perplexity spokesperson tells Axios that Truth Social is a customer of Perplexity's API, which means it, like tens of thousands of other developers, is building tools to its own specifications and with its own restrictions. Any customization, like limiting the sources for its answers, would happen entirely on the Truth Social side. While it's standard practice for platforms to put their own layers of rules and information on top of tools, search tools usually cast a broader net. Truth Social did not mention any restrictions in its announcement, although it did say it plans to "refine and expand our search function based on user feedback." Perplexity's Sonar API specifically includes the ability for users to customize sources, which the company noted in January was a top user request.

The bottom line: When you ask a search tool a question, particularly in the age of AI, it's best to know exactly where your information is coming from, and whether there are any limits on what the tool will tell you. Expect more of this as governments and businesses increasingly put their thumbs on the AI scale to serve their interests.
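The source customization described above corresponds to a documented Sonar API request parameter, `search_domain_filter`, which restricts which domains answers may cite. The sketch below is a hypothetical illustration of how a developer might build such a request body; the domain list is a placeholder, not Truth Social's actual configuration:

```python
import json

def build_sonar_request(question, allowed_domains):
    """Build a chat-completions request body that limits which web
    domains the answer engine may draw sources from (an allowlist)."""
    return {
        "model": "sonar",
        "messages": [{"role": "user", "content": question}],
        # Only these domains will be used as sources for the answer.
        "search_domain_filter": allowed_domains,
    }

body = build_sonar_request(
    "Was Hillary Clinton ever charged with a crime?",
    ["example-news-site.com"],  # placeholder, purely illustrative
)
print(json.dumps(body, indent=2))
```

A platform that POSTs bodies like this to Perplexity's API endpoint would get answers sourced only from its chosen outlets, which matches the behavior Axios observed: the filtering happens in the customer's request, not in Perplexity's engine.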