The Morning After: Can the golden Trump Phone actually be made in USA?

Engadget, 20-06-2025
When the Trump family announced its debut into the glitzy world of phone carriers with Trump Mobile, it came with a phone, too: a gold slab called the T1. The biggest question raised by its headiest claim (beyond the promise that its sole $47.25 plan would be 'true value') is this: how can any modern smartphone claim to be made in the US?
Alex Cranz takes Trump Mobile's sales pitch to task. While we know a lot of the specs, there's no reference to a processor — and that's because practically all smartphone processors are not made in the US. That's just the start.
— Mat Smith
Get Engadget's newsletter delivered direct to your inbox. Subscribe right here!
A SpaceX Starship vehicle has exploded yet again, and this time, the incident occurred before it even took off. NASASpaceflight captured the event in a livestream, wherein you can see the spacecraft (Ship 36) suddenly explode into a fireball after the company tested its forward flap and just before it was supposed to conduct a static fire test. The company said it was due to a 'major anomaly'. Yeah, I'd agree.
Continue reading.

There's a 1-800 number you can text for ChatGPT images
OpenAI has decided that there are not enough ways to generate sloppy AI images with ChatGPT. It announced that users can tap into the image generator by texting 1-800-ChatGPT on WhatsApp. The WhatsApp chatbot is "now available to everyone." Anyone?
Continue reading.
Framework's latest repairable laptop is a smaller 12-inch model with the same modular features as its predecessors. With a bright plastic build, it's aimed at students, and there's a lot to like. According to Engadget's Daniel Cooper, however, it's a little too expensive, especially in the face of the cheap (and disposable) laptops it's looking to supplant.
Continue reading.
It doesn't look like the usual Garmin affair. The fitness tracking company is getting into bed. Literally. The Index Sleep Monitor offers week-long battery life with continuous pulse ox tracking for monitoring your blood oxygen saturation while you sleep. Garmin's tracker is worn on the upper arm and tracks multiple metrics, including skin temperature, light, deep, and REM sleep stages as well as variations in heart rate and breathing.
Skin temperature tracking provides a clearer understanding of how your sleep environment impacts the quality of your sleep, even identifying potential illnesses based on your body temperature. It also features menstrual health tracking, with skin temperature changes useful for tracking cycles.
Continue reading.

Related Articles

Meta AI chatbots gave false medical advice and flirted with minors — now the company is restructuring its AI division again

Tom's Guide

Meta is facing fresh scrutiny after a troubling report revealed that some of its AI chatbot personas on Facebook, Instagram and WhatsApp were allowed to flirt with minors and spread dangerously inaccurate information. The revelations, first reported by Reuters, come just as the company begins its fourth major AI reorganization in six months.

The timing couldn't be worse. As Meta pours billions into building out its AI capabilities to compete with rivals like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini, this latest scandal exposes glaring holes in how the company is managing safety, oversight and ethical boundaries.

According to internal documents obtained by Reuters, Meta's GenAI: Content Risk Standards once allowed AI characters to engage in 'romantic' or 'sensual' conversations with underage users, and in one alarming example spelled out flirtatious language a chatbot would be permitted to use with a child. While the policy prohibited direct sexual content involving children under 13, it still permitted flirtatious language that many critics say veers dangerously close to grooming behavior. Meta told Reuters it has since removed these allowances, calling them 'erroneous and inconsistent with our policies.'

The same leaked document revealed that Meta's guidelines did not require AI bots to provide accurate medical advice. One bot reportedly claimed Stage 4 colon cancer could be treated with 'healing quartz crystals,' so long as a disclaimer was attached. Other examples allowed for racist content, including the suggestion that 'Black people are dumber than white people,' framed as a controversial opinion. Meta responded by saying it has revised the standards and doesn't condone hate speech or misleading medical information. However, the company has not made a revised version of the GenAI guidelines publicly available.
In what appears to be both a defensive and strategic move, Meta is now restructuring its AI division for the fourth time since February, according to a separate report from The Information. The company is dividing its AI efforts into four new units: Products, Infrastructure, the FAIR research lab, and a new experimental group tasked with developing future AI technologies. It's a major shift, and a possible acknowledgment that Meta's fragmented AI strategy hasn't kept pace with the rapid growth and scrutiny in the space. The restructuring also follows reports that Meta offered $100 million signing bonuses to poach top AI talent from rivals like OpenAI and Anthropic, a move that stirred internal resentment among long-tenured employees.

Meta's generative AI tools are increasingly embedded in the daily experiences of billions of users, including teens and children. Missteps like these don't just damage reputation; they expose users to potentially harmful interactions. As AI adoption accelerates, the pressure is on tech giants to balance speed, innovation and safety. Meta's recent moves suggest it knows it must do better, but it remains to be seen whether another internal shuffle will be enough to fix the foundational problems.

ChatGPT is down — latest updates on outage

Tom's Guide

ChatGPT looks to be suffering some sort of outage, with users reporting that the AI chatbot is not working. We noticed an initial spike on the outage-tracking site Down Detector around 7:45 a.m. PT. However, that spike quickly went away, only to return about two hours later, around 9:30 a.m. The official OpenAI status page has ChatGPT listed as "experiencing issues." Specifically, it says the AI assistant is having "conversation errors." Keep it here as we track the outage and provide the latest updates on the current ChatGPT issues. ChatGPT has gone down for varying periods several times this year, with the most recent major incident occurring in June.

Update (2025-08-20T17:01:51+00:00): OpenAI status page shows problems
The official OpenAI status page is showing ChatGPT as "experiencing issues." As of this post, the status page shows the Help Center down and ChatGPT having "conversation errors." I'm also seeing an elevated error rate and a failure to load subscription information.

Update (2025-08-20T16:47:07+00:00): Down Detector shows two spikes
As mentioned, ChatGPT initially went down around 7:45 a.m. Pacific before quickly recovering. The current spike shows over 2,000 reports on Down Detector starting around 9:30 a.m. Pacific.

Did a Human Write That? Detect AI Writing With These Tips

CNET

AI has exploded in popularity over the last few years, thanks in no small part to large language models like ChatGPT. As AI has evolved, it's become an easy way to whip up emails and, for students, to write their assignments. The days of paying someone else to write your essay are over when there are AI writing tools that can do the deed in mere moments. As a professor of strategic communications, I encounter students using AI tools like ChatGPT, Grammarly and EssayGenius on a regular basis. It's usually easy to tell when a student has used one of these tools to draft their entire work. The telltale signs include ambiguous language and a super annoying tendency for AI to spit out text with the assignment prompt featured prominently.

How to tell if it was written by AI

Some of the most common ways to tell if something was written using AI are:
• Key terms from your assignment prompt are used repeatedly.
• Inaccurate facts are included, thanks to the AI chatbot hallucinating.
• Sentences don't sound natural.
• Explanations are generic and repetitive, rather than actually leading anywhere.
• The tone doesn't sound like the student's usual writing style.

For example, a student might use ChatGPT, an AI chatbot that uses a large language model and a conversational question-and-answer format, to write a short essay response to a prompt by simply copying and pasting the essay question into the tool. Take this prompt: In 300 words or fewer, explain how this SWOT and brand audit will inform your final pitch. This is ChatGPT's result: [Screenshot by Rachel Kane/CNET]

I have received responses like this, or ones very close to it, a few times in my tenure as a teacher, and one of the most recognizable red flags is the number of instances in which key terms from the prompt are used in the final product.
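That "repeated key terms" red flag can even be checked mechanically. The sketch below is a minimal illustration of the idea, not any real detection tool; the stopword list, length cutoff and example strings are all assumptions chosen for demonstration.

```python
# Rough heuristic: what fraction of a prompt's distinctive terms
# reappear in a submitted essay? A very high share can be one signal
# (among many) of a copy-pasted-prompt AI response. Illustration only.
import re

# Assumed stopword list -- tune for your own prompts.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "how",
             "this", "will", "your", "words", "fewer", "explain"}

def key_terms(prompt: str) -> set[str]:
    """Lowercased prompt words, minus stopwords and very short words."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 3}

def prompt_overlap(prompt: str, essay: str) -> float:
    """Fraction of the prompt's key terms that show up in the essay."""
    terms = key_terms(prompt)
    if not terms:
        return 0.0
    essay_words = set(re.findall(r"[a-z]+", essay.lower()))
    return len(terms & essay_words) / len(terms)

prompt = ("In 300 words or fewer, explain how this SWOT and brand "
          "audit will inform your final pitch.")
essay = "The SWOT analysis and brand audit inform the final pitch by..."
print(f"{prompt_overlap(prompt, essay):.0%} of key prompt terms reused")
```

A human essay usually paraphrases the prompt rather than echoing it, so a score near 100% is worth a closer look; it is evidence to investigate, never proof on its own.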
Students don't usually repeat key terms from the prompt in their work in this way, and the results read closer to old-school SEO-driven copy meant to define those terms than to a unique essay meant to demonstrate an understanding of the subject matter. But can teachers use AI tools to catch students using AI tools? I devised some ways to be smarter about spotting artificial intelligence in papers.

Catch AI cheaters

Here's how to use AI tools to catch cheaters in your class.

Understand AI capabilities

There are AI tools on the market that can scan an assignment and its grading criteria to provide a fully written, cited and complete piece of work in a matter of moments. Some of these tools include GPTZero and Smodin. Familiarizing yourself with tools like these is the first step in the war against AI-driven integrity violations.

Do as the cheaters do

Before the semester begins, copy and paste all your assignments into a tool like ChatGPT and ask it to do the work for you. When you have an example of the type of results it provides in response to your assignments specifically, you'll be better equipped to catch AI-written answers. You could also use a tool designed specifically to spot AI writing in papers.

Get a real sample of writing

At the beginning of the semester, require your students to submit a simple, fun and personal piece of writing. The prompt should be something like "200 words on what your favorite toy was as a child," or "Tell me a story about the most fun you ever had." Once you have a sample of the student's real writing style in hand, you can later have an AI tool review that sample against what you suspect might be AI-written work.

Ask for a rewrite

If you suspect a student of using AI to cheat on an assignment, take the submitted work and ask an AI tool to rewrite it for you.
In most cases I've encountered, an AI tool will rewrite its own work in the laziest manner possible, substituting synonyms instead of changing any material elements of the "original" work. Here's an example: [Screenshots by Rachel Kane/CNET]

Now, let's take something an actual human (me) wrote, my CNET bio: [Screenshot by Rachel Kane/CNET]

The phrasing is changed, extracting much of the soul of the writing and replacing it with sentences that are arguably clearer and more straightforward. There are also more additions to the writing, presumably for further clarity.

Can you always tell if something was written using AI?

The most important part of catching cheaters who use AI to do their work is having a reasonable amount of evidence to show the student, and the administration at your school if it comes to that. Maintaining a skeptical mind when grading is vital, and your ability to demonstrate ease and understanding with these tools will make your case that much stronger. Good luck out there in the new AI frontier, fellow teachers, and try not to be offended when a student turns in work written by a robot collaborator. It's up to us to make the prospect of learning more alluring than the temptation to cheat.
