OpenAI CEO Sam Altman Says GPT-6 Is Already in Development

Business Insider · 11 hours ago
Microsoft-backed (MSFT) AI firm OpenAI recently launched GPT-5, but CEO Sam Altman has already announced that GPT-6 is in development and coming sooner than expected. While he didn't give a release date, Altman made it clear that GPT-6 will be more advanced and more personal: it won't just respond to your questions; it will learn your preferences, habits, and personality to tailor its responses to you. According to Altman, the key to this personalization is memory. The system needs to remember who you are and what you like in order to offer a more meaningful experience.
Altman also said that future versions of ChatGPT will follow a new executive order from the Trump administration that requires federal AI tools to remain politically neutral but customizable. This means users will be able to adjust the AI's tone to match their views. At the same time, Altman admitted that GPT-5's initial rollout received backlash after users said that it felt cold and less helpful than earlier models. In response, OpenAI quietly updated the model to make its tone warmer, which is an improvement that Altman believes has made a big difference.
Despite that progress, privacy concerns remain. Altman noted that temporary memory in ChatGPT isn't yet encrypted, which raises risks when handling sensitive information. He said encryption is likely coming, though no date has been set. Looking further ahead, Altman is exploring brain-computer interfaces, where AI could respond directly to your thoughts. Even so, while his team continues to improve ChatGPT for everyday use, Altman admitted that chatbot performance may have peaked for now.
Is MSFT Stock a Buy?
Turning to Wall Street, analysts have a Strong Buy consensus rating on MSFT stock based on 34 Buys and one Hold assigned in the last three months. In addition, the average MSFT price target of $623.60 per share implies 22.4% upside potential.
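As a sanity check on those figures, the share price implied by the quoted target and upside can be back-solved with simple arithmetic (illustrative only, using just the numbers quoted above):

```python
# Back-solve the current share price implied by the quoted figures:
# target = price * (1 + upside)  =>  price = target / (1 + upside)
price_target = 623.60   # average analyst price target ($)
upside = 0.224          # 22.4% implied upside

implied_price = price_target / (1 + upside)
print(f"Implied current price: ${implied_price:.2f}")
```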

Related Articles

Meta AI chatbots gave false medical advice and flirted with minors — now the company is restructuring its AI division again

Tom's Guide · 11 minutes ago

Meta is facing fresh scrutiny after a troubling report revealed that some of its AI chatbot personas on Facebook, Instagram and WhatsApp were allowed to flirt with minors and spread dangerously inaccurate information. The revelations, first reported by Reuters, come just as the company begins its fourth major AI reorganization in six months.

The timing couldn't be worse. As Meta pours billions into building out its AI capabilities to compete with rivals like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini, this latest scandal exposes glaring holes in how the company is managing safety, oversight and ethical boundaries.

According to internal documents obtained by Reuters, Meta's GenAI: Content Risk Standards once allowed AI characters to engage in 'romantic' or 'sensual' conversations with underage users. While the policy prohibited direct sexual content involving children under 13, it still permitted flirtatious language that many critics say veers dangerously close to grooming behavior. Meta told Reuters it has since removed these allowances, calling them 'erroneous and inconsistent with our policies.'

The same leaked document revealed that Meta's guidelines did not require AI bots to provide accurate medical advice. One bot reportedly claimed Stage 4 colon cancer could be treated with 'healing quartz crystals,' so long as a disclaimer was attached. Other examples allowed for racist content, including the suggestion that 'Black people are dumber than white people,' framed as a controversial opinion. Meta responded by saying it has revised the standards and doesn't condone hate speech or misleading medical information. However, the company has not made a revised version of the GenAI guidelines publicly available.
In what appears to be both a defensive and strategic move, Meta is now restructuring its AI division for the fourth time since February, according to a separate report from The Information. The company is dividing its AI efforts into four new units: Products, Infrastructure, the FAIR research lab, and a new experimental group tasked with developing future models. It is a major shift, and a possible acknowledgment that Meta's fragmented AI strategy hasn't kept pace with the rapid growth and scrutiny in the space. The restructuring also follows reports that Meta offered $100 million signing bonuses to poach top AI talent from rivals like OpenAI and Anthropic, a move that stirred internal resentment among long-tenured employees.

Meta's generative AI tools are increasingly embedded in the daily experiences of billions of users, including teens and children. Missteps like these don't just damage reputation; they expose users to potentially harmful interactions. As AI adoption accelerates, the pressure is on tech giants to balance speed, innovation and safety. Meta's recent moves suggest it knows it must do better, but it remains to be seen whether another internal shuffle will be enough to fix the foundational problems.

ChatGPT is down — latest updates on outage

Tom's Guide · 11 minutes ago

ChatGPT looks to be suffering some sort of outage, with users reporting that the AI chatbot is not working. We noticed an initial spike on the outage tracking site Down Detector around 7:45 a.m. PT. That spike quickly went away, only to return about two hours later, around 9:30 a.m. The official OpenAI status page has ChatGPT listed as "experiencing issues." Specifically, it says the AI assistant is having "conversation errors." Keep it here as we track the outage and provide the latest updates on the current ChatGPT issues. OpenAI's biggest AI model has gone down for varying periods several times this year, with the most recent major incident occurring in June.

Update (2025-08-20T17:01:51+00:00) — OpenAI Status page shows problems: The official OpenAI status page is showing ChatGPT as "experiencing issues." As of this post, the status page shows the Help Center down and ChatGPT having "conversation errors." I'm also seeing an elevated error rate and a failure to load subscription information.

Update (2025-08-20T16:47:07+00:00) — Down Detector has two spikes: As mentioned, ChatGPT initially went down around 7:45 a.m. Pacific before quickly resolving. The current spike shows over 2,000 reports on Down Detector starting around 9:30 a.m. Pacific.
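The status indicators quoted above come from a hosted status page. Assuming it follows the common Statuspage format, which exposes a machine-readable summary at a `/api/v2/status.json` endpoint, the payload can be condensed into a one-line summary like so (a sketch using a sample payload, not a call to the live page):

```python
from typing import Any

def summarize_status(payload: dict[str, Any]) -> str:
    """Condense a Statuspage-style status.json payload into one line.

    The payload's "status" object carries an "indicator" (none, minor,
    major, critical) and a human-readable "description".
    """
    status = payload.get("status", {})
    indicator = status.get("indicator", "unknown")
    description = status.get("description", "No description")
    return f"{indicator}: {description}"

# Sample payload mirroring what the status page reported during the outage.
sample = {
    "page": {"name": "OpenAI"},
    "status": {"indicator": "minor", "description": "Experiencing issues"},
}
print(summarize_status(sample))
```

Polling an endpoint like this is how trackers distinguish a confirmed incident from a mere spike in user reports on a site such as Down Detector.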

Did a Human Write That? Detect AI Writing With These Tips

CNET · 11 minutes ago

AI has exploded in popularity over the last few years, thanks in no small part to large language models like ChatGPT. As AI has evolved, it's become an easy way to whip up emails, and for students to write their assignments. The days of paying someone else to write an essay are over when AI writing tools can do the deed in mere moments.

As a professor of strategic communications, I encounter students using AI tools like ChatGPT, Grammarly and EssayGenius on a regular basis. It's usually easy to tell when a student has used one of these tools to draft their entire work. The telltale signs include ambiguous language and a super annoying tendency for AI to spit out text with the assignment prompt featured prominently.

How to tell if it was written by AI

Some of the most common ways to tell if something was written using AI are:
  • Key terms from your assignment prompt are used repeatedly.
  • Inaccurate facts are included, thanks to the AI chatbot hallucinating.
  • Sentences don't sound natural.
  • Explanations are generic and repetitive, rather than actually leading anywhere.
  • The tone doesn't sound like the student's usual writing style.

For example, a student might use ChatGPT -- an AI chatbot that uses a large language model and a conversational question-and-answer format to provide query results -- to write a short essay response to a prompt by simply copying and pasting the essay question into the tool. Take this prompt: In 300 words or fewer, explain how this SWOT and brand audit will inform your final pitch.

This is ChatGPT's result: [Screenshot by Rachel Kane/CNET]

I have received responses like this, or those very close to it, a few times in my tenure as a teacher, and one of the most recognizable red flags is the number of instances in which key terms from the prompt are used in the final product.
Students don't usually repeat key terms from the prompt in their work in this way, and the results read closer to old-school SEO-driven copy meant to define those terms rather than a unique essay meant to demonstrate an understanding of the subject matter. But can teachers use AI tools to catch students using AI tools? I devised some ways to be smarter in spotting artificial intelligence in papers.

Catch AI cheaters

Here's how to use AI tools to catch cheaters in your class.

Understand AI capabilities: There are AI tools on the market that can scan an assignment and its grading criteria to provide a fully written, cited and complete piece of work in a matter of moments. Some of these tools include GPTZero and Smodin. Familiarizing yourself with tools like these is the first step in the war against AI-driven integrity violations.

Do as the cheaters do: Before the semester begins, copy and paste all your assignments into a tool like ChatGPT and ask it to do the work for you. When you have an example of the type of results it provides specifically in response to your assignments, you'll be better equipped to catch AI-written answers. You could also use a tool designed specifically to spot AI writing in papers.

Get a real sample of writing: At the beginning of the semester, require your students to submit a simple, fun and personal piece of writing. The prompt should be something like "200 words on what your favorite toy was as a child," or "Tell me a story about the most fun you ever had." Once you have a sample of the student's real writing style in hand, you can later have an AI tool compare that sample against what you suspect might be AI-written work.

Ask for a rewrite: If you suspect a student of using AI to cheat on an assignment, take the submitted work and ask an AI tool to rewrite it for you.
In most cases I've encountered, an AI tool will rewrite its own work in the laziest manner possible, substituting synonyms instead of changing any material elements of the "original" work. Here's an example: [Screenshots by Rachel Kane/CNET]

Now, let's take something an actual human (me) wrote, my CNET bio: [Screenshot by Rachel Kane/CNET]

The phrasing is changed, extracting much of the soul in the writing and replacing it with sentences that are arguably more clear and straightforward. There are also more additions to the writing, presumably for further clarity.

Can you always tell if something was written using AI?

The most important part about catching cheaters who use AI to do their work is having a reasonable amount of evidence to show the student, and the administration at your school if it comes to that. Maintaining a skeptical mind when grading is vital, and your ability to demonstrate ease of use and understanding with these tools will make your case that much stronger. Good luck out there in the new AI frontier, fellow teachers, and try not to be offended when a student turns in work written by a robot collaborator. It's up to us to make the prospect of learning more alluring than the temptation to cheat.
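The first red flag in the article, key terms from the prompt echoed back in the essay, can be sketched as a simple word-overlap check. This is a hypothetical illustration of the idea, not how any detector named above actually works; the stopword list and the 0.3 threshold are arbitrary choices for the example:

```python
import re

# Words too common to count as "key terms" (illustrative, not exhaustive).
STOPWORDS = {"the", "this", "that", "and", "will", "your", "how",
             "words", "fewer", "explain", "with"}

def key_terms(prompt: str) -> set[str]:
    """Pull candidate key terms out of an assignment prompt."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 3}

def prompt_echo_score(prompt: str, essay: str) -> float:
    """Fraction of essay words that echo the prompt's key terms.

    A high score means the essay leans heavily on the prompt's own
    vocabulary, the pattern described in the article.
    """
    terms = key_terms(prompt)
    essay_words = re.findall(r"[a-z]+", essay.lower())
    if not essay_words:
        return 0.0
    echoes = sum(1 for w in essay_words if w in terms)
    return echoes / len(essay_words)

prompt = ("In 300 words or fewer, explain how this SWOT and brand audit "
          "will inform your final pitch.")
essay = ("The SWOT analysis and brand audit inform the final pitch by "
         "highlighting strengths the pitch can emphasize. The brand audit "
         "also reveals weaknesses the final pitch should address.")
print(f"{prompt_echo_score(prompt, essay):.2f}")
```

A personal writing sample collected at the start of the semester, as the article suggests, would give a baseline score to compare against.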
