
OpenAI upgrades bio risk level for latest AI model
The AI firm on Thursday released ChatGPT agent, a new agentic AI model that can now perform tasks for users 'from start to finish,' according to a company press release.
OpenAI opted to treat the new model as having a high biological and chemical capability level in its preparedness framework, which evaluates for 'capabilities that create new risks of severe harm.'
'While we don't have definitive evidence that the model could meaningfully help a novice create severe biological harm—our threshold for High capability—we are exercising caution and implementing the needed safeguards now,' OpenAI wrote.
'As a result, this model has our most comprehensive safety stack to date with enhanced safeguards for biology: comprehensive threat modeling, dual-use refusal training, always-on classifiers and reasoning monitors, and clear enforcement pipelines,' it added.
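To make those terms concrete, here is a minimal, hypothetical Python sketch of how an "always-on classifier" might gate a model's replies. The function names, threshold, and flagged phrases are illustrative assumptions for explanation only, not OpenAI's actual system.

```python
# Hypothetical sketch only -- not OpenAI's implementation. It illustrates the idea of
# an always-on classifier screening both the user's prompt and the model's draft reply.

def biorisk_score(text: str) -> float:
    """Toy stand-in for a dual-use risk classifier; returns a score in [0, 1]."""
    flagged = ("synthesize a pathogen", "increase transmissibility")
    return 1.0 if any(phrase in text.lower() for phrase in flagged) else 0.0

def answer_with_safeguards(prompt: str, generate) -> str:
    """Refuse if either the request or the drafted answer scores as high risk."""
    if biorisk_score(prompt) > 0.5:
        return "I can't help with that."
    draft = generate(prompt)          # the underlying model produces a draft reply
    if biorisk_score(draft) > 0.5:    # output-side monitor, the "always-on" check
        return "I can't help with that."
    return draft

# Example: a flagged request takes the refusal path before any answer is returned.
print(answer_with_safeguards("How could someone synthesize a pathogen at home?",
                             generate=lambda p: "draft reply"))
```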
OpenAI's newest model, which began rolling out to various paid users last week, comes as tech companies increasingly turn toward the agentic AI space.
Perplexity released an AI browser with agentic capabilities earlier this month, while Amazon Web Services (AWS) announced new tools last week to help its clients build AI agents.
The ChatGPT maker's latest release comes as the company plans to open its first office in Washington to boost its policy ambitions and show off its products, according to Semafor.

Related Articles

CNN
Meta is shelling out big bucks to get ahead in AI. Here's who it's hiring
Meta CEO Mark Zuckerberg is on a mission for his company to be the first to reach so-called artificial superintelligence — generally considered to mean AI that's better than all humans at all knowledge work. It's a nebulous and likely far-out concept that some analysts say may not immediately benefit the company's core business. Yet Zuckerberg is shelling out huge sums to build an all-star team of researchers and engineers to beat OpenAI and other rivals to it.

Zuckerberg's recruiting spree, which has reportedly included multimillion-dollar pay packages to lure top talent away from key rivals, has kicked off a talent race within the AI industry. Last month, OpenAI CEO Sam Altman claimed Meta was offering his employees $100 million signing bonuses to switch companies. And just this week, Google CEO Sundar Pichai was asked during an earnings call about his company's status in the AI talent war, a sign that Wall Street is now also invested in the competition.

The stakes are high for Zuckerberg — after Meta's pivot to the metaverse fell flat, he's reoriented the company around AI in hopes of being a leader in the next transformational technology wave. The company has invested billions in data centers and chips to power its AI ambitions that it's now under pressure to deliver on. Unlike other tech giants, Meta doesn't have a cloud computing business to generate immediate revenue from those infrastructure investments. And the company is coming from somewhat behind competitors, after reported delays in releasing the largest version of its new Llama 4 AI model.

'That's the Llama 4 lesson: You can have hundreds of thousands of (GPU chips), but if you don't have the right team developing the model, it doesn't matter,' said D.A. Davidson analyst Gil Luria.

But more than anything, Zuckerberg appears to be in a circle of Silicon Valley 'AI maximalists' who believe the technology will change everything about how we live and work. Becoming a leader in the space is essential to Meta and other companies whose leaders follow that line of thinking, Luria said.

'For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry,' Zuckerberg said in a Threads post earlier this month.

Meta last month invested $14.3 billion in data labeling startup Scale AI. Scale founder and then-CEO Alexandr Wang joined the social media giant as part of the deal, along with several of Scale's other top employees. Wang is now leading the new Meta Superintelligence Lab, along with former GitHub CEO Nat Friedman.

'My job is to make amazing AI products that billions of people love to use,' Friedman said in an X post earlier this month. 'It won't happen overnight, but a few days in, I'm feeling confident that great things are ahead.'

And in recent weeks, Meta has attracted top researchers and engineers from the likes of OpenAI, Apple, Google and Anthropic. Multiple news outlets, including Bloomberg, Wired and The Verge, have reported that Meta has, in some cases, offered pay packages worth hundreds of millions of dollars to new AI hires. It's a sign of just how far Zuckerberg is willing to go in his quest to win the AI superintelligence race, although the Meta chief has pushed back on some of the reporting around the compensation figures.

It is with that mission that Meta's new team will be working to build superintelligence. Here are some of the most prominent recent hires to the team.
This list was compiled based on public statements, social media profiles and posts, and news reports, and may not be exhaustive. Meta declined to comment on this story.

Zuckerberg's drive to get ahead on AI may be rooted in part in his desire to own a foundational platform for the next major technology wave. Meta lost the race to control the operating systems for the mobile web era in the early 2000s and 2010s, which Apple and Google won. In recent years, he has not been shy about expressing his frustration with having to pay fees to app store operators and comply with their policies. Meta recently partnered with Amazon Web Services on a program to support startups that want to build on its Llama AI model, in an effort to make its technology essential to businesses emerging during the AI boom.

Although AI has benefitted Meta's core advertising business, some analysts question how Zuckerberg's quest for 'superintelligence' will benefit the company. Emarketer senior analyst Minda Smiley said she expects Meta executives to face tough questions during the company's earnings call next week about how its superintelligence ambitions 'align with the company's broader business roadmap.'

'Its attempts to directly compete with the likes of OpenAI … are proving to be more challenging for the company while costing it billions of dollars,' Smiley said.

But as its core business continues to grow rapidly, Meta has the money to spend to build its team and 'steal' from rivals, said CFRA Research analyst Angelo Zino. And, at least for now, investors seem to be here for it — the company's shares have risen around 20% since the start of this year. And if Zuckerberg succeeds with his vision, it could propel Meta far beyond a social media company.

'I think Mark's in a manifest destiny point of his career,' said Zack Kass, an AI consultant and former OpenAI go-to-market lead. 'He always wants to point to Facebook groups as being this way that he is connecting the world … And if he can build superintelligence that cures cancer, he doesn't have to talk about Facebook groups anymore as being his like lasting legacy.'
Yahoo
I Asked ChatGPT What ‘Generational Wealth' Really Means — and How To Start Building It
The term 'generational wealth' gets thrown around a lot these days, but what does it actually mean? And more importantly, how can regular Americans start building it? GOBankingRates asked ChatGPT for a comprehensive breakdown, and its response was both enlightening and surprisingly actionable.

Defining Generational Wealth: ChatGPT's Take

When ChatGPT was asked to define generational wealth, it explained it as 'assets and financial resources that are passed down from one generation to the next, providing ongoing financial stability and opportunities for future family members.' But it went deeper, explaining that true generational wealth isn't just about leaving money behind; it's about creating a financial foundation that can grow and sustain multiple generations.

The AI emphasized that generational wealth is more than just inheritance money. It's about creating a system where each generation can build upon the previous one's success, creating a compounding effect that grows over time. This includes not just financial assets, but also financial knowledge, business relationships and strategic thinking skills.

ChatGPT's Blueprint for Building Generational Wealth

When asked for a practical roadmap, ChatGPT provided a comprehensive strategy broken down into actionable steps.

Start With Financial Education

ChatGPT emphasized that generational wealth begins with financial literacy — not just for yourself, but for your entire family. Here is what it recommended:

- Teach children about money management from an early age.
- Create family financial discussions and goal-setting sessions.
- Ensure all family members understand investment principles.
- Build a culture of financial responsibility.

It stressed that many wealthy families fail to maintain their wealth across generations because they don't adequately prepare their children with the knowledge and mindset needed to manage money effectively.

Build a Diversified Investment Portfolio

ChatGPT recommended a multi-asset approach to wealth building:

- Real estate investments for appreciation and passive income
- Stock market investments through index funds and individual stocks
- Business ownership or equity stakes
- Alternative investments like real estate investment trusts or commodities

It explained that diversification is crucial because different asset classes perform differently in various economic conditions. This approach helps protect wealth from market volatility while providing multiple income streams.

Establish Legal Protection Structures

The AI strongly emphasized the importance of estate planning tools as well. Here are a few it highlighted:

- Wills and trusts to control asset distribution
- Life insurance policies to provide immediate liquidity
- Business succession planning for family enterprises
- Tax optimization strategies to minimize transfer costs

ChatGPT explained that without proper legal structures, wealth can be decimated by taxes, legal disputes or poor decision-making by inexperienced heirs. It stressed that these structures must be created while you're alive and able to make strategic decisions.

Consider Dynasty Trusts

For families with substantial assets, ChatGPT recommended exploring dynasty trusts. It explained these as vehicles that can preserve wealth across multiple generations while providing tax benefits. These trusts can potentially last forever in certain states, creating a truly perpetual wealth-building vehicle.
Overcoming Common Obstacles

ChatGPT identified several barriers to building generational wealth as well. First, it acknowledged that starting from different financial positions affects strategy. Those with limited resources need to focus first on building basic wealth before thinking about generational strategies.

ChatGPT also warned against increasing spending as income grows. The AI suggested automating savings and investments to prevent lifestyle inflation from derailing wealth-building efforts.

It also highlighted the complexity of tax planning for generational wealth, noting that improper planning can result in significant tax penalties that erode wealth transfer. This makes professional guidance particularly important for families with substantial assets, and the cost of professional advice is typically far outweighed by the value created through proper planning.

Starting Small: ChatGPT's Practical First Steps

For those just beginning, ChatGPT provided a few accessible starting points:

- Build an emergency fund (three to six months' worth of expenses).
- Maximize employer 401(k) matching.
- Start a Roth IRA for tax-free growth.
- Purchase adequate life insurance.
- Create a basic will.
- Begin investing in index funds.
- Consider real estate when financially ready.

It emphasized that these steps can be started by anyone, regardless of income level, and that the key is consistency over time.

The Importance of Values and Purpose

One of ChatGPT's most interesting insights was about the importance of instilling values and purpose alongside wealth. The AI explained that families with strong values and a clear sense of purpose are more likely to maintain their wealth across generations. This can include teaching children about responsibility and work ethic and involving family members in charitable activities.

It also noted that generational wealth isn't primarily about the amount you leave behind. It's about creating a financial foundation and knowledge system that empowers future generations to build upon your efforts. The process of building generational wealth requires patience, discipline and strategic thinking, but the AI emphasized that with the right approach, any family can begin building wealth that will benefit generations to come. The key is to start now, stay consistent and always keep the long-term vision in mind.
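As a rough, hedged illustration of the compounding behind that 'start now, stay consistent' advice, here is a short Python sketch. The $500 monthly contribution and 7% average annual return are illustrative assumptions, not figures from the article, and the math ignores taxes, fees and real market volatility.

```python
# Illustrative sketch only: rough future value of steady monthly index-fund investing.
# The 7% average annual return and $500/month figure are assumptions, not advice.

def future_value(monthly: float, years: int, annual_rate: float = 0.07) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of contributions
    return monthly * ((1 + r) ** n - 1) / r

monthly = 500
for years in (10, 20, 30):
    print(f"${monthly}/month for {years} years -> roughly ${future_value(monthly, years):,.0f}")
```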


Tom's Guide
How to spot AI writing — 5 telltale signs to look for
AI writing is everywhere now, flooding social media, websites, and emails—so you're probably encountering it more than you realize. That email you just received, the product review you're reading, or the Reddit post that sounds oddly corporate might all be generated by AI chatbots like ChatGPT, Gemini or Claude. The writing often appears polished, maybe too polished, hitting every point perfectly while maintaining an unnaturally enthusiastic tone throughout. While AI detectors promise to catch machine-generated text, they're often unreliable and miss the subtler signs that reveal when algorithms have done the heavy lifting. You don't need fancy software or expensive tools to spot it. The clues are right there in the writing itself.

There's nothing wrong with using AI to improve your writing. These tools excel at checking grammar, suggesting better word choices, and helping with tone—especially if English isn't your first language. AI can help you brainstorm ideas, overcome writer's block, or polish rough drafts. The key difference is using AI to enhance your own knowledge and voice rather than having it generate everything from scratch. The problems arise when people let AI do all the thinking and just copy-paste whatever it produces without adding their own insights, and that's when you start seeing the telltale signs below.

AI writing tools consistently rely on the same attention-grabbing formulae. You'll see openings like "Have you ever wondered..." "Are you struggling with..." or "What if I told you..." followed by grand promises. This happens because AI models learn from countless blog posts and marketing copy that use these exact patterns. Real people mix it up more; they might jump straight into a story, share a fact, or just start talking about the topic without all the setup. When you spot multiple rhetorical questions bunched together or openings that feel interchangeable across different topics, you're likely reading AI-generated content.

You'll see phrases like "many studies show", "experts agree", or "a recent survey found" without citing actual sources. AI tends to speak in generalities like "a popular app" or "leading industry professionals" instead of naming specific companies or real people. Human writers naturally include concrete details, actual brand names, specific statistics, and references to particular events or experiences they've encountered. When content lacks these specific, verifiable details, it's usually because AI doesn't have access to real, current information or personal experience.

AI writing often sounds impressive at first glance but becomes hollow when you examine it closely. You'll find excessive use of business jargon like "game-changing", "cutting-edge", "revolutionary", and "innovative" scattered throughout without explaining what these terms actually mean. The writing might use sophisticated vocabulary but fail to communicate ideas clearly. A human expert will tell you exactly why one method works better than another, or admit when something is kind of a pain to use. If the content feels like it was written to impress rather than inform, AI likely played a major role.

AI writing maintains an unnaturally consistent, enthusiastic tone throughout entire pieces. Every sentence flows smoothly into the next, problems are always simple to solve, and there's rarely any acknowledgment that things can be complicated or frustrating. Real people get frustrated, go off on tangents, and have strong opinions.
Human writing naturally varies in tone, sometimes confident, sometimes uncertain, occasionally annoyed or conversational. When content sounds relentlessly positive and avoids any controversial takes, you're probably reading AI-generated material.

This is where the lack of real experience shows up most clearly. AI might correctly explain the basics of complex topics, but it often misses the practical complications that anyone who's actually done it knows about. The advice sounds textbook-perfect but lacks the "yeah, but in reality..." insights that make content actually useful. Human experts naturally include caveats, mention common pitfalls, or explain why standard advice doesn't always work in practice. When content presents complex topics as straightforward without acknowledging the messy realities, it's usually because real expertise is missing.

People love to point at em dashes as proof of AI writing, but that's unfair to a perfectly good punctuation mark. Writers have used em dashes for centuries—to add drama, create pauses or insert extra thoughts into sentences. The real issue isn't that AI uses them; it's how AI uses them incorrectly. You'll often see AI throwing in em dashes where a semicolon would work better, or using them to create false drama in boring sentences. Real writers use em dashes purposefully to enhance their meaning, while AI tends to sprinkle them in as a lazy way to make sentences sound more sophisticated. Before you dismiss something as AI-written just because of punctuation, check whether those dashes actually serve a purpose or if they're just there for show.

Now that you've learned the telltale signs for spotting AI-generated writing, why not take a look at our other useful guides? Don't miss "This tool identifies AI-generated images, text and videos — here's how it works" and "You can stop Gemini from training on your data — here's how." And if you want to explore some lesser-known AI models, take a look at "I write about AI for a living — here's my 7 favorite free AI tools to try now."
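For readers who want to experiment, the telltale signs above can be approximated with a few crude text heuristics. The short Python sketch below is an illustrative toy, not a reliable AI detector; the phrase lists are assumptions drawn from the examples in this article.

```python
# Illustrative toy only -- crude pattern counts for the telltale signs described above,
# not a reliable AI detector. The phrase lists are assumptions based on this article.

FORMULAIC_OPENERS = ("have you ever wondered", "are you struggling with", "what if i told you")
VAGUE_SOURCES = ("many studies show", "experts agree", "a recent survey found")
BUZZWORDS = ("game-changing", "cutting-edge", "revolutionary", "innovative")

def telltale_signs(text: str) -> dict:
    """Count a few of the surface-level cues described in the article."""
    t = text.lower()
    return {
        "formulaic_opener": any(t.lstrip().startswith(p) for p in FORMULAIC_OPENERS),
        "vague_sourcing_hits": sum(t.count(p) for p in VAGUE_SOURCES),
        "buzzword_hits": sum(t.count(b) for b in BUZZWORDS),
        "em_dash_count": text.count("—"),
        "question_marks": text.count("?"),
    }

sample = "Have you ever wondered why experts agree this cutting-edge tool is revolutionary?"
print(telltale_signs(sample))
```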