Latest news with #GPT-4.1

The best use cases for each ChatGPT model

Android Authority

2 days ago

While OpenAI's GPT models have existed in various forms for years, ChatGPT's true mainstream success began with its public launch in late 2022. Since then, ChatGPT has evolved significantly, both for better and worse. Although the tool is now more useful than ever before, it's also become somewhat confusing. Depending on your subscription level, you might have up to eight different models to choose from, making it tricky to identify which is best suited for your task.

As someone who has been a ChatGPT Plus user since subscriptions first became available, I rely on ChatGPT frequently. Sometimes it's for brainstorming, proofreading, personal organizing, or other productive activities. Other times, it's purely for entertainment, such as creating alternate timelines or pondering random philosophical ideas. Setting aside the fact that I clearly need more friends, these interactions have given me ample experience with which model works best in various situations. The truth is, there isn't one perfect use case for each ChatGPT model, as many overlap. Still, let's take a closer look at the models currently available and the ideal scenarios for each.

GPT-4o is great for generalist tasks, especially for free users

Best for: General-purpose tasks, including editing, questions, and brainstorming
Availability: Free or higher

ChatGPT defaults to GPT-4o for a good reason: it's a solid generalist. This multimodal model can process and analyze text, images, audio, and even video, making GPT-4o ideal for a wide range of tasks, including:

  • Composing emails
  • Basic brainstorming and creative content
  • Summarizing text
  • Basic editing and proofreading
  • Simple questions

Those are some of the official use cases, but your imagination is the true limit. Personally, I've used GPT-4o extensively for my creative writing projects. It's also been my go-to for:

  • Creating alternate timelines and similar role-playing scenarios
  • Fetching general information, such as gardening tips and simple queries
  • Performing straightforward edits and summarization

Although I'm not a coder, I've heard many people successfully use GPT-4o for basic coding projects, thanks to its looser usage limits. That said, the newer GPT-4.1 is generally a much better choice for coding tasks, as we'll discuss shortly.

Overall, GPT-4o is a reliable tool for just about anything, but in my experience, it becomes more prone to hallucinations as queries grow more complex. For straightforward requests with clear outcomes, GPT-4o works very well, but it struggles significantly with genuine reasoning and complex logic, making occasional errors more likely. For example, while working on an alternate timeline about Rome, GPT-4o mistakenly pulled information from a previous, unrelated timeline project I created months earlier involving a divergent North America. Despite obvious differences in divergence points, nations, and events, GPT-4o sometimes couldn't keep those separate contexts straight.

The key takeaway is that you should always verify any ChatGPT response independently, but this is especially important with GPT-4o, at least in my experience. Additionally, free users are limited to 10 messages every three hours, while paid Plus subscribers get an increased limit of 80 messages every three hours.
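If those chat caps are a concern, GPT-4o is also reachable outside the ChatGPT app through OpenAI's API, which bills per token rather than per message. The snippet below is a minimal, illustrative sketch of a multimodal request using the official openai Python package; the prompt and image URL are placeholders, and it assumes an OPENAI_API_KEY environment variable is set.

```python
# Minimal sketch: asking GPT-4o to describe an image via OpenAI's chat completions API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what this image shows in two sentences."},
                # Placeholder URL for illustration; replace with a real, publicly accessible image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```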
GPT-4.1: Great for coding and a better generalist for Plus, Pro, and Team members

Best for: Coding and detailed generalist tasks that require greater accuracy
Availability: Plus or higher

While GPT-4o remains the default, those with paid subscriptions might consider the newer GPT-4.1 as their daily driver instead. Initially accessible only via third-party software or OpenAI's API, GPT-4.1 is now fully integrated into ChatGPT for users with a Plus subscription or higher. The improved intelligence and speed of GPT-4.1 mean it can handle all the scenarios listed previously under GPT-4o, with notable enhancements. Other advantages include:

  • It's a great option for coders looking for a balance between speed, accuracy, efficiency, and cost-effectiveness.
  • Significantly better performance than GPT-4o for detailed proofreading, editing, and brainstorming on slightly more complex topics.
  • Clearer and faster responses, reducing the need for extensive back-and-forth corrections.

The primary downside of GPT-4.1 compared to GPT-4o is its tighter usage restriction, capped at 40 messages every three hours for Plus users. Still, this limit is likely sufficient for most users, aside from particularly extensive projects. In my personal and entertainment projects, I've occasionally reached the cap, but in those cases, I simply switch back to GPT-4o to complete the job.

GPT-4.1 shares the same multimodal capabilities as GPT-4o, but delivers clear improvements across the board. According to OpenAI's official metrics, the new model offers:

  • 21.4% higher coding accuracy: GPT-4.1 scores 54.6% versus GPT-4o's 33.2%.
  • 10.5% improvement in instruction-following accuracy: GPT-4.1 achieves 38.3% compared to GPT-4o's 27.8%.
  • 6.7% better accuracy for long-context tasks: GPT-4.1 scores 72% versus GPT-4o's 65.3%.

As of this writing, GPT-4.1 has only been available to Plus users for about a week, so I haven't fully explored every scenario. However, my initial experiences indicate that GPT-4.1 hallucinates far less often and stays on topic more consistently. Unlike GPT-4o, it doesn't randomly blend ideas from previous projects, a frequent issue I encountered with alternate timelines. Additionally, GPT-4.1 follows instructions more carefully and refrains from improvising unnecessarily, a tendency I've noticed in other models.

OpenAI o1 Pro Mode: Powerful and precise, but best for specialized business tasks

Best for: Complex business and coding tasks demanding exceptional detail and accuracy
Availability: Pro or higher

As you might guess, OpenAI's o1 Pro Mode requires an expensive Pro membership and therefore targets companies, independent professionals, or freelancers who handle specialized business and enterprise tasks. Although there's no firm cap, sustained, intensive use can temporarily restrict your access. For example, according to user Shingwun on Reddit, sending more than around 200 messages during a workday can quickly trigger temporary restrictions. Potential use cases for o1 Pro Mode include:

  • Drafting highly detailed risk-analysis reports or internal memos.
  • Creating multi-page research summaries.
  • Developing sophisticated algorithms tailored to specific business requirements.
  • Building specialized applications or plug-ins.
  • Parsing complex STEM topics directly from detailed research papers.

These represent just a few possible applications, but ultimately, this model is designed for extremely complex tasks. For everyday programming assistance or quicker queries, there are honestly faster and more suitable tools. Due to its advanced reasoning capabilities, o1 Pro Mode typically takes more time per response, which can become a significant bottleneck, even though the end results are often worth the wait.

o3 is great for general business productivity and beyond

Best for: Business productivity and Plus-level tasks that need advanced reasoning
Availability: Plus or higher

If you're working on a complex, multi-step project, you'll find that models like GPT-4o are more prone to producing responses riddled with logic errors or outright hallucinations. While such mistakes can occur with any AI, o3 is specifically designed with advanced reasoning in mind, making it typically better suited for tasks such as:

  • Risk analysis reports and similarly detailed documents.
  • Analyzing existing content more deeply and objectively, compared to the overly positive responses typical of other models.
  • Drafting strategic business outlines based on competitor and internal data.
  • Providing more thorough explanations for concepts related to math, science, and coding than GPT-4o or GPT-4.1.

Personally, I often use o3 for deeper analysis of both my personal and professional projects. I've found it particularly helpful as a tool for working through my own thoughts and ideas. While I would never fully entrust an AI to serve as a genuine advisor, o3 is valuable when you want to explore or develop an idea with AI assistance. Just be sure to verify any conclusions or ideas you reach with outside sources and additional scrutiny. For example, I've used o3 to help refine my own ethical and philosophical viewpoints, but I always confirm these ideas by consulting both online resources and real people. Remember, AI models are very good at providing logical-sounding answers, but they can also mislead, exaggerate, or even unintentionally gaslight you. Exercise caution when using o3 in this manner.

It's also important to recognize o3's other limitations. First, because o3 prioritizes reasoning, responses are typically slower compared to some of the other models. Additionally, Plus, Team, or Enterprise subscribers are limited to just 100 messages per week. Depending on your project's complexity, this could be sufficient, but it also means you'll need to be more selective about when to use this model. Pro-level accounts, however, enjoy unlimited access to o3.

Lastly, although OpenAI promotes o3 as ideal for advanced coding tasks, my research across Reddit and other online communities suggests a different perspective. The consensus seems to be that while o3 excels at very specific coding scenarios, it can also be prone to hallucination unless prompts are crafted carefully. Most coders find GPT-4.1 to be a generally better fit for typical coding tasks.
GPT-4o mini and GPT-4.1 mini: Best for API users or when you hit usage limits

Best for: API users, or anyone needing a backup when other model limits are reached
Availability: Free or higher

I'm grouping these two models together, as they're even more similar to each other than GPT-4o and GPT-4.1. According to OpenAI, GPT-4o mini is best suited for fast technical tasks, such as:

  • Quick STEM-related queries
  • Programming
  • Visual reasoning

In reality, while it performs well enough for these cases, its limitations can become apparent for anyone doing intensive coding or using the model daily. Even though the 300-message-per-day limit sounds generous, it really depends on your workflow and the size of your projects. Ultimately, GPT-4o mini works well as a backup if you hit message caps on other models, but I think its best use case is actually outside of ChatGPT, as a cost-effective choice for API users running larger projects.

As for GPT-4.1 mini: this newer model is the default fallback for all ChatGPT users (replacing GPT-4o mini), though you'll still have access to both on Plus or higher tiers. One big change is that 4.1 mini also supports free accounts, so you're not restricted by payment tier. GPT-4.1 mini works much like GPT-4o mini but with better coding ability and improved overall performance. It's a useful fallback when you max out your limit on other models, but in my opinion, both mini variants still shine brightest as affordable, lower-power options for API-based projects rather than as your main engine for regular ChatGPT queries. Still, 4.1 mini is gradually rolling out to all free users and will automatically be selected if you hit the GPT-4o cap.

o4-mini-high: Best as a backup for o3 and for faster reasoning

Best for: Faster reasoning than o3, and as a backup
Availability: Plus or higher

o4-mini-high (which replaced o3-mini-high) used to be a favorite among those looking for less restrictive coding and more flexibility for unique projects. The current version doesn't have quite the same reputation for coding, but it still has a few official OpenAI use cases:

  • Solving complex math equations with full step-by-step breakdowns, great for homework and learning
  • Drafting SQL queries for data extraction and database work
  • Explaining scientific concepts in clear, accessible language

Based on my experience and what I've read in community forums, the best way to use o4-mini-high is as a backup: when you run out of credits or hit your message cap on o3, o4-mini-high offers a similar experience, though it's not quite as robust. This model is limited to 100 messages per day for Plus, Team, and Enterprise users, while Pro users get unlimited access.

GPT-4.5: Powerful generalist, but best for refinement or high-value queries

Best for: Final refinement, editing, or as a premium alternative to GPT-4.1
Availability: Plus or higher

GPT-4.5 is arguably the most powerful generalist model available, offering a noticeable leap over GPT-4.1 and GPT-4o in many scenarios. However, its strict usage limits mean you'll want to be selective. While GPT-4.5 used to allow 50 messages per week, Plus users are now limited to just 20 weekly messages.
Pro users also have a cap, but OpenAI hasn't published exact numbers. From what I've seen, most people don't reach the Pro limit easily, but if you're passionate about using GPT-4.5, you'll need to spring for the $200/month Pro tier. For more casual users like me, that's a pretty tough sell.

So, what do I mean by refinement? Essentially, I like to use GPT-4o or GPT-4.1 to rough out a project and get it where I want it, then bring in GPT-4.5 for the final polish. For instance, when working on an alternate history timeline for a fiction series, I used GPT-4.1 for the main draft, then uploaded the result to GPT-4.5 to help refine the language and catch any logic gaps. The finished product was much tighter, and I only had to use a few of my 20 weekly messages. Whether it's for last-step editing, advanced review, or double-checking a critical project, GPT-4.5 excels as a finishing tool. Just keep in mind that it's not practical for multi-step, back-and-forth work unless you're on the Pro plan.

My favorite workflow: Mixing models for the best results

While GPT-4.5 is my go-to for final refinement, I actually hop between models quite a bit depending on the project. The web version of ChatGPT makes it easy to switch models mid-conversation (even if you sometimes need to re-explain the context). For creative projects, I usually start with GPT-4.1 for drafting, then jump to o3 if I need deeper reasoning or want to double-check my thinking. After narrowing things down further in GPT-4.1, I'll finish the project in GPT-4.5 for a final pass. This model dance helps catch mistakes, uncover new ideas, and produce cleaner, more reliable results.

Ultimately, there's no one 'right' combination for everyone. You'll want to experiment with the models to find a workflow that fits your needs. For example, programmers might use a cheaper model like GPT-4.1 for initial coding, then switch to o1 Pro Mode for an advanced review of their work. Writers and researchers might prefer to blend o3's reasoning with GPT-4.5's editing finesse. How do you cross-utilize the different models? Maybe you have a hot take to share in the comments that I haven't considered.
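For anyone who wants to reproduce this kind of model mixing outside the ChatGPT interface, the same draft-review-polish loop can be scripted against OpenAI's API. The sketch below is purely illustrative: the model identifiers ("gpt-4.1", "o3", and "gpt-4.5-preview") and the prompts are assumptions that may not match what your API account can access, and it expects the openai Python package with an OPENAI_API_KEY set in the environment.

```python
# Hypothetical three-stage "model dance": draft with a fast generalist, critique with a
# reasoning model, polish with the premium model. Model names are assumptions; check
# which models your OpenAI API account can actually access before running this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the given model and return the text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


topic = "a short alternate-history timeline where the printing press arrives a century early"

draft = ask("gpt-4.1", f"Write a first draft of {topic}.")
critique = ask("o3", "List any logical gaps or inconsistencies in this draft:\n\n" + draft)
final = ask(
    "gpt-4.5-preview",
    "Revise the draft below, fixing the listed issues. Return only the revised text.\n\n"
    f"DRAFT:\n{draft}\n\nISSUES:\n{critique}",
)

print(final)
```

In the ChatGPT app, the same hand-offs happen manually through the model picker, as described above.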

After string of outages, Elon Musk says he's returning to 24/7 grind to fix X: ‘Will sleep in server rooms'

Mint

6 days ago

Billionaire Elon Musk has said he is returning to a 24/7 work schedule to fix issues at X, and will be 'sleeping in conference/server/factory rooms.' The social media platform X (formerly Twitter) has faced frequent outages since Musk's takeover, which included cutting almost 80% of its staff and making a slew of other changes. However, outages have grown even more frequent in recent days, with the platform experiencing at least three major disruptions last week.

After the latest outages, Musk, who had recently shifted focus to helping Donald Trump win the US election and later to cutting US federal expenses via DOGE, now says he will be 'super focused' on working at X, xAI, and Tesla, while also making major 'operational improvements.'

In reply to a post on X, Musk wrote, 'Back to spending 24/7 at work and sleeping in conference/server/factory rooms.' The world's richest man added, 'I must be super focused on 𝕏/xAI and Tesla (plus Starship launch next week), as we have critical technologies rolling out. As evidenced by the 𝕏 uptime issues this week, major operational improvements need to be made. The failover redundancy should have worked, but did not.'

Notably, Musk had promised to release the Grok 3.5 update to xAI's paid subscribers last month, but there has been no update since. Grok last received a major update in February, when Musk and his team hosted a live session to introduce reasoning and Deep Search features to the chatbot, alongside their latest frontier model. Since then, xAI appears to be falling behind in the AI race. OpenAI has released a native image generation feature, along with its o3 and o4-mini reasoning models, GPT-4.1, and the Codex AI agent. Meanwhile, Google unveiled a suite of new AI capabilities in Gemini during its I/O 2025 developer conference.

Claude 4 Sonnet, Opus AI models released with enhanced coding capabilities

Business Standard

23-05-2025

Anthropic has announced the launch of its new Claude 4 Sonnet and Claude 4 Opus AI models, now available through Claude's website and via its Application Programming Interface (API). According to the company's official blog post, both models bring significant improvements in coding abilities, with Claude 4 Opus in particular targeting state-of-the-art performance across a variety of AI benchmarks.

The company stated that Claude Sonnet 4 outperforms its predecessor, Sonnet 3.7, while Claude Opus 4 is said to match or exceed the capabilities of competing large language models like OpenAI's o3, GPT-4.1, and Google's Gemini 2.5 Pro. These results are based on benchmarks that test areas such as multilingual proficiency, agentic tool use, autonomous coding via terminal interfaces, and graduate-level reasoning. The announcement comes just weeks after both OpenAI and Google released updated AI models with enhanced coding capabilities. OpenAI debuted its o3 and GPT-4.1, while Google launched Gemini 2.5 Pro, which drew attention after reportedly completing a full playthrough of Pokémon Blue.

New capabilities and Claude Code

Anthropic has also announced some new capabilities. Here are the details:

  • Extended reasoning with tools (beta): Claude models can now use tools like web search while engaging in deeper reasoning. This lets them switch between thinking and tool use to generate better results.
  • Improved model abilities: Claude 4 Sonnet and Claude 4 Opus can now run tools simultaneously, follow instructions with greater accuracy, and, when granted access to local files, show enhanced memory, picking up and storing key information for better context retention and learning over time.
  • Claude Code is now widely available: Claude Code is now open for broader use. It supports background processes through GitHub Actions and integrates directly with development tools like VS Code and JetBrains, offering in-line edits for smoother collaborative coding.

Claude 4 Sonnet and Opus: Availability

Anthropic states that both models are accessible via the Anthropic API, as well as through partners like Amazon Bedrock and Google Cloud's Vertex AI. Pricing for Opus 4 is set at $15 per million input tokens and $75 per million output tokens, while Sonnet 4 is priced at $3 for input and $15 for output per million tokens.
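To put those per-token rates in perspective, here is a small, self-contained Python sketch that estimates the cost of a hypothetical daily workload at the prices quoted above; the token counts are invented for illustration and are not benchmarks or official figures.

```python
# Rough cost estimate using the per-million-token prices quoted above (USD).
# The workload numbers below are hypothetical and only meant to illustrate the math.
PRICES = {
    "Claude Opus 4": {"input": 15.00, "output": 75.00},   # $ per 1M tokens
    "Claude Sonnet 4": {"input": 3.00, "output": 15.00},  # $ per 1M tokens
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for the given token usage on one model."""
    rates = PRICES[model]
    return (input_tokens / 1_000_000) * rates["input"] + (output_tokens / 1_000_000) * rates["output"]


# Example: 2 million input tokens and 500,000 output tokens in a day.
for name in PRICES:
    print(f"{name}: ${estimate_cost(name, 2_000_000, 500_000):.2f} per day")
```

At those published rates, Opus 4 works out to five times the price of Sonnet 4 for identical traffic, since both its input and output prices are exactly 5x Sonnet 4's.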

Anthropic launches Claude Opus 4: AI Model includes 7-hour memory, Amnesia fixes — Is it better than OpenAI's GPT-4.1?

Time of India

22-05-2025

In a move poised to reshape the artificial intelligence landscape, Anthropic has launched Claude Opus 4, its most advanced AI model to date. The announcement, made on Thursday, also included the unveiling of Claude Sonnet 4, forming part of the company's next-generation Claude 4 family. With the ability to autonomously perform complex tasks over extended periods, the Claude 4 models set a fresh benchmark for AI capabilities in both enterprise and creative applications.

Claude Opus 4 Redefines AI Performance

According to the company, Claude Opus 4 demonstrated the ability to autonomously work on an open-source codebase refactoring project for nearly seven hours at Rakuten, an unprecedented feat in the field of AI. The performance represents a significant shift, transforming AI from a reactive assistant into a proactive collaborator, capable of maintaining task continuity throughout an entire workday. Anthropic claims Claude Opus 4 surpassed OpenAI's GPT-4.1 in key benchmarks. Notably, Opus 4 scored 72.5% on SWE-bench, a challenging software engineering test, compared to GPT-4.1's 54.6%, according to the company's internal reports.

A Paradigm Shift Towards Reasoning-Centric AI

With AI usage expanding across industries, 2025 has seen a marked shift toward models built on reasoning capabilities rather than pattern recognition. The Claude 4 models lead this new wave by incorporating research, reasoning, and tool use into a seamless decision-making loop. Unlike prior AI systems that required inputs to be fully processed before analysis, Claude Opus 4 can pause mid-task, seek out new information, and adjust its course, mirroring human cognitive behavior more closely than ever before. Anthropic's dual-mode architecture balances speed and depth: basic queries are handled with minimal delay, while complex problems benefit from extended processing time. This hybrid capability addresses long-standing friction in AI usage.

Memory and Continuity: Solving the 'Amnesia' Problem

One of the standout features of the Claude 4 architecture is memory persistence. When granted permissions, the model can extract relevant data from files, summarize documents, and retain this context across user sessions. This advancement resolves what has historically been termed the "amnesia problem" in generative AI, where models failed to maintain continuity over long-term projects. These structured memory functions allow Claude Opus 4 to gradually build domain expertise, enhancing its utility in legal research, software development, and enterprise knowledge management.

Competitive Landscape Heats Up

Anthropic's latest launch comes just weeks after OpenAI released GPT-4.1 and amid similar announcements from Google and Meta. While Google's Gemini 2.5 focuses on multimodal interaction and Meta's LLaMA 4 emphasizes long-context capabilities, Claude Opus 4 distinguishes itself in professional-grade coding, autonomous task completion, and long-duration performance. The rivalry between these AI labs reflects a marketplace in flux. Each company is staking out unique technological territory, forcing enterprise users to weigh specializations over one-size-fits-all solutions.

Enterprise Integration and Revenue Surge

Anthropic has expanded Claude's utility through tools like Claude Code, now integrated with GitHub Actions, VS Code, and JetBrains. Developers can view suggested edits in real time, allowing for deeper collaboration between human coders and AI agents. Notably, GitHub has chosen Claude Sonnet 4 as the default engine for its next-generation coding agent, a decision that underscores confidence in the Claude 4 series' reliability and depth. Anthropic also confirmed that its annualized revenue reached USD 2 billion in Q1 2025, doubling from the previous quarter. The firm recently secured a USD 2.5 billion credit line, further strengthening its financial position in the AI arms race.

FAQs

What is Claude Opus 4?
Claude Opus 4 is Anthropic's most advanced AI model to date, capable of long-duration autonomous task completion. It's part of the new Claude 4 family, alongside Claude Sonnet 4, and is designed for enterprise-grade reasoning, coding, and creative applications.

What sets Claude Opus 4 apart from previous models?
Claude Opus 4 introduces memory persistence, allowing it to retain context across sessions and solving the so-called 'amnesia problem.' It also autonomously worked for nearly seven hours on a complex coding project, demonstrating an unprecedented level of continuity and cognitive-like behavior.

Google is readying its AI Mode search tool for primetime, whether you like it or not

Yahoo

20-05-2025

It sure looks like Google is prepping its controversial AI Mode for primetime. This week, some Google users noticed an AI Mode button showing up instead of Google's iconic "I'm Feeling Lucky" button on the homepage. And today, a Mashable reporter spotted "AI Mode" appearing as an option on search results pages, alongside stalwart Google tools like News, Shopping, Images, and Videos. Notably, this reporter did not proactively sign up to participate in AI Mode through Google Labs, which suggests Google is testing the feature for select users and that a widespread release of its AI-powered search tool is coming soon. Maybe at Google I/O next Tuesday? Google has been testing AI search features ever since OpenAI and ChatGPT started siphoning away searchers, particularly younger ones.

And that's just one of many new developments from Gemini-land. Like pretty much every other week, a lot happened in AI news this week. So, we've rounded up the biggest stories and most important AI developments in products, business, politics, and... Catholicism. Here's our recap of AI news this week.

xAI's Grok chatbot went off the rails this week, responding to X users with completely unprompted musings about "white genocide" in South Africa. The company said it was due to an "unauthorized modification" and promised to do better next time. Coincidentally, xAI leader and Grok power user Elon Musk has been repeatedly tweeting about the subject. Even OpenAI CEO Sam Altman joined the ongoing pile-on on X.

In OpenAI's world, the company brought GPT-4.1 to ChatGPT "by popular request." Initially, it was only available through the API. Now it's available to ChatGPT Plus, Pro, and Team users, with Enterprise and Edu access rolling out soon and GPT-4.1 mini coming to free users. On Friday, OpenAI also launched a preview version of Codex, a coding agent for engineers, described as "a version of OpenAI o3 optimized for software engineering." That's rolling out to ChatGPT Pro, Enterprise, and Team subscribers.

Google held a pre-I/O event for Android news. The main takeaway is that Google is bringing Gemini to Android-powered smartwatches, cars, and TVs.

There's probably no better fit for image-to-video generation than TikTok, and it has released a new feature that does exactly that. It's called AI Alive, and Mashable's CJ Silva says it's pretty realistic.

Last but not least, prepare to hear a lot more AI-generated narration with your Audible books. Audible's parent company Amazon announced this week that it has partnered with publishers to "expand [its] catalog with AI narration."

This was also a big week for artificial intelligence in politics and foreign affairs. OpenAI is reportedly already making moves on its global AI infrastructure plans. Bloomberg reports that it is "considering building new data center capacity in the United Arab Emirates." Meanwhile, OpenAI CEO Sam Altman and other tech billionaires joined President Donald Trump in Saudi Arabia for a visit with Crown Prince Mohammed bin Salman, who launched a new AI company called Humain. While business schmoozing went down in the Middle East, Bloomberg also reported that OpenAI's Stargate Project to build AI infrastructure in the U.S. has run into roadblocks. Plans have reportedly been held up by Japanese investor SoftBank over tariff-related concerns.
On top of that, Microsoft and OpenAI are reportedly renegotiating the terms of their partnership as OpenAI tries to restructure its for-profit business into a Public Benefit Corporation (PBC) that would still be governed by its nonprofit board, according to the Financial Times. OpenAI needs to keep Microsoft, which has invested $13 billion, happy, but their increasingly competing interests have reportedly created tension between the companies.

In the public sector, House Republicans proposed a ten-year moratorium on states introducing their own AI regulations, 404 Media reports. The language was nestled in the Budget Reconciliation bill. Don't Republicans like states' rights? We're confused too.

Speaking of AI regulation under the Trump administration, the U.S. Copyright Office published a "pre-publication version" of part three of its highly anticipated AI copyright report last week, which generally favored copyright holders over AI companies claiming fair use. The very next day, Trump fired Copyright Office head Shira Perlmutter. That hasn't stopped plaintiffs in the Kadrey v. Meta case from using the report as a weapon against Meta, as Mashable first reported.

Finally, Pope Leo XIV said AI posed "new challenges for humanity" in his first address to the College of Cardinals, and his name choice pays tribute to Pope Leo XIII, who presided over the Catholic Church during the Industrial Revolution and advocated for workers' rights and social reform:

"Sensing myself called to continue in this same path, I chose to take the name Leo XIV. There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution. In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labour."

Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
