Anthropic takes aim at OpenAI, offers Claude to ‘all three branches of government' for $1


Yahoo · a day ago
Just a week after OpenAI announced it would offer ChatGPT Enterprise to the entire federal executive branch workforce at $1 per year per agency, Anthropic has raised the stakes. The AI giant said Tuesday it would also offer its Claude models to government agencies for just $1 — but not only to the executive branch. Anthropic is targeting 'all three branches' of the U.S. government, including the legislative and judiciary branches.
Anthropic says the package will be available for one year.
The move comes after OpenAI, Anthropic, and Google DeepMind were added to the General Services Administration's list of approved AI vendors that can sell their services to civilian federal agencies. TechCrunch has reached out to Google to see if it plans to respond to Anthropic's and OpenAI's challenges in kind.
Anthropic's escalation — a response to OpenAI's attempt to undercut the competition — is a strategic play meant to broaden the company's foothold in federal AI usage.
'We believe the U.S. public sector should have access to the most advanced AI capabilities to tackle complex challenges, from scientific research to constituent services,' Anthropic said in a statement. 'By combining broad accessibility with uncompromising security standards, we're helping ensure AI serves the public interest.'
Anthropic will offer both Claude for Enterprise and Claude for Government. The latter supports FedRAMP High workloads so that federal workers can use Claude for handling sensitive unclassified work, according to the company.
FedRAMP High is a stringent security baseline within the Federal Risk and Authorization Management Program (FedRAMP) for handling unclassified sensitive government data.
Anthropic will also provide technical support to help agencies integrate AI tools into their workflows, according to the company.
Anthropic, along with OpenAI, xAI, and Google, has been granted up to $200 million by the Department of Defense to leverage AI for national security, but the AI firm clearly hopes to integrate into a broader array of government work, including science research and health services. Anthropic noted in its press release that Claude is already being used at Lawrence Livermore National Laboratory to accelerate scientific discoveries, and also by the District of Columbia Department of Health to help residents access health services in multiple languages.
Anthropic says it is able to make such deployments because Claude 'meets the government's highest security standards.' Aside from being certified for FedRAMP High, customers can access Claude through their existing secure infrastructure via partnerships with AWS, Google Cloud, and Palantir, giving them more control over their data.
Anthropic's multicloud access could give it an edge in the competition with OpenAI, whose current official FedRAMP High offering is tied to Azure Government Cloud only. While Azure is widely adopted in government, some government agencies and security teams might prioritize data sovereignty, infrastructure control, and the operational flexibility a multicloud strategy offers.
OpenAI is, however, actively working to reduce its reliance on Azure so it can embrace a more diversified infrastructure approach.

Related Articles

NBA clears Boston Celtics' $6.1 billion sale to Bill Chisholm

CNN · 19 minutes ago

A group headed by Bill Chisholm is set to take control of the Boston Celtics after the NBA Board of Governors unanimously approved the sale on Wednesday. The NBA wrote in a statement, 'The transaction is expected to close shortly.'

The reported $6.1 billion valuation for the club makes it the second-largest sale price for a US sports franchise, behind the $10 billion valuation for the Los Angeles Lakers when Mark Walter purchased that team in June.

Chisholm and his partners are buying at least 51 percent of the Celtics. The ownership stake will increase in 2028, when, according to the purchase contract, Chisholm's group is scheduled to buy out the remaining minority shareholders at a $7.3 billion valuation.

According to multiple media reports, Chisholm will take over as the Celtics' governor when the sale goes through. Outgoing owner Wyc Grousbeck is expected to serve as alternate governor and remain the CEO through 2028. Grousbeck will cede his role when he no longer holds the required ownership stake of at least 15 percent.

Chisholm, the co-founder and managing partner of the California-based private equity firm STG Partners, is a Massachusetts native and longtime Celtics fan. Grousbeck and the outgoing ownership group, Boston Basketball Partners LLC, purchased the Celtics for $360 million in 2002. During that group's tenure, the club won NBA titles in 2007-08 and 2023-24, the latter representing Boston's league-record 18th championship.

AI in education's potential privacy nightmare

Axios · 20 minutes ago

AI is now firmly entrenched in classrooms, but student privacy rules haven't caught up.

Why it matters: Chatbots can expose troves of personal data in ways few parents, students or teachers fully understand.

The big picture: The 2025-26 school year is shaping up to be one where educators feel they must embrace AI to keep students competitive. Here are three top concerns with classroom AI, according to privacy advocates and AI companies Axios spoke to.

1. Student work could be used to train AI models

AI firms are constantly seeking data to train their models. They're not required to say exactly where they get it, but they do have to say how they're using customer data, especially when they're dealing with students.

Laws like the Family Educational Rights and Privacy Act (FERPA) don't guarantee meaningful protections for students. FERPA was signed into law under President Ford in 1974 and has not been significantly updated since. "The penalty for violating FERPA is that your federal funding is withheld," Elizabeth Laird, director at the Center for Democracy and Technology, told Axios. "And that has been enforced exactly zero times. Literally never."

Most educational AI firms say they're not training models on classroom work. Content submitted by teachers and students is not used to train the foundational AI models that underlie Khan Academy's AI tutor, Khanmigo, the company's chief learning officer, Kristen DiCerbo, told Axios. But training on a diverse set of student data would make the models less biased, DiCerbo said: "There's no easy answer to these things, and it's all trade-offs between different priorities."

Institutions technically could allow student work to be used for AI training, though they're unlikely to do so, several educators told Axios.

Yes, but: Data that's "publicly available" on the web is a different story. Business Insider recently reported on what it described as a list of sites that Anthropic contractors were allowed to scrape, including domains from Harvard, Princeton, Yale, Northwestern and other universities. Funding mandates often require universities to post student research online, meaning more of it is considered freely available data for training AI. An Anthropic spokesperson told Axios that it could not validate the list of sites found by Business Insider because it was created by a third-party vendor without Anthropic's involvement.

2. Off-the-shelf AI tools could expose student data

Many teachers are experimenting with free chatbot tools. Some are from well-known players like OpenAI, Google, Perplexity and Anthropic. Others are from lesser-known startups with questionable privacy policies. In many cases, educators use these apps without district approval or formal guidance.

Accelerating pushes from both big tech and President Trump for school and student adoption of AI have changed the vibe around AI heading into the new academic year, ed tech experts told Axios. "Where in the 2024-2025 school year most schools had the LLM on lockdown through their filter, this year all flowers will bloom," Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools, told Axios.

Products designed for educational use, like ChatGPT Edu, do not train on student data, but some of the consumer-facing free and paid versions of ChatGPT and other chatbots have different policies. "That's where things get tricky," says Melissa Loble, chief academic officer at Instructure, the company behind the learning management system known as Canvas. "If AI tools are used outside our system, the data may not be protected under the school's policies."

Yes, but: Teachers are often the best judges of AI tools for their students. Ed tech is "a bottom-up adoption industry. It grows and thrives on teachers finding tools they like for teaching and learning and then getting districts to adopt," Wincup says.

3. Hacks are an increasing threat

Earlier this year, a breach at PowerSchool, a widely used student information system, exposed sensitive personal data of tens of thousands of students and parents. "When you introduce any new tool, when you collect any new piece of information, you are necessarily introducing increased risk," Laird says. That makes thoughtful planning critical, she added.

If AI tools store or process student data, a breach could expose not just grades and attendance records but also behavioral data, writing samples, and private communications. One way to limit the damage from leaks is to delete data periodically; DiCerbo says Khan Academy deletes chats after 365 days.

Yes, but: The advantage of using chatbots is that they can remember and learn from previous conversations, so some users want to store more information than might be safe.

Between the lines: AI is steamrolling into classrooms and colleges, and privacy is just one item on a long list of concerns these institutions must manage. Khan Academy's DiCerbo says AI adoption is moving faster than anything she's seen in her 20 years working in ed tech. Khan Academy expects to reach a million students with Khanmigo, its AI-powered tutor that launched in 2023. Earlier this year, the California State University system introduced ChatGPT Edu to more than 460,000 students and over 63,000 staff and faculty across its 23 campuses. Google just started offering its AI Pro plan free to students over 18 for a year.

What we're watching: Some ed tech providers are looking beyond OpenAI, Anthropic and Google and using services like AWS and Microsoft's Azure to keep student data separate from the model providers. Brisk Teaching, a classroom AI assistant, uses this approach to mitigate concerns that student data might be used to train new models, even though OpenAI and Google say their education-focused models don't train on user data. Brisk Teaching founder Arman Jaffer told Axios that there's a lot of "lost trust" between schools and the big AI providers. "It's just easier for us to say Google is not touching your data because they could potentially use it to train the next version of their model," he said.

YouTube Music gets another music discovery tool that Spotify doesn't have

Android Authority · 20 minutes ago

TL;DR

  • Google is testing a new Daily Discover feed in YouTube Music.
  • Like the existing weekly Discover mix, the daily feed aims to help you find artists or music similar to what you already listen to.
  • Google only appears to be testing it at the moment, as the feature hasn't rolled out widely.

Although its supremacy is contentious, YouTube Music is easily among the top five music and podcast streaming services globally. Its popularity naturally stems from the fact that it automatically sorts music (and other audio-based media) uploaded to YouTube. While this automatic sourcing already lets you discover more tracks, including renditions, covers, audiobooks, etc., YouTube Music is adding a new Daily Discover feed to make the process much easier.

We recently learned that Google is testing a new Daily Discover option that recommends new tracks every day based on your preferences. The feature, as spotted by Reddit user One_Flow_8127, appears above the 'Trending songs for you' section, which shows up after several scrolls on YouTube Music's homepage in the Android app. Based on the screenshots shared, the recommendations appear in a carousel that can be scrolled left or right. The feature recommends individual tracks instead of entire playlists and tells you why each one is being suggested. If you prefer, a 'Play All' button combines all the recommended tracks into a new playlist.

The primary aim of the feature is to surface new artists and music loosely based on your interests and listening habits. While YouTube Music's biggest competitor, Spotify, also offers discovery features, Spotify primarily focuses on familiar artists and delivers its recommendations as playlists rather than individual tracks.

The Daily Discover option itself isn't new: it was previously spotted by another Reddit user, BarisberatWNR, about a month ago. For that user, however, the recommendations appeared in a different location on the homepage, suggesting YouTube may be testing varied placements to see which gets the most attention. Last month, another user posted in Google's Community forums about the feature appearing and then being removed from their account.

The Daily Discover feature could complement or replace YouTube Music's Discover mix, a playlist refreshed weekly instead of daily. To access that weekly playlist, you must scroll down on YouTube Music's homepage and find it under the 'Mixed for you' tab.

This appears to be a limited test for certain users, as many commenters on the original Reddit post say they have not received the feature. We can't access it either, and it's hard to tell whether this is an A/B test or the start of a wider rollout. The feature also appears to be enabled server-side, so updating the app to a newer version won't turn it on, though there's no harm in doing so.
