
Latest news on Amazon.com-backed Anthropic

Anthropic offers AI chatbot Claude to U.S. government for $1

The Hindu

2 days ago

  • Business
  • The Hindu

Anthropic offers AI chatbot Claude to U.S. government for $1

Anthropic said on Tuesday it will offer its Claude AI model to the U.S. government for $1, joining a growing list of artificial intelligence startups proposing lucrative deals to win federal contracts. This comes days after OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude were added to the government's list of approved AI vendors. "America's AI leadership requires that our government institutions have access to the most capable, secure AI tools available," CEO Dario Amodei said. Rival OpenAI had announced a similar offer last week, wherein ChatGPT Enterprise was made available to participating U.S. federal agencies for $1 per agency for the next year.

Anthropic CEO says proposed 10-year ban on state AI regulation 'too blunt' in NYT op-ed

The Hindu

06-06-2025

  • Business
  • The Hindu

Anthropic CEO says proposed 10-year ban on state AI regulation 'too blunt' in NYT op-ed

A Republican proposal to block states from regulating artificial intelligence for 10 years is "too blunt," Anthropic Chief Executive Officer Dario Amodei wrote in a New York Times opinion piece. Amodei instead called for the White House and Congress to work together on a federal transparency standard for AI companies, so that emerging risks are made clear to the public.

"A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast," Amodei said. "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds - no ability for states to act, and no national policy as a backstop."

The proposal, included in President Donald Trump's tax cut bill, aims to preempt AI laws and regulations passed recently in dozens of states, but has drawn opposition from a bipartisan group of attorneys general from states that have regulated high-risk uses of the technology.

Instead, a national standard would require developers working on powerful models to adopt policies for testing and evaluating their models and to publicly disclose how they plan to test for and mitigate national security and other risks, according to Amodei's opinion piece. Such a policy, if adopted, would also mean developers would have to be upfront about the steps they took to make sure their models were safe before releasing them to the public, he said.

Amodei said Anthropic already releases such information, and that competitors OpenAI and Google DeepMind have adopted similar policies. Legislative incentives to ensure that these companies keep disclosing such details could become necessary, he argued, as the corporate incentive to provide this level of transparency might change as models become more powerful.
