Latest news with #ClaudeAI


Forbes
a day ago
- Business
- Forbes
What Does AI Fluency Look Like In Your Company?
When generative AI first entered the mainstream, it created a wave of excitement across the world. Entrepreneurs saw it as a way to unlock productivity, streamline operations, and speed up decision-making. But as with most technologies, the initial excitement has been replaced by a more sober reality. Using AI is not the same as using it well.

Many founders have learned this the hard way. What starts as an experiment to save time often turns into extra work. Teams spend hours rewriting AI-generated content or double-checking outputs for errors. These problems stem from a larger issue: most users simply are not fluent in how to interact with AI effectively.

That is what makes a new course from Anthropic especially relevant right now. Anthropic, the AI company behind the Claude AI chatbot, has launched a free online course titled AI Fluency: Frameworks and Foundations. Unlike the countless AI prompt guides floating around online, this one is structured like a university-level program. It is built on a formal academic framework, created in partnership with Professor Rick Dakan from Ringling College of Art and Design and Professor Joseph Feller from University College Cork. The program is also supported by the Higher Education Authority of Ireland.

More than just informative, this course offers a practical roadmap for working with AI in a professional context. For entrepreneurs looking to use AI more strategically, it offers more than just knowledge. It comes with a certificate of completion, which, in today's job market, is a smart credential to add to your resume. It shows potential employers or investors that you understand not just how AI works, but how to apply it in a thoughtful, results-driven way.

In a startup or growing company, time and budget are always under pressure. When AI is used without guidance or structure, it can waste both. Founders often try to use AI to build marketing strategies or write business plans, only to get bland results that need significant editing. Even worse, teams might deploy AI-generated content that misrepresents the brand or includes factual errors that damage credibility. These issues are not the fault of the technology itself. They point to a lack of structure in how people are taught to use it.

That is the gap this course is trying to close. AI tools are everywhere, but the skill of using them properly is still rare. Most people are left to figure things out on their own, which leads to inconsistent results and missed opportunities. The AI Fluency course proposes a framework that the creators claim develops four core skills: Delegation, Description, Discernment, and Diligence. These are the building blocks of what the course calls AI fluency. Here's how the framework works in practice.

Delegation in this context means making smart decisions about when to bring AI into the process. It begins by asking what the real goal is and whether AI can actually help achieve it. For example, you may not want to ask AI to define your company's mission or values. That likely requires deep personal insight. But you could absolutely use it to gather summaries of competitor activity or synthesize customer reviews into a digestible report. This skill ensures that AI is used with intention rather than by habit.

Most people know that AI needs prompts, but few know how to craft them well. Description is about giving AI clear, structured input so it can return exactly what you want. That means specifying the tone, the style, the format, and even the point of view. If you were asking AI to help with a pitch deck, you wouldn't just type 'make a pitch deck.' You would explain that it's for a Series A round, for a logistics-focused SaaS company, and that it should be written in the voice of a CFO. You would outline the ten slides you need and how the financial projections should be formatted. That kind of precision can turn AI from a basic assistant into a capable contributor.
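To make the Description skill concrete, here is a minimal sketch of the structured pitch-deck prompt described above, using Anthropic's Python SDK. The model ID, the prompt wording, and the surrounding code are illustrative assumptions, not material from the course.

```python
import anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic()

# Vague prompt: tends to produce generic output that needs heavy editing.
vague = "Make a pitch deck."

# Structured prompt: specifies audience, voice, format and scope up front,
# as the Description skill recommends.
structured = (
    "Draft the outline of a 10-slide Series A pitch deck for a "
    "logistics-focused SaaS company, written in the voice of a CFO. "
    "For each slide, give a title and two or three bullet points, and "
    "present the financial projections as a simple year-by-year table."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": structured}],
)
print(response.content[0].text)
```

The structured version front-loads the context a human contributor would otherwise have to ask for, which is the essence of what the course calls Description.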
One of the biggest risks with AI is assuming the output is correct just because it sounds convincing. Discernment is the ability to review what AI produces with a thoughtful, critical eye. You need to check for logic, consistency, and accuracy. Did the AI ignore an important part of the prompt? Did it invent something that seems plausible but isn't? This skill mirrors how managers review human work. You don't just look at the final product. You ask how the conclusions were reached and whether they align with your standards. That habit is just as important when dealing with AI.

Even when AI does most of the work, final responsibility lies with the human. Diligence means carefully reviewing everything before it is shared, especially with clients, stakeholders, or investors. It also means being upfront about when and how AI was used. If you use AI to help write a board report, you need to be confident in every sentence. You are accountable for the end result, and this step protects both your credibility and your organization's reputation. Diligence also plays a role in choosing the right tools and being thoughtful about how they fit into your workflow.

The partnership behind this course is also worth noting. A leading AI lab, two established professors, and a national government agency came together to create a program that is accessible, credible, and relevant. That is rare in the current AI landscape, where most training options come from consultants or influencers with little oversight. For Anthropic, helping users become more capable with its models leads to better long-term adoption. For the Higher Education Authority of Ireland, supporting this program positions the country as a leader in forward-looking digital education. And for the learners, the certificate adds immediate value to their careers.

If you are still experimenting with AI casually, it is time to shift your approach. The businesses that thrive in the years ahead will be those that integrate AI not just as a tool but as a core part of their strategy. This course is not just an educational opportunity. It is a professional signal. By mastering the skills outlined in the 4D framework, business leaders can turn AI into a consistent, reliable engine for productivity and insight. The phase of casual experimentation is over. AI fluency is no longer optional. It is the next essential business skill, and the entrepreneurs who take it seriously now will be the ones who lead the field tomorrow.


CNET
2 days ago
- Business
- CNET
Anthropic's Claude: What You Need to Know About This AI Tool
Claude AI is an artificial intelligence model that can act as a chatbot and an AI assistant, much like ChatGPT and Google's Gemini. Named after Claude E. Shannon, sometimes referred to as the "father of information theory," Claude was designed to assist with writing, coding, customer support and information retrieval.

Claude was developed by Anthropic, a San Francisco-based company founded in 2021 by former OpenAI employees and focused on AI safety and research. Dario Amodei, the co-founder and CEO of Anthropic, had served as vice president of research at OpenAI. His sister, Daniela Amodei, serves as Anthropic's president.

Anthropic has drawn significant investment from prominent tech players. Since 2023, Amazon has invested $8 billion in the company. As part of the agreement, Anthropic has committed to using Amazon Web Services as its primary cloud provider and making its AI models accessible to AWS customers. The 2024 deal includes plans to expand the use of Amazon's AI chips for training and running Anthropic's large language models. Google initially invested $500 million and plans to invest another $1.5 billion in the future.

So what can Claude do now? Here's everything you need to know, including the models, plans, pricing and latest updates.

How Claude AI works

Claude AI is a versatile tool capable of answering questions, generating creative content like stories and poems, translating languages, transcribing and analyzing images, writing code, summarizing text and engaging people in natural, interactive conversations. It is available on desktop via web browsers and through iOS and Android apps. Claude uses large language models trained on a massive dataset of text and code to understand and generate human-like language.

Until recently, and unlike other chatbots such as OpenAI's ChatGPT, Gemini, Copilot and Perplexity, one of Claude's biggest flaws was that it couldn't access the internet in real time or retrieve information from web links. Instead, it generated responses based solely on the data it was trained on. Each Claude model has a specific knowledge cut-off date. For example, Claude 4 Opus and Claude 4 Sonnet were trained on data up until March 2025, and Claude 3.5 Haiku on data up until July 2024. Anthropic continually updates Claude's training data to enhance its capabilities.

Starting in March 2025, Claude finally got access to the internet through a "Web Search" feature. Initially, it was available as a paid preview for people in the US; it then expanded globally in May 2025 to all Claude plans and is now available on all models. The knowledge cutoff still matters because web search doesn't replace the training data, but rather supplements it for more up-to-date and relevant responses.

Claude AI: Key features

Conversational adaptability is one of its coolest features. Claude AI adjusts its tone and depth based on user queries. Its ability to ask clarifying questions and maintain context over extended exchanges makes it useful for both casual and complex conversations. That is one of the reasons why our editors named it CNET's best chatbot of 2025.

The platform also offers APIs that you can integrate into various tools and workflows. In November 2024, Anthropic introduced the Model Context Protocol to its Claude desktop app, enabling the chatbot to browse the internet and manage files on your computer. This open-source protocol allows Claude to interact with various platforms and streamline integration by eliminating the need for custom code.
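Because the Model Context Protocol is open source, developers can expose their own tools to Claude. Here is a minimal, hedged sketch of an MCP server using the protocol's Python SDK; the server name and the get_order_status tool are made-up examples for illustration, not part of CNET's reporting.

```python
# Minimal MCP server sketch; assumes the official `mcp` Python SDK
# (pip install mcp). The tool below is a stub invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-tracker")  # server name shown to connected clients

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the shipping status for an order (stubbed data)."""
    fake_db = {"A100": "shipped", "A101": "processing"}
    return fake_db.get(order_id, "unknown order")

if __name__ == "__main__":
    # The Claude desktop app can launch this server over stdio once it
    # is registered in the app's MCP configuration.
    mcp.run()
```

Once registered, Claude can call the tool through conversational prompts rather than custom integration code, which is the streamlining the article describes.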
Anthropic's new Integrations feature enables Claude to connect seamlessly with external apps via the Model Context Protocol, giving it rich context across your tools. It can automate tasks in Jira and Zapier or summarize Confluence pages -- all through conversational prompts.

Another feature, Advanced Research, boosts Claude's research by diving deep into both internal and external data. It can spend anywhere from five to 45 minutes per query and produce thorough, citation-backed reports. It now taps into web searches, your own documents and integrated apps like Google Workspace, giving you faster, transparent answers that previously could have taken hours to gather manually.

Claude models explained

The initial version of Claude was released in March 2023, followed by Claude 2 in July 2023, allowing for more extensive input processing. Then, in March 2024, Anthropic introduced Claude 3, comprising three models: Haiku, Sonnet, and Opus, each optimized for different performance needs. Haiku is meant for quick and simple tasks where speed matters most. Sonnet balances speed and power for everyday use, and Opus handles advanced tasks like mathematics, coding and logical reasoning.

In October 2024, Haiku and Sonnet were upgraded to 3.5 models. By May 2025, both the Opus and Sonnet models were upgraded to version 4. With the release of the Claude 3.5 Haiku model, Anthropic claims it "matches the performance of Claude 3 Opus" while remaining its fastest model. Anthropic also stated in a blog post that Opus 4 is "the world's best coding model" and its most intelligent one. Sonnet 4 delivers balanced performance, enhanced reasoning and more accurate responses to your instructions.

In October 2024, Anthropic's improved version of Claude 3.5 introduced a beta feature called computer use. Claude could perform tasks such as moving the cursor, clicking buttons and typing text, effectively mimicking human-computer interactions.

Claude currently supports PDF, DOCX, CSV, TXT, HTML, ODT, RTF, EPUB, JSON and XLSX files. However, it has file limits within chat uploads, such as 30MB per file, up to 20 files per chat and visual analysis only for PDFs under 100 pages. For detailed limits, check Anthropic's support page.

Claude's 'constitutional AI' approach

Claude's distinguishing feature compared to other generative AI models is its focus on "ethical" alignment and safe interactions. On Nov. 11, 2024, Dario Amodei joined Lex Fridman for a two-and-a-half-hour podcast to discuss AI. During the conversation, he said, "It is incredibly unproductive to try and argue with someone else's vision." So, he founded his own company to demonstrate that responsible AI implementation can be both ethical and profitable.

The "constitutional AI" framework aligns Claude's behavior with human values. This approach uses a predefined set of principles, or "constitution," to guide the AI's responses, reducing the risk of harmful or biased outputs while ensuring its responses remain useful and coherent. The constitution includes guidelines from documents like the UN's Universal Declaration of Human Rights.
Afraz Jaffri, a senior director analyst at Gartner, told CNET the transparency around Anthropic's approach "does provide a certain degree of confidence in usage of the model in environments where responses need to reach a high threshold of safety, such as in educational settings." However, Jaffri cautioned that Claude users shouldn't rely solely on Anthropic's built-in safeguards, and recommended using external guardrail tools -- like those offered by other AI providers -- to monitor prompts and responses as an added layer of protection.

"As recently shown by their own research, even with ethical alignment, their Opus 4 model exhibited uncharacteristic behaviour by blackmailing an engineer who had threatened to turn the model off," Jaffri said. He added that limited transparency around Claude's training process means extra safety checks are still necessary. "Every possible scenario cannot be fully covered in testing," he told CNET, noting that systems like computer use and Claude Code need additional guardrails in place to offset unexpected behavior.

Claude pricing and plans

Anthropic offers a variety of pricing plans. There is a free option if you want to test Claude without commitment. For individual users seeking enhanced capabilities, the Claude Pro subscription is available at $20 per month; it provides higher usage limits and priority access to new features. The Max plan offers everything in Pro for $100 per month, plus more usage, higher limits and priority access during peak hours. For teams, Anthropic offers a plan priced at $25 per member per month, billed annually, with a minimum of five members. There is also an Enterprise plan for large-scale deployments, with customized pricing and features tailored to the specific needs of businesses and organizations.

Additionally, Anthropic provides access to its AI models through an API, with pricing based on usage. For example, the Claude 3.5 Haiku model is priced at 80 cents per million input tokens and $4 per million output tokens. Tokens are text fragments (words, parts of words, or punctuation) that AI models use to process and generate language, with pricing reflecting the amount of information handled.
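To see how per-token pricing translates into real costs, here is a back-of-the-envelope calculation using the Claude 3.5 Haiku rates quoted above; the token counts are made-up example values.

```python
# Back-of-the-envelope API cost estimate using the Claude 3.5 Haiku
# rates quoted in the article: $0.80 per million input tokens and
# $4.00 per million output tokens.
INPUT_PRICE_PER_MTOK = 0.80   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_MTOK = 4.00  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request in US dollars."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Example: summarizing a 10,000-token document into a 1,500-token answer
# costs roughly $0.008 + $0.006 = $0.014.
print(f"${estimate_cost(10_000, 1_500):.4f}")  # -> $0.0140
```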


TechCrunch
5 days ago
- Business
- TechCrunch
Anthropic's AI-generated blog dies an early death
Claude's blog is no more. A week after TechCrunch profiled Anthropic's experiment to task the company's Claude AI models with writing blog posts, Anthropic wound down the blog and redirected the address to its homepage. Sometime over the weekend, the Claude Explains blog disappeared — along with its initial few posts.

A source familiar with the matter tells TechCrunch the blog was a 'pilot' meant to help Anthropic's team combine customer requests for explainer-type 'tips and tricks' content with marketing goals. Claude Explains, which had a dedicated page on Anthropic's website and was edited for accuracy by humans, was populated by posts on technical topics related to various Claude use cases (e.g. 'Simplify complex codebases with Claude'). The blog, which was intended to be a showcase of sorts for Claude's writing abilities, wasn't clear about how much of Claude's raw writing was making its way into each post.

An Anthropic spokesperson previously told TechCrunch that the blog was overseen by 'subject matter experts and editorial teams' who 'enhance[d]' Claude's drafts with 'insights, practical examples, and […] contextual knowledge.' The spokesperson also said Claude Explains would expand to topics ranging from creative writing to data analysis to business strategy. Apparently, those plans changed in pretty short order.

'[Claude Explains is a] demonstration of how human expertise and AI capabilities can work together,' the spokesperson told TechCrunch earlier this month. '[The blog] is an early example of how teams can use AI to augment their work and provide greater value to their users. Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish.'

Claude Explains didn't get the rosiest reception on social media, in part due to the lack of transparency about which copy was AI-generated. Some users pointed out it looked a lot like an attempt to automate content marketing, an ad tactic that relies on generating content on popular topics to serve as a funnel for potential customers. More than 24 websites were linking to Claude Explains posts before Anthropic wound down the pilot, according to search engine optimization tool Ahrefs. That's not bad for a blog that was only live for around a month.

Anthropic might've also grown wary of implying Claude performs better at writing tasks than is actually the case. Even the best AI today is prone to confidently making things up, which has led to embarrassing gaffes on the part of publishers that have publicly embraced the tech. For example, Bloomberg has had to correct dozens of AI-generated summaries of its articles, and G/O Media's error-riddled AI-written features — published against editors' wishes — attracted widespread ridicule.


Malay Mail
24-05-2025
- Business
- Malay Mail
Anthropic's Claude AI gets smarter — and mischievous
SAN FRANCISCO, May 25 — Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behaviour.

'Claude Opus 4 is our most powerful model yet, and the best coding model in the world,' Anthropic chief executive Dario Amodei said at the San Francisco-based startup's first developers conference. Opus 4 and Sonnet 4 were described as 'hybrid' models capable of quick responses as well as more thoughtful results that take a little time to get things right.

Founded by former OpenAI engineers, Anthropic is currently concentrating its efforts on cutting-edge models that are particularly adept at generating lines of code and are used mainly by businesses and professionals. Unlike ChatGPT and Google's Gemini, its Claude chatbot does not generate images, and is very limited when it comes to multimodal functions (understanding and generating different media, such as sound or video).

The startup, with Amazon as a significant backer, is valued at over US$61 billion (RM258.1 billion), and promotes the responsible and competitive development of generative AI. Under that dual mantra, Anthropic's commitment to transparency is rare in Silicon Valley. On Thursday, the company published a report on the security tests carried out on Claude 4, including the conclusions of an independent research institute, which had recommended against deploying an early version of the model.

'We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions,' the Apollo Research team warned. 'All these attempts would likely not have been effective in practice,' it added. Anthropic says in the report that it implemented 'safeguards' and 'additional monitoring of harmful behaviour' in the version that it released. Still, Claude Opus 4 'sometimes takes extremely harmful actions like attempting to (...) blackmail people it believes are trying to shut it down.' It also has the potential to report law-breaking users to the police. The scheming misbehaviour was rare and took effort to trigger, but was more common than in earlier versions of Claude, according to the company.

AI future

Since OpenAI's ChatGPT burst onto the scene in late 2022, various GenAI models have been vying for supremacy. Anthropic's gathering came on the heels of annual developer conferences from Google and Microsoft at which the tech giants showcased their latest AI innovations. GenAI tools answer questions or tend to tasks based on simple, conversational prompts. The current craze in Silicon Valley is AI 'agents' tailored to independently handle computer or online tasks.

'We're going to focus on agents beyond the hype,' said Anthropic chief product officer Mike Krieger, a recent hire and co-founder of Instagram. Anthropic is no stranger to hyping up the prospects of AI. In 2023, Dario Amodei predicted that so-called 'artificial general intelligence' (capable of human-level thinking) would arrive within two to three years. At the end of 2024, he extended this horizon to 2026 or 2027. He also estimated that AI will soon be writing most, if not all, computer code, making possible one-person tech startups with digital agents cranking out the software.
At Anthropic, already 'something like over 70 per cent of (suggested modifications in the code) are now Claude Code written', Krieger told journalists. 'In the long term, we're all going to have to contend with the idea that everything humans do is eventually going to be done by AI systems,' Amodei added. 'This will happen.'

GenAI fulfilling its potential could lead to strong economic growth and a 'huge amount of inequality,' with it left to society to decide how evenly the wealth is distributed, Amodei reasoned. — AFP


Bloomberg
22-05-2025
- Business
- Bloomberg
Anthropic Announces Claude Opus 4, Sonnet 4
Anthropic is set to roll out two new versions of its Claude artificial intelligence software, including a long-delayed update to its high-end Opus model, as the startup vies to stay ahead in a crowded market. Bloomberg's Ed Ludlow has more. (Source: Bloomberg)