What Does AI Fluency Look Like In Your Company?

Forbes · 13 June 2025
When generative AI first entered the mainstream, it created a wave of excitement across the world. Entrepreneurs saw it as a way to unlock productivity, streamline operations, and speed up decision-making. But as with most technologies, the initial excitement has been replaced by a more sober reality.
Using AI is not the same as using it well. Many founders have learned this the hard way. What starts as an experiment to save time often turns into extra work. Teams spend hours rewriting AI-generated content or double-checking outputs for errors. These problems stem from a larger issue: most users simply are not fluent in how to interact with AI effectively.
That is what makes a new course from Anthropic especially relevant right now. Anthropic, the AI company behind the Claude chatbot, has launched a free online course titled AI Fluency: Frameworks and Foundations. Unlike the countless AI prompt guides floating around online, this one is structured like a university-level program. It is built on a formal academic framework, created in partnership with Professor Rick Dakan from Ringling College of Art and Design and Professor Joseph Feller from University College Cork. The program is also supported by the Higher Education Authority of Ireland.
More than just informative, this course offers a practical roadmap for working with AI in a professional context. For entrepreneurs looking to use AI more strategically, it delivers more than knowledge alone: it comes with a certificate of completion, which, in today's job market, is a smart credential to add to your resume. It shows potential employers or investors that you understand not just how AI works, but how to apply it in a thoughtful, results-driven way.
In a startup or growing company, time and budget are always under pressure. When AI is used without guidance or structure, it can waste both. Founders often try to use AI to build marketing strategies or write business plans, only to get bland results that need significant editing. Even worse, teams might deploy AI-generated content that misrepresents the brand or includes factual errors that damage credibility.
These issues are not the fault of the technology itself. They point to a lack of structure in how people are taught to use it. That is the gap this course is trying to close. AI tools are everywhere, but the skill of using them properly is still rare. Most people are left to figure things out on their own, which leads to inconsistent results and missed opportunities.
The AI Fluency course proposes a framework that its creators say develops four core skills: Delegation, Description, Discernment, and Diligence. These are the building blocks of what the course calls AI fluency.
Here's how the framework works in practice.
Delegation in this context means making smart decisions about when to bring AI into the process. It begins by asking what the real goal is and whether AI can actually help achieve it. For example, you may not want to ask AI to define your company's mission or values. That likely requires deep personal insight. But you could absolutely use it to gather summaries of competitor activity or synthesize customer reviews into a digestible report. This skill ensures that AI is used with intention rather than by habit.
Most people know that AI needs prompts, but few know how to craft them well. Description is about giving AI clear, structured input so it can return exactly what you want. That means specifying the tone, the style, the format, and even the point of view.
If you were asking AI to help with a pitch deck, you wouldn't just type 'make a pitch deck.' You would explain that it's for a Series A round, for a logistics-focused SaaS company, and that it should be written in the voice of a CFO. You would outline the ten slides you need and how the financial projections should be formatted. That kind of precision can turn AI from a basic assistant into a capable contributor.
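To make that precision concrete, here is a minimal sketch of what such a structured request could look like when sent programmatically, using Anthropic's Python SDK. The model name, slide list, and financial fields are illustrative assumptions, not details taken from the course.

```python
# A minimal sketch of a structured prompt using the Anthropic Python SDK
# (pip install anthropic). The model name and deck details are illustrative
# assumptions, not prescribed by the AI Fluency course.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A vague prompt like "make a pitch deck" leaves tone, audience, and format
# to chance. A structured prompt makes all of them explicit.
structured_prompt = """Draft the outline of a Series A pitch deck for a
logistics-focused SaaS company, written in the voice of a CFO.

Produce exactly 10 slides: problem, solution, market size, product, traction,
business model, competition, financial projections, team, and the ask.

Format the financial projections as a 3-year table with ARR, gross margin,
and burn rate, and flag any figure you had to assume rather than derive."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=2000,
    system="You are a startup CFO preparing investor materials.",
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.content[0].text)
```

The point is not this particular wording but that tone, audience, format, and constraints are all stated up front rather than left for the model to guess.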
One of the biggest risks with AI is assuming the output is correct just because it sounds convincing. Discernment is the ability to review what AI produces with a thoughtful, critical eye. You need to check for logic, consistency, and accuracy. Did the AI ignore an important part of the prompt? Did it invent something that seems plausible but isn't?
This skill mirrors how managers review human work. You don't just look at the final product. You ask how the conclusions were reached and whether they align with your standards. That habit is just as important when dealing with AI.
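One lightweight way to build that review habit into a workflow is to run a separate audit pass over any draft before it goes further. The sketch below, again using Anthropic's Python SDK, asks the model to check a draft against the original requirements; the function name and checklist are hypothetical, and the review output still needs a human reader to make the final call.

```python
# A hedged sketch of a "discernment" pass: a second model call that audits a
# draft against the original requirements. The function name and checklist
# are hypothetical illustrations, not part of the course material.
import anthropic

client = anthropic.Anthropic()

def review_draft(requirements: str, draft: str) -> str:
    """Ask the model to flag omissions, unsupported claims, and inconsistencies."""
    review_prompt = f"""Original requirements:
{requirements}

Draft to review:
{draft}

List, as bullet points:
1. Any requirement the draft ignored or only partially addressed.
2. Any factual claim not supported by the requirements that a human
   should verify before use.
3. Any internal inconsistency, such as numbers that do not add up."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1000,
        messages=[{"role": "user", "content": review_prompt}],
    )
    return response.content[0].text

# Example usage: audit an AI-written competitor summary before circulating it.
# print(review_draft("Summarize competitor pricing for Q3...", draft_text))
```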
Even when AI does most of the work, final responsibility lies with the human. Diligence means carefully reviewing everything before it is shared, especially with clients, stakeholders, or investors. It also means being upfront about when and how AI was used.
If you use AI to help write a board report, you need to be confident in every sentence. You are accountable for the end result, and this step protects both your credibility and your organization's reputation. Diligence also plays a role in choosing the right tools and being thoughtful about how they fit into your workflow.
The partnership behind this course is also worth noting. A leading AI lab, two established professors, and a national government agency came together to create a program that is accessible, credible, and relevant. That is rare in the current AI landscape, where most training options come from consultants or influencers with little oversight.
For Anthropic, helping users become more capable with its models leads to better long-term adoption. For the Higher Education Authority of Ireland, supporting this program positions the country as a leader in forward-looking digital education. And for the learners, the certificate adds immediate value to their careers.
If you are still experimenting with AI casually, it is time to shift your approach. The businesses that thrive in the years ahead will be those that integrate AI not just as a tool but as a core part of their strategy.
This course is not just an educational opportunity. It is a professional signal. By mastering the skills outlined in the 4D framework, business leaders can turn AI into a consistent, reliable engine for productivity and insight.
The phase of casual experimentation is over. AI fluency is no longer optional. It is the next essential business skill—and the entrepreneurs who take it seriously now will be the ones who lead the field tomorrow.

Related Articles

AI in education's potential privacy nightmare

Axios

AI is now firmly entrenched in classrooms, but student privacy rules haven't caught up.

Why it matters: Chatbots can expose troves of personal data in ways few parents, students or teachers fully understand.

The big picture: The 2025-26 school year is shaping up to be one where educators feel they must embrace AI to keep students competitive. Here are three top concerns with classroom AI, according to privacy advocates and AI companies Axios spoke to.

1. Student work could be used to train AI models

AI firms are constantly seeking data to train their models. They're not required to say exactly where they get it, but they do have to say how they're using customer data, especially when they're dealing with students. Laws like the Family Educational Rights and Privacy Act (FERPA) don't guarantee meaningful protections for students. FERPA was signed into law under President Ford in 1974 and has not been significantly updated since. "Penalty for violating FERPA is that your federal funding is withheld," Elizabeth Laird, director at the Center for Democracy and Technology, told Axios. "And that has been enforced exactly zero times. Literally never."

Most educational AI firms say they're not training models on classroom work. Content submitted by teachers and students is not used to train the foundational AI models that underlie Khan Academy's AI tutor, Khanmigo, the company's chief learning officer, Kristen DiCerbo, told Axios. But training on a diverse set of student data would make the models less biased, DiCerbo said: "There's no easy answer to these things, and it's all trade-offs between different priorities." Institutions technically could allow student work to be used for AI training, though they're unlikely to do so, several educators told Axios.

Yes, but: Data that's "publicly available" on the web is a different story. Business Insider recently reported on what it described as a list of sites that Anthropic contractors were allowed to scrape, including domains from Harvard, Princeton, Yale, Northwestern and other universities. Funding mandates often require universities to post student research online, meaning more of it is considered freely available data for training AI. An Anthropic spokesperson told Axios that it could not validate the list of sites found by Business Insider because it was created by a third-party vendor without Anthropic's involvement.

2. Off-the-shelf AI tools could expose student data

Many teachers are experimenting with free chatbot tools. Some are from well-known players like OpenAI, Google, Perplexity and Anthropic. Others are from lesser-known startups with questionable privacy policies. In many cases, educators use these apps without district approval or formal guidance.

Accelerating pushes from both big tech and President Trump for school and student adoption of AI have changed the vibe around AI heading into the new academic year, ed tech experts told Axios. "Where in the 2024-2025 school year most schools had the LLM on lockdown through their filter, this year all flowers will bloom," Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools, told Axios.

Products designed for educational use, like ChatGPT Edu, do not train on student data, but some of the consumer-facing free and paid versions of ChatGPT and other chatbots have different policies. "That's where things get tricky," says Melissa Loble, chief academic officer at Instructure, the company behind the learning management system known as Canvas. "If AI tools are used outside our system, the data may not be protected under the school's policies."

Yes, but: Teachers are often the best judges of AI tools for their students. Ed tech is "a bottom-up adoption industry. It grows and thrives on teachers finding tools they like for teaching and learning and then getting districts to adopt," Wincup says.

3. Hacks are an increasing threat

Earlier this year, a breach at PowerSchool, a widely used student information system, exposed sensitive personal data of tens of thousands of students and parents. "When you introduce any new tool, when you collect any new piece of information, you are necessarily introducing increased risk," Laird says. That makes thoughtful planning critical, she added. If AI tools store or process student data, a breach could expose not just grades and attendance records but also behavioral data, writing samples, and private communications. One way to limit exposure is to delete data periodically. DiCerbo says Khan Academy deletes chats after 365 days.

Yes, but: The advantage of using chatbots is that they can remember and learn from previous conversations, so some users want to store more information than might be safe.

Between the lines: AI is steamrolling into classrooms and colleges, and privacy is just one item on a long list of concerns these institutions must manage. Khan Academy's DiCerbo says AI adoption is moving faster than anything she's seen in her 20 years working in ed tech. Khan Academy expects to reach a million students with Khanmigo, its AI-powered tutor launched in 2023. Earlier this year the California State University system introduced ChatGPT Edu to more than 460,000 students and over 63,000 staff and faculty across its 23 campuses. Google just started offering its AI Pro plan for free to students over 18 for a year.

What we're watching: Some ed tech providers are looking beyond OpenAI, Anthropic and Google and using services like AWS and Microsoft's Azure to keep student data separate from the model providers. Brisk Teaching, a classroom AI assistant, uses this approach to mitigate concerns that student data might be used to train new models, even though OpenAI and Google say that their education-focused models don't train on user data. Brisk Teaching founder Arman Jaffer told Axios that there's a lot of "lost trust" between schools and the big AI providers. "It's just easier for us to say Google is not touching your data because they could potentially use it to train the next version of their model," he said.

Australia's Falling Competition Cost $2,000 a Person, RBA Finds

Bloomberg

A decline in business competition in Australia from the mid-2000s to the Covid-19 pandemic has hurt productivity and household incomes, according to new analysis by the Reserve Bank. If competition had not dropped, productivity and therefore output would have been 1% to 3% higher due to resources being better allocated across firms in the economy, the RBA's Jonathan Hambur and Owen Freestone said in a research paper released Thursday. This equates, at the upper end, to around A$3,000 ($2,000) per person, they said.

UK insurer Aviva raises dividend after half-year profit jumps 22%

Yahoo

(Reuters) - British insurer Aviva raised its interim dividend on Thursday after reporting a 22% jump in half-year operating profit, driven by strong growth in UK & Ireland general insurance premiums and higher wealth net flows. The company, which closed a 3.7 billion pound ($5.02 billion) takeover of smaller rival Direct Line in July, said it would provide more details on Direct Line and the impact of the deal on its targets in November. "(Direct Line) integration is well underway... and we are confident the deal will contribute significantly to Aviva's future growth," CEO Amanda Blanc said in a statement. Aviva, which also provides wealth and retirement services, said it expected some areas of rate softening in the second half of the year for its general insurance business, but its wealth and health divisions were expected to grow further. The company posted an operating profit of 1.07 billion pounds for the six-month period ended June 30, compared with the 875 million pounds reported for the year-earlier period. The insurer raised its dividend by 10% to 13.1 pence per share. ($1 = 0.7365 pounds)
