
What Does AI Fluency Look Like In Your Company?
When generative AI first entered the mainstream, it created a wave of excitement across the world. Entrepreneurs saw it as a way to unlock productivity, streamline operations, and speed up decision-making. But as with most technologies, the initial excitement has been replaced by a more sober reality.
Using AI is not the same as using it well. Many founders have learned this the hard way. What starts as an experiment to save time often turns into extra work. Teams spend hours rewriting AI-generated content or double-checking outputs for errors. These problems stem from a larger issue: most users simply are not fluent in how to interact with AI effectively.
That is what makes a new course from Anthropic especially relevant right now. Anthropic, the AI company behind the Claude chatbot, has launched a free online course titled AI Fluency: Frameworks and Foundations. Unlike the countless AI prompt guides floating around online, this one is structured like a university-level program. It is built on a formal academic framework, created in partnership with Professor Rick Dakan from Ringling College of Art and Design and Professor Joseph Feller from University College Cork. The program is also supported by the Higher Education Authority of Ireland.
More than just informative, this course offers a practical roadmap for working with AI in a professional context. For entrepreneurs looking to use AI more strategically, it also comes with a certificate of completion, which, in today's job market, is a smart credential to add to your resume. It shows potential employers or investors that you understand not just how AI works, but how to apply it in a thoughtful, results-driven way.
In a startup or growing company, time and budget are always under pressure. When AI is used without guidance or structure, it can waste both. Founders often try to use AI to build marketing strategies or write business plans, only to get bland results that need significant editing. Even worse, teams might deploy AI-generated content that misrepresents the brand or includes factual errors that damage credibility.
These issues are not the fault of the technology itself. They point to a lack of structure in how people are taught to use it. That is the gap this course is trying to close. AI tools are everywhere, but the skill of using them properly is still rare. Most people are left to figure things out on their own, which leads to inconsistent results and missed opportunities.
The AI Fluency course is built around a framework that the creators claim develops four core skills: Delegation, Description, Discernment, and Diligence. These are the building blocks of what the course calls AI fluency.
Here's how the framework works in practice.
Delegation in this context means making smart decisions about when to bring AI into the process. It begins by asking what the real goal is and whether AI can actually help achieve it. For example, you may not want to ask AI to define your company's mission or values. That likely requires deep personal insight. But you could absolutely use it to gather summaries of competitor activity or synthesize customer reviews into a digestible report. This skill ensures that AI is used with intention rather than by habit.
Most people know that AI needs prompts, but few know how to craft them well. Description is about giving AI clear, structured input so it can return exactly what you want. That means specifying the tone, the style, the format, and even the point of view.
If you were asking AI to help with a pitch deck, you wouldn't just type 'make a pitch deck.' You would explain that it's for a Series A round, for a logistics-focused SaaS company, and that it should be written in the voice of a CFO. You would outline the ten slides you need and how the financial projections should be formatted. That kind of precision can turn AI from a basic assistant into a capable contributor.
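The pitch-deck example above can be made concrete as a small sketch. This is a hypothetical illustration of the Description skill, not code from the course: the function name, the slide list, and the prompt wording are all invented here. The point is that a well-described request is assembled from explicit requirements (audience, voice, structure, format) rather than typed as a one-liner.

```python
# Hypothetical sketch of the "Description" skill: assembling a detailed
# prompt from explicit requirements instead of a vague one-line request.
# All names and values below are invented for illustration.

def build_pitch_deck_prompt(company, stage, voice, slides):
    """Combine explicit requirements into one structured prompt string."""
    slide_list = "\n".join(f"{i}. {title}" for i, title in enumerate(slides, 1))
    return (
        f"You are writing in the voice of a {voice}.\n"
        f"Draft a {stage} pitch deck for {company}.\n"
        f"Use exactly these {len(slides)} slides, in this order:\n{slide_list}\n"
        "Format the financial projections as a table with one column per year."
    )

prompt = build_pitch_deck_prompt(
    company="a logistics-focused SaaS company",
    stage="Series A",
    voice="CFO",
    slides=["Problem", "Solution", "Market Size", "Product", "Traction",
            "Business Model", "Competition", "Team",
            "Financial Projections", "The Ask"],
)
print(prompt)
```

Whether the prompt is sent to Claude or another model, the design choice is the same: every constraint the author cares about is stated explicitly, so there is less to rewrite afterward.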
One of the biggest risks with AI is assuming the output is correct just because it sounds convincing. Discernment is the ability to review what AI produces with a thoughtful, critical eye. You need to check for logic, consistency, and accuracy. Did the AI ignore an important part of the prompt? Did it invent something that seems plausible but isn't?
This skill mirrors how managers review human work. You don't just look at the final product. You ask how the conclusions were reached and whether they align with your standards. That habit is just as important when dealing with AI.
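Part of that review habit can even be automated before a human reads the draft. The sketch below is an invented example of Discernment as a checklist, not anything from the course: it spot-checks an AI draft for required sections and flags claims that have not been verified. The sample draft, section names, and banned phrases are all assumptions made for the example.

```python
# A minimal, invented sketch of the "Discernment" skill: automated
# spot-checks on an AI draft before human review. The checks below
# are illustrative, not exhaustive.

def review_draft(draft, required_sections, banned_claims):
    """Return a list of issues found in an AI-generated draft."""
    issues = []
    lowered = draft.lower()
    for section in required_sections:
        if section.lower() not in lowered:
            issues.append(f"missing section: {section}")
    for claim in banned_claims:
        if claim.lower() in lowered:
            issues.append(f"unverified claim present: {claim}")
    return issues

draft = "Market Size: $4B. Traction: 200 paying customers."
issues = review_draft(
    draft,
    required_sections=["Market Size", "Traction", "Team"],
    banned_claims=["guaranteed returns"],
)
print(issues)  # prints ['missing section: Team']
```

Checks like these catch the obvious gaps (a dropped section, a forbidden claim); the harder questions, such as whether a plausible-sounding figure was invented, still require human judgment.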
Even when AI does most of the work, final responsibility lies with the human. Diligence means carefully reviewing everything before it is shared, especially with clients, stakeholders, or investors. It also means being upfront about when and how AI was used.
If you use AI to help write a board report, you need to be confident in every sentence. You are accountable for the end result, and this step protects both your credibility and your organization's reputation. Diligence also plays a role in choosing the right tools and being thoughtful about how they fit into your workflow.
The partnership behind this course is also worth noting. A leading AI lab, two established professors, and a national government agency came together to create a program that is accessible, credible, and relevant. That is rare in the current AI landscape, where most training options come from consultants or influencers with little oversight.
For Anthropic, helping users become more capable with its models leads to better long-term adoption. For the Higher Education Authority of Ireland, supporting this program positions the country as a leader in forward-looking digital education. And for the learners, the certificate adds immediate value to their careers.
If you are still experimenting with AI casually, it is time to shift your approach. The businesses that thrive in the years ahead will be those that integrate AI not just as a tool but as a core part of their strategy.
This course is not just an educational opportunity. It is a professional signal. By mastering the skills outlined in the 4D framework, business leaders can turn AI into a consistent, reliable engine for productivity and insight.
The phase of casual experimentation is over. AI fluency is no longer optional. It is the next essential business skill—and the entrepreneurs who take it seriously now will be the ones who lead the field tomorrow.