Latest news with #AItraining
Yahoo
16-07-2025
- Automotive
- Yahoo
How to know which AI tools are best for your business needs—with examples
Choosing the right generative AI model—from OpenAI, Google, or Anthropic—can be tricky amid a growing alphabet soup of names and versions. But experts say testing different tools, understanding the complexity of your tasks, knowing how quickly you need results, and knowing how often you'll need to make changes or improvements are key to using AI effectively—and avoiding being left behind.

If you're finding it tricky to navigate the ever-changing generative AI landscape, which shifts weekly as vendors compete to top leaderboards, you're not alone. An AI training gap has emerged across the business world. According to one recent survey, two-thirds of leaders expect employees to have AI skills, but only a third of companies have a clear policy on what technology to use—and how to use it. While using generative AI chatbots has a relatively low barrier to entry, figuring out the right model remains a challenge for people who have not studied their nuances. Here's what you need to know about picking the right AI tools, such as ChatGPT, Claude, Gemini, Perplexity, and Copilot—and the underlying models that power them.

Learn more: Mastering AI at work: a practical guide to using ChatGPT, Gemini, Claude, and more

Speed date with AI

Today's generative AI is still relatively new, and for this reason, there is an abundance of models on the market; no one company or model dominates. As a result, experimenting with a variety of tools is beneficial. 'Think of it like test driving a car,' Jules White, a computer-science professor at Vanderbilt University, told Fortune. 'What is it like to drive on the highway, to park, how does the stereo work, how cushy is the seat, etc. I suggest doing the equivalent for models.'

For Maggie Vo, head of user education at Anthropic, that test drive should include consideration of three factors: task complexity, time sensitivity, and the need to refine your work. Something like writing a strategic plan might call for using the most capable model, whereas a faster model might be most useful for reformatting data or quick summaries. If you're planning to iterate multiple times, then use a combination: 'Start with a smarter model to refine your approach, then use a smaller model to execute,' she said. 'The real skill is developing 'Platform Awareness'—understanding not just different models but different AI systems entirely,' Vo added. 'What works in Claude might need adjustment in other systems. Experiment across platforms to build intuition about their unique strengths.'

In practice, this can be as simple as typing the same prompt into different AI chatbots—and into the different models each one offers (the option to change may be at the top of the screen or near the search bar)—and comparing which tool gives the most helpful response. As Vo implied, certain tasks are more easily completed with some AI models than with others. The faster, typically free models are usually best for simple chats, whereas the more advanced models may cost a fee and take more than a few moments to process, but will typically yield better results.

The best models, according to UPenn's Ethan Mollick

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, is one of the most widely read experts on how to use AI models to help with business tasks. He is known for his prolific research and analysis on LinkedIn. According to a recent Substack post, his everyday AI use is focused on the products of OpenAI, Anthropic, and Google.
'With all (three) of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (Claude lacks here, however), and the ability to do Deep Research,' he wrote. The ultimate challenge comes with deciding which model is best to use, he added. But like White, he connects it to selecting a vehicle: 'Think of it like choosing between a sports car and a pickup truck; both are vehicles, but you'd use them for very different tasks.' Mollick summarized the decision-making into three categories: good for chats, good for work, and good for hard problems.

Good for chats

Claude 4 Sonnet, ChatGPT-4o, and Gemini 2.5 Flash are fast—and not as smart, according to Mollick. But as generally free AI products, they are the most widely accessible and can still make a difference in workplace efficiency. Here are some examples of their capabilities:
- Market and competitive analysis: 'Compare OpenAI, Anthropic, and Google Gemini's positioning in the enterprise AI space.'
- Financial modeling and forecasting: 'Create a three-year revenue forecast for a SaaS startup with 20% YoY growth and 75% gross margin.'
- Content creation and editing: 'Draft a LinkedIn post announcing a $10M Series A funding round that sounds confident but not boastful.'

Good for work

Claude 4 Opus, ChatGPT o3, and Gemini 2.5 Pro are smarter and have lower error rates, but are slower, according to Mollick. They may still hallucinate, but tend to be more successful at coding, logic, or solving tougher problems that require multiple steps. Here's an example of their capability: market and competitive intelligence dashboards. And who better to summarize AI's relative strengths than AI itself? Here's what the models had to say:

Prompt: 'In one sentence, please describe your relative strengths. I am writing this in the context of other generative tools, including [Competitor] and [Competitor].'

OpenAI's GPT-4o: 'ChatGPT's relative strength lies in its ability to generate highly polished, context-aware, and conversationally natural responses—particularly strong in editing, writing, and task-oriented problem-solving—while integrating tools like code execution and image analysis more seamlessly than Claude or Gemini.'

Anthropic's Claude Sonnet 4: 'I excel at nuanced reasoning, ethical decision-making, and producing well-structured, thoughtful responses while maintaining strong safety guardrails, though I'm generally more cautious and less willing to engage with borderline requests compared to some other AI assistants.'

Google's Gemini 2.5 Flash: 'Gemini excels in multimodal understanding and generation, seamlessly integrating text, code, images, and other data formats while leveraging real-time information from Google's vast ecosystem to provide highly accurate and contextually rich responses.'

Some other examples include the ability to conduct customer-service automation & quality control as well as financial-scenario modeling & risk analysis. Mollick admitted he primarily uses the 'Good for work' models, though he also has favorites for specific tasks outside of this list, like using GPT-4.5 when writing.

Good for hard problems

Claude 4 Opus extended thinking, ChatGPT o3-pro, and Gemini 2.5 Pro are the most advanced AI models on the market, and for that reason, they remain slow and have limited use, Mollick added in his Substack post.
Here's an example of a prompt that would push the limits of ChatGPT-4o, but that ChatGPT o3-pro might be best equipped to tackle: 'Below is the full text of a 180-page Phase III oncology trial dossier (including statistical appendices and 40 pages of adverse-event tables). Extract every efficacy endpoint, its p-value, confidence interval, and sample size. Re-compute the primary endpoint's hazard ratio from the raw survival-curve data provided in Appendix C, flagging any inconsistencies with the sponsor's reported figure. Summarize all Grade 3–4 adverse events, grouped by organ system, and calculate their absolute-risk increase versus control. Draft a 500-word briefing for the FDA that: Identifies any statistical or methodological red flags. Assesses whether the benefit–risk profile justifies accelerated approval. Proposes two post-marketing study designs to validate long-term safety.'

Other models

No matter which model or AI company you use, it's always important to double-check the AI for errors or hallucinations. Accuracy is the priority of Perplexity, according to Jesse Dwyer, head of communications at the AI company. 'Perplexity's sole focus is accurate, trustworthy AI—we use all top models and post-train them for accuracy,' Dwyer said. 'Models that have been trained to experience some hallucination are helpful if you want videos of high-diving cats, but they can be dangerous when you are making business or financial decisions.' Copilot is also a widely used AI chatbot, thanks to its integration with Microsoft products, but it can be difficult to switch between models. DeepSeek r1 and Grok by Elon Musk's xAI are also options on the market, but, according to Mollick, each is missing features. Mollick did not respond to Fortune's request for comment.

The takeaway: Practice makes perfect

While there remains no perfect method for using AI in the workplace, the most effective way to stay on top of your game is exploration and education—and that starts with simply using the models. 'The difference between casual users and power users isn't prompting skill (that comes with experience); it's knowing these features exist and using them on real work,' Mollick wrote. And because many business leaders, including CEOs, have already begun using AI, Dwyer suggested trying to emulate how they are using it in their work. 'AI is one of the first business tools adopted first by managers instead of frontline workers,' he told Fortune. 'It makes sense that leaders with experience getting the best work from their teams and software tools will be naturally prepared to work with AI.'
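For teams that want to go beyond pasting the same prompt into each chat window, the "test drive" can also be scripted. The following is a minimal sketch, assuming the official Python SDKs for OpenAI, Anthropic, and Google's Gemini API are installed and the corresponding API keys are set as environment variables; the model names are illustrative placeholders, not recommendations from the article.

```python
# Minimal sketch: send one prompt to three providers and compare the replies.
# Assumes `pip install openai anthropic google-genai` and that the environment
# variables OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY are set.
# Model names below are placeholders; swap in whichever fast or advanced
# models you are evaluating.
from openai import OpenAI
import anthropic
from google import genai

PROMPT = ("Create a three-year revenue forecast for a SaaS startup "
          "with 20% YoY growth and 75% gross margin.")

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_gemini(prompt: str) -> str:
    client = genai.Client()
    resp = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder model name
        contents=prompt,
    )
    return resp.text

if __name__ == "__main__":
    for name, fn in [("OpenAI", ask_openai),
                     ("Anthropic", ask_anthropic),
                     ("Gemini", ask_gemini)]:
        print(f"=== {name} ===")
        print(fn(PROMPT))
```

Running the same prompt through each function side by side makes it easier to judge which provider's answer is most useful for a given task before standardizing on one.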


WIRED
08-07-2025
- WIRED
Microsoft, OpenAI, and a US Teachers' Union Are Hatching a Plan to ‘Bring AI into the Classroom'
Jul 8, 2025 7:30 AM The National Academy for AI Instruction will make artificial intelligence training accessible to educators across the country, according to leaked details.

Microsoft and OpenAI are planning to announce Tuesday that they are helping to launch an AI training center for members of the second-largest teachers' union in the US, according to details about the initiative that appear to have been inadvertently published early on YouTube. The National Academy for AI Instruction will be based in New York City and aims to equip kindergarten through 12th grade instructors in the American Federation of Teachers with 'the tools and confidence to bring AI into the classroom in a way that supports learning and opportunity for all students,' according to the description of a publicly accessible YouTube livestream scheduled for Tuesday morning. The YouTube page also lists Anthropic, which develops the Claude chatbot, as a collaborator on what's described as a $22.5 million initiative to bring free 'AI training and curriculum' to teachers. The three AI companies and the union did not immediately respond to requests for comment about the information released on YouTube. On Monday, Microsoft and the union declined to share details ahead of an announcement planned for Tuesday morning in New York.

Schools have struggled over the past few years to keep pace with students' adoption of AI chatbots such as OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini. While highly capable at helping write papers and solve some math problems, the technologies can also confidently make costly errors. And they have left parents, educators, and employers concerned about whether chatbots rob students of the opportunity to develop essential skills on their own. Schools have deployed new tools to catch AI-assisted cheating, and teachers have begun rolling out lessons about what they view as responsible use of generative AI. Some educators use AI to help with the time-consuming work of developing teaching plans and materials, and they also tout how it has introduced greater interactivity and creativity in the classroom.

American Federation of Teachers president Randi Weingarten has said that educators must have a seat at the table in how AI is integrated into their profession. The new national academy could help teachers better understand fast-changing AI technologies and evolve their curriculum to prepare students for a world in which the tools are core to many jobs. But the program is likely to draw rebuke from some union members concerned about the commercial incentives of tech giants shaping what happens in US classrooms. Google, Apple, and Microsoft have competed for years to get their tools into schools in hopes of turning children into lifelong users. (Microsoft and OpenAI have also increasingly become competitors, despite a once-close relationship.) Just last week, several professors in the Netherlands published an open letter calling for local universities to reconsider financial relationships with AI companies and ban AI use in the classroom. All-out bans appear unlikely amid the growing usage of generative AI chatbots. So AI companies, employers, and labor unions may be left to try to find some common ground. The forthcoming training academy follows a partnership Microsoft struck in December 2023 to work with the American Federation of Labor and Congress of Industrial Organizations, or AFL-CIO, on developing and deploying AI systems.
The American Federation of Teachers is part of the AFL-CIO, and Microsoft had said at the time it would work with the union to explore AI education for workers and students. The federation's website says it represents about 1.8 million workers, a figure that includes school nurses and college staff in addition to K-12 teachers. The National Education Association, the largest US teachers' union, has about 3 million members, according to its website.


Forbes
07-07-2025
- Business
- Forbes
AI Development Issues Synthetic Data Can Help You Overcome
As AI systems become more sophisticated, the challenges of training them effectively—and responsibly—continue to grow. The use of real-world data often comes with concerns and roadblocks—privacy risks, inconsistent formats, gaps in edge cases and regulatory hurdles can all slow development or skew outcomes. Synthetic data offers a promising alternative, delivering clean, scalable and customizable datasets that can augment—or even replace—traditional data in key use cases. Below, members of Forbes Technology Council share real-world challenges that come with training AI systems and how synthetic data can help address them. Their insights highlight how developers can overcome data-related barriers while building smarter, safer AI models.

1. Lack Of Edge Case Data
Synthetic data can help address the challenge of edge cases in your real-world data, which, by definition, doesn't have enough examples to create a training set. The real-world data can be used to identify an edge case your AI may encounter, but you leverage synthetic data to create variations of that edge case for machine learning. This hybrid approach is often most effective in terms of cost, time and so on. - Radha Basu, iMerit

2. Inconsistency And Lack Of Control
One of the major challenges is inconsistency and lack of control. Real-world data is messy, biased and often incomplete, making it hard to scale or use reliably in training high-performance models. Synthetic data solves this by offering precision, balance and control at scale. Synthetic data gives AI developers the ability to test, stress and scale models in ways real-world data simply can't match. - Alexandre de Vigan, Nfinite

3. Unpredictable Training Environments
Remember the debate about synthetic versus petroleum-based oil? Similar to how older engines were run with nonsynthetic oil, legacy businesses use messy, sensitive and unpredictable real-world data, resulting in poor AI. For smart, modern businesses, synthetic data, like synthetic oil, allows developers to train models in predictable and controllable environments, ensuring strong performance in the real world. - Robert Clark, Cloverleaf Analytics

4. Privacy And Scale Limitations
Synthetic data avoids sensitive personal information, which reduces privacy concerns, and can be generated efficiently at scale, making it ideal for training large models. Real-world data—like patient records—is often messy and unpredictable. However, training only on synthetic data can limit a model's ability to perform well in complex, real-world scenarios. - Tim O'Connell, emtelligent

5. Incomplete And Biased Datasets
One challenge with real-world data is that it can be biased or incomplete—like teaching someone to drive only on city streets but not highways or country roads. Synthetic data fills these gaps, adding detail and diversity that real-world data might lack. However, synthetic data only leads to smarter, fairer, more reliable AI if it's high-quality and generated with standards to minimize bias. - China Widener, Deloitte

6. Sensitive Industry Restrictions
AI developers face the significant challenge of addressing issues related to data privacy. Obtaining relevant information within sensitive industries presents unique difficulties, especially when dealing with regulated elements. Synthetic data helps alleviate this burden. - Michael Gargiulo,

7. 'Class Imbalance'
Real data has bias due to 'class imbalance.' Imagine a scenario where using AI for résumé screening fails since the model has been unintentionally trained on a dominant class (favoring male over female or giving more weight to a certain demographic), as that's what was available as historical data. Synthetic data can overcome this, as long as proper context is given for data generation. - Arjun Srinivasan, Wesco (A minimal code sketch of this idea appears after the list.)

8. Scarcity Of Rare Scenarios
I believe that AI developers often struggle with obtaining large, diverse datasets that include rare edge cases—such as unusual driving scenarios for autonomous vehicles—which are difficult and costly to capture in the real world. Synthetic data can generate these rare but critical conditions at scale, improving model robustness without compromising user privacy. - Mark Vena, SmartTech Research

9. Privacy And Distribution Barriers
Real-world data often comes with privacy constraints and regulatory friction. Synthetic data solves distribution gaps in training sets by generating edge cases for mission-critical systems. It allows AI developers to simulate realistic, diverse datasets without exposing sensitive information. - Andrey Kalyuzhnyy, 8allocate

10. Rare Event Modeling Needs
For AI models, the rule is often 'the more data, the better the model.' However, if you are trying to model events that rarely happen, such as communications involving insider trading, bribery, harassment and other such events, the only way to get enough data is to create it. By using synthetic data that is then reviewed by subject matter experts, you can have enough examples to create great models. - Vall Herard, Saifr

11. Voice Diversity Challenges
One key challenge with real-world voice data is obtaining sufficient diversity and volume, especially for rare accents, speaking styles or noisy environments. Synthetic voice data overcomes this by generating limitless, tailored examples, including difficult-to-capture scenarios, without privacy concerns. This enables training more robust AI models. - Harshal Shah

12. High Data Acquisition Costs
The simple acquisition of real-world data can be difficult and costly. Synthetic data can help, but it needs to be evaluated carefully for quality and assessed for its potential impact on the training of a model. - Leonard Lee, neXt Curve

13. Privacy-Conscious Experimentation
Real-world data often limits innovation due to privacy and regulatory barriers. Synthetic data helps AI developers simulate edge cases and future scenarios that don't yet exist. This enables safer experimentation, faster iteration and smarter models without compromising sensitive information. - Rishi Kumar, MatchingFit

14. Ignored Edge Users In CX Data
Real-world customer experience data often ignores edge users—the 'silent majority' who never complain; they just leave. Synthetic data enables you to simulate and operationalize a retention strategy before it's too late. It's not just a data problem; it's a CX risk. - April Ho-Nishimura, Infineon Technologies AG

15. Corrupted Or Low-Quality Real Data
The use of corrupted real-world datasets for training can silently compromise AI models and cause unreliable results. Synthetic data eliminates this risk by providing clean, controlled datasets when real-world data quality issues are affecting model performance. - Chongwei Chen, DataNumen, Inc.

16. Simulating Future Scenarios
Real-world data is stuck in yesterday's world—permissioned, fragmented and slow. Synthetic data isn't just a privacy workaround; it's a simulation engine. Developers can now model edge-case chaos, future scenarios or AI-on-AI interactions at scale, long before reality catches up. That's not a patch. That's evolution. - Akhilesh Sharma, A3Logics Inc.

17. Cross-Silo Collaboration Barriers
AI developers attempting to collaborate across silos (such as government agencies) where data sharing is challenging or explicitly forbidden are able to exchange synthetic datasets. This improves model stability and time to release by allowing multiple parties to share in the model evaluation process and reproduce bugs to broaden the troubleshooting audience. - Matthew Peters, CAI

18. Reactive Versus Proactive Modeling
A major challenge with real-world data is its stagnancy. It reflects what has been, not what could be. Synthetic data allows AI developers to generate rich, forward-looking scenarios that model emerging trends, unseen behaviors or disruptive events. It shifts AI from reactive to proactive, enabling systems to anticipate and adapt in a world that evolves faster than yesterday's data. - Sandipan Biswas

19. Inconsistent Labeling
Real-world data is messy. Labels are often inconsistent, even among experts, and that noise quietly limits how far your models can go. Synthetic data gives us clean, perfectly labeled ground truth. We use it to identify annotation errors and train models that handle uncertainty more effectively and surpass accuracy ceilings—without incurring the costs of relabeling. - Gavita Regunath, Advancing Analytics
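Several of the entries above, particularly the points on class imbalance and rare scenarios, come down to the same mechanic: generating extra examples of an underrepresented class. The sketch below is a toy illustration of that idea using simple interpolation between minority-class points (one of many synthetic-data techniques, and not tied to any of the vendors quoted above); it assumes only NumPy and scikit-learn are installed.

```python
# Toy sketch: rebalance an imbalanced dataset by synthesizing minority samples.
import numpy as np
from sklearn.datasets import make_classification

# Build a deliberately imbalanced toy dataset: roughly 95% class 0, 5% class 1.
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=0
)

minority = X[y == 1]
n_needed = int((y == 0).sum() - (y == 1).sum())

rng = np.random.default_rng(0)
# SMOTE-style interpolation: blend each sampled minority point with another
# randomly chosen minority point to create a new, nearby synthetic example.
a = minority[rng.integers(len(minority), size=n_needed)]
b = minority[rng.integers(len(minority), size=n_needed)]
lam = rng.random((n_needed, 1))
synthetic = a + lam * (b - a)

X_balanced = np.vstack([X, synthetic])
y_balanced = np.concatenate([y, np.ones(n_needed, dtype=int)])
print("class counts before:", np.bincount(y))
print("class counts after: ", np.bincount(y_balanced))
```

As the contributors note, the quality checks still apply in practice: synthetic points should be reviewed for realism and bias before they enter a training set.
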
Yahoo
30-06-2025
- Business
- Yahoo
Scale AI's rivals say they're going hard to win its contractors and clients: 'Our servers are melting'
Scale AI's competitors say they are seeing an uptick in client inquiries after Meta's investment. AI training companies like Appen and Prolific are pitching themselves as neutral platforms. Rival companies also told BI they have seen a rise in interest from contractors on their platforms. Meta spent $14.3 billion to acquire nearly half of Scale AI and level up in the AI race — but the startup's rivals spy an opportunity as well. Executives at five Scale AI competitors told Business Insider that they have seen a big uptick in client inquiries and job interest since the Meta deal was announced on June 13. Meta now holds a 49% stake in a company handling AI training data for many of its competitors, like Google, OpenAI, and xAI. In response, those three companies paused at least some of their work with Scale AI. Independence from Big Tech has now become a core part of the pitch for rival AI training companies vying for those contracts. In a blog post following the Meta deal, Scale AI reassured clients that it remains a "neutral, independent partner." Ryan Kolln, the CEO of data annotation firm Appen, told BI the deal would "create a pretty big disruption to our industry and create huge opportunities for Appen and our peers to fill the hole that's going to be left by Scale." "The added pitch is, 'hey, we are a publicly listed company and we're really focused on data neutrality,'" added Kolln, whose company counts Amazon and Nvidia as clients. "Our customers are really evaluating their vendor ecosystem." UK-based Prolific, which provides vetted freelancers for academic and commercial AI research, is also using neutrality as a selling point, its CEO, Phelim Bradley, told BI. "We don't build models. We don't compete with our customers. We don't have conflicting incentives," Bradley said. He added that clients are now reluctant to go all in on a single AI training provider. Big companies often spread their work among vendors, like cloud providers. "Scale benefited a lot from their awareness and being synonymous with data labeling for Big Tech," Bradley said. "Now, it's a much easier question to answer: 'How are you different from Scale?'" A Scale AI spokesperson told BI that "nothing has changed" about its customer data protection. "A lot of this confusion is being driven by smaller competitors who seek to gain from promoting false claims," they added. "Security and customer trust have always been core to our business, and we will continue to ensure the right protections are in place to help safeguard all of our work with customers." Meta did not respond to a request for comment. Jonathan Siddharth, the CEO of Turing, which trains models for major AI labs including Meta, Anthropic, and Google, said that discussions with customers have increased tenfold as frontier labs realize they need "top talent and impartial partners." "Labs increasingly want a Switzerland-like collaborator — someone model-agnostic — who can help them win the AGI race, rather than being tied to a single player," he said, referring to artificial general intelligence. He added that data annotation companies are often working on the exact capabilities that differentiate one AI model from another. None of the Scale AI rivals that BI spoke with quantified the number of inquiries they have received from Big Tech companies since Meta's investment. Scale AI's competitors are also moving to pick up its freelance workers, some of whom have had projects they are working on paused after clients like Google halted them. 
Scale has at least 240,000 gig workers globally who conduct AI training projects, such as flagging harmful chatbot responses. After some of Scale AI's projects were paused, the market became flooded with freelancers looking for work. Sapien AI CEO Rowan Stone told BI that his company had 40,000 new annotators join within 48 hours of Meta's Scale AI deal. "Our servers are currently melting," Stone said last week. "Our engineering team spent the entire weekend bolstering load balancers, spinning up new infrastructure, and getting us ready for the load that we're seeing." Many of these new sign-ups were from India and the Philippines — regions where Scale AI had long been a leader, Stone added. "The change in user signup pattern coincides pretty neatly with the Scale news," he said. Mercor AI's head of product, Osvald Nitski, said that the startup has received applications from full-time Scale employees, adding, "Our hiring bar is extremely high — we're only taking the best people." Mercor says it works with six of the "Magnificent Seven" tech companies and is picking up projects from clients leaving Scale. In terms of contractors, Nitski said the company is focused on recruiting elite-level annotators, like International Math Olympiad medalists, Rhodes Scholars, and Ph.D. students. Nitski said it's been a busy two weeks at Mercor, as it has seen a sharp increase in inbound interest from major tech clients. "There was simply no time for podcasts and blog posts these past few weeks with all of the demand to be fulfilled," Nitski said. Read the original article on Business Insider


CNET
28-06-2025
- Business
- CNET
Anthropic's AI Training on Books Is Fair Use, Judge Rules. Authors Are More Worried Than Ever
Claude maker Anthropic's use of copyright-protected books in its AI training process was "exceedingly transformative" and fair use, US senior district judge William Alsup ruled on Monday. It's the first time a judge has decided in favor of an AI company on the issue of fair use, in a significant win for generative AI companies and a blow for creators. Two days later, Meta won part of its fair use case.

Fair use is a doctrine that's part of US copyright law. It's a four-part test that, when the criteria are met, lets people and companies use protected content without the rights holder's permission for specific purposes, like when writing a term paper. Tech companies say that fair use exceptions are essential in order for them to access the massive quantities of human-generated content they need to develop the most advanced AI systems. Writers, actors and many other kinds of creators have been equally clear in arguing that the use of their work to propel AI is not fair use.

On Friday, a group of famous authors signed an open letter to publishers urging the companies to pledge never to replace human writers, editors and audiobook narrators with AI and to avoid using AI throughout the publishing process. The signees include Victoria Aveyard, Emily Henry, R.F. Kuang, Ali Hazelwood, Jasmine Guillory, Colleen Hoover and others. "[Our] stories were stolen from us and used to train machines that, if short-sighted capitalistic greed wins, could soon be generating the books that fill our bookstores," the letter reads. "Rather than paying writers a small percentage of the money our work makes for them, someone else will be paid for a technology built on our unpaid labor."

The letter is just the latest in a series of battles between authors and AI companies. Publishers, artists and content catalog owners have filed lawsuits alleging that AI companies like OpenAI, Meta and Midjourney are infringing on their protected intellectual property in an attempt to circumvent costly, but standard, licensing procedures. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The authors suing Anthropic for copyright infringement say their books were also obtained illegally -- that is, the books were pirated. That leads to the second part of Alsup's ruling, based on his concerns about Anthropic's methods of obtaining the books. In the ruling, he writes that Anthropic co-founder Ben Mann knowingly downloaded unauthorized copies of 5 million books from LibGen and an additional 2 million from Pirate Library Mirror (PirLiMi). The ruling also outlines how Anthropic deliberately obtained print copies of the books it previously pirated in order to create "its own catalog of bibliographic metadata." Anthropic vice president Tom Turvey, the ruling says, was "tasked with obtaining 'all the books in the world' while still avoiding as much 'legal/practice/business slog.'" That meant buying physical books from publishers to create a digital database. The Anthropic team destroyed and discarded millions of used books in this process in order to prep them for machine-readable scanning, stripping them of their bindings and cutting the pages down to fit. Anthropic's acquisition and digitization of the print books was fair use, the ruling says. But it adds: "Creating a permanent, general-purpose library was not itself a fair use excusing Anthropic's piracy." Alsup ordered a new trial regarding the pirated library.
Anthropic is one of many AI companies facing copyright claims in court, so this week's ruling is likely to have massive ripple effects across the industry. We'll have to see how the piracy claims resolve before we know how much money Anthropic may be ordered to pay in damages. But if the scales tip to grant multiple AI companies fair use exceptions, the creative industry and the people who work in it will certainly suffer damages, too. For more, check out our guide to understanding copyright in the age of AI.