
What are the copyright implications of using Artificial Intelligence for schoolwork?
Consider, for instance, the protests that erupted when Christie's Auction House announced an AI art auction in February 2025: more than 6,000 artists signed an open letter calling for the auction's cancellation, arguing that the sale incentivized the 'mass theft of human artists' work.'
'These [AI] models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them,' the letter read. And yet, even as these protests pick up steam and AI-related copyright infringement lawsuits continue to spring up around the world, the laws surrounding AI copyright infringement remain undefined and evolving, much like the technology itself.
Where will these legal battles lead? If U.S. courts rule that the use of artificial intelligence tools constitutes copyright infringement, how will that affect students using AI in high school or higher education, spaces that rigidly enforce anti-plagiarism rules? Are we on the brink of major AI restrictions across the country?
While plagiarism itself is an ethical concept rather than a legally enforceable offense, copyright infringement is. Copyright law in the United States is a complex system intended to protect an artist's or creator's original expression, and it is within this system that artificial intelligence has faced its greatest hurdles so far.
The biggest development in the copyright battle against AI actually began two years before the generative AI boom, when the tech conglomerate Thomson Reuters sued a legal AI startup called Ross Intelligence in 2020. Thomson Reuters went on to win the lawsuit in 2025, when U.S. Circuit Judge Stephanos Bibas ruled that Ross Intelligence was not permitted to copy Reuters' information and content, marking the first major blow to the 'fair use' defense in AI.
'Fair use' is the foundational legal concept that companies like OpenAI and Meta Platforms invoke to justify their services, claiming that their AI systems study copyrighted materials in order to create new content; opponents counter that these companies are stealing that work to compete directly with its creators.
As mentioned, the notion of 'fair use' in AI is still open to legal interpretation, though we may see a definitive ruling on this hotly contested topic in one of the country's ongoing AI copyright lawsuits, such as Advance Local Media v. Cohere. In that case, a group of news publishers including Condé Nast, The Atlantic, and Vox alleged copyright infringement against Cohere Inc., claiming that the company used their copyrighted publications to build and operate its AI. Because the case combines multiple allegations of copyright infringement with Lanham Act violations, the outcome of Advance Local Media v. Cohere may produce the first ruling that definitively restricts 'fair use' in AI.
These cases demonstrate that AI plagiarism is not illegal yet, but as more of them are settled, we may see an increased crackdown on AI usage in art, in professional writing, and in schoolwork. In the future, using AI for schoolwork may expose you to copyright infringement claims as well as plagiarism accusations, so it's important to understand the safe, legal measures we can take to use this technology correctly.
So, what can be done to separate plagiarism from responsible AI use? Originality.ai explains that there are still many ways to use AI tools 'responsibly' during the content creation process, simply by taking care to recognize the copyright implications of your work. One method is to cite your sources extensively when writing essays or completing assignments, since AI often doesn't cite its sources during content generation. If AI cannot point to the source of a fact you want to include, it may be best to leave that fact out entirely.
Not only that, but AI should always be used as an assistant to your writing, rather than the author of what should be your own work. AI should never entirely replace your writing; instead, it can offer suggestions and additions to your content, or help you proofread what you have done. As our society moves toward greater AI integration in all walks of life (and with legal crackdowns on AI looming), it's essential that we use these tools purely to enhance our work, not to replace it.
AI plagiarism checkers can also help. They can confirm that you are not including plagiarized content and that you are using AI ethically. By following these steps, you can be confident that you are not plagiarizing others, establishing a baseline for ethical AI use ahead of any upcoming legal rulings.
Looking ahead, it's very possible that these AI tools will find themselves at the center of major copyright infringement lawsuits and restrictions, and as students we need to prepare for that eventuality. By studying the broader legal implications of AI, we can not only protect ourselves from plagiarism but also elevate our own work by refusing to take an ethically ambiguous shortcut.