Media company Thomson Reuters wins AI copyright case

Euronews · 13-02-2025
Thomson Reuters has won an early court battle over whether artificial intelligence (AI) programs can train on copyrighted material.
The media company filed a lawsuit in 2020 against now-defunct legal research firm Ross Intelligence. In it, Thomson Reuters argued that Ross Intelligence had used its legal research platform Westlaw to train an AI model without permission.
In his decision, Judge Stephanos Bibas ruled that US copyright law's "fair use" doctrine did not permit Ross Intelligence to use the company's content to build a competing platform.
The "fair use" doctrine of US law allows limited uses of copyrighted material, such as for teaching, research, or transforming the copyrighted work into something different.
"We are pleased that the court granted summary judgment in our favour," Thomson Reuters said in a statement to Euronews Next.
"The copying of our content was not 'fair use'".
Ross Intelligence did not immediately respond to a request for comment from Euronews Next.
Thomson Reuters' win comes as a growing number of lawsuits have been filed by authors, visual artists, and music labels against developers of AI models over similar issues.

Related Articles

Pandora second quarter operating profit in line with forecast, keeps 2025 outlook
Fashion Network · 5 days ago

By Reuters

Danish jewellery maker Pandora posted second-quarter operating profit in line with expectations on Friday and stuck to its full-year growth forecast. "In these turbulent times, we are satisfied with yet another quarter of high single-digit organic growth and strong profitability," CEO Alexander Lacik said in a statement. "Despite the macroeconomic challenges to top and bottom line, we are confident that we will deliver on our targets for the year, driven by an exciting product pipeline, new marketing campaigns and operational agility," he said. Operating profit for the second quarter was 1.29 billion Danish crowns ($201.6 million), in line with the forecast of analysts in a poll compiled by the company. Organic revenue growth stood at 8%, also in line with analyst expectations. The charm bracelet maker stuck to its full-year guidance of 7-8% organic sales growth and an operating profit margin of around 24%. © Thomson Reuters 2025 All rights reserved.

Why are social media sites betting on crowdsourced fact-checking?
Euronews · 09-08-2025

TikTok is the latest social media platform to launch a crowdsourced fact-checking feature. The short-form video app is rolling out the feature, called Footnotes, first in the United States. It lets users write a note adding context to a video and vote on whether other notes should appear under a video. A footnote could share a researcher's view on a "complex STEM-related topic" or highlight new statistics to give a fuller picture of an ongoing event, the company said. The new feature is similar to community-based fact-checking features on other social media platforms, such as X and Meta's Facebook and Instagram. But why are social media giants moving towards this new system to fact-check online claims?

What is community fact-checking?

Scott Hale, an associate professor at the Oxford Internet Institute, said that Twitter, now X, started the move to community notes in 2021 with a feature called Birdwatch. The experiment carried on after Elon Musk took control of the company in 2022. Otavio Vinhas, a researcher at the National Institute of Science and Technology in Informational Disputes and Sovereignties in Brazil, said that Meta's introduction of a community notes programme earlier this year is in line with a trend, led by US President Donald Trump, towards a more libertarian view of free speech on social media. "The demand is that platforms should commit to this [libertarian view]," Vinhas told Euronews Next. "For them, fair moderation would be moderation that prioritises free speech without much concern to the potential harm or the potential false claims it can push up". Hale told Euronews Next there is some scientific backing for crowdsourcing: studies show that crowds evaluating whether information was well fact-checked often arrived at the right verdict, and they frequently agreed with professionals, he said. But TikTok's Footnotes is slightly different from the other crowdsourcing initiatives on Meta or X, Vinhas said.
That's because the programme still asks users to add a source for their note, which Vinhas says is not mandatory on X.

Most notes don't end up on the platforms

The challenge for all social media companies is getting the right people to see the notes, Hale said. All three community programmes use a bridge-based ranking system, which estimates how similar two users are from the content they consume, based either on the other accounts they follow or the videos they watch, Hale said. The algorithm then shows a note to users who are considered "dissimilar" to each other to see if they both find it helpful; notes that pass the test become visible on the platform. In practice, though, the vast majority of notes written on the platforms are never seen, Vinhas said. A June study from the Digital Democracy Institute of the Americas (DDIA) of English- and Spanish-language community notes on X found that over 90 per cent of the 1.7 million notes in a public database never made it online. Notes that did make it to the platform took an average of 14 days to be published, down from 100 days in 2022, though there are still delays in how quickly X responds to these notes, the DDIA report continued. "I don't think these platforms can achieve the promise of bringing consensus and make the internet this marketplace of ideas in which the best information and the best ideas end up winning the argument," Vinhas said. Hale said it can be difficult for users to come across notes that might contradict their point of view because of "echo chambers" on social media, where users are shown content that reinforces the beliefs they already hold. "It's very easy to get ourselves into parts of networks that are similar to us," he said. One way to improve the efficiency of community notes would be to gamify them, Hale continued.
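The bridge-based ranking described above can be illustrated with a toy sketch. This is a deliberate simplification, not the platforms' actual algorithm: the function name, the sample data, and the idea of pre-assigned viewpoint clusters (derived, say, from follow graphs or watch history) are all assumptions made for the example. The core idea survives, though: a note surfaces only when raters from dissimilar groups independently judge it helpful.

```python
from collections import defaultdict

def surfaced_notes(ratings, user_cluster, min_per_cluster=2):
    """Toy 'bridging' filter: a note is surfaced only if at least
    `min_per_cluster` raters from *every* viewpoint cluster marked it
    helpful, i.e. it bridges dissimilar users.

    ratings: iterable of (user, note, helpful) tuples
    user_cluster: dict mapping user -> cluster label, assumed to be
    precomputed from the content each user consumes."""
    helpful_votes = defaultdict(lambda: defaultdict(int))
    for user, note, helpful in ratings:
        if helpful:
            helpful_votes[note][user_cluster[user]] += 1
    clusters = set(user_cluster.values())
    return sorted(
        note for note, per_cluster in helpful_votes.items()
        if all(per_cluster.get(c, 0) >= min_per_cluster for c in clusters)
    )

# Example: note 'n1' is rated helpful by users in both clusters,
# while 'n2' is rated helpful only within one cluster.
clusters = {"alice": 0, "bob": 0, "carol": 1, "dave": 1}
ratings = [
    ("alice", "n1", True), ("bob", "n1", True),
    ("carol", "n1", True), ("dave", "n1", True),
    ("alice", "n2", True), ("bob", "n2", True),
    ("carol", "n2", False), ("dave", "n2", False),
]
print(surfaced_notes(ratings, clusters))  # only 'n1' bridges both clusters
```

Real deployments replace the hard vote threshold with a learned model of rater viewpoints, but the filtering intuition is the same: agreement within a single like-minded group is not enough to publish a note.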
He suggested the platforms could follow Wikipedia's example, where contributing users have their own page listing their edits. Wikipedia also offers a host of service awards to editors based on the value of their contributions and the length of their service, and lets them take part in contests and fundraisers.

What else do social media sites do to moderate content on their platforms?

Community fact-checking is not the only method social media companies use to limit the spread of mis- or disinformation on their platforms, Hale and Vinhas said. Meta, X, and TikTok all use some degree of automated moderation to detect potentially harmful or violent content. Meta says it relies on artificial intelligence (AI) systems to scan content proactively and remove it immediately if it matches known violations of its community standards or code of conduct. When content is flagged, human moderators review individual posts to check whether the content actually breaches the rules or whether some context is missing. Hale said it can be difficult for automated systems to flag new problematic content, because they recognise the repeated misinformation claims they were trained on, meaning new falsehoods can slip through the cracks. Users themselves can also report content that may violate community standards, Hale said. However, Meta has said community notes would replace its partnerships with conventional fact-checkers, who flagged and labelled misinformation in the United States for almost a decade. So far, there are no signs the platform will end these partnerships in the United Kingdom and the European Union, media reports suggest. Hale and Vinhas said professional fact-checking and community notes can actually complement one another if done properly.

In that case, platforms would have an engaged community of people adding context in notes, as well as the rigour of professional fact-checkers, who can take additional steps such as calling experts or going straight to a source to verify whether something is true. Professional fact-checkers also tend to know the political, social, and economic pulse of the countries where disinformation campaigns may be playing out, Vinhas said. "Fact-checkers will be actively monitoring [a] political crisis on a 24-7 basis almost, while users may not be as much committed to information integrity," he said. For now, Vinhas said TikTok's model is encouraging because it is being used to contribute to a "global fact-checking programme", but he said there is no indication whether this will continue to be the case.

Sweden's leader uses ChatGPT. Should politicians use AI chatbots?
Euronews · 07-08-2025

Swedish Prime Minister Ulf Kristersson has stirred up public debate over politicians' use of artificial intelligence (AI) after telling local media he uses ChatGPT to brainstorm and seek a "second opinion" on how to run the country. Kristersson told the Swedish newspaper Dagens Industri that he uses ChatGPT and the French service LeChat, and that his colleagues also use AI in their everyday work. "I use it myself quite often, if for nothing else than for a second opinion. 'What have others done? And should we think the complete opposite?' Those types of questions," he said. The comment sparked backlash, with critics arguing that voters had elected Kristersson, not ChatGPT, to lead Sweden. Technology experts in Sweden have since raised concerns about politicians using AI tools in this way, citing the risk of making political decisions based on inaccurate information. Large language models' (LLMs) training data can be incomplete or biased, causing chatbots to give incorrect answers, so-called "hallucinations". "Getting answers from LLMs is cheap, but reliability is the biggest bottleneck," Yarin Gal, an associate professor of machine learning at the University of Oxford, previously told Euronews Next. Experts were also concerned about sensitive state information being used to train later models of ChatGPT, which is made by OpenAI and whose servers are based in the United States. Kristersson's press team brushed aside the security concerns. "Of course, it's not security-sensitive information that ends up there. It's used more as a sounding board," Tom Samuelsson, Kristersson's press secretary, told the newspaper Aftonbladet.

Should politicians use AI chatbots?

This is not the first time a politician has come under fire for their use of AI, nor even the first time in Sweden. Last year, Olle Thorell, a Social Democrat in Sweden's parliament, used ChatGPT to write 180 written questions to the country's ministers.
He faced criticism for overburdening ministers' staff, who are required to answer within a set time frame. Earlier this year, UK technology secretary Peter Kyle's use of ChatGPT came under fire after the British magazine New Scientist revealed he had asked the chatbot why AI adoption is so slow in the UK business community and which podcasts he should appear on to "reach a wide audience that's appropriate for ministerial responsibilities". Some politicians make no secret of their AI use. In a newspaper column in The National, Scottish Member of Parliament Graham Leadbitter said he uses AI to write speeches because it helps him sift through dense reading and gives him "a good basis to work from", but emphasised that he still calls the shots. "I choose the subject matter, I choose the evidence I want it to access, I ask for a specific type of document, and I check what's coming out accords with what I want to achieve," Leadbitter wrote. And in 2024, the European Commission rolled out its own generative AI tool, called GPT@EC, on an experimental basis to help staff draft and summarise documents.

ChatGPT available to US public servants

Meanwhile, OpenAI announced a partnership this week with the US government to give the country's entire federal workforce access to ChatGPT Enterprise at a nominal cost of $1 for the next year. The announcement came shortly after the Trump administration launched its AI Action Plan, which aims to expand AI use across the federal government to boost efficiency and cut time spent on paperwork, among other initiatives. In a statement, OpenAI said the programme would involve "strong guardrails, high transparency, and deep respect" for the "public mission" of federal government workers. The company said it has seen the benefits of using AI in the public sector through its pilot programme in Pennsylvania, where public servants reportedly saved an average of about 95 minutes per day on routine tasks using ChatGPT.
'Whether managing complex budgets, analysing threats to national security, or handling day-to-day operations of public offices, all public servants deserve access to the best technology available,' OpenAI said.
