
Latest news with #RossIntelligence

Taming the AI ‘beast' without losing ourselves

Business Times

11-08-2025



THE rise of artificial intelligence (AI) in the workplace is a double-edged sword. On the one hand, it promises unparalleled efficiency, cost savings and innovation. On the other hand, it fuels anxiety, job insecurity and mental strain for millions of workers. As AI continues its relentless march into every corner of the workplace, the psychological toll on employees cannot be ignored. The question is no longer whether AI will reshape work (it already has), but how we can harness its power without sacrificing human well-being. The challenge is not insignificant.

The beauty: AI's promise

AI's transformative potential is undeniable. In law, tools such as Ross Intelligence and Casetext analyse legal precedents in seconds, saving lawyers hours of painstaking research. AI-driven contract platforms such as LexisNexis and Kira Systems flag risks and suggest edits with near-human precision. For accountants, AI automates data entry, compliance checks and even audit sampling, reducing errors and freeing up time for higher-value work.

The gains are real. It is estimated that lawyers in the US, thanks to AI, could reclaim 266 million hours of billable time a year – roughly US$100,000 in additional annual revenue per lawyer. Similar efficiencies ripple across industries, from healthcare to finance. AI doesn't just streamline tasks; it redefines what's possible.

But this efficiency has a human cost. The uncomfortable (unarticulated) challenge is clear: these lawyers must now quickly uncover new value-added services to replace work that AI performs more quickly and cheaply.

The beast: AI's psychological toll

The dark side of AI's workplace revolution is the pervasive fear of obsolescence for the individual. In 2023, Pew Research found that 62 per cent of workers worry that AI could replace their jobs. Goldman Sachs estimates that 300 million jobs worldwide may be affected by AI and its algorithmic automation potential. The disruption is widespread, affecting low-skilled roles as well as professionals in law, accounting and even creative fields.

This AI challenge has a stark impact on individuals' mental health. Chronic job insecurity breeds stress, depression and burnout. The American Psychological Association links automation anxiety to decreased job satisfaction and heightened workplace tension. A 2023 study by the Organisation for Economic Co-operation and Development tied rapid upskilling demands to rising burnout rates, while Gallup found 48 per cent of workers feel overwhelmed by the pace of technological change, finding it hard to keep up – much less compete – with AI.

For those who do lose jobs to AI, the consequences are even grimmer. University of Cambridge research shows that communities hit by AI automation experience higher rates of substance abuse and suicide. Unemployed individuals are twice as likely to suffer mental health disorders, based on research from the University of Erlangen-Nuremberg. Put simply, AI's efficiency gains come with a high yet hidden tax on human well-being.

Taming the beast: mitigation strategies

So, how do we reconcile AI's benefits with its human costs? The answer lies in proactive, multipronged strategies that prioritise both productivity and mental health. Upskilling is often touted as the antidote to AI-driven job loss.
Companies such as Amazon and Google have invested billions in training programmes – Amazon's 'Upskilling 2025', for instance, pledged US$1.2 billion towards AI and cloud computing education for its employees. These initiatives are critical, but insufficient on their own. The pressure to constantly reskill can be a source of stress in itself. This is similar to 'technostress', the strain experienced by employees in digital fields who must continuously learn new software and tools. Reskilling programmes must be paired with career and personal counselling, flexible timelines and realistic expectations. Otherwise, we risk trading job insecurity for burnout.

Employers must recognise and treat AI-related stress as a workplace hazard. Access to mental health programmes, therapists and peer support networks can help employees navigate this AI-induced uncertainty. A 2023 Deloitte report highlighted that companies investing in mental health saw not just happier employees, but higher productivity. Workers in organisations with robust mental health support saw a 30 per cent drop in absenteeism.

Transparency and candour in the workplace are also key. Workers need clear communication about how AI will be integrated in the workplace, which roles may change, and how the company plans to support them. The principle behind this approach is simple: uncertainty fuels anxiety, whereas clarity fosters trust.

Preserving human connection

Unfortunately, AI's rise has coincided with a decline in workplace socialisation. Chatbots, virtual assistants and remote work tools reduce in-person interaction, exacerbating employee isolation. The American Psychological Association notes that remote workers relying heavily on AI report higher levels of isolation and loneliness. Employers should design workflows that balance automation with human collaboration. Hybrid models, team-building activities and 'AI-free' zones can help maintain needed social bonds. After all, productivity is not just about output; it is also about people.

But corporate initiatives alone will not solve the systemic challenges. Policymakers must step in with stronger social safety nets – universal healthcare, unemployment benefits tailored for displaced workers, and incentives for companies to retain human labour in the workforce. Ethical AI frameworks are also essential. Tech developers should prioritise tools that augment human work rather than replace it outright. The goal should be partnership and optimisation, not displacement. This may seem idealistic, but it is critical to the individual employee, the community and, ultimately, society.

Finding a way

AI is here to stay. The choice is not between embracing it or rejecting it. Rather, it is about shaping its integration with humanity in mind. OpenAI's Sam Altman mused about the potential for a one-person, billion-dollar company powered by AI, but we must ask: at what cost? This is not a zero-sum game. AI can drive progress without eroding mental well-being, but only if we act deliberately and intentionally. Employers, policymakers and tech leaders must collaborate to ensure that the AI revolution lifts people up rather than leaves them behind.

The stakes are high. If we fail, we risk winning the battle for efficiency (and technology), but losing the war for human mental wellness and relevance in the workplace. Should this happen, it would be a human tragedy of epic proportions – and one entirely of our own doing.
The writer is the group general counsel of Jardine Cycle & Carriage, a member of the Jardine Matheson Group. He sits on several commercial boards, including that of the charity Jardines Mindset, which focuses on mental health, and the global guiding council of the US mental health charity One Mind at Work.

What are the copyright implications of using Artificial Intelligence for schoolwork?

Los Angeles Times

20-05-2025



As artificial intelligence has evolved and entered mainstream use — with AI filters invading every corner of social media, and AI writing tools churning out everything from student essays to Sports Illustrated articles — there have been multiple instances of public outcry about the alleged plagiarism these tools encourage. Consider, for instance, the protests that erupted when Christie's Auction House announced an AI art auction in February of 2025; more than 6,000 artists signed an open letter calling for the cancellation of the auction, arguing that the sale incentivized the 'mass theft of human artists' work.'

'These [AI] models, and the companies behind them, exploit human artists,' read the letter, 'using their work without permission or payment to build commercial AI products that compete with them.'

And yet, even as these protests pick up steam, and AI-related copyright infringement lawsuits continue to spring up around the world, the laws surrounding AI copyright infringement remain undefined and evolving – much like the technology itself. Where will these legal battles go in the future? If the United States does rule that the use of artificial intelligence tools constitutes copyright infringement, how will that affect students using these AI tools in high school or higher education – spaces that rigidly enforce anti-plagiarism rules? Are we on the brink of major AI restrictions across the country?

While plagiarism itself is an ethical concept and not a legally enforced offense, copyright infringement is. Copyright law in the United States is a complex system intended to protect the expression of original works by an artist or creator, and it is within this system that artificial intelligence has faced its greatest hurdles so far.

The biggest development in the copyright battle against AI actually began two years before the generative AI boom, when the tech conglomerate Thomson Reuters sued a legal AI startup called Ross Intelligence back in 2020. Thomson Reuters went on to win the lawsuit in 2025, when U.S. Circuit Judge Stephanos Bibas ruled that Ross Intelligence was not allowed to copy information and content from Reuters – the first major blow to the concept of 'fair use' in AI. 'Fair use' is a complex foundational concept that companies like OpenAI and Meta Platforms use to justify their services, claiming that their AI systems study copyrighted materials in order to create new content, while opponents claim these companies are stealing their work to compete with them directly.

As mentioned, the notion of 'fair use' in AI is still open to legal interpretation, though it's possible we will see a definitive ruling on this hotly contested topic in one of the country's ongoing AI copyright lawsuits, such as Advance Local Media v. Cohere. In this case, a group of news publishers including Condé Nast, The Atlantic, and Vox alleged copyright infringement against Cohere Inc., claiming that the company used their copyrighted publications to build and operate its AI. Because this case involves multiple allegations of copyright infringement and Lanham Act violations, the outcome of Advance Local Media v. Cohere may be the first ruling that definitively restricts 'fair use' in AI.

These cases demonstrate that AI plagiarism is not illegal yet, but as more and more cases are settled, we may see an increased crackdown on AI usage in art, in professional writing, and in schoolwork.
In the future, use of AI for schoolwork may open you up to copyright infringement as well as plagiarism, and it's important to understand the safe, legal measures we can take to use this technology correctly. So, what can be done to differentiate plagiarism from using AI in a responsible way?

There are still many ways to use AI tools 'responsibly' during the content creation process, simply by taking care to recognize the copyright implications of your work. One method is extensively citing your sources when writing essays or completing assignments, since AI often doesn't cite its sources directly during content generation. If AI does not cite the sources of information you wanted to include, it may be best to leave those facts out entirely.

Not only that, but AI should always be used as an assistant to your writing, rather than the author of what should be your own work. AI should never entirely replace your writing; rather, it should offer suggestions and additions to your content, or help you proofread what you have done. As our society moves towards greater AI integration in all walks of life (and with legal crackdowns on AI looming in the future), it's essential that we use these tools purely to enhance our work, not to replace it.

The use of AI plagiarism checkers can also be helpful. These can be used to confirm that you are not including plagiarized content, but are instead using AI in an ethical way. By following these steps when using AI in your work, you can be sure that you are not plagiarizing others, establishing a baseline for ethical AI use in advance of any upcoming legal settlements.

Looking to the future, it's very possible that these AI tools will find themselves at the center of large copyright infringement lawsuits and restrictions, and as students we need to prepare ourselves for that eventuality. In studying the greater legal implications of AI, we can not only protect ourselves from plagiarism, but elevate our own work by refusing to take advantage of an ethically ambiguous shortcut.

Media company Thomson Reuters wins AI copyright case

Euronews

13-02-2025



Thomson Reuters has won an early battle in court over whether artificial intelligence (AI) programs can train on copyrighted material. The media company filed a lawsuit in 2020 against now-defunct legal research firm Ross Intelligence, arguing that the company used Thomson Reuters' own legal platform, Westlaw, to train an AI model without permission.

In his decision, Judge Stephanos Bibas ruled that Ross Intelligence's use of the company's content to build a competing platform was not protected under US copyright law's 'fair use' doctrine. The 'fair use' doctrine allows for limited uses of copyrighted materials, such as for teaching, research, or transforming the copyrighted work into something different.

"We are pleased that the court granted summary judgment in our favour," Thomson Reuters said in a statement to Euronews Next. "The copying of our content was not 'fair use'." Ross Intelligence did not immediately respond to a request for comment from Euronews Next.

Thomson Reuters' win comes as a growing number of lawsuits have been filed by authors, visual artists, and music labels against developers of AI models over similar issues.

Thomson Reuters scores early win in AI copyright battles in the US

Yahoo

12-02-2025



LOS ANGELES (AP) — Thomson Reuters has won an early battle in court over the question of fair use in artificial intelligence-related copyright cases. The media and technology company filed a lawsuit against Ross Intelligence — a now-defunct legal research firm — in 2020, arguing it had used materials from Thomson Reuters' own legal platform Westlaw to train an AI model without permission.

Judge Stephanos Bibas of the 3rd U.S. Circuit Court of Appeals issued a decision Tuesday that affirmed Ross Intelligence was not permitted under U.S. copyright law to use the company's content to build a competing platform. Thomson Reuters and Ross Intelligence did not immediately respond to a request for comment.

In his summary judgment, Bibas said that 'none of Ross's possible defenses holds water' and ruled in favor of Thomson Reuters on the issue of 'fair use.' The 'fair use' doctrine of U.S. law allows for limited uses of copyrighted materials, such as for teaching, research or transforming the copyrighted work into something different.

Thomson Reuters' win comes as a growing number of lawsuits have been filed by authors, visual artists and music labels against developers of AI models over similar issues. What links each of these cases is the claim that tech companies ingested huge troves of human writings to train AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works. OpenAI and its business partner Microsoft have also battled copyright infringement cases led by writers such as John Grisham, Jodi Picoult and 'Game of Thrones' novelist George R. R. Martin, and another set of lawsuits from media outlets such as The New York Times, Chicago Tribune and Mother Jones.

Some Good News for Hollywood Creators Suing AI Companies

Yahoo

12-02-2025



Copyright law bars a former competitor of Thomson Reuters from using the company's content to create an artificial intelligence-based legal platform, a court has ruled, in a decision that could lay the foundation for similar rulings over the legality of using copyrighted works to train AI systems. U.S. District Judge Stephanos Bibas on Tuesday rejected arguments from Ross Intelligence that it's protected by the 'fair use' exception to copyright protections. The court's ruling on the novel issue will likely be cited by creators suing tech companies across Hollywood, though the case doesn't involve new content generated by AI systems.

'Originality is central to copyright,' Bibas wrote.

Shortly after the court issued the ruling, Concord Music Group moved for the federal judge overseeing its lawsuit against Amazon-backed Anthropic over the use of song lyrics to train Claude to consider the order in evaluating the case.

The case revolves around a Thomson Reuters legal research platform in which users pay to access information regarding case law, state and federal statutes, law journals and regulations. Content includes headnotes that summarize key points of law and case holdings, which are copyrighted. Ross, a now-defunct AI company backed by venture firm Y Combinator, used a form of those headnotes to train a competing legal search engine after Thomson Reuters declined to license the content. The key difference between this case and other AI lawsuits is that there was an intermediary that repurposed the copyrighted work for AI training. In lawsuits against OpenAI, Meta and Anthropic, among others, creators allege wholesale copying of material.

In Tuesday's ruling, Bibas found that Ross may have infringed more than 2,200 headnotes. To decide damages, a jury will determine whether any of Thomson Reuters' copyrights have expired. The court's decision turned, in part, on whether the headnotes constitute original works protected by intellectual property law. Bibas, who ruled in favor of Ross Intelligence on summary judgment in 2023 in a decision that was withdrawn shortly before trial, sided with Thomson Reuters on the issue, since headnotes can 'introduce creativity by distilling, synthesizing, or explaining part of an opinion.'

'More than that, each headnote is an individual, copyrightable work,' the judge wrote. 'That became clear to me once I analogized the lawyer's editorial judgment to that of a sculptor. A block of raw marble, like a judicial opinion, is not copyrightable. Yet a sculptor creates a sculpture by choosing what to cut away and what to leave in place. That sculpture is copyrightable.'

Also of note: Bibas declined to find fair use, which provides protection for the utilization of copyrighted material to make another work as long as it's 'transformative.' On this issue, he noted that Ross intended to profit off its use of Thomson Reuters' headnotes, which 'disfavors fair use.' The court stressed, 'Even taking all facts in favor of Ross, it meant to compete with Westlaw by developing a market substitute. And it does not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough.'
The court pointed several times to the Supreme Court's decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively reined in fair use. In that case, the majority said that an analysis of whether an allegedly infringing work was sufficiently transformed must be balanced against the 'commercial nature of the use.' Creators are leveraging that ruling to argue that AI companies could've simply licensed the copyrighted material and that the markets for their works were undermined.

Randy McCarthy, an intellectual property lawyer at Hall Estill, says that the court's ruling will be 'heralded by existing groups of artists and content creators as the key to their case against the other generative AI systems.' He adds, 'One thing is clear, merely using copyrighted material as training data to an AI cannot be said to be fair use per se.'

Whether the legal doctrine applies remains among the primary battlegrounds for the mainstream adoption of AI.
