Latest news with #AICompanies


CNET
29-05-2025
- Business
- CNET
Use AI at Work? Your Coworkers May Be Judging You
Bosses everywhere are saying generative AI is the future. The signals emanating from the C-suites of corporations big and small are clear: if artificial intelligence doesn't take your job, it will at least change it significantly. The catch: if you use AI at work, your coworkers and maybe even your managers may think you're lazy. That is, if you can get hired in the first place.

That's the finding of a new study by researchers at Duke University, published this month in the journal PNAS. Across four studies, the researchers examined whether people who used AI at work worried that others would see them as lazy or incompetent, and whether those fears were valid.

"We found there was this universal social evaluation penalty, where people described as using AI are evaluated as being less competent, less diligent and lazier than people who are described as receiving help from all sorts of other sources," Jessica Reif, a Ph.D. candidate at the Duke University Fuqua School of Business and lead author of the study, told me.

The study highlights the gap between the hype over AI at work and the reality on the ground. Although business leaders and AI companies can't stop themselves from envisioning a utopian AI future in which autonomous agents do most of the work and humans focus on truly creative tasks, workers are skeptical. That skepticism affects how people view coworkers who use these tools: in a recent Pew survey, only 23% of American adults said they expect AI to improve how people do their jobs.

People worry they are judged for using AI

The Duke University team first looked at whether employees would hesitate to admit using an AI tool rather than a non-AI tool. The first of the four studies found that the 500 online participants were more likely to believe a manager or colleague would judge them as lazy, replaceable or less competent if they said they use a generative AI tool versus a non-AI tool.

The second study confirmed it. Its 1,215 participants read a paragraph about an employee and rated how lazy, competent, diligent, ambitious, independent, self-assured or dominant they perceived the person to be. The people being rated were described as receiving help either from generative AI (like a lawyer using a tool to summarize information) or from non-AI sources (like a paralegal), or were in a control group with no statement about help. People who received AI help were seen as lazier, less competent, less diligent, less independent and less self-assured than either the control group or those receiving non-AI help.

The case of a lawyer getting help from AI versus a paralegal is just one example. The researchers used 384 different scenarios, with different jobs and types of help. "What we found is that this was pretty consistent across all the occupations we queried," Reif said.

In the third study, the researchers had 1,718 participants serve as "managers" hiring someone for a task. Some of the "candidates" were described as using AI regularly, and some as never using it. The managers were also asked about their own AI use. Managers who use AI regularly were more likely to see candidates who use AI as a good fit, while those who don't generally preferred candidates who don't.

The third study left it unclear whether AI would actually be helpful for the task, so in the final study, participants were asked to imagine they were hiring a gig worker for a task.
They were then asked to evaluate workers who used either AI tools or non-AI tools and to rate how they would perceive them for manual tasks or digital tasks. The results showed that while people who used AI were still seen as lazier, that perception was reduced if the evaluator also uses AI or if AI is clearly useful for the task.

But the absence of a penalty doesn't mean AI users gained a perception advantage in that last study, according to Richard Larrick, one of the authors and a professor of management at Duke University. "The people themselves who are heavy AI users don't actually kind of give any particular benefit or reward, in terms of their perceptions, to the AI user," Larrick said. "So it isn't like there's some boost in perceptions when high AI users think about another AI user. It's just that you wipe out for them the laziness perception."

Your CEO may think AI is the future

Ever since large language models like ChatGPT burst onto the scene in 2022, management consultants and corporate executives have been touting generative AI as the next big thing in the workplace. Workplace apps from companies like Google and Microsoft seem more packed each day with new AI functions and prompts. As the technology has matured and more useful applications have emerged, that conviction has only grown stronger at many companies. Shopify and Duolingo, for instance, both recently announced they would prioritize AI-driven work and check whether an AI can do a job before hiring a new employee or contractor.

A commandment from a CEO to be AI-first is one thing. Actually changing the culture in your workplace, and among the people you work with, is entirely different. "I think there are cases where, when the rubber meets the road implementing tools like generative AI, there are challenges," Reif said. "What we're showing is just one such challenge of many."

She speculated that as more employers, especially tech-savvy ones, prioritize AI use and skills, the social costs will eventually fall. "I think it's going to take a while for this penalty to really go away," she said. Larrick said that even if general perceptions of AI users change, the social penalty may only disappear for certain tasks. For some work, using generative AI will be more acceptable; for others, it won't.

How to avoid judgment from coworkers

One way not to be judged at work is not to use AI on the job. And that may be what people are already doing, given the simple fact, established in the researchers' first study, that people will judge you. "As long as my choice of adopting AI is based on my theory of what others will think, even as what other people think changes, if my theory doesn't change fast enough, I still might be reluctant to use it and to reveal it," Larrick said.

Another way to counter the perception of laziness is to point out whether AI is saving you time and whether the time saved is being used well, Reif said.

Perceived laziness isn't the only problem with using generative AI at work. There are also concerns about whether the work you ask it to do is accurate and competent. So be sure you're checking your work, and show that you are, in fact, using skills that can't be easily replaced, said Jack Soll, one of the authors and a professor of management at Duke University. "The more that employees can make their peers and their bosses understand that it takes skill and knowledge in order to use it appropriately, I think others can then appreciate their AI use," he said.


Entrepreneur
10-05-2025
- Business
- Entrepreneur
Publishers Seek Legal Safeguards Against Artificial Intelligence Use
This story originally appeared on Calendar.

Major publishing houses are calling for increased legal protections against artificial intelligence systems that use their content without permission or compensation. The growing concern comes as AI companies continue to train their models on vast amounts of published material, often without explicit authorization from content creators. The publishing industry, already facing significant economic challenges in the digital era, now confronts what many executives describe as an existential threat from AI technologies that can generate content based on its intellectual property.

The Core of the Dispute

The issue centers on how AI systems are developed and trained. Large language models and other AI systems require massive datasets to learn from, and published books, articles and other content represent some of the highest-quality training material available. Publishers argue that when AI companies use this content without permission or payment, it constitutes copyright infringement.

"Our authors and publishing houses invest significant resources to create original works," said one industry representative. "When AI systems absorb and repurpose this content without compensation, they're essentially building commercial products on the back of our intellectual property."

Several major publishing groups have begun taking legal action, filing lawsuits against prominent AI developers. These cases may establish essential precedents for how copyright law applies to AI training data.

Economic Impact on the Publishing Industry

Publishers point to several ways AI threatens their business model:
- AI-generated content competing with human-written works
- Decreased licensing revenue when AI systems use content without payment
- Potential market disruption as AI tools become more sophisticated

Financial analysts note that the publishing industry operates on thin margins, leaving it particularly vulnerable to technological disruption. Small and medium-sized publishers may face the greatest risk if AI systems can generate content that satisfies consumer demand without compensating original creators.

Proposed Solutions

Publishers are advocating for several approaches to address their concerns. Many support legislation that would explicitly require AI companies to obtain licenses for training data. Others propose technical solutions that let content owners opt out of AI training datasets.

"We're not against innovation," explained a publishing executive. "We simply want fair compensation when our intellectual property is used to build commercial AI products."

Some AI companies have begun negotiating licensing agreements with certain publishers, suggesting a potential path forward. These agreements typically involve financial compensation and limits on how the AI can use or reproduce the licensed content.

The Broader Debate

The publishers' concerns reflect a broader conversation about AI's impact on creative industries. Musicians, visual artists and filmmakers have raised similar issues about their work being used to train AI systems without permission or payment. Legal experts note that existing copyright law wasn't designed with AI in mind, creating uncertainty about how courts will rule on these disputes.
Some scholars argue that AI training falls under "fair use" provisions, while others contend that commercial AI development requires explicit licensing. Technology advocates caution that overly restrictive regulations could hamper AI innovation, arguing that a balance must be struck between protecting creators' rights and allowing technological progress.

As these legal battles unfold, they will likely shape the future of publishing and how AI development proceeds across creative industries. For publishers, the outcome may determine whether AI becomes a partner or a competitor in creating and distributing written content.