My Coworkers Keep Taking This Stupid Shortcut. I Am Filled With Rage.
Good Job is Slate's advice column on work. Have a workplace problem big or small? Send it to Laura Helmuth and Doree Shafrir here. It's anonymous!
Dear Good Job,
I am a hard-line hater of generative AI (ChatGPT, Midjourney, etc.). I think it's bad for the environment and bad for society. It drains water resources, exploits workers in the Global South, plagiarizes art and writing, and eliminates badly needed entry-level jobs. In my ideal world, generative AI would be regulated out of existence.
Unfortunately, I work in an office that has completely embraced generative AI as both an efficiency tool and a 'fun' teambuilding thing. I worked as a temp at this company for eight months in a position where AI was less prevalent, but now, in my new permanent position, it's everywhere. As I write this, I'm watching a Teams chat where my new boss and coworkers are merrily generating and re-generating an AI logo graphic for a new department they want me to run (the department was also named based on AI suggestions). It's driving me insane with rage.
As much as I would love to bring everyone over to my way of thinking about AI, right now I would settle for them just keeping it away from me. Is there a script I can use to convey that I don't want to engage with it, without accusing them of being bad people for using it? A few months ago, I jokingly mentioned my distaste to a coworker, and her response was to tell me, as a 'fun' teasing thing, every time she used ChatGPT. I'd like to avoid that result this time if I can.
—The Luddites Were Right Too
Dear The Luddites Were Right Too,
I'm also not a huge fan of AI, and I think that a lot of the people who are embracing it so wholeheartedly are going to embrace themselves out of a job in the next few years. Not to mention, as you point out, that the use of AI comes with a whole host of ethical and moral issues. TL;DR: AI, not great!
That said, while I'm not going to urge you to start using AI yourself, I do think we are a bit past the point of no return. AI is here whether we like it or not, and although the Luddites may have been right, they also probably weren't working in 21st-century corporate America. So what is a principled, AI-hating person like yourself to do? Here is a clear, forceful script you can use whenever you're encouraged to use AI in your own work: 'I respect that the team is using AI, but I'd like to not use it if at all possible.' I would avoid going into your philosophical objections, because your team has already made it clear that they're not receptive to them; at this point, it's simply a boundary that you're setting. If your colleague continues to tease you about your distaste for ChatGPT, practice not reacting to her provocations. She'll soon get bored and move on.
Laura Helmuth and Doree Shafrir want to help you navigate your social dynamics at work. Does your colleague constantly bug you after hours? Has an ill-advised work romance gone awry? Ask us your question here!
Dear Good Job,
I teach third grade, and a common problem I run into is that the kids I teach think nothing of using profanity in class. Often, they learn this from their parents and are permitted to engage in it at home, and in turn bring it to school. I have tried explaining that certain standards are expected at school. I tell the kids they should view school as their workplace, and at workplaces a certain level of professionalism is required. The trouble is, many kids are so accustomed to cursing at home that it inevitably slips out casually or in moments of frustration. I find that punishment does little to curb it. One child pointed out that 'everybody cusses' so I shouldn't make a big deal over it. And I grudgingly have to admit she is correct. It's not as if cursing isn't everywhere in society. Should I just ignore it when one of my students swears, or should I continue to try and dissuade them from using profane language?
—Aw, Fuck It!
Dear Aw, Fuck It!,
I commend you for trying to uphold some modicum of decorum in your classroom! I would continue to emphasize that swearing is not allowed in your classroom, and that—as you point out—there can be different rules for home and school. I do wonder whether you could take a bit more control of the situation here, though. I would start by working with the kids to come up with a set of classroom agreements—with 'no swearing' among them. By bringing them into the creation of this code, they'll feel more ownership over it. My son's kindergarten class does this, and each child has to sign it (well, to the extent that a kindergartener can sign their name!). The kids take the agreements really seriously! I know that your students are a little older, but this could be a good place to start. Once that's in place, I would not be shy about pointing to the classroom agreements. You're not shaming them or instituting harsh punishments here; you're just letting them know that everyone has collectively decided that the classroom is not the place for this kind of language.
That said, I don't think that you need to raise an alarm every single time you hear a 'dammit' slip out. Kids are going to mess up, and there's a big difference between someone muttering 'shit' under their breath and yelling 'fuck you!' at someone. After you have the classroom agreements in place, I would also take note of whether it's the whole class or just one or two students who regularly curse. If it's just a couple of kids who can't seem to stop, it might be worth having a conversation with their parents to let them know they might want to cool it with the swearing at home, too.
Slate Plus members get more Good Job every week. Sign up now to read Doree Shafrir's additional column this week.
Dear Good Job,
I shared a marketing idea of mine with a co-worker. They then proceeded to immediately go to our boss and pitch it. Our boss loved it, and my sleazebag co-worker is claiming credit! I hadn't told anyone else about my idea, and I didn't have anything on my computer or written down. It was just an idea kicking around in my head, so I don't have any proof I came up with it first. Is there anything I can do to get the credit I deserve that won't make me come off looking like a jealous asshole?
—Purloined Proposal
Dear Purloined Proposal,
Oh, I am shaking with rage over the nerve of your co-worker! I can't imagine being so underhanded that I would stoop so low as to steal an idea from a colleague. That's true slimeball behavior.
You have a few options here. One is to speak to your boss in as neutral and objective a manner as possible. If not too much time has passed, you could say something like, 'So great that you liked the marketing idea. I'd love to be involved in any next steps, as it was something I'd been mulling over for a while and had just mentioned to [Slimeball] in casual conversation—I didn't realize they were going to be pitching it formally!' You're not exactly accusing Slimeball of stealing your idea, but you're making it clear that you came up with it first, and staking a claim to be involved with its development. In the meantime, I might send Slimeball a note (so it's documented in writing) that says, 'So glad my marketing idea is being used, but I would love to chat beforehand the next time you're thinking about pitching something we've talked about!' Now you've covered all your bases with both your co-worker and your boss, and hopefully this won't be an issue in the future.
— Doree