Shopify Workers Are Expected to Use Gen AI at Work. Is Your Job Next?
Something like that question might be on the next performance reviews for at least one employer. In a memo posted online after it leaked and was reported on by CNBC and others, Shopify CEO Tobi Lutke said using AI in the workplace is no longer optional at the e-commerce software firm, which employed about 8,100 people at the end of 2024.
"Using AI effectively is now a fundamental expectation of everyone at Shopify," Lutke wrote in the memo.
Gen AI tools like OpenAI's ChatGPT and Google's Gemini are increasingly being touted as game-changers at the office, with business leaders saying they can make employees more efficient. At the same time, that transformation has raised concerns that these tools will replace humans, leading to fewer jobs. A recent Pew Research Center survey found 64% of American adults expected AI growth would lead to fewer jobs.
Shopify is one company emphasizing gen AI in the workplace, but it isn't the only one. What happens when your boss adds "use AI" to your job responsibilities?
Lutke's memo emphasized the importance of Shopify's employees tinkering with AI and spelled out certain requirements, including sharing what they've learned about using AI tools. He also said teams would need to demonstrate why AI can't meet needs before asking for more resources or to hire new employees.
The memo clearly shows one potential impact of gen AI on the availability of jobs: Companies will be less willing to hire if that work can be done by AI instead.
That fear is widely shared, with more Americans worried than hopeful about AI's impact on jobs, according to a separate Pew survey released in February that focused on Americans' thoughts on AI at work.
Despite the widespread fears, Nicole Sahin, CEO and founder of G-P, a global employment and human resources firm, told me she still sees companies hiring workers at the pace you'd expect in a growing labor market.
"Companies are definitely hiring people and they can't find enough talent," she said. "I don't feel that hiring is slowing down."
What is changing, perhaps, is that people hired for jobs that can be done alongside gen AI tools are being chosen based on their ability to be creative and versatile with that technology, Sahin said.
The Shopify memo and its expectations around AI use are "the beginning of the new normal," Sahin said. G-P released a survey this week of more than 3,000 global executives and HR professionals, with 91% of executives reporting they're scaling up AI efforts at their companies.
Sahin said she sees the issue as one where companies expect workers to be willing to experiment and be creative with technology. "The willingness to be nimble is extremely important," she said.
Experts say the expanding use of gen AI in the workplace is changing the skills employees need to thrive. Many workers, including those in entry-level positions, will need to rely more on subject matter expertise and judgment rather than the skills to do tasks that can be done by an AI tool instead.
Most workers in the February Pew survey said they don't use AI chatbots at all or use them rarely, and only 16% reported using AI in their jobs.
Even younger workers generally aren't using AI in their jobs. A Gallup survey released this week asked Gen Z adults about their use of gen AI in the workplace. Only 30% said they used it for work, and more than half said their workplace didn't have a formal AI policy. The survey found 29% said AI doesn't exist for their work and 36% said the risks outweighed the benefits in their jobs.
Just because you can or do use AI at work doesn't mean it's worth it. A report this month by the consultancy firm Coastal found half of the business leaders it surveyed said they've seen no measurable return on investment from AI, and only 21% reported proven outcomes. Coastal attributed this gap between hype and results to the disconnect between experimentation and strategy.
"Without clear business alignment or defined outcomes, AI risks staying stuck in the 'interesting but isolated' category," the Coastal report said.
Gen AI systems like ChatGPT may be able to generate answers to a wide variety of queries, but they don't arrive at those answers the same way a human would. For one, they're prone to errors known as hallucinations -- essentially making stuff up instead of acknowledging they don't know the answer.
That makes it essential to use AI wisely and not trust its answers as always being correct. That's especially true of large, general-purpose language models like ChatGPT, which are trained on vast amounts of data, not all of it accurate or relevant to your job.
Those kinds of models "really should not be used for work," Sahin said. "When you're thinking about using AI in business, it can't hallucinate, it can't get things wrong."
In the workplace, you want specialized tools that are less likely to hallucinate and are easier to verify and correct, she said. Workers need to be able to detect those issues and fix them in order to use AI well.
At Shopify, learning those skills is just part of the job now, Lutke wrote. "Frankly, I don't think it's feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow."
