I sat in on an AI training session at KPMG. It was almost like being back at journalism school.
The Big Four firm recommended its interns try out five prompting techniques.
The session showed me that learning to use AI is more about using language well than about technical skill.
On a sweaty Monday morning in June, I joined 90 KPMG tax interns in an air-conditioned classroom in Florida.
We were there for one reason: to learn how to use AI.
We gathered at Lakehouse, KPMG's gleaming training facility in Lake Nona, Florida. I had been invited to spend two days at the facility as part of my ongoing reporting on AI and the consulting industry. The class had already started, so I slipped in quietly and took a seat.
Given that KPMG helps some of the world's biggest companies figure out how to use AI, I was curious to see how the consulting giant was teaching its own employees to use the technology.
My biggest takeaway? Upskilling for the AI era can be surprisingly basic.
The five ways of asking AI a question
Four large screens — two on each wall — hung from the ceiling of the classroom, and the 90 interns sat around circular tables in groups of six.
Sherry Magee, a senior director at KPMG, told me during a tour of the property that the space was designed so there was no "front row of the classroom," which she said encourages participation.
The training session was run by two KPMG employees. They opened with a question about the interns' use of AI: "Who just generally goes straight to it and just starts typing?"
Pretty much everyone in the room raised a hand.
One of the instructors explained that, rather than simply chatting with AI, the interns could use five prompting techniques to shape the model's response and get the "best, most relevant, and accurate output."
Each technique suits a different kind of task, the instructors explained. A "chain of thought" prompt, for example, can be useful for getting AI to show its work. This "thinking aloud" approach is more transparent, which helps tax professionals who need to check the results for inaccuracies.
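To make that concrete, here is a minimal sketch of the difference between a plain prompt and a chain-of-thought prompt. The tax question, the figures, and the wording are my own invented illustration rather than KPMG's training material, and the snippet only builds the prompt text instead of calling any particular model.

```python
# Invented example: the same question asked plainly and with a
# chain-of-thought instruction. Figures and wording are illustrative only.

question = (
    "A client has $40,000 of long-term capital gains and $10,000 of "
    "long-term capital losses this year. What is the net taxable gain?"
)

plain_prompt = question

chain_of_thought_prompt = (
    f"{question}\n\n"
    "Work through this step by step: list each figure you are using, show "
    "the arithmetic, and only then state the final answer on its own line. "
    "If any information you need is missing, say so instead of guessing."
)

print(plain_prompt)
print("---")
print(chain_of_thought_prompt)
```

The extra instruction matters because the model's intermediate steps end up in the output, where a reviewer can check them.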
"Flipped interaction" prompts could be used in a tax setting to prepare a client profile or to tailor advice.
The instructor told the interns that asking AI to prompt them with questions can be helpful for "making you think of things you don't often think of yourself."
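Here is a similar sketch of a flipped-interaction prompt, in which the model is told to ask questions before giving any advice. Again, the scenario and wording are my own illustration, not material from the session.

```python
# Invented example of a "flipped interaction" prompt: the model is asked to
# interview the user before offering any advice.

flipped_interaction_prompt = (
    "You are helping a tax associate build a client profile. Do not give "
    "any advice yet. Ask me one question at a time about the client's "
    "income sources, filing status, residency, and upcoming life events "
    "until you have enough detail to tailor your recommendations. Then "
    "summarize what you have learned and ask me to confirm it before "
    "advising."
)

print(flipped_interaction_prompt)
```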
Echoes from my journalism degree
The two-hour training session covered the foundations of KPMG's AI tool for tax, the Digital Gateway. It also explained the concept of AI personas, introduced techniques to reduce hallucinations, and taught the interns KPMG's key ethical principles to apply when using AI.
The interns were also taught how to provide AI tools with the right detail and tone for their target audience.
I'm no tax expert, but I am a journalist — and what struck me during the session was how much these lessons echoed the ones I was taught in journalism school. The success of an interview hinges on the quality of the questions I ask the other person. As KPMG stressed in its training session, thinking carefully about how you communicate information to an AI is how you get the most helpful output from it.
"The more detail you give it, the more likely it is to predict the next thing correctly," explained the instructor.
The intern training focused on admin-related examples of using AI, like drafting emails or creating slide decks. It wasn't the most complicated or advanced stuff, but the session I attended was for summer interns in the tax division, so I wasn't seeing how the firm's leading technicians tackle AI.
More senior employees are using AI for industry research and preliminary audit memos, Becky Sproul, a KPMG audit partner, told me during an interview later that day.
They're presenting AI with client documentation, along with auditing and accounting standards, and asking it to write "a memo going through all the various attributes of the accounting standard," she said.
That preliminary work can help get tax professionals "80% of the way there," Sproul said.
The firm is also building AI agents where the "agent almost becomes like a team member," and is using engagement metrics to encourage employees to use AI, she said.
The other Big Four firms — Deloitte, EY, and PwC — have also deployed agentic AI platforms this year, which they all present as being transformative for the workforce and productivity.
Teaching methods
The simple teaching methods used in the session were another reminder that while AI is complicated, learning how to use it doesn't have to be.
The interns were learning about technology that is transforming workplaces, but they were still using large cardboard flipboards to share their ideas.
The session had one extra reminder for workers in the AI age: Take a break from your screen.
At one point, an employee from the Lakehouse's "stretch" team, its on-site gym, ran into the center of the room and announced he was there to lead them through a "wellness break," which entailed five minutes of stretching and breathwork accompanied by relaxing music.
The interns stood up with bemused expressions, but soon the whole class was loosening up, which the instructor told them would help them stay focused.
"This is what we signed up for, right?" I heard one intern joke as he lunged forward.
Choosing to prioritize my reporting over a relaxing stretch, I didn't join in.
