The secret to successful AI in hiring? Stop overthinking it
Good news for any talent leaders feeling overwhelmed by the onslaught of AI developments: "If they're reading this article, they shouldn't worry," said Trey Causey, Indeed's head of Responsible AI. "Anyone even considering using AI is already ahead of the curve."
In this interview, Causey cuts through the hype and breaks down what you need to know about the latest AI technology in hiring — generative AI (GenAI) and agentic AI — and the much-discussed arrival of artificial general intelligence (AGI), sometimes called superintelligence. The conversation below has been edited for length and clarity.
GenAI, such as ChatGPT, has been around for a while. How does agentic AI differ, and what use cases do you see in hiring and retention?
Causey: What makes GenAI unique is that, as the name suggests, it generates new outputs and content based on a prompt or set of instructions the user provides. That could be drafting a job description, handling customer service inquiries — anything where you need ideas.
Agentic AI is the natural next step. Many current GenAI systems are chat-based, and everything happens within the confines of that chat. But AI "agents" are like assistants that can do things for us. It's their independent actions that separate agentic AI.
For instance, you can set up an agent to review the daily applications to an open role, summarize those applications, identify the candidates who have the skills and experience you're seeking, and generate a report that orders those candidates with an overall summary to review at the end of your day. Then you can tell it to craft and send a personalized message to each candidate you approve, invite them to a screening call, and alert you when they've responded.
Before, the AI couldn't interact with other systems; it couldn't go get those resumes unless you had somehow put them all into the context it had access to. But with agentic AI, you can keep adding steps to this chain, and it'll work in the background while you take care of other things. We're working on streamlining all of that on Indeed.
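To make that chain concrete, here is a minimal sketch of how such an agent might be wired together, assuming a generic chat-completion client (anything with a `.complete(prompt)` method) and made-up candidate data; none of these names refer to Indeed's actual product.

```python
# Hypothetical sketch of a daily candidate-review agent.
# The Candidate fields, the required-skills list, and the `llm` client
# are assumptions for the example, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    email: str
    resume_text: str

def review_application(llm, candidate: Candidate, required_skills: list[str]) -> dict:
    """Summarize one application and note which required skills it shows."""
    prompt = (
        "Summarize this application in three sentences and note evidence of "
        f"these skills: {', '.join(required_skills)}.\n\n{candidate.resume_text}"
    )
    matched = [s for s in required_skills if s.lower() in candidate.resume_text.lower()]
    return {"candidate": candidate, "summary": llm.complete(prompt), "matched_skills": matched}

def daily_report(llm, applications: list[Candidate], required_skills: list[str]) -> list[dict]:
    """Review the day's applications and order them for an end-of-day read-through."""
    reviews = [review_application(llm, c, required_skills) for c in applications]
    return sorted(reviews, key=lambda r: len(r["matched_skills"]), reverse=True)
```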
Are there misconceptions or pitfalls unique to agentic AI that users should be aware of?
Causey: AI systems are prone to flaws and mistakes. Just because it's the next evolution doesn't mean it's perfect.
These agents are designed to take action independently, but that means the cost of mistakes is higher — if you reach out to a candidate, you can't take that back. It's important to be intentional about what you enable AI agents to do and to make sure you have a way to review their tasks and outputs. It would be misguided to delegate all of your work to an agent right now.
It's like the early days of self-driving cars: You still need your hands on the wheel.
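One common way to keep your hands on the wheel is an approval gate: the agent proposes outreach, but a person signs off before anything is sent. The sketch below is a generic pattern under that assumption, not a description of any specific vendor's feature.

```python
# Generic human-in-the-loop gate: the agent proposes actions,
# and a person approves or rejects them before anything goes out.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "Invite Jane Doe to a screening call"
    payload: dict      # the drafted message, recipient, etc.
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self.pending: list[ProposedAction] = []

    def propose(self, action: ProposedAction) -> None:
        # The agent never sends anything itself; it only queues a proposal.
        self.pending.append(action)

    def review(self, send) -> None:
        # A human walks the queue; only approved items are executed.
        for action in self.pending:
            if action.approved:
                send(action.payload)
        self.pending = [a for a in self.pending if not a.approved]
```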
Indeed's recent global report reveals both employers and job seekers support skills-first hiring, but limited time and resources are barriers. How can agentic AI help?
Causey: In the transition to skills-first hiring, the biggest puzzles are:
How do we know that job seekers have the skills we need?
Do the job seekers even know?
How do we verify both sides of that equation in a way that both the job seeker and the employer trust?
Imagine having an AI agent automatically look at resumes and not only extract the skills listed, but also use information on the back end to infer other skills from the positions candidates have held. It could even follow up with the job seeker to say, "These skills aren't on your resume. But from your experience in jobs A, B, and C, you might have them. Would you like to take an assessment?"
Automating that back-and-forth avoids ruling someone out for not using the "right" language on their resume, and it means a recruiter doesn't have to wait until they have time to follow up on those skills before submitting someone for review. The assessments close the trust gap so the employer can quickly verify the essentials and get to interviewing.
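As a rough illustration of that flow, the sketch below pulls explicitly listed skills from a resume, infers likely skills from previous job titles, and flags the required skills an assessment could verify; the skill lists and the title-to-skill mapping are invented for the example.

```python
# Illustrative skills-matching sketch; the skill vocabulary and
# title-to-skills map are made up for demonstration purposes.

EXPLICIT_SKILLS = {"python", "sql", "excel", "project management", "copywriting"}

# Hypothetical "back end" knowledge: skills commonly held in these roles.
SKILLS_BY_TITLE = {
    "data analyst": {"sql", "excel", "statistics"},
    "marketing coordinator": {"copywriting", "project management"},
}

def extract_skills(resume_text: str) -> set[str]:
    """Skills the candidate explicitly lists on the resume."""
    text = resume_text.lower()
    return {skill for skill in EXPLICIT_SKILLS if skill in text}

def inferred_skills(previous_titles: list[str]) -> set[str]:
    """Skills the candidate may have, based on roles they have held."""
    skills: set[str] = set()
    for title in previous_titles:
        skills |= SKILLS_BY_TITLE.get(title.lower(), set())
    return skills

def assessment_candidates(resume_text: str, titles: list[str], required: set[str]) -> set[str]:
    """Required skills not on the resume but plausibly held: worth an assessment invite."""
    listed = extract_skills(resume_text)
    likely = inferred_skills(titles)
    return (required - listed) & likely
```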
Indeed's global survey also shows that workers increasingly value learning and development opportunities when choosing employers, even over pay. How can employers use AI in L&D to better attract and retain talent?
Causey: AI opens up a lot of opportunities to make L&D available to employees on demand, at scale, and at relatively low cost. It can construct personalized learning plans and study materials, then create an assessment to see how well you're learning and provide opportunities to practice at your own pace.
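One way this could work is sketched below: a generic chat-completion client (again an assumption, with a hypothetical `.complete(prompt)` method) drafts a weekly learning plan from an employee's target skills and then generates a short quiz to check progress.

```python
# Hypothetical sketch: generate a personalized learning plan and a progress quiz.
# `llm` is any chat-completion client with a .complete(prompt) method (an assumption).

def build_learning_plan(llm, role: str, target_skills: list[str], hours_per_week: int) -> str:
    prompt = (
        f"Create a {hours_per_week}-hour-per-week learning plan for a {role} "
        f"who wants to develop these skills: {', '.join(target_skills)}. "
        "Break it into weekly goals with suggested practice exercises."
    )
    return llm.complete(prompt)

def build_quiz(llm, plan: str, week: int) -> str:
    prompt = (
        f"Based on week {week} of this plan, write five short questions to check "
        f"understanding, with answers at the end:\n\n{plan}"
    )
    return llm.complete(prompt)
```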
But there are still social elements. It's difficult to stay accountable with online learning. Maybe it's nine at night, you just put your kid to bed, and you really don't want to learn Python right now. That's where a manager can support and motivate. The human component is always key to success.
Can AI also help with work wellbeing?
Causey: While we don't want to create a surveillance culture, I do think AI can be useful when a manager is overburdened and might not notice that one of their team members is becoming disengaged.
For example, imagine you've collected data on absences. An agent can regularly compile a report to identify employees who might need a break. There are so many ways we can aggregate data to make it easily accessible and actionable.
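For example, a recurring absence report could be as simple as the sketch below, which counts each employee's recent absences and flags anyone over a threshold; the record fields, the window, and the threshold are assumptions for illustration.

```python
# Illustrative absence-report sketch; record fields and the threshold are assumptions.
from collections import Counter
from datetime import date, timedelta

def flag_possible_burnout(absences: list[dict], window_days: int = 60, threshold: int = 4) -> list[str]:
    """Return employees with at least `threshold` absences in the recent window."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = Counter(rec["employee"] for rec in absences if rec["date"] >= cutoff)
    return [name for name, count in recent.items() if count >= threshold]

# Example usage with made-up records:
records = [
    {"employee": "A. Rivera", "date": date.today() - timedelta(days=3)},
    {"employee": "A. Rivera", "date": date.today() - timedelta(days=10)},
]
print(flag_possible_burnout(records, threshold=2))
```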
How does artificial general intelligence, AI's supposed next evolution, differ from the other forms of AI we've discussed?
Causey: Artificial general intelligence is basically a system or set of systems that can outperform humans at any task. But there's no agreed-upon definition of what that looks like, so some jokingly say it's "whatever we don't have yet." It's more of an academic debate at the moment.
Most of the large AI labs have been shortening their timelines for when we'll see AGI, including the engineers actually working on these systems. This has led to some proposed AGI nightmare scenarios that I don't find super compelling. Just because something is very intelligent or has the appearance of intelligence doesn't mean it can do everything humans do.
So what do employers need to know about AGI right now?
Causey: My hot take is they don't need to care. Regarding the macroeconomic implications of AGI, so many outcomes are equally probable right now that you can't do anything until there's more information. Whether or not AGI happens and when is much less important than what we're doing with the systems we have now.
Rather than spending time figuring out the right type of AI to use or where to use it, just start using AI in everything (within your company's policy and the parameters provided to you, of course). An experiment-driven approach lowers the stakes and relieves the pressure of perfectionism. Using AI is like anything else: If you don't practice, you don't get good at it.
