
How To Strategically Use AI To Launch Your Career In 2025
The class of 2025 should use AI in their job search
Landing your first full-time job is an impressive feat at any time, but today it's becoming even more of an accomplishment. Between the current economic uncertainty and the AI-ification of the workforce, the class of 2025 is struggling to find their place in a fast-changing world.
A study released today by iCIMS found that although entry-level hiring is up 6% over last year, there are still far too many candidates waiting in the wings. For every entry-level job opening, 36 people applied—compared to 29 applicants per entry-level job opening last year.
Gen Z applicants are also dealing with the disconnect between companies' professed skills-first hiring strategies and how they actually hire. iCIMS found that although 95% of recruiters say they use skills-based practices, when assessing entry-level candidates they rank experience (37%) and education (34%) above skills (28%) as the most important factors.
Yet for all this, there are still great opportunities out there for persistent job seekers. 'While the job market is highly competitive, graduates who remain adaptable and resilient in their job search efforts can find meaningful opportunities,' says Jake Gomez, Head of NA Vertical Strategy, ManpowerGroup. 'The job market is evolving, not closing, and job seekers must adapt accordingly.'
I recently connected with Gomez to discuss the current state of entry-level hiring and how new grads can optimize their chances in a shifting employment landscape. Here's what we covered.
3 major challenges in today's job market
Gomez sees three major hurdles that this year's college graduates must overcome to launch their careers.
All of this adds up to a growing pessimism as the Class of 2025 surveys its hiring prospects. And it's not all in their heads. 'The job market challenges are evident, coupled with a 1.6% higher unemployment rate for new graduates compared to the current unemployment rate,' says Gomez. 'Even in our just released ManpowerGroup Employment Outlook Survey (MEOS), U.S. hiring intentions declined slightly to 30%, a 4% drop from the previous quarter.'
Standing out
In today's job market, Gomez estimates that it will take most graduates 4–6 months to find work—and upwards of 8 months for certain degrees. And they'll need to go beyond the basics.
'They need to make themselves relevant and stand out from the crowd,' he says. To help differentiate yourself, Gomez believes you should:
'And network like it's your job because until you find one, it is.'
How AI is affecting the job search
AI is a double-edged sword: it can help young job seekers, but it can also hurt them when overused or used carelessly.
Gomez points to ManpowerGroup's latest research on AI in the workplace, which shows that 85% of employers now use AI in hiring processes. Most, however, recognize its limitations. One-third (33%) say AI can't replicate ethical judgment, while 31% cite customer service as uniquely human.
'Yet the research reveals a nuance that while AI won't replace people, people who can leverage AI will have more value than people who don't,' says Gomez. 'These findings underscore a growing consensus toward AI as a tool for augmentation, not replacement.'
So where can AI be a help, not a hindrance?
Where should you be cautious of AI? Gomez says it can often be too generic. 'Overreliance on AI-generated content without personalization can make applications feel inauthentic,' he says. AI may also add skills that don't reflect your actual capabilities. 'This can backfire in an interview if the job seeker can't speak to the skills,' he warns. Finally, it's all too easy to become overly dependent on AI. 'AI can become a crutch and hamper growth in critical skills,' says Gomez.
To make the best use of AI in your job search, says Gomez, you should co-create with AI to generate a starting point or framework. 'But you must edit and ensure it reflects your voice,' he says. 'Make sure your answers stay "you." Ensure you stay true to who you are, embracing your strengths, values, and interests.'
Job searching beyond AI
While AI is now a big piece of the employment puzzle, it's not the only tool you can use to get hired. Gomez identifies three key actions you can take to improve your employability:
It's true, the current job market isn't an easy one to break into—but the class of 2025 has already achieved so much. 'By focusing on skill development, networking, and flexibility, new graduates can navigate this landscape and find rewarding career paths,' says Gomez.
'And most importantly, control what you can control—your effort and attitude. Don't give up, ask for help, and be kind to yourself and others.
'You will succeed.'
Related Articles


Tom's Guide
GPT-5 users aren't happy with the update — try these alternative chatbots instead
For months, the OpenAI team has been building up to a massive launch, promising an update that would change what AI could do. Now that update is here in the form of GPT-5. However, not everyone is happy with this newest version of ChatGPT: critics have complained not only that it isn't a great update, but that they would rather use the earlier version. Once your device updates to GPT-5, there is no option to use older versions of the tool, although OpenAI has since said it will correct this, offering users the ability to use GPT-4 instead.

Of course, these are still early days. GPT-5 will get better over time with updates, once the team has had the chance to understand how people are using it. Not everyone is unhappy, either: there seems to be a big split, with just as many people enjoying the tool as those who aren't getting on with it. But if you fall into the camp that feels let down by GPT-5, the good news is that this is a packed market, with plenty of other great tools to try instead. These are our top picks.

Google and OpenAI have been battling it out since the earliest stages of chatbots. The two titans of tech have the money and technology to be the best, and that makes Gemini a worthy opponent for ChatGPT. With Gemini, you're getting one of the best chatbots for coding, and a model that isn't quite as sycophantic as ChatGPT. Gemini has many of the same features as ChatGPT and offers fairly similar benefits. One feature that stands out is the depth of its deep research: it can produce incredibly detailed responses, diving into every corner of the internet for your query. Gemini isn't as stylish as ChatGPT, it does have a tendency to misunderstand what you're asking sometimes, and we've noticed a higher rate of hallucinations. However, it is an otherwise great ChatGPT alternative.
Claude has very quickly risen through the ranks to become a leading competitor in this crowded market. The chatbot focuses on deliberate, careful research, and the team behind it reflects this: they perform a great deal of independent research and have created a chatbot that feels in keeping with that philosophy. Claude is great for coding, allowing you to publish any tool you code to its own web link, and it offers a wide variety of pre-built apps to explore. Like Gemini, it is also more to-the-point than ChatGPT, focusing less on being friendly and more on getting you an immediate answer. While it has always performed well on benchmarks, it hasn't been as successful overall as ChatGPT or Gemini, but over the next few years it could become a truly competitive force. One feature that will either make or break it for you: Claude has no memory. Unlike ChatGPT and Gemini, it can't store information about you. ChatGPT uses this information to personalize answers, giving responses that better fit your needs.

xAI's Grok has a slippery reputation. On benchmarks, it is one of the most powerful AI chatbots around; it is great for coding and has proved to be a formidable force when it comes to deep research. However, it has also found itself in constant controversy, whether that's because of its AI girlfriend feature or repeated mistakes where the chatbot accidentally starts to support conspiracy theories. Grok doesn't get the same recognition as the options above, and there is good reason for that, with a somewhat mixed bag of experiences. However, that is not to say it isn't one of the best chatbots on the market, especially when put through legitimate AI tests.

Perplexity isn't technically a chatbot. Think of it more like an overpowered Google, complete with lots of AI tricks up its imaginary sleeve. If you've been using ChatGPT as an alternative to Google, Perplexity could feel like a natural fit.
It thrives when you ask it questions, or when you're simply using a chatbot to learn new information about the world around you. However, it falls down on some of the newer features we're seeing in chatbots: its image and video generation isn't as good as the competitors', and it can't generate code like its alternatives. While it is more than capable of handling most queries, it does have some areas that fall outside its remit, though these are few and far between for the average user.

One of the lesser-known options out there right now, Le Chat is a French chatbot service created by Mistral. It doesn't perform as well as its peers above on benchmarks, and arguably isn't quite as smart. However, it makes up for that in other ways that might be important to certain users. In a recent examination, it scored the highest of all chatbots on privacy and how it deals with data, and it also has a function for ultra-fast responses, generating up to 10 times faster than normal. On top of that, Le Chat doesn't store any data from your conversations and, like Claude, it doesn't have a memory of previous chats. In other words, this is the model to switch to for the privacy-conscious.


Forbes
Beware Of Agentic AI's Heel Turn As Corporate Security Villain
Pieter Danhieux is the Co-Founder and Chairman/CEO of Secure Code Warrior.

As fast as generative artificial intelligence and large language models (LLMs) like ChatGPT have permeated business, academia and personal communications, the next phase of AI advancement is poised to just as quickly become part of the engine driving everything from customer service and supply chain management to healthcare and cybersecurity.

Agentic AI brings autonomy to AI systems, building on AI techniques to make decisions, take action and pursue goals independently, or at least with minimal human supervision. Where generative AI can write a report for you based on the prompts you give it, agentic AI can decide when to write the report, what to say in it and to whom to say it. And it might not even require asking for your permission first.

The technology in its current form is still nascent, but it is being heralded as the next great leap in autonomous systems, boldly performing next-phase functions where previous AI systems could not tread, such as dynamically reconfiguring supply chains in response to natural or manmade emergencies or proactively ensuring that complex IT systems avoid downtime. Gartner has forecast that by 2028, 33% of enterprise software applications will include agentic AI (in 2024, it was less than 1%), making it possible for 15% of all day-to-day work decisions to be made autonomously.

However, the great promise of agentic AI doesn't come without significant caveats. Its capabilities and autonomy present a potent enterprise threat vector beyond the realm of garden-variety security concerns. Giving self-optimizing, proactive AI systems the keys to perform independent actions can lead to adversarial behaviors, amplified biases that can cause systemic vulnerabilities and questions of accountability in the event of AI-orchestrated breaches or disruptions.
Enterprises need to assert AI governance and ensure that developers are equipped to maintain oversight, with the security skills to safely prompt and review AI-assisted code and commits. A report by the Open Worldwide Application Security Project (OWASP) points out that agentic AI introduces new or 'agentic variations' of existing threats, some of them resulting from new components in the application architecture for agentic AI. Among those threats are memory poisoning and tool misuse resulting from the integration of agent memory and tools. Other risks associated with tool misuse include remote code execution (RCE) and code attacks, which can arise from code generation, creating new attack vectors.

Other threats can arise when user identities are involved. For example, a new bug, referred to as a 'confused deputy' vulnerability, has been uncovered involving user identities embedded inside integrated tools and APIs. It can happen when an agentic AI, acting as a deputy to a human user, has higher privileges than the user it is working with at the time. The agent can then be fooled into taking unauthorized actions on behalf of the user. And if an agent doesn't have proper privilege isolation, it may not be able to distinguish between legitimate requests from its lower-privilege users and those that are part of an attack.

To stop this (as well as to prevent hijacking via prompt injections, identity spoofing and impersonation), organizations should be sure to reduce agent privileges when operating on behalf of a user. OWASP also recommends several other key steps, including ways to prevent memory poisoning, AI knowledge corruption and the manipulation of AI agent reasoning. Meanwhile, enterprises must also be on guard against the rapidly mounting threat from attacks fueled by agentic AI.
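The privilege-reduction advice above can be illustrated with a minimal sketch. This is not any particular framework's API; the scope names and the `execute_tool` helper are hypothetical. The idea is simply that an agent acting on a user's behalf should operate with the intersection of its own privileges and that user's, so it can never be tricked into acting above the human it serves.

```python
# Minimal sketch of confused-deputy prevention for an agentic AI tool call.
# All names here (the scopes, execute_tool) are hypothetical illustrations.

# Privileges granted to the agent itself -- broader than most users'.
AGENT_SCOPES = {"read_tickets", "send_email", "delete_records"}


def execute_tool(tool_scope: str, user_scopes: set) -> str:
    """Run a tool only if BOTH the agent and the current user hold the scope.

    Intersecting the two scope sets means the agent's effective privilege
    is never higher than the user it is currently acting for, closing the
    'confused deputy' gap described above.
    """
    effective = AGENT_SCOPES & user_scopes
    if tool_scope not in effective:
        raise PermissionError(f"denied: '{tool_scope}' not permitted for this user")
    return f"executed {tool_scope}"


if __name__ == "__main__":
    user = {"read_tickets", "send_email"}      # an ordinary, lower-privilege user
    print(execute_tool("send_email", user))    # within the shared scopes: allowed
    try:
        execute_tool("delete_records", user)   # a confused-deputy attempt
    except PermissionError as err:
        print(err)                             # denied, despite the agent's scope
```

The same intersection check would sit in front of every tool dispatch, so even a prompt-injected instruction to "delete the records" fails unless the human behind the session holds that privilege too.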
A report by Palo Alto Networks' Unit 42 detailed how agentic AI can be used to increase 'the speed, scale and sophistication of attacks' that have already been greatly accelerated by AI. For example, they found that the mean time to exfiltrate (MTTE) data after an attacker gains access to a system dropped from an average of nine days in 2021 to two days in 2024. In one in five cases, MTTE happened in less than an hour.

Unit 42 simulated a ransomware attack using AI at every stage of the process. They transitioned from initial compromise to data exfiltration in 25 minutes, a 100-fold increase in speed compared to a typical attack. Agentic AI, with its ability to autonomously perform complex, multi-step operations and adapt its tactics during an attack, will only intensify offensive operations, possibly conducting entire attack campaigns with minimal human intervention in the near future.

Despite the speed, power and sophistication that agentic AI can bring to cyberattacks, enterprises aren't necessarily overmatched. Agentic AI may eventually lead to new styles of attacks, but currently it appears that it will mostly turbocharge existing, known attacks. Organizations can, as OWASP advises, tighten identity controls and take other steps to prevent memory poisoning and AI corruption. They can also fight fire with fire, using agentic AI to enhance network monitoring and analysis of specific threats.

The foundations of good security need to be bolstered. In the current environment, that begins with protecting software through secure coding practices performed by proactive developers with verified security expertise. They need to continue their ongoing education programs to effectively apply security best practices at the beginning of the software development lifecycle. People also need new guidance on how to use agentic AI tools safely.
Developers with the proficiency to both prompt and review code output are also crucial to the safe and secure use of agentic AI. Organizations that do not prioritize uplifting and continuously measuring developer security skills will find themselves in a precarious position: fighting a deluge of AI-generated code that is not reviewed with the critical thinking and hands-on threat assessment required to deem it safe, and unable to realize the productivity gains these tools offer. Security programs must modernize at the breakneck pace at which code is now being delivered.


Bloomberg
An AI Replay of the Browser Wars, Bankrolled by Google
A quarter of a century ago, when Microsoft Corp. used its dominance of the personal computer market to force people to use Internet Explorer, it famously led to a devastating antitrust lawsuit loss that some believe held the company back for more than a decade. Only now, revitalized by Chief Executive Officer Satya Nadella's quick moves in artificial intelligence, is the Windows-maker back on top. As AI rivalries intensify, Google is now trying to put Microsoft through its '90s nightmare again. The search giant is funding the Browser Choice Alliance, an industry group involving Google and a number of smaller browser makers, such as Opera, Vivaldi and others, set up to put pressure on Microsoft. The complaint, as it was back then, is that Microsoft is once again using its ownership of the Windows operating system to give its own browser an unfair leg up — at a moment that some feel is as pivotal as the emergence of the internet.