
Graduating Into Uncertainty: Why Skills-Based Hiring Matters
By Lara Albert, Chief Marketing Officer, SAP SuccessFactors
Graduation season is here, and for the Class of 2025, the usual mix of excitement and anticipation comes with an added layer of anxiety. These grads are entering one of the toughest job markets in recent memory, marked by economic uncertainty, hiring slowdowns, and rising competition for junior roles as AI displaces entry-level positions at an accelerated rate.
But uncertainty is nothing new for the Class of 2025. These students applied to colleges during a global pandemic, spent formative years learning in hybrid or remote classrooms, and are now witnessing AI reshape the future of work in real time. According to research, 57% of seniors entered college with a 'dream job,' but fewer than half of them still hold that goal today, and more than half report feeling pessimistic about starting their careers in the current economy.
This wave of uncertainty presents an opportunity for both employers and graduates to adapt. For HR teams, it's a chance to rethink how early talent is identified and supported, shifting the focus from traditional credentials to skills and long-term potential. For graduates, it means continuing to build new skills post-graduation, especially those that AI can't easily replicate, and staying agile in a fast-changing job market.
As organizations continue to face growing skills gaps, HR teams are starting to rethink what a 'qualified' candidate looks like. Traditional markers like degrees, GPA, or prior experience don't always reflect someone's true potential and can unintentionally screen out capable candidates, especially those who've followed nontraditional paths.
That's why leading organizations, like Capgemini, Grundfos, Frit Ravich, and SAP, are investing in skills-based hiring. By evaluating candidates based on their capabilities—what they can do, not just what they have done—organizations can uncover hidden potential, expand their talent pools, and open doors for candidates who may not follow a conventional path but offer tremendous value. In turn, skills-based hiring helps organizations build a workforce that's resilient and future-ready.
In fact, nearly two-thirds (64.8%) of employers surveyed by NACE reported that they already use skills-based hiring practices for new entry-level hires. By focusing less on resumes and more on real-world potential, grads gain a better shot at landing roles where they can grow and thrive, and organizations benefit from employees who can adapt and drive ongoing innovation and business success.
A skills-first approach creates a more equitable and effective way to identify talent, highlighting ability over background and uncovering value both externally and within your existing workforce. Here's how to get started:
Rethink job requirements: Start by identifying the core skills needed for success in open roles. Many job listings include degree or experience requirements that may unintentionally exclude qualified candidates. Focus on must-have skills that will drive performance.
Use skills-based assessments: Integrate practices like case study exercises or skills assessments into interviews rather than relying solely on candidates' resumes, educational background, or screener interviews.
Increase skills visibility: Equip hiring managers with technology that makes it easy to see the skills a candidate has and where they align with organizational needs at scale. This allows for faster and more strategic hiring decisions.
If you're graduating this year, don't let uncertainty hold you back. You may be entering a shifting job market, but your resilience, adaptability, and fresh perspectives are skills and qualities employers value. Here are a few ways to stand out:
Lead with transferable skills: Communication, critical thinking, adaptability, and collaboration are among the most valued transferable skills. They are in high demand and are often hard for AI to replicate.
Show, don't tell: Use internships, job assignments, or volunteer work to demonstrate the real-world application of your skills. Portfolios, personal websites, or even social media content can bring your experience to life and give employers a tangible sense of what you can do.
Embrace lifelong learning: Learning doesn't stop when you graduate. Show prospective employers you're committed to growth by taking advantage of free or low-cost courses that help you build valuable new skills.
Be flexible: Your first job is a steppingstone, but it doesn't define your career path. Stay open to opportunities that help you gain experience, even if they don't perfectly align with your dream job aspirations.
This year's graduates are entering a job market in flux, but with the right tools and mindset, both HR teams and early talent can turn uncertainty into opportunity.
Discover how SAP SuccessFactors helps organizations adopt skills-based hiring strategies.
Related Articles
Yahoo
HIVE Digital Technologies Ltd (HIVE) Q1 2026 Earnings Call Highlights: Record Revenue and ...
Release Date: August 14, 2025

For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Positive Points
- HIVE Digital Technologies Ltd (NASDAQ:HIVE) reported a record quarter with over $45 million in total revenue, primarily driven by Bitcoin mining operations.
- The company achieved significant growth in earnings per share, increasing 206% year over year.
- HIVE's strategic expansion in Paraguay has been transformative, allowing the company to rapidly scale its Bitcoin mining operations.
- The company maintains a strong balance sheet with $24.6 million in cash and $47.3 million in digital currencies.
- HIVE's focus on renewable energy and sustainable practices positions it well for future growth, particularly in the AI and HPC sectors.

Negative Points
- The volatility of Bitcoin prices poses a risk to HIVE's financial performance, as evidenced by the significant non-cash revaluation of Bitcoin on its balance sheet.
- High depreciation charges from the purchase of new GPU and ASIC chips for the AI and Bitcoin buildout could impact profitability.
- The company's expansion and scaling efforts require significant capital investment, which could strain financial resources if not managed carefully.
- HIVE's growth strategy involves complex operations across multiple countries, which may present logistical and regulatory challenges.
- The competitive landscape in the Bitcoin mining and AI sectors is intensifying, which could pressure HIVE's market position and margins.

Q&A Highlights

Q: Can you provide an overview of HIVE's financial performance for Q1 2026?
A: Aiden Killick, President and CEO, highlighted that HIVE had a record quarter with over $45 million in total revenue, 90% of which came from Bitcoin mining operations and 10% from the HPC AI business. The company achieved a gross operating margin of 38%, yielding about $15.8 million in cash flow from operations, and reported net income of $35 million with $44.6 million in adjusted EBITDA.

Q: How is HIVE managing its Bitcoin holdings, and what strategies are in place for future growth?
A: Aiden Killick explained that HIVE ended the quarter with 435 Bitcoin on the balance sheet and has a Bitcoin pledge strategy allowing it to purchase Bitcoin back at zero interest. This strategy has enabled HIVE to scale its Bitcoin mining business without dilution or taking on debt, effectively deploying $200 million worth of CapEx.

Q: What are the key developments in HIVE's expansion efforts, particularly in Paraguay?
A: Aiden Killick noted that HIVE has significantly expanded its operations in Paraguay, completing phase one of the expansion ahead of schedule. The company is currently operating at over 15 exahash and is fully funded to reach 25 exahash by American Thanksgiving. This expansion is part of its strategy to maintain a 440 megawatt green energy footprint for Bitcoin mining.

Q: How does HIVE's AI and HPC business contribute to its overall strategy?
A: Craig Tavares, President of Buzz HPC, explained that HIVE's AI and HPC business is rapidly scaling, with a target of reaching $100 million ARR. The company operates over 5,000 GPUs and is focused on providing a full suite of infrastructure services for AI, leveraging its existing data centers and renewable energy sources.

Q: What are HIVE's future plans for data center expansion and AI infrastructure?
A: Craig Tavares mentioned that HIVE is expanding its data center footprint with recent acquisitions in Toronto and Sweden. These facilities will support the company's sovereign AI strategy and are expected to go live next year. The Toronto data center, in particular, will be a Tier 3 facility leveraging liquid cooling infrastructure to support high-density GPU clusters.

This article first appeared on GuruFocus.
White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days
The White House AI advisor discussed "AI psychosis" on a recent podcast. David Sacks said he doubted the validity of the concept, comparing it to the "moral panic" that surrounded earlier tech leaps, like social media.

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown? David Sacks, the White House official spearheading America's AI policies, doesn't think so.

President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday. While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to professional therapists. A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks doubted the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI." Sacks then referred to a recent article featuring a psychiatrist, who said they didn't believe using a chatbot inherently induced "AI psychosis" if there aren't other risk factors, including social and genetic ones, involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said. "I think it's fair to say we're in the midst of a mental health crisis in this country." Sacks attributed the crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot." Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

Read the original article on Business Insider.


Forbes
4 Things Schools Need To Consider When Designing AI Policies
Artificial intelligence has moved from Silicon Valley boardrooms into homes and classrooms across America. A recent Pew Research Center study reveals that 26% of American teenagers now use AI tools for schoolwork, twice the number from two years prior. Many schools are rushing to establish AI policies. The result? Some are creating more confusion than clarity by focusing solely on preventing cheating while ignoring the broader educational opportunities AI presents.

The challenge shouldn't be whether to allow AI in schools; it should be how to design policies that strike a balance between academic integrity and practical preparation for an AI-driven future. Here are four essential considerations for effective school AI policies.

1. Address Teacher AI Use, Not Just Student Restrictions

The most significant oversight in current AI policies? They focus almost exclusively on what students can't do while completely ignoring teacher usage. This creates confusion and sends mixed messages to students and families. Most policies spend paragraphs outlining student restrictions but fail to answer basic questions about educator usage: Can teachers use AI to create lesson plans? Are educators allowed to use AI to generate quiz questions or provide initial feedback on essays? What disclosure requirements exist when teachers use AI-generated content?

When schools prohibit students from using AI while allowing teachers unrestricted access, the message becomes hypocritical. Students notice when their teacher presents an AI-generated quiz while forbidding them from using AI for research. Parents wonder why their children face strict restrictions while educators operate without clear guidelines. If students are required to disclose AI usage in assignments, teachers should identify when they've used AI for lesson materials. This consistency builds trust and models responsible AI integration.

2. Include Students in AI Policy Development

Most AI policies are written by administrators who haven't used ChatGPT for homework or witnessed peer collaboration with AI tools. This top-down approach creates rules that students either ignore or circumvent entirely. When we built AI guidelines for WITY, our AI teen entrepreneurship platform at WIT - Whatever It Takes, we worked directly with students. The result? Policies that teens understand and respect because they helped create them.

Students bring critical information about real-world AI use that administrators often miss. They know which platforms their classmates use, how AI supports various subjects, and where current rules create confusion. When students participate in policy creation, compliance increases significantly because the rules feel collaborative rather than punitive.

3. Balance AI Guardrails With Innovation Opportunities

Many AI policies resemble legal warnings more than educational frameworks. Fear-based language teaches students to view AI as a threat rather than a powerful tool requiring responsible use. Effective policies reframe restrictions as learning opportunities. Instead of "AI cannot write your essays," try "AI can help you brainstorm and organize ideas, but your analysis and voice should drive the final work." Schools that impose blanket bans on AI miss opportunities to prepare students for careers where AI literacy will be essential.

AI access can also vary dramatically among students. While some have premium ChatGPT subscriptions and access to the latest tools, others may rely solely on free versions or school-provided resources. Without addressing this gap, AI policies can inadvertently increase educational inequality.

4. Build AI Literacy Into Curriculum and Family Communication

In an AI-driven economy, rules alone don't prepare students for a future where AI literacy is necessary. Schools must teach students to think critically about AI outputs, understand the bias in AI systems, and recognize the appropriate applications of AI across different contexts.

Parents often feel excluded from AI conversations at school, which creates confusion about expectations. Schools should explain their AI policies in plain language, provide examples of responsible use, and offer resources for parents who want to support responsible AI use at home. When families understand the educational rationale behind AI integration, including teacher usage and transparency requirements, they become partners in developing responsible use habits rather than obstacles to overcome.

AI technology changes rapidly, making static policies obsolete within months. Schools should schedule annual policy reviews that include feedback from students, teachers, and parents about both student and teacher AI usage.

AI Policy Assessment Checklist

School leaders should evaluate their current policies against these seven criteria:

Teacher Guidelines: Do policies clearly state when and how teachers can use AI? Are disclosure requirements consistent between students and educators?

Student Input: Have students participated in creating these policies? Do the rules reflect actual AI usage patterns among teens?

Equity of Access: Can all students access the same AI tools, or do policies create advantages for families with premium subscriptions?

Family Communication: Can parents easily understand the policies? Are expectations clear for home use? Are there workshop opportunities for parents?

Innovation Balance: Do policies encourage responsible experimentation, or do they focus only on restrictions? Does the policy prepare students for the AI-driven workforce?

Regular Updates: Is there a scheduled review process as AI technology evolves? Does the school welcome feedback from students, teachers, and parents?

Skills Development: Do policies include plans for teaching AI literacy alongside restrictions? Who will teach this class or workshop?

Moving Forward: AI Leadership

The most effective approach treats students as partners, not adversaries. When teens help create the rules they'll follow, when teachers model responsible usage, and when families understand the educational reasoning behind policies, AI becomes a learning tool rather than a source of conflict. Schools that embrace this collaborative approach will produce graduates who understand how to use AI ethically and effectively, exactly the capabilities tomorrow's economy demands.