logo
No, Graduates: AI Hasn't Ended Your Career Before It Starts

No, Graduates: AI Hasn't Ended Your Career Before It Starts

WIRED16-05-2025

May 16, 2025 10:00 AM In a commencement speech at Temple University, I shared my views on how new college graduates can compete with powerful artificial intelligence. Photo-Illustration:Imagine graduating with a liberal arts degree as the age of AI dawns. That's the mindset I faced when addressing the Temple University College of Liberal Arts (where I'm an alum) earlier this month. Truth be told, no one knows what will happen with AI, including those who are building it. I took an optimistic view based on one core truth: As amazing as AI might become, by definition it cannot be human, and therefore the human connection we homo sapiens forge with each other is unique—and gives us an edge.
Here's the speech:
I am thrilled to address the Temple College of Liberal Arts Class of 2025. You have prevailed under the curse of living in interesting times. You coped with Covid in high school and your early years here, navigated your way through the noise of social media, and now face a troubling political climate. The last part of that resonates with me. I attended Temple University at a time of national unrest. Richard Nixon was our president, the war was raging in Vietnam, and the future seemed uncertain.
This is an essay from the latest edition of Steven Levy's Plaintext newsletter.
SIGN UP for Plaintext to read the whole thing, and tap Steven's unique insights and unmatched contacts for the long view on tech.
But there is one concern that you have that I or my classmates could not have conceived of when we graduated over 50 years ago: the fear that artificial intelligence would perform our future jobs and render our career dreams useless.
I didn't touch a computer keyboard during my four years at Temple. It wasn't until almost 10 years after my graduation that I finally interacted directly with a computer. I was assigned a story for Rolling Stone about computer hackers. I was energized and fascinated by their world, and decided to keep writing about it.
Not long after my article was published I ventured to MIT and met Marvin Minsky, one of the scientists who came up with the idea of artificial intelligence at a summer conference at Dartmouth in 1956. Minsky and his peers thought it would only be a few years until computers could think like humans. That optimism—or naivety—became a punch line for many decades. High-level AI was always 10 years away, 20 years away. It was a science fiction fantasy.
Until about 20 years ago or so that was still the case. And then in this century, some computer scientists made breakthroughs in what were called neural nets. It led to rapid progress, and in 2017 another big breakthrough led to the terrifyingly capable large language models like ChatGPT. Suddenly AI is here.
My guess is that every single one of you has used a large language model like ChatGPT as a collaborator. Now I hope this isn't the case, but some of you may have used it as a stand-in for your own work. Please don't raise your hand if you've done this—we haven't given out the diplomas yet, and your professors are standing behind me.
Much of my time at WIRED the past few years has been spent talking to and writing about the people leading this field. Some refer to their efforts as creating 'the last invention.' They use that term because when AI reaches a certain point, supposedly computers will shove us humans aside and drive progress on their own. They refer to this as reaching artificial general intelligence, or AGI. That's the moment when AI will, in theory, perform any task a human can, but better.
So as you leave this institution for the real world, this moment of joy may well be mixed with anxiety. At the least, you may be worried that for the rest of your work life, you will not only be collaborating with AI but competing with it. Does that make your prospects bleak?
I say … no. In fact my mission today is to tell you that your education was not in vain. You do have a great future ahead of you no matter how smart and capable ChatGPT, Claude, Gemini, and Llama get. And here is the reason: You have something that no computer can ever have. It's a superpower, and every one of you has it in abundance.
Your humanity.
Liberal arts graduates, you have majored in subjects like Psychology. History. Anthropology. African American, Asian, and Gender Studies. Sociology. Languages. Philosophy. Political Science. Religion. Criminal Justice. Economics. And there's even some English majors, like me.
Every one of those subjects involves examining and interpreting human behavior and human creativity with empathy that only humans can bring to the task. The observations you make in the social sciences, the analyses you produce on art and culture, the lessons you communicate from your research, have a priceless authenticity, based on the simple fact that you are devoting your attention, intelligence, and consciousness to fellow homo sapiens. People, that's why we call them the humanities .
The lords of AI are spending hundreds of billions of dollars to make their models think LIKE accomplished humans. You have just spent four years at Temple University learning to think AS accomplished humans. The difference is immeasurable.
This is something that even Silicon Valley understands, starting from the time Steve Jobs told me four decades ago that he wanted to marry computers and the liberal arts. I once wrote a history of Google. Originally, its cofounder Larry Page resisted hiring anyone who did not have a computer science degree. But the company came to realize that it was losing out on talent it needed for communications, business strategy, management, marketing, and internal culture. Some of those liberal arts grads it then hired became among the company's most valuable employees.
Even inside AI companies. liberal arts grads can and do thrive. Did you know that the president of Anthropic, one of the top creators of generative AI, was an English major? She idolized Joan Didion.
Furthermore, your work does something that AI can never do: it makes a genuine human connection. OpenAI recently boasted that it trained one of its latest models to churn out creative writing. Maybe it can put together cool sentences—but that's not what we really seek from books, visual arts, films and criticism. How would you feel if you read a novel that shifted the way you saw the world, heard a podcast that lifted your spirit, saw a movie that blew your mind, heard a piece of music that moved your soul, and only after you were inspired and transformed by it, learned that it was not created by a person, but a robot? You might feel cheated.
And that's more than a feeling. In 2023, some researchers published a paper confirming just that. In blind experiments human beings valued what they read more when they thought it was from fellow humans and not a sophisticated system that fakes humanity. In another blind experiment, participants were shown abstract art created by both humans and AI. Though they couldn't tell which was which, when subjects were asked which pictures they liked better, the human-created ones came out on top. Other research studies involved brain MRIs. The scans also showed people responded more favorably when they thought humans, not AI, created the artworks. Almost as if that connection was primal.
Everything you have learned in the liberal arts—the humanities—depends on that connection. You bring your superpower to it.
I'm not going to sugarcoat things. AI is going to have a huge impact on the labor market, and some jobs will be diminished or eliminated. History teaches us that with every big technological advance, new jobs replace those lost.
Those jobs will exist, as there are countless roles AI can never fill because the technology can't replicate true human connection. It's the one thing that AI can't offer. Combined with the elite skills you have learned at Temple, that connection will make your work of continuing value. Especially if you perform it with the traits that make you unique: curiosity, compassion, and a sense of humor.
As you go into the workforce, I urge you to lean into your human side. Yes, you can use AI to automate your busy work, explain complicated topics, and summarize dull documents. It might even be an invaluable assistant. But you will thrive by putting your heart into your own work. AI has no such heart to employ. Ultimately, flesh, blood, and squishy neurons are more important than algorithms, bits, and neural nets.
So class of 2025, let me send you out into the world with an expression that I encourage you to repeat during these challenging years to come. And that is the repetition of the simple truth that will guide your career and your life as you leave this campus. Here it is: I. Am. Human. Can you say that with me?
I Am Human.
Congratulations, and go out and seize the world. It is still yours to conquer. And one final note—I did not use AI to write this speech. Thank you.
(You can see me deliver the speech here, in full academic regalia.)

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Datadog Expands AI Security Capabilities to Enable Comprehensive Protection from Critical AI Risks
Datadog Expands AI Security Capabilities to Enable Comprehensive Protection from Critical AI Risks

Associated Press

time11 minutes ago

  • Associated Press

Datadog Expands AI Security Capabilities to Enable Comprehensive Protection from Critical AI Risks

Launch of Code Security and new security capabilities strengthen posture across the AI stack, from data and AI models to applications New York, New York--(Newsfile Corp. - June 10, 2025) - Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments -from development to production-as the company further invests to secure its customers' cloud and AI applications. AI has created a new security frontier in which organizations need to rethink existing threat models as AI workloads foster new attack surfaces. Every microservice can now spin up autonomous agents that can mint secrets, ship code and call external APIs without any human intervention. This means one mistake could trigger a cascading breach across the entire tech stack. The latest innovations to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution to secure agentic AI workloads. 'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organizations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures-powered by continuous runtime visibility-to strengthen the security posture of AI workloads, from development to production.' Securing AI Development Developers increasingly rely on third-party code repositories which expose them to poisoned code and hidden vulnerabilities, including those that stem from AI or LLM models, that are difficult to detect with traditional static analysis tools. To address this problem, Datadog Code Security, now Generally Available, empowers developer and security teams to detect and prioritize vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications-from development to production. It also prioritizes risks based on runtime threat activity and business impact, empowering teams to focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines. Hardening Security Posture of AI Applications AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks that attempt to alter their behavior such as prompt injection. To mitigate these threats, organizations need stronger security controls-such as separation of privileges, authorization bounds, and data classification across their AI applications and the underlying infrastructure. Datadog LLM Observability, now Generally Available, monitors the integrity of AI models and performs toxicity checks that look for harmful behavior across prompts and responses within an organization's AI applications. In addition, with Datadog Cloud Security, organizations are able to meet AI security standards such as the NIST AI framework out-of-the-box. Cloud Security detects and remediates risks such as misconfigurations, unpatched vulnerabilities, and unauthorized access to data, apps, and infrastructure. And with Sensitive Data Scanner (SDS), organizations can prevent sensitive data-such as personally identifiable information (PII)-from leaking into LLM training or inference data-sets, with support for AWS S3 and RDS instances now available in Preview. Securing AI at Runtime The evolving complexity of AI applications is making it even harder for security analysts to triage alerts, recognize threats from noise and respond on-time. AI apps are particularly vulnerable to unbound consumption attacks that lead to system degradation or substantial economic losses. The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals-starting with those generated by AWS CloudTrail-and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately. It also helps organizations save time and costs by providing preliminary investigations and guiding Security Operations Centers to focus on the threats that truly matter. Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in preview, it detects and blocks the exploitation of vulnerabilities, and enforces guardrails to keep production AI models secure. To learn more about Datadog's latest AI Security capabilities, please visit: Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, as well as new security capabilities in LLM Observability were announced during the keynote at DASH, Datadog's annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in AI Observability, Applied AI, Log Management and released its Internal Developer Portal. About Datadog Datadog is the observability and security platform for cloud applications. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics. Forward-Looking Statements This press release may include certain 'forward-looking statements' within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption 'Risk Factors' and elsewhere in our Securities and Exchange Commission filings and reports, including the Annual Report on Form 10-K filed with the Securities and Exchange Commission on May 6, 2025, as well as future filings and reports by us. Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise. Contact Dan Haggerty [email protected] To view the source version of this press release, please visit

Bitcoin's Exuberant Coming Out Party In Sin City
Bitcoin's Exuberant Coming Out Party In Sin City

Forbes

time16 minutes ago

  • Forbes

Bitcoin's Exuberant Coming Out Party In Sin City

This is a published version of our weekly Forbes Crypto Confidential newsletter. Sign up here to get Crypto Confidential days earlier free in your inbox. LAS VEGAS, NEVADA - MAY 28: U.S. Vice President JD Vance delivers a keynote address at The Bitcoin Conference at The Venetian Convention & Expo Center on May 28, 2025, in Las Vegas, Nevada. (Photo by) Getty Images Yet another Bitcoin Conference has thundered to a close. This year's pilgrimage for the faithful took place at the Venetian, Las Vegas' most ostentatiously faux palace—a mock Venice built in the Nevada desert, complete with a casino, chlorinated canals and costumed gondoliers. Fitting, really, for the new reality of bitcoin: loud, celebrated, and depending on your vantage point, either triumphant or gaudy. Some 35,000 people poured in, an all-time high for what began as a fringe gathering of cypherpunks. The conference even set a Guinness World Record for the most bitcoin payments in a single day, if that sort of thing impresses you. Bitcoin itself hit $111,000 days before the event, a milestone barely celebrated in a market where all-time highs are now routine. At last year's conference in Nashville, Donald Trump promised to make America 'the crypto capital of the planet' and 'the bitcoin superpower of the world.' This year's installment felt like, well, proof-of-work. Trump's lieutenants Vice President JD Vance and Crypto and AI Czar David Sacks took the stage in the main venue to offer receipts: The Securities and Exchange Commission's warfare against crypto is over. Gary Gensler is out. Landmark crypto bills are making their way through Congress. The bitcoin reserve is in the works. Ross Ulbricht, the Silk Road founder, is free. Operation Chokepoint 2.0, the campaign to choke crypto's access to banking, has become a phrase mentioned mostly in postmortems. Major financial institutions, once terrified of reputational blowback, are swiftly rolling out digital asset desks again. And dozens of companies, including public ones, are adopting crypto treasury strategies. All this within the first 100 days of Trump's second presidency. 'Promises made, promises kept,' as Trump's crew likes to say. "I'm here today to say loud and clear, with President Trump, crypto finally has a champion and an ally in the White House," Vance boomed at his 9 a.m. keynote (doors opened at 5:30 a.m. to handle the traffic). 'We prioritize eliminating the rules, the red tape and the lawfare that we saw aimed at crypto by our predecessors.' During his panel with the Winklevoss twins, David Sacks asked the audience what else was on the industry's wishlist. The loudest answer? 'Remove the capital gains tax.' Cameron Winklevoss added: 'It would be great if the American government started proactively buying bitcoin.' The irony is hard to overlook. Bitcoin was meant to be apolitical, decentralized, pure—a middle finger to the traditional financial establishment and states' monopoly on money. And yet, the conference at times felt like an RNC spinoff, complete with speeches by Eric and Donald Trump Jr., and ovations for politicians who wouldn't have touched the asset five years ago. The president himself called bitcoin a scam in 2021. Now he's all in. After all, crypto has boosted Trump's net worth by about $1 billion in under a year, according to Forbes' calculations. His digital asset holdings are now worth more than any single piece of real estate he owns, including the combined value of Mar-a-Lago and Trump Tower, writes Dan Alexander. A week before the conference, Trump hosted the largest buyers of his meme coin at a White House dinner. His family has not-quietly-at-all seeded an entire portfolio of crypto ventures: DeFi-focused World Liberty Financial, the publicly traded crypto miner American Bitcoin, and Trump Media, which announced during the conference it's raising $2.5 billion to buy bitcoin for its corporate treasury. I kept wondering what Satoshi would say. Maybe he's watching. If so, I doubt he's smiling. Some purists did complain. Not just about the politicization of bitcoin, but about the creeping presence of 'shitcoinery.' Companies behind cryptocurrencies like XRP and Sui, long associated with the other side of crypto's ideological divide, were apparently among the sponsors/supporters. Among prominent panel topics were stablecoins and tokenization. 'Most bitcoiners don't care about the SEC's opinion or what BlackRock and Robinhood have to say. Most bitcoiners, I guess, want to see more bitcoin stuff. Don't forget bitcoin is about the separation of money and state. We should not invite the state. We can't forget all the pain, suffering, murders, famine, and stealing that the state has done for centuries. bitcoin liberated us from the state,' vented one observer. Of course, internal squabbles like this barely register outside the crypto echo chamber. The reality is that the conference, like bitcoin itself, has outgrown its rebel roots, morphing into something far bigger, louder and more institutionally embedded than its earliest adopters ever imagined. Among other high-profile guests were billionaires Michael Saylor, Justin Sun (a major investor in Trump's crypto ventures), Robinhood CEO Vlad Tenev, Tether CEO Paolo Ardoino, SEC Commissioner Hester Peirce, members of Congress and Bo Hines, executive director of the President's Council of Advisors for Digital Assets. Ross Ulbricht delivered a closing speech, his first after being freed from prison. Three themes dominated the agenda. First, the rise of crypto treasuries. There are now over 70 public companies holding bitcoin (and we were early to spot this trend). It may still be the center of gravity, but it's no longer the only asset in the orbit: there's also growing attention on companies adding solana, ether and even XRP to their balance sheets. Second, legislative momentum. The stablecoin-focused GENIUS Act is on track to pass Congress. Excitement is building around the U.S. market structure bill, which could finally bring clarity and codified legitimacy to crypto in America at large. Third, cultural mainstreaming. The asset once thought of as a tool for opting out of the financial system is now being folded neatly into it: politicians are using it as a campaign plank, nation-states are buying in (the head of Pakistan's crypto council announced at the conference that the country is also planning to establish a bitcoin reserve) and corporations are writing playbooks around it. 'There's an old-time saying 'Everything is good for bitcoin',' assured me Jack Mallers, CEO of Strike and Twenty One Capital—one of the new bitcoin treasury companies backed by Tether and SoftBank Group. 'Do I love every single thing Trump is doing? No, there's plenty of stuff I would do differently personally if I were president. But are there things I love about the administration? Yes, I love having a bank account in the United States, where my passport is issued,' he said, alluding to the unraveling of Operation Choke Point 2.0. Perhaps he's right. After all, partially thanks to the administration's push on the bitcoin reserve and its wholesale embrace of the industry born out of Satoshi Nakamoto's creation, more people than ever know about bitcoin and use it. Meme coins too, but that's besides the point. Turns out, decentralization scales best with help from the state. Also: Circle Soars In First-Ever Stablecoin IPO, Making CEO A Billionaire ELSEWHERE: Trump Crypto Feud Heats Up With Cease-And-Desist Letter [Bloomberg] Apple, X, And Airbnb Among Growing Number Of Big Tech Firms Exploring Crypto Adoption [Fortune] Plans $1B Token Sale At $4B Valuation: Sources [Blockworks]

TNB Tech Minute: Huawei Founder Downplays Impact of U.S. Export Controls - Tech News Briefing
TNB Tech Minute: Huawei Founder Downplays Impact of U.S. Export Controls - Tech News Briefing

Wall Street Journal

time42 minutes ago

  • Wall Street Journal

TNB Tech Minute: Huawei Founder Downplays Impact of U.S. Export Controls - Tech News Briefing

Full Transcript This transcript was prepared by a transcription service. This version may not be in its final form and may be updated. Victoria Craig: Here's your TNB Tech Minute for Tuesday, June 10th. I'm Victoria Craig for the Wall Street Journal. The founder of Chinese Telecom equipment maker Huawei, has dismissed worries that the company will be squeezed by U.S. export controls. In an interview with the People's Daily, a government mouthpiece, Huawei's founder said The firm is finding workarounds to improve its chip performance, which are still one generation behind those made by its U.S. peers. He was also upbeat on China's AI industry saying that the country's electric grid capacity is a solid foundation for AI development. You can hear more about how U.S. companies are fighting potential roadblocks to competitiveness with China on data center power in tomorrow's Tech News Briefing podcast right here in this feed. Elsewhere, Uber and self-driving car startup Wayve Technologies are launching trials of fully autonomous vehicles on public roads in London. They chose the British capital because its road layouts and traffic laws are significantly different from U.S. locations where testing has so far been done. There's no targeted date for trials to begin. The U.K.'s Transportation Secretary said the government is fast-tracking pilots of self-driving cars to next spring. Uber and Wayve said the London tests will make it easier to deploy autonomous vehicles across European markets. And finally, the Journal exclusively reports several U.S. government agencies tracked foreign nationals coming and going to Elon Musk's properties in 2022 and 2023 according to people familiar with the matter. An investigation by Homeland Security and the Justice Department focused on people visiting the tech billionaire from Eastern European nations and others who might've been trying to influence him. Until last week, Musk was one of President Trump's closest advisors. The current status of the investigation couldn't be determined. As chief executive of SpaceX, which has worked with national security agencies for years, Musk has top secret security clearance, which gives him access to some national security secrets. For a deeper dive into what's happening in tech, check out Wednesday's Tech News Briefing podcast.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store