Latest news with #KevinRoose


New York Times
a day ago
- Business
The A.I. Jobpocalypse, Building at Anthropic with Mike Krieger and Hard Fork Crimes Division
This week, we dive into Kevin's recent column about how A.I. is affecting the job market for new graduates, and debate whether the job apocalypse is already here for entry-level work. Then Mike Krieger joins us to discuss the new Claude 4 model, the future of work and the online chatter over whether an A.I. system could blackmail you. And finally, it's time to open up the case files for another round of Hard Fork Crimes Division.

Guest: Mike Krieger, chief product officer at Anthropic

Additional Reading:
- For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here
- Another Suspect Is Charged in Bitcoin Kidnapping and Torture Case
- Elizabeth Holmes's Partner Has a New Blood-Testing Start-Up

'Hard Fork' is hosted by Kevin Roose and Casey Newton and produced by Rachel Cohn and Whitney Jones. This episode was edited by Matt Collette. Engineering by Chris Wood and original music by Dan Powell, Diane Wong and Rowan Niemisto. Fact-checking by Ena Alvarado. Our executive producer is Jen Poyant. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad and Jeffrey Miranda.


Time of India
5 days ago
- Business
Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, has a warning for students starting college
Demis Hassabis, CEO of Google DeepMind, made a bold prediction at the recent Google I/O developer conference: artificial general intelligence (AGI) could be less than a decade away. Hassabis, who leads Google's AI initiatives including the Gemini chatbot, advised young people, particularly college students, to 'immerse' themselves in AI technologies and become proficient with cutting-edge tools. 'Whatever happens with these AI tools, you'll be better off understanding how they work and what you can do with them,' he said, urging students to focus on 'learning to learn' to stay adaptable in a rapidly changing technological landscape.

What Google DeepMind CEO told students

In an earlier interview at his alma mater, the University of Cambridge, Hassabis offered similar guidance, stressing that adaptability is one of the most vital skills for the future. Answering questions from undergraduates, he urged them to identify how they learn best and to build the ability to quickly grasp new concepts, a key trait in an ever-evolving tech landscape. 'The world you're entering will face an incredible amount of disruption and change,' he told students during a March discussion with Professor Alastair Beresford at Queens' College, Cambridge.

Hassabis highlighted emerging fields such as AI, virtual reality (VR), augmented reality (AR) and quantum computing as promising industries over the next decade. He noted that technological shifts have historically disrupted some jobs while creating others that are often more interesting and valuable. 'Anytime there is change, there is also huge opportunity,' he said, encouraging graduates to blend deep knowledge of their interests with adaptability to thrive in an AI-driven future.

"Over the next 5 to 10 years, I think we're going to find what normally happens with big new technology shifts, which is that some jobs get disrupted," he recently told co-hosts Kevin Roose and Casey Newton on an episode of "Hard Fork," a podcast about the future of technology. However, he said, "new, more valuable, usually more interesting jobs get created" in the wake of that kind of disruption.

Business Insider
24-05-2025
Teens should be training to become AI 'ninjas,' Google DeepMind CEO says
Just as millennials had the internet and personal computers and Gen Z had smartphones and tablets, generative AI is the transformative technology of Gen Alpha's time, and they should embrace it, the AI leader said on a recent episode of "Hard Fork," a podcast about the future of technology.

"Over the next 5 to 10 years, I think we're going to find what normally happens with big new technology shifts, which is that some jobs get disrupted," he told co-hosts Kevin Roose and Casey Newton. However, he said, "new, more valuable, usually more interesting jobs get created" in the wake of that kind of disruption.

The generative AI arms race began in earnest with the release of OpenAI's ChatGPT in 2022. The technology has advanced rapidly ever since, sparking both excitement and concern about how it will revolutionize the workplace and the world at large.

Google DeepMind is the research lab behind Google's AI projects, including Gemini, the company's chatbot. Hassabis is leading Google's charge toward the AI race's ultimate prize: artificial general intelligence. There is little agreement on a definition of AGI, but it is generally considered an AI model that can reason the way a human does. Hassabis said Tuesday, during a live interview at the Google I/O developer conference, that DeepMind is less than 10 years away from creating its own.

"Whatever happens with these AI tools, you'll be better off understanding how they work, and how they function, and what you can do with them," Hassabis said, referring to young people. He advised those headed to college to "immerse yourself now" and strive to "become a sort of ninja using the latest tools." Hassabis said they should spend time "learning to learn," the same advice he gave to students at the University of Cambridge.

Other AI leaders have also encouraged teenagers anxious about AI to learn about it. Microsoft AI CEO Mustafa Suleyman told young people to play with the new technology and learn its weaknesses. In higher education, Rice University announced Tuesday that it will join a growing number of colleges offering AI degrees.

That doesn't mean students should abandon the building blocks that make for a good STEM student, Hassabis said. He still recommends getting good at coding and building up fundamental skills for success.


Indian Express
04-05-2025
- Business
Could eye-scanning crypto orbs save us from a bot apocalypse?
Written by Kevin Roose

Spend enough time in San Francisco, peering into the cyberpunk future, and you may find that weird things start seeming normal. Fleets of self-driving cars? Yawn. A startup trying to resurrect the woolly mammoth? Sure, why not. Summoning a godlike artificial intelligence that could wipe out humanity? Ho-hum.

You may even find yourself, as I did Wednesday night, standing in a crowded room in the Marina district, gazing into a glowing white sphere known as the Orb, having your eyeballs scanned in exchange for cryptocurrency and something called a World ID.

The event was hosted by World, a San Francisco startup co-founded by Sam Altman of OpenAI that has come up with one of the more ambitious (or creepy, depending on your view) tech projects in recent memory.

The company's basic pitch is this: The internet is about to be overrun with swarms of realistic AI bots that will make it nearly impossible to tell whether we're interacting with real humans on social networks, dating sites, gaming platforms and other online spaces. To solve this problem, World has created a program called World ID — you can think of it as Clear or TSA PreCheck for the internet — that will allow users to verify their humanity online.

To enroll, users stare into an Orb, which collects a scan of their irises. Then they follow a few instructions on a smartphone app and receive a unique biometric identifier that is stored on their device. There are baked-in privacy features, and the company says it doesn't store the images of users' irises, only a numerical code that corresponds to them.

In exchange, users receive a cryptocurrency called Worldcoin, which they can spend, send to other World ID holders or trade for other currencies. (As of Wednesday night, the sign-up bonus was worth about $40.)

At the event, Altman pitched World as a solution to the problem he called 'trust in the age of AGI.' As artificial general intelligence nears and humanlike AI systems come into view, he said, the need for a mechanism that tells bots and humans apart is becoming more urgent.

'We wanted a way to make sure that humans stay special and central in a world where the internet was going to have lots of AI-driven content,' Altman said.

Eventually, Altman and World CEO Alex Blania believe that something like Worldcoin will be needed to distribute the proceeds from powerful AI systems to humans, perhaps in the form of a universal basic income. They discussed various ways to create a 'real human network' that would combine a proof-of-humanity verification scheme with a financial payments system, allowing verified humans to transact with other verified humans — all without relying on government-issued IDs or the traditional banking system.

'The initial ideas were very crazy,' Altman said. 'Then we came down to one that was just a little bit crazy, which became World.'

The project launched two years ago internationally, and it found much of its early traction in developing countries such as Kenya and Indonesia, where users lined up to get their Orb scans in exchange for cryptocurrency rewards. The company has raised roughly $200 million from investors including Andreessen Horowitz and Khosla Ventures.

There have been some hiccups. World's biometric data collection has faced opposition from privacy advocates and regulators, and the company has been banned or investigated in places including Hong Kong and Spain. There have also been reports of scams and worker exploitation tied to the project's crypto-based rewards system.

But it appears to be growing quickly. Roughly 26 million people have signed up for World's app since it launched two years ago, Blania said, and more than 12 million have received Orb scans to verify themselves as humans.

World stayed out of the United States at first, partly out of concern that regulators would balk at its plans. But the Trump administration's crypto-friendly policies have given it an opening. On Wednesday, World announced that it was launching in the United States and opening retail outposts in cities including San Francisco, Los Angeles and Nashville, Tennessee, where new users can scan their eyes and get their World IDs. It plans to have 7,500 Orbs in the country by the end of the year.

The company also revealed a new version of its Orb, the Orb Mini — which is not, in fact, an orb. Instead, it looks like a smartphone with glowing eyes, but serves the same purpose as the larger device. And World announced partnerships with other businesses including Razer, a gaming company, and Match Group, a dating app conglomerate, which will soon allow Tinder users in Japan to verify their humanity using their World IDs.

It's not clear yet how any of this will make money, or whether privacy-conscious Americans will be as eager to fork over their biometric data for a few crypto tokens as people in developing parts of the world have been. It's also not clear whether World can overcome basic skepticism about how strange and sinister the whole thing can feel.

Personally, I'm sympathetic to the idea that we need a way to tell bots and humans apart. But World's proposed fix — a global biometric registry, backed by a volatile cryptocurrency and overseen by a private company — may sound too much like a 'Black Mirror' episode to reach mainstream acceptance. And even Wednesday, in a room packed with eager early adopters, I met plenty of people who were reluctant to stare into the Orb. 'I don't give up my personal data easily, and I consider my eyeballs personal data,' a tech worker told me.

World's connection to Altman has also drawn scrutiny. During the event, a few skeptics pointed out that by virtue of his position atop OpenAI, he is in some sense fueling the problem — an internet full of hyperconvincing bots — that World is trying to solve. But it's also possible that Altman's connection could help World scale quickly, if it teams up with OpenAI or integrates with its AI products in some way. Maybe the social network that OpenAI is reportedly building will have a 'verified humans only' mode, or perhaps users who contribute to OpenAI's products in valuable ways will someday be paid in Worldcoin.

(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied the claims.)

It's also entirely possible that privacy norms may shift in World's favor and that what feels strange and sinister today may be normalized tomorrow. (Remember how weird it felt the first time you saw a Clear kiosk at the airport? Did you promise that you'd never hand over your biometric data, then eventually relent and accept it as the cost of convenience?)

When it was my turn to step up to the Orb, I removed my glasses, opened my World app and followed the instructions it gave me. (Look this way, look that way, step back a bit.) The Orb's cameras whirred for a minute, capturing my iris's texture. A ring around the Orb glowed yellow, and it let out a happy chime. A few minutes later, I was the owner of a World ID and 39.22 Worldcoin tokens. (The tokens are worth $40.77 at today's prices, and I'll be donating them to charity, once I figure out how to get them off my phone.)

My Orb scan was quick and painless, but I spent the rest of the night feeling vaguely vulnerable — like I had just agreed to participate in a clinical trial for some risky new drug without reading about the possible side effects.

But many in attendance seemed to have no such qualms. 'What am I hiding, anyway?' a social media influencer named Hannah Stocking said, as she stepped up to take her Orb scan. 'Who cares? Take it all.'
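To make the enrollment flow described above a little more concrete, here is a minimal sketch in Python of the general idea: derive a non-reversible numeric code from a biometric scan, keep only the code, and check against it later. To be clear, this is a hypothetical illustration, not World's published pipeline; every name in it is an assumption, and a production iris system would also need an error-tolerant encoding (a so-called fuzzy extractor), since two scans of the same eye never match bit for bit.

import hashlib
import secrets

def iris_code(template: bytes, salt: bytes) -> str:
    # Reduce a raw iris template to a fixed-size, non-reversible code.
    return hashlib.sha256(salt + template).hexdigest()

# Enrollment: combine a device-held salt with the scan, store only the
# resulting code, and discard the raw scan (placeholder bytes below).
salt = secrets.token_bytes(16)              # never leaves the user's device
raw_scan = b"captured-iris-template-bytes"  # stand-in for real sensor output
stored_code = iris_code(raw_scan, salt)

# Verification: a matching scan reproduces the code, so a service can
# confirm "one person, one ID" without ever seeing the biometric itself.
assert iris_code(raw_scan, salt) == stored_code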


CNN
27-03-2025
- Business
Apple's AI isn't a letdown. AI is the letdown
Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.

The critique of Apple's halting rollout is not entirely unfair. But it is, at times, missing the point.

Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it's the future! What problems is it solving? Well, so far that's not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple's early ads for its AI was so hostile that the company had to pull the commercial.

The real reason companies are doing this is that Wall Street wants them to. Investors have been salivating for an Apple 'super cycle': a tech upgrade so enticing that consumers will rush to get their hands on the new model. In a rush to please shareholders, Apple made a rare stumble. The company seems to be owning its error, and has said the delayed features will roll out 'in the coming year.' Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.

And that is where the Apple-AI narrative goes off the rails.

There's a popular adage in policy circles: 'The party can never fail, it can only be failed.' It is meant as a critique of ideological gatekeepers who may, for example, blame voters for their party's failings rather than the party itself. That same fallacy is taking root among AI's biggest backers. AI can never fail, it can only be failed. Failed by you and me, the smooth-brained Luddites who just don't get it.

(To be sure, even AI proponents will acknowledge available models' shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)

Tech columnists such as the New York Times' Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around. 'Apple is not meeting the moment in AI,' Roose said on his podcast, Hard Fork, earlier this month. 'I just think that when you're building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.'

To which I would counter, respectfully: Absolutely not.

Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3 trillion empire built by the notoriously detail-obsessed Steve Jobs. The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its 'walled garden' of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.

Apple's obsession with privacy and security is the reason most of us don't think twice before scanning our faces, storing bank account information or sharing our real-time location via our phones. And not only do we trust Apple to keep our data safe, we trust it to design things that are accessible out of the box. You can buy a new iPhone, AirPods or Apple Watch and trust that the moment you turn it on, a user-friendly system will hold your hand through the setup and seamlessly sync it with your other devices. You will almost never need a user manual filled with tiny print. Even your Boomer parents will be able to navigate FaceTime calls with minimal effort.

Roose contends, at one point in the episode, that 'there are people who use AI systems who know that they are not perfect,' and that those regular users understand there's a right way and a wrong way to query a chatbot. This is where we, the people, are apparently failing AI. Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise, we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information to us.

Apple, Roose says, should keep pushing AI into its products and just get used to the idea that those features may be unpolished and a little too advanced for the average user.

And again, respectfully, I would ask: To what end?

As Hard Fork co-host Casey Newton notes in the same episode, it's not as if Google or Amazon has figured out some incredible use case that's making users rush to buy a new Pixel phone or an Echo speaker. 'AI is still so much more of a science and research story than it is a product story,' Newton notes.

In other words: Large language models are fascinating science. They are an academic wonder with huge potential and some early commercial successes, such as OpenAI's ChatGPT and Anthropic's Claude. But a bot that's 80% accurate — a figure Newton made up, but we'll go with it — isn't a very useful consumer product.

Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom's flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps: opening your email to find the flight number, copying it, then pasting it into Google to find the flight's status.

If it's 100% accurate, it's a fantastic time saver. If it is anything less than 100% accurate, it's useless. Because even if there's a 2% chance it's wrong, there's a 2% chance you're stranding Mom at the airport, and Mom will be, rightly, very disappointed. Our moms deserve better!

Bottom line: Apple is not the laggard in AI. AI is the laggard in AI.
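As a postscript, a quick back-of-envelope check of that 2% figure, sketched in Python, shows why small per-query error rates sink consumer products: they compound across repeated use. The numbers here are illustrative assumptions, not measurements.

per_query_accuracy = 0.98  # the column's hypothetical 2% error rate
queries = 50               # say, one Siri request per workday for ten weeks

p_all_correct = per_query_accuracy ** queries
print(f"Chance of at least one wrong answer: {1 - p_all_correct:.0%}")
# Prints about 64%: at 98% per-query accuracy, a daily user should
# expect to be misled within a couple of months.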