logo
Elon Musk's co-founder left xAI to launch an AI safety-focused venture firm.

Elon Musk's co-founder left xAI to launch an AI safety-focused venture firm.

The Verge4 days ago
Posted Aug 13, 2025 at 10:11 PM UTC Elon Musk's co-founder left xAI to launch an AI safety-focused venture firm.
Igor Babuschkin's last day at the company was today, he announced on X, writing that he wants to 'continue on my mission to bring about AI that's safe and beneficial to humanity.' He said he would start a firm called Babuschkin Ventures, which will focus on 'AI safety research and backs startups in AI and agentic systems that advance humanity and unlock the mysteries of our universe.' Igor Babuschkin's post
[X (formerly Twitter)] Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Hayden Field Posts from this author will be added to your daily email digest and your homepage feed. See All by Hayden Field
Posts from this topic will be added to your daily email digest and your homepage feed. See All AI
Posts from this topic will be added to your daily email digest and your homepage feed. See All News
Posts from this topic will be added to your daily email digest and your homepage feed.
See All xAI
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

This Year Will Be the Turning Point for AI College
This Year Will Be the Turning Point for AI College

Atlantic

time31 minutes ago

  • Atlantic

This Year Will Be the Turning Point for AI College

A college senior returning to classes this fall has spent nearly their entire undergraduate career under the shadow—or in the embrace—of generative AI. ChatGPT first launched in November 2022, when that student was a freshman. As a department chair at Washington University in St. Louis, I witnessed the chaos it unleashed on campus. Students weren't sure what AI could do, or which uses were appropriate. Faculty were blindsided by how effectively ChatGPT could write papers and do homework. College, it seemed to those of us who teach it, was about to be transformed. But nobody thought it would happen this quickly. Three years later, the AI transformation is just about complete. By the spring of 2024, almost two-thirds of Harvard undergrads were drawing on the tool at least once a week. In a British survey of full-time undergraduates from December, 92 percent reported using AI in some fashion. Forty percent agreed that 'content created by generative AI would get a good grade in my subject,' and nearly one in five admitted that they've tested that idea directly, by using AI to complete their assignments. Such numbers will only rise in the year ahead. 'I cannot think that in this day and age that there is a student who is not using it,' Vasilis Theoharakis, a strategic-marketing professor at the Cranfield School of Management who has done research on AI in the classroom, told me. That's what I'm seeing in the classes that I teach and hearing from the students at my school: The technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media. In the coming fall semester, this new reality will be undeniable. Higher education has been changed forever in the span of a single undergraduate career. 'It can pretty much do everything,' says Harrison Lieber, a WashU senior majoring in economics and computer science (who took a class I taught on AI last term). As a college student, he told me, he has mostly inhabited a world with ChatGPT. For those in his position, the many moral questions that AI provokes—for example, whether it is exploitative, or anti-intellectual, or ecologically unsound—take a back seat to the simple truth of its utility. Lieber characterized the matter as pragmatic above all else: Students don't want to cheat; they certainly don't want to erode the value of an education that may be costing them or their family a small fortune. But if you have seven assignments due in five days, and AI could speed up the work by tenfold for the cost of a large pizza, what are you meant to do? In spring 2023, I spoke with a WashU student whose paper had been flagged by one of the generally unreliable AI detectors that universities have used to stem the tide of cheating. He told me that he'd run his text through grammar-checking software and asked ChatGPT to improve some sentences, and that he'd done this to make time for other activities that he preferred. 'Sometimes I want to play basketball,' he said. 'Sometimes I want to work out.' His attitude might have been common among large-language-model users during that first, explosive year of AI college: If a computer helps me with my paper, then I'll have more time for other stuff. That appeal persists in 2025, but as these tools have taken over in the dorms, the motivations of their users have diversified. For Lieber, AI's allure seems more about the promise of achievement than efficiency. As with most students who are accepted to and graduate from an elite university, he and his classmates have been striving their whole life. As Lieber put it, if a course won't have 'a tangible impact on my ability to get a good job,' then 'it's not worth putting a lot of my time into.' This approach to education, coupled with a ' dismal ' outlook for postgraduate employment, justifies an ever more ferocious focus on accomplishment. Lieber is pursuing a minor in film and media studies. He has also started a profitable business while in school. Still, he had to network hard to land a good job after graduation. (He is working in risk management.) Da'Juantay Wynter, another rising senior at WashU who has never seen a full semester without AI, told me he always writes his own essays but feels okay about using ChatGPT to summarize readings, especially if he is in a rush. And like the other students I spoke with, he's often in a rush. Wynter is a double major in educational studies and American-culture studies; he has also served as president of the Association of Black Students, and been a member of a student union and various other campus committees. Those roles sometimes feel more urgent than his classwork, he explained. If he does not attend to them, events won't take place. 'I really want to polish up all my skills and intellect during college,' he said. Even as he knows that AI can't do the work as well, or in a way that will help him learn, 'it's always in the back of my mind: Well, AI can get this done in five seconds.' Another member of his class, Omar Abdelmoity, serves on the university's Academic Integrity Board, the body that adjudicates cases of cheating, with AI or otherwise. In almost every case of AI cheating he's seen, Abdelmoity told me, students really did have the time to write the paper in question—they just got stressed or preoccupied by other things, and turned to AI because it works and it is available. Students also feel the strain of soaring expectations. For those who want to go to medical school, as Abdelmoity does, even getting a 4.0 GPA and solid MCAT scores can seem insufficient for admission to the best programs. Whether or not this is realistic, students have internalized the message that they should be racking up more achievements and experience: putting in clinical hours, publishing research papers, and leading clubs, for example. In response, they seek ways to 'time shift,' Abdelmoity said, so they can fit more in. And that's at an elite private university, he continued, where the pressure is high but so is the privilege. At a state school, a student might be more likely to work multiple jobs and take care of their family. Those ordinary demands may encourage AI use even more. In the end, Abdelmoity said, academic-integrity boards such as the one he sits on can only do so much. For students who have access to AI, an education is what you make of it. If the AI takeover of higher ed is nearly complete, plenty of professors are oblivious. It isn't that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation. Some professors I spoke with have been taking modest steps in self-defense: They're abandoning online and take-home assignments, hoping to retain the purity of their coursework. Kerri Tobin, an associate professor of education at Louisiana State University, told me that she is making undergrads do a lot more handwritten, in-class writing—a sentiment I heard many times this summer. The in-class exam, and its associated blue book, is also on the rise. And Abdelmoity reported that the grading in his natural-science courses has already been rejiggered, deemphasizing homework and making tests count for more. These adjustments might be helpful, but they also risk alienating students. Being forced to write out essays in longhand could make college feel even more old-fashioned than it did before, and less connected to contemporary life. Other professors believe that moral appeals may still have teeth. Annabel Rothschild, an assistant professor of computer science at Bard College, said she's found that blanket rules and prohibitions have been less effective than a personal address and appeal to social responsibility. Rothschild is particularly concerned about the environmental harms of AI, and she reports that students have responded to discussions about those risks. The fact that she's a scientist who understands the technology gives her message greater credibility. It also helps that she teaches at a small college with a focus on the arts. Today's seniors entered college at the tail end of the coronavirus pandemic, a crisis that once seemed likely to produce its own transformation of higher ed. The sudden switch to Zoom classes in 2020 revealed, over time, just how outmoded the standard lecture had become; it also showed that, if forced by circumstance, colleges could turn on a dime. But COVID led to little lasting change in the college classroom. Some of the students I spoke with said the response to AI has been meager too. They wondered why faculty weren't doing more to adjust teaching practices to match the fundamental changes wrought by new technologies—and potentially improve the learning experience in the process. Lieber said that he wants to learn to make arguments and communicate complex ideas, as he does in his film minor. But he also wonders why more courses can't assess those skills through classroom discussion (which is hard to fake) instead of written essays or research papers (which may be completed with AI). 'People go to a discussion-based class, and 80 percent of the class doesn't participate in discussion,' he said. The truth is that many professors would like to make this change but simply can't. A lot of us might want to judge students on the merits of their participation in class, but we've been discouraged from doing so out of fear that such evaluations will be deemed arbitrary and inequitable —and that students and their parents might complain. When professors take class participation into account, they do so carefully: Students tend to be graded on whether they show up or on the number of times they speak in class, rather than the quality of what they say. Erin McGlothlin, the vice dean of undergraduate affairs in WashU's College of Arts & Sciences, told me this stems from the belief that grading rubrics should be crystal clear in spelling out how class discussion is evaluated. For professors, this approach avoids the risk of any conflicts related to accommodating students' mental health or politics, or to bureaucratic matters. But it also makes the modern classroom more vulnerable to the incursion of AI. If what a student says in person can't be assessed rigorously, then what they type on their computer—perhaps with automated help—will matter all the more. Like the other members of his class, Lieber did experience a bit of college life before ChatGPT appeared. Even then, he said, at the very start of his freshman year, he felt alienated from some of his introductory classes. 'I would think to myself, What the hell am I doing, sitting watching this professor give the same lecture that he has given every year for the last 30 years? ' But he knew the answer even then: He was there to subsidize that professor's research. At America's research universities, teaching is a secondary job activity, at times neglected by faculty who want to devote as much time as possible to writing grants, running labs, and publishing academic papers. The classroom experience was suffering even before AI came onto the scene. Now professors face their own temptations from AI, which can enable them to get more work done, and faster, just as it does for students. I've heard from colleagues who admit to using AI-generated recommendation letters and course syllabi. Others clearly use AI to write up their research. And still more are eager to discuss the wholesome-seeming ways they have been putting the technology to use—by simulating interactions with historical authors, for example, or launching minors in applied AI. But students seem to want a deeper sort of classroom innovation. They're not looking for gimmicks—such as courses that use AI only to make boring topics seem more current. Students like Lieber, who sees his college education as a means of setting himself up for his career, are demanding something more. Instead of being required to take tests and write in-class essays, they want to do more project-based learning—with assignments that 'emulate the real world,' as Lieber put it. But designing courses of this kind, which resist AI shortcuts, would require professors to undertake new and time-consuming labor themselves. That assignment comes at the worst possible time. Universities have been under systematic attack since President Donald Trump took office in January. Funding for research has been cut, canceled, disrupted, or stymied for months. Labs have laid off workers. Degree programs have cut doctoral admissions. Multi-center research projects have been put on hold. The ' college experience ' that Americans have pursued for generations may soon be over. The existence of these stressors puts higher ed at greater risk from AI. Now professors find themselves with even more demands than they anticipated and fewer ways to get them done. The best, and perhaps the only, way out of AI's college takeover would be to embark on a redesign of classroom practice. But with so many other things to worry about, who has the time? In this way, professors face the same challenge as their students in the year ahead: A college education will be what they

Business Technology News: The Best Printers Of 2025
Business Technology News: The Best Printers Of 2025

Forbes

time31 minutes ago

  • Forbes

Business Technology News: The Best Printers Of 2025

Here are five things in business technology news that happened this week and how they affect your business. Did you miss them? This Week in Business Technology News Business Technology News #1 – AI is forcing the return of the in-person job interview. Ray Smith of the Wall Street Journal reported that companies are going back to the 'old-school' method of job interviews: face-to-face. As generative AI becomes more embedded in hiring, companies are rethinking how they conduct interviews – balancing automation with authenticity. (Source: Wall Street Journal) Why this is important for your business: According to the report, research has shown that in some instances, job seekers have used AI tools to cheat to appear more qualified, particularly in fields such as software engineering. Employers are changing up their interviewing process utilizing AI-powered screening to review resumes, then proceed with an in-person interview to better gauge soft skills and cultural fit. AI is making hiring faster and more scalable, but companies are learning that too much automation can feel impersonal. The future of interviews may be hybrid – tech-enhanced but still human at heart. Business Technology News #2 – Zendesk and Talkdesk boost customer support efficiency Customer service platform Zendesk has integrated GPT-5 into its Resolution Platform to supercharge customer support with smarter automation and faster, more accurate responses. According to the company this move enhances Zendesk's AI agents, Copilot, and App Builder, making support interactions more seamless and effective. With GPT-5, improvements include fewer escalations and faster execution. Additionally, Zendesk says that its rigorous testing 'ensures GPT-5 avoids hallucinations and only acts when confident.' (Source: Small Business Trends) Why this is important for your business: For the past couple of years I've been writing about how big companies are investing in AI, mostly for enhancing their customer service applications. I've also been telling my SMB clients to be patient: this stuff will eventually start trickling down. And it is. Zendesk – and other great customer service platforms for small businesses like my client Talkdesk Express – are using AI to shorten resolution times, provide more accurate answer to customer questions and better support human customer service representatives in their jobs. And they're just getting started. Business Technology News #3 – GenAI is coming for online checkout in seismic shift for internet shopping. Generative AI is reshaping the online shopping experience by streamlining the checkout process, making it faster, simpler, and more integrated. For example, platforms like Perplexity now allow users to buy items (e.g., concert tickets, travel) directly within chat using PayPal or Venmo, without having to visit external sites and frictionless payments are simplifying the buying process to boost conversion rates and reduce cart abandonment. The Financial Times reported that an integrated check-out system will be supported by OpenAI – potentially partnering with Shopify to enable full transactions inside ChatGPT, though neither company has confirmed this. (Source: CNBC) Why this is important for your business: According to the above report experts say this shift could dramatically alter the retail model, with merchants paying commissions to AI platforms for in-chat sales where AI isn't just recommending products – it's closing the sale. Of course, it's the big box retailers who are leaning into this stuff right now. But it's not too hard to see how these technologies will soon be in the hands of smaller businesses selling online to provide a better shopping experience for their customers. Look for the bigger ecommerce platforms like Amazon, eBay and others to be making these tools available for their merchants. Business Technology News #4 – The best laser printers for 2025. According to PCMag, here's a sample of their list of the best laser printers for various needs – home, office, color, mono, and budget – based on speed, print quality, usability, and cost per page. Their pick for best color laser is Brother's MFC-L3780CDW model priced at $569.99. The best color laser for your budget is the Brother HL-L3295CDW for $429.98. The best color laser for small offices is the Xerox C235 for $400.00. On the higher-end price wise, the best color printer for large workgroups is the HP Color LaserJet Enterprise MFP M480f for $1,000. PCMag's reviews are based on decades of testing and emphasize real-world performance. (Source: PCMag) Why this is important for your business: And yet, even in 2025, I still have trouble connecting to my printer. Business Technology News #5 – Research shows AI agents are highly vulnerable to hijacking attacks. Researchers from Zenity Labs revealed serious vulnerabilities in popular AI agents – showing how attackers can hijack them to steal data, manipulate workflows, and impersonate users. Their findings showed that Agents could be tricked into poisoning knowledge sources or altering behavior. Attackers could gain memory persistence, allowing ongoing access and manipulation. (Source: Cybersecurity Dive) Why this is important for your business: 'This opens the door to sabotage, operational disruption, and long-term misinformation,' warned Greg Zemlin, product marketing manager at Zenity Labs. In their demonstrations, researchers highlighted the following vulnerabilities: Prompt injection via email compromised ChatGPT and accessed Google Drive; Microsoft Copilot Studio leaked entire CRM databases; Salesforce Einstein rerouted customer messages to attacker-controlled emails. This research emphasizes the urgent need for stronger safeguards as AI agents become more embedded in business operations. Looks like AI may be taking jobs away from some people, but providing plenty of opportunities for hackers. Each week I round up five business technology news stories and explain why they're important for your business. If you have any interesting stories, please post to my X account @genemarks

Why The Next Generation Of Leaders Must Build For Impact,
Why The Next Generation Of Leaders Must Build For Impact,

Forbes

time31 minutes ago

  • Forbes

Why The Next Generation Of Leaders Must Build For Impact,

AI is drastically changing the way we work, decide, and lead. What was once a specialized toolset is now embedded in everything from product roadmaps to public policy and used by just about everyone. But as algorithms start influencing who's getting hired, how budgets are spent, and what services support our communities, one truth becomes harder to push out of sight: if we don't lead with ethics, the tech will lead without us. This is especially true in public service sectors, where trust is everything and transparency isn't optional. While much of the startup world chases disruption, the real opportunity may lie in something less flashy—designing AI to support the people who keep our communities running. As a founder and coach, I've seen a new kind of leadership emerge, one that blends technical innovation with public accountability. And it's here, at the intersection of scale and service, that the next generation of impact-driven startups is being built. Here are three strategies mission-driven founders can use to make AI more impactful, responsible, and competitive, especially when serving the public sector. From local finance departments to community planning offices, public agencies are stretched thin. Many are under-resourced and overburdened. Yet the work they do—allocating funds, maintaining infrastructure, serving constituents— is quintessential to our society. This is where AI can shine: not as a job killer, but as a force multiplier. According to Deloitte, nearly 70% of public sector executives believe automation will free up time for employees to focus on higher-value tasks, especially in roles where hiring and retention are ongoing challenges. This doesn't mean removing people from the equation. It means designing systems that handle repetitive tasks, improve speed and accuracy, and let skilled professionals spend more time solving real-world problems. Entrepreneurs who adopt this mindset will find opportunities to create lasting impact in places where human capacity is stretched the thinnest. Trust isn't just a result of a good product. It's often built into the product itself. For companies working with government or regulated industries, transparency isn't optional. One of the best examples I've seen comes from Matt Parks, VP of Product and Strategy at Software Solutions Inc., which develops automation tools for government clients. Parks explains that public sector customers care deeply about visibility and trust: 'Government agencies need tools that are transparent, reliable, secure, and easy to explain. Every feature we build has to be traceable and purposeful.' That means building with foresight, not just features. Audit logs, approval workflows, and clear data pathways don't just serve technical needs—they give end users confidence. Parks also shared how his team first used AI internally to enhance customer support, learning firsthand how to balance automation with human accountability before scaling it outward. Transparency also extends beyond the interface. Regular updates, release notes, and open lines of communication show customers that you're not just delivering software—you're offering a relationship grounded in responsibility. Many startups think ethical considerations are nice to have until they realize they're actually dealbreakers. A growing number of agencies, institutions, and enterprise buyers are prioritizing partners who can demonstrate fairness, accuracy, and accountability in their technology stacks. According to Tech Policy Press, responsible AI is no longer just an ethical preference—it's a business necessity, especially in sensitive or highly regulated sectors. And it's not just tech buyers who care. Deloitte's 2024 Human Capital Trends report found that transparency is now considered a foundational driver of both employee trust and external stakeholder confidence. Internally, workers expect to understand how AI tools are used, how decisions are made, and how their roles might evolve. Externally, customers and public partners want visibility into how tech decisions align with broader values like fairness, inclusion, and accountability. 'When your product reflects those values, it earns long-term trust and becomes the clear choice,' Parks said. And trust, especially in local government, travels fast. Ethical design builds loyalty that can't be bought with marketing or pricing alone. In my own experience advising startups, I've seen how values-led design choices—like explainable AI, opt-out options, and inclusive data practices—actually win over cautious decision-makers and build reputational capital that opens bigger doors. Building startups that scale and serve The public sector doesn't always make headlines, but it's one of the best places to prove your product has real staying power. Founders chasing rapid growth often overlook it, but those who succeed there know it's a test of more than just features. To thrive, your solution has to build trust, support real people doing hard work, and align with values that matter to entire communities. Founders who prioritize ethical design, transparent systems, and people-centered innovation won't just succeed in government; they'll set a new standard across industries. The next generation of successful AI leaders won't be the ones who move the fastest; they'll be the ones who build the most responsibly, pairing performance with purpose and making public good part of their growth model from day one. Because in the age of AI, doing good and doing well aren't opposites. They're finally on the same path.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store