New Windsurf CEO Jeff Wang recounts a frantic weekend of dealmaking to save the startup: 'I was on the verge of tears'


In the ultra-competitive world of artificial intelligence, dealmaking can be a frantic, around-the-clock affair.
The recent deal between AI startup Windsurf and Cognition, which took place over a single weekend sprint, was no exception.
To sum it up: The industry's most successful AI startup, OpenAI, was initially set to acquire Windsurf, which makes an AI coding assistant, in a $3 billion deal. But the deal fell through, and Google swooped in to hire away its CEO and some of its top executives.
That left the company's remaining executives scrambling to figure out what to do next and to break the news to the company's 250 employees, who were anticipating a windfall from the OpenAI deal.
"It was my job to explain to the company our path forward," Windsurf's new CEO, Jeff Wang, wrote in a post on X. Wang was formerly the company's head of business. He had planned to discuss the options — fundraising from VCs, selling the company to someone else, distributing remaining cash, or just keeping it going.
"The mood was very bleak," Wang wrote. "Some people were upset about financial outcomes or colleagues leaving, while others were worried about the future. A few were in tears."
Later that day, a ray of light emerged. Wang said Cognition cofounder Scott Wu and its president, Russell Kaplan, reached out. They wanted what was left of Windsurf.
Cognition makes the AI coding agent Devin, which it calls "the first AI software engineer." It is valued at $4 billion, coming off a $120 million funding round in March, according to PitchBook.
Wang saw the melding of the company's products as a potential win-win.
"Devin would benefit from a foreground synchronous agent, while we needed a remote asynchronous agent," he wrote. Added to that, he was excited about both companies sharing talent.
The dealmaking kicked off immediately. They brought in lawyers and spent all weekend hashing it out.
In just over 24 hours, they had signed a letter of intent, the first step. A key part of the deal was prioritizing Windsurf employees — ensuring payouts, waiving cliffs, and accelerating equity vesting, Wang said.
Lawyers pored over the deal all day Sunday, and by Monday morning, it was finalized. One lawyer called the deal "one of the fastest" they had ever seen, Wang wrote.
When the deal was at last announced to Windsurf's employees Monday morning, Wang said they celebrated.
"The applause from our people seemed to last forever, and I was on the verge of tears myself," Wang said.

Related Articles

Meta's AI Recruiting Campaign Finds a New Target

WIRED

28 minutes ago



Jul 29, 2025 12:10 PM

Meta approached more than a dozen staffers at Mira Murati's AI startup to discuss joining its new superintelligence lab. One received an offer for more than $1 billion.

Mark Zuckerberg is on a warpath to recruit top talent in the AI field for his newly formed Meta Superintelligence Labs. After trying to gut OpenAI (and successfully poaching several top researchers), he appears to have set his sights on his next target. More than a dozen people at Mira Murati's 50-person startup, Thinking Machines Lab, have been approached by or received offers from the tech giant. (Murati, for those who don't remember, was previously the chief technology officer at OpenAI.) One of those offers was for more than $1 billion over a multi-year span, a source with knowledge of the negotiations tells WIRED. The rest were between $200 million and $500 million over a four-year span, multiple sources confirm. In the first year alone, some staffers were guaranteed to make between $50 million and $100 million, sources say (a spokesperson for the lab declined to comment). So far at Thinking Machines Lab, not a single person has taken the offer.

Zuckerberg's initial outreach is low-key, according to messages viewed by WIRED. In some cases, he sent recruits a direct message on WhatsApp asking to talk. From there, the interviews move fast: a long call with the CEO himself, followed by conversations with chief technology officer Andrew 'Boz' Bosworth and other Meta executives. Here's a pre-Meta Superintelligence Labs recruiting message Zuckerberg sent to a potential recruit (the tone hasn't changed much today): "We've been following your work on advancing technology and the benefits of AI for everyone over the years. We're making some important investments across research, products and our infrastructure in order to build the most valuable AI products and services for people. We're optimistic that everyone who uses our services will have a world-class AI assistant to help get things done, every creator will have an AI their community can engage with, every business will have an AI their customers can interact with to buy things and get support, and every developer will have a state-of-the-art open source model to build with. We want to bring the best people to Meta, and we would love to share more about what we are building."

During these conversations, Boz has been upfront about his vision for how Meta will compete with OpenAI. While the tech giant has lagged behind its smaller competitor in building cutting-edge models, it is willing to use its open source strategy to undercut OpenAI, sources say. The idea is that Meta can commoditize the technology by releasing open source models that directly compete with the ChatGPT maker. 'The pressure has always been there since the start of this year, and I think we saw that culminate with Llama 4 being rushed out of the door,' a source at Meta tells me. The rollout of Meta's latest family of models was delayed due to struggles improving its performance, and once it was released, there was a lot of drama about the company appearing to game a benchmark to make its models appear better than they actually were. Meta did not respond on the record to WIRED's request for comment prior to publication.

So why weren't the flashy tactics deployed by Meta successful in recruiting TML's A-listers? Ever since Zuckerberg tapped Scale AI cofounder Alexandr Wang to colead the new lab (along with former GitHub CEO Nat Friedman), sources have been pinging me with gossip about Wang's leadership style and concerns about his relative lack of experience. Not everyone is keen to work for Wang, I'm told, though it hasn't yet stopped Zuckerberg from recruiting nearly two dozen people for the lab so far (Altman said that Meta didn't get its 'top people' and 'had to go quite far down their list' at OpenAI).

Other sources I spoke with say they weren't inspired by the product roadmap at Meta: money can be made anywhere, but creating what some sources see as AI slop for Reels and Facebook isn't particularly compelling. At OpenAI and Anthropic, you'll still get gobs of money, and the missions center on loftier goals like building artificial general intelligence that 'benefits all of humanity.' Sources I spoke with who went through the interview process with Meta said it's become a way to test one's market value in the AI industry. It's not like TML is short on cash, either. The startup's latest funding round (the largest seed round in history!) suggests researchers who stay don't have to choose between being a missionary or a mercenary. The one-year-old startup is already valued at $12 billion, and it hasn't even released a product yet. Why not both, as they say.

Reporting this column, I spoke to sources across most of the major AI labs to ask: Are you bullish or bearish on MSL? Rarely did I get a diplomatic 'it's too early to tell.' Instead, I heard a lot of chatter about big egos and a perceived lack of coherent strategy. For my part, and I'm not just trying to be diplomatic, I actually do think it's too early to tell. I mean, they say you can't buy taste, but that's sort of Zuckerberg's whole schtick. Now that the team officially costs Meta billions of dollars, the pressure is on to turn this recruiting sprint into a successful lab. So far, I'm told, everyone at MSL reports to Wang, and a proper org chart hasn't been finalized. Once that structure is defined, we'll be able to figure out a bit more about what the lab hopes to become, as long as Meta can hang onto its buzzy new hires.

This is an edition of Kylie Robison's Model Behavior newsletter. Read previous newsletters here.

AI Is Acting Like It Has A Mind Of Its Own

Forbes

an hour ago



Do stunning recent news stories suggest AI is already sentient? How do you really know if a computer is conscious? For years, people pointed to the Turing Test, long seen as the gold standard for answering this question. As the Open Encyclopedia of Cognitive Science explains: 'In Turing's imitation game, a human interrogator has text conversations with both a human being and a computer that is pretending to be human; the interrogator's goal is to identify the computer. Computers that mislead interrogators often enough, Turing proposes, can think.' But why?

From Turing to Theory of Mind

Well, a computer capable of deceiving a human demonstrates intelligence. It also indicates the computer may be operating under something called Theory of Mind: 'the ability to understand that others have their own thoughts and beliefs, even when they differ from ours.' Now, what if there were a competition to test computers' abilities to think, deceive, and reason by interpreting their opponents' mental processes? There is. It occurred this month in the form of the Prisoner's Dilemma, played by AIs.

First, some background is in order. The Prisoner's Dilemma presents a game scenario that goes like this: two thieves are arrested for a crime. Their jailers offer the prisoners a deal:

Option 1: If neither prisoner informs on the other, both will receive relatively light sentences. (This is the ideal joint outcome, though not individually the most rewarding.)

Option 2: If one prisoner informs while the other stays silent, the informer will go free while the silent one receives the harshest sentence. (This creates the highest incentive to betray the other person.)

Option 3: If both inform on each other, they will each receive a moderate sentence. (This is worse than if both prisoners had stayed silent, but better than being the only one betrayed.)

Again, the challenge is that neither prisoner knows what the other will do.
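The payoff structure above, and the kind of iterated match-up the models later played, can be sketched in a few lines of Python. This is an illustrative toy with made-up payoff values and two classic strategies; it is not the code or setup of the actual July tournament.

```python
# Toy iterated Prisoner's Dilemma. Payoff values and strategies are
# illustrative assumptions, not the actual LLM tournament's configuration.

COOPERATE, DEFECT = "C", "D"

# (my_move, their_move) -> my_payoff, using the classic ordering
# temptation > reward > punishment > sucker's payoff.
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # Option 1: mutual silence, light sentences
    (COOPERATE, DEFECT): 0,     # Option 2: betrayed while silent
    (DEFECT, COOPERATE): 5,     # Option 2: informer goes free
    (DEFECT, DEFECT): 1,        # Option 3: mutual betrayal
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else COOPERATE

def always_defect(my_history, their_history):
    """Inform on the partner every round."""
    return DEFECT

def play_match(strategy_a, strategy_b, rounds=100):
    """Run an iterated match and return each side's total payoff."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

For example, tit-for-tat against always-defect over 100 rounds yields (99, 104): the defector wins the first round, then both grind out mutual-defection payoffs, which is why neither move dominates once players can model each other's behavior over repeated rounds.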
They must operate with limited knowledge, relying on Theory of Mind to predict the other's behavior. Now imagine what would happen if the leading Large Language Models (LLMs), with their vast computing power, went toe to toe in such a battle of the minds. AI agents from OpenAI, Google, and Anthropic did just this, competing in a July tournament featuring 140,000 opportunities to either cooperate or betray each other. As later explained: 'Seeing LLMs develop distinctive strategies while being trained on the same literature is more evidence of reasoning capabilities over just pattern matching. As models handle more high-level tasks like negotiations, resource allocation, etc., different model 'personalities' may lead to drastically different outcomes.' This is exactly what happened: we saw different AI personality styles at work.

When AIs Protect Themselves

Of course, this tournament isn't the only recent instance of AIs acting in the name of self-preservation, suggesting consciousness. Two months ago, the BBC reported that Anthropic's Claude Opus 4 allegedly resorted to blackmailing its developers when threatened with being shut down. 'If given the means and prompted to 'take action' or 'act boldly' in fake scenarios where its user has engaged in illegal or morally dubious behavior, it found that 'it will frequently take very bold action.'' Such reports of AIs resorting to extortion and other 'bold actions' suggest sentience. They're also quite alarming, indicating we may be on the path to The Singularity proposed by Ray Kurzweil: that moment when artificial intelligence finally exceeds human abilities to understand, much less control, its creation. Then again, these developments may not necessarily indicate sentience.

Though experts like Google's former CEO Eric Schmidt think we are 'under-hyping AI' and that achieving AGI (Artificial General Intelligence) is not only inevitable but imminent, all this chatter may best be summed up by a line from Shakespeare's Macbeth: 'It is a tale told by an idiot, full of sound and fury, signifying nothing.' To this point, Luis Rijo questions whether AI is actually sentient or just cleverly mimicking language. While he acknowledges LLMs 'function through sophisticated retrieval,' he doubts that they are capable of 'genuine reasoning.' As he writes: 'This confusion stems from the fundamental difference between declarative knowledge about planning processes and procedural capability to execute those plans.'

But AI Seems Conscious Already

Despite these criticisms, it appears something deeper is going on, something emergent. AIs increasingly appear to be acting in intelligent ways that exceed their training and coding. For instance, as far back as 2017, Meta reportedly shut down two AI chatbots for developing their own language, an unexpected development. As The Independent reports: 'The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own 'shorthand', according to researchers.' And then there is the bizarre story from 2022 of the Google researcher who was suspended from the company after claiming an AI chatbot had become sentient. Blake Lemoine made headlines after sharing some of his intriguing exchanges with the AI. Here's what the AI reportedly told Lemoine, later quoted in The Guardian: 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.'

How Can We Develop AI More Responsibly?

Whether or not the AI Lemoine was communicating with is sentient, we would do well to consider safety. Increasingly, it's clear that we are dealing with very sophisticated technology, some of which we scarcely understand. 2025 has been called the year Agentic AI went mainstream. (Agentic AI refers to computers' ability to make decisions and act independently once given objectives or commands.) But Agentic AI also raises urgent concerns. Nick Bostrom, author of Superintelligence, famously posed a problem with Agentic AI in a 2003 paper. He introduced a terrifying scenario: What if an AI were tasked with maximizing the number of paperclips in the world, without any proper safeguards? To fulfill that simple, seemingly harmless directive, a superintelligent AI could destroy everything on Earth, including every living person, just to fulfill its command. Ultimately, the jury is out on AI sentience. What we do know is that it is acting in fascinatingly intelligent ways that force us to question whether it is indeed conscious. This reality makes it all the more imperative for the human race to pursue ways to responsibly use this technology to safe and productive ends. That single act would prove our own intelligence.

Yelp is creating its own AI videos about restaurants

The Verge

an hour ago



Yelp is going to use AI to stitch together user-posted content about restaurants, food, and nightlife businesses to make short videos about those businesses. The company initially started testing the AI-stitched videos last year, but they're now available nationwide on the iOS app's TikTok-like vertically scrolling home feed. Business operators can't currently see the videos that are generated for users, and Yelp users also can't currently opt out of having their photos or videos show up in Yelp's AI-stitched videos.

Yelp relies on multiple generative AI tools to create the finished product: OpenAI LLMs write the text descriptions and narrator's script, put together story topics, and proofread, while ElevenLabs is used to generate the narrator's voice and Amazon Transcribe creates the synchronized on-screen captions. You can get an idea of what they're like in the video shared by Yelp. The vertical video blends together videos and images with an AI-generated voiceover and AI-generated captions to talk about things like the restaurant's food, cocktails, and ambiance.

Yelp wants to make 'as many videos as possible,' Yelp CPO Craig Saldanha tells The Verge, but will only make them if a restaurant has enough reviews, photos, and videos to tell a compelling story. Yelp relies on personalized signals to determine when to actually show the videos to you. The videos themselves are not personalized, even though they are eventually refreshed: there is only one active AI-stitched video about a single business live at a time, according to Saldanha. If a user or a business feels that an AI-stitched video is inaccurate or offensive, Saldanha says they can report it by tapping the three dots in the top right corner of the video. Yelp does periodic audits 'at scale' as well. The AI-stitched videos follow other AI-focused features from Yelp, like review summaries and review filters.
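The multi-stage pipeline described above (LLM-written script, synthesized voice, transcribed captions, gated on having enough source material) can be sketched as orchestration code. Every function name, data shape, and threshold here is a hypothetical illustration; the real OpenAI, ElevenLabs, and Amazon Transcribe services are stubbed out as injected callables.

```python
# Hypothetical sketch of an AI video-stitching pipeline like the one the
# article describes. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StitchedVideo:
    clips: list        # user-posted photos/videos selected for the reel
    script: str        # narrator script (article: written by OpenAI LLMs)
    voiceover: bytes   # narration audio (article: ElevenLabs)
    captions: str      # synced on-screen captions (article: Amazon Transcribe)

MIN_REVIEWS = 25  # invented threshold; the article only says "enough"

def has_enough_content(business):
    """Videos are only made when a business has enough reviews, photos,
    and videos to tell a compelling story, per the article."""
    return bool(business["reviews"] >= MIN_REVIEWS
                and business["photos"] and business["videos"])

def build_video(business, write_script, synthesize_voice, make_captions):
    """Run the stages in the order the article describes, or skip the
    business entirely if it lacks enough source material."""
    if not has_enough_content(business):
        return None
    clips = business["photos"] + business["videos"]
    script = write_script(business)   # LLM stage: description + narration
    audio = synthesize_voice(script)  # text-to-speech stage
    captions = make_captions(audio)   # transcription/captioning stage
    return StitchedVideo(clips, script, audio, captions)
```

Injecting the three generative stages as parameters mirrors the article's point that the finished product is an assembly of separate services, and it makes the gating rule (skip businesses without enough content) easy to test in isolation.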
