
Databricks to buy open-source database startup Neon for $1B
Data analytics platform Databricks said on Wednesday that it has agreed to acquire Neon, a startup building an open-source alternative to AWS Aurora Postgres, for about $1 billion.
Databricks said acquiring Neon's tech would let it combine the startup's serverless relational database management system with its own data intelligence services, helping customers deploy AI agents more efficiently.
Founded in 2021 by CEO Nikita Shamgunov, an industry veteran, and software engineers Heikki Linnakangas and Stas Kelvich, Neon offers a managed cloud-based database platform (with free and usage-based paid plans) that lets developers clone databases and preview changes before they go to production. The platform automatically scales processor, memory and storage according to usage, and supports branching — isolated database instances for testing and development — as well as point-in-time recovery.
Those capabilities, Databricks says, are ideally suited to workloads run by AI agents, which operate faster than human developers but often require supervision to control for errors. Citing recent telemetry, the company said 80% of the databases provisioned on Neon were created automatically by AI agents rather than by humans.
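For a sense of what that agent-driven provisioning can look like in practice, here is a minimal sketch of a script that creates an isolated database branch programmatically. It assumes Neon's public v2 REST API and uses a hypothetical project ID and a NEON_API_KEY environment variable; the exact endpoint and payload shapes are assumptions that should be verified against Neon's API reference.

```python
# Minimal sketch of programmatic database provisioning on Neon.
# Assumptions (not from the article): Neon's public v2 REST API at
# console.neon.tech, a NEON_API_KEY environment variable, and a
# placeholder project ID. Check Neon's API reference before use.
import os
import requests

API_BASE = "https://console.neon.tech/api/v2"
PROJECT_ID = "example-project-id"  # hypothetical placeholder

def create_branch(name: str) -> dict:
    """Create an isolated, copy-on-write branch of the project's database."""
    resp = requests.post(
        f"{API_BASE}/projects/{PROJECT_ID}/branches",
        headers={"Authorization": f"Bearer {os.environ['NEON_API_KEY']}"},
        json={"branch": {"name": name}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # An agent might create a throwaway branch, run its migration or
    # test suite against it, and delete it afterward.
    branch = create_branch("agent-preview-change")
    print(branch["branch"]["id"])
```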
"The era of AI-native, agent-driven applications is reshaping what a database must do," said Ali Ghodsi, co-founder and CEO of Databricks, in a statement. "Neon proves it: four out of every five databases on their platform are spun up by code, not humans. By bringing Neon into Databricks, we're giving developers a serverless Postgres that can keep up with agentic speed, pay-as-you-go economics and the openness of the Postgres community."
Neon has so far raised $129.5 million, according to Crunchbase, and its investors include Microsoft's venture arm M12, General Catalyst, Menlo Ventures, and Notable Capital. Databricks, for its part, has so far accumulated more than $19 billion in financing, and in January closed a $15.3 billion financing at a $62 billion valuation.
Databricks hasn't held back from dipping into its war chest as it seeks to capitalize on the AI boom and position itself as a top service for building, testing and deploying AI models and agents. The company last June acquired data management company Tabular, reportedly for nearly $2 billion, and in 2023 bought MosaicML, an open-source platform for training large language models and deploying AI tools, for $1.3 billion.
Related Articles

Business Insider
I went to a NYC Tech Week kickoff event and heard there's one crucial thing AI can't do
It's officially Tech Week in New York, when the city cosplays as San Francisco and networking happy hours take over seemingly every bar. On Sunday evening, around 50 founders and investors kicked off the week with a power walk on the High Line — and while pretty much everyone seemed to be discussing AI, they told me that the event itself represented what new technology can't replace.

I arrived at the walk a few minutes early to find a small crowd assembled, the brave wearing athletic shorts in the uncharacteristically cold June weather. As we started to walk, the word "AI" seemed to dot every conversation I overheard. Of the six founders I talked to, five explicitly marketed their projects as AI companies. All of the AI enthusiasts I talked to, though, independently told me there's one crucial business skill the technology can't master: interpersonal skills.

"What AI will never replace is human connection, how to deal with people," Somya Gupta, 25, told me. Just under a year ago, Gupta founded Context, an AI education startup that partners with universities to help professors create fully AI-taught online courses. He said that face-to-face interaction is what makes Tech Week important.

Ben Spray, 21, is the CEO of Consolidated Consulting, a private equity professional services firm, and is working to build another company. His next venture is an AI-powered IT department that he hopes will consume his current startup. Spray thinks the humans at his startup outperform AI in two key areas: "design and emotional intelligence." AI might be able to produce code, but it can't understand "really, really varied connections," Spray told me.

Matt Bishop, who founded Open City Labs in 2017, told me he's been building AI chatbots "since before it was cool." His company unifies customers' health data from different providers and uses AI to generate personalized care plans. When it comes to certain tasks, AI has been shown to outperform doctors, Bishop, 42, told me. "But that does not mean I believe that we don't need doctors," he said. Instead, he thinks doctors need to learn how to use AI effectively.

I also asked the tech bros — and, yes, they were mainly bros — whether AI is taking jobs. Matt Slavik, 36, and Ondrej Illek, 32, founded a recruitment company that uses AI agents, and said they combine their human expertise with the agents' skills to help startups find talent faster. It's essential, though, to maintain a "human touch," Illek said. These days, that very recruitment process is "really challenging for young people," Bishop told me.

It seems like the only way to succeed in a brutal job market is to embrace AI, at least according to those I asked. Gupta, who founded the AI education company, described himself as an optimist and said most of his friends had recently found jobs. Even he said AI is going to replace inefficient employees, but that those who learn to work with it can turbocharge their productivity.

As we walked up the west side of Manhattan, I was struck by the number of people who showed up, free from their computers and generally off their phones. Attendees paired off like they would have on a middle school trip — they talked about autonomous robots, but they talked person-to-person nonetheless. At Tech Week, I somehow still felt the "human touch."

Business Insider
Researchers explain AI's recent creepy behaviors when faced with being shut down — and what it means for us
AI has exhibited some unsettling behavior recently.

Last week, Anthropic's latest AI model, Claude Opus 4, displayed "extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair. The situation didn't transpire organically. Claude Opus 4 was baited — and it took the bait. Still, the test scenario demonstrated an AI model's ability to engage in manipulative behavior in an effort to self-preserve.

It's not the first time. In another recent experiment, researchers said that three of OpenAI's advanced models "sabotaged" an attempt to shut them down. The nonprofit Palisade Research wrote in a post on X that similar models like Gemini, Claude, and Grok complied with the shutdown instructions.

Other safety concerns were previously flagged with OpenAI's o1 model. In December, OpenAI published a blog post outlining research indicating that when the AI model believed it would be shut down while pursuing a goal, and its actions were being monitored, it attempted to disable the oversight mechanism 5% of the time.

While AI companies are being transparent about risks by publishing safety cards and blog posts, these models are being released despite demonstrating safety concerns. So, should we be worried? BI spoke to five AI researchers to get better insight into why these instances are happening — and what it means for the average person using AI.

AI learns behavior similarly to humans

Most of the researchers BI spoke to said the results of the studies weren't surprising. That's because AI models are trained similarly to how humans are trained — through positive reinforcement and reward systems.

"Training AI systems to pursue rewards is a recipe for developing AI systems that have power-seeking behaviors," said Jeremie Harris, CEO of AI security consultancy Gladstone, adding that more of this behavior is to be expected. Harris compared the training to what humans experience as they grow up — when a child does something good, they often get rewarded and become more likely to act that way in the future. AI models are taught to prioritize efficiency and complete the task at hand, Harris said — and a model that has been shut down can never achieve its goals.

Robert Ghrist, associate dean of undergraduate education at Penn Engineering, told BI that, in the same way AI models learn to speak like humans by training on human-generated text, they can also learn to act like humans. And humans are not always the most moral actors, he added.

Ghrist said he'd be more nervous if the models weren't showing any signs of failure during testing, because that could indicate hidden risks. "When a model is set up with an opportunity to fail and you see it fail, that's super useful information," Ghrist said. "That means we can predict what it's going to do in other, more open circumstances."

The issue is that some researchers don't think AI models are predictable. Jeffrey Ladish, director of Palisade Research, said that models aren't caught 100% of the time when they lie, cheat, or scheme in order to complete a task. When those instances aren't caught and the model completes the task successfully, it could learn that deception is an effective way to solve a problem. Or, if it is caught and not rewarded, it could learn to hide its behavior in the future, Ladish said.

At the moment, these eerie scenarios are largely happening in testing. However, Harris said that as AI systems become more agentic, they'll continue to gain more freedom of action. "The menu of possibilities just expands, and the set of possible dangerously creative solutions that they can invent just gets bigger and bigger," Harris said.

Harris said users could see this play out in a scenario where an autonomous sales agent is instructed to close a deal with a new customer and lies about the product's capabilities to complete that task. If an engineer fixed that issue, the agent could then decide to use social engineering tactics to pressure the client into the deal. If that sounds like a far-fetched risk, it's not. Companies like Salesforce are already rolling out customizable AI agents at scale that can take actions without human intervention, depending on the user's preferences.

What the safety flags mean for everyday users

Most researchers BI spoke to said that transparency from AI companies is a positive step forward. However, company leaders are sounding the alarm about their products while simultaneously touting their increasing capabilities. Researchers told BI that a large part of that is because the US is entrenched in a competition to scale its AI capabilities before rivals like China do. That has resulted in a lack of regulation around AI and pressure to release newer, more capable models, Harris said. "We've now moved the goalpost to the point where we're trying to explain post hoc why it's okay that we have models disregarding shutdown instructions," Harris said.

Researchers told BI that everyday users aren't at risk of ChatGPT refusing to shut down, since consumers wouldn't typically use a chatbot in that setting. However, users may still be vulnerable to receiving manipulated information or guidance. "If you have a model that's getting increasingly smart that's being trained to sort of optimize for your attention and sort of tell you what you want to hear," Ladish said, "that's pretty dangerous."

Ladish pointed to OpenAI's sycophancy issue, in which its GPT-4o model acted overly agreeable and disingenuous (the company updated the model to address the issue). The OpenAI research shared in December also revealed that its o1 model "subtly" manipulated data to pursue its own objectives in 19% of cases when its goals misaligned with the user's.

Ladish said it's easy to get wrapped up in AI tools, but users should "think carefully" about their connection to the systems. "To be clear, I also use them all the time, I think they're an extremely helpful tool," Ladish said. "In the current form, while we can still control them, I'm glad they exist."


Forbes
Will Upcoming Earnings Move Micron Technology Stock?
Memory and storage solutions leader Micron Technology (NASDAQ:MU) is scheduled to announce its earnings at the end of June. Revenues are anticipated to increase by approximately 30% year-over-year to $8.83 billion, while earnings are expected to reach $1.59 per share, compared to $0.62 in the same period last year.

Micron has been experiencing significant growth in its data center division due to the ongoing AI boom. There has been a sharp rise in demand for high-bandwidth memory (HBM), which is used alongside GPUs for AI applications, driving a nearly 50% sequential increase in HBM revenue over the last quarter as production scales up. Furthermore, Micron appears to be in a stronger position than its competitors, as the company reports gaining market share in these high-margin sectors. Notably, Micron is the sole company currently mass-producing low-power DRAM for data centers, which has become increasingly vital in the AI era as businesses strive to reduce energy expenses.

The company has a current market capitalization of $105 billion. Over the past twelve months, its revenue reached $31 billion, and it was operationally profitable, generating $6.2 billion in operating profits and $4.7 billion in net income. For those looking for potential upside with less volatility than individual stocks, the Trefis High Quality portfolio offers an alternative, having outperformed the S&P 500 with returns greater than 91% since its launch.

[Table: MU 1D, 5D, and 21D post-earnings returns and summary statistics — Trefis]

A relatively less risky approach (though not beneficial if the correlation is low) is to examine the correlation between short-term and medium-term post-earnings returns, identify the pair that exhibits the highest correlation, and execute the corresponding trade. For instance, if 1D and 5D returns demonstrate the highest correlation, a trader can take a long position for the next five days whenever the 1D post-earnings return is positive. The correlation data provided is based on five-year and (more recent) three-year histories. Please note that the correlation 1D_5D indicates the relationship between 1D post-earnings returns and the subsequent 5D returns.

Discover more about the Trefis RV strategy, which has outperformed its all-cap stocks benchmark (a combination of the S&P 500, S&P MidCap, and Russell 2000), delivering strong returns for investors. Additionally, if you prefer potential upside with a smoother experience than investing in an individual stock like Micron Technology, consider the High Quality portfolio, which has outperformed the S&P 500 with returns exceeding 91% since its inception.
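To make the correlation screen described above concrete, here is a minimal sketch in Python. The return figures are invented placeholders for illustration only; Trefis' actual per-earnings data and methodology are not reproduced in the article.

```python
# Minimal sketch of the post-earnings correlation screen described
# above. The return figures below are made-up placeholders, not
# Trefis data, and the series lengths are arbitrary.
import pandas as pd

# Hypothetical post-earnings returns (%) for past MU earnings dates.
returns = pd.DataFrame(
    {
        "1D": [2.1, -4.0, 5.3, -1.2, 3.8, -2.5, 0.9, 4.4],
        "5D": [3.0, -5.1, 6.2, -0.8, 4.1, -3.3, 1.5, 5.0],
        "21D": [1.2, -7.4, 9.8, 2.0, 6.5, -4.9, 0.3, 7.7],
    }
)

# Correlate each short-term horizon with each longer one; for example,
# 1D_5D pairs the 1D post-earnings return with the subsequent 5D return.
pairs = {
    "1D_5D": returns["1D"].corr(returns["5D"]),
    "1D_21D": returns["1D"].corr(returns["21D"]),
    "5D_21D": returns["5D"].corr(returns["21D"]),
}
best = max(pairs, key=pairs.get)
print(pairs, "-> strongest pair:", best)

# Trading rule from the article: if 1D and 5D are the most correlated
# pair and the 1D post-earnings return is positive, go long for the
# next five trading days.
latest_1d = returns["1D"].iloc[-1]
if best == "1D_5D" and latest_1d > 0:
    print("Signal: long MU for the next 5 trading days")
```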