
Snowflake Summit 2025: CEO Sridhar Ramaswamy and Sam Altman come together to accelerate enterprise AI adoption
'The true magic of great technology is taking something very complicated and making it feel easy,' said Sridhar Ramaswamy in his keynote address at the Snowflake Summit 2025, setting the vision for the company. The first day of the four-day summit taking place in San Francisco was nothing short of electric as thousands of data professionals, tech innovators, and enterprise leaders thronged to attend what Ramaswamy called 'our biggest summit yet'. One of the major highlights of the keynote was OpenAI's Sam Altman joining Ramaswamy for a fireside chat.
'The world's hardest and most ambitious ideas, from personalised medicine based on your genetic data to autonomous factory floors to even virtual shopping experiences, these things aren't science fiction anymore. They can become realities through the power of data,' Ramaswamy told the packed auditorium. His sentiment was reinforced by Altman in what may well be one of the most anticipated conversations in enterprise technology this year.
When asked about his advice for enterprise leaders navigating the AI landscape in 2025, Altman said, 'Just do it. There's still a lot of hesitancy. The models are changing so fast, and there's reason to wait for the next model. But as a general principle of technology, when things are changing quickly, the companies that have the quickest iteration speed and make the cost of making mistakes the lowest win.'
The interaction between the Snowflake and OpenAI CEOs touched on a significant shift in the AI landscape over the past year. Altman too acknowledged that his advice to enterprises has evolved dramatically over time. 'I wouldn't quite have said the same thing last year. To a startup last year, yes, but to a big enterprise, I would have said you can experiment a little bit, but this might not be totally ready for production use. That has really changed. Our enterprise business has grown dramatically.'
Building on this sentiment, Ramaswamy emphasised the importance of 'curiosity' in driving AI adoption. 'There's so much that we take for granted about how things used to work, which is not true anymore. OpenAI and Snowflake have made the cost of experimenting very low. You can run lots of little experiments, get value from it and build on that strength.'
The CEOs agreed that this shift from experimental to production-ready AI is being demonstrated across industries. During his keynote speech, Ramaswamy highlighted how century-old industrial giant Caterpillar was using Snowflake's AI Data Cloud to create unified views of customer and dealer operations. The company essentially transformed siloed data into real-time insights. Similarly, pharma giant AstraZeneca has been leveraging its data foundation to accelerate productivity and get critical products to patients faster.
Another recurring theme throughout the summit has been the relationship between data and AI success. 'There is no AI strategy without a data strategy,' Ramaswamy asserted. 'Data is the fuel for AI, and Snowflake's AI Data Cloud is powered by a connected ecosystem of data.' This ecosystem approach is evident in Snowflake's marketplace, which now features more than 3,000 listings from over 750 partners, enabling thousands of customers to share data, applications, and models.
According to Ramaswamy, Snowflake's recent US Department of Defense (DoD) IL5 authorisation serves as validation of the enterprise-grade trust required for mission-critical AI applications.
Perhaps one of the most interesting segments of the fireside chat revolved around AI agents and the path toward artificial general intelligence (AGI). Altman went on to share his recent experience with OpenAI's coding agent Codex. 'The coding agent we just launched has been one of my 'feel AGI' moments. You can give it tasks; it works in the background; it's really quite smart. Maybe today it's like an intern that can work for a couple of hours, but at some point, it'll be like an experienced software engineer that can work for days.'
When pressed about AGI timelines and definitions, Altman offered a rather pragmatic view. 'If you could go back five years and show someone today's ChatGPT, I think most people would say that's AGI. We're great at adjusting our expectations. The question of what AGI is doesn't matter as much as the rate of progress.'
For Altman, the true marker of AGI would be 'a system that can autonomously discover new science or be such an incredible tool that our rate of scientific discovery quadruples.' This vision aligns with Ramaswamy's own ambitious goals, as he referenced the potential for AI to tackle projects that could advance humanity significantly.
Throughout the keynote, Ramaswamy emphasised that successful AI implementation comes from simplicity. 'Complexity creates risk, complexity creates cost, and complexity creates friction and makes it harder to get the job done. Whereas simplicity drives results.' This philosophy is reflected in Snowflake's approach to product development, where the prime goal is to let a user ask a question via a voice memo and get an answer from their enterprise data, or even launch a custom app, without writing a line of code.
The ongoing summit showcased several examples of AI driving real business value. One of the most compelling examples came from Lynn Martin, President of NYSE Group, who shared how the exchange has scaled from handling 350 billion incoming order messages per day in 2022 to 1.2 trillion messages by April 2025. 'We can't do that without having incredible technology and AI,' Martin explained, highlighting the critical role of data sanctity in powering effective AI systems.
Ramaswamy's closing message captured the spirit of the moment: 'This community is here to build what's next together.' With rapidly advancing AI capabilities, enterprises are finally ready to move from experimentation to production. Snowflake Summit 2025 has positioned itself as a crucial gathering where the future of enterprise AI is being written in real time.
Bijin Jose, an Assistant Editor at Indian Express Online in New Delhi, is a technology journalist with a portfolio spanning several prestigious publications. Starting as a citizen journalist with The Times of India in 2013, he transitioned through roles at India Today Digital and The Economic Times before finding his niche at The Indian Express. He holds a BA in English from Maharaja Sayajirao University, Vadodara, and an MA in English Literature, and his expertise extends from crime reporting to cultural features. With a keen interest in closely covering developments in artificial intelligence, Bijin provides nuanced perspectives on its implications for society and beyond.
Related Articles


Mint
Parmy Olson: AI chatbots could become advertising vehicles
Chatbots might hallucinate and flatter their users too much, but at least their subscription model is healthy for our well-being. Many Americans pay about $20 a month to use the premium versions of OpenAI's ChatGPT, Google's Gemini Pro or Anthropic's Claude. The result is that the products are designed to provide maximum utility.

Don't expect this status quo to last. Subscription revenue has a limit. Even the most popular models are under pressure to find new revenue streams. Unfortunately, the most obvious one is advertising, the web's most successful business model. AI builders are already exploring ways to plug more ads into their products, and while that's good for their bottom lines, it also means we're about to see a new chapter in the attention economy that fuelled the internet.

If social media's descent into engagement-bait is any guide, the consequences will be profound. One cost is addiction. OpenAI says a cohort of 'problematic' ChatGPT users are hooked on the tool. Putting ads into ChatGPT, which now has more than 500 million active users, won't spur the company to help those people reduce their use of the product. Quite the opposite.

Advertising was the reason companies like Mark Zuckerberg's Meta designed algorithms to promote engagement and keep users scrolling so they saw more ads and drove more revenue. It's the reason behind the so-called 'enshittification' of the web, a place now filled with clickbait and social media posts that spark outrage. Baking such incentives into AI will almost certainly lead its designers to find ways to trigger more dopamine spikes, perhaps by complimenting users even more, asking personal questions to get them talking for longer or even cultivating emotional attachments.

Millions in the Western world already view chatbots in apps like Chai and Talkie as friends or romantic partners. Imagine how persuasive such software could be when its users are beguiled. Imagine a person telling their AI they're feeling depressed, and the system recommending some holiday destinations or medication to address the problem.

Is that how ads would work in chatbots? The answer is subject to much experimentation. Google's ad network, for instance, recently started putting ads in third-party chatbots. Chai, a romance and friendship chatbot, serves pop-up ads. The AI answer engine Perplexity displays sponsored questions. After an answer to a question about job hunting, for instance, it might include a list of suggested follow-ups, including, at the top, 'How can I use Indeed to enhance my job search?'

Perplexity's CEO Aravind Srinivas told a podcast in April that the company was looking to go further by building a browser to 'get data even outside the app' to enable what he called 'hyper-personalized' ads. For some apps, that may mean weaving ads directly into conversations, using the intimate details shared by users to predict and potentially even manipulate them into wanting something, then selling those intentions to the highest bidder.

Researchers at Cambridge University referred to this as the forthcoming 'intention economy' in a recent paper, with chatbots steering conversations toward a brand or even a direct sale. As evidence, they pointed to a 2023 blog post from OpenAI calling for 'data that expresses human intention' to help train its models, a similar effort from Meta, and Apple's 2024 developer framework that helps apps work with Siri to 'predict actions someone might take in the future'.

As for OpenAI's Sam Altman, nothing says 'we're building an ad business' like poaching Fidji Simo, the person who built delivery app Instacart into an advertising powerhouse, to help OpenAI 'scale as we enter a next phase of growth'. In Silicon Valley parlance, to 'scale' often means to quickly expand your user base by offering a service for free, with ads.

Tech companies will inevitably claim that advertising is a necessary part of democratizing AI. But we have seen how 'free' services cost people their privacy, autonomy and even their mental health. AI knows more about us than Google or Facebook ever did: details about our health concerns, relationship issues and work. In two years, chatbots have also built a reputation as trustworthy companions and arbiters of truth. When people trust artificial intelligence that much, they're more vulnerable to targeted manipulation.

AI advertising should be regulated before it becomes too entrenched, or we'll repeat the mistakes made with social media: scrutinizing the fallout of a lucrative business model only after the damage is done.

©Bloomberg. The author is a Bloomberg Opinion columnist covering technology.

The Hindu
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticised U.S. President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation. A third example OpenAI found was a China-origin influence operation that generated polarised social media content supporting both sides of divisive topics within U.S. political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.

