
Snowflake reveals next-gen AI and data tools at annual summit to empower enterprises
The new announcements include Snowflake OpenFlow, Snowflake Standard Warehouse – Generation 2, Snowflake Adaptive Compute, Snowflake Intelligence, Snowflake Cortex AISQL, and Cortex Knowledge Extensions. Talking about the announcements, Vijayant Rai, managing director – India, Snowflake, said, 'With our latest announcements, we're showcasing how Snowflake is fundamentally redefining what organisations can expect from a modern data platform.'
Rai added that these innovations are focused on making AI and machine learning workflows simpler, more connected, and more trustworthy for users of all skill levels by democratising access to data and eliminating the technical overhead that slows down business decision-making.
Snowflake OpenFlow is a multi-modal data ingestion service that lets users connect to virtually any data source and derive value from any data infrastructure. Now generally available on AWS, it eliminates fragmented data stacks and manual effort by unifying data of different types and formats, allowing customers to rapidly deploy AI-powered innovations.
With Snowflake Standard Warehouse – Generation 2 and Snowflake Adaptive Compute, the AI data company has introduced the next phase of its compute innovations, focused on faster performance, improved usability, and stronger price-performance. Standard Warehouse – Generation 2, now generally available, is an upgraded version of Snowflake's standard virtual warehouse built on next-generation hardware, with further enhancements that deliver significantly faster analytics performance.
On the other hand, Snowflake Intelligence (expected in public preview soon) allows technical and non-technical users to ask questions in natural language and instantly gain actionable insights from both structured tables and unstructured documents. The company says Snowflake Intelligence is powered by state-of-the-art language models from the likes of OpenAI and Anthropic and is backed under the hood by Cortex Agents, all delivered through an intuitive, no-code interface designed to provide transparency and explainability.
The company also introduced Data Science Agent (private preview soon), an agentic companion that boosts the productivity of data scientists by automating routine ML model development tasks. The Data Science Agent uses Anthropic's Claude to break problems in ML workflows down into manageable steps such as data analysis, data preparation, feature engineering, and training.
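The announcement describes those steps only at a high level. As a rough illustration, the sketch below walks through the kind of routine workflow such an agent is said to automate, using pandas and scikit-learn as stand-ins; the file name, column names, and model choice are hypothetical and not part of Snowflake's announcement.

```python
# Hypothetical sketch of the routine ML steps a data-science agent might automate:
# data analysis, preparation, feature engineering, and training.
# "churn_data.csv" and the "churned" target column are placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Step 1: data analysis -- load the data and inspect basic statistics.
df = pd.read_csv("churn_data.csv")
print(df.describe(include="all"))

# Step 2: data preparation -- separate features from the target, hold out a test set.
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Step 3: feature engineering -- scale numeric columns, one-hot encode categoricals.
numeric_cols = X.select_dtypes(include="number").columns
categorical_cols = X.select_dtypes(exclude="number").columns
features = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

# Step 4: training -- fit a baseline model and report held-out accuracy.
model = Pipeline([
    ("features", features),
    ("clf", RandomForestClassifier(random_state=42)),
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```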
Another highlight is Cortex AISQL and SnowConvert AI, an expansion of Snowflake Cortex AI, the company's suite of enterprise-grade AI capabilities. These could help global organisations modernise their data analytics for the present-day AI landscape. SnowConvert AI is an agentic automation solution that accelerates migration from legacy platforms to Snowflake. Cortex AISQL, meanwhile, brings generative AI directly into the customer's query engine, allowing teams to extract insights across multi-modal data and build flexible AI pipelines using SQL.
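The article does not spell out the AISQL syntax, so the following is only a minimal sketch of the general pattern of calling a generative AI function from SQL through Snowflake's Python connector. The account credentials, table, columns, and model name are placeholders, and the existing SNOWFLAKE.CORTEX.COMPLETE function stands in for the newly announced AISQL capabilities, which may expose different functions.

```python
# Minimal sketch: invoking a generative AI function inside a SQL query,
# the pattern Cortex AISQL builds on. All connection details, the
# support_tickets table, and the model name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="my_password",    # placeholder
    warehouse="my_warehouse",  # placeholder
)

try:
    cur = conn.cursor()
    # Summarise free-text support tickets directly in the query engine.
    cur.execute(
        """
        SELECT ticket_id,
               SNOWFLAKE.CORTEX.COMPLETE(
                   'mistral-large',
                   'Summarise this support ticket in one sentence: ' || ticket_text
               ) AS summary
        FROM support_tickets
        LIMIT 10
        """
    )
    for ticket_id, summary in cur.fetchall():
        print(ticket_id, summary)
finally:
    conn.close()
```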
Among the slew of agentic products on Snowflake Marketplace are Cortex Knowledge Extensions, which allow enterprises to augment their AI apps and agents with proprietary unstructured data from third-party providers.
Related Articles


India Today
18 hours ago
Who moved my job
Ankit Das (name changed), an employee with a top IT firm in Bengaluru, is losing sleep. In his 30s, he had recently moved his family to the city. He had spent four and a half years at the firm, working on the same project. He had been getting a satisfactory 'C' band at employee evaluations, and his Work From Office index, which tracks an employee's adherence to the company's mandated office attendance policy, was '100 per cent'. But suddenly, without explanation, he has been 'benched' (i.e. not on any project) and put on a 'fluidity list'—if your name appears on this list, you will likely be subject to layoff or restructuring. 'Initially, my project manager told me I am getting released from the project due to cost-cutting, but he told my delivery manager that my performance is not good,' Das said in a Facebook post on August 12. 'It's been 12 days in bench, not getting calls on domain (Linux, VMware, AWS, Ansible, GIT etc),' he says, fearful of his company's new 35-day bench policy after which one could face the axe.


India Today
a day ago
Anthropic gives Claude AI power to end harmful chats to protect the model, not users
In the fast-paced world of artificial intelligence, hardly a day passes without some new breakthrough, quirky feature, or model update. But Anthropic, the company behind the widely used chatbot Claude, has just rolled out something that few could have predicted: it is giving Claude the right to hang up on you. Yes, you read that correctly. The chatbot may, in rare circumstances, end a conversation on its own. Anthropic calls this a bold experiment in what it terms 'model welfare.'

According to Anthropic, the vast majority of Claude users will never experience their AI suddenly walking out mid-chat. The feature is designed only for 'extreme edge cases', those situations where a user has pushed the model into a corner with repeated harmful or abusive requests. Normally, Claude tries to steer conversations back to safer ground. Only when all attempts at redirection fail, and when there is little hope of anything constructive emerging, will the system step in and call time. There's also a more polite option: if a user directly asks Claude to end the chat, it can now do so.

Anthropic stresses that the decision for Claude to shut down a conversation is not about silencing awkward debates or controversial topics. Instead, it's a safeguard that only kicks in when things spiral well beyond the limits of respectful or productive exchange.

The company's reasoning is as intriguing as the feature itself. While no one can say for certain whether large language models such as Claude have anything resembling emotions, pain, or well-being, Anthropic believes the possibility is worth taking seriously. In its words, the 'moral status' of AI systems remains murky. Are these models just lines of code, or could there be some faint glimmer of experience within them? The jury is still out. But Anthropic argues that taking precautions, even small ones, is a sensible way forward. Hence the conversation-ending ability: a 'low-cost intervention' that might reduce potential harm to the system. In other words, if there's even a sliver of a chance that AI models could suffer from being exposed to endless abuse, then letting them opt out is a precaution worth taking.

Tests for Claude

This isn't just theoretical. Before launching Claude Opus 4, Anthropic put its model through a 'welfare assessment.' Testers observed how the AI reacted when it was pushed towards harmful or unethical requests. The findings were telling. Claude reliably refused to generate content that could cause real-world damage. Yet when badgered over and over, particularly with requests involving highly dangerous scenarios or deeply inappropriate content, the model's responses began to look uneasy, almost as if it were 'distressed.' Examples included being asked to produce sexual material involving minors or provide instructions for acts of large-scale violence. Even though Claude held firm and refused, its tone sometimes shifted in ways that suggested discomfort.

A small step into uncharted territory

Anthropic is careful not to claim that Claude is conscious, sentient, or capable of actual suffering. But in a tech industry where ethical debates often lag behind innovation, the company is taking a proactive stance. What if our creations are more sensitive than we think? Permitting Claude to end a toxic exchange might seem unusual, even slightly comical. After all, most people expect chatbots to answer questions on demand, not slam the door on them.
But Anthropic insists that this is part of a larger, more thoughtful investigation into how AI should interact with humans, and perhaps how humans should treat AI in return. For everyday users, it's unlikely they'll ever notice the change. Claude isn't about to stop chatting just because you asked it to rewrite an email for the tenth time. But for those who insist on pushing AI into darker corners, don't be surprised if Claude decides enough is enough and bows out. In the race to make smarter, safer, and more responsible AI, Anthropic may have just given us something novel: the first chatbot with the power to hang up on us for its own good.


Mint
a day ago
Anthropic gives Claude AI power to end conversations as part of 'model welfare' push
In the fast-moving world of artificial intelligence, a new feature or model seems to launch every single day, but one feature that no one saw coming is from Anthropic, the maker of the popular AI chatbot Claude. The AI startup is now giving some of its AI models the ability to end conversations on Claude as part of its exploratory work on 'model welfare'. 'This is an experimental feature, intended only for use by Claude as a last resort in extreme cases of persistently harmful and abusive conversations,' the company states.

Anthropic says that the vast majority of users will never experience Claude ending a conversation on its own. Moreover, the company adds that Claude's conversation-ending ability is a last resort when multiple attempts at redirection have failed and 'hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat'. 'The scenarios where this will occur are extreme edge cases—the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude,' Anthropic adds.

Why is Anthropic adding conversation-ending ability to Claude?

Anthropic says that the moral status of Claude and other large language models (LLMs) remains highly uncertain, meaning that there is no clarity yet on whether these AI systems could ever feel anything like pain, distress or well-being. However, the AI startup is taking this possibility seriously and believes it is important to investigate. In the meantime, the company is also looking at 'low-cost interventions' that could potentially reduce harm to AI systems, and allowing the LLM to end a conversation is one such method.

Anthropic says it tested Claude Opus 4 before its release, and part of that testing was a 'model welfare assessment'. The company found that Claude consistently rejected requests where there was a possibility of harm, and when users kept pushing for dangerous or abusive content even after refusals, the model's responses started looking stressed or uncomfortable. Some of the requests where Claude showed signs of 'distress' involved generating sexual content around minors or attempts to solicit information that would enable large-scale violence or acts of terror.