
Snowflake boosts AI with real-time licensed content access
Snowflake's introduction of Cortex Knowledge Extensions enables enterprise customers to enrich their AI applications and agents with up-to-date, reliable content from publishers such as Stack Overflow, USA TODAY, and Packt. The approach ensures proper attribution and licensing of content, distinguishing it from systems that rely on material scraped without the consent of the original publishers.
According to Snowflake, this new capability is designed to address challenges faced by both enterprises and publishers. Enterprises often struggle to gain access to timely external information for their AI systems, limiting accuracy and depth of insight. Meanwhile, publishers are seeking a secure and fair way to allow their content to be used by enterprise AI, with assurance of both compensation and control.
"Building powerful AI apps and agents at scale hinges on enterprises having access to a wealth of internal and external data that adds rich context to AI outputs. Snowflake is raising the bar on enterprise-wide collaboration to make it even easier for customers to fuel their AI initiatives with AI-ready data and harness the power of agentic apps — regardless of whether the data and apps reside within their own four walls or come from trusted third-party sources. Our latest innovations enable teams to turn possibilities into reality with data and AI, all without worrying about security and governance risk," Prasanna Krishanan, Head of Apps & Collaboration and Horizon at Snowflake, commented on the launch.
With Cortex Knowledge Extensions, publishers are able to list their content, such as news articles, textbooks, and research papers, on the Snowflake Marketplace. Enterprises can then purchase this content and integrate it into their AI-powered apps and agents, including Cortex Agents, Cortex Search, and the soon-to-be-available Snowflake Intelligence. This functionality enables AI systems to provide responses informed by timely and relevant information while allowing publishers to monetise their intellectual property under agreed licensing terms.
Content is delivered through Cortex Knowledge Extensions via retrieval-augmented generation, underpinned by Snowflake's Zero-ETL Sharing functionality. Publishers can revoke access to their content if necessary, and responses always display clear attribution and links to the original source, improving reliability and provenance.
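To make that flow concrete, here is a minimal retrieval-augmented generation sketch in Python. It is illustrative only: the function names, the Passage structure, and the canned retrieval result are assumptions made for this sketch, not Snowflake's actual Cortex API. The point is the shape of the pipeline: retrieve licensed passages, ground the model's answer in them, and attach attribution to the output.

```python
# Minimal RAG sketch with source attribution. Illustrative only: these
# names are hypothetical stand-ins, not Snowflake's Cortex API.
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    source_title: str
    source_url: str


def search_licensed_content(query: str, top_k: int = 3) -> list[Passage]:
    # Stand-in for a retrieval call against a publisher-provided index;
    # returns a canned passage so the example runs end to end.
    return [Passage(
        text="Example licensed passage relevant to the query.",
        source_title="Example Publisher Article",
        source_url="https://publisher.example/article",
    )][:top_k]


def call_llm(prompt: str) -> str:
    # Placeholder for whatever completion API is in use.
    return "(model answer grounded in the retrieved passages)"


def answer_with_attribution(question: str) -> str:
    passages = search_licensed_content(question)
    context = "\n\n".join(p.text for p in passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    answer = call_llm(prompt)
    # Attribution travels with the answer, mirroring the requirement that
    # responses always link back to the original source.
    citations = "\n".join(f"- {p.source_title}: {p.source_url}" for p in passages)
    return f"{answer}\n\nSources:\n{citations}"


print(answer_with_attribution("What changed in the latest release?"))
```

Revocation falls out of the same design: if a publisher withdraws the share, the retrieval step returns nothing and the model has no licensed context to draw on.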
Alongside Cortex Knowledge Extensions, Snowflake has introduced Semantic Model Sharing, which is currently in private preview. Semantic Model Sharing allows enterprises to integrate and interact with AI-ready structured data within their Snowflake Cortex AI applications — whether the data originates from internal sources or third-party providers.
The use of semantic models helps ensure consistency in how data and business concepts are defined and applied across different systems, contributing to more trustworthy and accurate AI outputs. By mapping internal data to standardised semantic models, enterprises can accelerate insights, support more uniform decision-making, and access industry-standard metrics while maintaining governance and version control.
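As a rough illustration of what a shared semantic model buys, consider the sketch below. The structure and field names are hypothetical (this is not Snowflake's actual semantic model format); the idea is that a business measure such as net revenue is defined once, versioned, and expanded into SQL the same way by every consumer.

```python
# Illustrative semantic model: hypothetical structure, not Snowflake's
# actual schema. Business terms map to governed, versioned definitions.
semantic_model = {
    "name": "sales",
    "version": "1.2.0",  # version control over shared definitions
    "tables": {
        "orders": {"source": "ANALYTICS.PUBLIC.ORDERS"},
    },
    "dimensions": {
        "region": {"table": "orders", "column": "SHIP_REGION"},
    },
    "measures": {
        # One agreed expression, reused everywhere, instead of ad-hoc SQL.
        "net_revenue": {
            "table": "orders",
            "expression": "SUM(GROSS_AMOUNT - DISCOUNT - REFUNDS)",
        },
    },
}


def measure_sql(model: dict, measure: str, dimension: str) -> str:
    """Expand a (measure, dimension) pair into SQL from the shared model."""
    m = model["measures"][measure]
    d = model["dimensions"][dimension]
    table = model["tables"][m["table"]]["source"]
    return (f"SELECT {d['column']} AS {dimension}, "
            f"{m['expression']} AS {measure} FROM {table} GROUP BY 1")


print(measure_sql(semantic_model, "net_revenue", "region"))
```

Because every tool expands the same governed expression, two dashboards can no longer disagree about what "net revenue" means.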
Snowflake reports that these advances are intended to eliminate the manual effort required to create semantic models internally, while supporting high-quality, context-rich, and accurate AI responses. Users can directly interact with their data using Semantic Model Sharing in Cortex AI, including Cortex Analyst, Cortex Agents, and Snowflake Intelligence.
In addition to content and model sharing, Snowflake is adding support for Agentic Native Apps in its marketplace. This feature provides customers with access to third-party agentic applications, which can securely combine provider and consumer data within the enterprise's governance framework. Data remains within the customer's environment while agents perform tasks such as portfolio management and optimisation, using proprietary algorithms and datasets.
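The pattern is easiest to see in miniature: the provider ships the logic, and the consumer runs it next to their own data. The sketch below uses hypothetical class and variable names; in a real Native App the provider's code executes inside the consumer's Snowflake account under its governance controls, which is the property this toy example imitates.

```python
# Sketch of the "app travels to the data" pattern (hypothetical interfaces):
# the provider ships an agent with proprietary logic; the consumer runs it
# inside their own environment, so raw data never leaves their account.

class PortfolioAgent:
    """Provider-authored agent; the optimisation logic is the provider's IP."""

    def rebalance(self, holdings: dict[str, float]) -> dict[str, float]:
        # Placeholder for a proprietary algorithm: equal-weight rebalance.
        weight = 1.0 / len(holdings)
        return {ticker: weight for ticker in holdings}


# Consumer side: load local data, run the provider's agent in place.
local_holdings = {"AAPL": 0.50, "MSFT": 0.30, "NVDA": 0.20}  # stays local
agent = PortfolioAgent()
print(agent.rebalance(local_holdings))  # only derived output is produced
```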
Currently, Snowflake Marketplace connects enterprises with over 750 providers, offering more than 3,000 live data, application, and AI products. The introduction of Agentic Native Apps is intended to give providers new ways to distribute and monetise their offerings while allowing enterprises to drive additional value from their data without compromising privacy or security.
Related Articles


Techday NZ, 10 hours ago
Data Army wins Snowflake award for proptech data solutions
Data Army has received recognition as the Snowflake ANZ Partner of the Year for Industry Solutions. The company was acknowledged for its expertise in creating and implementing industry-specific data solutions, particularly in the proptech and real estate sectors, using the Snowflake AI Data Cloud platform. Snowflake cited Data Army's capacity to deliver tailored solutions that address unique sector challenges and provide measurable outcomes for customers, along with its record of using Snowflake's capabilities to help organisations tackle distinct industry needs while driving business value.

Over the past year, Data Army has made changes to its offering, including rebranding The Proptech Cloud as Data Army Intel and adding public listings on the Snowflake Marketplace. Data Army Intel has grown into a tool used by hundreds of businesses across several sectors, providing datasets and knowledge content intended to help businesses get the most from artificial intelligence (AI) and machine learning through access to current, quality data. By making curated datasets available and sharing sector-relevant insights, Data Army Intel aims to support organisations in developing their AI and machine learning capabilities, building on Snowflake's data ecosystem.

Michael Ogilvie, Director, Data Army, said, "We are delighted to win this award - a great testament to our vision, skills, and technical expertise in delivering ongoing value to a wide range of users through data-driven intelligence for faster, smarter decision-making and improved operational efficiency. As a Data Cloud Provider and Solutions Integrator, we are uniquely positioned to deliver high-quality data products, consult with businesses, and create bespoke solutions. This award recognises our capacity to deliver data solutions that address a broad range of data challenges and empower our clients to succeed."

Data Army's partnership with Snowflake runs along two streams. First, as a Solutions Integrator, its consulting team builds and implements data platforms on the Snowflake AI Data Cloud. Second, as a Data Cloud Provider, it publishes ready-to-query datasets on the Snowflake Marketplace via Data Army Intel, targeting a growing range of business use cases.

Cathy Conroy, Director, Channel and Alliances, Snowflake, commented on Data Army's impact across the proptech and real estate sectors: "Data Army is playing a central role in shaping how data, AI, and applications create business value in the Prop Tech and Real Estate industries. Their unique role as both an integrator and marketplace partner sees them support our joint customers throughout their data and AI journeys. Today's recognition is a well-deserved accolade and I look forward to seeing how Data Army continues to inspire our joint customers to build for tomorrow."

The company sees its dual roles as solutions integrator and data cloud provider as key to its market presence. By developing customer-specific solutions and making datasets publicly accessible, Data Army says it aims to assist customers at varied stages of their data and AI adoption journeys.
Data Army has highlighted its aim to enable organisations to improve operational efficiency and make informed decisions using data-driven insights, emphasising its consultancy work and commitment to supporting data-driven business transformation.


Techday NZ, 29-07-2025
Developers adopt AI tools but trust issues persist, survey finds
Stack Overflow has released the results of its 2025 Developer Survey, detailing the perceptions and habits of more than 49,000 technologists across 177 countries.

The AI trust gap

The survey indicates a significant disparity between AI adoption and trust among developers. While 84% of respondents use or plan to use artificial intelligence tools in their workflow, nearly half (46%) report that they do not trust the accuracy of AI-generated output. This marks a substantial rise from the 31% who reported a lack of trust the previous year. This year's expanded artificial intelligence section included 15 new questions, addressing topics such as the utility of AI agent tools, the impact of AI on developers' jobs, and the phenomenon of "vibe coding".

"The growing lack of trust in AI tools stood out to us as the key data point in this year's survey, especially given the increased pace of growth and adoption of these AI tools. AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance. With the use of AI now ubiquitous and 'AI slop' rapidly replacing the content we see online, an approach that leans heavily on trustworthy, responsible use of data from curated knowledge bases is critical. By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value to build the AI technologies and products of tomorrow," said Prashanth Chandrasekar, CEO of Stack Overflow.

The survey also found that 75% of respondents turn to another person when they do not trust an AI tool's answer, and 45% find debugging AI-generated code time-consuming. Ethical and security concerns are prevalent, with 61.7% citing these as reasons for hesitancy, while 61.3% wish to maintain full understanding of their code.

AI use and productivity

AI agents are associated with productivity improvements despite low overall adoption. Only 31% of developers currently use AI agents, but among those, 69% report increased workplace productivity. Meanwhile, 17% are planning to adopt such tools, while 38% are not planning to use them at all. A majority (64%) of developers do not see AI as a threat to their employment, though this figure has declined slightly from the previous year's 68%.

Platforms and tools

Visual Studio Code and Visual Studio remain the most used integrated development environments (IDEs). New AI-enabled IDEs have entered the market, with Cursor at an 18% usage rate, Claude Code at 10%, and Windsurf at 5% among respondents. Among large language models (LLMs), OpenAI's GPT series is the most popular, used by 81% of developers surveyed. Claude Sonnet received 43% usage, and Gemini Flash 35%.

Vibe coding and new ways of learning

'Vibe coding', defined as generating software from LLM prompts, was explored for the first time. While AI tools are being adopted for learning and development, nearly 77% of developers indicated that vibe coding is not part of their professional workflow. The trend is more relevant for less experienced developers seeking a rapid start, but it comes with a trade-off in the level of trust and confidence in the output.

Community platforms continue to play an important role. Stack Overflow is the most common, used or planned to be used by 84% of respondents, followed by GitHub at 67% and YouTube at 61%. Notably, 35% of respondents reported consulting Stack Overflow when confronted with AI-related issues.
The survey shows that 69% of developers have learned a new technology or programming language in the past year, with 36% focusing specifically on AI-enabled tools. Usage of AI tools for learning to code has risen to 44%, up from 37% last year. The top resources remain technical documentation (68%), online resources (59%), and Stack Overflow (51%). Among those learning AI-specific skills, 53% used AI tools. Gen Z developers (aged 18-24) are more likely to engage with coding challenges, with 15% participating compared with an overall average of 12%; a higher proportion of this age group also prefers chat-based and challenge-based learning than other cohorts.

International responses and technology adoption

The United States, Germany, India, the United Kingdom, France, Canada, Ukraine, Poland, the Netherlands, and Italy were the top ten countries by survey participation. Trust in AI tools differs by region: India saw the highest proportion of developers expressing some or significant trust in AI at 56%, followed by Ukraine at 41%. Other countries showed lower levels of trust, including Italy (31%), the Netherlands and the United States (28%), Poland (26%), Canada and France (25%), the United Kingdom (23%), and Germany (22%).

Python continues to gain in popularity, with a seven percentage point increase since 2024. JavaScript (66%), HTML/CSS (62%), and SQL (59%) remain popular programming languages. Docker usage grew by 17 percentage points to 71%, marking it as a widely adopted tool in cloud and infrastructure development. PostgreSQL holds its position as the most sought-after database technology, with 47% planning to use it in the next year or continuing to use it, marking its third year at the top of this category. For documentation and collaboration, GitHub leads at 81%, followed by Jira (46%) and GitLab (36%).


Techday NZ, 16-07-2025
The risks of using AI in the software development pipeline
The unveiling of a new technology is often accompanied by much fanfare about the significant positive impact it will have on society. Think back to the arrival of the internet, the mobile phone, cloud computing, and now artificial intelligence. Each was lauded as a big step forward for daily life. However, the disruption caused by such advances doesn't always come down to the technology itself but rather how it is used by the end user. Unfortunately, a positive outcome isn't always guaranteed.

A recent Stack Overflow survey[1] revealed approximately 76% of developers are using (or planning to use) AI tooling in the software development process. This represents a rapid, seismic shift in how software is created, especially at the enterprise level. In just three years, many development teams have moved away from gradual changes to the software development life cycle (SDLC), opting instead for enormous productivity gains and instant output.

However, these gains come at a price that business leaders should not be willing to pay. The rampant security bugs plaguing every major artificial intelligence and large language model (AI/LLM) coding assistant represent a code-level security risk for an organisation. Indeed, the best-performing tools are still only accurate around half the time. In the hands of a developer with low security awareness, these tools simply accelerate the volume of vulnerabilities entering the codebase, adding to the ever-growing mountain of code under which security professionals are buried. AI coding assistants are not going away, and the upgrade in code velocity cannot be ignored. However, security leaders must act now to manage their use safely.

The growing appeal of AI-assisted coding

Today, software developers are expected to perform a wide range of tasks, and that list is growing in scope and complexity. It stands to reason that, when an opportunity for assistance presents itself, the average overworked developer will welcome it with open arms. The issue is that developers will choose whichever AI model does the job fastest and cheapest, and that may not be in the best interests of their organisation. Take DeepSeek as an example. By all accounts it is an easy, highly functional tool that is, above all, free to use. Yet despite the initial hype, the tool appears to have significant security issues[2], including insecure code output, backdoors that leak sensitive data, and guardrails around creating malware that are far too easy to bypass.

The challenge of insecure code development

Attention has recently focused on so-called 'vibe coding': coding undertaken exclusively with agentic AI programming tools such as Cursor AI. Developers use prompt engineering rather than writing code directly, and continue to prompt an LLM until the desired result is achieved. Naturally, this process places complete trust in the LLM to deliver functioning code, and many of these tools are built to present their answers with unwavering confidence in their accuracy.

Independent benchmarking from BaxBench[3] reveals that many popular AI/LLM tools capable of acting as coding assistants produce insecure code, leading BaxBench to conclude that none of the current flagship LLMs are ready for code automation from a security perspective. With 86% of developers indicating they struggle to practise secure coding[4], this should be a deep concern to enterprise security leaders.
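To ground the concern, the sketch below shows the sort of flaw such benchmarks flag, using a deliberately simple case: a query assembled by string formatting, which is injectable, contrasted with the parameterised form. The scenario is invented for illustration; it is not taken from BaxBench itself.

```python
# Hedged illustration of a classic AI-assistant failure mode: user input
# concatenated straight into SQL (injectable) versus a parameterised query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user_insecure(name: str):
    # Typical generated code: input interpolated directly into the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_secure(name: str):
    # Parameterised query: the driver handles escaping, input stays data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()


payload = "' OR '1'='1"
print(find_user_insecure(payload))  # returns every row: injection succeeds
print(find_user_secure(payload))    # returns nothing: input treated as a literal
```

A security-aware developer spots and rejects the first form on sight; the risk is the developer who merges it because the tool presented it with confidence.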
While it is absolutely true that a security-skilled developer paired with a competent AI tool will see gains in productivity, this does not represent the skill state of the general developer population. Developers with low security awareness will simply supercharge the delivery of poor-quality, insecure code into enterprise code repositories, exacerbating problems the AppSec team is already ill-equipped to address.

Skilling the next generation of software developers

Vibe coding, agentic AI coding, and whatever the next iteration of AI-powered software development turns out to be are not going away. Indeed, they have already changed the way developers approach their jobs. The solution is not to ban the tools outright and risk creating a monster in the form of unchecked 'shadow AI' within development teams. Rather, the next generation of developers must be shown how to use AI effectively and safely. It must be made clear why and how AI/LLM tools create risk, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it presents itself. Organisations that don't follow this path risk opening themselves up to security holes that could cause widespread disruption and loss.