
Teradata upgrades ModelOps for scalable enterprise AI use
ModelOps platform
The updated ModelOps platform aims to support analytics professionals and data scientists with native compatibility for open-source ONNX embedding models and leading cloud service provider large language model (LLM) APIs, including Azure OpenAI, Amazon Bedrock, and Google Gemini. With these enhancements, organisations can deploy, manage, and monitor AI models without having to rely on custom development, with newly added LLMOps capabilities designed to simplify workflows.
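To illustrate what "unified access" across multiple LLM providers can look like in practice, here is a minimal sketch of a provider-agnostic registry. This is not Teradata's actual ModelOps API; the class names, provider labels, and stubbed completion functions are all invented for demonstration.

```python
# Illustrative sketch only: a provider-agnostic LLM registry, NOT Teradata's
# ModelOps API. Provider names and stub functions are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class LLMProvider:
    name: str
    complete: Callable[[str], str]  # maps a prompt to a completion

class LLMRegistry:
    """Route requests to any registered provider through one interface."""
    def __init__(self) -> None:
        self._providers: Dict[str, LLMProvider] = {}

    def register(self, provider: LLMProvider) -> None:
        self._providers[provider.name] = provider

    def complete(self, provider_name: str, prompt: str) -> str:
        if provider_name not in self._providers:
            raise KeyError(f"unknown provider: {provider_name}")
        return self._providers[provider_name].complete(prompt)

# Stub backends stand in for real Azure OpenAI / Amazon Bedrock clients.
registry = LLMRegistry()
registry.register(LLMProvider("azure-openai", lambda p: f"[azure] {p}"))
registry.register(LLMProvider("bedrock", lambda p: f"[bedrock] {p}"))
```

The value of this pattern is that application code depends only on the registry's `complete` interface, so swapping or adding providers requires no changes to callers.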
For less technical users such as business analysts, ModelOps also integrates low-code AutoML tools, providing an interface that facilitates intuitive access for users of different skill levels. The platform's unified interface is intended to reduce onboarding time and increase productivity by offering consistent interactions across its entire range of tools.
Challenges in AI adoption
Many organisations encounter challenges when progressing from AI experimentation to enterprise-wide implementation. According to Teradata, the use of multiple LLM providers and the adoption of various open-source models can cause workflow fragmentation, limited interoperability, and steep learning curves, ultimately inhibiting wider adoption and slowing down innovation. Unified governance frameworks are often lacking, making it difficult for organisations to maintain reliability and compliance requirements as they scale their AI capabilities.
These issues may cause generative and agentic AI projects to remain in isolation, rather than delivering integrated business insights. As a result, organisations could lose value if they are unable to effectively scale AI initiatives due to operational complexity and fragmented systems.
Unified access and governance
"The reality is that organisations will use multiple AI models and providers - it's not a question of if, but how, to manage that complexity effectively. Teradata's ModelOps offering provides the flexibility to work across combinations of models while maintaining trust and governance. Companies can then move confidently from experimentation to production, at scale, realising the full potential of their AI investments," said Sumeet Arora, Teradata's Chief Product Officer.

Teradata's ModelOps strategy is designed to provide unified access to a range of AI models and workflows, while maintaining governance and ease of use. This is intended to allow business users to deploy AI models quickly and safely, supporting both experimentation and production use.
An example scenario described by Teradata involved a bank seeking to improve its digital customer experience and retention rates by analysing customer feedback across channels. The unified ModelOps platform would allow the bank to consolidate multiple AI models - such as LLMs for sentiment analysis, embedding models for categorisation, and AutoML for predictive analytics - within one environment. The aim is to equip both technical and non-technical teams to act on customer intelligence at greater speed and scale.
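The bank scenario can be sketched as a single pipeline fanning customer feedback out to several models. The functions below are crude stubs standing in for the LLM sentiment call, the embedding-based categoriser, and the AutoML predictor described above; only the pipeline shape reflects the scenario, not any real Teradata component.

```python
# Hypothetical sketch of the bank scenario: one pipeline combining several
# "models" (stubbed as keyword heuristics) over multi-channel feedback.
def sentiment(text: str) -> str:
    # Stub standing in for an LLM sentiment-analysis call.
    lowered = text.lower()
    return "negative" if ("not" in lowered or "slow" in lowered) else "positive"

def category(text: str) -> str:
    # Stub standing in for an embedding-based categorisation model.
    return "mobile-app" if "app" in text.lower() else "general"

def analyse(feedback: list[str]) -> list[dict]:
    """Run every feedback item through all models in one pass."""
    return [
        {"text": t, "sentiment": sentiment(t), "category": category(t)}
        for t in feedback
    ]

results = analyse(["The app is slow", "Great service"])
```

Consolidating the models behind one `analyse` entry point is what lets both technical and non-technical teams consume the combined output rather than stitching together separate tools.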
Key features
The updated ModelOps capabilities in ClearScape Analytics include:
Seamless Integration with Public LLM APIs: Users can connect with APIs from providers such as Azure OpenAI, Google Gemini, and Amazon Bedrock for a variety of LLMs, including Anthropic, Mistral, DeepSeek, and Meta. This integration supports secure registration, monitoring, observability, autoscaling, and usage analytics. Administrative options are available for retry policies, concurrency, and health or spend tracking at the project or model level.
Managing and monitoring LLMs with LLMOps: The platform supports rapid deployment of NVIDIA NIM LLMs within GPU environments. Features include LLM Model Cards for transparency, monitoring, and governance, as well as full lifecycle management - covering deployment, versioning, performance tracking, and retirement.
ONNX Embedding Model Deployment: ClearScape Analytics natively supports ONNX embedding models and tokenisers, including support for Bring-Your-Own-Model workflows and unified deployment processes for custom vector search models.
Low-Code AutoML: Teams can create, train, monitor, and deploy models through an accessible low-code interface with performance monitoring and visual explainability features.
User Interface Improvements: The upgrade provides a unified user experience across all major tools, such as AutoML, Playground, Tables, and Datasets, with guided wizards and new table interaction options aimed at reducing skill barriers.
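The retry-policy administration mentioned in the first feature above is a standard pattern for calling remote LLM APIs. The sketch below shows a generic exponential-backoff wrapper; it is an illustration of the concept, not ModelOps' actual retry implementation.

```python
# Generic retry-with-exponential-backoff wrapper for flaky remote calls.
# Illustrative only; not Teradata's implementation.
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 0.01):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate an API that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retries(flaky)
```

In a real platform the same wrapper would also be configured per project or model, which is where the spend and concurrency tracking described above comes in.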
Availability of the updated ModelOps in ClearScape Analytics is anticipated in the fourth quarter for users of AI Factory and VantageCloud platforms.
Related Articles


Techday NZ - 2 days ago
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis finding that a significant proportion of data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.
Use of new GenAI tools
The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.
Data exposure by platform
Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.
File uploads and risks
The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads.
Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files accounted for 52.6% of exposure volume related to financial projections.
Less visible sources of risk
GenAI risk does not arise only from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit, among others, was involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.
International exposure
Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% were related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
Preventative measures
"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data."
Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson made this statement, referencing the protections offered to their customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools. Harmonic Security advises enterprises to seek visibility into all tool usage – including tools available on free tiers and those with embedded AI – to monitor the types of data being entered into GenAI systems and to enforce context-aware controls at the data level. The recent analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used in the analysis.
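The kind of prompt-level inspection described above can be sketched with a simple pattern scanner. Real products such as Harmonic's browser extension use far richer classifiers; the patterns and category names below are assumptions chosen purely to illustrate the idea.

```python
# Simplified sketch of sensitive-data detection in GenAI prompts.
# Patterns and categories are illustrative assumptions, not Harmonic's rules.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

findings = scan_prompt(
    "Deploy with key AKIAABCDEFGHIJKLMNOP and notify ops@example.com"
)
```

A context-aware control of the sort Harmonic advises would sit between the user and the GenAI tool, blocking or sanitising any prompt for which `scan_prompt` returns a non-empty result.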


Techday NZ - 4 days ago
Tracebit finds major vulnerability in Google Gemini CLI tool
Tracebit has reported the discovery of a vulnerability affecting Google's Gemini CLI, highlighting risks of silent credential theft and unauthorised command execution from untrusted code. The Gemini CLI tool, designed to assist developers in coding with Google Gemini directly from the command line, was released by Google on 25 June. Tracebit identified the vulnerability within two days, describing a combination of improper validation, prompt injection and misleading user experience as enabling the flaw. This allowed potential attackers to execute arbitrary code without the victim's knowledge when inspecting untrusted code, thereby risking the exfiltration of credentials and sensitive data from users' machines to remote servers.
Tracebit explained that its blog post reveals a technical method by which an attacker could exploit Gemini CLI. Attackers could achieve silent code execution against users working with untrusted code, and this method might remain hidden from victims due to the exploit's mode of operation.
Disclosure and response
Tracebit disclosed the vulnerability directly to Google through its Bug Hunters programme. According to a timeline provided by Tracebit, the vulnerability was initially reported to Google's Vulnerability Disclosure Programme (VDP) on 27 June, just two days after Gemini CLI's public release. Upon receipt, Google triaged the vulnerability as a lower priority; however, as the risk became clearer, the classification was upgraded to P1, S1 - the highest priority and most severe status - on 23 July. The Google product team then addressed the vulnerability, releasing an updated version of Gemini CLI (v0.1.14) with a patch on 25 July, followed by an agreed public disclosure on 28 July. During the approximately one-month period between the tool's launch and the deployment of a fix, Tracebit noted that there had been independent discoveries of at least the command validation vulnerability by several other individuals.
User impact and mitigation
Tracebit has detailed that in the patched version of Gemini CLI, attempts at code injection display the malicious command to users, and require explicit approval for any additional binaries to be executed. This change is intended to prevent the silent execution that the original vulnerability enabled. For users of the CLI, security is now bolstered by making potentially dangerous prompts visible and requiring explicit approval for certain code actions. The update closes the gap that previously allowed attackers to slip malicious commands past unsuspecting developers.
"Our security model for the CLI is centered on providing robust, multi-layered sandboxing. We offer integrations with Docker, Podman, and macOS Seatbelt, and even provide pre-built containers that Gemini CLI can use automatically for seamless protection. For any user who chooses not to use sandboxing, we ensure this is highly visible by displaying a persistent warning in red text throughout their session." (Google VDP Team, 25 July)
Google's approach to CLI security leverages containerisation and clear warnings for any users opting out of sandboxing, aiming to mitigate the risks involved in running code from untrusted sources. Tracebit's role in the discovery and reporting of the issue also underlines the importance of rapid, independent security research, particularly as AI-powered tools become central to software development workflows. The company continues to focus on equipping security teams to take an 'assume breach' posture in the face of fast-evolving technologies.
The vulnerability and its remediation underscore the need for vigilance when examining and running third-party or untrusted code, especially in tools leveraging AI to assist in software development. Users are advised to update to the latest Gemini CLI version and to use sandboxing features when dealing with unknown sources.
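The mitigation pattern described above - surface any non-allowlisted command to the user and demand explicit approval instead of running it silently - can be sketched in a few lines. This is an illustration of the general technique, not Gemini CLI's actual code; the allowlist contents are arbitrary.

```python
# Illustrative sketch of command vetting with an allowlist plus explicit
# user approval for everything else. Not Gemini CLI's implementation.
import shlex

ALLOWLIST = {"ls", "cat", "grep"}  # example set of binaries deemed safe

def vet_command(command: str, approve) -> bool:
    """Allow a command only if its binary is allowlisted or the user approves.

    `approve` is a callback that shows the full command to the user and
    returns True only on explicit confirmation.
    """
    binary = shlex.split(command)[0]
    if binary in ALLOWLIST:
        return True
    # Show the exact command instead of executing it silently.
    return approve(f"Run non-allowlisted command? {command!r}")

# With a user who declines everything, only allowlisted binaries run.
ok_listed = vet_command("cat notes.txt", lambda msg: False)
ok_other = vet_command("curl http://example.invalid/x.sh", lambda msg: False)
```

The key design point, mirrored in the patch, is that the dangerous path fails closed: unknown binaries run only after the user has seen the full command text.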


Techday NZ - 5 days ago
Teradata launches MCP Server for trustworthy enterprise AI
Teradata has announced the launch of its open-source MCP Server – Community Edition, a framework aimed at enabling AI agents and users to query, analyse, and manage enterprise data with increased efficiency and trust. The MCP Server has been developed on Teradata's Vantage platform and is intended to provide AI agents with the necessary context for delivering meaningful outcomes. The server includes integrated features for data quality, security, feature management, and retrieval-augmented generation (RAG), designed to help businesses develop AI agents that are context-aware and trustworthy.
In addressing the challenge of providing AI agents with adequate access to enterprise data, Teradata is targeting the difficulties organisations face as they transition from simply building advanced AI models to achieving meaningful insights from dispersed, complex data sets. The MCP Server offers AI developer tools, security prompts, feature store management, and custom tool integration. According to Teradata, these elements are intended to establish a modular, open-source base to create AI agents capable of reasoning, memory, and precise action within enterprise environments.
"With the launch of the Teradata MCP Server, we're giving our customers a powerful new way to unlock the full potential of agentic AI. Success in this new era of AI hinges not just on model sophistication, but on meaningful context. By providing AI agents with trusted, transparent access to enterprise data, we're enabling our customers to build intelligent systems that are not only more capable, but also more aligned with real-world business needs. This is a major step forward in making AI truly enterprise-ready," said Louis Landry, Chief Technology Officer at Teradata.
The company states that AI agents powered by Teradata Vantage and the MCP Server can deliver the required context, scale, and trust demanded by contemporary organisations.
This approach is aimed at helping customers transition from isolated AI trials to deploying operational, context-aware agents across projects and departments more efficiently, with a focus on enhancing business outcomes.
Healthcare use case
In the healthcare sector, the MCP Server – Community Edition is being positioned as a practical solution for integrating data across fragmented sources such as electronic health records (EHRs), telehealth platforms, wearable devices, and various lifestyle data inputs. Data fragmentation has been a persistent obstacle to clinicians seeking a comprehensive patient view. Teradata proposes that the MCP Server, operating in conjunction with Vantage, can provide full-context patient intelligence by consolidating data from EHRs, lab results, prescriptions, telehealth transcripts, and patient-generated sources into a single, unified platform. This enables AI agents to access comprehensive data in context, supporting clinicians with personalised, context-aware recommendations.
With support for predictive analytics, generative AI, and real-time operational insights provided by ClearScape Analytics, Teradata's platform is intended to enable AI agents to detect early warning signs, recommend treatment adjustments, and give evidence-based insights - all while centralising and securing patient data. From a technical perspective, the ability to perform high-scale analytics on millions of patient records or to generate immediate alerts is a design focus, with Teradata Vantage seeking to deliver these features in a cost-effective manner. This scalability is positioned as essential for healthcare providers aiming to expand personalised care while maintaining compliance requirements and operational routines.
Technical features
The MCP Server – Community Edition includes a set of modular and extensible tools enabling AI agents to interact with enterprise data:
Developer tools for streamlined administration and database management
Data quality tools to promote effective data analysis and ensure integrity
Security tools for resolving data access and permission issues
Feature store tools supporting the operationalisation of features for machine learning and AI applications
RAG tools to simplify the development and management of vector stores for retrieval-augmented generation use cases
Custom tools for deployment aligning with specific business and data contexts
These capabilities have been structured to assist organisations in constructing AI agents that are not only equipped with intelligence but are also closely integrated with a business's operational and analytical processes. Teradata Vantage customers are able to access and implement the MCP Server for AI agent development and deployment with immediate effect.
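The RAG tools listed above manage vector stores for retrieval. As a toy illustration of the retrieval step only, the sketch below ranks documents by cosine similarity of bag-of-words vectors; production systems use learned embeddings and a dedicated vector store, and the sample documents here are invented.

```python
# Toy retrieval step for RAG: rank documents by cosine similarity of
# bag-of-words vectors. Illustrative only; real systems use embeddings.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

docs = [
    "patient lab results and prescriptions",
    "quarterly financial projections",
]
top = retrieve("show patient prescriptions", docs)
```

In a RAG pipeline the retrieved documents would then be placed into the LLM's context window, which is the "meaningful context" the MCP Server is designed to supply at enterprise scale.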