
Litera unveils generative AI tools to transform Kira platform
Litera has added generative AI capabilities to its Kira contract analysis platform. The enhanced functionality forms part of all Kira subscriptions, requires no separate setup, and removes the need for users to provide an Azure OpenAI key. The new features build on Kira's existing predictive AI and traditional search technologies and are powered by Litera AI+.
Functionality update
With the addition of generative AI, Kira users can analyse documents across various languages and jurisdictions, with the aim of increasing speed, accuracy, and compliance. The AI capabilities are designed to streamline case workflows, highlight potential risks and emerging trends, and decrease the time spent on document review processes.
Adam Ryan, Chief Product Officer at Litera, said: "The re-engineering of Kira with GenAI represents a transformative leap forward for legal teams everywhere - accelerating contract analysis across languages and jurisdictions. By empowering our users with instant, smarter contract analysis and seamless compliance tools, we are redefining what's possible in legal technology and ensuring our clients are always ahead of the curve."
New tools
The updated Kira platform includes several developments. Generative smart fields allow custom fields to be created in any language from a prompt, with no coding or training cycles required. This capability is intended to enable quicker insights across more document types, going beyond Kira's existing 1,400 built-in smart fields.
Kira users will also have access to a grid-based workflow, which features a new tabular layout for contract and document reviews. The layout provides an immediate overview of risks and trends by displaying extracted language and answers, and it enables interaction with documents via chat. Legal teams can also create smart fields within this interface. These capabilities are being previewed with cloud customers, who can give feedback, as the new Analysis Chart.
Additional enhancements include concept search, which uses predictive AI based on large language model technology to enable identification of legal concepts across project documents from a single example. Project-level generative AI governance offers compliance options, allowing legal teams to enable or disable Litera AI+ features specific to client or project needs.
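Litera has not published how concept search is implemented. Purely to illustrate the general idea of ranking project documents against a single example clause, the sketch below is a toy Python version that substitutes bag-of-words vectors for the LLM-based representations a production system would use; every name and document in it is hypothetical.

```python
# Illustrative sketch only: concept search as similarity to a single example.
# A real system would use LLM-based embeddings instead of token counts.
from collections import Counter
import math

def vectorise(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def concept_search(example_clause: str, documents: dict[str, str], top_k: int = 3):
    """Rank project documents by similarity to one example clause."""
    query = vectorise(example_clause)
    scores = {name: cosine(query, vectorise(body)) for name, body in documents.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    docs = {
        "msa.txt": "Either party may terminate this agreement for convenience on 30 days notice.",
        "nda.txt": "The receiving party shall keep confidential information secret.",
        "lease.txt": "The landlord may terminate the lease upon material breach by the tenant.",
    }
    print(concept_search("termination for convenience clause", docs))
```

The point of the example is the workflow, not the representation: one sample clause stands in for a trained model, and every document in the project is scored and ranked against it.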
Expanded ecosystem
Litera has also recently introduced Lito, an AI legal agent integrated into its Litera One portfolio. Lito is designed to work in conjunction with Kira, adapting to different case requirements, user groups, and document complexities. Together, these tools are intended to support collaboration, analysis, and document summarisation for legal teams.
Kira's Rapid Clause Analysis functionality identifies and organises clauses across documents, supporting efficiency and consistency, while Kira Smart Summaries allow teams to generate client memo-ready summaries from organised clauses using Litera AI+. A newly designed search architecture is also included, which produces results more quickly - even in larger matters with tens of thousands of documents - and powers the latest generative AI functions in Kira's revised interface.
Market context
Kira has been recognised in the industry for its contract review functionality, including Tier 1 ranking in Legaltech Hub's 2025 Contract Review Competitive Analysis for the second consecutive year. The platform caters to sectors such as mergers and acquisitions, private equity, real estate, and finance.
According to Litera, the updates are intended to help legal teams meet client expectations and requirements through accelerated document analysis and risk assessment, while offering informed decision support across a variety of legal document types.

Related Articles


Techday NZ - 12 hours ago
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis finding that a significant proportion of data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.
Use of new GenAI tools
The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.
Data exposure by platform
Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.
File uploads and risks
The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.
Less visible sources of risk
GenAI risk does not arise only from well-known chatbots. Increasingly, everyday SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit was used with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.
International exposure
Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% were related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
Preventative measures
"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data," said Alastair Paterson, Chief Executive Officer and Co-founder of Harmonic Security, referencing the protections offered to the company's customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools.
Harmonic Security advises enterprises to seek visibility into all tool usage - including tools available on free tiers and those with embedded AI - to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level.
The analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used.
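Harmonic's browser extension is proprietary, so the following is only a minimal, hypothetical sketch of what "context-aware controls at the data level" can mean in practice: screening a prompt for a few sensitive patterns before it is sent to a GenAI tool. The pattern list, function name, and sample prompt are illustrative and not drawn from the product.

```python
# Toy illustration of data-level screening of a GenAI prompt.
# Not Harmonic Security's implementation; patterns are deliberately simple.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
```

A real control would combine many more detectors (source code, credentials, PII classifiers) with context about the destination tool and account type before deciding to block, redact, or allow the request.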


Techday NZ - a day ago
Quest Software boosts Australian channel strategy with new leaders
Quest Software has announced new leadership appointments to strengthen its channel partner strategy across Australia, signalling an increased commitment to the country's enterprise sector and its partner ecosystem.
Maureen Perrelli has been appointed as Chief Channel Officer to lead Quest Software's global channel strategy. Perrelli's previous roles include Vice President of Worldwide Channel Sales at NCR and Chief Revenue Officer at Brivo, alongside her experience at Oracle, GE and Secureworks. Within the Asia Pacific and Japan region, Angela Maniscalco steps into the Head of Channel post after five years leading erwin sales across APJ and ANZ, including during and after its integration into Quest. Maniscalco's earlier experience includes serving as Head of Strategic Partnerships and Marketing APAC at Insight.
Channel focus
These appointments are positioned as part of a targeted effort to grow Quest's local presence by delivering enhanced partner programs, increased enablement, and new opportunities for enterprise collaboration. Quest sees the Australian partner ecosystem as a central component in driving growth, particularly in sectors faced with high regulatory requirements.
"Australian partners are at the heart of Quest's growth strategy as we expand our cybersecurity, data, and modernisation portfolios. Our goal is to build deeper and more strategic partnerships that deliver strong customer outcomes, especially in complex, highly regulated sectors such as finance, government and healthcare," said Maureen Perrelli, Chief Channel Officer at Quest Software.
The company has outlined plans to expand and improve its partner program. Central to this refreshed approach are enhanced enablement resources, new certification pathways, and a suite of benefits across Quest's solution areas.
Three solution pillars
Quest Software is grouping its offerings into three primary areas. The first is Cybersecurity & IT Resilience, which includes tools for Active Directory protection, threat detection and recovery. Among the upcoming products, the GenAI-enhanced Security Guardian is scheduled to launch later in 2025, promising to enhance detection and response capabilities for environments using Active Directory and Microsoft Entra ID.
The second area, AI-ready Data & Governance, equips organisations with support for discovering, governing, and preparing enterprise data for compliance, analytics and future AI initiatives. This emphasis responds to increasing regulatory scrutiny and the drive toward making enterprise data audit-ready for statutory compliance and analytics.
The third core pillar, Migrations & Modernisation, encompasses migration solutions for Microsoft platforms such as Active Directory, Exchange, and SQL Server. Quest points to its On Demand Migration platform, the first migration product to achieve Microsoft 365 Certification, providing assurances to partners involved in cloud transformation projects. Quest expects these solution pillars to be strengthened by the new leadership structure and targeted partner program enhancements.
Deeper partner engagement
The company is also investing in joint go-to-market initiatives and field engagement, aiming to equip its Australian partners to address customer challenges more effectively and grow their own businesses.
"Our strategy is simple: put customer outcomes first by working closely with trusted partners. It's not just about building great products, it's about how we take them to market together and create real value for customers," Perrelli added.
Quest Software states its ongoing commitment is to deliver technology and solutions that support enterprise needs around data management, cybersecurity and platform modernisation. The company's client base includes over 45,000 organisations worldwide, among them more than 90% of the Fortune 500.


Techday NZ - 3 days ago
Teradata upgrades ModelOps for scalable enterprise AI use
Teradata has introduced ModelOps updates to its ClearScape Analytics offering, targeting streamlined integration and deployment for Agentic AI and Generative AI applications as organisations transition from experimentation to production at scale.
ModelOps platform
The updated ModelOps platform aims to support analytics professionals and data scientists with native compatibility for open-source ONNX embedding models and leading cloud service provider large language model (LLM) APIs, including Azure OpenAI, Amazon Bedrock, and Google Gemini. With these enhancements, organisations can deploy, manage, and monitor AI models without having to rely on custom development, with newly added LLMOps capabilities designed to simplify workflows.
For less technical users such as business analysts, ModelOps also integrates low-code AutoML tools, providing an interface that facilitates intuitive access for users of different skill levels. The platform's unified interface is intended to reduce onboarding time and increase productivity by offering consistent interactions across its entire range of tools.
Challenges in AI adoption
Many organisations encounter challenges when progressing from AI experimentation to enterprise-wide implementation. According to Teradata, the use of multiple LLM providers and the adoption of various open-source models can cause workflow fragmentation, limited interoperability, and steep learning curves, ultimately inhibiting wider adoption and slowing down innovation. Unified governance frameworks are often lacking, making it difficult for organisations to maintain reliability and compliance requirements as they scale their AI capabilities. These issues may cause generative and agentic AI projects to remain in isolation rather than delivering integrated business insights. As a result, organisations could lose value if they are unable to scale AI initiatives effectively due to operational complexity and fragmented systems.
Unified access and governance
"The reality is that organisations will use multiple AI models and providers - it's not a question of if, but how, to manage that complexity effectively. Teradata's ModelOps offering provides the flexibility to work across combinations of models while maintaining trust and governance. Companies can then move confidently from experimentation to production, at scale, realising the full potential of their AI investments," said Sumeet Arora, Teradata's Chief Product Officer.
Teradata's ModelOps strategy is designed to provide unified access to a range of AI models and workflows, while maintaining governance and ease of use. This is intended to allow business users to deploy AI models quickly and safely, supporting both experimentation and production use.
An example scenario described by Teradata involved a bank seeking to improve its digital customer experience and retention rates by analysing customer feedback across channels. The unified ModelOps platform would allow the bank to consolidate multiple AI models - such as LLMs for sentiment analysis, embedding models for categorisation, and AutoML for predictive analytics - within one environment. The aim is to equip both technical and non-technical teams to act on customer intelligence at greater speed and scale.
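For orientation, the cloud LLM APIs named above can also be called directly; this is the kind of call the ModelOps registration and monitoring layer is described as wrapping with governance controls. The sketch below uses the official openai Python SDK against an Azure OpenAI deployment; the endpoint, deployment name, API version, and prompts are placeholders, and nothing here reflects Teradata's own interfaces.

```python
# Direct call to an Azure OpenAI deployment using the official openai SDK.
# Endpoint, deployment name, and API version are placeholders; this is not
# Teradata ModelOps code, only the underlying provider API it is said to manage.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                                     # assumed API version
    azure_endpoint="https://example-resource.openai.azure.com",   # placeholder endpoint
)

response = client.chat.completions.create(
    model="example-gpt-4o-deployment",   # Azure *deployment* name, not a raw model ID
    messages=[
        {"role": "system", "content": "Classify the sentiment of customer feedback."},
        {"role": "user", "content": "The new mobile app keeps logging me out."},
    ],
)
print(response.choices[0].message.content)
```

In the bank scenario described above, a governance layer would sit between this kind of call and the business user, adding the registration, spend tracking, and retry policies listed in the key features below.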
Key features
The updated ModelOps capabilities in ClearScape Analytics include:
- Seamless Integration with Public LLM APIs: Users can connect with APIs from providers such as Azure OpenAI, Google Gemini, and Amazon Bedrock for a variety of LLMs, including Anthropic, Mistral, DeepSeek, and Meta. This integration supports secure registration, monitoring, observability, autoscaling, and usage analytics. Administrative options are available for retry policies, concurrency, and health or spend tracking at the project or model level.
- Managing and monitoring LLMs with LLMOps: The platform supports rapid deployment of NVIDIA NIM LLMs within GPU environments. Features include LLM Model Cards for transparency, monitoring, and governance, as well as full lifecycle management covering deployment, versioning, performance tracking, and retirement.
- ONNX Embedding Model Deployment: ClearScape Analytics natively supports ONNX embedding models and tokenisers, including support for Bring-Your-Own-Model workflows and unified deployment processes for custom vector search models (a generic sketch of running such a model follows this list).
- Low-Code AutoML: Teams can create, train, monitor, and deploy models through an accessible low-code interface with performance monitoring and visual explainability features.
- User Interface Improvements: The upgrade provides a unified user experience across all major tools, such as AutoML, Playground, Tables, and Datasets, with guided wizards and new table interaction options aimed at reducing skill barriers.
Availability of the updated ModelOps in ClearScape Analytics is anticipated in the fourth quarter for users of the AI Factory and VantageCloud platforms.
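As a generic, hedged illustration of the ONNX embedding item above (not Teradata's deployment API), the sketch below assumes a local model.onnx exported from the sentence-transformers/all-MiniLM-L6-v2 model and uses its Hugging Face tokenizer, mean-pooling the token embeddings into one vector per text.

```python
# Generic illustration of running an ONNX embedding model; not Teradata's API.
# Assumes "model.onnx" is a local ONNX export of sentence-transformers/all-MiniLM-L6-v2.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
session = ort.InferenceSession("model.onnx")  # assumed local ONNX export

def embed(texts: list[str]) -> np.ndarray:
    """Tokenise, run the ONNX graph, and mean-pool token embeddings per text."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="np")
    # Feed only the inputs the exported graph actually declares.
    feeds = {inp.name: enc[inp.name] for inp in session.get_inputs()}
    token_embeddings = session.run(None, feeds)[0]    # (batch, tokens, dim)
    mask = enc["attention_mask"][..., np.newaxis]     # ignore padding tokens
    return (token_embeddings * mask).sum(axis=1) / mask.sum(axis=1)

if __name__ == "__main__":
    vectors = embed(["termination for convenience", "confidentiality obligations"])
    print(vectors.shape)  # e.g. (2, 384) for MiniLM-L6
```

The resulting vectors are what a vector search over documents would index; a managed deployment adds the registration, versioning, and monitoring described in the list above.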