Factor Releases Benchmark Study on GenAI Adoption in Legal Departments
'Legal departments are often forced to make high-stakes technology selections like Venture Capitalists, betting millions on platforms that may become obsolete within months.' — David Mainiero, AI Enablement & Legal Transformation, Factor
NEW YORK, NEW YORK, UNITED STATES, March 31, 2025 / EINPresswire.com / -- Factor, the market leader in AI-Integrated Law, today released its 'GenAI in Legal Benchmarking Report 2025,' revealing a significant disconnect between AI access and effective utilization in corporate legal departments. The study of more than 120 in-house legal teams provides insights into the current state of GenAI adoption, highlighting both implementation challenges and proven strategies for success.
Key findings include:
Access vs. Utilization Gap: While 61.2% of legal departments provide AI access to most or all team members, 33.7% of legal professionals report they are not confident using enterprise AI tools and need more support.
Pilot Purgatory: 29.6% of legal departments restrict AI access to small pilot groups rather than deploying it widely, significantly constraining potential impact.
Leadership Minority: Only 12.1% of legal teams report 'leading the way' in GenAI adoption, with the majority finding themselves at average (35.4%) or behind the curve (26.3%).
Build vs. Buy Approaches: 47.5% of legal teams have built an internal AI interface/chatbot, while 40.4% have purchased specialized legal-focused AI tools.
David Mainiero, VP, AI Enablement & Legal Transformation at Factor, says: 'Legal departments are often forced to make high-stakes technology selections like Venture Capitalists, betting millions on platforms that may become obsolete within months or even just pivot away from the initial use case.'
The report also reveals that members of Factor's Sense Collective are outpacing the market in AI adoption, with 100% providing broad AI access (vs. 46.2% market average) and 81.8% building internal AI interfaces (vs. 35.8% market average).
The full report, available for download here, includes detailed analysis of current adoption patterns, benchmarking data, and recommended best practices for legal departments navigating the AI transformation journey.
Join Factor's David Mainiero and our industry panel, featuring LegalTech & AI consultant Peter Duffy and GSK's Kelly Clay, for our upcoming webinar "GenAI in Legal: New Data on What Actually Works" on Wednesday, April 9th at 11am EDT.
Factor is the market leader in AI-Integrated Law, working with corporate legal departments to integrate intelligent capabilities across legal and transactional functions such as contracting. With 10+ years of experience enabling complex legal work at scale, Factor brings agile, practical solutions to the many dimensions of modernizing legal operations. From re-skilling legal teams to advancing high-impact applications of AI, Factor helps legal and contracting functions achieve new levels of efficiency and business value.
Factor is not a law firm and does not provide legal advice. For more information, go to https://www.factor.law or LinkedIn.