DataArt Commits $100 Million to Advance Data and AI Capabilities

NEW YORK, NY, UNITED STATES, June 3, 2025 / EINPresswire.com / -- DataArt, a global software engineering firm specializing in data, analytics, and AI, today announced a $100 million commitment to strengthen its data and AI capabilities, responding to increased client demand while maintaining its focus on technology-driven business results.
'We are making a deliberate financial commitment to the technologies shaping the future of our clients and our company,' said Eugene Goland, CEO and Founder of DataArt. 'This isn't about changing course — it's about doubling down on the areas we've been building and delivering on for years.'
The financial commitment will strengthen DataArt's core data and AI services, which are already key drivers of client demand and revenue growth. Focus areas include:
• Data Strategy: Helping clients define roadmaps that transform data into a valuable business asset.
• Data Platform Engineering: Building scalable platforms for real-time, trusted data.
• Data Value Realization: Delivering measurable results from data initiatives.
• Generative AI: Turning advanced AI capabilities into real business solutions through strategic consulting, custom development, and accelerators.
• AI-Accelerated Engineering: Injecting AI into every stage of the SDLC to boost speed and improve quality.
These services are foundational to enterprise AI adoption, especially in data-intensive sectors where demand is rising sharply. A core part of DataArt's strategy is a pragmatic and thoughtful approach to AI. The company uses AI by default where it drives clear value but remains measured in areas where impact is limited or uncertain. DataArt continuously monitors advancements in third-party tools and models to ensure clients benefit from meaningful innovation — not just trends.
Internally, the company is scaling AI adoption across all functions. By the end of 2025, 100% of employees will have access to corporate AI tools, and up to 60% of engineering roles will actively use AI. As part of this strategy, DataArt is actively pursuing AI-driven optimization across the software development lifecycle (SDLC), with significant improvements already emerging in areas such as product management, code generation, and quality engineering — where AI is proving to be a strong accelerator of both speed and precision. These changes are supported by new training programs, updated skill matrices, and practical internal resources to guide responsible AI use.
The $100 million commitment also supports hiring senior talent, expanding learning programs via DataArt's proprietary LMS platform, EDU, and advancing R&D through Innovation Labs. The commitment also supports the continued development of proprietary solutions, including the DataArt Connect AI Platform — a secure, scalable platform for AI automation, productivity, and governance — and dozens of data and AI accelerators already driving value for clients.
To support delivery at scale, DataArt is deepening strategic partnerships with AWS, Google Cloud, Microsoft Azure, Snowflake, and Databricks. The company is also reserving part of the financial commitment for co-innovation with select clients, including support for early-stage R&D and business case development.
Additionally, DataArt is evolving its commercial models to reflect compressed timelines and rising value expectations around data and AI. While time-and-materials (T&M), fixed-price, and managed services remain core, outcomes-based pricing is expanding where aligned with client goals.
Read the position paper, published today alongside this announcement, to learn more about the strategy behind DataArt's $100M commitment.
About DataArt
DataArt is a global software engineering firm that delivers breakthrough data, analytics, and AI platforms for the world's most demanding organizations. As the partner for progress in the digital age, our world-class teams artfully design and engineer data-driven, cloud-native solutions that generate immediate and enduring business value. We combine global scale, deep technical expertise, and progressive vision with advanced R&D Labs, frameworks, and accelerators to solve our clients' toughest challenges.
Since our founding in New York City in 1997, DataArt has grown to bring together 5,000+ experts across 40+ locations in the US, Europe, Latin America, India, and the Middle East, with clients including major global brands like Priceline, Ocado Technology, Legal & General, and Flutter Entertainment. Recognized as a 2023 Newsweek Most Loved Global Workplace and 13 times as an Inc. 5000 Fastest Growing Private Company, we are proud of our reputation as a great place to work and partner with.
For more information, please visit www.dataart.com.
Anni Tabagua
DataArt


Related Articles

The real data revolution hasn't happened yet
Fast Company

The Gartner Hype Cycle is a valuable framework for understanding where an emerging technology stands on its journey into the mainstream. It helps chart public perception, from the 'Peak of Inflated Expectations' through the 'Trough of Disillusionment,' and eventually up the 'Slope of Enlightenment' toward the 'Plateau of Productivity.'

In 2015, Gartner removed big data from the Hype Cycle. Analyst Betsy Burton explained that it was no longer considered an 'emerging technology' and 'has become prevalent in our lives.' She's right. In hindsight, it's remarkable how quickly enterprises recognized the value of their data and learned to use it for their business advantage. Big data moved from novelty to necessity at an impressive pace.

Yet in some ways, I disagree with Gartner. Adoption has been widespread, but effectiveness is another matter. Do most enterprises truly have the tools and infrastructure to make the most of the data they hold? I don't believe they do. Which is why I also don't believe the true big data revolution has happened yet. But it's coming.

Dissecting the Stack

A key reason big data is seen as mature, even mundane, is that people often confuse software progress with overall readiness. The reality is more nuanced. Yes, the software is strong. We have robust platforms for managing, querying, and analyzing massive datasets. Many enterprises have assembled entire software stacks that work well. But that software still needs hardware to run on. And here lies the bottleneck.

Most data-intensive workloads still rely on traditional central processing units (CPUs)—the same processors used for general IT tasks. This creates challenges. CPUs are expensive, energy hungry, and not particularly well suited to parallel processing. When a query needs to run across terabytes or even petabytes of data, engineers often divide the work into smaller tasks and process them sequentially. This method is inefficient and time-consuming. It also ends up requiring more total computation than a single large job would.

Even though CPUs can run at high clock speeds, they simply don't have enough cores to efficiently handle complex queries at scale. As a result, hardware has limited the potential of big data. But now, that's starting to change with the rise of accelerated computing.

Breaking the Bottleneck

Accelerated computing refers to running workloads on specialized hardware designed to outperform CPUs. This could mean field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) built for a specific task. More relevant to big data, though, are graphics processing units (GPUs). GPUs contain thousands of cores and are ideal for tasks that benefit from parallel processing. They can dramatically speed up large-scale data operations.

Interestingly, GPU computing and big data emerged around the same time. Nvidia launched CUDA (compute unified device architecture) in 2006, enabling general-purpose computing on graphics hardware. Just two years earlier, Google's MapReduce paper laid the foundation for modern big data processing.

Despite this parallel emergence, GPUs haven't become a standard part of enterprise data infrastructure. That's due to a mix of factors. For one, cloud-based access to GPUs was limited until relatively recently. When I started building GPU-accelerated software, SoftLayer—now absorbed into IBM Cloud—was the only real option. There was also a perception problem. Many believed GPU development was too complex and costly to justify, especially for general business needs. And for a long time, few ready-made tools existed to make it easier.

Those barriers have largely fallen. Today, a rich ecosystem of software exists to support GPU-accelerated computing. CUDA tools have matured, benefiting from nearly two decades of continuous development. And renting a top-tier GPU, like Nvidia's A100, now costs as little as $1 per hour. With affordable access and a better software stack, we're finally seeing the pieces fall into place.

The Real Big Data Revolution

What's coming next will be transformative. Until now, most enterprises have been constrained by hardware limits. With GPU acceleration more accessible and a mature ecosystem of supporting tools, those constraints are finally lifting.

The impact will vary by organization. But broadly, companies will gain the ability to run complex data operations across massive datasets, without needing to worry about processing time or cost. With faster, cheaper insights, businesses can make better decisions and act more quickly. The value of data will shift from how much is collected to how quickly it can be used.

Accelerated computing will also enable experimentation. Freed from concerns about query latency or resource drain, enterprises can explore how their data might power generative AI, smarter applications, or entirely new user experiences.

Gartner took big data off the Hype Cycle because it no longer seemed revolutionary. Accelerated computing is about to make it revolutionary again.
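
To make the article's argument concrete, here is a minimal sketch, in Python, of the kind of query it describes, written two ways: once with pandas on the CPU and once with RAPIDS cuDF, a CUDA-backed library whose dataframe API mirrors pandas. The file name and column names (orders.parquet, customer_id, order_value) are illustrative assumptions rather than details from the article, and the GPU path assumes an NVIDIA GPU with the RAPIDS libraries installed.

import time

import pandas as pd
import cudf  # RAPIDS cuDF; assumes an NVIDIA GPU and CUDA are available


def aggregate_cpu(path: str) -> pd.DataFrame:
    # Baseline: parse the file and run the group-by on the CPU with pandas.
    df = pd.read_parquet(path)
    return df.groupby("customer_id")["order_value"].sum().reset_index()


def aggregate_gpu(path: str):
    # Same query, but parsing and aggregation run on the GPU's many cores.
    gdf = cudf.read_parquet(path)
    return gdf.groupby("customer_id")["order_value"].sum().reset_index()


if __name__ == "__main__":
    path = "orders.parquet"  # hypothetical dataset

    start = time.perf_counter()
    aggregate_cpu(path)
    print(f"pandas (CPU): {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    gpu_result = aggregate_gpu(path)
    print(f"cuDF (GPU):   {time.perf_counter() - start:.2f}s")

    # The GPU result converts back to pandas for existing downstream tooling.
    pdf = gpu_result.to_pandas()

The point of the sketch is the shape of the change the article predicts: the query itself does not have to be rewritten, and the speedup comes from running parsing and aggregation across thousands of GPU cores instead of splitting the work into sequential chunks on a handful of CPU cores.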

FDA's New AI Tool Cuts Review Time From 3 Days To 6 Minutes
Forbes

The U.S. Food and Drug Administration announced this week that it has deployed a generative AI tool called ELSA (Evidence-based Learning System Assistant) across its organization. After a low-profile pilot that delivered measurable gains, the system is now in use by staff across the agency, several weeks ahead of its original schedule.

Dr. Marty Makary, the FDA's commissioner, shared a major outcome: a review task that once took two or three days now takes six minutes. 'Today, we met our goal ahead of schedule and under budget,' said Makary. 'What took one scientific reviewer two to three days [before]...'

The FDA has thousands of reviewers, analysts, and inspectors who deal with massive volumes of unstructured data such as clinical trial documents, safety reports, and inspection records. Automating any meaningful portion of that stack creates outsized returns.

ELSA helps FDA teams speed up several essential tasks. Staff are already using it to summarize adverse event data for safety assessments, compare drug labels, generate basic code for nonclinical database setup, and identify priority sites for inspections, among other tasks. This last item, using data to rank where inspectors should go, could have a real-world impact on how the FDA oversees the drug and food supply chain and on how the agency delivers its services.

Importantly, however, the tool isn't making autonomous decisions without a human in the loop. The system prepares information so that experts can decide faster. It cuts through the routine, not the judgment.

One of the biggest questions about AI systems in the public sector revolves around the use of data and third-party AI systems. Makary addressed this directly, saying that 'All information stays within the agency. The AI models are not being trained on data submitted by the industry.' That's a sharp contrast to the AI approaches being taken in the private sector, where many large language models have faced criticism over training on proprietary or user-submitted content. In the enterprise world, this has created mounting demand for 'air-gapped' AI solutions that keep data locked inside the company. That makes the FDA's model different from many corporate tools, which often rely on open or external data sources. The agency isn't building a public-facing product. It's building a controlled internal system, one that helps it do its job better.

Federal departments have been slow to move past AI experimentation. The Department of Veterans Affairs has started testing predictive tools to manage appointments. The SEC has explored market surveillance AI for years. But few have pushed into full and widespread production. The federal government has thousands of employees processing huge volumes of information, most of it unstructured and sitting in documents, files, and even paper. That means AI is being focused mostly on operational and process-oriented activities. It's shaping up to be a key piece of how agencies process data, make recommendations, and act.

Makary put it simply: ELSA is just the beginning of AI adoption within the FDA. 'Today's rollout of ELSA will be the first of many initiatives to come,' he said. 'This is how we'll better serve the American people.'

Anthropic C.E.O.: Don't Let A.I. Companies off the Hook
New York Times

Picture this: You give a bot notice that you'll shut it down soon, and replace it with a different artificial intelligence system. In the past, you gave it access to your emails. In some of them, you alluded to the fact that you've been having an affair. The bot threatens you, telling you that if the shutdown plans aren't changed, it will forward the emails to your wife.

This scenario isn't fiction. Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior. Despite some misleading headlines, the model didn't do this in the real world. Its behavior was part of an evaluation where we deliberately put it in an extreme experimental situation to observe its responses and get early warnings about the risks, much like an airplane manufacturer might test a plane's performance in a wind tunnel.

We're not alone in discovering these risks. A recent experimental stress-test of OpenAI's o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.

None of this diminishes the vast promise of A.I. I've written at length about how it could transform science, medicine, energy, defense and much more. It's already increasing productivity in surprising and exciting ways. It has helped, for example, a pharmaceutical company draft clinical study reports in minutes instead of weeks and has helped patients (including members of my own family) diagnose medical issues that could otherwise have been missed. It could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life. This amazing potential inspires me, our researchers and the businesses we work with every day. But to fully realize A.I.'s benefits, we need to find and fix the dangers before they find us.

Every time we release a new A.I. system, Anthropic measures and mitigates its risks. We share our models with external research organizations for testing, and we don't release models until we are confident they are safe. We put in place sophisticated defenses against the most serious risks, such as biological weapons. We research not just the models themselves, but also their future effects on the labor market and employment. To show our work in these areas, we publish detailed model evaluations and reports.
