
VAST Data Challenges The Enterprise AI Factory
Enterprise leaders face a mounting challenge: AI infrastructure is becoming increasingly complex. As companies move large language models, retrieval-augmented generation (RAG), and autonomous agents from pilot projects to production systems, they're discovering that AI workloads are architecturally fragmented.
Teams must now coordinate a stack of components, spanning storage systems, streaming pipelines, inference runtimes, vector databases, and orchestration layers, just to deploy a single AI-enabled workflow. This complexity is slowing deployments and driving up costs.
VAST Data believes it has the solution: its recently announced unified "AI Operating System" that merges storage, data management, and agent orchestration into a single platform. The concept is compelling, but in a market increasingly favoring open, composable systems, VAST's tightly integrated approach raises critical questions.
VAST, primarily known for high-performance storage solutions, is making an ambitious move up the technology stack. The company's AI Operating System combines storage, real-time data processing, a vector-enabled database, and a native agent orchestration engine into one integrated platform.
The value proposition is straightforward: consolidate AI infrastructure into a single control layer that works across cloud, edge, and on-premises environments. This approach promises to reduce deployment complexity, eliminate integration headaches, and minimize latency in AI operations.
The platform features a runtime for deploying AI agents, low-code interfaces for building agent pipelines, and a federated data layer that orchestrates compute tasks based on data location and GPU availability. For enterprises struggling with AI infrastructure sprawl, this could significantly reduce time-to-deployment and operational overhead.
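To make that scheduling idea concrete, here is a minimal sketch, in Python, of what locality- and GPU-aware task placement might look like. The class and function names are hypothetical illustrations, not VAST's actual API.

```python
# Hypothetical sketch of federated task placement: prefer nodes that already
# hold the dataset, then break ties on free GPU capacity. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    datasets: set = field(default_factory=set)  # data resident on this node
    free_gpus: int = 0

def place_task(dataset: str, nodes: list) -> Node:
    """Pick a node for a compute task based on data location and GPU availability."""
    candidates = [n for n in nodes if n.free_gpus > 0]
    if not candidates:
        raise RuntimeError("no GPU capacity available anywhere")
    # Local data avoids a cross-site transfer, so it dominates the score.
    return max(candidates, key=lambda n: (dataset in n.datasets, n.free_gpus))

nodes = [
    Node("edge-1", {"sensor-logs"}, free_gpus=2),
    Node("cloud-1", {"sensor-logs", "docs"}, free_gpus=8),
    Node("dc-1", {"docs"}, free_gpus=0),
]
print(place_task("sensor-logs", nodes).name)  # "cloud-1": local data, most free GPUs
```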
The AI infrastructure market is increasingly defined by openness and interoperability. Most enterprise teams are building on flexible frameworks. Using modular tools enables the mixing and matching of components, such as retrievers, vector databases, embedding models, and agent frameworks, based on specific requirements and existing infrastructure investments. This approach makes sense in an environment evolving as rapidly as enterprise AI.
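As a rough illustration of that composability, each piece can sit behind a small interface so teams can swap implementations without rewriting the pipeline. This is a generic sketch, not any particular vendor's framework.

```python
# Generic sketch of a composable retrieval pipeline. Each component hides
# behind a minimal interface, so the vector database or embedding model can
# be swapped independently. Illustrative, not a specific vendor's API.
from typing import Protocol

class Embedder(Protocol):
    def embed(self, text: str) -> list: ...

class VectorStore(Protocol):
    def search(self, vector: list, k: int) -> list: ...

class Retriever:
    """Ties an embedding model to a vector store behind one retrieve() call."""
    def __init__(self, embedder: Embedder, store: VectorStore):
        self.embedder = embedder
        self.store = store

    def retrieve(self, query: str, k: int = 5) -> list:
        return self.store.search(self.embedder.embed(query), k)

# Swapping components is a construction-time choice, not a rewrite:
# retriever = Retriever(MyEmbedder(), MyVectorDB())  # hypothetical classes
```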
VAST takes a different approach, assuming enterprises will consolidate these elements under a single vendor. This assumption carries risk. Flexibility, not uniformity, has characterized the AI tooling landscape in recent years. While VAST supports common data standards like S3, Kafka, and SQL, its deeper integration points, particularly around agent orchestration, remain proprietary.
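The practical payoff of those standards is that existing clients keep working: any S3-compatible platform can be reached with a stock S3 client. A minimal sketch using boto3 follows; the endpoint URL and bucket name are placeholders, not real systems.

```python
# Standard S3 clients work against any S3-compatible endpoint.
# The endpoint URL and bucket name below are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://storage.example.internal")
s3.put_object(Bucket="ai-datasets", Key="corpus/doc1.txt", Body=b"hello world")
obj = s3.get_object(Bucket="ai-datasets", Key="corpus/doc1.txt")
print(obj["Body"].read())  # b'hello world'
```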
VAST's strategy appears closely tied to Nvidia's ecosystem. In its announcement, the company highlights its infrastructure deployments in GPU-rich environments, such as CoreWeave and major hyperscalers. Its support for vLLM (a high-performance inference engine optimized for Nvidia hardware) and its emphasis on GPUDirect-style optimizations suggest significant dependency on Nvidia's architecture.
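For context on what vLLM provides, its offline inference API takes only a few lines; the model name here is just an example and assumes the weights are available locally or via Hugging Face.

```python
# Minimal vLLM offline inference example; the model name is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Why does data locality matter for inference?"], params)
print(outputs[0].outputs[0].text)
```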
Such a dependency isn't necessarily problematic. After all, Nvidia dominates enterprise AI infrastructure. However, it may limit VAST's relevance for organizations exploring alternative accelerators, such as AMD Instinct, Intel Gaudi, or AWS Trainium. It also creates potential overlap with Nvidia's own offerings.
With Nvidia launching AI Enterprise, NIMs, and Dynamo, the chip giant is essentially delivering its own AI operating system, enabling a broad partner ecosystem to deliver similar capabilities. Some buyers may prefer pairing Nvidia's software stack with curated best-of-breed infrastructure tools.
While VAST appears tied to Nvidia's AI approach today, that may not always be the case. When asked how dependent it is on the Nvidia ecosystem, VAST responded through an unnamed spokesman that the company has "always emphasized that our software stack supports industry standards and aligns with our customers' needs. This means we intend to qualify hardware from various vendors, including Nvidia, AMD, and others, to meet whatever our customers require."
VAST is attempting to leapfrog traditional competitors by addressing AI infrastructure holistically. But this also puts it up against vendors with stronger application-layer ecosystems and more focused storage plays. It's hard to name a single direct competitor to what VAST announced; rather, VAST is competing against more modular approaches.
Much of the momentum in enterprise AI infrastructure, for example, comes from blending best-of-breed capabilities into what Nvidia calls an "AI factory." Most of the tier-one OEMs are following Nvidia's lead, with Dell Technologies recently announcing its AI Factory 2.0. This lets enterprises deploy a proven hardware infrastructure while maintaining the flexibility to use the best data management tools for their target workload.
Building on the AI factory, cutting-edge infrastructure companies like WEKA are layering on impressive AI-targeted features, such as WEKA's recently announced Augmented Memory Grid. This capability seamlessly extends the per-GPU context window of an LLM by using WEKA's data infrastructure as an extension of the GPU's key-value cache.
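Reduced to its essence, the idea is a tiered key-value cache: hot attention state stays in GPU memory while colder entries spill to fast external storage rather than being recomputed. A toy sketch of the concept follows; it is a conceptual illustration, not WEKA's implementation.

```python
# Toy two-tier KV cache: a bounded "GPU" tier evicts least-recently-used
# entries into a larger "storage" tier instead of discarding them.
# Conceptual sketch only, not WEKA's Augmented Memory Grid.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, gpu_capacity: int):
        self.gpu = OrderedDict()  # hot tier, strictly bounded
        self.storage = {}         # cold tier, backed by fast external storage
        self.gpu_capacity = gpu_capacity

    def put(self, key, value):
        self.gpu[key] = value
        self.gpu.move_to_end(key)
        if len(self.gpu) > self.gpu_capacity:
            cold_key, cold_val = self.gpu.popitem(last=False)  # evict LRU entry
            self.storage[cold_key] = cold_val                  # spill, don't drop

    def get(self, key):
        if key in self.gpu:
            self.gpu.move_to_end(key)  # keep hot entries hot
            return self.gpu[key]
        # A storage hit still beats recomputing attention state from scratch.
        return self.storage.get(key)
```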
On the other end of the spectrum, companies like IBM are pushing the boundaries of enterprise-safe agentic AI with watsonx Orchestrate, announced at the recent IBM Think customer conference.
IBM's approach is built around an open agentic framework, supporting both Nvidia's stack and the more open Llama Stack framework, while integrating easily into nearly any enterprise AI environment. There are numerous other examples in this rapidly evolving space.
VAST positioning its new AI OS as "the OS for the thinking machine" is undeniably ambitious. The platform addresses a real market need: reducing vendor complexity and eliminating integration challenges in AI infrastructure. For organizations operating at massive GPU scale with stringent control requirements, such as the specialty GPU cloud providers where VAST has achieved early success, this approach will prove valuable.
VAST's AI Operating System reflects the growing recognition that AI infrastructure requires fundamental architectural changes. The company is making a credible effort to build that foundation from the ground up. For organizations seeking unified control over AI data pipelines at enterprise scale, it may represent a compelling solution.
But for the broader market, particularly organizations prioritizing open frameworks, multi-vendor flexibility, or modular innovation, VAST's approach may feel overly restrictive. The platform will require rapid evolution to accommodate external agent frameworks, emerging standards such as the Model Context Protocol (MCP), and integration paths that let enterprises preserve their existing orchestration investments. VAST says it will follow the market.
If VAST can open its ecosystem while preserving architectural cohesion, it could define a new category of enterprise AI infrastructure. But success is far from guaranteed. Current market dynamics favor flexibility over consolidation, but this trend is likely to shift over time.
While enterprises may be cautious in adopting VAST's new solution, the company is placing a strong long-term bet. Many customers will find value in what VAST is delivering with its AI OS today, and their number will only grow over time.
Nearly every technology transition leads to consolidation, with AI likely following the same path. VAST arrived early, claiming the first-mover advantage. It's a strong play for an ambitious company, one worth watching play out.
Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, including VAST Data, Dell Technologies, IBM, and WEKA. Mr. McDowell does not hold any equity positions with any company mentioned.
Related Articles


Forbes
What Suno And Udio's AI Licensing Deals With Music Majors Could Mean For Creators' Rights
In the space of a year, the major record labels have shifted from legal crusaders to would-be business partners. When Universal Music Group, Warner Music Group and Sony Music filed copyright-infringement suits against AI upstarts Suno and Udio last summer, the industry assumed a bruising court fight was inevitable. Nine months later, the same companies are at the table hammering out AI music licensing deals that would let the startups keep training on label catalogs, provided the labels, and eventually their artists, get paid.

These talks are not just about settling a lawsuit. They are about setting the rules, or perhaps abandoning them, for how copyrighted music is used in training AI, how future licensing structures might look, and who gets to be in the room when those decisions are made. For many artists, this is déjà vu, and not the good kind.

The pivot is striking. Just months ago, the majors accused Suno and Udio of having trained their models on copyrighted sound recordings "at an almost unimaginable scale," offering prompts that could generate near-identical copies of existing songs. The Recording Industry Association of America alleged "mass infringement" and sought sweeping legal remedies. Now, according to The Wall Street Journal, the labels are seeking licensing fees, compensation for past use, and minority equity stakes in both companies. In return, the startups would receive the right to continue using major-label catalogs to train their models with new controls and attribution systems in place.

Among the key conditions: a fingerprinting and attribution layer modeled after YouTube's Content ID system. This technology, if feasible, would enable Suno and Udio to trace how and when songs influence AI outputs, allowing rights holders to track usage and collect revenue accordingly. The labels also want veto power over future AI music tools, including voice-cloning features and remix suites, a position one executive compared to the "controls labels already exercise in sync deals."

But even as these negotiations accelerate, one truth remains unaddressed: labels don't control everything. As Gadi Oron, CEO of CISAC, the global body representing authors' societies, points out: "Negotiating solely with the majors will not provide the full set of rights required by AI companies. Labels can only license the rights they control, which are the rights in the master recordings."

Underlying compositions and lyrics, the lifeblood of songs, are typically managed by collective management organizations, and these rights have not been part of the current licensing discussions. Oron warns: "To use music lawfully, especially for training or generating new content, AI companies also need to obtain rights to the underlying compositions and lyrics. These rights are typically managed by CMOs on behalf of songwriters and publishers. Without separate agreements with CMOs for the compositions and lyrics, AI companies would infringe the rights of music creators, which is the current situation in the market."

Loredana Cacciotti, founder of Future Play Music, a company specializing in digital licensing strategies for labels and distributors, echoes this concern and places it in historical context: "History has taught us that the major labels often act based on short-term financial gain rather than long-term protections for artists and the industry as a whole. And that concerns me. We may once again find ourselves locked into licensing frameworks that fail to account for the deeper implications, both in terms of creative control and economic fairness for the independent community, when a unified voice should be front and center when responding to disruptive innovation." She adds: "Yet, given the fragmented nature of our industry, there's a real risk that we will once again be passively swept into a new era, one shaped by decisions in which much of the music community has had little to no voice."

To artist advocates and collectives such as the Musicians' Union and the Ivors Academy, the exclusion of songwriters and performers is both predictable and dangerous. Many are sounding the alarm that history is repeating itself, only this time, it's not just the distribution of music that's being reshaped, but the right to exist as a creative identity. Phil Kear, Assistant General Secretary of the UK's Musicians' Union, asked pointedly: "Will the consent of the music creators be sought? What share of the licensing revenue will they receive, if any?" Meanwhile, Ivors Academy Chair Tom Gray warned that these agreements "appear to not offer creators an 'opt-in', an 'opt-out' or any control, whatsoever, of their work within AI." Gray added: "The same companies who have stated they wish to 'make it fair' seem instead to be 'on the make.'"

And Oron's concern extends beyond who is in the room; it is about how the economic pie is being divided: "In the early days of the digital music market, some services operated without the required licenses, but at a later stage, negotiated deals with the major record labels in exchange for the lion's share of income… With AI, the connection to the underlying musical works is even more essential, and should entitle creators to a larger share. That makes it all the more important for composers and lyricists to be included in licensing negotiations from the start, with a clear stake in the outcomes. Failing to recognize this reality risks repeating past mistakes and marginalizing the very creators whose work underpins these technologies."

Cacciotti agrees. She warns, "These developments carry a distinct sense of déjà vu. We've been here before, most notably during the rise of streaming, when rights holders had to decide between fighting the tide or shaping it. But this time, the stakes are arguably even higher."

This isn't theoretical. GEMA, the German authors' society and a CISAC member, has already filed suit against Suno, citing exactly this disconnect.

These are not trivial or speculative technologies. Suno and Udio have evolved from experimental demos to near-production-level toolsets. Suno's June 2025 update allows users to upload full tracks, manipulate them with "weirdness" and "reference" sliders, and export 12 multitrack stems to a digital audio workstation. Udio's most recent build added "intro/verse/drop" sectioning, faster generation times, and support for hybrid genre compositions. But these features are only possible because the underlying models were trained on enormous datasets, including, by many indications, commercially released music used without permission. And while Suno claims its models don't memorize or reproduce music, evidence from lawsuits shows that, when prompted, they've generated lyrics and melodies "identical or nearly identical" to protected songs.

The urgency behind licensing negotiations between the major music companies and AI startups like Suno and Udio isn't coincidental. Several structural pressures are converging to make this a uniquely combustible moment, one in which both the music industry and AI companies may see a narrow window to shape the future before external forces lock it in for them.

First, regulatory uncertainty looms large. The recent and abrupt firing of U.S. Copyright Office Director Shira Perlmutter, who had pushed back against broad "fair use" exemptions for AI training, sent a chill through the creative industries. Her removal has raised fears that a new Trump-appointed director could reshape federal copyright policy in favor of AI developers, weakening enforcement mechanisms for rightsholders and potentially legitimizing unlicensed dataset training.

Then there's investor pressure. Suno's $125 million raise in 2024, which valued the company at $500 million, reflects both excitement and risk. Venture capital firms increasingly want "clean" AI pipelines, ones backed by licensed data and clear rights frameworks. That means unresolved litigation is now a liability. For companies like Suno and Udio looking to scale or exit, licensing deals are no longer optional; they are the precondition for long-term capital access.

Finally, international policy is catching up. The European Union's AI Act and the UK's stalled exceptions for text and data mining both signal that the days of unregulated scraping in Western markets may be numbered. Compliance obligations, audit trails, and provenance disclosure could soon become mandatory. For Suno and Udio, this is likely the last best moment to secure cooperative licensing arrangements before governments impose restrictions that could limit how and what their models are allowed to ingest.

What's emerging from these negotiations is a licensing framework that strongly resembles the major labels' approach during previous tech disruptions, most notably their transition from suing Napster to licensing Spotify. At the top of their list is the demand for fingerprinting at the model layer. Labels want systems that can not only detect direct sample reuse but also flag stylistic derivations within generative model outputs. The ambition is to move beyond surface-level detection and toward embedded attribution systems, although whether that's technically feasible with current diffusion models remains an open question.

As Mike Pelczynski, Head of Licensing and Industry Relations at Sureel AI, which builds instant attribution systems for generative content, explains: "Attribution systems are fundamentally more powerful than traditional content ID in the age of AI because the sheer scale and speed of new content creation make it impossible to track every instance of reuse manually. Only neutral attribution frameworks can identify relevant works, respect opt-outs, and give rightsholders real-time visibility and control."

Next, the majors are pushing for commercial veto rights over product features. This would mean that any future tools released by Suno or Udio, from voice-cloning plugins to remix engines, would require prior approval. It's a mechanism similar to the one labels have long enforced in sync and advertising licenses. Financially, the proposed package includes cash settlements for past use, usage-based royalties going forward, and minority equity stakes in both AI startups. This echoes the labels' early equity positions in Spotify, which later became highly lucrative, but also controversial, as artists had little visibility or participation in those deals.

One reason the majors are negotiating from a position of strength: many have likely registered copyrights with the U.S. Copyright Office that they believe Suno and Udio used for training. If proven, that could expose the startups to statutory damages, potentially amounting to hundreds of millions in liability. "The crucial point in the Suno and Udio licensing discussions," said Liz Cimarelli, Head of Business Development at Cosynd, a platform that simplifies copyright registration and ownership tracking for creators, "is that the major labels have likely registered copyrights with the Copyright Office that they believe these companies used for training their AI models. Given the potential statutory damages of $150,000 per willful infringement, this could serve as a significant negotiating advantage for the majors."

But she also warned: "The risk for the wider industry is that the major labels might agree to terms that set a low standard for everyone else. Without new legislation, policy changes, or infrastructure, this could diminish the economic opportunities that AI should offer. Generative AI has already been predicted to cost music creators $22B in income over the next five years. How low can we go?"

Lastly, there's a nod toward creator control: artist opt-outs for certain use cases such as vocal cloning. But crucially, there's no sign yet of a rights framework that would allow artists to license (or deny) their work directly, nor clarity on how royalties will be tracked or distributed at the artist level. For many creators, this feels less like consent and more like default inclusion with an escape clause. And as Oron makes clear, none of this addresses songwriters' rights: "AI companies must seek permission from all relevant rights holders, not just the labels. Without compositions and lyrics written by humans, there's nothing for Suno or Udio to offer."

For attribution advocates, the stakes go beyond tracking AI training inputs; it's about future leverage. "Flat licensing without attribution is blind licensing," said Dr. Tamay Aykut, founder and CEO of Sureel AI. "Artists (and their labels) would lose control and could end up competing against their own AI derivatives. Labels can't price what they can't measure, and AI can't avoid what it can't track. Attribution is the difference between knowing and guessing, and if they are indeed pursuing licenses, then neutral attribution can only strengthen their hand with the AI companies who want to do the right thing."

Aileen Crowley, co-president of Sureel AI, added: "As an industry, we must come together to demand that licensing only happen when an independent attribution system is in place. Any future licensing deals must guarantee that rights holders can clearly and effectively exclude their works, and only attribution technology can deliver this level of control and transparency." That sentiment was echoed by Benji Rogers, also co-president at Sureel AI: "Opt-in and opt-out rights must be non-negotiable, and only neutral attribution can provide this level of transparency and protection."

These licensing negotiations could define not only the outcome of current litigation but the licensing infrastructure for AI music globally. Three distinct scenarios are emerging:

In one scenario, licensed acceleration, the majors strike a deal this summer. Suno and Udio integrate attribution and payment systems, and AI-generated remix tools launch inside premium tiers. Labels win a new revenue line, and high-profile artists who embrace the tech gain visibility. But those who opt out, or were never consulted, get left behind.

In a second, stalemate, negotiations collapse and lawsuits drag on. If Trump-era regulators tilt toward AI-friendly fair use policies, case law may erode the legal basis for any future licensing obligations.

In the third, patchwork, one or two majors settle, others hold out, and AI companies develop regionalized tools trained on different catalogs. The result is a fragmented landscape that mirrors the dysfunctional world of sync licensing.

Amid all the strategy, foundational questions remain unanswered. This is not just a music industry story. It's a proxy for how every creative sector, from writing to voice acting to film, navigates the shift from human artistry to machine synthesis. For the majors, this may feel like the inevitable next step in monetizing technological disruption. But for the artists, songwriters, and composers whose music trained the machines, this is about power, authorship, and cultural survival.

As with the rise of streaming, the question isn't whether the business will change, it's whether the people who make the music will be allowed to shape that change. As AI music licensing negotiations between AI startups and major rights holders quietly unfold, songwriters, composers, and performers are once again fighting for a seat at a table where their work is the main asset, but their voices remain unheard. Universal Music Group and Sony Music Entertainment were contacted for comment. As of publication, neither had responded.


Android Authority
Wear OS 6 could finally add a Water Lock mode on the Pixel Watch
TL;DR

- Google appears to be developing a 'Water Lock' shortcut for Wear OS, and it could arrive on the Pixel Watch with the upcoming Wear OS 6 update.
- This feature would likely disable the watch's touchscreen to prevent erratic behavior and false touches when the device gets wet.
- However, evidence of an accompanying water ejection sound is missing, and there's no guarantee the feature will be in the final release.

The best smartwatches are typically highly water-resistant, so you can take them into the swimming pool or shower without worry. However, while getting them wet won't cause damage, it can make them act up. That's because most smartwatches use capacitive touchscreens, which often behave erratically when water lands on them, leading to annoying false touches and a loss of sensitivity.

To prevent these issues, many smartwatches feature a dedicated 'Water Lock' mode that disables the touchscreen. This mode is often paired with a water ejection feature that plays a specific tone to clear water from the speaker port. On most devices, like those from Samsung and Apple, this water ejection is triggered automatically when you turn off Water Lock, but it can also be activated manually.

[Images: the Water Lock shortcut, Water Lock setting, and water ejection feature on the Galaxy Watch]

In contrast, Google's Pixel Watch doesn't offer a dedicated Water Lock mode or a water ejection feature. While it automatically disables touch input when you start a swim workout, it won't do so if you're just wearing it in the rain or shower. However, Google may finally add a dedicated Water Lock shortcut with the upcoming Wear OS 6 update.

While digging through the Wear OS 6 Developer Preview, I spotted new text strings suggesting a Quick Settings tile called 'Water Lock' is being added. Although the strings don't detail what this mode does, it will likely work as you'd expect: disabling the watch's touchscreen to prevent accidental inputs from water. The strings read: 'Water lock,' 'Water Lock,' 'Water lock on,' and 'Turn on Water lock?'

Notably, the code strings also lack any mention of a water ejection feature. Without one, the new 'Water Lock' mode would be functionally identical to the Pixel Watch's existing 'Touch Lock' feature. If that's the case, its only real benefit would be to give users a more clearly named option to enable before getting their watch wet.

Finally, even though these 'Water Lock' strings appeared in the Wear OS 6 Developer Preview, there's no guarantee the feature will show up in the stable update. All we know for sure is that Google is developing this for the Wear OS platform; whether the company enables it on its own watches is something we'll have to wait and see.


TechCrunch
AMD acqui-hires the employees behind Untether AI
In Brief: AMD is continuing its acquisition spree. The semiconductor giant has acqui-hired the team behind Untether AI, a startup that develops AI inference chips, as originally reported by CRN. Untether claims its chips are faster and more energy-efficient than rivals' offerings. The terms of the deal weren't disclosed. Toronto-based Untether was founded in 2018 and has raised more than $150 million in venture capital from firms including Intel Capital, Radical Ventures, and Tracker Capital Management. Untether released an AI chip in October meant to power physical AI applications in machines including cars and agricultural devices. Earlier this week, AMD announced it had acquired AI software optimization platform Brium. TechCrunch has reached out to AMD for more information.