Latest news with #Nasuni


Cision Canada
15-05-2025
- Business
- Cision Canada
Frost & Sullivan's Benchmarking System Identifies Innovation Leaders in the Emerging $100 Billion Hybrid Cloud Storage Sector
LONDON, May 15, 2025 /CNW/ -- In Frost & Sullivan's 2024 global survey of IT decision-makers, 85% said enabling greater data-centricity and AI capability was a top priority, while more than 50% ranked AI as their biggest technology investment over the next 12 months. However, most businesses still struggle with siloed and heterogeneous datasets, limiting their ability to extract insights and drive value.

The cloud-native storage market is entering a phase of rapid change, fuelled by the explosion of enterprise data and the accelerating adoption of AI, which requires new storage options. Many businesses are turning to hybrid cloud storage to meet escalating data and AI demands. According to Frost & Sullivan, this market generated $100 billion in 2024 and is projected to grow at a 16% compound annual growth rate (CAGR) over the next six years.

As organisations race to become more data-driven and AI-enabled, they are adopting hybrid cloud storage solutions that support diverse data formats and types to unify siloed data environments, scale operations cost-effectively, and provide integrated security across cloud and on-premises storage infrastructure.

"AI demands access to unified, multimodal data," said Karyn Price, Industry Principal at Frost & Sullivan. "Hybrid cloud storage provides the foundation for highly performant, resilient, and compliant storage environments required to power enterprise AI."

Frost & Sullivan's Frost Radar™: Hybrid Cloud Storage, 2025 benchmarks key market players offering platforms with advanced capabilities, including global data access, integrated security, and seamless edge-to-cloud scalability. The Frost Radar™ analysis highlights Nasuni, CTERA, and Panzura as standout innovators and growth leaders. These vendors are reshaping the storage landscape with scalable, AI-ready solutions that integrate enterprise-grade security, performance optimisation, and compliance capabilities. Other companies evaluated include Cloudian, Hammerspace, LucidLink, NetApp and Peer Software.

Click here to unlock growth potential and explore the future of Hybrid Cloud Storage.

Editor's Note

To arrange an interview or for any questions, please contact:

Kristina Menzefricke
Marketing & Communications
Global Customer Experience, Frost & Sullivan
[email protected]
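For context on the projection in the release above: a 16% CAGR compounding from a $100 billion base over six years implies a market of roughly $244 billion. A minimal sketch of that arithmetic follows; the base figure and growth rate are Frost & Sullivan's, while treating "the next six years" as 2024 through 2030 is simply a reading of the release, not a figure it states.

```python
# Compound-growth arithmetic implied by the release above:
# $100 billion in 2024, growing at a 16% CAGR for six years.
base_2024_billion = 100.0
cagr = 0.16
years = 6  # "the next six years", read here as 2024 -> 2030

projected = base_2024_billion * (1 + cagr) ** years
print(f"Implied market size after {years} years: ~${projected:.0f} billion")  # ~$244 billion
```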


Forbes
07-04-2025
- Business
- Forbes
Why Data Curation Is The Key To Enterprise AI
Nick Burling, Senior Vice President of Product at Nasuni.

All the enterprise customers and end users I'm talking to these days are dealing with the same challenge. The number of enterprise AI tools is growing rapidly as ChatGPT, Claude and other leading models are challenged by upstarts like DeepSeek. There's no single tool that fits all, and it's dizzying to try to analyze all the solutions and determine which ones are best suited to the particular needs of your company, department or team. What's been lost in the focus on the latest and greatest models is the paramount importance of getting your data ready for these tools in the first place. To get the most out of the AI tools of today and tomorrow, it's important to have a complete view of your file data across your entire organization: the current and historical digital output of every office, studio, factory, warehouse and remote site, involving every one of your employees. Curating and understanding this data will help you deploy AI successfully.

The potential of effective data curation is clear in the development of self-driving cars. Robotic vehicles can rapidly identify and distinguish between trees and cars in large part because of a dataset called ImageNet. This collection contains more than 14 million images of common everyday objects that have been labeled by humans. Scientists were able to train object recognition algorithms on this data because it was curated: they knew exactly what they had. Another example is the use of machine learning to identify early signs of cancer in radiological scans. Scientists were able to develop these tools in part because they had high-quality data (radiological images) and a deep understanding of the particulars of each image file. They didn't attempt to develop a tool that analyzed all patient data or all hospital files. They worked with a curated segment of medical data that they understood deeply.

Now, imagine you're managing AI adoption and strategy at a civil engineering firm. Your goal is to use generative AI (GenAI) to streamline the process of creating proposals, and you've heard everyone in the AI world boasting about how this is a perfect use case. A typical civil engineering firm is going to have an incredibly broad range of files and complex models. Project data is going to be multimodal: a mix of text, video, images and industry-specific files. If you were to ask a standard GenAI tool to scan this data and produce a proposal, the result would be garbage. But let's say all this data was consolidated, curated and understood at a deeper level. Across tens of millions of files, you'd have a sense of which groups own which files, who accesses them often, what file types are involved and more. Assuming you had the appropriate security guardrails in place to protect the data, you could choose a tool specifically tuned for proposals and securely give that tool access to only the relevant files within your organization. Then, you'd have something truly useful that helps your teams generate better, more relevant proposals faster.
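To make "consolidated, curated and understood" concrete, here is a minimal sketch of the kind of metadata inventory such curation starts from. It assumes nothing beyond a plain directory tree; the root path, the owner-group convention (top-level folder) and the list of proposal-relevant file types are illustrative assumptions for this sketch, not a description of Nasuni's platform or any particular AI tool.

```python
# Minimal sketch of a file-curation inventory over a plain directory tree.
# Paths, the owner-group convention and the "proposal-relevant" extensions
# are illustrative assumptions, not any specific product's behaviour.
import os
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FileRecord:
    path: str
    ext: str          # file type, e.g. ".pdf" or ".dwg"
    size_bytes: int
    modified: datetime
    owner_group: str  # here, simply the top-level folder name


def build_curation_index(root: str) -> list[FileRecord]:
    """Walk the tree and record basic metadata for every file."""
    records: list[FileRecord] = []
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        group = "(root)" if rel == "." else rel.split(os.sep)[0]
        for name in filenames:
            full = os.path.join(dirpath, name)
            stat = os.stat(full)
            records.append(FileRecord(
                path=full,
                ext=os.path.splitext(name)[1].lower(),
                size_bytes=stat.st_size,
                modified=datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc),
                owner_group=group,
            ))
    return records


def proposal_candidates(index: list[FileRecord]) -> list[FileRecord]:
    """Select only the file types a proposal-drafting tool should ever see."""
    allowed = {".docx", ".pdf", ".xlsx", ".md", ".txt"}
    return [r for r in index if r.ext in allowed]


if __name__ == "__main__":
    index = build_curation_index("/data/projects")  # hypothetical root
    subset = proposal_candidates(index)
    print(f"{len(index)} files indexed, {len(subset)} exposed to the proposal tool")
```

Even an inventory this simple answers the questions raised above, namely who owns what, which formats are involved and how current they are, and it makes it possible to hand an AI tool only the narrow slice of files it actually needs, behind whatever security guardrails you already enforce.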
Even with curation, there can be challenges. Let's say a project manager (PM) overseeing multiple construction sites wants to use a large language model (LLM) to automatically analyze daily inspection reports. At first glance, this would seem to be a perfect use case, as the PM would be working with a very specific set of files. In reality, though, the reports would probably come in different formats, ranging from spreadsheets to PDFs and handwritten notes. The dataset might include checklists or different phrasings representing the same idea. A human would easily recognize this collected data as variations of a site inspection report, but a general-purpose LLM wouldn't have that kind of world or industry knowledge. A tool like this would likely generate inaccurate and confusing results. Yet, having curated and understood this data, the PM would still be in a much better position. They'd recognize early that the complexity and variation in the inspection reports would lead to challenges, and save the organization the expense and trouble of investing in an AI tool for this application.

The opportunities that could grow out of organization-wide data curation stretch far beyond specific departmental use cases. Because most of your organization's data resides within your security perimeter, no AI model has been trained on those files. You have a completely unique dataset that hasn't yet been mined for insights. You could take general AI models trained on massive, general datasets and (with the right security framework in place) fine-tune them on your organization's unique gold mine of enterprise data. This is already happening at an industry scale. The virtual paralegal Harvey has been fine-tuned on curated legal data, including case law, statutes, contracts, legal briefs and more. BioBERT, a model optimized for medical research, was trained on a curated dataset of biomedical texts. The researchers who developed this tool did so because biomedical texts use such specialized language.

Whether you want to embark on an ambitious project to create a fine-tuned model or select the right existing tool for a department or project team's needs, it all starts with data curation. In this period of rapid change and model evolution, the one constant is that if you don't know what sort of data you have, you're not going to know how to use it.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.