Everyone's Talking About AI Compute—But It All Starts With Storage


Forbes · 5 hours ago

Dave Friend is the cofounder and CEO of Wasabi.
When you think of AI, there are probably many things that come to mind: how to use it, where it's headed and what powers it. The conversation typically centers on compute, i.e., all the CPUs and GPUs you hear about when discussing AI. While compute is critical, a significant aspect of AI is often overlooked: data. Although it may not be widely discussed, the reality is that massive, unstructured and ever-growing data sets are what truly drive global AI growth.
As AI models become larger and more sophisticated, accessing the necessary data to train them is becoming a significant challenge for users. This is due to multiple factors, including the ever-increasing amount of data needed to train AI models. To make matters worse, hyperscaler storage that many rely on is expensive, overly complex and not optimized for the accessibility and performance that AI workflows demand. Additionally, enterprise data used to train AI systems is becoming a favored target for malicious actors. All of these factors combine to make AI adoption incredibly challenging, expensive and time-consuming.
The reality is that most companies aren't struggling with AI compute limitations. They're hitting walls because they can't store, access and manage the data quickly, securely and affordably enough to support real-time inference, fine-tuning or long-term retention.
If AI needs to run efficiently and cost-effectively, so does the data it learns from. To address these growing problems and fully leverage the benefits AI has to offer, organizations should implement a scalable cloud storage solution that provides cost-effectiveness, security and hybrid capabilities.
Best Practices And What Leaders Should Expect
However, not all data storage providers are created equal. The cloud giants that dominate the industry charge exorbitant fees to access data, making it more difficult, expensive and time-consuming for users to train AI and store the data that AI generates. To address this, organizations should seek out affordable cloud storage providers that don't charge these fees, enabling them to access their data in a way that makes AI training as seamless and cost-effective as possible. Additionally, these storage buckets can be easily scaled up and down depending on need. This is ideal for AI, as the storage must hold both the data required to train models and the resulting output. Being able to scale easily in both directions ensures that an organization is adequately prepared for AI workloads and can adjust as needed.
Just as important as where you store the data is ensuring it is stored securely. Cyberattackers are increasingly going after enterprise data because of its vital role in AI operations. As a result, it is crucial for IT leaders to ensure that the storage solutions they choose adequately protect their data. When selecting a provider, organizations should look for one that offers robust data protection capabilities that harden the storage against intrusion. They should also confirm that data remains shielded from bad actors in the event of a breach, preventing deletion or ransomware threats. These safeguards are critical for withstanding an attack and protecting critical enterprise data.
Key Features And Approaches To Security
An essential part of a secure data management program is immutable backups, which prevent a malicious actor from modifying or deleting stored data. Combined with air-gapping, immutability isolates data from threats such as ransomware or accidental deletion; IT leaders should consider immutable backups to ensure their data cannot be encrypted or erased. Additionally, no secure cloud management program is complete without employee training on cyber protection. By regularly updating cybersecurity best practices and providing training, organizations can significantly reduce the risk of malicious actors breaching their networks and accessing critical data.
An approach that combines cost effectiveness and security is hybrid storage, which involves keeping copies of data on different media and in different locations. This can include one copy in the cloud, one on-premises and one on a hard drive. Incorporating cost-effective solutions like the cloud reduces expenses, while holding the data in multiple locations keeps it readily available in the event of a cyberattack. For AI training, data can remain readily accessible in the cloud while also being stored on-premises for added security.
While it is easy to get caught up in the AI boom, organizations must take their time incorporating the emerging technology. Technology decision-makers should prioritize cost-effective and secure ways to store the data necessary for proper AI training. Without that foundation, they may be left behind their competitors in the AI adoption race.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Related Articles

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

CNN · 13 minutes ago

Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016. Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced the new model, called Grok 4, will be released just after July 4th. The exchanges, and others like them, raise concerns among experts that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias. AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. And the decisions that powerful figures like Musk make about the technology's development could be critical. Especially considering Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.
'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team. A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment. For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa'. Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' in the extremely early morning hours Pacific time pushed the AI chatbot to 'provide a specific response on a political topic' that violates xAI's policies. As Musk directs his team to retrain Grok, others in the AI large language model space like Cohere co-founder Nick Frosst believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said. It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst. 
But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.' Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model's code. This process could be faster than totally retraining the model since it retains its existing knowledge base. Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process. Dan Neely, CEO of Vermillio which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.' Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.' Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.' It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 
'For the most part, people don't go to a language model to have ideology repeated back to them, that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.' Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'

Denmark plans to thwart deepfakers by giving everyone copyright over their own features

CNN · 21 minutes ago

The Danish government is planning to tackle the issue of AI-generated deepfakes by granting citizens property rights over their likeness and voice. The proposed legislation would mean that people who find that their features have been used to create a deepfake would have the right to ask the platforms that host the content to take it down, Danish Culture Minister Jakob Engel-Schmidt told CNN on Friday. Engel-Schmidt believes that 'technology has outpaced legislation' and the proposed law would help to protect artists, public figures and ordinary people from digital identity theft, which he said is now possible with just a few clicks thanks to the power of generative AI. 'I think we should not accept a situation where human beings can be run through, if you would have it, a digital copy machine and misused for all sorts of purposes,' he said. He cited the example of musical artists who have discovered songs online purporting to be theirs, but which have in fact been made using AI clones of their voice. One such case involves Canadian singer Celine Dion, who in March warned fans about AI-generated content featuring her voice and likeness that was circulating online. And in April 2024, more than 200 artists, including Billie Eilish, Kacey Musgraves, J Balvin, Ja Rule, Jon Bon Jovi, the Jonas Brothers, Katy Perry and Miranda Lambert, signed an open letter speaking out against AI-related threats in the music industry. Engel-Schmidt says he has secured cross-party support for the bill, and he believes it will be passed this fall. Once the legislation is passed, Engel-Schmidt believes a second step would be to introduce more legislation that could impose fines on companies that do not comply with requests to remove content featuring an AI-generated deepfake. 'We are champions of freedom of speech, we would like everyone to be heard, but we also believe that human beings have the right to say yes and no to them being used by generative AI,' he said. 
As for whether he has discussed the proposed legislation with tech companies, Engel-Schmidt said: 'Not yet, but I'm looking forward to it. I think it's in their interest as well to make AI work for humanity, not against, you know, artists, popular figures and ordinary people.' Athina Karatzogianni, a professor of technology and society at the University of Leicester, England, told CNN that the Danish proposal is one of hundreds of policy initiatives around the world looking to reduce the possible harms associated with the misuse of generative AI. 'Deepfakes can have both individual and social impact, because they can both harm individual rights and also (have) sociopolitical impacts, because they undermine the values that are fundamental to a democracy, such as equality and transparency,' said Karatzogianni.

David H. Rosmarin brings a founder-focused approach to anxiety at TechCrunch All Stage

Yahoo · 32 minutes ago

Startups demand constant decision-making, pressure-filled pivots, and big emotional swings. It's no wonder anxiety shows up at every stage. But what if it didn't have to be a liability? At TechCrunch All Stage 2025 on July 15 at Boston's SoWa Power Station, Dr. David H. Rosmarin, clinical psychologist, author, and Harvard Medical School professor, will lead a refreshingly honest roundtable session that challenges how founders think about fear and pressure. His roundtable is titled 'Thriving with Anxiety: How Startup Founders Can Turn Fear, Pressure, and Self-Doubt into Their Greatest Advantage.' This session isn't about 'overcoming' anxiety. It's about using it as a strategic advantage. As founder of the Center for Anxiety and a nationally recognized mental health expert, Rosmarin has worked with executives, entrepreneurs, and high-performance teams across industries. In this session, he'll guide attendees through a stigma-free, deeply practical conversation on how to turn anxiety into fuel, not friction. With coverage in outlets like The New York Times, WSJ, and GMA, Rosmarin's work has reached millions. Now he's bringing it directly to startup leaders. If you're building under pressure (and who isn't?), this session will change how you channel your anxiety into your greatest advantage – one of many takeaways from a day packed with sessions with scaling experts at TC All Stage. Register now before prices go up at the door.
