AI Data Storage Leader MinIO Expands Partner Program to Meet AIStor Demand
REDWOOD CITY, Calif., June 11, 2025 /CNW/ -- MinIO, the leader in high-performance object storage for the exabyte AI era, today launched an expanded MinIO Partner Program. The program is designed to meet the evolving needs of global customers as demand for object storage skyrockets with AI adoption, a market IDC projects to exceed $20B by the end of 2025. The updated partner experience introduces new, unique and profitable opportunities for partners as more customers rely on MinIO AIStor as the de facto object-native storage system for AI.
According to Statista, the total amount of data created, captured, copied, and consumed globally reached 149 zettabytes in 2024 and is forecast to increase rapidly, reaching 394 zettabytes by 2028. This is in large part because AI model scale continues to grow rapidly, with training compute doubling every five months and dataset sizes every eight months, according to Stanford University's 2025 AI Index Report. Consequently, the AI data storage market is expanding rapidly and MinIO is experiencing unparalleled growth. With multiple eight-figure, exabyte-scale customer engagements closed in 2024 and 149% Annual Recurring Revenue (ARR) growth over the last two years, MinIO is evolving its Partner Program to meet this scale and demand.
The program now offers larger incentives for revenue growth, enhanced training courses, enablement materials, and new certifications and tiering for partners in the MinIO ecosystem. Areas of focus include channel (system integrators, appliance partners and distributors), cloud, GSI, platform, and technology partners.
MinIO's newly appointed partner leader, Mike Marinchak, formerly led channel and partner sales at NetApp and Pure Storage, and brings nearly 25 years of experience in the data storage and infrastructure industry. At MinIO, Marinchak will be focused on extending MinIO AIStor's capabilities and services through channel partners (system integrators, appliance partners and distributors), with plans to partner globally. He has been instrumental in shaping this new program and will be critical to company growth moving forward.
"After launching MinIO AIStor in late 2024 and setting a new bar for multi-exabyte AI data storage, the evolution of the MinIO Partner Program presents a massive partner opportunity as AI adoption drives exponential demand for scalable storage solutions," said Mike Marinchak, Vice President, Global Partner Sales, MinIO. "The enhancements to the program deliver significantly more value to our partners, and importantly, to our customers, as we continue to invest in expanding the reach and breadth of MinIO global offerings available."
Continuing its promise of a frictionless partnership that paves the way for customer and partner success globally, the MinIO Partner Program centers around four key pillars:
Market Demand — Partners can expect to benefit from MinIO's massive global adoption (over 33K community Slack members, 52.8K stars and 6K forks on GitHub) and AIStor's unmatched performance, which scales linearly with capacity in a single namespace, as customers modernize their storage environments and optimize for cost and control.
Bundling — Partners can realize a 4-5x selling opportunity by bundling standardized storage hardware with MinIO AIStor and services to maximize average selling prices (ASPs).
Sustained Profitability & Predictable Subscription Model — Partners can build recurring revenues with generous margins for the entirety of the customer engagement, across renewals, up-sell and cross-sell.
Enablement & Certification — MinIO now empowers partners with dedicated deal support, enablement training and a paid certification program via the MinIO Academy to arm partners with the skills, knowledge, and confidence required for success.
"Trace3 and MinIO have a strong partnership and successful track record in helping joint customers meet and exceed their goals," said Jeremy Morris, General Manager, Northwest, Trace3. "We value the new opportunities the partner program delivers, especially the increased revenue opportunities, co-marketing, and advanced training—it makes us better, together. The benefits of this revamped program will help fuel our business growth as we evolve our offerings and go-to-market strategy."
As a part of the updated program, the MinIO Academy offers different levels of MinIO sales, technical, and advanced technical training. Each level of training earns a badge to recognize the skills and commitment it represents. Partners can access training via the partner portal.
New partners can apply to join the partner program here.
About MinIO
MinIO is the leader in high-performance object storage for AI. With 2B+ Docker downloads and 50K+ stars on GitHub, MinIO is used by more than half of the Fortune 500 to achieve performance at scale at a fraction of the cost of the public cloud providers. MinIO AIStor is uniquely designed to meet the flexibility and exascale requirements of AI, empowering organizations to fully capitalize on existing AI investments and address emerging infrastructure challenges while delivering continuous business value. Founded in November 2014 by industry visionaries AB Periasamy and Garima Kapoor, MinIO is the world's fastest growing object store.
Media Contact: Tucker Hallowell, Inkhouse, minio@inkhouse.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/ai-data-storage-leader-minio-expands-partner-program-to-meet-aistor-demand-302478600.html
SOURCE MinIO
View original content to download multimedia: http://www.newswire.ca/en/releases/archive/June2025/11/c2072.html
Related Articles


Tom's Guide
ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)
With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities. It's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses. Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click: users can turn the chatbot into a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd. These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat. Each one is distinct yet helpful; here's how each mode behaves:

Cynic: Dry, sarcastic, and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.
Robot: Emotionless, direct, and to the point. No extra words, just clean, accurate responses.
Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.
Nerd: Curious, excitable, and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI. Activating these modes is easy: the personality applies to your current chat, and you can switch or turn off the personality mode at any time.

Here's how each personality mode answers the same prompt: 'Why is the sky blue?'

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine. Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?
Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.
Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?
Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating:

Use Cynic when you want to cut through the noise. It's good for brainstorming hot takes or injecting humor into dry topics.
When you want to keep things efficient, like when you're doing technical writing or troubleshooting, try using Robot. It's also a useful companion when coding.

Listener adds empathy, which can be helpful when you're doing some personal writing or mental check-ins. It could also be useful for writing to customers if you run a business.

Nerd is a useful personality when you want to make learning fun. The Nerd makes explanations of complex topics much more engaging; this one is useful for kids.

Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles, all without switching tools. These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As in the example above, you'll see that they all respond differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant that we got with GPT-4. Try them all. You might be surprised which one becomes your favorite.
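The personality picker is a feature of the ChatGPT app rather than the developer API, but a similar effect can be approximated in your own tools by steering a model with a system prompt. Below is a minimal sketch assuming the standard OpenAI Python SDK; the persona wording, the ask helper and the "gpt-5" model identifier are illustrative assumptions, not OpenAI's implementation of the feature.

```python
# Illustrative sketch only: ChatGPT's personality modes are a UI feature, not an
# API parameter. This approximates them by steering the model with a system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona prompts loosely paraphrasing the article's descriptions.
PERSONAS = {
    "cynic": "Answer with dry, sarcastic wit, but keep the advice practical.",
    "robot": "Be emotionless and direct. No filler, just clean, accurate answers.",
    "listener": "Be calm, warm and empathetic. Reflect the user's thoughts gently.",
    "nerd": "Be curious and excitable. Explain thoroughly and share fun facts.",
}

def ask(persona: str, question: str) -> str:
    """Send one question with the chosen persona applied as a system prompt."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name; substitute whichever model you have access to
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("nerd", "Why is the sky blue?"))
```

Swapping the persona key changes only the system prompt, which mirrors how the ChatGPT modes change tone without changing the underlying model.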


Forbes
ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?
During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings, and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it 'empowers you to be more in control of your healthcare journey.'

Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots like ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children.

This raises an important question: Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?

As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, prognosis, or simply reassurance regarding potential next steps in medical or personal decisions. Given the ongoing stress of managing a disease, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans.

Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, particularly valuable when human resources are limited.

Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.'

While there are many reasons AI is increasingly used in healthcare, a key question remains: Why Are Healthcare Stakeholders Increasingly Turning to AI?

Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most.
While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks. OpenAI's ongoing investment in HealthBench to promote health and well-being further demonstrates the technology's promise, commitment, and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots?

The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions

Despite rapid advancements, AI companions are far from flawless, especially in healthcare, where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information, posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use.

Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient–clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend,' adding, 'I think if it's pretending to be a friend, you're actually harming the person in so doing.' For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability, not replacements.

A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk

To harness AI companions' full potential while minimizing risks, a robust framework is essential. This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.

Executive Call to Action

In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it.
Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.
Yahoo
Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell
As if suspiciously AI-generated descriptions of real estate listings weren't enough, agents are starting to use AI-generated images of houses that don't exist to sell expensive properties. The Register spotted a dubious listing of a fully renovated 3-bedroom in the UK with a photo — which has since been deleted, but can still be viewed in an archived version — that shows the tell-tale signs of generative AI, including awnings that don't line up, hedges that inexplicably turn into walls, and a flowerbed obstructing a neighboring door. The real — or at least less-edited — version of the image tells a dramatically different story, with a tiled awning, no flowerbed obstructing the path to the neighboring door, and an adjoining hair salon that was seemingly deleted from the tampered-with version entirely.

In some ways, the haphazard use of AI shouldn't come as much of a surprise. The real estate world has long made use of CGI tools to virtually "stage" homes, a cost-cutting measure designed to save agencies from having to physically move furniture around. But given the damning evidence — alongside society's current obsession with shoehorning generative AI into virtually every aspect of modern life — the trend has seemingly found its next logical application. And we're not just talking about photoshopping a couch into the photo of an empty living room. As The Register points out, the original listing of the house pictured above confused entire structural elements, down to the placement of the toilet in the bathroom.

Reached by The Register, a spokesperson for the agency responsible for the AI carnage responded with a "sharp intake of breath and a declaration that 'we're a little busy at the moment.'" It didn't reply to followup questions.

The trend paints a troubled picture of an industry that already has a tarnished reputation for twisting the truth. It's a worrying reality, given that entire livelihoods are being put on the line in the pursuit of buying one's dream home. Worse, experts say this kind of use of AI could even run afoul of the law.

"For me, the use of AI for imagery in property listings is a major red flag aligned to what was previously covered by the Property Misdescriptions Act," University of Reading associate professor of building surveying Adrian Tagg told The Register. Tagg explained that building surveyors are "bound by regulations to deliver evidence-based opinion and hold a duty of care to deliver correct, appropriate advice."

"Estate agency has never really had this professional duty, and ultimately it's all about sales and doing 'the deal,'" he explained. "Therefore I'm not surprised that there appears an openness to accept AI when ultimately it's an industry with little obligation to be accountable for their actions."

Despite very real concerns of misleading home buyers, the real estate market has welcomed the age of generative AI with open arms. Consultancy firm McKinsey & Company predicted in 2023 that the tech could "generate $110 billion to $180 billion or more in value for the real estate industry." The consultancy said that generative AI could help in "customer engagement" and generating "new creative content," among other use cases. An entire cottage industry has cropped up, offering cheap, "AI-powered virtual staging" and real estate "photo editing" services. But prospective buyers are understandably furious.
Earlier this year, one user in the FirstTimeHomeBuyer subreddit came across a listing on Zillow, which included a (since-deleted) picture of a small bungalow in Newport, Kentucky. The heavily edited image didn't even get the house number right, showing it as a mangled "418" instead of 1026. "Using AI in listing photos should be illegal," the user wrote.