AIRMEEZ Inc. Announces Partnership with Praxis AI to Transform Education and Enterprise Markets
Palm Harbor, FL, USA, April 4, 2025 -- AIRMEEZ Inc., a leader in AI-powered customer engagement solutions, today announced a groundbreaking partnership with Praxis AI, a premier middleware platform that transforms human expertise into digital twin agents.
'We're thrilled to announce that AIRMEEZ Inc. has integrated Digital Twins into the AIRMEEZ Platform,' said Mike Seeley, CRO and Co-Founder of AIRMEEZ. 'This groundbreaking addition expands the reach of our applications in the education, workforce development, and healthcare verticals, driving smarter, more efficient, and immersive experiences like never before. With Digital Twins, we're unlocking new possibilities by enhancing simulations, optimizing operations, and delivering cutting-edge solutions to improve learning and patient care.'
This collaboration aims to combine AIRMEEZ's innovative engagement technology with Praxis AI's proven education and workforce development solutions, leveraging Praxis AI's deep relationships with many institutions while expanding AIRMEEZ's enterprise-grade solutions across educational communities.
The partnership comes as Praxis AI experiences 15X growth, delivering nearly 1 billion AI tokens monthly with zero data leakage.
Together, the companies are working to break down traditional barriers to learning and create personalized educational experiences that extend beyond classroom walls. By combining cutting-edge AI with human-centered teaching approaches, this partnership empowers educators to reimagine what's possible and inspire students to engage with content in transformative new ways.
'By joining forces with Praxis AI, we're creating an AI powerhouse that will dramatically accelerate digital twin adoption across education and enterprise markets,' said Brandi Hardman, Chief Channel Officer at AIRMEEZ. 'This isn't just a distribution agreement, it's a commitment to revolutionizing how organizations capture, scale, and deploy human expertise.'
The partnership announcement coincides with Praxis AI's selection as a GSV Cup 50 finalist, with co-founder and CEO David James Clarke IV presenting at the prestigious ASU+GSV Summit in San Diego. The recognition underscores the transformative potential of the combined AIRMEEZ-Praxis AI technology suite.
The distribution agreement is reinforced by Harvard Business School research demonstrating that AI agents dramatically reshape expertise sharing and collaboration in knowledge work. The study found that AI-enabled digital twins break down professional silos, democratize knowledge, foster more holistic problem-solving across domains, and create more positive workplace experiences with fewer negative emotions.
The combined AIRMEEZ-Praxis AI solution leverages Anthropic's Claude AI models and has delivered measurable gains in student engagement and outcomes, including full letter-grade improvements in class averages and significant time savings for faculty.
'What Anthropic has created with Claude is extraordinary. Students can now have meaningful, personalized learning interactions anytime, at scale, which is magical,' said David James Clarke IV, CEO of Praxis AI. 'Our partnership with AIRMEEZ will make this technology accessible to even more institutions and enterprises worldwide.'
ENDS
Media Contact: Brandi Hardman
Phone: (919) 600-1069
Website: www.airmeez.com
About AIRMEEZ
AIRMEEZ specializes in AI-driven customer engagement solutions that transform business-customer interactions through innovative digital technologies. The company's digital twin platform creates virtual, real-time replicas using advanced artificial intelligence, delivering personalized, 24/7 support across multiple channels.
About Praxis AI
Praxis AI is an innovative AI middleware platform that transforms human experts' knowledge into digital twin agents for education and workforce development. The platform enables personalized one-on-one faculty interactions for students and democratizes access to specialized expertise.
Related Articles

Business Insider
Klarna CEO warns AI may cause a recession as the technology comes for white-collar jobs
Speaking on The Times Tech podcast, Sebastian Siemiatkowski said there would be "an implication for white-collar jobs," which he said "usually leads to at least a recession in the short term." "Unfortunately, I don't see how we could avoid that, with what's happening from a technology perspective," he continued.

Siemiatkowski, who has long been candid about his belief that AI will come for human jobs, added that AI had played a key role in "efficiency gains" at Klarna and that the firm's workforce had shrunk from about 5,500 to 3,000 people in the last two years as a result.

It's not the first time the exec and Klarna have made headlines along these lines. In February 2024, Klarna boasted that its OpenAI-powered AI assistant was doing the work of 700 full-time customer service agents. The company, most famous for its "buy now, pay later" service, was one of the first firms to partner with Sam Altman's company. Later that year, Siemiatkowski told Bloomberg TV that he believed AI was already capable of doing "all of the jobs" that humans do and that Klarna had enacted a hiring freeze since 2023 as it looked to slim down and focus on adopting the technology.

However, Siemiatkowski has since dialed back his all-in stance on AI, telling an audience at the firm's Stockholm headquarters in May that his AI-driven customer service cost-cutting efforts had gone too far and that Klarna was now planning to recruit, according to Bloomberg. "From a brand perspective, a company perspective, I just think it's so critical that you are clear to your customer that there will be always a human if you want," he said.

In the interview with The Times, Siemiatkowski said he felt that many people in the tech industry, particularly CEOs, tended to "downplay the consequences of AI on jobs, white-collar jobs in particular." "I don't want to be one of them," he said. "I want to be honest, I want to be fair, and I want to tell what I see so that society can start taking preparations."
Some of the top leaders in AI, however, have been ringing the alarm lately, too. Anthropic's leadership has been particularly outspoken about the threat AI poses to the human labor market. The company's CEO, Dario Amodei, recently said that AI may eliminate 50% of entry-level white-collar jobs within the next five years. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. "I don't think this is on people's radar."

Similarly, his colleague Mike Krieger, Anthropic's chief product officer, said he is hesitant to hire entry-level software engineers over more experienced ones who can also leverage AI tools. The silver lining is that AI also brings the promise of better and more fulfilling work, Krieger said. Humans, he said, should focus on "coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale — and that's probably some combination of maybe a comeback of some static analysis or maybe AI-driven analysis tools of what was actually produced."

Business Insider
The future of AI will be governed by protocols no one has agreed on yet
The tech industry, much like everything else in the world, abides by certain rules. With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet. These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate. As the world enters an era shaped by AI, it will need to draw up new ones.

But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds. How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?

Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present, in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor. "Protocols are going to be this kind of standardized way of processing non-deterministic information," Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic's Model Context Protocol, told BI. Agents, and AI in general, are "inherently non-deterministic in terms of what they do and how they behave." When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios. Here are a few that call for clear protocols.

Scenario 1: Humans and AI, a dialogue of equals

Games are one way to determine which protocols strike the right balance of power between AI and humans.
In late 2024, a group of young cryptography experts launched Freysa, an AI agent that invites human users to manipulate it. The rules are unconventional: Make Freysa fall in love with you or agree to concede its funds, and the prize is yours. The prize pool grows with each failed attempt in a standoff between human intuition and machine logic. Freysa has caught the attention of big names in the tech industry, from Elon Musk, who called one of its games "interesting," to veteran venture capitalist Marc Andreessen.

"The core technical thing we've done is enabled her to have her own private keys inside a trusted enclave," said one of the architects of Freysa, who spoke to BI under the condition of anonymity in a January interview. Secure enclaves are not new in the tech industry. They're used by companies from AWS to Microsoft as an extra layer of security to isolate sensitive data. In Freysa's case, the architect said they represent the first step toward creating a "sovereign agent." He defined that as an agent that can control its own private keys, access money, and evolve autonomously — the type of agent that will likely become ubiquitous.

"Why are we doing it at this time? We're entering a phase where AI is getting just good enough that you can see the future, which is AI basically replacing your work, my work, all our work, and becoming economically productive as autonomous entities," the architect said. In this phase, they said Freysa helps answer a core question: "What does human involvement look like? And how do you have human co-governance over agents at scale?"

In May, The Block, a crypto news site, revealed that the company behind Freysa is Eternis AI. Eternis AI describes itself as an "applied AI lab focused on enabling digital twins for everyone, multi-agent coordination, and sovereign agent systems." The company has raised $30 million from investors, including Coinbase Ventures.
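The enclave design described above reduces to a narrow interface: the private key is generated inside the enclave and never leaves it; outside callers can only submit messages and receive signatures back. A toy sketch of that interface, using HMAC from Python's standard library as a stand-in for the enclave's real signing primitive (class and method names are hypothetical illustrations, not Freysa's implementation):

```python
import hashlib
import hmac
import os


class ToyEnclave:
    """Toy stand-in for a trusted enclave: the key is created inside
    and never exposed; callers only get signatures and verification."""

    def __init__(self):
        self._key = os.urandom(32)  # generated internally, never returned

    def sign(self, message: bytes) -> str:
        # Only the signature crosses the boundary, never the key.
        return hmac.new(self._key, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, signature: str) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(self.sign(message), signature)


enclave = ToyEnclave()
sig = enclave.sign(b"release funds to winner")
print(enclave.verify(b"release funds to winner", sig))    # True
print(enclave.verify(b"release funds to attacker", sig))  # False
```

A real sovereign agent would use asymmetric keys inside hardware-backed enclaves, but the property is the same: tampering with the message invalidates the signature, and no caller can extract the key to sign on the agent's behalf.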
Its co-founders are Srikar Varadaraj, Pratyush Ranjan Tiwari, Ken Li, and Augustinas Malinauskas.

Scenario 2: To the current architects of intelligence

Freysa establishes protocols in anticipation of a hypothetical future when humans and AI agents interact with similar levels of autonomy. The world, however, also needs to set rules for the present, where AI still remains a product of human design and intention.

AI typically runs on the web and builds on existing protocols developed long before it, explained Davi Ottenheimer, a cybersecurity strategist who studies the intersection of technology, ethics, and human behavior, and is president of security consultancy flyingpenguin. "But it adds in this new element of intelligence, which is reasoning," he said, and we don't yet have protocols for reasoning. "I'm seeing this sort of hinted at in all of the news. Oh, they scanned every book that's ever been written and never asked if they could. Well, there was no protocol that said you can't scan that, right?" he said.

There might not be protocols, but there are laws. OpenAI is facing a copyright lawsuit from the Authors Guild for training its models on data from "more than 100,000 published books" and then deleting the datasets. Meta considered buying the publishing house Simon & Schuster outright to gain access to published books. Tech giants have also resorted to tapping almost all of the consumer data available online, from the content of public Google Docs to the relics of social media sites like Myspace and Friendster, to train their AI models.

Ottenheimer compared the current dash for data to the creation of ImageNet — the visual database that propelled computer vision, built by Mechanical Turk workers who scoured the internet for content. "They did a bunch of stuff that a protocol would have eliminated," he said.
Scenario 3: How to talk to each other

As we move closer to a future where artificial general intelligence is a reality, we'll need protocols for how intelligent systems — from foundation models to agents — communicate with each other and the broader world. The leading AI companies have already launched new ones to pave the way.

Anthropic, the maker of Claude, launched the Model Context Protocol, or MCP, in November 2024, describing it as a "universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol." In April, Google launched Agent2Agent, a protocol that will "allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications." These build on existing AI protocols, but address new challenges of scaling and interoperability that have become critical to AI adoption. Managing agents' behavior, Gmitruk said, is the "middle step before we unleash the full power of AGI and let them run around the world freely."

When we arrive at that point, Gmitruk said, agents will no longer communicate through APIs but in natural language. They'll have unique identities, jobs even, and need to be verified. "How do we enable agents to communicate between each other, and not just being computer programs running somewhere on the server, but actually being some sort of existing entity that has its history, that has its kind of goals," Gmitruk said.

It's still early to set standards for agent-to-agent communication, Gmitruk said. Earlier this year, he and his team launched a company focused on building an authentication protocol for agents, but pivoted. "It was too early for agent-to-agent authentication," he told BI over LinkedIn. "Our overall vision is still the same -> there needs to be agent-native access to the conventional internet, but we just doubled down on MCP as this is more relevant at the stage of agents we're at."
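Under the hood, a protocol like MCP is a message-framing convention: MCP is built on JSON-RPC 2.0, in which every exchange is a tagged request matched to a response by its id. A minimal stdlib sketch of that framing (the `docs/read` method and its handler are hypothetical illustrations, not part of the actual MCP schema):

```python
import json


def make_request(req_id, method, params):
    """Frame a JSON-RPC 2.0 request, the wire format MCP builds on."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})


def handle(raw, handlers):
    """Dispatch a request to a registered handler and frame the reply."""
    msg = json.loads(raw)
    if msg["method"] not in handlers:
        # Standard JSON-RPC error code for an unknown method.
        return json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                           "error": {"code": -32601,
                                     "message": "Method not found"}})
    result = handlers[msg["method"]](msg.get("params", {}))
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})


# Hypothetical "data source" an AI system reaches through the protocol.
handlers = {"docs/read": lambda p: {"text": f"contents of {p['uri']}"}}

req = make_request(1, "docs/read", {"uri": "file:///notes.txt"})
resp = json.loads(handle(req, handlers))
print(resp["result"]["text"])  # contents of file:///notes.txt
```

The value of the standard lies in exactly this uniformity: any client that can frame such a request can talk to any server that understands it, which is what replaces one-off, fragmented integrations.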
Does everything need a protocol?

Definitely not. The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized. McKinsey & Company calls it an "inflection point" in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of "connectivity, advanced analytics, automation, and advanced-manufacturing technology." Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world's debate over the value of open-source versus closed models.

"I think we will see a lot of new protocols in the age of AI," said Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman's World. However, "I don't think everything should be a protocol." World is a protocol designed for a future in which humans will need to verify their identity at every turn. Sada said the goal of any protocol "should be like this open thing, like this open infrastructure that anyone can use," free from censorship or influence.

At the same time, "one of the downsides of protocols is that they're sometimes slower to move," he said. "When's the last time email got a new feature? Or the internet? Protocols are open and inclusive, but they can be harder to monetize and innovate on," he said. "So in AI, yes — we'll see some things built as protocols, but a lot will still just be products."
Yahoo
Reddit Lawsuit Against Anthropic AI Has Stakes for Sports
In a new lawsuit, Reddit accuses AI company Anthropic of illegally scraping its users' data—including posts authored by sports fans who use the popular online discussion platform. Reddit's complaint, drafted by John B. Quinn and other attorneys from Quinn Emanuel Urquhart & Sullivan, was filed on Wednesday in a California court. It contends Anthropic breached the Reddit user agreement by scraping Reddit content through its web crawler, ClaudeBot. The web crawler provides training data for Anthropic's AI tool, Claude, which relies on large language models (LLMs) that distill data and language.

Other claims in the complaint include tortious interference and unjust enrichment. Scraping Reddit content is portrayed as undermining Reddit's obligations to its more than 100 million daily active unique users, including the obligation to protect their privacy. Reddit also contends Anthropic subverts its assurances to users that they control their expressions, including when deleting posts from public view.

Scraping is key to AI. Automated technology makes requests to a website, then copies the results and tries to make sense of them. Anthropic, Reddit claims, finds Reddit data 'to be of the highest quality and well-suited for fine-tuning AI models' and useful for training AI. Anthropic allegedly violates users' privacy, since those users 'have no way of knowing' their data has been taken.

Reddit, valued at $6.4 billion in its initial public offering last year, has hundreds of thousands of 'subreddits,' or online communities that cover numerous shared interests. Many subreddits are sports related, including r/sports, which has 22 million fans, r/nba (17 million) and the college football-centered r/CFB (4.4 million).
Some pro franchises, including the Miami Dolphins (r/miamidolphins) and Dallas Cowboys (r/cowboys), have official subreddits. Reddit contends its unique features elevate its content and thus make the content more attractive to scraping endeavors. Reddit users submit posts, which can include original commentary, links, polls and videos, and they upvote or downvote content. This voting influences whether a post appears on the subreddit's front page or is more obscurely placed. Subreddit communities also self-police, with prohibitions on personal attacks, harassment, racism and spam. These practices can generate thoughtful and detailed commentary.

Reddit estimates that ClaudeBot's scraping of Reddit has 'catapulted Anthropic into its valuation of tens of billions of dollars.' Meanwhile, Reddit says the company and its users lose out, because they 'realize no benefits from the technology that they helped create.' Anthropic allegedly trained ClaudeBot to extract data from Reddit starting in December 2021. Anthropic CEO Dario Amodei is quoted in the complaint as praising Reddit content, especially content found in prominent subreddits. Although Anthropic indicated it had stopped scraping Reddit in July 2024, Reddit says audit logs show Anthropic 'continued to deploy its automated bots to access Reddit content' more than 100,000 times in subsequent months.

Reddit also unfavorably compares Anthropic to OpenAI and Google, which are 'giants in the AI space.' Reddit says OpenAI and Google 'entered into formal partnerships with Reddit' that permitted them to use Reddit content but only in ways that 'protect Reddit and its users' interests and privacy.' In contrast, Anthropic is depicted as engaging in unauthorized activities. In a statement shared with media, an Anthropic spokesperson said, 'we disagree with Reddit's claims, and we will defend ourselves vigorously.'
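Mechanically, the crawling at issue works like any web scraper: fetch a site's robots.txt, check whether its rules permit your user agent, then request pages. A minimal sketch of that check using Python's standard library (the rules below are hypothetical illustrations, not Reddit's actual robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: block one named crawler, allow the rest.
rules = """\
User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

# Parse the rules offline; in practice a crawler fetches
# https://<site>/robots.txt before requesting any other page.
rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("ClaudeBot", "https://example.com/r/nba"))     # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/r/nba"))  # True
```

The catch, central to disputes like this one, is that robots.txt is purely advisory: nothing technically stops a crawler from ignoring it, which is why enforcement falls back on user agreements and the courts.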
In the weeks ahead, attorneys for Anthropic will answer Reddit's complaint and argue the company has not broken any laws. Reddit v. Anthropic has implications beyond the posts of Reddit users. Web crawler scraping is a constant activity on the Internet, including on message boards, blogs and other forums where sports fans and followers express viewpoints. The use of this content to train AI without users' knowledge or explicit consent is a legal topic sure to stir debate in the years ahead.