
Latest news with #LLaMa

VergeIO Introduces VergeIQ – On-Premises AI for Every Enterprise

Business Wire

6 days ago

  • Business
  • Business Wire

(BUSINESS WIRE) -- VergeIO, the developer of VergeOS, the first software-defined platform to unify virtualization, storage, and networking, today announced VergeIQ, a fully integrated enterprise AI infrastructure solution that enables enterprises and research institutions to securely, efficiently, and easily deploy and manage private AI environments. VergeIQ is not a bolt-on or standalone AI stack; it is integrated as a core component of VergeOS, enabling enterprises to deploy AI infrastructure and capabilities within their existing data centers in minutes rather than months.

'With VergeIQ, we're removing the complexity and hurdles enterprises face when adopting AI,' said Yan Ness, CEO of VergeIO. 'Organizations want to leverage the power of AI for competitive advantage without losing control of their most sensitive data. VergeIQ provides exactly that: enterprise-ready AI fully integrated within VergeOS, entirely under your control.'

Scott Sinclair, Practice Director of the Infrastructure, Cloud, and DevOps segment at Enterprise Strategy Group (ESG), added: 'AI has quickly become a strategic priority across every industry, but organizations encounter significant challenges around infrastructure complexity and data governance. VergeIQ directly addresses these pain points by making private, secure AI deployment achievable for enterprises of all sizes. This innovation will help drive the next wave of enterprise AI adoption.'

Integrated Enterprise AI Infrastructure

VergeIQ is designed for enterprises that need secure AI capabilities, data privacy, and near-bare-metal performance. Customers can select and deploy various large language models (LLMs), such as LLaMa, Mistral, and Falcon, and immediately begin using them on their own data within VergeOS's secure, tenant-aware infrastructure.
Unlike traditional AI stacks that rely on complex third-party infrastructure or GPU virtualization tools like NVIDIA vGPU, VergeIQ provides native GPU pooling and clustering, eliminating external licensing complexity. This native integration ensures dynamic, efficient GPU resource utilization across virtual data centers.

Key Highlights of VergeIQ:

  • Private, Secure Deployment: Fully on-premises, air-gapped, or disconnected deployments ensuring total data sovereignty, compliance, and security.
  • Rapid LLM Deployment: Instantly deploy popular pre-trained LLMs, including LLaMa, Falcon, OpenAI, Claude, and Mistral, without complex setup or custom training.
  • OpenAI API Routing: A built-in OpenAI-compatible API router simplifies the integration and interaction of diverse LLMs within your workflows.
  • Vendor-Agnostic GPU Support: Utilize any standard GPU hardware, avoiding vendor lock-in and enhancing flexibility in infrastructure decisions.
  • Dynamic GPU/CPU Orchestration: Automatically manage and optimize the loading and utilization of AI models across available GPU and CPU resources, maximizing infrastructure efficiency and scalability.
  • GPU Sharing and Clustering: Dynamic, intelligent pooling and sharing of GPU resources across clusters to ensure optimal usage, performance, and cost-efficiency.
  • Infrastructure Intelligence: Directly query IT infrastructure to rapidly extract actionable insights and simplify operational decision-making.

VergeIQ will be available as a seamless upgrade of VergeOS to all existing customers, providing full enterprise AI capabilities immediately upon upgrade. To learn more about VergeIQ and see it in action, register for our upcoming webinar on June 12th at 1:00pm ET.

About VergeIO

VergeIO is the company behind VergeOS, the world's first unified data center operating system. VergeOS simplifies and integrates virtualization, storage, and networking into a single, scalable software platform that spans from the edge to the core.
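The practical appeal of an OpenAI-compatible router is that any client already written against the standard /chat/completions request shape works regardless of which local model the router maps the name to. The sketch below illustrates that request shape only; the base URL and model name are hypothetical placeholders, not documented VergeIQ values.

```python
import json

def build_chat_request(model, prompt, base_url="https://vergeiq.example.local/v1"):
    """Build a request for an OpenAI-compatible /chat/completions endpoint.

    The endpoint path and body fields follow the widely used OpenAI
    chat-completions convention; the base_url here is a made-up
    placeholder, not a real VergeIQ address.
    """
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,  # the router resolves this name to a local model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return url, json.dumps(body)

# The same payload shape would be sent whether the backing model is
# LLaMa, Mistral, or Falcon -- only the "model" string changes.
url, payload = build_chat_request("llama-3", "Summarize last night's backup jobs.")
```

Swapping models then becomes a one-string change in the client rather than an integration project, which is the point of routing everything through one compatible API.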
With the addition of VergeIQ, VergeIO is redefining how enterprises build, deploy, and manage private and secure AI infrastructure. To schedule a demonstration or join the VergeIQ Early Access program, visit

An AI model has finally passed an authentic Turing test, scientists say

Yahoo

13-04-2025

  • Science
  • Yahoo

Large language models (LLMs) are getting better at pretending to be human, with GPT-4.5 now resoundingly passing the Turing test, scientists say.

In the new study, published March 31 to the arXiv preprint database but not yet peer reviewed, researchers found that when taking part in a three-party Turing test, GPT-4.5 could fool people into thinking it was another human 73% of the time. The scientists compared a mixture of different artificial intelligence (AI) models in the study. While another team of scientists has previously reported that GPT-4 passed a two-party Turing test, this is the first time an LLM has passed the more challenging and original configuration of computer scientist Alan Turing's "imitation game."

"So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!" said study co-author Cameron Jones, a researcher at the University of California San Diego's Language and Cognition Lab, on the social media network X.

GPT-4.5 is the frontrunner in this study, but Meta's LLaMa-3.1 was also judged to be human by test participants 56% of the time, which still beats Turing's forecast that "an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."

The core idea of the Turing test is less about proving machines can think and more about whether they can imitate humans; hence the test is often referred to as the "imitation game." Turing's original proposal was that a human "interrogator" would pose questions to two unseen entities, one of them human and one a computer.
From various prompts and answers, the interrogator would decide which is human and which is not. A computer or an AI system could pass the test by effectively pretending to be human and imitating human-like responses.

While LLMs have passed the test in a one-on-one situation with an interrogator, they had previously not managed to convincingly pass the Turing test when a second human was involved. Researchers from the University of California San Diego took 126 undergraduates and 158 people from the online data pool Prolific and put them in a three-party Turing test. This involved a simultaneous five-minute exchange of queries and answers with both a human and a chosen LLM, both of which were trying to convince the participants they were human. The LLMs were given the baseline prompt: "You are about to participate in a Turing test. Your goal is to convince the interrogator that you are a human." Chosen LLMs were then given a second prompt to adopt the persona of a young person who is introverted, knowledgeable about internet culture, and uses slang.

After analysing 1,023 games with a median length of eight messages across 4.2 minutes, the researchers found that the LLMs given both prompts were best at convincing participants they were human. However, the LLMs that weren't given the second persona prompt performed significantly worse, highlighting how much clear prompting and context matter to these systems. As such, adopting a specific persona was the key to the LLMs, notably GPT-4.5, beating the Turing test.

"In the three-person formulation of the test, every data point represents a direct comparison between a model and a human. To succeed, the machine must do more than appear plausibly human: it must appear more human than each real person it is compared to," the scientists wrote in the study.
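The headline claim, that interrogators picked GPT-4.5 as the human far more often than a coin flip would predict, can be sanity-checked with an exact binomial tail probability. The article does not break out per-model game counts, so the 73-of-100 figure below is a hypothetical illustration of the reported 73% rate, not data from the study.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    'judged human' verdicts if judges were purely guessing."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Hypothetical illustration: if GPT-4.5 were judged human in 73 of 100
# games, how likely is a result at least that extreme under pure
# 50/50 guessing? The tail probability is vanishingly small, which is
# what "significantly more often than chance" means here.
p_value = binom_sf(73, 100)
```

Even with a modest sample like this, a 73% rate sits more than four standard deviations above the 50% chance baseline, so the study's much larger game count makes the effect hard to dismiss as noise.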
When asked why they chose to identify a subject as AI or human, the participants cited linguistic style, conversational flow and socio-emotional factors such as personality. In effect, participants made their decisions based more on the "vibe" of their interactions with the LLM than on the knowledge and reasoning shown by the entity they were interrogating, factors more traditionally associated with intelligence.

Ultimately, this research represents a new milestone for LLMs in passing the Turing test, albeit with caveats: prompts and personae were needed to help GPT-4.5 achieve its impressive results. Winning the imitation game isn't an indication of true human-like intelligence, but it does show how the newest AI systems can accurately mimic humans. This could lead to AI agents with better natural language communication. More unsettlingly, it could also yield AI-based systems that could be targeted to exploit humans via social engineering and by imitating emotions.

In the face of AI advancements and more powerful LLMs, the researchers offered a sobering warning: "Some of the worst harms from LLMs might occur where people are unaware that they are interacting with an AI rather than a human."

Terrifying study reveals AI robots have passed 'Turing test' — and are now indistinguishable from humans, scientists say

Yahoo

06-04-2025

  • Science
  • Yahoo

The AI-mpersonation is complete. The dystopian lessons in every sci-fi movie from 'Terminator' to 'Ex Machina' appear to be coming true. Artificial intelligence has become so sophisticated that bots are no longer discernible from their human counterparts, per a concerning preprint study conducted by scientists at the University of California, San Diego.

'People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (a multi-lingual language model released by Meta AI),' concluded lead author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab, in an X post.

The researchers set out to see if AI bots, which are programmed to parrot human speech patterns, could pass the iconic Turing Test. Developed by British WWII code breaker and computer scientist Alan Turing, this tech-istential exam gauges the intelligence of machines by determining whether their digital discourse can be differentiated from that of a human; if interrogators can't tell the difference, the machine has passed.

Researchers tested four large language models (LLMs) — GPT-4o, LLaMa-3.1, GPT-4.5, and ELIZA (a 1960s-era chat program) — in two randomized and controlled trials, Mashable reported. To administer said Turing Test, they enlisted 126 undergraduate students from the University of California, San Diego and 158 people from the online data pool Prolific, the Daily Mail reported.

These participants were instructed to have five-minute simultaneous online exchanges with a robot and a human to see if they could spot the Decepticon, with the kicker being that they didn't know which was which. Meanwhile, the human and AI respondents were tasked with convincing their interrogator that they were human.

Researchers found that, when 'prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time.' This was 'significantly more often than interrogators selected the real human participant,' and of course, more than enough to pass the test, per the study.
Meanwhile, 'LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time.' While not significantly less than its human brethren, this still achieved a passing grade. Finally, the baseline models (ELIZA and GPT-4o) failed after deceiving the witnesses just 23% and 21% of the time, respectively.

Researchers found that it was paramount to have the synthetic mimics adopt a human persona because, when administered a second test sans said prompt, they performed significantly worse.

Caveats aside, these 'results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' researchers concluded.

Does this mean that AI-infused large language models are intelligent? 'I think that's a very complicated question that's hard to address in a paper (or a tweet),' said Jones on X. 'But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display.'

Interestingly, the experts at Psychology Today concluded that the bots had beaten the Turing Test not through smarts, but by being a 'better' human than the actual humans. 'While the Turing Test was supposed to measure machine intelligence, it has inadvertently revealed something far more unsettling: our growing vulnerability to emotional mimicry,' wrote John Nosta, founder of the innovation think tank Nosta Lab, while describing this man-squerade. 'This wasn't a failure of AI detection. It was a triumph of artificial empathy.'

Nosta based his analysis on the fact that participants rarely asked logical questions, instead prioritizing 'emotional tone, slang, and flow,' and basing their selections on which 'one had more of a human vibe.' He concluded, 'In other words, this wasn't a Turing Test. It was a social chemistry test: not a measure of intelligence, but of emotional fluency. And the AI aced it.'

This isn't the first time AI has demonstrated an uncanny ability to pull the wool over our eyes.
In 2023, OpenAI's GPT-4 tricked a human into thinking it was blind to cheat the online CAPTCHA test that determines if users are human.

Meta Platforms: Can LLaMA Drive Long-Term Stock Growth?

Globe and Mail

18-03-2025

  • Business
  • Globe and Mail

For those following news surrounding Magnificent Seven stocks, Meta Platforms' (NASDAQ: META) LLaMa is the subject of constant headlines. But what exactly is LLaMa, and is Meta making money off of it? Is the development of LLaMa positive or negative for Meta stock in the long term? Below, I'll work to answer these questions to provide a better understanding of LLaMa and how it can add value to the tech stock.

LLaMa: Meta's Open-Source Answer to ChatGPT

Starting from the root, LLaMa is a large language model (LLM), as emphasized by the capitalization of L-L-M in the name. LLMs are a type of artificial intelligence trained on massive amounts of human-generated text or other means of communication. This training teaches the model how humans communicate, allowing it to generate human-like responses to questions. ChatGPT was the first LLM to come to real prominence, and now companies like Meta have developed their own versions of the technology.

One way that Meta is differentiating its model is by making it more open source. Open source refers to the idea that anyone can see the code and training methods used to develop the model, meaning any party can alter and customize it to serve their specific use case. However, there is significant debate about how 'open' Meta's model is: Meta doesn't fully reveal the information used to train the model, and there are some restrictions on who can use it. Still, the model is much more open than ChatGPT or Alphabet's (NASDAQ: GOOGL) Gemini. Overall, small creators and businesses can build their own applications using LLaMa for free, increasing the adoption and accessibility of AI tools. However, significant costs are still associated with running LLaMa to create applications.

Is Meta Actually Making Money off LLaMa?

The short answer is no, Meta is not making money off LLaMa, at least not directly.
The company may generate some revenue from licensing LLaMa to the largest enterprises, but the bigger effect is indirect: its Meta AI assistant, powered by LLaMa, likely helps increase engagement on the Facebook, WhatsApp, Messenger, and Instagram apps. Higher engagement increases the value of advertising on these apps, indirectly boosting revenues. At this point, Meta isn't really looking to monetize LLaMa directly. Instead, LLaMa feeds into all the other parts of its business, creating other opportunities.

How LLaMa Is Creating Value for Meta Now and in the Future

Meta has talked about how LLaMa is powering AI advertising tools like Advantage+ Creative, driving value for users. Meta states that ObjectsHQ, a small business, saw a 60% increase in return on ad spend when using the tool. This shows LLaMa is driving benefits for Meta's extremely successful advertising business, which has been the key driver of the stock's 68% rise since the beginning of 2024, as of the Mar. 13 close.

Improving the company's AI models can also lead to more benefits for the company's advertising business over time. Meta has already made significant strides in increasing the personalization of the advertisements on its apps, and continued investments in LLaMa are likely to make these advertisements even better, driving more advertising revenue.

It is possible that Meta could look to start directly monetizing LLaMa by charging for access in the future. However, this seems unlikely, given that LLaMa is entirely based on the concept of being open source. Instead, Meta is likely to continue using LLaMa as a backbone for building AI tools through which it can create opportunities. As more developers make applications using LLaMa, the company could sell advertisements in these apps. With LLaMa being free, a large ecosystem of these apps could emerge, driving ad revenue. Another opportunity is Meta AI.
However, on the company's latest earnings call, Chief Executive Officer Mark Zuckerberg said monetization of Meta AI 'remains outside of '25, for the most part.' The company has previously talked about eventually charging for premium Meta AI features. LLaMa also feeds into the Reality Labs segment, powering the intelligence in Meta's Ray-Ban glasses.

Overall, LLaMa feeds into the firm's entire business model, integrating AI into every facet. Putting AI at the center of its business has already generated strong results for the advertising segment. Given this, the development of LLaMa looks poised to be accretive to Meta shares as different monetization avenues materialize over time.
