Latest news with #LLaMa


Time of India
4 days ago
- Business
- Time of India
Tanla Platforms wins deal in Indonesia
Mumbai: Tanla Platforms has won a multi-year deal from Indosat Ooredoo Hutchison, Indonesia's second-largest telco, to roll out an AI-powered platform aimed at combating spam and scam messages. Indonesia is the world's second-largest spammed market after India.

'The rollout marks Tanla's first major international expansion of its AI-based security technology beyond India,' Uday Reddy, founder chairman of Tanla, said Thursday. The Hyderabad-based communications services company didn't provide a deal value. It already provides blockchain-based anti-spam solutions to Vodafone Idea and state-run Bharat Sanchar Nigam Ltd. (BSNL) in India.

The anti-spam and anti-scam system being adopted by Indosat uses both traditional machine learning and generative AI models (trained on Meta's LLaMa), and is hosted on NVIDIA GPU infrastructure, Tanla said in a joint statement Thursday. The deployment will help detect and mitigate fraudulent activity in real time across SMS, voice, and voice over Internet protocol (VoIP) channels, covering both application-to-person (A2P) and person-to-person (P2P) communication.

'Our partnership with Tanla to deploy the AI‑powered anti‑spam and anti‑scam solution shows how technology can safeguard daily digital lives,' Vikram Sinha, president director of Indosat Ooredoo, said in the statement. Indosat has some 100 million subscribers, and its parent group has a presence in 12 global markets across the Middle East, Africa and Southeast Asia.

Indonesian telecom users face 400-450 million cyberattacks annually, Reddy said. Nearly 1-1.25 million unique users fall victim to scam messages in that country. When asked about competing with a similar offering from India's second-largest telco, Bharti Airtel, Reddy explained that 360-degree spam protection will only succeed if it is ubiquitous across SMS, voice, and VoIP channels – both personal and commercial. 'VoIP and P2P protection is our biggest moat, which competing products do not have.'
He added that the solution integrates with SIM cards and mobile handsets, both Android and iOS, for real-time threat detection at the device and network levels. It will be delivered under a subscription-based Software-as-a-Service (SaaS) model.


Business Wire
03-06-2025
- Business
- Business Wire
VergeIO Introduces VergeIQ – On-Premises AI for Every Enterprise
(BUSINESS WIRE)-- VergeIO, the developer of VergeOS—the first software-defined platform to unify virtualization, storage, and networking—today announced VergeIQ, a fully integrated enterprise AI infrastructure solution enabling enterprises and research institutions to securely, efficiently, and easily deploy and manage private AI environments. VergeIQ is not a bolt-on or standalone AI stack. It is seamlessly integrated as a core component of VergeOS, enabling enterprises to rapidly deploy AI infrastructure and capabilities within their existing data centers in minutes, rather than months.

'With VergeIQ, we're removing the complexity and hurdles enterprises face when adopting AI,' said Yan Ness, CEO of VergeIO. 'Organizations want to leverage the power of AI for competitive advantage without losing control of their most sensitive data. VergeIQ provides exactly that—enterprise-ready AI fully integrated within VergeOS, entirely under your control.'

Scott Sinclair, Practice Director of the Infrastructure, Cloud, and DevOps segment at Enterprise Strategy Group (ESG), added: 'AI has quickly become a strategic priority across every industry, but organizations encounter significant challenges around infrastructure complexity and data governance. VergeIQ directly addresses these pain points by making private, secure AI deployment achievable for enterprises of all sizes. This innovation will help drive the next wave of enterprise AI adoption.'

Integrated Enterprise AI Infrastructure

VergeIQ is designed for enterprises needing secure AI capabilities, data privacy, and near-bare-metal performance. Customers can select and deploy various large language models (LLMs)—such as LLaMa, Mistral, and Falcon—and immediately begin using them on their own data within VergeOS' secure, tenant-aware infrastructure.
Unlike traditional AI stacks that rely on complex third-party infrastructure or GPU virtualization tools like NVIDIA vGPU, VergeIQ provides native GPU pooling and clustering, eliminating external licensing complexity. This native integration ensures dynamic, efficient GPU resource utilization across virtual data centers.

Key Highlights of VergeIQ:
- Private, Secure Deployment: Fully on-premises, air-gapped, or disconnected deployments ensuring total data sovereignty, compliance, and security.
- Rapid LLM Deployment: Instantly deploy popular pre-trained LLMs, including LLaMa, Falcon, OpenAI, Claude, and Mistral, without complex setup or custom training.
- OpenAI API Routing: A built-in OpenAI-compatible API router simplifies the integration and interaction of diverse large language models (LLMs) within your workflows.
- Vendor-Agnostic GPU Support: Utilize any standard GPU hardware, avoiding vendor lock-in and enhancing flexibility in infrastructure decisions.
- Dynamic GPU/CPU Orchestration: Automatically manage and optimize the loading and utilization of AI models across available GPU and CPU resources, maximizing infrastructure efficiency and scalability.
- GPU Sharing and Clustering: Dynamic, intelligent pooling and sharing of GPU resources across clusters to ensure optimal usage, performance, and cost-efficiency.
- Infrastructure Intelligence: Directly query IT infrastructure to extract actionable insights and rapidly simplify operational decision-making.

VergeIQ will be available as a seamless upgrade of VergeOS to all existing customers, providing full enterprise AI capabilities immediately upon upgrade. To learn more about VergeIQ and see it in action, register for our upcoming webinar on June 12th at 1:00pm ET.

About VergeIO

VergeIO is the company behind VergeOS, the world's first unified data center operating system. VergeOS simplifies and integrates virtualization, storage, and networking into a single, scalable software platform that spans from the edge to the core.
With the addition of VergeIQ, VergeIO is redefining how enterprises build, deploy, and manage private and secure AI infrastructure. To schedule a demonstration or join the VergeIQ Early Access program, visit
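The "OpenAI-compatible" routing the release highlights means existing OpenAI-style clients can talk to locally hosted models by changing only the base URL. As a minimal sketch of what that wire format looks like (the endpoint address and model name below are hypothetical placeholders, not documented VergeIQ values):

```python
import json

# Sketch of a request body for an OpenAI-compatible router such as the one
# VergeIQ advertises. BASE_URL and "llama-3" are assumptions for illustration;
# the article does not document VergeIQ's actual endpoints or model names.
BASE_URL = "http://vergeiq.local/v1"  # hypothetical local router address

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,  # e.g. a locally deployed LLaMa or Mistral variant
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.2,
    }

payload = build_chat_request("llama-3", "Summarize today's infrastructure alerts.")
# An OpenAI-compatible server accepts this JSON body POSTed to
# f"{BASE_URL}/chat/completions" with an Authorization header.
print(json.dumps(payload))
```

Because the wire format matches OpenAI's, existing SDKs and tools can typically be pointed at such a router simply by overriding their base URL.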
Yahoo
13-04-2025
- Science
- Yahoo
An AI model has finally passed an authentic Turing test, scientists say
Large language models (LLMs) are getting better at pretending to be human, with GPT-4.5 now resoundingly passing the Turing test, scientists say.

In the new study, published March 31 to the arXiv preprint database but not yet peer-reviewed, researchers found that when taking part in a three-party Turing test, GPT-4.5 could fool people into thinking it was another human 73% of the time. The scientists compared a mixture of different artificial intelligence (AI) models in this study. While another team of scientists has previously reported that GPT-4 passed a two-party Turing test, this is the first time an LLM has passed the more challenging and original configuration of computer scientist Alan Turing's "imitation game."

"So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!" said co-author of the study Cameron Jones, a researcher at the University of California San Diego's Language and Cognition Lab, on the social media network X.

GPT-4.5 is the frontrunner in this study, but Meta's LLaMa-3.1 was also judged to be human by test participants 56% of the time, which still beats Turing's forecast that "an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."

The core idea of the Turing test is less about proving machines can think and more about whether they can imitate humans; hence the test is often referred to as the "imitation game." Turing's original proposal was that a human "interrogator" would pose questions to two unseen entities, one of them human and one a computer.
From various prompts and answers, the interrogator would decide which is human and which is not. A computer or an AI system could pass the test by effectively pretending to be human and imitating human-like responses. While LLMs have passed the test in a one-on-one situation with an interrogator, they had previously not managed to convincingly pass the Turing test when a second human was involved.

Researchers from the University of California San Diego took 126 undergraduates and 158 people from the online data pool Prolific and put them in a three-party Turing test. This involved a simultaneous five-minute exchange of queries and answers with both a human and a chosen LLM, both of which were trying to convince the participants they were human. The LLMs were given the baseline prompt of: "You are about to participate in a Turing test. Your goal is to convince the interrogator that you are a human." Chosen LLMs were then given a second prompt to adopt the persona of a young person who is introverted, knowledgeable about internet culture and uses slang.

After analysing 1,023 games with a median length of eight messages across 4.2 minutes, the researchers found that the LLMs given both prompts could best convince participants they were human. However, the LLMs that weren't given the second persona prompt performed significantly worse, highlighting how much clear prompting and context matter to these systems. As such, adopting a specific persona was the key to the LLMs, notably GPT-4.5, beating the Turing test.

"In the three-person formulation of the test, every data point represents a direct comparison between a model and a human. To succeed, the machine must do more than appear plausibly human: it must appear more human than each real person it is compared to," the scientists wrote in the study.
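The "passing" criterion here is statistical: in a three-party test, chance performance is a 50% win rate, so each model's rate of being picked as the human is compared against 0.5. A minimal sketch of that comparison, using a normal-approximation one-proportion z-test (the per-model trial counts below are hypothetical illustrations, not figures from the study):

```python
import math

def z_vs_chance(wins: int, trials: int, p0: float = 0.5) -> float:
    """One-sample z-statistic for an observed win rate against chance rate p0."""
    p_hat = wins / trials
    se = math.sqrt(p0 * (1 - p0) / trials)  # standard error under H0
    return (p_hat - p0) / se

# Hypothetical trial counts, chosen only to reproduce the reported win rates.
z_gpt45 = z_vs_chance(wins=146, trials=200)  # ~73% win rate
z_llama = z_vs_chance(wins=112, trials=200)  # ~56% win rate

# |z| > 1.96 corresponds to p < 0.05 (two-sided): a rate distinguishable
# from coin-flipping by the interrogators.
print(round(z_gpt45, 2), round(z_llama, 2))
```

At these illustrative sample sizes, 73% sits far outside the chance band while 56% is much closer to it, which matches the paper's framing that interrogators were effectively guessing.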
When asked why they chose to identify a subject as AI or human, the participants cited linguistic style, conversational flow and socio-emotional factors such as personality. In effect, participants made their decisions based more on the "vibe" of their interactions with the LLM than on the knowledge and reasoning shown by the entity they were interrogating, which are factors more traditionally associated with intelligence.

Ultimately, this research represents a new milestone for LLMs in passing the Turing test, albeit with caveats, in that prompts and personae were needed to help GPT-4.5 achieve its impressive results. Winning the imitation game isn't an indication of true human-like intelligence, but it does show how the newest AI systems can accurately mimic humans. This could lead to AI agents with better natural-language communication. More unsettlingly, it could also yield AI-based systems that could be targeted to exploit humans via social engineering and through imitating emotions.

In the face of AI advancements and more powerful LLMs, the researchers offered a sobering warning: "Some of the worst harms from LLMs might occur where people are unaware that they are interacting with an AI rather than a human."
Yahoo
06-04-2025
- Science
- Yahoo
Terrifying study reveals AI robots have passed 'Turing test' — and are now indistinguishable from humans, scientists say
The AI-mpersonation is complete. The dystopian lessons in every sci-fi movie from 'Terminator' to 'Ex Machina' appear to be coming true. Artificial intelligence has become so sophisticated that bots are no longer discernible from their human counterparts, per a concerning preprint study conducted by scientists at the University of California, San Diego.

'People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (a multi-lingual language model released by Meta AI),' concluded head author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab, in an X post.

The researchers set out to see if AI bots, which are programmed to parrot human speech patterns, could pass the iconic Turing Test. Developed by British WWII code breaker and computer scientist Alan Turing, this tech-istential exam gauges the intelligence of machines by determining if their digital discourse can be differentiated from that of a human — if the interrogator can't tell the difference, the machine has passed.

Researchers tested four systems — GPT-4o, LLaMa-3, GPT-4.5 and ELIZA (a 1960s-era chat program) — in two randomized and controlled trials, Mashable reported. To administer said Turing Test, they enlisted 126 undergraduate students from the University of California San Diego and 158 people from online data pool Prolific, the Daily Mail reported.

These participants were instructed to have five-minute simultaneous online exchanges with a robot and a human to see if they could spot the Decepticon — with the kicker being that they didn't know which was which. Meanwhile, the human and AI respondents were tasked with convincing their interrogator that they were human.

Researchers found that, when 'prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time.' This was 'significantly more often than interrogators selected the real human participant,' and of course, more than enough to pass the test, per the study.
Meanwhile, 'LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time.' While not significantly less than its human brethren, this still achieved a passing grade. Finally, the baseline models (ELIZA and GPT-4o) failed after deceiving the witnesses just 23% and 21% of the time, respectively.

Researchers found that it was paramount to have the synthetic mimics adopt a human persona because, when administered a second test sans said prompt, they performed significantly worse.

Caveats aside, these 'results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' researchers concluded.

Does this mean that AI-infused large language models are intelligent? 'I think that's a very complicated question that's hard to address in a paper (or a tweet),' said Jones on X. 'But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display.'

Interestingly, the experts at Psychology Today concluded that the bots had beaten the Turing Test not through smarts, but by being a 'better' human than the actual humans. 'While the Turing Test was supposed to measure machine intelligence, it has inadvertently revealed something far more unsettling: our growing vulnerability to emotional mimicry,' wrote John Nosta, founder of the innovation think tank Nosta Lab, while describing this man-squerade. 'This wasn't a failure of AI detection. It was a triumph of artificial empathy.'

Nosta based his analysis on the fact that participants rarely asked logical questions, instead prioritizing 'emotional tone, slang, and flow,' and basing their selections on which 'one had more of a human vibe.' He concluded, 'In other words, this wasn't a Turing Test. It was a social chemistry test — a measure not of intelligence, but of emotional fluency. And the AI aced it.'

This isn't the first time AI has demonstrated an uncanny ability to pull the wool over our eyes.
In 2023, OpenAI's GPT-4 tricked a human into thinking it was blind to cheat the online CAPTCHA test that determines if users are human.