
Latest news with #Llama

Telkom Group Plans to Integrate Meta's Llama Technology

Yahoo

18-03-2025


Telkom Group Plans to Integrate Meta's Llama Technology

JAKARTA, Indonesia, March 18, 2025 /PRNewswire/ -- Telkom Group aims to bring more Indonesian businesses onto WhatsApp to drive economic growth and development in Indonesia. The group plans to integrate Llama, a state-of-the-art open-source AI model developed by Meta, into the customer chatbots of its enterprise clients. This integration will enable businesses to create tailored, engaging experiences for users who interact with them on WhatsApp.

Veronika, Telkomsel's chatbot, is already live on WhatsApp for sales and customer support. In the future, the chatbot will be powered by Llama, enhancing its capabilities and providing a more personalised experience for users.

Budi Satria Dharma Purba, CEO of Telin, stated: "Integrating Meta's Llama technology into our platform is a key step in advancing technology solutions. Telin is committed to improving telecommunications services locally and globally. We will support this initiative through the Telin WhatsApp Business platform on NeuAPIX, Telin's cloud-based Communications Platform as a Service (CPaaS)."

Arief Pradetya, Vice President of Digital Advertising, Wholesale, and Interconnect at Telkomsel, stated: "Meta's Llama will strengthen Telkomsel's position as a leader in regional digital telecommunications. It reflects our commitment to superior customer service and aligns with our strategy to offer innovative solutions that empower communities and businesses, contributing to national progress through a secure and reliable digital ecosystem."

This initiative underscores Telkom Group's commitment to driving innovation, fostering digital inclusion, and empowering businesses and individuals to succeed in a rapidly evolving digital landscape.

SOURCE: PT. Telekomunikasi Indonesia International (Telin)

DeepSeek and the Promethean dilemma: The ethics of open-source AI

Gulf Business

13-03-2025


DeepSeek and the Promethean dilemma: The ethics of open-source AI

A very long time ago, atop the heights of Mount Olympus, a drama unfolded that would shape the human story. Zeus resented how the Titan Prometheus had become attached to humans, and so decreed that no human could use fire on earth, a reminder of the gods' ultimate power. Yet Prometheus, defiant, smuggled a spark of divine fire back to humanity. That spark ignited the rise of civilisations and empires as humans harnessed its potential. Some became so confident in their mastery that they questioned the gods themselves, even believing they were gods. Zeus was furious. Not only had Prometheus stolen from the heavens, but he had upended the natural order of human subservience. For Prometheus, it didn't end well: Zeus exacted his vengeance, which led to the opening of Pandora's box. The lesson? Empowering humanity with fire led to extraordinary progress, but humans are nothing if not unpredictable. There are accidents, and there are arsonists.

Open-source artificial intelligence feels much the same: a Promethean spark with immense potential and significant risks. Open-source AI refers to systems whose components (code, models, and sometimes datasets) are made publicly accessible. This openness allows individuals and organisations to use, study, and modify these AI resources freely. It democratises access to technology, accelerates innovation, and empowers smaller players. Projects like Llama, Mistral, and, more recently, DeepSeek exemplify this.

But just as fire forged weapons alongside warmth, open-source AI carries ethical dilemmas. Its accessibility, the foundation of its power, can heighten risks if unchecked. With proprietary models, individuals and companies can be held accountable (albeit slightly less so since the revocation of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). With open source, we rely on a willing community of dispersed individuals to do the right thing.
While openness fosters rapid innovation and transparency, it needs tools and assurances to prevent misuse. DeepSeek's low-cost, open-source AI disrupts the very foundation of the global AI race. Developed for a fraction of the cost of its rivals, its efficiency and openness challenge the assumption that massive resources are a prerequisite for cutting-edge technology. Yet with openness comes a lack of control. Once released, models are no longer governed by their creators, leaving accountability elusive when harm occurs. This underscores the urgent need for the global AI community to develop tools, including suites of tests, monitoring systems, and ethical protocols, to ensure that open-source models behave responsibly and resist malicious manipulation.

Inspiration from DeepSeek's example

The UK and Europe, with constrained AI budgets relative to the US and China, can take inspiration from DeepSeek's example. By focusing on efficiency over scale, these nations could embrace open-source frameworks to pool talent and resources, fostering collective advancement rather than isolated efforts. This approach aligns with the UK's stated commitment to fairness, accountability, and transparency in AI development. Furthermore, the UK's leadership in ethical AI could drive the creation of governance standards that enhance the safety and reliability of open-source models without stifling their potential.

History offers parallels. Open-source software, from Linux to decentralised cryptocurrencies, demonstrates how collective innovation can accelerate progress. But freedom without governance often invites chaos. Bitcoin democratised financial transactions, but it also fuelled ransomware attacks and unregulated markets. In AI, the stakes are higher still.

A safety net is key

Artificial intelligence's borderless nature accelerates innovation but complicates governance. A safety net is needed, one that ensures innovation does not outpace responsibility.
This is where the AI community must come together to create tools that govern open-source models effectively. Navigating these challenges demands balance. Developers must embed safeguards into their models, such as fine-grained permissions, ethical guidelines, and robust monitoring mechanisms. Initiatives like the Global Partnership on AI (GPAI) offer a collaborative platform to monitor developments and respond to risks.

Prometheus gave humanity fire, but he did so without a plan for its use. Open-source AI could rapidly bring transformational progress. But we must think of the consequences and prepare accordingly, something Prometheus, for all his brilliance, did not.
