Latest news with #Llama2


WIRED
22-05-2025
DOGE Used Meta AI Model to Review Emails From Federal Workers
May 22, 2025 12:57 PM

Elon Musk's so-called Department of Government Efficiency (DOGE) used artificial intelligence from Meta's Llama model to comb through and analyze emails from federal workers. Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January.

The email offered deferred resignation to anyone opposed to the changes the Trump administration was making to the federal workforce, including an enforced return-to-office policy, downsizing, and a requirement to be 'loyal.' To leave their position, recipients merely needed to reply with the word 'resign.' The email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.

Records show Llama was deployed to sort through the email responses from federal workers to determine how many had accepted the offer. The model appears to have run locally, according to materials viewed by WIRED, meaning it is unlikely to have sent data over the internet. Meta and OPM did not respond to WIRED's requests for comment.

Meta CEO Mark Zuckerberg appeared alongside other Silicon Valley tech leaders like Musk and Amazon founder Jeff Bezos at Trump's inauguration in January, but little has been publicly known about his company's technology being used in government. Because of Llama's open-source nature, the government can easily use the tool to support Musk's goals without the company's explicit consent.

Soon after Trump took office in January, DOGE operatives burrowed into OPM, an independent agency that essentially serves as human resources for the entire federal government.
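The article describes a locally run Llama 2 model classifying email replies. As a purely hypothetical sketch of what that kind of triage looks like, the snippet below builds a classification prompt and parses a one-word model answer. The `generate()` function is a stub standing in for a local inference call (for example, via llama.cpp); the prompt wording, category labels, and all function names are invented for illustration and do not reflect DOGE's actual code.

```python
# Hypothetical sketch: prompting a locally hosted LLM to triage email
# replies to a resignation offer. Nothing here is DOGE's actual code.

PROMPT_TEMPLATE = (
    "Classify the following email reply to a resignation offer.\n"
    "Answer with exactly one word: ACCEPTED or DECLINED.\n\n"
    "Reply: {reply}\n"
    "Answer:"
)

def generate(prompt: str) -> str:
    """Stub for a local model call. A real deployment would pass the
    prompt to a locally running Llama 2 model instead of this keyword
    heuristic, which merely imitates a plausible model answer."""
    reply_text = prompt.rsplit("Reply:", 1)[1]
    return "ACCEPTED" if "resign" in reply_text.lower() else "DECLINED"

def classify_reply(reply: str) -> str:
    """Build the prompt, call the (stubbed) model, and validate output."""
    raw = generate(PROMPT_TEMPLATE.format(reply=reply)).strip().upper()
    # Treat anything outside the expected schema as DECLINED rather than
    # letting a malformed model answer slip through.
    return raw if raw in {"ACCEPTED", "DECLINED"} else "DECLINED"
```

Because the model runs locally, a pipeline like this never sends the reply text over the network, which matches WIRED's observation that the data likely stayed on premises.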
The new administration's first big goal for the agency was to create a government-wide email service, according to current and former OPM employees. Riccardo Biasini, a former Tesla engineer, was involved in building the infrastructure for the service that sent out the original 'Fork in the Road' email, according to material viewed by WIRED and reviewed by two government tech workers.

In late February, weeks after the Fork email, OPM sent another request to all government workers, asking them to submit five bullet points outlining what they had accomplished each week. These emails threw a number of agencies into chaos, with workers unsure how to write responses that were mindful of security clearances and sensitive information. (Adding to the confusion, it has been reported that some workers who turned on read receipts found that their responses weren't actually being opened.) In February, NBC News reported that these emails were expected to go into an AI system for analysis.

While the materials seen by WIRED do not explicitly show DOGE affiliates analyzing these weekly 'five points' emails with Meta's Llama models the way they did with the Fork emails, it wouldn't be difficult for them to do so, two federal workers tell WIRED. 'We don't know for sure,' says one federal worker about whether DOGE used Meta's Llama to review the 'five points' emails. 'Though if they were smart they'd reuse their code.'

DOGE did not appear to use Musk's own AI model, Grok, when it set out to build the government-wide email system in the first few weeks of the Trump administration. At the time, Grok was a proprietary model belonging to xAI, and access to its API was limited. But earlier this week, Microsoft announced that it would begin hosting xAI's Grok 3 models as options in its Azure AI Foundry, making the xAI models more accessible in Microsoft environments like the one used at OPM.
Should DOGE want it, this would make Grok available as an AI option going forward. In February, Palantir struck a deal to include Grok as an AI option in the company's software, which is frequently used in government.

Over the last few months, DOGE has rolled out and used a variety of AI-based tools at government agencies. In March, WIRED reported that the US Army was using a tool called 'CamoGPT' to remove DEI-related language from training materials. The General Services Administration rolled out 'GSAi' earlier this year, a chatbot aimed at boosting overall agency productivity. OPM has also accessed software called AutoRIF that could assist in the mass firing of federal workers.


Time of India
09-05-2025
Meta appoints former Google DeepMind director Robert Fergus as head of AI Research lab
Facebook parent Meta has informed its staff that it has appointed former Google DeepMind director Robert Fergus to lead its artificial intelligence research lab. According to a report by Bloomberg, Fergus will head the Fundamental AI Research (FAIR) lab at Meta. Fergus co-founded the Facebook AI Research lab (FAIR) along with Yann LeCun in 2014.

The unit handles AI research at the company. The FAIR lab builds models for advanced robotics, generates audio, and pushes the boundaries of AI capabilities more broadly. According to Fergus's LinkedIn profile, he spent five years at Google DeepMind as a research director. Before joining Google, Fergus worked at Meta as a research scientist.

Per the report, Chief Product Officer Chris Cox informed Meta employees that Fergus has joined FAIR and succeeds Joelle Pineau, who announced her departure plans last month. 'We're working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research,' Fergus said in a post on LinkedIn.

According to a report from Fortune, FAIR led research on the company's early AI models, including Llama 1 and Llama 2. However, the report states that many researchers have departed FAIR for startups, other companies, and even Meta's newer GenAI group, which spearheaded the development of Llama 4.


Forbes
21-03-2025
How The AI Alliance Collaborated Its Way To Rapid First-Year Growth
The power of partnership has been demonstrated by the AI Alliance's milestones and global expansion in its first year of operation.

The AI Alliance was founded in December 2023 by IBM and Meta, along with 50 other initial members. In just over a year, its membership has grown to more than 140 organizations worldwide, allowing companies, nonprofits and academic institutions of all sizes to collaborate on building a robust and open AI ecosystem. In the bigger picture, the AI Alliance has become an important force in the democratization of AI, which is why it's worth reviewing what it has accomplished in its first year-plus of existence.

Before we get into those specifics, it's important to understand why the organization was founded in the first place. For most of AI's history, open source development was a fragmented effort that led to underperforming models. Prior to 2023, there were few nonprofit institutions capable of training AI models with even GPT-2-level capabilities. At that time, large tech companies dominated proprietary AI, and open source AI was mainly confined to niche applications.

Everything changed in 2023, when multiple new base models with permissive licenses were released. By mid-2023, Meta had released its open source Llama 2 model in partnership with Microsoft; within six months, it was used to create more than 10,000 derivative models. A major new phase of open source AI development was underway.

In this context, the AI Alliance established an impressive list of goals right from its inception. These included fostering open collaboration, establishing governance and guardrails for AI, and developing benchmarking tools and clear policy positions. Additionally, the alliance prioritized extensive educational initiatives and nurturing robust hardware ecosystems.
The AI Alliance's strength is further demonstrated by the quality of its steering committee, which has a roster of well-known commercial organizations and universities. (Note: Moor Insights & Strategy has client relationships with about a dozen of the AI Alliance's 140-plus members.)

Criteria for AI Alliance membership

To join the AI Alliance, an organization must meet four important standards. First, the potential member must be aligned with the mission to cultivate safety, open science and innovation. Second, members must be committed to working on significant projects aligned with the Alliance's mission. Third, the potential member must be willing to contribute to the diversity of perspectives and cultures within the Alliance's global membership of 140-plus organizations, which is expected to grow even larger and more diverse in the future. The final expectation is reputation: the AI Alliance seeks members with a recognized reputation as an educator, builder or advocate within the AI open source community.

Alliance members generally fall into one of those three categories. Builders create models, datasets, tools and applications that use AI. Enablers evangelize the adoption of open AI technologies through tutorials, use cases and general community support. Advocates emphasize the benefits of the AI Alliance ecosystem and foster public trust and safety among organizational leaders, societal stakeholders and regulatory bodies.

The AI Alliance defines its long-range priorities in terms of six focus areas, though it doesn't restrict its activities to just those areas. The alliance takes a holistic approach to the entire AI ecosystem, encouraging community members and developers to participate in one or more of these areas and to switch if their interests or priorities change.
Among the six key focus areas, Trust and Safety is an important and large field within the AI Alliance, with many specialists working on tools to detect and reduce hate speech, bias and other harmful material. The Trust and Safety Evaluation Initiative is a major initiative for 2025, providing a unified view of the entire spectrum of evaluation, not just for safety but also for performance and other areas where evaluating the effectiveness of AI models and applications is required. A sub-project is exploring specific safety priorities by domain for areas such as health, law and finance.

In mid-2025, the AI Alliance plans to publish a Hugging Face leaderboard that will allow developers to search for evaluations that best fit their needs, compare how open models perform against those evaluations, and download and deploy those evaluations to examine their own private models and AI applications. That initiative will also provide guidance on important safety and compliance aspects of use cases.

Not all invocations of AI models will use hosted commercial services; some situations require air-gapped solutions. AI-enabled smart edge devices make it likely that more companies will begin deploying small, powerful models on premises, sometimes without an internet connection. To support these use cases and facilitate large-scale model serving with flexible hardware configurations, the AI Alliance is developing hardware-agnostic software stacks.

Two examples illustrate how open collaboration between alliance members provides significant benefits for everyone. The first, SemiKong, was a collaborative effort among three alliance members, who created an open-source large language model for the semiconductor manufacturing domain. Manufacturers can use this model to accelerate the development of new devices and processes.
SemiKong contains specialized knowledge about the physics and chemistry of semiconductor devices, and in only six months it captured the attention of the global semiconductor industry. SemiKong was created by fine-tuning a Llama 3 base model on datasets curated by Tokyo Electron. The tuning produced an industry-specific generative AI model that knows more about semiconductor etching processes than the generic base model. A technical report on SemiKong is available here.

The second example is DANA, the Domain-Aware Neurosymbolic Agents project, a joint development of Aitomatic Inc. (based in Silicon Valley) and Fenrir Inc. (based in Japan). DANA is an early example of the now-popular agent architecture, in which models are integrated with other tools that provide complementary capabilities. While models alone can produce impressive results, many studies have shown that LLMs often generate incorrect answers. A 2023 study cited in the SemiKong paper measured typical LLM error rates of 50 percent, while DANA's complementary use of reasoning and planning tools increased accuracy to 90 percent for the target applications.

DANA uses neurosymbolic agents that combine the pattern recognition of neural networks with symbolic reasoning, which supports rigorous logic and rules-based problem solving. Logical reasoning combined with planning tools (such as for designing assembly-line processes) produces the accurate, reliable results that are essential for industrial quality control systems and automated planning and scheduling.

DANA can be used across multiple domains. For financial forecasting and decision-making, DANA can understand market trends and make predictions based on complex theories, using both structured and unstructured data. The same ability can be applied to retrieving and evaluating medical literature and research to ensure that diagnoses and treatments meet established medical protocols and practices.
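The neurosymbolic pattern described above, where a neural component proposes an answer and a symbolic rule layer vets it before it is accepted, can be sketched in a few lines. This is an invented illustration of the general technique, not DANA's actual architecture or API; the parameter names, rules, and numbers are all hypothetical.

```python
# Illustrative neurosymbolic sketch: a stubbed "neural" proposer plus a
# symbolic rule layer that rejects implausible or low-confidence answers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    etch_depth_nm: float   # candidate process parameter (hypothetical)
    confidence: float      # proposer's self-reported confidence

def neural_propose(spec: str) -> Proposal:
    """Stand-in for an LLM or neural model suggesting a parameter.
    A real system would run inference here; this stub returns a fixed
    value so the control flow is easy to follow."""
    return Proposal(etch_depth_nm=120.0, confidence=0.82)

# Symbolic layer: hard rules every proposal must satisfy. Unlike the
# neural side, these are exact and auditable.
RULES = [
    lambda p: 0.0 < p.etch_depth_nm <= 500.0,  # physically plausible range
    lambda p: p.confidence >= 0.5,             # reject low-confidence guesses
]

def solve(spec: str) -> Optional[Proposal]:
    """Accept the neural proposal only if every symbolic rule holds."""
    proposal = neural_propose(spec)
    return proposal if all(rule(proposal) for rule in RULES) else None
```

The design point is the division of labor: the neural side handles fuzzy pattern recognition, while the symbolic side enforces the rigid constraints (physics limits, regulatory rules) that quality-control and planning systems cannot compromise on, which is how an agent can reduce the raw error rate of an unassisted LLM.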
In short, DANA can enhance patient outcomes and reduce errors in critical patient applications.

The AI Alliance began 2025 in a strong position, with members in 23 countries and a number of working groups focused on major AI issues. It has over 1,200 working-group collaborators working on 90-plus active projects. Internationally, the AI Alliance has participated in events held in 10 countries involving more than 20,000 people, and it has published five how-to guides on important AI topics to help researchers and developers build and use AI. It has also published examples for using AI with models such as IBM's Granite family and Meta's Llama models, and its growing curation of 'recipes' utilizes the most popular open libraries and models for common application patterns, including RAG, knowledge graphs, neurosymbolic systems and emerging agent planning and reasoning architectures.

In 2025, the AI Alliance is committed to scaling up its reach and impact tenfold. Two of its major new initiatives, discussed previously, are the Open Trusted Data Initiative and the Trust and Safety Evaluation Initiative. The AI Alliance also plans to create an industry-standard community lab for developing and testing AI application technologies. Its domain-specific model initiatives will also evolve; for example, the new Climate and Sustainability Working Group plans to develop multimodal foundation models and open source software tooling to address major challenges in climate change and its mitigation.

By 2030, AI is estimated to add $20 trillion to the global economy, and 70 percent of industrial AI applications are forecast to run on open source AI by then. The shortage of AI professionals is also expected to become even more pronounced than it is today. AI Alliance members may be able to mitigate that problem by collaborating with other members to gain access to diverse expertise and shared resources.
The AI Alliance is following a growth trajectory similar to those of other successful open source organizations, such as the Linux Foundation, the Apache Software Foundation and the Open Source Initiative. I believe the AI Alliance will continue to attract developers, researchers, and business and government leaders as contributors and collaborators. The AI Alliance's leadership has established the scaling of global collaboration as its overarching mission for 2025. Everything considered, the AI Alliance has the foundation to grow into a dominant global force that shapes, improves and innovates the future of artificial intelligence.