Latest news with #Llama3.2


Arabian Post
07-05-2025
Curl Project Tightens Controls on AI-Generated Bug Reports
The maintainers of the widely used command-line tool curl have implemented stricter measures to filter out low-quality bug reports generated by artificial intelligence, citing a surge in submissions that lack clarity, relevance, or actionable detail. Daniel Stenberg, the founder and lead developer of curl, expressed concern over the growing number of AI-generated reports that consume valuable time and resources. He noted that many of these reports are vague, inaccurate, or fail to provide the necessary information for effective debugging.

To address this issue, the curl project has updated its bug reporting guidelines, explicitly discouraging the use of AI tools for generating bug reports unless the output is thoroughly reviewed and edited by a knowledgeable human. The maintainers have also enhanced their triage processes to identify and deprioritize reports that appear to be AI-generated and lack substantive content.

This move reflects a broader trend in the open-source community, where developers are grappling with the implications of AI-generated content. While AI tools can assist in various aspects of software development, their use in bug reporting has raised concerns about quality and reliability. An empirical study published on April 26, 2025, titled 'Can We Enhance Bug Report Quality Using LLMs?: An Empirical Study of LLM-Based Bug Report Generation,' evaluated large language models such as Qwen 2.5, Mistral, Llama 3.2, and ChatGPT-4o on generating structured bug reports, finding that while some models performed well on certain metrics, overall quality varied and human oversight remained crucial.

The curl project's decision underscores the importance of maintaining high standards in bug reporting to ensure efficient software maintenance and development. By filtering out low-quality AI-generated reports, the maintainers aim to focus their efforts on actionable issues that contribute to the project's stability and performance.


CNBC
25-04-2025
Meta and Booz Allen partner on 'Space Llama' AI program with Nvidia and HPE
On Friday, Meta and Booz Allen Hamilton launched "Space Llama." Llama, Meta's open-source artificial intelligence model, will help astronauts in low Earth orbit conduct research at the International Space Station's National Laboratory. "We see a future where open source AI models like Llama will play a vital role in fueling space exploration and research – enabling astronauts to undertake the next level of experimentation to solve complex scientific problems and drive innovation in entirely new ways," Ahmad Al-Dahle, Meta VP and head of generative AI, said in a release.

The announcement builds on Booz Allen Hamilton's work on the ISS in August, when it deployed the first known large language model in space, per the company. Less than a year later, it is adding a tech stack built around Meta's "fine-tuned" Llama 3.2, powered by Hewlett Packard Enterprise's Spaceborne Computer-2 and Nvidia graphics processing units, or GPUs. The companies billed the new tech stack as "Space Llama" and are using it as an example of how large language models can work in environments that are "denied, disrupted, intermittent, and limited bandwidth," per a release.

The project aims to lower costs, reduce the computing power required, and deliver faster responses to maintenance issues or "onboard anomalies," without having to rely on Earth-bound internet. Space Llama will help astronauts replace paper documents and reduce reliance on instructions from the ground, according to the release.

WIRED
04-04-2025
What Is the Meta AI Button in WhatsApp, and How Do I Remove It?
Meta's in-app chatbot add-on is rolling out in more countries. Here's what you need to know, including how to get rid of it.

If you've noticed a new light blue circle appear in your WhatsApp chats recently and wondered what it was, that's because Meta has expanded its implementation of Meta AI into new markets, and now it's in yours. While it began rolling out in the US and Canada in 2023, more recently it has started arriving on devices across countries in Europe, including the UK, as well as Australia, New Zealand, South Africa and India. In fact, the artificial intelligence-based chatbot is rolling out across the entire Meta ecosystem, including Messenger and Instagram, and can provide a few basic features like answering questions, generating text or creating content. However, its appearance has also raised privacy concerns among users, and questions as to whether it can be turned off. Here's what you need to know.

What Does the Meta AI Button in WhatsApp Do?

The new button on WhatsApp appears above the icon to start a new chat on Android, and in the bottom right-hand corner on iOS. It allows users to interact directly with the Meta AI assistant. Users can also tag @Meta AI in their chats to ask it questions, and it will also appear in their personal contact lists. The chatbot is based on Meta's large, open-source language model Llama 3.2. As was the case with ChatGPT when it first arrived on WhatsApp, Meta AI has started with a smaller number of features based on what Meta believes users want most, helping you generate and improve text or search the web for answers to questions. More features are expected to roll out later.

But what happens with that data? Meta says its assistant can't access or read the content of your messages unless you ask it for help; even then, the content of your messages remains end-to-end encrypted and any details will not be linked with other Meta accounts, like Facebook or Instagram. However, there is an important distinction: while the content of private messages is encrypted, interactions with Meta AI are not. Meta warns that anything you send to Meta may be used to improve its AI models, so users shouldn't 'send messages … with information you don't want it to know.' It does promise to make chats with Meta AI 'visually distinct' so users can easily tell the difference.

How to Remove Meta AI on WhatsApp

So, what if you are not interested in this feature? At the moment there is no way to disable or delete Meta AI from WhatsApp. You can ignore it and choose not to use it, but the button will stay in the app. If you have used it but wish you hadn't, you can, however, reset Meta AI, either at an individual chat level or across the board. Deleting a chat with Meta AI is not enough to do this; you will need to use the reset commands below. Resetting the AI will delete Meta AI's copy of the messages and details from a chat (though your own copy will remain), and these commands work across WhatsApp, Messenger and Instagram.

For one chat, type /reset-ai in any individual chat to reset that chat.
For all chats, type /reset-all-ais in an individual chat with Meta AI to reset all chats on the app, including group chats.
How Do I Delete the Information Meta AI Has About Me?

If you have used Meta AI before, it learns from your interactions in order to give more accurate responses over time. That could be things like your favourite food and sport, the fact you have a dog, or that you're a vegetarian. You can view, correct and delete that information by digging into the chatbot's menus.

To view the memory that Meta AI has on you, start a chat with the chatbot, then tap Meta AI > Memory. From there you can tap View All to see everything it knows, and delete individual memories by tapping and holding, then tapping the trash can. Alternatively, tap the icon with three dots, then Delete All > Delete All. To correct any wrong information, you can simply tell Meta AI in a chat and it should use the correct details going forward.

This article was originally published by WIRED Italia.