
Facebook parent Meta tech chief has a 'US military' reminder for Silicon Valley, says: Return to ...
Andrew Bosworth, CTO, Meta
Meta's chief technology officer Andrew Bosworth has issued a stark reminder to Silicon Valley about its foundational relationship with the US military, suggesting that a renewed partnership marks a "return to grace" for the tech industry. Speaking at the Bloomberg Tech Summit, Bosworth said the tech industry was originally built on military collaboration and that a 'silent majority' in Silicon Valley has always supported defense projects. He emphasised that the region was essentially built on a "three-way investment between the military, academics, and private industry."
Meta's defense tech push
Bosworth's remarks follow Meta's recent partnership with Anduril Industries, a defense tech firm, to develop AI-powered military gear, including augmented reality helmets. He stated that Silicon Valley's roots in military innovation had been severed over time, but that the industry is now reopening to defense collaborations.
"The effort has been funded through private capital, without taxpayer support, and is designed to save the US military billions of dollars by utilizing high-performance components and technology originally built for commercial use," Anduril said in its May statement.
Historically, many tech companies avoided military contracts, with employees protesting defense-related projects. However, Bosworth believes the tides have turned, and that patriotism and technological advancement can coexist.
Bosworth clarified that Meta's partnership with Anduril does not signal a shift toward defense contracting.
"They have a system within a program, and we're simply supplying them with components. Everything we're developing is designed for consumers, with a clear focus on consumer audiences," he stated.
He further explained, "A lot of this technology has potential for multi-use applications, and that's where I see an opportunity to build a partnership."

Related Articles


Time of India · 9 hours ago
UK regulator leads crackdown on 'finfluencers'
LONDON: Market regulators from six countries are cracking down on the illegal promotion of financial products by influencers on social media, UK officials said Friday. Britain's Financial Conduct Authority (FCA) said the action, which began on Monday, has resulted in three arrests in the UK and the authorisation of criminal proceedings against three individuals.

The crackdown is being conducted jointly with regulators from Italy, Canada, Hong Kong, Australia, and the United Arab Emirates. Some 50 "warning letters" have been issued, which will result in more than 650 requests to remove content from social media platforms and more than 50 websites "operated by unauthorised finfluencers", the FCA said. It has also sent seven "cease and desist" letters and invited four so-called finfluencers for interviews.

So-called finfluencers, or financial influencers, use their social media audiences to promote investment products, share advice, or offer their opinions on investments. Many act legitimately, but some "tout products or services illegally and without authorisation through online videos and posts, where they use the pretence of a lavish lifestyle, often falsely, to promote success", according to the FCA. These products can be risky, such as cryptocurrencies.

"Our message to finfluencers is loud and clear," said Steve Smart, joint executive director of enforcement and market oversight at the FCA. "They must act responsibly and only promote financial products where they are authorised to do so -- or face the consequences."

The announcement came as a group of British MPs said it had sent a letter to Meta, owner of Facebook and Instagram, asking for information on its approach to financial influencers. The letter from parliament's Treasury Committee follows evidence from FCA officials that Meta took up to six weeks to remove harmful content, longer than other platforms.

"There was an isolated incident in late 2024 which resulted in a delay in actioning a small number of reports from the FCA," Meta said in a statement Friday. "This was rectified and all other relevant reports made by the FCA have been promptly processed."


Time of India · 11 hours ago
Hey Siri, Am I Okay?: AI tools are being trained to detect suicidal signals
Suicidal risk identification on SNS: The prompts fed to AI do not remain confined to tasks related to needing help in everyday activities, such as asking Alexa to play the family's favourite song, asking Siri on a random Tuesday to set a reminder, or asking Google Assistant to search for a song based on humming. But what if users, in an especially low moment, were to ask, 'Am I okay?' Or other such prompts that insinuate the user's intent to harm themselves, whether through self-harm or suicide? Suicide attempts remain alarmingly prevalent, requiring more effective strategies to identify and support individuals at high risk.

Current methods of suicide risk assessment largely rely on direct questioning, which can be limited by subjectivity and inconsistent interpretation. Simply put, their accuracy and predictive value remain limited regardless of the large variety of scales that can be used to assess risk; predictability has not improved over the past 50 years. Artificial intelligence and machine learning offer new ways to improve risk detection, but their accuracy depends heavily on access to large datasets that can help identify patient profiles and key risk factors. As outlined in a clinical review, AI tools can help identify patterns in the dataset, generate risk algorithms, and determine the effect of risk and protective factors on suicide. The use of AI reassures healthcare professionals with an improved accuracy rate, especially when combined with their skills and expertise, even though diagnostic accuracy can never reach 100%.

According to Burke et al., there are three main goals of machine learning studies in suicide: the first is improving the accuracy of risk prediction, the second is identifying important predictors and the interactions between them, and the last is modelling subgroups of patients. At an individual level, AI could allow for better identification of individuals in crisis and appropriate intervention, while at a population level, an algorithm could find groups at risk and individuals at risk of suicide attempts within those groups.

Social media platforms have become both a cause of and a potential solution to the mental health crisis. While they are often criticized for contributing to it, these platforms also provide a rich source of real-time data, enabling AI to identify individuals showing signs of suicidal intent. This is achieved by analyzing users' posts, comments, and behavioral patterns, allowing AI tools to detect linguistic cues, such as expressions of hopelessness or other emotional signals that may indicate psychological distress. For instance, Meta employs AI algorithms to scan user content and identify signs of distress, allowing the company to reach out and offer support or even connect users with crisis helplines. Studies such as those by the Black Dog Institute also demonstrate how AI's natural language processing can flag at-risk individuals earlier than traditional methods, enabling timely intervention.

There are also companies such as Samurai Labs and Sentinet that have developed AI-driven systems that monitor social media content and flag posts that insinuate suicidal ideation. For example, Samurai Labs' 'One Life' project scans online conversations to detect signs that indicate high suicide risk. Upon detecting these indicators, the platform directs the user to support resources or emergency assistance. In the same manner, Sentinet's algorithms analyze thousands of posts on a daily basis, triggering alerts when users express some form of emotional distress, allowing for timely intervention.

While AI isn't a replacement for human empathy or professional mental health care, it offers a promising advancement in suicide prevention. By identifying warning signs faster and more precisely than human assessment and enabling early interventions, AI tools can serve as valuable allies in the fight against suicide.
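The core technique the article describes, scoring short posts for linguistic signals of distress, can be illustrated with a minimal, purely hypothetical sketch. The code below does not reflect any vendor's actual system (the pipelines used by Meta, Samurai Labs, or Sentinet are not public); it simply trains a tiny TF-IDF plus logistic-regression classifier on invented example phrases to show how posts might be scored and flagged for human review.

```python
# Illustrative sketch only: a toy text classifier for distress signals.
# Not any vendor's real system; the labelled examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts (1 = concerning, 0 = neutral).
posts = [
    "I feel completely hopeless and can't see a way out",
    "nobody would even notice if I was gone",
    "everything hurts and I just want it to stop",
    "had a great run this morning, feeling good",
    "can't wait for the weekend trip with friends",
    "just finished a big project at work, so relieved",
]
labels = [1, 1, 1, 0, 0, 0]

# A deliberately simple baseline: word/bigram TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def distress_score(text: str) -> float:
    """Estimated probability that a post signals distress."""
    return model.predict_proba([text])[0][1]

new_post = "I can't see any way out of this"
score = distress_score(new_post)
# Posts above a chosen threshold would be routed to human review, not acted on automatically.
print(f"score={score:.2f}", "-> flag for review" if score >= 0.5 else "-> no flag")
```

A production system would of course rely on far richer models, much larger and carefully governed datasets, and human reviewers in the loop, as the article itself stresses.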


India Today · 12 hours ago
Anthropic working on building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies. Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments.

'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments and that the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov runs within Microsoft's Azure cloud, giving agencies full control over how it is deployed and managed. The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces.

'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.
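For readers unfamiliar with how Claude models are consumed programmatically, the sketch below shows the general shape of a call through Anthropic's public Messages API using the Python SDK. It is illustrative only: Claude Gov models are restricted to classified environments and their identifiers are not public, so the model name and prompt here are placeholders and do not depict any national-security deployment.

```python
# Minimal sketch of Anthropic's public Messages API (Python SDK).
# The model name is a public placeholder; Claude Gov identifiers are not published.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder public model, not Claude Gov
    max_tokens=512,
    system="You are an assistant that summarises publicly available policy documents.",
    messages=[
        {"role": "user", "content": "Summarise the key points of this public report: ..."}
    ],
)

# The response content is a list of blocks; text blocks carry the generated text.
print(response.content[0].text)
```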