
Latest news with #Llama3

Mark Zuckerberg's Meta Is A Defence Contractor Now, Partners With Anduril For AI-Powered Products

NDTV

4 days ago

  • Business

Facebook and Instagram parent Meta has teamed up with defence startup Anduril Industries to develop military products that use artificial intelligence (AI) and augmented reality (AR). The devices are expected to provide real-time battlefield intelligence to soldiers in the field, allowing them to make better decisions based on data, according to a CBS News report.

"Meta has spent the last decade building AI and AR to enable the computing platform of the future. We're proud to partner with Anduril to help bring these technologies to the American servicemembers that protect our interests at home and abroad," Meta CEO Mark Zuckerberg said in the release.

The collaboration would see Meta's AR and AI technology combined with Anduril's data analytics platform, Lattice, in products such as glasses, goggles and visors.

Palmer Luckey, 32, the brain behind Anduril, has previously worked with Meta. He joined the social media company in 2014, when it was still known as Facebook, following its acquisition of Oculus, the virtual reality headset business he founded.

Anduril said in a statement that it was working with Meta to "design, build, and field a range of integrated XR (extended reality) products that provide warfighters with enhanced perception and enable intuitive control of autonomous platforms on the battlefield".

Meta AI and military

This is not the first time the Zuckerberg-owned company has signalled a hard pivot towards defence and military-related programmes. In November last year, the company announced that US government agencies and contractors working on national security would get access to the latest Llama 3 model. The company said it was playing its part in ensuring the safety and security of the United States by working with the likes of Lockheed Martin, IBM, Amazon, Microsoft and Oracle, among others, to make Llama available to the government.

"As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America - and of its closest allies too," the company said.

With the collaboration, the US military intends to use the power of AI to streamline logistics and track terrorist financing, as well as strengthen cyber defence.

Robots to Build iPhones as Foxconn Accelerates AI-Powered Factory Automation

Hans India

22-05-2025

  • Business

In a bold move that signals the future of manufacturing, Foxconn, the world's largest electronics contract manufacturer and Apple's chief iPhone supplier, is gearing up to fully automate its assembly lines using a combination of generative AI and robotics. The announcement was made by Foxconn Chairman Young Liu during his keynote at Computex 2025 in Taipei.

Liu revealed that robots and AI will soon take over most of the manual labour at Foxconn's global factories, significantly reducing the company's reliance on low-wage human workers. 'We thought maybe we could replace every human... We quickly realised we could not,' Liu stated candidly, adding that the goal is not to eliminate humans entirely but to free them from repetitive and low-value tasks.

Foxconn has already seen tangible improvements by incorporating generative AI into production workflows, Liu shared. According to him, AI-backed software now performs around 80 percent of the tasks needed to set up new production lines, a development that is dramatically improving efficiency and cutting setup times. 'AI is already helping resolve production issues quickly and allowing human experts to focus on more complex, value-driven operations,' Liu said. The company's shift to AI-driven workflows marks a turning point in industrial manufacturing, combining what Liu calls 'bots and brains' for smarter factories.

At the core of Foxconn's automation push is its proprietary AI model, FoxBrain. The system integrates Meta's Llama 3 and 4 models with Foxconn's internal operational data to deliver what Liu describes as an 'agentic workflow' tailored to specific factory-floor applications. While the company plans to open-source FoxBrain, no timeline has been provided for its public release.

In parallel, Foxconn is also investing in robotic production with its Foxbot units, about 10,000 of which are manufactured annually to replace human labour. The company had previously reported the automation of 60,000 jobs at one of its plants and had set a target of 30% automation across Chinese factories by 2020.

Foxconn's innovation strategy also includes advanced simulation. Liu shared that the company is using Nvidia's Omniverse to create digital twins of factories before construction begins, allowing AI to simulate and optimise layouts and operational workflows beforehand.

According to Digitimes, Foxconn's three-phase automation plan is already underway in China. The first phase automates dangerous and repetitive jobs. The second focuses on streamlining production and reducing excess machinery. The final phase envisions fully automated factories, where humans are limited to logistics, quality testing, and inspection roles. Foxconn's major manufacturing facility in Zhengzhou, often dubbed 'iPhone City,' is currently in the second phase of this automation roadmap and is expected to become entirely automated in the coming years.

This shift not only redefines Foxconn's production model but also paints a vivid picture of what the future of manufacturing may look like in an AI-dominated era.
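FoxBrain's internals have not been published, but as a rough, generic illustration of what an 'agentic workflow' means in practice, the sketch below shows an LLM-driven loop that picks a tool, observes the result and decides on the next step. The tool names, the JSON action format and the llm callable are illustrative assumptions, not anything Foxconn has described.

```python
# Generic illustration of an "agentic workflow": a language model repeatedly
# chooses a tool, observes the result, and decides the next step.
# This is NOT FoxBrain; the tool names, JSON action format and the `llm`
# callable are illustrative assumptions.
import json

def lookup_defect_rate(line_id: str) -> str:
    """Hypothetical tool: return a defect-rate reading for a production line."""
    return json.dumps({"line": line_id, "defect_rate_pct": 1.8})

def adjust_conveyor_speed(line_id: str, delta_pct: int) -> str:
    """Hypothetical tool: request a conveyor speed change."""
    return json.dumps({"line": line_id, "applied_delta_pct": delta_pct})

TOOLS = {"lookup_defect_rate": lookup_defect_rate,
         "adjust_conveyor_speed": adjust_conveyor_speed}

def run_agent(llm, task: str, max_steps: int = 5) -> str:
    """Loop: ask the model for the next action as JSON, execute it, feed the result back."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(history)            # assumed to return e.g. '{"tool": ..., "args": {...}}'
        action = json.loads(reply)
        if action["tool"] == "finish":  # the model signals it is done
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": f"Tool result: {result}"})
    return "Step limit reached without a final answer."
```

In a production setting, the llm callable would wrap an inference endpoint serving a model such as Llama 3, and the tools would wrap real factory systems rather than these stand-ins.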

Foxconn CEO Predicts Generative AI Will Wipe Out Low-End Manufacturing Jobs

NDTV

20-05-2025

  • Business

Young Liu, the chief executive officer and chairman of Foxconn, has predicted that artificial intelligence (AI) will destroy low-end manufacturing jobs. The Taiwanese technology giant currently assembles around 70 per cent of iPhones and is the world's largest contract manufacturer.

Mr Liu was delivering a keynote address at the Computex conference when he made the rather gloomy prediction. According to him, a combination of robotics and generative AI may drive the change, leading to the loss of jobs. "Generative AI and robotics will fill the void. That is the opportunity I see when a country becomes more prosperous - the low-GDP work will be done by GenAI and robotics," said Mr Liu. "I think that is the real challenge for all developed countries. I urge leaders of developed countries to watch this very carefully," he added.

According to a report in The Register, Mr Liu revealed that Foxconn was developing its own manufacturing-centric model called "FoxBrain" that will integrate Meta's Llama 3 and 4 AI models with data drawn from its own operations. The new model will be used to create what he described as an "agentic workflow for very domain specific applications".

AI and jobs

The popularity of AI models has left workers concerned about their careers as employers attempt to use the technology to cut costs, increase efficiency and maximise revenues. Apart from manufacturing jobs, those in IT have been fearing for their jobs as well. Last week, Microsoft announced it was laying off around 6,000 employees, or three per cent of its global workforce, to remove unnecessary layers of management as it aggressively pushes into AI. CrowdStrike, the cybersecurity company responsible for the massive global IT outage last year, also announced that it was slashing five per cent of its workforce and replacing it with AI. Similarly, language-learning platform Duolingo announced it would "gradually stop using contractors to do work that AI can handle". The company justified its switch in approach, stating that it had taken a similar call in 2012 by betting big on mobile.

Pocket FM is training its AI model to scale storytelling. Is the investment worth it?

Mint

19-05-2025

  • Business

Audio series startup Pocket FM plans to have a large language model (LLM) up and running by the end of the year. The company has already labelled and categorized its proprietary datasets and is currently testing an early version of its model. "We're currently testing a very raw model, that is going to take some time. We're also working on getting graphic processing units," Pocket FM co-founder and chief technology officer Prateek Dixit told Mint.

Teams at the company are already working on reinforcement learning for the LLM. Pocket FM expects the LLM to be ready five to six months after that.

The startup plans to buy between 30 and 50 of Nvidia's A100 or H100 GPUs in a staggered manner. These units cost anywhere between $8,000 and $25,000 each, which puts the total outlay at roughly $240,000 to $1.25 million. Despite the steep costs, Dixit views the investment as strategic. "It's not just a cost decision, you've to understand. It's more of a strategic asset for how we scale storytelling with AI," Dixit added.

Beyond hardware, the LLM push includes infrastructure upgrades, hiring skilled AI engineers, and increased R&D investment. Pocket FM currently spends 8-13% of its revenue (approximately $26 million) on R&D, with 40% of that allocated to AI initiatives. This is expected to go up by 1% to 2%.

Pocket FM plans to build its model on top of an open-source foundation model like Meta's Llama 3, tailored specifically for storytelling. The company currently uses open-source models, fine-tuned for its genre-specific needs, but over time it reached a point where the quality of the content plateaued. "With our own model training, we can have a step jump in quality," Dixit said.

Popular genres of content on the platform include drama, fantasy and thrillers. The company's writers have produced thousands of stories in these categories. The plan is to use the LLM for everything from story and comic creation to developing character arcs and even AI-based videos. "The idea is to take these foundation models and fine-tune them for different writing styles. That is the use case for comics as well," said Dixit.

Earlier this year, Pocket FM launched Pocket Toons, its webcomic platform. For this, the company created an AI-powered studio it calls Blaze to produce comics "20x faster at one-third the cost, automating processes like background rendering, scene composition, and colouring while preserving artistic creativity," the company had said.

Pocket FM has been using natural language processing (NLP), a subset of artificial intelligence (AI), for translating across the 10 languages for which it produces audio content. Other use cases include text summarisation, metadata generation and genre tagging. "An LLM forms the backbone of a powerful IP engine that not only drives our audio formats today but will also power future innovations across multiple storytelling mediums," said Dixit.

Is the investment worth it?

Experts are divided on whether building a domain-specific language model (DSLM) is worth the cost and can help in the long run. It's hard to say whether a DSLM can stand the test of time, given how fast the AI industry is moving. "I don't think building a proprietary model is a good idea where the rate of innovation is so fast in the industry," said Anushree Verma, senior director analyst at Gartner. According to Gartner, enterprise spending on such models is expected to reach $838 million in 2025 and grow to $11.3 billion by 2028. The market is expected to grow at a compound annual growth rate of 233%.
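Pocket FM has not published its training setup, but the approach it describes, fine-tuning an open-source foundation model such as Llama 3 on genre-tagged story data, could in broad strokes look like the minimal sketch below, which uses the Hugging Face transformers, datasets and peft libraries. The checkpoint name, data file and hyperparameters are illustrative assumptions, not the company's actual pipeline.

```python
# Minimal, illustrative sketch of LoRA fine-tuning an open-source base model
# on a genre-tagged story corpus. Checkpoint name, data file and
# hyperparameters are assumptions, not Pocket FM's actual setup.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Meta-Llama-3-8B"                      # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters keep the number of trainable parameters (and GPU memory) small.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Hypothetical JSONL corpus with one labelled story per line in a "text" field.
data = load_dataset("json", data_files="genre_tagged_stories.jsonl")
tokenized = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=data["train"].column_names)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="storytelling-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

In a setup like this, only the small adapter weights are trained and later served alongside the base model; the heavy lifting of pretraining stays with the open-source foundation model, which is what keeps the hardware requirement in the range of tens of GPUs rather than thousands.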
Verma added that open-source generative AI models are emerging as a viable source for domain-specific models, rapidly closing the performance and reliability gap with proprietary models and offering a cost-effective and flexible alternative for model training and specialization.

"Building an LLM isn't automatically a strategic advantage. In many cases, a smaller DSLM can outperform a general-purpose LLM in speed, cost-efficiency, and relevance, especially when fine-tuned on proprietary data," said Manpreet Singh Ahuja, tech, media and telecom sector leader and chief clients and alliances officer at PwC India. "The question is not 'can we build it?' but 'should we?'" His argument is that LLMs are only worth building when a company has a clearly established need that current models in the market can't satisfy. If a company is unable to prove that, or is not able to monetise the model itself or use it across high-scale products, the return on investment is questionable. "Long-term value comes not from owning the model alone, but from the unique data, applications, and feedback loops built around it," Ahuja added.

However, given that Pocket FM knows the use case it wants to build for, a custom LLM can benefit it, even in the long run. What's more, training on GPUs it owns rather than leases means the company has fewer data-security worries and can better safeguard its intellectual property. "Over time, running your own optimized model, especially using open-source foundations, can slash inference costs by up to 80%," said Sameer Jain, managing director at Primus Partners, a global management consulting firm. Inference refers to the process in which a trained AI model applies what it has learned to draw conclusions from data it has never seen before.

Eventually, the company expects that by owning its own GPUs and LLMs, it will be able to reduce its AI costs significantly. "The unit cost per generation of content at scale gets reduced. We're not talking about one-time use cases. We want to continuously generate inferences from models," Dixit said, adding that the company expects its inferencing cost to drop by 20-30%.

Deeper AI push

Besides the LLM, Pocket FM has a co-pilot that is used internally to create content in German, English, and Hindi. The company is still fine-tuning it to work with other Indic languages like Tamil, Telugu, Kannada, Marathi, and Bengali. "We'll be making a public launch of this tool in a few months," said Dixit.

The company is also building AI agents that can participate in every step of the story creation process, from how intense the beginning of a story should be to where a cliffhanger might be appropriate. "We're building them in such a way that individual modules can act and trigger separately. A story could have a really good cliffhanger but bad pacing. I should be able to ask a model to address these specific queries," said the Pocket FM co-founder.

Meanwhile, Pocket FM is considering acquisitions for the first time, with two kinds of targets in mind: lean AI companies building either LLMs for stories or AI-based voice and video, and companies with large writer communities. "We're building an AI entertainment suite so it would be great to get companies that can be baked into our systems," Dixit said. While the company hasn't actively set aside money for inorganic growth, it said it will be opportunistic about making acquisitions.
Pocket FM is knocking on the doors of global private equity players as it looks to raise another round of money. The company is looking to raise between $100 million and $200 million, this time at a unicorn valuation, according to a VCCircle report in March. The company last raised money in March 2024 in a $103 million Series D round led by Lightspeed India Partners at a valuation of $750 million. So far, the company has cumulatively raised $197 million across rounds and has the likes of Brand Capital, Tencent and Stepstone Group on its cap table.

Pocket FM competitor Kuku FM is also leveraging AI for similar use cases. Kuku FM used AI to create scripts for series on its platform, like 'Secret Billionaire,' 'Women of Prison' and 'Bloodstone Fortune.'

Across industries, companies are now opting to build their own models as they look to leverage the vast amounts of user data they've collected over the years. Healthify, the health and wellness startup, built its own small language model that runs on top of LLMs from OpenAI and Anthropic. Ed-tech startup Physicswallah is building smaller models to solve questions pertaining to physics, chemistry, mathematics and biology.

Strategy this year

While the US has always been Pocket FM's main revenue source, accounting for 70-75% of total revenue, the company expects the European market to take off this year. With the $103 million raised last year, the company expanded into Europe and Latin America. Currently, Pocket FM is available in Germany and the UK, which it entered just six months ago; the company claims the two markets have already contributed 5% of its revenue. Instead of going live simultaneously across Europe, it is staggering its entry into different countries: France in June, Italy around October and finally the Netherlands around January 2026. Dixit expects Europe to contribute up to 30% of revenue within two years. As a result, the company said, its revenue will 'grow multi-fold.'

India currently contributes 10-15% of Pocket FM's revenue. Pocket FM expects that percentage contribution to remain the same, while 'its absolute revenue is expected to grow significantly, potentially 2-3 times.' The company claimed it had surpassed $200 million in revenue in FY25, with an annual recurring revenue of $250 million. In FY24, Pocket FM's revenue stood at ₹261 crore, compared to ₹130 crore in FY23, according to regulatory filings accessed by business intelligence platform Tofler. The company trimmed losses to ₹16 crore in FY24 from ₹75 crore in FY23.

Founded in 2018 by Rohan Nayak, Prateek Dixit, and Nishanth KS, Pocket FM started as an audio series platform. The company has since rebranded itself, changing its name to Pocket Entertainment. It now runs three verticals: Pocket FM, Pocket Novels and Pocket Toons.

Fife crime author Marion Todd fears for career after AI 'theft'

The Courier

04-05-2025

  • Entertainment

A best-selling Fife crime author fears losing her career 'at a stroke' following the 'theft' of her work by a global tech giant. All nine of Marion Todd's novels were included in a dataset used to train Meta's new AI model, Llama 3.

The former lecturer and piano tutor's books centre on fictional detective Clare Mackay, who is based at St Andrews police station. They are among more than seven million copyrighted works downloaded without permission. And Marion has added her voice to those of around 150 other authors calling for their removal.

They fear AI models trained on their books could soon begin producing work replicating their style. Marion said: 'If it becomes very skilled, it could finish my career at a stroke.'

The Wormit author's concerns echo those of Angus crime author Ed James and other best-selling writers. Well-known musicians, including Annie Lennox, Kate Bush and Blur frontman Damon Albarn, are also protesting the use of their work.

Originally from Dundee, Marion Todd has just completed her 10th book, which is now with her publisher. It takes her between six and nine months to produce each book in her series. However, she is concerned Meta will eventually be able to do it at the touch of a button. She said: 'My concern is it would then be possible for Meta to say "give me 100 Marion Todd books". They wouldn't be very good to start with, but the more they're fed and the more they do it, the better they'll get.'

All of the affected works were taken from the LibGen dataset, one of the largest collections of pirated books in the world. Marion says book piracy has been around for a long time and is impossible to stop. 'It's whack-a-mole, to be honest,' she said. 'You ask one to take it down and another one pops up. But crime author fans are voracious and the idea that my books could be replicated hundreds of times over is not good.'

The Society of Authors held a protest outside Meta's UK headquarters in London last month. It has now written to Meta demanding compensation for affected writers. 'I'm fully behind the action being taken by the Society of Authors,' said Marion. 'I would also like to see some protection coming from the Government in the form of a licensing model.'

Last year, Meta founder and CEO Mark Zuckerberg said the use of open source AI, such as Llama 3, is progress and will be good for the world. He added: 'Since the models are open, anyone is capable of testing for themselves as well. We must keep in mind that these models are trained by information that's already on the internet, so the starting point when considering harm should be whether a model can facilitate more harm than information that can quickly be retrieved from Google or other search results.'

However, Marion said: 'For creativity, it's not progress.'
