鴻海處分3.75億美元Lordstown廠 轉攻AI資料中心
代工大廠鴻海4日代子公司公告,將處分位在美國俄亥俄州的資產,總值約3.75億美元,廠房總面積達620萬平方英尺,鴻海也證實該處分與東元的策略聯盟有關,將轉型聚焦AI資料中心建置。
鴻海代子公司Foxconn EV System LLC、Foxconn EV Asset Management LLC公告,將處分位在俄亥俄州,原為Lordstown的廠房、土地與機器設備,總金額達3.75億美元,帳面獲利達1.7億美元。
鴻海強調,此次處分不影響Model C在美國的業務,市場原先預期Lordstown廠的產能利用率低檔,加上設備折舊,可能影響鴻海的獲利,此次處分仍保留16萬平方英尺(約4500坪)的廠房,保留下來的面積作為資料中心用地相當足夠。
鴻海在2021年以2300萬美元買下新創Lordstown的廠房及設備,原打算為Lordstown代工純電皮卡,但截至2023年底,僅交付40台Endurance皮卡,扣除召回數量,實際僅交付3台,在其餘的新創電動車品牌先後宣告破產後,Lordstown的產能利用率趨近於零。
更多中時新聞網報導
15%關稅輾壓台灣!韓股突遭血洗…崩快4% 兇手抓到了
《半導體》昇陽半導體翻紅走高 放量飆逾9%
高於日韓!台灣吞20%關稅 僅台積、聯發科能活 專家揭「3產業」完蛋了:毛利率吃光光

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Forbes
2 minutes ago
- Forbes
DeepSeek: A Paradigm Shift, What It Means For Humanity
The whale that is DeepSeek was invisible prior to Jan 20th 2025. Then the Blue Whale breaches to the whole world's sight on Jan 20th. That body slam sent shockwaves around the world. The release of DeepSeek-R1 immediately cratered the market cap of several hardware and software companies which were buoyed by what investors thought was American exceptionalism. Withholding the latest chips and AI Intellectual Property from China was thought to be the strategy to follow. Except it was wrong. Such is the stuff that leapfrogging is made of. Especially for manufacturing and design powerhouse such as China. Ironically, the latest models from DeepSeek are free to use. They even run it on their servers for free. Development of general purpose large language models through scaling of parameters and training data led to many breakthroughs. The release of ChatGPT-3.5 and 4.0 in 2022-23 unleashed the general purpose potential of AI to the general public. This approach also increased costs tremendously as compute and data demands spurred bigger and better processors. In late 2023 and 2024 and even now, the construction of power hungry data centers were thought to be the only way to improve the performance of the models. Limiting access to computing and the latest chips was thought to restrain China as a source of these powerful models. With DeepSeek that paradigm was shifted. Companies like Nvidia whose stock was heavily affected by the announcement have since recovered and thrived. The lessons were lost on global markets. The worst may yet to come as the companies buoyed by the rising of AI and its use are brought down to earth by a combination of new methods and the lessening of compute needed to do training as well as inference. Sunk costs and the costs of switching with their own powerful economic adherents prevent a longer term view and lock the American AI in their paths. Success breeds complacency and adherence to the model that produced success. In AI, a rapidly developing field, getting stuck on algorithms, process and practice is deadly. DeepSeek showed that just piling on computing and data does not make for exponential progress. This is a lesson from many fields, that is often ignored with an overused but wrong dictum 'This time it is different.' Innovation follows familiar patterns; slowly then rapidly. Efficiency The costs of training and running DeepSeek are much lower than for other models. The ratio in a recent presentation showed $6M for DeepSeek/ versus $600M for Llama (the open source model from Meta). One hundredth the cost. The costs for other models, including ChatGPT, are even more. The cost savings are a result of implementing DeepSeek's own discoveries in reinforcement learning and training using distillation. Further, the model is very efficient in generating Chinese language. As of three months ago, a large number of Chinese companies had joined the AI revolution by subscribing to DeepSeek. As the national champion, the government industrial policy supports DeepSeek. RL as a training method was invented in the University of Amherst. The recipients of the 2024 ACM Turing award, Andrew Barto and Richard Sutton were the inventors of the classic reinforcement learning techniques. For LLMs and other large models, such an approach falls under supervised learning. The model is refined by feedback, classically from humans, called RLHF (Reinforcement Learning with Human Feedback). This is called supervised fine- tuning. Humans are the supervisors. The paper released by the creators of DeepSeek R1 goes into detail on the way that they modified RL. Anything that involves humans in the loop at scale requires a lot of money. Removing the human in the loop makes training cheaper. A version of the model is used to fine-tune the other. In other words, one model functions as the supervisor and the other is trained. The arrival of new companies with models such as MiniMax-M1 epitomizes this shift even more. Such techniques will overtake models which are created using conventional scaling. DeepSeek-R1 was effective through its evolution utilizing multiple strategies. A combination of novel methods based on existing techniques made the training and inference efficient in time and resources. More details can be found in this article. In short, all aspects of the creation and running of large language models were changed, enhanced or reworked for cost and time efficiency. MiniMax-M1 MiniMax-M1 claims to have chopped the cost of DeepSeek-R1 training by 90%. They trained their model for a cost of $500K. Contrast this to the $6M cost for DeepSeek-R1 and $600M for LLaMa. There have been doubts cast on the numbers publicized by both DeepSeek and MiniMax. Efficiencies have been through further refining RL with what is called lightning attention. This is mostly for deterministic problems such as mathematical and logical reasoning and long context problems such as coding. Minimax is also available through HuggingFace the open source AI host. Privacy There is concern that DeepSeek is harvesting private data for its own use. This phenomenon is rife in the world of AI and social media in general. What makes the sharing of private data with DeepSeek or other private companies is the fact that they will be used to refine the models. In the case of DeepSeek or other China based companies, there is a fear of data reaching the Chinese government. Private AI companies, even those in the United States do the same, except they will share that data with the US government if they are forced by law. At this juncture, such a scenario is more disquieting. The fourth amendment will fall by the wayside, if the government can search not only our persons and our homes, but our minds without a warrant. To read more about the risks of DeepSeek, read this analysis from Hidden Layer. Since Hidden Layer's business model is based on these kinds of analysis, it is best to look closely at the analysis and compare with their work on other open models. Open Source AI Models Open Source International (OSI) has a definition of Open Source AI. It is 1.0 right now, subject to revision. Like the Open Source definition for software, it allows users to use, observe, modify and distribute without any restrictions. AI models depend a lot on their training data. AI use involves inference, consuming resources. The expenditure on training is separate from the expense of inference. In the classic definition of open source software the source code is available to any user to use, observe, modify and distribute. In a strict interpretation of AI open-source, the source code should include data used to train the model. However this may not be practical, nor is it part of the OSI definition of Open Source AI. This is drastically different from the OSI guidance for open source software. The other difference is the observability of the model weights and hyperparameters. During the learning phase model weights are refined. Model weights embody the model in its current form, crystallizing all the training that the model has undergone. Hyperparameters control the initial configuration of the learning setup. In an open model, model weights and model parameters are meant to be open. Open Source AI models can be called open weights models. Many models from China are open weights models, including Qwen (From AliBababa). This competition has also forced OpenAI to release an open weight model. This is the gpt-oss base model with two variants. The Future We have not delved into the technology behind the creation of multi-modal prompts and multi-modal generation. By multi-modal, we mean not only text, but images, audio as well as video. MiniMax as well as DeepSeek have these capabilities. It is clear that limiting access to hardware and know-how cannot hold true innovation back. Such constraints also make for multiple paradigm shifts, making AI cheaper to develop with lower hardware and power resources, creating democratized and decentralized future where we could fine-tune and run models on commodity hardware. These developments give us hope that we will be able to control and bend these capabilities to help humanity rather than harm ourselves.
Yahoo
30 minutes ago
- Yahoo
Paycom raises 2025 revenue and profit forecasts on AI-driven demand
(Reuters) -Payroll processor Paycom Software raised its forecast for annual revenue and profit on Wednesday, as the addition of AI features helps accelerate demand for its employee management services, sending its shares up 7% in extended trading. The company now expects fiscal 2025 revenue of $2.05 billion to $2.06 billion, up from its previous projection of $2.02 billion to $2.04 billion. Analysts on average expect $2.03 billion, according to data compiled by LSEG. Paycom has been integrating artificial intelligence features into its software with its 'smart AI' suite that automates tasks such as writing job descriptions and helps employers identify which employees are most at risk of leaving. This has boosted demand for Paycom's services as businesses look to simplify workforce management functions. "We are well positioned to extend our product lead and eclipse the industry with even greater AI and automation," CEO Chad Richison said in a statement. Paycom expects 2025 core profit in the range of $872 million to $882 million, up from previous expectations of $843 million to $858 million. The payroll processor reported revenue of $483.6 million for the second quarter ended June 30, beating analysts' estimate of $472 million. Adjusted core profit was $198.3 million, compared to $159.7 million in the same period last year. Paycom's expectation of strong growth comes despite a sharp deterioration in U.S. labor market conditions. U.S. employment growth was weaker than expected in July, while the nonfarm payrolls count for the prior two months was revised down by 258,000 jobs, according to a Labor Department report. Sign in to access your portfolio


CNET
31 minutes ago
- CNET
Researchers Seize Control of Smart Homes With Malicious Gemini AI Prompts
Recent reports and demonstrations from the Black Hat computer-security conference have shown how outside Gemini AI prompts -- dubbed promptware -- could fool the AI and force it to control Google Home-connected smart devices. That's an issue for Google, which has been working to add Gemini features to its Google Home app and replace Google Assistant with the new AI helper. The secret to these serious vulnerabilities is how Gemini is designed to respond to basic commands in English. Demonstrations show how a prompt sneakily added to an inserted Google Calendar invite will be read by Gemini the same way it scans other Google app data, such as when it is summarizing emails. But in this case, the addition gives Gemini a very specific order, like creating an agent to control everyday devices from Google Home. The Tel Aviv University researchers, including Ben Nassi, Stav Cohen and Or Yair, have created their own website that showcases their report, "Invitation is All You Need." It includes videos showing how the right Gemini prompts could be used to open windows, turn off lights, turn on a boiler, or geolocate the current user. As the Invitation is All You Need research shows, a detailed prompt can be hidden in an innocuous Calendar invite title or similar spot. These commands can make Gemini create a hidden agent and wait for a common response (like saying "thank you" in an email) to trigger certain actions. Even if your calendar controls are tight, some of these promptware attacks could be performed through other things that Gemini scans, such as an email subject line. Other demonstrations showed how similar commands could lead to spam messages, deleted events, automatic Zoom streaming and more unpleasant tricks. Should you worry about your Google Home devices? Google told CNET they have introduced multiple fixes to address the promptware vulnerabilities since the researchers provided Google with their report in February 2015. That's the point of the Black Hat conferences -- to uncover problems before real cybercriminals seize them, and get the fixes in fast. Andy Wen, senior director of security product management at Google Workspace, told CNET, "We fixed this issue before it could be exploited thanks to the great work and responsible disclosure by Ben Nassi and team. Their research helped us better understand novel attack pathways, and accelerated our work to deploy new, cutting edge defenses which are now in place protecting users." If you're still concerned, you can disable Gemini entirely in most cases. As I've covered before as CNET's home security editor, smart home hacking is very rare and very difficult with today's latest security measures. But as these new generative AIs get added to smart homes (the slowly rolling out Alexa Plus and eventual Siri AI upgrades included), there's a chance they could bring new vulnerabilities with them. Now, we're seeing how that actually works, and I'd like these AI features to get another security pass, ASAP.