
Yuan's Worst Week in Six Months Casts Spotlight on PBOC Fixing
China's currency has depreciated 0.7% in the offshore market since July 25, heading for its largest weekly decline since the end of January. Its onshore counterpart slid 0.6% over the same period, putting it on track for its worst week since early February.

Related Articles
Yahoo
Trump's Asia tariffs take a massive toll on Bitcoin miners
"Trump's Asia tariffs take a massive toll on Bitcoin miners" originally appeared on TheStreet. According to a recent report by The Block, President Donald Trump's latest order on global tariffs is becoming a pain in the neck for the Bitcoin mining community in the U.S. As the 90-day tariff pause deadline neared, Trump announced a slew of new global tariff rates on July 31. Among the worst-hit are the key centers of mining-rig manufacturing in Southeast Asia.

For those unfamiliar, crypto mining is the process of using high-performance hardware to validate and secure transactions on a blockchain network. Application-specific integrated circuits (ASICs) are chips designed to execute one specific task, such as crypto mining, extremely efficiently, as opposed to general-purpose processors like central processing units (CPUs) or graphics processing units (GPUs).

Ethan Vera, COO of Luxor Technology, a Bitcoin mining technology and services company, shared a document with The Block according to which the latest directive imposes 21.6% tariffs, including a 19% "reciprocal tariff," on imports of ASICs from Indonesia, Malaysia, and Thailand beginning in August.

As per the report, U.S. tariffs on imports from China, a major hub of mining-rig manufacturing, stand at a staggering 57.6%, which includes a 10% baseline reciprocal tariff and an additional country-specific tariff of 20%. The tariff pause between the two countries is set to expire on Aug. 12, and there has been no breakthrough so far.

While lower than some earlier rates, the current tariffs are significantly higher than the 2.6% tariffs on ASICs imported from the Southeast Asian countries before Trump's second term; Chinese machines already faced an additional 25% ad valorem tariff even earlier. Vera said, "At 21.6% tariffs, the U.S. is now one of the least competitive jurisdictions to bring machines in, and miners are looking at Canada and other markets to expand too."
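To make the quoted rates concrete, here is a minimal sketch of the landed-cost arithmetic an importer faces under an ad valorem tariff. The $5,000 base price is a hypothetical figure for illustration, not a number from the report.

```python
# Illustrative only: landed cost of an imported mining rig under the
# tariff rates quoted in the article. The $5,000 base price is a
# hypothetical assumption, not from the report.

def landed_cost(base_price: float, tariff_rate: float) -> float:
    """Return the cost of an imported rig after an ad valorem tariff."""
    return base_price * (1 + tariff_rate)

if __name__ == "__main__":
    price = 5_000.0  # hypothetical ASIC price in USD
    for label, rate in [
        ("pre-2025 Southeast Asia rate", 0.026),
        ("new Southeast Asia rate", 0.216),
        ("China rate", 0.576),
    ]:
        print(f"{label}: ${landed_cost(price, rate):,.2f}")
```

On these assumptions, the jump from a 2.6% to a 21.6% tariff adds nearly $1,000 to a $5,000 rig, which is the cost pressure Vera describes.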
Opportunity for domestic manufacturers

However, the new trade conditions could also lead to a rise in the prices of U.S.-based used ASIC machines in the face of high tariffs on imported rigs, he explained. Though Luxor is excited about the prospect of mining rigs being produced in the U.S., it warned that fully onshoring manufacturing could take years because most of the raw materials are still imported from Asia. The company had expected the Trump administration to exempt mining equipment from tariff hikes in order to encourage the domestic crypto industry.

This story was originally reported by TheStreet on Aug 6, 2025.
Yahoo
Record-breaking bridges: How Italy and China are pushing engineering limits
The future of transportation took shape on two continents this week as Italy and China announced bridges that would redefine what's possible in modern engineering. Italy greenlit a $15.5 billion project to build what would become the world's longest suspension bridge, infrastructure company WeBuild said on Wednesday. Over the weekend, the Chinese state-run People's Daily reported the near-completion of what will be the world's tallest bridge.

The Italian project, connecting Sicily to mainland Italy across the Strait of Messina, would stretch nearly 2.3 miles, with its suspended span reaching nearly 2.1 miles. This would surpass the current record holder, Turkey's Çanakkale Bridge, by more than half a mile. "Today, Italy has shown once again how it can come together around a mega project that will be transformative for the whole country," Pietro Salini, chief executive of WeBuild, said in a press release.

Meanwhile, in China's southwestern Guizhou Province, the Huajiang Grand Canyon Bridge reached a milestone with the installation of its final steel girder. According to Chinese state television, the bridge will stand 2,051 feet from deck to river, roughly twice the height of the Eiffel Tower.

Italy's bridge, designed to carry 6,000 cars per hour and 200 trains daily, focuses on connecting a major island to the mainland. China's Huajiang bridge, while completing a crucial expressway link, incorporates ambitious tourist attractions, including what will be the world's highest bungee jump.

Both projects face unique challenges, according to the Associated Press. Italy's bridge must contend with seismic risks in the Messina fault region, while China's bridge tackles the extreme engineering demands of spanning one of the world's deepest canyons, the news agency noted. The competition reflects a broader trend in global infrastructure development.
While Italy aims to strengthen its connection to Sicily and bolster NATO's capabilities, China has been systematically building the world's highest bridges; China's Guizhou Province alone reportedly has more high bridges than all other countries combined. The Italian Transport Ministry announced that construction on the Messina bridge is expected to begin next year. According to People's Daily, the Huajiang bridge is now over 98% complete and is set to open by the end of September.


Forbes
DeepSeek: A Paradigm Shift, What It Means For Humanity
The whale that is DeepSeek was invisible before Jan. 20, 2025. Then the blue whale breached into the whole world's sight, and the body slam sent shockwaves around the world. The release of DeepSeek-R1 immediately cratered the market cap of several hardware and software companies that had been buoyed by what investors took for American exceptionalism. Withholding the latest chips and AI intellectual property from China was thought to be the strategy to follow. Except it was wrong. Such is the stuff that leapfrogging is made of, especially for a manufacturing and design powerhouse such as China. Ironically, the latest models from DeepSeek are free to use; the company even runs them on its own servers for free.

Development of general-purpose large language models through scaling of parameters and training data led to many breakthroughs. The release of ChatGPT, built on GPT-3.5 and then GPT-4 in 2022-23, unleashed the general-purpose potential of AI to the public. This approach also increased costs tremendously as compute and data demands spurred bigger and better processors. In late 2023, through 2024, and even now, the construction of power-hungry data centers was thought to be the only way to improve the performance of the models. Limiting access to computing and the latest chips was thought to restrain China as a source of these powerful models. With DeepSeek, that paradigm shifted.

Companies like Nvidia, whose stock was heavily affected by the announcement, have since recovered and thrived, but the lessons were lost on global markets. The worst may be yet to come as the companies buoyed by the rise of AI are brought down to earth by a combination of new methods and the lessening of compute needed for training as well as inference. Sunk costs and switching costs, with their own powerful economic logic, prevent a longer-term view and lock American AI companies into their current paths. Success breeds complacency and adherence to the model that produced success.
In AI, a rapidly developing field, getting stuck on algorithms, process, and practice is deadly. DeepSeek showed that just piling on computing and data does not make for exponential progress. This is a lesson from many fields that is often ignored under the overused but wrong dictum "this time it is different." Innovation follows familiar patterns: slowly, then rapidly.

Efficiency

The costs of training and running DeepSeek are much lower than for other models. A recent presentation put the ratio at $6M for DeepSeek versus $600M for Llama (the open-source model from Meta): one hundredth the cost. The costs for other models, including ChatGPT, are even higher. The savings are a result of implementing DeepSeek's own discoveries in reinforcement learning and of training using distillation. Further, the model is very efficient at generating Chinese-language text. As of three months ago, a large number of Chinese companies had joined the AI revolution by subscribing to DeepSeek, and as the national champion, DeepSeek is supported by the government's industrial policy.

Reinforcement learning (RL) as a training method was pioneered at the University of Massachusetts Amherst; the recipients of the 2024 ACM Turing Award, Andrew Barto and Richard Sutton, invented the classic reinforcement learning techniques. For LLMs and other large models, RL is classically applied with feedback from humans, called RLHF (reinforcement learning from human feedback), alongside supervised fine-tuning; humans are the supervisors. The paper released by the creators of DeepSeek-R1 goes into detail on the way they modified RL. Anything that involves humans in the loop at scale requires a lot of money, so removing the human from the loop makes training cheaper. Instead, a version of the model is used to fine-tune another: one model functions as the supervisor while the other is trained.
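The model-as-supervisor idea can be sketched in a few lines. This is a toy illustration of distillation, not DeepSeek's actual pipeline: a fixed "teacher" produces soft target distributions, and a "student" is nudged toward them by gradient descent, with no human labels anywhere in the loop. All sizes and learning rates here are arbitrary assumptions.

```python
import numpy as np

# Toy distillation sketch (not DeepSeek's actual method): a teacher model's
# outputs replace human feedback as the training signal for a student.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # average KL divergence per example between distributions p and q
    return float(np.sum(p * (np.log(p) - np.log(q))) / p.shape[0])

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(8, 5))  # teacher's scores: 8 prompts, 5 tokens
student_logits = rng.normal(size=(8, 5))  # student starts out random

targets = softmax(teacher_logits)         # soft labels from the teacher
initial_kl = kl(targets, softmax(student_logits))

lr = 0.5
for _ in range(200):
    probs = softmax(student_logits)
    # gradient of cross-entropy w.r.t. logits is (probs - targets)
    student_logits -= lr * (probs - targets)

final_kl = kl(targets, softmax(student_logits))
# after training, the student's distribution closely tracks the teacher's
```

The point of the sketch is the economics: the supervision signal is just another forward pass of a model, so it scales with compute rather than with paid human annotators.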
The arrival of new companies with models such as MiniMax-M1 epitomizes this shift even more. Such techniques will overtake models created using conventional scaling. DeepSeek-R1 was effective through its evolution utilizing multiple strategies: a combination of novel methods based on existing techniques made training and inference efficient in time and resources. More details can be found in this article. In short, all aspects of the creation and running of large language models were changed, enhanced, or reworked for cost and time efficiency.

MiniMax-M1

MiniMax-M1 claims to have cut the cost of DeepSeek-R1-style training by 90%: the company trained its model for roughly $500K. Contrast this with the $6M cost for DeepSeek-R1 and $600M for Llama. (Doubts have been cast on the numbers publicized by both DeepSeek and MiniMax.) The efficiencies come from further refining RL with what is called lightning attention, mostly for deterministic problems such as mathematical and logical reasoning and for long-context problems such as coding. MiniMax is also available through Hugging Face, the open-source AI host.

Privacy

There is concern that DeepSeek is harvesting private data for its own use. This phenomenon is rife in the world of AI and social media in general. What makes sharing private data with DeepSeek or other private companies worrisome is that the data will be used to refine their models. In the case of DeepSeek and other China-based companies, there is a fear of the data reaching the Chinese government. Private AI companies, even those in the United States, do the same, except they will share that data with the U.S. government if compelled by law. At this juncture, such a scenario is more disquieting: the Fourth Amendment will fall by the wayside if the government can search not only our persons and our homes but our minds without a warrant. To read more about the risks of DeepSeek, read this analysis from HiddenLayer.
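Lightning attention is MiniMax's own linear-attention variant; its details are beyond this piece, but the generic idea behind linear attention shows why such methods cut cost. The sketch below is a standard kernelized attention, not MiniMax's implementation: by computing a keys-times-values summary once, the cost grows linearly in sequence length instead of quadratically. The feature map and sizes are illustrative assumptions.

```python
import numpy as np

# Rough illustration, NOT MiniMax's actual "lightning attention": generic
# linear (kernelized) attention computes K^T V once, so cost scales as
# O(n * d^2) in sequence length n, versus softmax attention's O(n^2 * d).

def feature_map(x):
    # simple positive feature map (ReLU plus epsilon), an assumption here
    return np.maximum(x, 0) + 1e-6

def linear_attention(Q, K, V):
    Qf, Kf = feature_map(Q), feature_map(K)
    kv = Kf.T @ V                 # (d, d_v): one summary of all keys/values
    z = Qf @ Kf.sum(axis=0)       # (n,): per-query normalizer
    return (Qf @ kv) / z[:, None]

rng = np.random.default_rng(1)
n, d = 16, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (16, 4)
```

Because each output row is a convex combination of value rows, outputs stay within the range of the values, while the quadratic attention matrix is never materialized; that is the kind of saving that matters most for the long-context coding and reasoning workloads mentioned above.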
Since HiddenLayer's business model is based on these kinds of analyses, it is best to look at them closely and compare them with the firm's work on other open models.

Open Source AI Models

The Open Source Initiative (OSI) has a definition of Open Source AI. It is at version 1.0 right now, subject to revision. Like the Open Source Definition for software, it allows users to use, observe, modify, and distribute a model without restriction. AI models depend a great deal on their training data, and using a model involves inference, which consumes resources; the expenditure on training is separate from the expense of inference. In the classic definition of open-source software, the source code is available for any user to use, observe, modify, and distribute. In a strict interpretation of open-source AI, the "source" should include the data used to train the model. However, this may not be practical, nor is it part of the OSI definition of Open Source AI, which in this respect is drastically different from the OSI guidance for open-source software.

The other difference concerns the observability of model weights and hyperparameters. During the learning phase, model weights are refined; the weights embody the model in its current form, crystallizing all the training it has undergone. Hyperparameters control the initial configuration of the learning setup. In an open model, the weights and hyperparameters are meant to be open. Models of this kind can be called open-weights models, and many models from China are open-weights models, including Qwen (from Alibaba). This competition has also forced OpenAI to release an open-weights model: the gpt-oss base model, with two variants.

The Future

We have not delved into the technology behind multi-modal prompts and multi-modal generation. By multi-modal, we mean not only text but images, audio, and video. MiniMax as well as DeepSeek have these capabilities. It is clear that limiting access to hardware and know-how cannot hold true innovation back.
Such constraints also make for multiple paradigm shifts, making AI cheaper to develop with lower hardware and power requirements and creating a democratized and decentralized future in which we could fine-tune and run models on commodity hardware. These developments give us hope that we will be able to control and bend these capabilities to help humanity rather than harm ourselves.