Latest news with #ZhangPeng


Economic Times
7 hours ago
- Business
- Economic Times
GLM-4.5 vs DeepSeek: China's AI Cost War Just Got Personal
At the World AI Conference in Shanghai, Z.ai (formerly Zhipu AI) launched its new open-source large language model, GLM‑4.5, and shook the market with a bold promise: cheaper, faster, and leaner than even China's current cost leader, DeepSeek. In a global race dominated by compute efficiency and token economics, this move marks a turning point. GLM‑4.5 is built with agentic intelligence in mind, able to autonomously break down and complete multi-step tasks with less redundancy. It requires just eight Nvidia H20 chips and is half the size of DeepSeek's R1 model, which was already considered a breakthrough in operational efficiency. CEO Zhang Peng claims no further chip scaling is needed, a sharp contrast to the GPU-hungry practices of Western competitors.

The cost efficiency is what's drawing the spotlight. Z.ai has priced GLM‑4.5 to dramatically undercut DeepSeek's model and to slash costs compared to OpenAI's GPT‑4 or Gemini. That unlocks game-changing affordability for startups, product teams, and AI-driven platforms. The launch also plays into China's broader strategic bet on open-source AI dominance. With over 1,500 LLMs developed to date, China is leveraging lower-cost compute, government support, and a model-sharing culture to put pressure on U.S. and European players.

Whether you're a startup building a SaaS tool, a product team testing conversational AI, or an enterprise scaling internal automation, GLM‑4.5 offers a high-performance, low-cost alternative to traditional Western LLMs. Developers can integrate it into chatbots, agents, document summarizers, or AI copilots using open-source APIs, without burning through compute budgets. Its agentic design means you can offload complex multi-step workflows, such as code generation, customer support, or data analysis, with higher efficiency. The lean GPU requirement lowers the barrier for self-hosting or deploying in resource-constrained environments. Ultimately, GLM‑4.5 enables rapid iteration, reduced inference costs, and greater flexibility, especially for teams operating under tight margins or looking to scale without vendor lock-in.

Even so, GLM‑4.5 raises a pivotal question: if high-quality AI can be built and deployed at a fraction of today's cost, what happens to the premium pricing strategies of the West? For budget-conscious developers and enterprises, the message is clear: value is shifting eastward.
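
For a sense of what that kind of integration looks like in practice, here is a minimal sketch that queries GLM‑4.5 through an OpenAI-compatible chat endpoint using the openai Python client. The base URL, model identifier, and environment variable are illustrative assumptions for this sketch, not Z.ai's documented values.

```python
# Minimal sketch: querying GLM-4.5 via an OpenAI-compatible endpoint.
# The base_url, model name, and env var are illustrative assumptions,
# not values confirmed by Z.ai's documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ZAI_API_KEY"],           # hypothetical key variable
    base_url="https://api.example-zai.com/v1",   # placeholder endpoint
)

response = client.chat.completions.create(
    model="glm-4.5",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise document summarizer."},
        {"role": "user", "content": "Summarize this report in three bullet points: ..."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern would carry over to self-hosting, since openly released weights can be served behind any OpenAI-compatible inference server.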


Time of India
a day ago
- Business
- Time of India
This Chinese company included in US restricted entity list announces ChatGPT's newest rival that claims to be cheaper than DeepSeek
Chinese startup Z.ai has announced its newest artificial intelligence model, GLM-4.5. The company claims that its latest AI model will be cheaper to use than DeepSeek. This new model shows how Chinese companies are creating more capable AI models at reduced costs. Z.ai, previously known as Zhipu, has confirmed that GLM-4.5 is built on "agentic" AI principles. This means the model can automatically decompose a task into sub-tasks to complete it with greater accuracy, which is a different approach from the logic of some existing AI models. In June, OpenAI included Zhipu in a warning list regarding advancements in Chinese AI. The US government has also placed the startup on its restricted entity list, limiting American firms from engaging in business with it. The new GLM-4.5 model is also open-sourced, which will allow developers to download and use it for free.

What Z.ai said about its AI model GLM-4.5 being cheaper than DeepSeek

In a statement to CNBC, Zhang Peng, the CEO of Z.ai, said the company's new GLM-4.5 model runs on eight Nvidia H20 chips. These are Nvidia's AI training chips that are specifically designed for the Chinese market to comply with US export rules. While Nvidia recently received approval to resume shipments to China after a pause, the timeline for delivery remains unclear. Zhang noted that Z.ai has adequate computing resources and does not need to purchase additional chips for now, but declined to disclose how much was spent on training the model, saying more information would be shared later. Z.ai said it will price GLM-4.5 at 11 cents per million input tokens and 28 cents per million output tokens, compared to DeepSeek R1's rates of 14 cents and $2.19, respectively. These tokens are used to measure the volume of data processed by AI models.

In recent weeks, several Chinese firms have introduced new open-source AI models. At the World AI Conference in Shanghai, Tencent unveiled its HunyuanWorld-1.0 model, designed to help generate 3D scenes for game development. Alibaba followed with the launch of its Qwen3-Coder model, focused on coding tasks. Earlier this month, Moonshot, which Alibaba backs, announced Kimi K2, which it said performs better than OpenAI's ChatGPT and Anthropic's Claude in specific coding tasks. According to the company's website, Kimi K2 charges 15 cents per million input tokens and $2.50 per million output tokens. Back in January, DeepSeek drew attention from global investors with its AI model, which it said was developed despite US chip restrictions and came with lower training and operating costs than its US counterparts. The company claimed its V3 model was trained for under $6 million, though some analysts noted that figure reflects part of its total hardware investment, which exceeded $500 million.
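
To put the quoted rates in perspective, here is a back-of-the-envelope cost comparison in Python that uses only the per-million-token prices reported above; the monthly token volumes are made-up illustrative numbers, not real usage data.

```python
# Rough cost comparison using the per-million-token rates quoted above:
# GLM-4.5 at $0.11 input / $0.28 output, DeepSeek R1 at $0.14 / $2.19.
# The token volumes below are illustrative assumptions only.

def monthly_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Return the dollar cost, with rates priced per million tokens."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

input_tokens = 50_000_000    # assumed monthly input volume
output_tokens = 10_000_000   # assumed monthly output volume

glm_45 = monthly_cost(input_tokens, output_tokens, 0.11, 0.28)
deepseek_r1 = monthly_cost(input_tokens, output_tokens, 0.14, 2.19)
print(f"GLM-4.5:     ${glm_45:.2f}")       # $8.30
print(f"DeepSeek R1: ${deepseek_r1:.2f}")  # $28.90
```

At these assumed volumes the gap comes almost entirely from the output-token rate, where DeepSeek R1's $2.19 is nearly eight times GLM-4.5's $0.28.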


India Today
a day ago
- Business
- India Today
DeepSeek effect: New free, open source ChatGPT rival GLM 4.5 breaks cover in China
A new AI model has been introduced in China. Called GLM-4.5, this is an open-source model unlike most of the US-based AI systems that are closed, and some benchmarks put it ahead of even DeepSeek R1 and ChatGPT. GLM-4.5 has been developed by startup Z.ai (formerly known as Zhipu). According to the company, this model is designed specifically for intelligent agent tasks and uses what is described as an 'agentic' AI architecture. This means that the AI model can autonomously take on tasks and handle reasoning, coding and other applications more effectively.

According to the company, GLM-4.5 is a large language model featuring 355 billion total parameters, with an optimised variant named GLM-4.5 Air that has 106 billion parameters, making it lighter and faster. The model supports a context window of 128,000 tokens, allowing it to process long conversations or documents without losing focus. It is also said to include native function calling, enabling seamless integration with external software and workflows. The company highlights that these capabilities make GLM-4.5 suitable for a wide range of applications, including advanced coding, physics simulations, game development and interactive experiences.

The company has also revealed that GLM-4.5 has been released under an Apache 2.0 open-source licence. This means it is free for use and developers can freely download and deploy it, CNBC reported. Z.ai claims that GLM-4.5 ranks third globally and first among Chinese and open-source models across 12 major AI evaluation benchmarks. According to the company, the model scored 98.2 per cent on the MATH500 reasoning test and 91 per cent on the AIME24 challenge. It also delivered 64.2 per cent accuracy on SWE-Bench Verified, a benchmark used for software engineering tasks, and achieved a 90.6 per cent tool-calling success rate, edging out leading models.

Powered by Nvidia chips

One of GLM-4.5's biggest advantages is cost. CEO Zhang Peng told CNBC that the model can run on just eight Nvidia H20 chips. These chips are designed specifically for the Chinese market under US export controls. This is roughly half the hardware required by DeepSeek's comparable model. Zhang Peng revealed that the company does not currently need to purchase additional chips, indicating that it already has sufficient computing power.

Cheaper than DeepSeek

The company reveals that it has also aggressively cut token pricing. Z.ai will charge $0.11 per million input tokens compared with $0.14 for DeepSeek R1, and $0.28 per million output tokens. This is lower than the $2.19 charged by DeepSeek. Notably, tokens are the standard unit of data measurement for AI models. For context, DeepSeek, the advanced LLM launched earlier this year, is developed by the Chinese startup High Flyer AI and is known for rivalling OpenAI's ChatGPT in natural language understanding and reasoning, while requiring significantly less training cost, reportedly under $6 million. Although GLM-4.5 is about half the size of DeepSeek, it is touted to be using agentic AI design to maintain high accuracy and flexibility in completing tasks with fewer computational resources.

The arrival of GLM-4.5 comes at a time when China's AI model development is seeing a rapid surge. By July, Chinese companies had released 1,509 large language models, more than any other country, according to the state-owned Xinhua news agency.
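
Because the article highlights native function calling, here is a minimal sketch of wiring an external tool into the model through an OpenAI-style tools schema; the endpoint, model name, and get_weather function are illustrative assumptions rather than Z.ai's documented interface.

```python
# Sketch of OpenAI-style function calling; the endpoint, model name, and
# get_weather tool are illustrative assumptions, not a documented Z.ai API.
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.example-zai.com/v1")  # placeholders

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="glm-4.5",  # assumed identifier
    messages=[{"role": "user", "content": "What's the weather in Shanghai right now?"}],
    tools=tools,
)

# If the model decides to call the tool, inspect the structured call it produced.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```

In a real workflow the application would execute get_weather itself, append the result as a tool message, and let the model continue, the kind of multi-step loop that a 128,000-token context window is meant to sustain.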


Business Insider
2 days ago
- Business
- Business Insider
China's Newest AI Model Is Cheaper than DeepSeek
Chinese tech startups are following in DeepSeek's footsteps by releasing new artificial intelligence models that are smarter and cheaper, according to CNBC. One of the most notable is Z.ai, formerly known as Zhipu, which on Monday revealed its GLM-4.5 model. The company says that this model costs less to use than DeepSeek's and works with agentic AI, which is a system that breaks large tasks into smaller steps for more accurate results. Z.ai is also making the model open-source so developers can freely download and use it.

What's interesting is that GLM-4.5 is roughly half the size of DeepSeek's competing model and requires only eight Nvidia (NVDA) H20 chips to run. For context, these are the specialized chips made for China to comply with U.S. export controls. CEO Zhang Peng told CNBC that the company already has enough computing power and does not need to buy more chips, although he declined to disclose the model's training costs. Nevertheless, Z.ai stated that it will charge $0.11 per million input tokens and $0.28 per million output tokens, which is significantly cheaper than DeepSeek's rates of $0.14 and $2.19, respectively.

It is worth noting that Z.ai's release adds to a wave of new open-source AI models from China. Indeed, earlier this month, Alibaba-backed Moonshot (BABA) introduced its Kimi K2 model that it claimed was better at coding than OpenAI's ChatGPT and Anthropic's Claude. Tencent (TCEHY) also released its HunyuanWorld-1.0 model for 3D game development, and Alibaba announced Qwen3-Coder for programming tasks. In addition, Z.ai has raised more than $1.5 billion from investors, which has led U.S. regulators to add the startup to their Entity List that limits American companies from working with it.

Which Tech Stock Is the Better Buy?

Turning to Wall Street, out of the three stocks mentioned above, analysts think that BABA stock has the most room to run. In fact, BABA's average price target of $151.08 per share implies more than 23% upside potential. On the other hand, analysts expect the least from NVDA stock, whose average price target of $184.91 equates to a gain of 5.6%.