
Latest news with #GPT‑4

GLM-4.5 vs DeepSeek: China's AI Cost War Just Got Personal

Economic Times

30-07-2025

  • Business
  • Economic Times

GLM-4.5 vs DeepSeek: China's AI Cost War Just Got Personal

At the World AI Conference in Shanghai, Z.ai (formerly Zhipu AI) launched its new open-source large language model, GLM‑4.5, and shook the market with a bold promise: cheaper, faster, and leaner than even China's current cost leader, DeepSeek. In a global race dominated by compute efficiency and token economics, the move marks a turning point. GLM‑4.5 is built with agentic intelligence in mind, able to autonomously break down and complete multi-step tasks with less redundancy. It requires just eight Nvidia H20 chips and is half the size of DeepSeek's R1 model, which was already considered a breakthrough in operational efficiency. CEO Zhang Peng claims no further chip scaling is needed, a sharp contrast to the GPU-hungry practices of Western competitors.

The cost efficiency is what's drawing the spotlight. Z.ai has priced GLM‑4.5 aggressively, dramatically undercutting DeepSeek's model and slashing costs compared with OpenAI's GPT‑4 or Gemini. That unlocks game-changing affordability for startups, product teams, and AI-driven platforms.

The launch also plays into China's broader strategic bet on open-source AI dominance. With over 1,500 LLMs developed to date, China is leveraging lower-cost compute, government support, and a model-sharing culture to put pressure on U.S. and European players.

Whether you're a startup building a SaaS tool, a product team testing conversational AI, or an enterprise scaling internal automation, GLM‑4.5 offers a high-performance, low-cost alternative to traditional Western LLMs. Developers can integrate it into chatbots, agents, document summarizers, or AI copilots using open-source APIs, without burning through compute budgets. Its agentic design means you can offload complex multi-step workflows, such as code generation, customer support, or data analysis, with higher efficiency. The lean GPU requirement lowers the barrier for self-hosting or deploying in resource-constrained environments. Ultimately, GLM‑4.5 enables rapid iteration, reduced inference costs, and greater flexibility, especially for teams operating under tight margins or looking to scale without vendor lock-in.

Even so, GLM‑4.5 raises a pivotal question: if high-quality AI can be built and deployed at a fraction of today's cost, what happens to the premium pricing strategies of the West? For budget-conscious developers and enterprises, the message is clear: value is shifting eastward.
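To illustrate the low-friction integration described above, here is a minimal, hypothetical Python sketch that calls GLM‑4.5 through an OpenAI-compatible chat endpoint. The base URL, model identifier, and authentication scheme are illustrative assumptions, not values confirmed by the article; consult the provider's documentation for the real ones.

```python
# Hypothetical sketch: calling GLM-4.5 through an OpenAI-compatible API.
# The base_url, model name, and auth below are placeholder assumptions;
# check the provider's documentation for the real endpoint and identifiers.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                      # assumed key-based auth
    base_url="https://example-glm-provider/v1",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="glm-4.5",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise document summarizer."},
        {"role": "user", "content": "Summarize: GLM-4.5 targets agentic, multi-step tasks."},
    ],
)
print(response.choices[0].message.content)
```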

Hackbots Accelerate Cyber Risk — And How to Beat Them

Arabian Post

10-06-2025

  • Arabian Post

Hackbots Accelerate Cyber Risk — And How to Beat Them

Security teams globally face mounting pressure as artificial‑intelligence‑driven 'hackbots' emerge as a new front in cyber warfare. These autonomous agents, powered by advanced large language models and automation frameworks, are increasingly capable of probing systems, identifying exploits, and in some instances launching attacks with minimal human intervention. Experts warn that, if left unchecked, hackbots could rapidly outpace traditional scanning tools and elevate the scale of cyber threats.

Hackbots combine the intelligence of modern LLMs—most notably GPT‑4—with orchestration layers that enable intelligent decision‑making: adapting test payloads, refining configurations, and parsing results. Unlike legacy scanners, these systems analyse target infrastructure and dynamically choose tools and strategies, often flagging novel vulnerabilities that evade conventional detection. Academic research demonstrates that GPT‑4 agents can autonomously perform complex operations like blind SQL injection and database schema extraction without prior specifications.

Corporate platforms have begun integrating hackbot capabilities into ethical hacking pipelines. HackerOne, for instance, now requires human review before any vulnerability submission, underscoring that hackbots remain tools under human supervision. Cybersecurity veteran Jack Nunziato explains: 'hackbots leverage advanced machine learning … to dynamically and intelligently hack applications', a leap forward from rigid automated scans. Such systems are transforming both offensive and defensive security landscapes.

Alongside legitimate use, underground markets are offering hackbots-as-a-service. Products like WormGPT and FraudGPT are being promoted on darknet forums, providing scripting and social‑engineering automation under subscription models. Though some users criticise their limited utility—one described WormGPT as 'just an old cheap version of ChatGPT'—the consensus is that even basic automation can significantly lower the barrier to entry into cybercrime. Security analysts caution that these services, even if imperfect, democratise attack capabilities and may increase the volume and reach of malicious campaigns.

While hackbots enable faster and more thorough scans, they lack human creativity. Modern systems depend on human-in-the-loop oversight, where experts validate results and craft exploit chains for end-to-end attacks. Yet the speed advantage is real: automated agents can tirelessly comb through code, execute payloads, and surface anomalies across large environments. One cybersecurity researcher noted hackbots are 'getting good, really good, at simulating … a curious, determined hacker'.

Defensive strategies must evolve rapidly to match this new threat. The UK's National Cyber Security Centre has warned that AI will likely increase both the volume and severity of cyberattacks. GreyNoise Intelligence recently reported that actors are increasingly exploiting long-known vulnerabilities in edge devices as defenders lag on patching—demonstrating how automation favours adversaries. Organisations must enhance their baseline defences to withstand hackbots, which operate at machine scale.

A multi-layered response is critical. Continuous scanning, hardened endpoint controls, identity‑centric solutions, and robust patch management programmes form the backbone of resilience. Privileged Access Management, especially following frameworks established this year, is being touted as indispensable. Likewise, advanced Endpoint Detection and Response and Extended Detection and Response platforms use AI defensively, applying behavioural analytics to flag suspicious activity before attackers can exploit high-velocity toolkits.

Legal and policy frameworks are also adapting. Bug bounty platforms now integrate hackbot disclosures under rules requiring human oversight, promoting ethical use while mitigating abuse. Security regulators and insurers are demanding evidence of AI-aware defences, particularly in critical sectors, aligning with risk-based compliance models.

Industry insiders acknowledge the dual nature of the phenomenon. Hackbots serve as force multipliers for both defenders and attackers. As one expert puts it, 'these tools could reshape how we defend systems, making it easier to test at scale … On the other hand, hackbots can … scale sophisticated attacks faster than any human ever could'. That tension drives the imperative: do not dismiss hackbots as exotic scanners that miss human logic; they already succeed at deploying exploitation at scale.

Recent breakthroughs in LLM‑powered exploit automation heighten the stakes. A February 2024 study revealed GPT‑4 agents autonomously discovering SQL vulnerabilities on live websites. With LLMs maturing rapidly, future iterations may craft exploit payloads, bypass filters, and compose stealthier attacks.

To pre‑empt this, defenders must embed AI strategies within security operations. Simulated red-team exercises should leverage hackbot‑style agents, exposing defenders to their speed and variety. Build orchestration workflows that monitor, sandbox, and neutralise test feeds. Maintain visibility over AI‑driven tooling across pipelines and supply chains.

Ethical AI practices extend beyond tooling. Security teams must ensure any in‑house or third‑party AI system has strict governance. That mandates access control, audit logging, prompt validation, and fallbacks to expert review. In contexts where hackbots are used, quarterly audits should verify compliance with secure‑by‑design frameworks.
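The human-in-the-loop requirement that runs through the article (automated agents find, humans approve) can be made concrete with a short sketch. The following Python is purely illustrative; every name in it is hypothetical and it does not correspond to any real platform's API.

```python
# Illustrative sketch of human-in-the-loop oversight for automated findings.
# All identifiers here are hypothetical; real platforms implement their own
# review workflows.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PENDING = "pending_human_review"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Finding:
    target: str
    description: str
    status: Status = Status.PENDING


@dataclass
class ReviewQueue:
    findings: list[Finding] = field(default_factory=list)

    def submit(self, finding: Finding) -> None:
        """Automated agents may only enqueue findings; they cannot approve."""
        self.findings.append(finding)

    def review(self, finding: Finding, analyst_approves: bool) -> None:
        """Only this human-driven step changes a finding's status."""
        finding.status = Status.APPROVED if analyst_approves else Status.REJECTED


queue = ReviewQueue()
finding = Finding("https://example.com", "Possible blind SQL injection in /search")
queue.submit(finding)
queue.review(finding, analyst_approves=True)  # analyst validates before disclosure
print(finding.status)  # Status.APPROVED
```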

Kali GPT Delivers AI Co‑Pilot for Penetration Testing

Arabian Post

08-06-2025

  • Arabian Post

Kali GPT Delivers AI Co‑Pilot for Penetration Testing

Kali GPT demonstrates a new frontier in cybersecurity, embedding an AI‑powered assistant directly into Kali Linux to streamline penetration testing for professionals and learners. Leveraging GPT‑4 architecture, the tool generates payloads, explains complex tools such as Metasploit and Nmap, and recommends relevant exploits—all within the terminal. For expert practitioners, the assistant accelerates assessments; for novices, it serves as an interactive mentor by translating technical concepts into clear guidance.

Advanced educational institutions are rapidly integrating Kali GPT into curricula, citing its capacity to present example‑driven instructions that engage students more effectively than traditional documentation. This trend enhances practical cybersecurity education, addressing the skill‑gap challenges facing the industry.

Core to Kali GPT's offering is its real‑time assistance: users receive immediate diagnostics for errors encountered in tools like Nmap, along with actionable remedies. It can also generate tailored Linux commands—such as identifying files larger than 100 MB—customised to the user's needs, cutting down the manual effort required.

A standout feature is adaptive learning. Kali GPT adjusts its guidance based on the user's proficiency level—offering foundational explanations to beginners while providing intricate technical depth to seasoned testers. This reduces time wasted searching through forums and documentation, acting as a dynamic mentor that evolves with the user.

Corporate users report that Kali GPT enhances efficiency during routine vulnerability scans and network audits. By automating repetitive tasks, it enables security teams to focus on complex threat scenarios and strategic decision‑making. Observers note that the AI assistant effectively democratises penetration testing by enabling a broader range of talent to contribute meaningfully to security assessments.

Despite its promise, expert voices caution that human oversight remains essential. Kali GPT might generate code that contains false positives or is improperly optimised. Developers stress that it complements—but does not replace—the deep technical understanding and judgement that cybersecurity professionals provide.

Kali GPT's emergence aligns with broader academic exploration of AI‑assisted security tooling. Three recent LLM‑powered systems highlight this trend. AutoPentest, based on GPT‑4o and LangChain, performs autonomous black‑box penetration tests, achieving task success rates comparable to manual ChatGPT use while offering cost efficiencies. PenTest++, introduced in February 2025, integrates generative AI for reconnaissance, scanning, exploitation, and reporting, with structured human validation at key points. Earlier, PentestGPT demonstrated substantial improvements—delivering over 228 percent better task completion on benchmarks compared with a GPT‑3.5 baseline. These developments reinforce Kali GPT's capacity to complement ongoing advances in AI‑enhanced penetration testing. However, the academic studies also flag limitations: LLMs often struggle to maintain holistic scaffolding across testing phases, and there is a risk of hallucinations or weak context‑awareness. Responsible deployment thus requires layered oversight.

A key benefit of tools like Kali GPT is their educational value. They elevate learning in a field traditionally dependent on self‑study and documentation. For novices, the ability to request practical walkthroughs—such as step‑by‑step deployment of an exploit or interpreting tool output—makes cybersecurity more approachable. Mid‑level professionals gain quicker troubleshooting, and experts reclaim time from routine scripting and documentation, focusing on analytical tasks.

Ethical considerations are integral to the tool's design. Kali GPT includes guidelines urging permission‑based operations and discouraging misuse. The developer community emphasises that the best outcomes require integration into structured, policy‑driven environments where outputs undergo review by qualified professionals.

Integration with Kali Linux—an extensively used open‑source, Debian‑based distribution hosting over 600 tools such as Aircrack‑ng, Metasploit, Nmap, Wireshark and Burp Suite—intensifies its impact. By embedding AI within an established toolkit, Kali GPT minimises workflow disruption, enabling users to stay within familiar terminals rather than switching contexts.

Industry analysts predict that such tools will accelerate the frequency and depth of security assessments. With AI assistance, organisations can conduct more thorough penetration tests at lower cost, increasing coverage across both internal and external environments. However, vigilance is needed: overreliance on AI may obscure the imperative to maintain up‑to‑date threat intelligence, deep systems knowledge, and manual oversight.

Community feedback highlights interest tempered by concern. One seasoned LinkedIn commentator described Kali GPT as 'a concrete example of how AI can become a powerful tool in penetration testing', while cautioning that its deployment requires 'awareness and responsibility'. Another user remarked that, on the one hand, the tool 'could speed up red team tasks and make advanced techniques more accessible', while 'on the other, it raises serious concerns around misuse, accuracy, and data exposure'.

In the academic realm, research such as PenTest++, AutoPentest and PentestGPT underscores a pattern: AI boosts efficiency, but project design must include ethical controls, regular evaluation, and human‑in‑the‑loop checkpoints. Ethical frameworks—such as clear use policies and expert validation—mitigate the risks of AI‑assisted errors or malicious deployment.

As cybersecurity professionals weigh its adoption, Kali GPT reinforces the evolving division of labour: AI handles routine code generation, troubleshooting, and command synthesis, while humans retain oversight, strategic thinking, and contextual judgement. The most effective deployments will integrate the tool into formal penetration‑testing protocols where professionals validate, adapt, and interpret AI output.
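As a concrete instance of the command synthesis described above, a request like 'identify files larger than 100 MB' would typically map to a shell one-liner such as `find / -type f -size +100M`. A rough Python equivalent, shown here only to illustrate the task itself, follows.

```python
# Illustrative Python equivalent of "find files larger than 100 MB";
# an assistant like the one described would synthesise the shell
# one-liner instead, e.g. `find / -type f -size +100M`.
import os

MIN_BYTES = 100 * 1024 * 1024  # 100 MB threshold


def files_larger_than(root: str, min_bytes: int = MIN_BYTES):
    """Yield paths under `root` whose size exceeds `min_bytes`."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > min_bytes:
                    yield path
            except OSError:
                continue  # skip files we cannot stat (permissions, races)


if __name__ == "__main__":
    for path in files_larger_than("/var/log"):
        print(path)
```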

Two Years After GPT-4 Broke the Internet, OpenAI Is Quietly Killing It

Yahoo

11-04-2025

  • Business
  • Yahoo

Two Years After GPT-4 Broke the Internet, OpenAI Is Quietly Killing It

OpenAI is retiring GPT-4, one of its most well-known AI models. Effective April 30, GPT-4 will be removed from the drop-down menu known as the "model picker" for ChatGPT Plus users. It will remain an option in the API. Those who use ChatGPT for free cannot select the model they use, so their experience won't change.

"GPT‑4 marked a pivotal moment in ChatGPT's evolution," OpenAI says. "We're grateful for the breakthroughs it enabled and for the feedback that helped shape its successor."

When GPT-4 debuted in March 2023, it was a noticeable improvement over ChatGPT's initial model, GPT-3.5. OpenAI spent over $100 million training GPT-4, TechCrunch reports. Some of that training included copyrighted materials, The New York Times argues in an ongoing lawsuit.

Just over a year later, in May 2024, the GPT-4o model debuted as the new flagship. Today, it "consistently surpasses GPT‑4 in writing, coding, STEM, and more," and delivers "even greater capability, consistency, and creativity," OpenAI says. Other available models include GPT-4 Turbo, o3-mini, o1, and GPT-4.5, which launched this year. There are also rumors of an upcoming GPT-4.1 model.

Having that many models can be confusing, OpenAI CEO Sam Altman admitted in February. He hopes to eventually do away with the model picker, which puts the onus on the user to select the appropriate model for the task, and move toward a "unified intelligence" solution in which the system picks.

All the models OpenAI offers are experiments and steps toward its next big release, GPT-5, which has proven difficult to achieve. OpenAI is battling GPU shortages, and those graphics cards aren't cheap. Last month, it closed a massive $40 billion funding round, which it says will "push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools." That will, presumably, include GPT-5.

When GPT-5 arrives, Altman says it will be available to free users on a "standard intelligence" setting. Plus subscribers ($20/month) will be able to run it at "a higher level of intelligence," and Pro subscribers ($200/month) at an "even higher level of intelligence" that includes Voice mode, Canvas, Search, Deep Research, and other OpenAI products.
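Because GPT-4 remains available through the API after leaving ChatGPT's model picker, developers who depend on its exact behavior can keep pinning it explicitly. Here is a minimal sketch using OpenAI's official Python client; it assumes an API key in the environment, and continued availability of the "gpt-4" identifier is per the article, not a guarantee.

```python
# Minimal sketch: pinning the legacy GPT-4 model through the API after its
# removal from ChatGPT's model picker. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # explicit pin to the legacy model
    messages=[{"role": "user", "content": "Summarize GPT-4's retirement plan."}],
)
print(response.choices[0].message.content)
```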

OpenAI will soon phase out GPT-4 from ChatGPT

Yahoo

11-04-2025

  • Business
  • Yahoo

OpenAI will soon phase out GPT-4 from ChatGPT

OpenAI will soon retire GPT-4, an AI model it launched over two years ago, from ChatGPT, according to a changelog posted on Thursday. Effective April 30, GPT-4 will be "fully replaced" by GPT-4o, the current default model in ChatGPT, OpenAI said. GPT-4 will remain available for use via OpenAI's API.

"In head‑to‑head evaluations, [GPT-4o] consistently surpasses GPT‑4 in writing, coding, STEM, and more," wrote OpenAI in the changelog. "Recent upgrades have further improved GPT‑4o's instruction following, problem solving, and conversational flow, making it a natural successor to GPT‑4."

GPT-4 was rolled out in March 2023 for ChatGPT and Microsoft's Copilot chatbot on the web. Several versions of GPT-4 had multimodal capabilities, allowing them to understand both images and text — a first for a widely deployed OpenAI model. OpenAI CEO Sam Altman has said that GPT-4, reportedly massive in size, cost more than $100 million to train. It was succeeded by GPT-4 Turbo in November 2023, a faster and cheaper model.

GPT-4 is one of the models at the heart of copyright disputes between OpenAI and publishers, including The New York Times. Publishers allege that OpenAI trained GPT-4 on their data without their knowledge or consent. OpenAI claims that the fair use doctrine shields it from liability.

GPT-4's coming retirement will likely follow the release of new models in ChatGPT. According to reverse engineer Tibor Blaho, OpenAI is readying a family of models called GPT-4.1 — GPT-4.1-mini, GPT-4.1-nano, and GPT-4.1 — as well as the o3 "reasoning" model the company announced in December, and a new reasoning model called o4-mini.
