Latest news with #GPT‑4


Arabian Post
12 hours ago
Kali GPT Delivers AI Co‑Pilot for Penetration Testing
Kali GPT demonstrates a new frontier in cybersecurity, embedding an AI-powered assistant directly into Kali Linux to streamline penetration testing for professionals and learners. Leveraging the GPT-4 architecture, the tool generates payloads, explains complex tools such as Metasploit and Nmap, and recommends relevant exploits, all within the terminal. For expert practitioners, the assistant accelerates assessments; for novices, it serves as an interactive mentor, translating technical concepts into clear guidance.

Educational institutions are rapidly integrating Kali GPT into curricula, citing its capacity to present example-driven instructions that engage students more effectively than traditional documentation. This trend strengthens practical cybersecurity education, addressing the skills gap facing the industry.

Core to Kali GPT's offering is real-time assistance: users receive immediate diagnostics for errors encountered in tools like Nmap, along with actionable remedies. It can also generate tailored Linux commands, such as identifying files larger than 100 MB, customised to the user's needs and cutting down the manual effort required.

A standout feature is adaptive learning. Kali GPT adjusts its guidance to the user's proficiency level, offering foundational explanations to beginners while providing intricate technical depth to seasoned testers. This reduces time wasted searching through forums and documentation, acting as a dynamic mentor that evolves with the user.

Corporate users report that Kali GPT enhances efficiency during routine vulnerability scans and network audits. By automating repetitive tasks, it enables security teams to focus on complex threat scenarios and strategic decision-making. Observers note that the AI assistant effectively democratises penetration testing by enabling a broader range of talent to contribute meaningfully to security assessments.
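The article's "files larger than 100 MB" example maps to a standard `find` invocation. The sketch below illustrates the kind of one-liner such an assistant might produce; the scratch directory and file names are hypothetical, used only to make the snippet self-contained:

```shell
# Set up a hypothetical scratch directory with one large and one small file.
demo_dir=/tmp/kali_gpt_demo
mkdir -p "$demo_dir"
dd if=/dev/zero of="$demo_dir/big.bin" bs=1M count=101 2>/dev/null
printf 'small file' > "$demo_dir/small.txt"

# The kind of command an assistant might generate:
# list regular files strictly larger than 100 MiB under a directory.
find "$demo_dir" -type f -size +100M
```

Only `big.bin` is printed; note that `find`'s `M` suffix counts mebibytes (1,048,576 bytes), so `-size +100M` matches files strictly larger than 100 MiB rather than 100,000,000 bytes.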
Despite its promise, experts caution that human oversight remains essential. Kali GPT may generate code that contains false positives or is poorly optimised. Developers stress that it complements, but does not replace, the deep technical understanding and judgement that cybersecurity professionals provide.

Kali GPT's emergence aligns with broader academic exploration of AI-assisted security tooling. Three recent LLM-powered systems highlight this trend. AutoPentest, based on GPT-4o and LangChain, performs autonomous black-box penetration tests, achieving task success rates comparable to manual ChatGPT use while offering cost efficiencies. PenTest++, introduced in February 2025, integrates generative AI for reconnaissance, scanning, exploitation, and reporting, with structured human validation at key points. Earlier, PentestGPT demonstrated substantial improvements, delivering over 228 percent better task completion on benchmarks compared with GPT-3.5. These developments reinforce Kali GPT's capacity to complement ongoing advances in AI-enhanced penetration testing. However, the academic studies also flag limitations: LLMs often struggle to maintain a holistic view across testing phases, and there is a risk of hallucination or weak context-awareness. Responsible deployment therefore requires layered oversight.

A key benefit of tools like Kali GPT is their educational value. They elevate learning in a field traditionally dependent on self-study and documentation. For novices, the ability to request practical walkthroughs, such as step-by-step deployment of an exploit or interpretation of tool output, makes cybersecurity more approachable. Mid-level professionals gain quicker troubleshooting, and experts reclaim time from routine scripting and documentation to focus on analytical tasks.

Ethical considerations are integral to the tool's design. Kali GPT includes guidelines urging permission-based operation and discouraging misuse.
The developer community emphasises that the best outcomes require integration into structured, policy-driven environments where outputs undergo review by qualified professionals.

Integration with Kali Linux, an extensively used open-source, Debian-based distribution hosting over 600 tools such as Aircrack-ng, Metasploit, Nmap, Wireshark, and Burp Suite, amplifies its impact. By embedding AI within an established toolkit, Kali GPT minimises workflow disruption, enabling users to stay within familiar terminals rather than switching contexts.

Industry analysts predict that such tools will increase the frequency and depth of security assessments. With AI assistance, organisations can conduct more thorough penetration tests at lower cost, increasing coverage across both internal and external environments. However, vigilance is needed: overreliance on AI may obscure the need to maintain up-to-date threat intelligence, deep systems knowledge, and manual oversight.

Community feedback reflects interest tempered by concern. One seasoned LinkedIn commentator described Kali GPT as 'a concrete example of how AI can become a powerful tool in penetration testing,' while cautioning that its deployment requires 'awareness and responsibility'. Another user remarked that the tool 'could speed up red team tasks and makes advanced techniques more accessible' on the one hand, while on the other 'it raises serious concerns around misuse, accuracy, and data exposure'.

In the academic realm, research such as PenTest++, AutoPentest, and PentestGPT underscores a pattern: AI boosts efficiency, but project design must include ethical controls, regular evaluation, and human-in-the-loop checkpoints. Ethical frameworks, such as clear use policies and expert validation, mitigate the risks of AI-assisted errors or malicious deployment.
As cybersecurity professionals weigh its adoption, Kali GPT reinforces the evolving division of labour: AI handles routine code generation, troubleshooting, and command synthesis, while humans retain oversight, strategic thinking, and contextual judgement. The most effective deployments will integrate the tool into formal penetration‑testing protocols where professionals validate, adapt, and interpret AI output.
Yahoo
11-04-2025
- Business
Two Years After GPT-4 Broke the Internet, OpenAI Is Quietly Killing It
OpenAI is retiring GPT-4, one of its most well-known AI models. Effective April 30, GPT-4 will be removed from the drop-down menu known as the "model picker" for ChatGPT Plus users. It will remain an option in the API. Free ChatGPT users cannot select which model they use, so their experience won't change.

"GPT‑4 marked a pivotal moment in ChatGPT's evolution," OpenAI says. "We're grateful for the breakthroughs it enabled and for the feedback that helped shape its successor."

When GPT-4 debuted in March 2023, it was a noticeable improvement over ChatGPT's initial model, GPT-3.5. OpenAI spent over $100 million training GPT-4, TechCrunch reports. Some of that training included copyrighted materials, The New York Times argues in an ongoing lawsuit.

Two years later, the GPT-4o model debuted as the new flagship. Today, it "consistently surpasses GPT‑4 in writing, coding, STEM, and more," and delivers "even greater capability, consistency, and creativity," OpenAI says. Other available models include GPT-4 Turbo, o3, o1, and GPT-4.5, which launched this year. There are also rumors of an upcoming GPT-4.1 model.

Having that many models can be confusing, OpenAI CEO Sam Altman admitted in February. He hopes to eventually do away with the model picker, which puts the onus on the user to select the appropriate model for the task, and move toward a "unified intelligence" solution in which the system picks.

All the models OpenAI offers are experiments and steps toward its next big release, GPT-5, which has proven difficult to achieve. OpenAI is battling GPU shortages, and those graphics cards aren't cheap. Last month, it closed a massive $40 billion funding round, which it says will "push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools."
That will, presumably, include GPT-5. When GPT-5 arrives, Altman says it will be available to free users on a "standard intelligence" setting. Plus subscribers ($20/month) will be able to run it at "a higher level of intelligence," and Pro subscribers ($200/month) at an "even higher level of intelligence" that includes Voice mode, Canvas, Search, Deep Research, and other OpenAI products.
Yahoo
11-04-2025
- Business
OpenAI will soon phase out GPT-4 from ChatGPT
OpenAI will soon retire GPT-4, an AI model it launched over two years ago, from ChatGPT, according to a changelog posted on Thursday. Effective April 30, GPT-4 will be "fully replaced" by GPT-4o, the current default model in ChatGPT, OpenAI said. GPT-4 will remain available via OpenAI's API.

"In head‑to‑head evaluations, [GPT-4o] consistently surpasses GPT‑4 in writing, coding, STEM, and more," OpenAI wrote in the changelog. "Recent upgrades have further improved GPT‑4o's instruction following, problem solving, and conversational flow, making it a natural successor to GPT‑4."

GPT-4 rolled out in March 2023 for ChatGPT and Microsoft's Copilot chatbot on the web. Several versions of GPT-4 had multimodal capabilities, allowing them to understand both images and text, a first for a widely deployed OpenAI model. OpenAI CEO Sam Altman has said that GPT-4, reportedly massive in size, cost more than $100 million to train. It was succeeded in November 2023 by GPT-4 Turbo, a faster and cheaper model.

GPT-4 is one of the models at the heart of copyright disputes between OpenAI and publishers, including The New York Times. Publishers allege that OpenAI trained GPT-4 on their data without their knowledge or consent; OpenAI claims the fair-use doctrine shields it from liability.

GPT-4's retirement will likely follow the release of new models in ChatGPT. According to reverse engineer Tibor Blaho, OpenAI is readying a family of models called GPT-4.1 (GPT-4.1, GPT-4.1-mini, and GPT-4.1-nano), as well as the o3 "reasoning" model the company announced in December and a new reasoning model called o4-mini.