logo
#

Latest news with #autonomous

The Secret ‘Data War' is Happening Now — And Elon Musk's Dojo May Be the Most Advanced Weapon in It
The Secret ‘Data War' is Happening Now — And Elon Musk's Dojo May Be the Most Advanced Weapon in It

Yahoo

time17 hours ago

  • Business
  • Yahoo

The Secret ‘Data War' is Happening Now — And Elon Musk's Dojo May Be the Most Advanced Weapon in It

WASHINGTON, June 01, 2025 (GLOBE NEWSWIRE) -- Everyone's chasing AI. But according to former hedge fund manager Enrique Abeyta, it's not the models or the algorithms that matter most. It's the data. Abeyta explains in his recent briefing how Elon Musk's Dojo platform may be the first system designed from the ground up to collect, process, and dominate real-world visual data at scale — turning cars into cameras, roads into training grounds, and movement itself into leverage. 'This is how power works now,' Abeyta says. 'He who owns the data writes the rules.' A Global Eye That Never Stops Learning Dojo is fueled by Tesla's fleet — which now collects 160 billion frames of real-world video per day. That data is piped into a custom-built chip that's reportedly 6x faster than Nvidia's best AI processor, enabling Dojo to learn, correct, and evolve without human supervision. And on June 1st, Tesla plans to activate the first fully autonomous robotaxi — with no wheel, no pedals, and no driver. Every mile driven, every stop sign seen, every erratic pedestrian encountered becomes another training loop — refining the AI, sharpening the system, and growing Musk's private dataset of the real world. From Road Maps to Strategic Maps Abeyta doesn't see this as a tech story. He sees it as a territory story. Just as satellites mapped the earth for governments, and Google mapped it for search engines, Dojo is mapping it for machines — in real time, with full feedback, at street level. That's why, he argues, this system is no longer just a Tesla project. One of the companies involved in its development is now 'expecting to receive billions of dollars from the Trump administration,' according to internal briefings. 'If you can control what AI sees, you can control how it behaves — that's the endgame here,' Abeyta notes. The Arms Race Isn't Code. It's Collection. While others race to build the smartest models, Musk is building the largest sensory field in the world — one that never sleeps, always watches, and trains itself constantly. Abeyta warns that the world may be entering a new kind of arms race — not over weapons, but over the training environments machines use to understand the world. About Enrique Abeyta Enrique Abeyta is a former hedge fund manager with 25 years of experience following macro shifts in capital, power, and infrastructure. After managing nearly $4 billion in institutional capital, he now runs Breaking Profits, a research platform focused on identifying the hidden systems behind the next wave of global control — from AI to autonomy to state-aligned tech. Media Contact:Derek WarrenPublic Relations ManagerParadigm Press GroupEmail: dwarren@ in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

Tesla's Robotaxis Won't Be Driving Alone — Humans Will Be Watching
Tesla's Robotaxis Won't Be Driving Alone — Humans Will Be Watching

Auto Blog

time3 days ago

  • Automotive
  • Auto Blog

Tesla's Robotaxis Won't Be Driving Alone — Humans Will Be Watching

Tesla plans to launch its first autonomous ride-hailing service in Austin next month. But behind the scenes, human operators will be keeping the cars in check. Tesla's self-driving dream is getting human backup Tesla's long-promised fleet of self-driving robotaxis is finally arriving — sort of. CEO Elon Musk says the company will launch a small pilot of driverless Model Ys in Austin, Texas, by the end of June. But rather than letting the cars roam entirely on their own, Tesla is quietly hiring human teleoperators to monitor and, if necessary, take control of the vehicles remotely. 0:01 / 0:09 2025 Nissan Z undercuts Toyota Supra by a surprising amount Watch More This hybrid approach isn't unusual in the autonomous vehicle industry, but it stands in contrast to Tesla's public messaging around 'Full Self-Driving.' While Musk has long dismissed the need for external sensors like LIDAR and radar, calling Tesla's camera-based AI system sufficient, the company is still relying on human assistance to address real-world challenges. Remote operators will ensure safety, for now Investment analysts at Morgan Stanley recently confirmed that Tesla is relying heavily on human oversight, citing visits to the company's Palo Alto office. Several job postings back this up, including roles for 'Teleoperation' engineers who will help create tools that allow humans to see what the car sees and intervene when things go wrong. Tesla will use its Model Y as a launching platform for its robotaxi tech. — Source: Tesla These workers won't just be steering stuck robotaxis from afar. Tesla says they'll also help build the user interface that connects remote humans with onboard AI — essentially shaping the collaboration between man and machine. Still no safety data, and regulators are watching Tesla hasn't yet released safety data for its Full Self-Driving (Supervised) software, which is still under federal scrutiny. Both Autopilot and FSD have been linked to hundreds of crashes, including some fatal ones. The company maintains that its approach will prove safer than human drivers in the long run, but for now, it's taking extra precautions. Autoblog Newsletter Autoblog brings you car news; expert reviews and exciting pictures and video. Research and compare vehicles, too. Sign up or sign in with Google Facebook Microsoft Apple By signing up I agree to the Terms of Use and acknowledge that I have read the Privacy Policy . You may unsubscribe from email communication at anytime. The Austin rollout will start with just 10 vehicles and be limited to an invite-only group of users. Musk says Tesla plans to scale up to 1,000 robotaxis within a few months, assuming the pilot runs smoothly. Tesla Model Y, equipped with FSD system. Three front-facing cameras under the windshield near rear view mirror. — Source: Mark Leong/TheHuman-in-the-loop is becoming the norm Waymo/Hyundai IONIQ 5 — Source: Waymo Tesla's teleoperation setup now looks surprisingly similar to what Waymo, a longtime autonomous driving competitor, already uses. Waymo employs 'fleet response agents' who step in when its cars get confused. Tesla's version may go even further, using virtual reality rigs and custom software to give operators a real-time view and control of the car's surroundings. Final thoughts Tesla's upcoming robotaxi rollout may mark a major milestone for the company, but it also underscores how far we still are from truly autonomous driving. Despite bold claims and years of promises, even Tesla is leaning on human intervention to make its system viable in the real world. That's not necessarily a setback; it's a pragmatic step. As the line between autonomy and remote assistance blurs, Tesla joins a growing list of companies accepting that 'self-driving' often still requires a safety net. Whether this hybrid model is a short-term bridge or a long-term necessity remains to be seen. For now, the road to autonomy still has a human hand on the wheel, just not inside the car.

Hope AI Wants To Replace Your Dev Team — But Not How You Think
Hope AI Wants To Replace Your Dev Team — But Not How You Think

Forbes

time3 days ago

  • Business
  • Forbes

Hope AI Wants To Replace Your Dev Team — But Not How You Think

Agentic AI tools like Hope AI are promising to build entire applications autonomously, but are they ... More replacing developers or redefining them? When Ran Mizrahi launched Bit Cloud in 2014, the idea that AI could design and deploy full-scale enterprise software wasn't something that anyone in the industry was really talking about. At best, it sounded like a pipe dream. Over a decade later, Bit Cloud claims to have built exactly that. Its newest agent, Hope AI, doesn't just write code. It promises to do what once took engineering teams months: design complete system architectures, assemble reusable components and generate production-grade enterprise applications. But as bold as that sounds, it's the kind of promise that triggers equal parts excitement and existential dread across the developer world: If AI can really build the software, what happens to the people who used to? Hope AI positions itself not as another AI assistant but as an autonomous software architect. Unlike tools like GitHub Copilot or Amazon CodeWhisperer, which help engineers write small pieces of code, Hope AI claims to design entire applications from scratch. Built atop Bit's composable software platform — already used by over 500 companies including AT&T, Moody's, and Red Bull — Hope AI supports modern stacks like React, TypeScript, MongoDB and GraphQL. It also integrates with the Model Context Protocol (MCP), an open standard that allows AI models to interact with dev tools more seamlessly. 'Hope AI functions as an intelligent software architect,' Mizrahi said in the company's press release. 'It leverages existing, proven components to compose professional and practical software solutions, enabling consistency and simplifying long-term maintainability.' Hope AI enters the market as agentic AI gains traction across the tech industry. Gartner recently named agentic AI one of its Top Strategic Technology Trends for 2025, while a recent IBM report suggests AI agents are already reshaping software engineering by offloading routine coding tasks. But excitement around agentic AI also comes with unease. In a fireside chat at NTT Research's Upgrade Summit back in April, Naveen Rao, VP of AI at Databricks, expressed a more cautious perspective on the current state of AI agents. Rao noted that while AI adoption is increasing, particularly in productivity tools like coding assistants, true autonomous AI agents are still years away. He advised enterprises to 'focus on targeted, measurable AI projects while anticipating future advances in user interfaces, reliability and self-learning systems.' Security is another big concern. As Brian Roche noted in an article for Veracode, while AI assistants like GitHub's Copilot help developers write code faster, AI-generated code is often insecure or fails basic vulnerability checks. Startups relying too heavily on agentic tools may ship faster but they risk deploying flawed, even dangerous software. And as Robert Lemos wrote for Dark Reading, most developers have adopted AI assistants to help with coding and improve output, but most are also creating vulnerabilities that take longer to remediate. AI agents trained to optimize for speed and output often miss critical business logic, edge cases and compliance checks. This could result in code that works on the surface but fails under real-world conditions. Worse, the growing trend of outsourcing core development to AI could erode internal expertise. Companies may end up with software they don't fully understand and can't maintain without the same AI tools that created it. The economic implications are just as thorny. Recently, in what Axios described as 'a white-collar bloodbath,' Anthropic CEO Dario Amodei warned that AI could eliminate up to 50% of entry-level white-collar jobs in the next five years. That includes junior software engineers, QA testers and documentation writers — the exact roles companies often staff to support large projects. Hope AI's capabilities may accelerate that trend. But not everyone believes developers will become obsolete. 'Writing code is only one piece of building great software,' said Funso Richard, information security executive at Karysburg, in an interview. 'You still need people who understand the user, can collaborate across teams and know how to think critically about design and outcomes,' Richard told me. He added that in the age of AI, 'empathy and communication' will matter more than raw technical skills. For Teddie Wardi, managing director at Insight Partners and an investor in Bit Cloud, Hope AI 'marks a transformational leap in how software is built, accelerating both time-to-market and long-term maintainability for modern teams.' Rather than eliminating developers, tools like Hope AI might redefine what they do. If these tools continue to evolve, coders may act less like builders and more like curators, overseeing AI agents, validating architectural choices and enforcing compliance and ethics. As Mizrahi noted, it's not a replacement but a powerful tool that enables developers to build complex applications in a few hours. 'Think of this as moving from laborer to supervisor,' Mizrahi explained. 'Developers will increasingly oversee AI agents, review their output and guide software architecture at a higher level.' This evolution means software engineering education may also need to change. Future developers will need training in AI model evaluation, prompt engineering, secure integration, and agent orchestration. Already, some computer science programs are introducing AI coding agents into the curriculum. But companies hoping to rely entirely on AI agents could also face harsh lessons if they neglect such human guardrails, as Mizrahi explained. Hope AI may very well become the prototype for a new generation of intelligent dev agents. But for now, it also serves as a mirror — forcing companies, coders and customers to ask what we want software development to look like in the AI era. More autonomous? More efficient? Less human? Or just human in a different way? What's certain, though, is that the future of development won't just be written in code. It will be negotiated between humans and the agents they build to assist them.

New Challenge Offers Cash For Drone Navigation Without Satellites
New Challenge Offers Cash For Drone Navigation Without Satellites

Forbes

time4 days ago

  • Business
  • Forbes

New Challenge Offers Cash For Drone Navigation Without Satellites

Competitors at the previous SPRIND Autonomous Drone Challenge A new technology challenge backed by the German government offers up to half a million Euros ($569k) each to teams developing precise drone navigation without satellites. The particular applications are last-mile delivery and search and rescue, but this is a dual-use technology which may also be used for defense applications. The challenge is needed because So far the market has so far failed to deliver this crucial capability. 'To start, technologically, it's really hard,' Jano Costard, Head of Challenges at SPRIND is Germany's Federal Agency for Breakthrough Innovation, told me. 'In military applications, such as those in Ukraine, pragmatic alternatives like fiber optic drones have been adopted because immediate functionality is critical. In the civilian space, the commercial use of drones in a widespread manner has not been realized yet.' This 2025-2026 Fully Autonomous Flight 2.0 Challenge is the latest funding phase of a continuing program to encourage multinational innovation in autonomous drones. The SPRIND effort aims to develop fully autonomous drones Satellite navigation, and GPS in particular, have become ubiquitous. GPS circuitry is tiny and costs just a few dollars, and satellite navigation by phone has become so much a part of life that the paper maps, street directories, and road atlases familiar to the older generation have virtually disappeared. Europe spent an estimated 10 billion Euros and 17 years on its Galileo satellite navigation system, an alternative to the GPS NavStar constellation operated by the U.S. military which took 20 years and cost around $18 billion – plus another billion or two a year to maintain. But recent events have made it clear that satellite navigation no longer cuts it. The faint signal from navigation satellites is easy to jam. Repeated warnings about the dangers of SatNav jamming were more or less ignored until the conflict in Ukraine. Widespread jamming by both sides has made satellite navigation almost useless. The Russians have repeatedly upgraded their Shahed drones to cope with Ukrainian jammers, and Russian jamming has rendered even military-grade GPS in U.S.-supplied guided bombs and artillery rounds ineffective. Drones need alternate means of navigation. However, there are some promising technologies, and the SPRIND challenge should help commercialize them. In fact, it is going a step further: rather than just finding a point on the map, it aims to develop sensors to find objects such as people on the ground. 'The competition focuses on complete autonomous drone systems, encompassing both navigation capabilities and sensors,' says Costard. 'Teams must demonstrate integrated system performance in real-world scenarios, specifically addressing the search and rescue and last-mile delivery use cases that form the core of this challenge.' Drone approaching a target flag in the previous SPRIND challenge For the last-mile delivery drop, the drone has to land on an exact spot to simulate parcel delivery. The location will be given half an hour in advance, but finding the exact spot requires intelligent decision making by the autonomous drone without human assistance. Costard says that the description may be something like 'House number 5, in front of the green dustbin' or 'Place the parcel next to the bicycle leaning against the front door.' This is not just navigation but will require an LLM or similar to understand the requirements. For the search and rescue mission, the teams will be given a specified search area, again 30 minutes before mission. The drone is required to find and give the precise location of static objects, people or dummies. In a more advanced level, the subject will be moving, and the drone has to follow them and relay back their location in real time for ten minutes. The basic technology for satellite-free navigation goes way back. The U.S Tomahawk cruise missile, developed before GPS was available, had both Terrain Contour Matching, using a radar to follow the shape of the land below and Digital Scene Matching Area Correlation to identify the target area. But modern systems are far more capable and far cheaper than this 1980s technology. The contest is open to teams from industry and universities A wide range of visual navigation systems for drones have emerged in Ukraine and elsewhere, such as Palantir's VNav, the Raptor Guide recently released by Maxar or the attack drones supplied to Ukraine by Lithuanian company Granta Autonomy. We may see off-the-shelf solutions contending from companies, universities, research institutions, and technology incubators, all of whom are invited to compete. However, this is basically a European competition. The many possible Chinese contenders will be excluded, as need to be headquartered in Europe – technically anywhere in the EU, United Kingdom, Switzerland or Israel (which Costard says is 'closely connected to the European innovation ecosystem.') U.S. companies with European office may also be eligible, provided substantial work is conducted in Europe. 'Merely maintaining a postal address is insufficient—we require evidence of significant team presence located within Europe,' says Costard. In addition, applicants cannot already have received other public funding for their contribution to the challenge. Teams must apply via SPRIND by Monday 9th June. On 26th and 27th June 2025, shortlisted teams will pitch for funding at an event in Leipzig, Germany. Those selected for funding will embark on a 14-month development and testing process. Challenges offer a means to accelerate drone technology The challenge has two stages. In the first, teams will each receive up to 150,000 Euros over seven months, with funding based on projected costs. They will then be evaluated in January 2026, and the ten strongest contenders will receive up to 350,000 Euros for the second stage. Compared to the tens of billions spent on satellite navigation, the Fully Autonomous Flight 2.0 Challenge is a low-cost, short-term project. But the potential benefits, for both civil and military operators, are gigantic. Simply opening up reliable, efficient drone delivery services – promised for over ten years – would be a huge breakthrough. Europe could be the first to exploit those benefits. 'As historical ties of cooperation weaken globally, it is critical that Europe leverages the brightest innovators across national borders,' says Costard. 'Dual-use applications of breakthrough technologies, such as those that the latest phase of our autonomous flight Challenge will fund, reflect the growing need for continental collaboration in both economic and security matters.'

Why AI Is The New Cybersecurity Battleground
Why AI Is The New Cybersecurity Battleground

Forbes

time5 days ago

  • Business
  • Forbes

Why AI Is The New Cybersecurity Battleground

AI has evolved from a tool to an autonomous decision-maker, reshaping the landscape of cybersecurity ... More and demanding innovative defense strategies. Artificial intelligence has quickly grown from a capability to an architecture. As models evolve from backend add-ons to the central engine of modern applications, security leaders are facing a new kind of battlefield. The objective not simply about protecting data or infrastructure—it's about securing the intelligence itself. In this new approach, AI models don't just inform decisions—they are decision-makers. They interpret, respond, and sometimes act autonomously. That shift demands a fundamental rethink of how we define risk, build trust, and defend digital systems. Historically, enterprise software was built in layers: infrastructure, data, logic, and presentation. Now, there's a new layer in the stack—the model layer. It's dynamic, probabilistic, and increasingly integral to how applications function. Jeetu Patel, EVP and GM of security and collaboration at Cisco, described this transformation to me in a recent conversation: 'We are trying to build extremely predictable enterprise applications on a layer of the stack which is inherently unpredictable.' That unpredictability is not a flaw—it's a feature of large language models and generative AI. But it complicates traditional security assumptions. Models don't always produce the same output from the same input. Their behavior can shift with new data, fine-tuning, or environmental cues. And that volatility makes them harder to defend. As AI becomes more central to application workflows, it also becomes a more attractive target. Attackers are already exploiting vulnerabilities through prompt injection, jailbreaks, and system prompt extraction. And with models being trained, shared, and fine-tuned at record speed, security controls struggle to keep up. Patel pointed out that most enterprises take six to nine months to validate a model, but models themselves may only be relevant for three to six months. The math doesn't work. More models also means more inconsistency—each with different safety thresholds, behaviors, and guardrails. This patchwork of protections creates gaps. The only way forward, Patel argued, is 'a common substrate for security and safety across all models, all agents, all applications, all clouds.' Given the speed and sophistication of modern threats, legacy QA methods aren't enough. Patel emphasized that red teaming must evolve into something automated and algorithmic. Security needs to shift from periodic assessments to continuous behavioral validation. He described one such approach as 'the game of 1,000 questions'—an automated interrogation method that probes a model's responses for signs of compromise. This kind of adaptive red teaming reveals how models might be tricked into unsafe behavior through indirect or deceptive prompts. 'We literally jailbroke DeepSeek 100% of the time with the top 50 benchmark prompts,' he noted, 'while OpenAI only broke 26% of the time.' That kind of differential risk highlights the need for a standard, cross-model framework for runtime enforcement. Models can't be treated as black boxes—they must be monitored, validated, and guided in real time. The risk doesn't stop at outputs. With the rise of agentic AI—where models autonomously complete tasks, call APIs, and interact with other agents—the complexity multiplies. Security must now account for autonomous systems that make decisions, communicate, and execute code without human intervention. Patel warns that inter-agent communication creates new threat vectors, as models pass data and instructions between themselves. Without oversight, these interactions could amplify vulnerabilities or obscure malicious activity. This trend is accelerating. By next year, we could see widespread deployment of agents that complete multi-step workflows with minimal human input. Securing these systems will require a blend of visibility, behavioral heuristics, and real-time enforcement—at a scale the industry has never attempted before. 'As AI gets smarter and more independent, the stakes for keeping it secure get much higher. We have to change how we think about risks and act faster than before,' cautioned Russell Fishman, senior director, global head of solutions product management for AI and modern workloads at NetApp. 'This includes giving close attention to data provenance—ensuring we have visibility into, security of, and confidence in the data used to fine-tune and re-train models, as well as the information driving real-time inference. By tracking and securing this entire 'chain of trust,' we can minimize the risks tied to suboptimal agent responses and protect against increasingly sophisticated attack vectors." Patel warns that if every model, platform, and enterprise rolls out its own unique security framework, we're heading toward chaos. What's needed is a shared infrastructure—a neutral, interoperable foundation for AI security that spans clouds, vendors, and models. Recognizing this, Cisco announced the launch of Foundation AI at RSAC 2025—a significant step towards democratizing AI security. Foundation AI is presented as the first open-source reasoning model specifically designed to enhance security applications. By making this model openly available, Cisco aims to foster a community-driven approach to securing AI systems, encouraging collaboration across the industry to address the complex challenges posed by AI integration. The introduction of Foundation AI represents a broader industry trend towards open collaboration in AI security. By contributing to the open-source community, Cisco is not only addressing the immediate security concerns associated with AI but also setting a precedent for other organizations to follow suit in fostering transparency and collective problem-solving in the AI era. Despite AI's power, it doesn't replace human intuition. Patel emphasized that even advanced models struggle to replicate instinct, nuance, and non-verbal reasoning. 'Most of the things you and I engage on,' he said, 'have some level of data—but then a lot of judgment.' The best systems will be those that augment human expertise, not replace it. We still need people to ask the right questions, interpret the right signals, and make the right calls—especially when AI's recommendations veer into gray areas. Much like using GPS in a city you already know, humans must retain the ability to validate, override, and refine machine-generated suggestions. AI should be a co-pilot, not an autopilot. As organizations embed intelligence deeper into their systems, they must also embed trust. That means building models that are accountable. It means validating behavior continuously, not just at release. And it means working together—across companies, disciplines, and platforms—to ensure that AI enhances security without becoming its own liability. Fishman summed up, "Real-time monitoring, smarter guardrails, and cross-industry collaboration—with transparency at every step—are essential to building trust in AI and safeguarding our digital world.' AI is already transforming the cybersecurity landscape. The question is whether we can secure that transformation in time. The intelligence layer is here. It's powerful. And it's vulnerable. Now is the moment to reimagine what security looks like when intelligence is everywhere.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store