Latest news with #Vercel


Axios
6 days ago
- Business
- Axios
ChatGPT unlocks its next level with OpenAI's major GPT-5 update
OpenAI's long-awaited GPT-5 — a major update to the technology underlying ChatGPT — is a faster, more capable large language model that CEO Sam Altman said gives users instant access to a PhD-level expert on any subject.

Why it matters: The Thursday release of GPT-5 brings together traditional and "reasoning" models and ups the ante in the race toward so-called artificial general intelligence (AGI).

Driving the news: OpenAI is making the new model available Thursday for free and paid users, with enterprise and educational customers getting access starting next week. GPT-5 will have fewer hallucinations and be better at coding and health-related tasks, among other improvements, the company said.

Yes, but: Free ChatGPT users will have a limited capacity for GPT-5 use. Once they hit their limit, they'll be switched to a smaller GPT-5 mini model. Paying subscribers will be able to use GPT-5 as their default model, while Pro users will have unlimited GPT-5 queries and access to GPT-Pro, which uses additional computing capacity.

Zoom in: OpenAI said GPT-5 sets new highs on a range of benchmarks and cited praise from a variety of companies that had early access, including Windsurf, Vercel, Notion and Manus. Altman told a press briefing that the new model not only excels at coding but at taking software projects from idea to usable code in a single step. "This idea of software on demand will be a defining part of the new GPT-5 era," he said.

Zoom out: Altman said that GPT-5 shows signs of broad intelligence but lacks the ability to learn and improve on its own — a capacity that, he added, is likely a key component of AGI. After you use GPT-5, Altman said, going back to an earlier model is like reverting to a lower-resolution display after you've grown used to an iPhone's Retina display.

Between the lines: Users will also be able to customize GPT-5's personality as part of a feature in "research preview." The four initial options are "Cynic," "Robot," "Listener" and "Nerd." The new model displays less sycophantic behavior than its predecessors, OpenAI said in a blog post. "Overall, GPT-5 is less effusively agreeable, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow-ups compared to GPT-4o. It should feel less like 'talking to AI' and more like chatting with a helpful friend with PhD-level intelligence."

Threat level: As it did with its recently released ChatGPT Agent, OpenAI is proceeding as if GPT-5 ranks "high" on its risk scale for biological threat capability and has added additional safeguards.

What we're watching: GPT-5 was originally expected much earlier this year, and its delays led some critics to argue that OpenAI's scaling-up strategy was yielding diminishing returns.

Business Insider
06-08-2025
- Business
- Business Insider
An Accel-backed startup CEO says your next user isn't human — and it's changing how software gets built
The future of software isn't being built for people — it's being built for machines, said Vercel's CEO, Guillermo Rauch. "Your customer is no longer the developer," said Rauch on an episode of the "Sequoia Capital" podcast published Tuesday. "Your customer is the agent that the developer or non-developer is wielding." The CEO of the web infrastructure startup, valued at $3.25 billion last year, said code isn't just being written for humans to read or interact with anymore. It's increasingly being written so AI agents can understand, use, and extend it. "That is actually a pretty significant change," said Rauch. "Is there something that I could change about that API that actually favors the LLM being the, quote-unquote, entity or user of this API?" This new AI-first era means software tools may need to evolve based on how large language models interact with them. "LLMs' strengths and weaknesses will inform the development of runtimes, languages, type checkers, and frameworks of the future," Rauch said. Rauch also said that in the AI era, Vercel's newer users — who may not be developers but designers, marketers, or even AI agents — expect things to just work. Developers were used to dealing with errors and "terrible, negative feedback all day long," he said. But today's users have a much shorter fuse when something goes wrong. Still, he sees that as an "amazing pressure" for product builders. "You want something that works 99.99% of the time," he added. Last year, Vercel raised $250 million in a Series E round led by Accel, with investors including Tiger Global and GV. Rauch and Vercel did not respond to a request for comment from Business Insider.

Rise of AI agents

2025 has been hailed as the year of AI agents. They could change how the internet works and how apps and software interact with users. Bernstein analysts wrote in February that while websites and apps won't go away, users may no longer interact with them directly. Instead, they will access information, content, and widgets through an AI assistant that becomes "the aggregator of the aggregators." "If it scales and plays out like we think it might, this. Changes. Everything. The aggregators get disaggregated, and much of consumer internet may be structural shorts. Welcome to the Agentic AI era," the analysts wrote. "There's nowhere to hide." But these agents are not perfect. Researchers have warned that agent errors are prevalent and compound with each step they take. "An error at any step can derail the entire task. The more steps involved, the higher the chance something goes wrong by the end," Patronus AI, a startup that helps companies evaluate and optimize AI technology, wrote on its blog. The startup built a statistical model that found that an agent with a 1% error rate per step can compound to a 63% chance of error by the 100th step. Still, they said that guardrails — such as filters, rules, and tools that can be used to identify and remove inaccurate content — can help mitigate error rates. Small improvements "can yield outsized reductions in error probability," Patronus AI said.
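Patronus AI's figure follows from treating each step as an independent chance of failure: if the per-step error rate is p, the probability of at least one error across n steps is 1 - (1 - p)^n. A minimal TypeScript sketch of that arithmetic (the numbers are illustrative and the independence assumption is a simplification, not Patronus AI's actual model):

```typescript
// Probability that an agent makes at least one error across n steps,
// assuming an independent per-step error rate.
function cumulativeErrorProbability(perStepErrorRate: number, steps: number): number {
  return 1 - Math.pow(1 - perStepErrorRate, steps);
}

// 1% error per step over 100 steps: roughly a 63% chance of at least one error.
console.log(cumulativeErrorProbability(0.01, 100).toFixed(3)); // ~0.634

// Halving the per-step rate cuts the cumulative risk disproportionately,
// the "outsized reductions" effect that guardrails aim for.
console.log(cumulativeErrorProbability(0.005, 100).toFixed(3)); // ~0.394
```

Real agent errors can correlate across steps and some are recoverable, so the curve is a rough intuition for compounding risk rather than a precise forecast.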

Business Insider
24-07-2025
- Business
- Business Insider
CTO says companies are missing out by not writing internal documents that AI can easily read
Your company's database of screen-recorded webinars could be on its way out. Laura Tacho, CTO of developer productivity platform DX, says companies should at least consider rewriting their documents to make them more legible to AI. Corporate documentation is often a messy haze of memos, slide decks, and video trainings. In the age of AI, Tacho told Business Insider that training materials should be in text form, so that they can easily be read by a large language model. Text-based procedures are even more important for developers, who can immediately feed their documentation to an AI code editor like Cursor. Tacho said she thinks that documentation can benefit both humans and LLMs. Speaking about the topic on the "Pragmatic Engineer" podcast, she said that "AI-first" companies were overhauling their internal documentation for LLM consumption. "That's been the biggest way that I've seen companies think about or already start trying to change the way that they architect their services," Tacho told podcast host Gergely Orosz. In a follow-up interview with BI, Tacho elaborated on her approach to documentation. Well-organized, text-based procedures don't just help an LLM or a human worker, she said. The "Venn diagram is a circle," she said. "Documentation is a huge point of friction for nearly every organization," Tacho told BI. "There's a lot of efficiency gain to be had when documentation is more fit for purpose. This is an area where what's good for the human is also good for the LLM." Human-first documentation often relies on "visual cues" and screenshots, which an AI can find incomprehensible, Tacho said. She recommended that all photos have text descriptions alongside them. Social media sites often use alt text and captioning to make their services less exclusively visual, and Tacho said that corporate document-makers can "take a note from the world of web accessibility." "It is so important for people who are not using their eyes to access the screen," she said. She said other tasks, like centralizing documents, help an LLM's ease of access and also make the human worker's life easier. Borrowing a developer term, Tacho said companies should "defrag" their policies. "Documentation is made piecemeal, a little here, a little there," she said. "You have to hop between different pages in order to put together the complete instruction of what you're supposed to do." Tacho also listed some technical elements of documentation, like making sure the HTML markup is semantic, which can help ensure LLM readability. Some companies have already begun the change. In May, Vercel's former VP of Developer Experience Lee Robinson posted about the company's documentation on X. "We're starting to add cURL commands to Vercel's documentation wherever we previously said 'click,'" Robinson wrote. "In the future, maybe computer using agents could log in and perform actions for you, but this feels like a nice incremental step for the LLMs." Tacho expects more companies to join Vercel in making their documents easily legible to AI. She found that engineers were wasting over 30 minutes a week looking for information they couldn't find in documentation. AI code editors could eliminate much of that lost time — if the LLM can read the documents properly. "When you think about how much time is being wasted due to poor documentation, it becomes actually a very critical business problem," she said.
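One of Tacho's concrete suggestions, adding text descriptions to every image, lends itself to a simple automated check. The sketch below, in TypeScript, scans a documentation page for img tags that lack alt text; the file name is hypothetical and the regex-based scan is a deliberate simplification for illustration, not a DX or Vercel tool:

```typescript
import { readFileSync } from "node:fs";

// Flag <img> tags with no (or empty) alt attribute. Screenshots without text
// descriptions are opaque to LLMs and to screen-reader users alike.
function findImagesMissingAltText(html: string): string[] {
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgTags.filter((tag) => !/\balt\s*=\s*"[^"]+"/i.test(tag));
}

// Hypothetical usage against a local docs page.
const page = readFileSync("docs/getting-started.html", "utf8");
const offenders = findImagesMissingAltText(page);
if (offenders.length > 0) {
  console.warn(`${offenders.length} image(s) missing alt text:`);
  offenders.forEach((tag) => console.warn(`  ${tag}`));
}
```

A check like this could run in a docs build pipeline, catching the "visual cues" problem Tacho describes before a page ever reaches a human reader or an LLM.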


Scoop
06-07-2025
- Business
- Scoop
Okta Observes Weaponisation Of AI Tool V0 In Phishing Campaigns
Okta Threat Intelligence has observed threat actors leveraging AI-powered development tools to build phishing infrastructure with little more than a few natural language prompts. Okta Threat Intelligence observed a cluster of phishing activity leveraging v0, a generative AI-powered development tool created by Vercel, to develop and host multiple phishing sites that impersonate sign-in pages for legitimate brands, including Okta, Microsoft 365 and cryptocurrency companies. The platform allows users to generate web interfaces using simple natural language prompts. Okta researchers confirmed that attackers are using this capability to rapidly develop convincing phishing pages that spoof well-known brands, increasing the scale, speed and believability of their campaigns. 'This marks an expected turning point in adversarial use of AI,' said Brett Winterford, Vice President at Okta Threat Intelligence. 'We had anticipated we would soon enough see attackers stepping up from using AI to create convincing phishing lures, to now using AI to create the infrastructure that supports phishing campaigns at scale. With these tools, the least skilled adversary can build convincing phishing infrastructure in seconds. This is a wake-up call for every organisation that relies on outdated defences like password-based logins. You can't rely on perimeter defence and awareness campaigns alone to mitigate attacks: you need passwordless solutions that remove the ability of users to submit a credential to an attacker.' Okta has also observed attackers using public GitHub repositories to clone v0 or build custom generative tools, further democratising access to advanced phishing capabilities. To defend against AI-generated phishing threats, Okta Threat Intelligence recommends:
- Require phishing-resistant authentication: Deploy and enforce the use of phishing-resistant methods such as Okta FastPass, which cryptographically binds the user to the site they enrolled with.
- Bind access to trusted devices: Use device trust policies to ensure only managed or security-compliant devices can access sensitive applications.
- Trigger step-up authentication for anomalies: Use Okta Behaviour Detection and Network Zones to require additional verification when unusual patterns are detected.
- Update security awareness programs: Educate employees on the evolving sophistication of AI-powered social engineering.
Okta Threat Intelligence is a unit within Okta that develops timely, highly relevant and actionable insights about the threat environment, with a strong focus on identity-based threats. Security contacts at Okta customers can access a detailed security advisory at the Okta Security Trust Center.


Forbes
06-07-2025
- Forbes
Google Warns All Gmail Users To Upgrade Accounts—This Is Why
It's time to upgrade your Google account. It happens all the time. A familiar sign-in window pops up on your screen, asking for your account password so you can open a document or access emails. It happens so often we no longer notice and simply go through the motions on autopilot. But Google warns this is dangerous and needs to stop before you lose your account. Most Gmail users 'still rely on older sign-in methods like passwords and two-factor authentication (2FA),' Google warns, despite the FBI reporting that 'online scams raked in a record $16.6 billion last year — up 33% in just one year — and are growing more sophisticated.' That means you're less likely to spot an attack until it's too late. When I first covered Google's alarming new stats, the company told me the warning to upgrade accounts is right, but needs to go further. This is about more than Gmail; it's about all the accounts that can be accessed with a Google sign-in. But Gmail is the most prized, because your email account opens up access to so much more. And less than a month later we have a frightening new proof point as to exactly why accounts protected by passwords and even 2FA are at such risk. Okta warns threat actors are now 'abusing v0 — a breakthrough GenAI tool created by Vercel — to develop phishing sites that impersonate legitimate sign-in webpages.' Most users have not upgraded to passkeys. That's why Google says 'we want to move beyond passwords altogether, while keeping sign-ins as easy as possible.' That means upgrading the security on your Google Account to add a passkey. This stops attackers from accessing your account, because the passkey is linked to your own devices and can't be stolen or bypassed. Most Gmail users still don't have passkeys — but all should add them as soon as possible. Okta says this 'signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts.' If you're willing to use your password, you're at risk. And that's the second part of this warning. Upgrading your account with a passkey only helps secure that account if you change your behavior as well. No more entering a password when prompted — only use your passkey. And if that's not possible, make sure your account uses a form of 2FA other than SMS codes; an authenticator app is best. Okta warns 'today's threat actors are actively experimenting with and weaponizing leading GenAI tools to streamline and enhance their phishing capabilities. The use of a platform like Vercel's allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.' Passkeys are phishing resistant. That's why Microsoft is going even further than Google, actively pushing users to delete passwords altogether and removing them from its own Authenticator app, which will now be limited to passkeys only. This is just the beginning of the new AI-fueled attacks that will fast become the norm. Attackers are playing with these new tools, and that's changing the game. You need to ensure that all your key accounts are fully protected — it's a change you should make today, not some time soon when you get around to it. 'We build advanced, automatic protections directly into Google's products,' the company says, 'so security is something you don't have to think about.'
But if you're still securing those products with a password — the digital equivalent of a flimsy $5 padlock — then you are playing into the hands of those attackers. It takes only a few seconds. Add your passkey now.
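A brief technical aside on why passkeys resist the v0-built phishing pages described above: passkeys use the WebAuthn browser API, and the credential a site creates is bound to that site's origin, so a look-alike domain cannot request it and there is no secret for a user to type into a fake form. The following TypeScript sketch shows a simplified registration call; the relying-party name, user details, and locally generated challenge are placeholder assumptions, since a real deployment fetches its options from the server:

```typescript
// Runs in the browser. The resulting credential is scoped to this site's origin,
// and its private key never leaves the user's device or password manager.
async function registerPasskey(): Promise<Credential | null> {
  // In practice the challenge comes from the server; a random value stands in here.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example App" }, // the relying party: the site itself
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { residentKey: "required", userVerification: "preferred" },
    },
  });
}
```

Because the browser will only exercise that credential on the origin that registered it, there is nothing an attacker's cloned sign-in page can harvest, which is why Okta, Google, and Microsoft all describe passkeys as phishing resistant.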