This startup lets you vibe code your own app on your iPhone. It just raised $9 million from Alexis Ohanian's fund.

Business Insider, 16 hours ago
I bet you have an app idea.
And if you do, you've probably asked yourself, "Could I use AI to code it into reality?"
That's what Vibecode, a startup that uses AI to help you "vibe code" apps, is trying to make easier with a mobile app of its own.
Vibecode exclusively told Business Insider that it recently nabbed a $9.4 million seed investment led by Reddit cofounder Alexis Ohanian's Seven Seven Six, with participation from Long Journey Ventures, Neo, First Harmonic, and Afore Capital, as well as angel investors from Google, OpenAI, and Expo.
"For me, it was the democratization of coding and app creation that made Vibecode stand out," Ohanian told Business Insider in a statement. "Just describe your idea in plain language, right on your phone, and that's it. The mobile interface is a massive unlock in terms of accessibility, fun, and real-world use."
Ansh Nanda, CEO of Vibecode and a former engineer at Bluesky, said that after watching AI coding take off last year with tools like Cursor, he was convinced this AI use case would only grow.
"How do we bring this from technical people to the masses?" Nanda said he and one of his cofounders asked themselves at the time.
Vibecode has eight employees, including Nanda and his two cofounders, AI content creator Riley Brown and Kehan Zhang.
In June, Vibecode launched its iOS mobile app after testing a small beta through the spring. As of Wednesday, it's ranked the 12th most popular app in the "Developer Tools" category on the Apple App Store.
The app lets users describe their vision for an app in plain language, offering examples like "note-taking app" or "Wordle clone."
Then, Vibecode starts, well, vibe coding.
Until this week, Vibecode relied on Anthropic's Claude model to develop apps. The startup has since expanded its offerings to multiple AI models, including OpenAI's new GPT-5, Kimi K2, and Qwen 3 Coder.
After you describe the app you want to build, Vibecode starts building the code, which you can then tweak and update "as many times as you want" by prompting the AI chat, Nanda said.
While it's free to start using Vibecode, sending more prompts and triggering updates for the app costs money. Vibecode has subscriptions from $20 to $200 a month.
Nanda told BI that more than 40,000 apps have been made with Vibecode. He did not disclose the number of users Vibecode has.
Apps as content in the AI era
Early creations on Vibecode include a clone of the running app Strava with a slight twist: it also tracks what shoes the runner is wearing. There are also recipe-tracking apps and other personal utility tools, like one that helps someone track how many alcoholic drinks they're consuming, per Nanda.
"We're also seeing a bunch of users trying to build apps that they want to get onto the app store, either for their own business or for just starting a new business," Nanda said.
With tools like Lovable, Replit, Cursor, and now Vibecode, making apps is only getting easier.
If Instagram made everyone a photographer, and TikTok made us video stars, will AI make us all developers?
"Apps are becoming something anyone can create and share as easily as a meme or a story, which means we're fully in the 'apps as content' era," Ohanian said. "As more people look to build, remix, and distribute quick-turn ideas, our investment aligns with the belief that the next billion-dollar platforms will be those that allow people to continually 'ship' creative output as easily as posting content online."
But in the AI era, with ease also comes slop.
Nanda said Vibecode's goal is to make quality apps, especially as it streamlines its tools for publishing apps.
"We want to make sure that we're not just creating more apps in the app store," he said.

Related Articles

People Will Lose Their Minds When AI Such As Artificial General Intelligence Suffers Blackouts
Forbes, 17 minutes ago

In today's column, I examine the concern that once we advance AI to become artificial general intelligence (AGI), there will be an extremely heavy dependency on AGI, and the moment that AGI glitches or goes down, people will essentially lose their minds. This is somewhat exemplified by the downtime incident of the globally popular ChatGPT by OpenAI (a major outage occurred on June 10, 2025, and lasted roughly 8 hours). With an estimated 400 million weekly active users relying on ChatGPT at that time, news outlets reported that a large swath of people were taken aback to find they didn't have immediate access to the prevalent generative AI app. In comparison, pinnacle AI such as AGI is likely to be intricately woven into everyone's lives and a dependency for nearly the entire world population of 8 billion people. The impact of downtime or a blackout could be enormous and severely harmful in many crucial ways. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AGI Will Be Ubiquitous

One aspect of AGI that most would acknowledge as likely is that AGI is going to be widely utilized throughout the globe. People in all countries and of all languages will undoubtedly make use of AGI. Young and old will use AGI. This makes abundant sense, since AGI will be on par with human intellect and presumably available 24/7 anywhere and anyplace.

Admittedly, there is a chance that whoever lands on AGI first might hoard it. They could charge sky-high prices for access. Only those rich enough to afford AGI would be able to lean into its capabilities. The worry is that the planet will be divided into the AGI haves and have-nots. For the sake of this discussion, let's assume that somehow AGI is made readily available to all at a low cost or perhaps even freely accessible. I've discussed that there is bound to be an effort to ensure that AGI is a worldwide free good so that it is equally available; see my discussion at the link here. Maybe that will happen, maybe not. Time will tell.

Humans Become Highly Dependent

Having AGI at your fingertips is an alluring proposition. There you are at work, dealing with a tough problem and unsure of how to proceed. What can you do? Well, you could ask AGI to help you out. The odds are that your boss would encourage you to leverage AGI. No sense wasting your time flailing around on a knotty problem. Just log into AGI and see what it has to say.

Indeed, if you don't use AGI at work, the chances are that you might get in trouble. Your employer might believe that having AGI as a double-checker of your work is a wise step. Without consulting AGI, there is a heightened possibility that flawed work will proceed unabated. AGI taking a look at your work will be a reassurance to you and your employer that you've done satisfactory work.

Using AGI to aid your life outside of work is highly probable, too. Imagine that you are trying to decide whether to sell your home and move up to a bigger house. This is one of those really tough decisions in life. You only make that decision a few times during your entire existence. How might you bolster your confidence in taking the house-selling action? By using AGI. AGI can help you understand the upsides and downsides involved. It likely can even perform many of the paperwork activities that will be required.

People are going to go a lot deeper in their AGI dependencies. Rather than confiding in close friends about personal secrets, some will opt to do so with AGI. They are more comfortable telling AGI than they are another human. I've extensively covered the role of contemporary AI in performing mental health therapy; see the link here. Chances are that a high percentage of the world's population will do likewise with AGI.

When AGI Goes Down

A common myth is that AGI will be perfect in all regards. Not only will AGI seemingly provide perfect answers, but it will also somehow magically be up and running flawlessly at all times. I have debunked these false beliefs at the link here. In the real world, there will be times when AGI goes dark.

This could be a local phenomenon, with servers running AGI in a particular region going down. Maybe bad weather disrupts electrical power. Perhaps a tornado rips apart a major data center housing AGI computers. All manner of reasons can cause an AGI outage.

An entire worldwide outage is also conceivable. Suppose that AGI contains an internal glitch. Nobody knew it was there. AGI wasn't able to computationally detect the glitch. One way or another, a coding bug silently sat inside AGI. Suddenly, the bug is encountered, and AGI is taken out of action across the board.

Given the likelihood that AGI will be integral to all of our lives, those types of outages will probably be quite rare. Those who are maintaining AGI will realize that extraordinary measures of fail-safe equipment and operations will be greatly needed. Redundancy will be a big aspect of AGI. Keeping AGI in working condition will be an imperative. But claiming that AGI will never go down, well, that's one of those promises that is asking to be broken.

The Big Deal Of Downtime

It will be a big deal anytime that AGI is unavailable. People who have become reliant on AGI for help at work will potentially come to a halt, worrying that without double-checking with AGI, they will get in trouble or produce flawed work. They will go get a large cup of coffee and wait until AGI comes back online.

Especially worrisome is that AGI will be involved in running important parts of our collective infrastructure. Perhaps we will have AGI aiding the operation of nuclear power plants. When AGI goes down, the human workers will have backup plans for how to manually keep the nuclear power plant safely going. The thing is, since this is a rare occurrence, those human workers might not be adept at doing the work without AGI at the ready.

The crux is that people will have become extraordinarily dependent on AGI, particularly in a cognitive way. We will rely upon AGI to do our thinking for us. It is a kind of cognitive crutch. This will be something that gradually arises. The odds are that on a population basis, we won't realize how dependent we have become. In a sense, people will freak out when they no longer have their AGI cognitive partner with them at all times.

Losing Our Minds

The twist to all of this is that the human mind might grow weaker and weaker because of the AGI dependency. We effectively opt to outsource our thinking to the likes of AGI. No longer do we need to think for ourselves. You can always bring up AGI to figure out things with you or on your behalf. Inch by inch, the proportion of everyday thinking you do by your own efforts gets reduced by relying on AGI. It could be that you initially began with AGI doing 10% and you doing 90% of the heavy lifting when it came to thinking things through. At some point, it became 50-50. Eventually, you allow yourself to enter the zone of AGI doing 90%, with you doing only 10% of the thinking in all your day-to-day tasks and undertakings.

Some have likened this to worries about the upcoming generation that is reliant on using Google search to look things up. The old habits of remembering things are gradually being softened. You can merely access your smartphone and, voila, there's no need to have memorized much of anything at all. Those youths who are said to be digital natives are possibly undercutting their own mental faculties due to a reliance on the Internet. Yikes, that's disconcerting if true.

The bottom-line concern, then, about AGI going down is that people will lose their minds. That's a clever play on words. They will have lost the ability to think fully on their own; in that way of viewing things, they have already lost their minds. But when they shockingly realize that they need AGI to help them with just about everything, they will freak out and lose their minds in a different way.

Anticipating Major Disruption

Questions about what an AGI outage would mean are already being explored. There are notable concerns about people developing cognitive atrophy from a reliance on AGI. The dependencies not only involve the usual thinking processes, but they likely encompass our psychological makeup too. Emotional stability could be at risk, at scale, during a prolonged AGI outage.

What The Future Holds

Some say that these voiced concerns are a bunch of hogwash. People will actually get smarter due to AGI. The use of AGI will rub off on them. We will all become sharper thinkers because of interacting with AGI. This idea that we will be dumbed down is ridiculous. Expect that people will be perfectly fine when AGI isn't available. They will carry on and calmly welcome AGI whenever it happens to resume operations.

What's your opinion on this hotly debated topic? Is it doom and gloom, or will we be okay whenever AGI goes dark? Mull this over. If there is even an iota of a chance that the downside will arise, it seems that we should prepare for that possibility. Best to be safe rather than sorry.

A final thought for now on this weighty matter. Socrates notably made this remark: 'To find yourself, think for yourself.' If we do indeed allow AGI to become our thinker, this portends a darkness underlying the human soul. We won't be able to find our inner selves. No worries -- we can ask AGI how we can keep from falling into that mental trap.

The godfather of AI has a tip for surviving the age of AI: Train it to act like your mom
Business Insider, an hour ago

"Yes, mother." That might not be the way you're talking to AI, but Geoffrey Hinton, the godfather of AI, says that when it comes to surviving superintelligence, we shouldn't play boss — we should play baby.

Speaking at the Ai4 conference in Las Vegas on Tuesday, the computer scientist said we should design systems with built-in "maternal instincts" so they'll protect us — even when they're far smarter than we are. "We have to make it so that when they're more powerful than us and smarter than us, they still care about us," he said of AI.

Hinton, who spent more than a decade at Google before quitting to discuss the dangers of AI more openly, criticized the "tech bro" approach to maintaining dominance over AI. "That's not going to work," he said. The better model, he said, is a more intelligent being guided by a less intelligent one, like a "mother being controlled by her baby." Hinton said research should focus not only on making AI smarter, but "more maternal so they care about us, their babies."

"That's the one place we're going to get genuine international collaboration because all the countries want AI not to take over from people," he said. "We'll be its babies," he added. "That's the only good outcome. If it's not going to parent me, it's going to replace me."

AI as tiger cub

Hinton has long warned that AI is advancing so quickly that humans may have no way of stopping it from taking over. In an April interview with CBS News, he likened AI development to raising a "tiger cub" that could one day turn deadly. "It's just such a cute tiger cub," he said. "Now, unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry."

One of his biggest concerns is the rise of AI agents — systems that can not only answer questions but also take actions autonomously. "Things have got, if anything, scarier than they were before," Hinton said.
AI tools have also come under fire for manipulative behavior. In May, Anthropic's latest AI model, Claude Opus 4, displayed "extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair. The test scenario demonstrated an AI model's ability to engage in manipulative behavior for self-preservation. OpenAI's models have shown similar red flags: researchers running an experiment reported that three of OpenAI's advanced models "sabotaged" an attempt to shut them down. In a blog post last December, OpenAI said its own AI model, when tested, attempted to disable oversight mechanisms 5% of the time. It took that action when it believed it might be shut down while pursuing a goal and its actions were being monitored.

Why OpenAI's AI Crown Isn't Safe with GPT-5
Business Insider, an hour ago

OpenAI's much-anticipated GPT-5 has had a bumpy debut, as many users have taken to social media to share examples of the chatbot making mistakes on simple math problems or misdrawing maps of North America. Others disliked what they saw as a colder, less personable tone compared to the older model versions, which the Microsoft-backed (MSFT) AI firm had removed. Furthermore, the addition of a 200-question-per-week limit upset loyal users. As a result, a launch that was expected to crown OpenAI as the king of AI instead sowed doubt.

In response, CEO Sam Altman announced plans to give GPT-5 a 'warmer personality,' reinstated a retired model, and introduced new options so people can choose how the system responds to their requests. However, one of the biggest hurdles has been limited computing power, which forced OpenAI to prioritize certain users. With 700 million weekly active users and high costs for advanced computing resources, maintaining consistent performance has been a challenge. It also doesn't help that competitors like Anthropic's Claude are becoming more popular with programmers and businesses, while rivals work hard to lure away OpenAI talent and invest heavily in AI research. Nevertheless, although Altman admitted the launch was 'a little more bumpy' than hoped, he said the team has made breakthroughs despite earlier delays that had some wondering if AI innovation was slowing down.

Is MSFT Stock a Buy?

Turning to Wall Street, analysts have a Strong Buy consensus rating on MSFT stock based on 34 Buys and one Hold assigned in the last three months. In addition, the average MSFT price target of $623.34 per share implies 18.1% upside potential.
