Monzo's former CEO shares 3 tips for getting the most out of vibe coding
Vibe coding is enabling nontechnical users to write code with AI.
Former Monzo CEO Tom Blomfield shared tips on how to get the most out of vibe coding.
They include experimenting with different tools and keeping a log in case you need to reset to a clean code base.
Vibe coding continues to gain traction in Silicon Valley, and former Monzo CEO Tom Blomfield has thoughts on how to maximize its potential.
Coined just two months ago by Andrej Karpathy, an OpenAI cofounder, the term refers to people using AI to write code by giving it text-based instructions.
Experienced engineers are using it to save time, and those with nontechnical backgrounds are coding everything from dating apps to games.
Blomfield, who's now a group partner at Y Combinator, shared his advice in a video the accelerator posted on Friday. Here are three of his tips for getting more out of vibe coding.
Pick the right tool and create a comprehensive plan
Blomfield advised users to plan ahead and experiment to find the tool that best supports their skill level and desired end product.
He found that tools like Lovable and Replit were suited for beginners, whereas more experienced coders could use Windsurf or Cursor.
"Work with the LLM to create a comprehensive plan," he said in the video, referring to large language models. "Put that in a markdown file inside your project folder and keep referring back to it."
He suggested having the LLM carry out the plan section by section, rather than building the product in one go.
"This advice could change in one or two months, as the models are getting better," he added.
Test each version of the product
Blomfield said that when he prompted AI tools multiple times for the same coding task, he would get poor results because the model accumulated "layers of bad code."
He advised using the large language model to write tests that simulate someone clicking through a version of the site or app, to gauge how well the features are working.
Sometimes, LLMs can make unnecessary changes to these features, he said, and integration tests can catch those changes more quickly.
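Blomfield didn't share code in the video, but an integration test of the kind he described — one that simulates a user clicking through the app — might look like the sketch below, written in Python with the Playwright browser-automation library. The local URL, selectors, and signup flow are illustrative assumptions, not details from the video.

```python
# Illustrative integration test (hypothetical app and selectors): drive a real
# browser through a signup flow and check that the page responds as expected.
from playwright.sync_api import sync_playwright, expect

def test_signup_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:3000")          # assumed local dev server
        page.click("text=Sign up")
        page.fill("input[name='email']", "test@example.com")
        page.click("button[type='submit']")
        # If a later LLM edit breaks the flow, this assertion fails quickly.
        expect(page.locator("text=Welcome")).to_be_visible()
        browser.close()
```

Rerunning a handful of tests like this after each round of prompting is one way to notice when the model has quietly altered a feature it wasn't asked to touch.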
Write instructions for the LLMs
Blomfield said he found that some models succeeded where others failed. If a user hits a stubborn bug, he said, it can help to reset all the changes and give the LLM detailed instructions to fix it on a clean code base.
"Logging is your friend," Blomfield said.
Another tip he offered was to use small files and a more modular, service-based architecture that gives the LLM clear API boundaries to work within.
An upside of this approach is that it avoids one huge repository of code spanning multiple projects, which can be harder to manage and bring more integration challenges.
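As an illustration of that kind of boundary — again, not an example from the video — a service-style module might expose a single public function and keep everything else internal, so the LLM can be pointed at one small, well-defined surface at a time:

```python
# payments/api.py — hypothetical sketch of a small, service-style module with a
# narrow public boundary; the file, function, and field names are made up.
import logging

logger = logging.getLogger("payments")

def create_charge(user_id: str, amount_cents: int) -> dict:
    """The module's only public entry point; other code should call just this."""
    if amount_cents <= 0:
        raise ValueError("amount_cents must be positive")
    logger.info("creating charge for user=%s amount=%s", user_id, amount_cents)
    return _store_charge(user_id, amount_cents)

def _store_charge(user_id: str, amount_cents: int) -> dict:
    # Internal helper; the leading underscore keeps it out of the public API.
    return {"user_id": user_id, "amount_cents": amount_cents, "status": "pending"}
```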

DAYTON, Ohio (WDTN) — Starbucks will reportedly begin the implementation of an AI assistance tool for its baristas. The cafe company announced during its Leadership Experience session Tuesday that 'Green Dot Assist' is initially being introduced in 35 stores. CNBC reported the system is based from Microsoft Azure's OpenAI. Baristas will use an iPad in the store to use the system, which will reportedly 'help baristas in real time' with information needed like displaying beverage recipes, filing support tickets, identifying baristas to cover shifts, naming alternative ingredients, showing diagnostic videos and recommended items to upsell. New Starbucks barista dress code is a move protested by some According to the chain, the new technology will reportedly support baristas with beverage-making and customer interactions. 'With this new solution, we're simplifying access to essential information in the flow of work for partners, making their jobs a little easier while they build confidence and expertise,' said the company. Green Dot Assist is expected to begin 'a broad launch' of implementation in the fall at locations in the U.S. and Canada, according to the CNBC report. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.