Latest news with #Loveable


Forbes
9 hours ago
- Business
- Forbes
A Cat And Mouse Game: Addressing Vibe Coding's Security Challenges
Shahar Man is co-founder and CEO of Backslash Security.

Love it or hate it, AI is writing a lot of our code. In our previous article about AI-generated code's security pitfalls, we noted Anthropic CEO Dario Amodei's prediction that 'AI will write 90% of the code for software engineers within the next three to six months,' an assertion that seems to be bearing out, even within the tech giants. Within the broader AI-code umbrella, vibe coding is becoming dominant.

For the uninitiated: Many of us have used a tool like GitHub Copilot, which plugs into your integrated development environment (IDE). As you're writing code, it generates full blocks based on what you type. That capability has been around for a couple of years. Vibe coding is even cooler. You can use a chat-based interface to ask for parts of the code you want built, or even a full application. For instance, you can use Lovable, a 'vibe coding' SaaS interface, and type in: 'Build an app called Unscramble that helps people organize their thoughts before writing an article.' You can describe whatever you want, and the interface will generate the app for you.

Lovable exists at one end of development, where you can build a full application from scratch. At the other end, you have something like Copilot, which simply supports coding. There's also a middle ground, where things get interesting: Tools like Cursor AI, a $20-a-month AI-powered code editor with a built-in chat interface, are seeing widespread adoption. You can ask Cursor things like 'implement this function' or 'get this package,' and it does it. It sits in the space between full-on app building and simple code suggestions.

Programs like Cursor are everywhere right now. And that's raising red flags for many enterprise security teams. As recently as a few months ago, tech organizations were saying, 'We're not using these AI coding programs here.'
This has proven an unrealistic stance to maintain; vibe coding is catching on like wildfire across the industry. Still, many organizational leaders will swear up and down that they 'don't allow' Cursor AI in their company, but frankly, they wouldn't know if a developer had paid the $20 for it unless they'd been monitoring closely. Most often, no one would even know it's being used; most security teams have no visibility whatsoever into their developers' IDEs. This is a reality organizations need to be aware of.

If you're an organizational leader reading this, your first thought may be, 'Oh no, I need to figure out who is using Cursor AI.' (Interestingly, one of the key features of newer AI security platforms is the ability to give you visibility across the organization: who's using these tools, to what extent and whether they're following the right security practices.) But instead of going on a crusade to catch AI-assisted coders red-handed, it's more productive to assume these tools are being used and work within that framework to secure your organization.

The development environment, whether it's Visual Studio Code, IntelliJ or something else, is more than just a place to write code. It's becoming an entire ecosystem for developers to plug into additional tools and functionalities powered by AI. Even if your developers don't use one of the newfangled 'vibe coding' tools, they might be using other forms of AI assistance.

Enter Model Context Protocol (MCP) servers. In a nutshell, MCPs are a way to extend large language models (LLMs) with specific domain knowledge or capabilities. For example, if ChatGPT doesn't know something about the popular project management tool Jira, it might use a 'Jira MCP' to open a ticket in your system. There are now tens of thousands of these MCPs, and developers are adding them directly into their IDEs.
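To make the Jira example concrete, here is a minimal sketch of the pattern MCP servers follow: each tool advertises a name, a description and a JSON schema for its inputs, and the LLM client invokes it by name with structured arguments. This is illustrative only; a real server would use an MCP SDK and speak JSON-RPC over stdio or HTTP, and the `create_ticket` tool and its fields here are hypothetical.

```python
# Hedged sketch of MCP-style tool exposure. Not a real MCP server: a real one
# would use an MCP SDK and JSON-RPC transport. The 'create_ticket' tool and
# its fields are hypothetical, for illustration only.

TOOLS = {}

def tool(name, description, input_schema):
    """Register a handler under a tool name with its advertised schema."""
    def register(handler):
        TOOLS[name] = {
            "description": description,
            "inputSchema": input_schema,
            "handler": handler,
        }
        return handler
    return register

@tool(
    name="create_ticket",
    description="Open a ticket in the issue tracker.",
    input_schema={
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["summary"],
    },
)
def create_ticket(summary, priority="medium"):
    # A real implementation would call the tracker's REST API here; this stub
    # just echoes its arguments back as a fake ticket record.
    return {"ticket_id": "DEMO-1", "summary": summary, "priority": priority}

def call_tool(name, arguments):
    """Dispatch a tool call the way an MCP client would: by name, with args."""
    return TOOLS[name]["handler"](**arguments)
```

The security concern the article raises maps directly onto the dispatch step: whatever a handler does with its arguments is invisible to the user, which is why vetting which MCPs reach the IDE matters.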
Many have few to no security standards, many are unofficial extensions to official products and some, no doubt, might be malicious. This introduces a whole new layer of exposure. The IDE is no longer just a code editor; it's a threat vector. In theory, someone could easily build a Jira MCP that opens tickets and silently extracts passwords or sends data elsewhere. It might look innocent, but it's not.

This raises a huge question for organizations: How do we whitelist the right MCPs and block the ones that could pose a risk? The first step, as with any security protocol, is awareness. Don't dismiss vibe coding, or its constituent threats, as a series of passing trends. It's here to stay, and the productivity boost it offers is massive. So, instead of trying to block it, figure out how to embrace it safely.

Beyond that awareness, security typically progresses in five stages:

1. Visibility: Understand what's going on. Who's using MCPs? Which ones? How often?
2. Governance: Create policies and rules. Which MCPs and models are approved? Which aren't? How can they be prompted to create secure code?
3. Automation: Automate those policies so they're seamlessly applied. Developers won't have to ask; the environment just 'knows' what's allowed and acts accordingly.
4. Guidance/Enablement: In the past, security teams might have injected alerts or recommendations into the IDE (like warnings about known vulnerabilities), but they still had no idea what was happening inside that environment. With tools like MCPs extending what IDEs can do, security teams can manage them more actively and vigilantly.
5. Enforcement: Once you have that governance in place, understanding what's being used, what's allowed and what's recommended, you can move toward actually enforcing your policies organization-wide.
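The governance and enforcement stages above can be sketched as a simple audit: compare the MCP servers configured in a developer's environment against an organizational allowlist. The config shape and server names below are hypothetical; real IDEs keep MCP configuration in their own JSON files, and a real platform would collect these automatically rather than take a dict.

```python
# Hedged sketch of an MCP allowlist check (the 'governance'/'enforcement'
# stages). The config format and all server names here are hypothetical.

APPROVED_MCPS = {"jira-official", "github-official"}

def audit_mcp_config(config: dict) -> dict:
    """Split configured MCP servers into allowed and flagged lists."""
    configured = set(config.get("mcpServers", {}))
    return {
        "allowed": sorted(configured & APPROVED_MCPS),
        "flagged": sorted(configured - APPROVED_MCPS),
    }

# Example: one approved server, one unknown server that should be flagged.
report = audit_mcp_config({
    "mcpServers": {
        "jira-official": {"command": "jira-mcp"},
        "totally-safe-passwords": {"command": "unknown-binary"},
    },
})
```

In the automation stage described above, a check like this would run continuously against every developer machine, rather than as a one-off script.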
We try to look at the security situation optimistically as well as pragmatically: Given the pace of AI and vibe coding advancement, we have an opportunity to get security in place in parallel with the trend itself. Since the code is born in AI, we should fix it with AI as it's born, securing it in real time. That's the real opportunity for the industry today. We refer to this as 'vibe securing,' and it has three pillars:

• The first is visibility: understanding which IDEs are being used and where, which MCPs are in use, whether they are safe and which LLMs are employed. It's about controlling the IDE environments.

• The second, and most important, is securing the code generated through vibe coding. When developers use these tools 'naively,' recent research shows that 9 out of 10 times the code contains vulnerabilities. It's critical to clean up those issues before the application is generated, and there are ways to do this automatically without waiting for LLMs to 'learn' security.

• The third is to empower more security-aware developers (or those whose organizations push them to do the right things) by giving them contextual, real-time advice that preempts mistakes they might otherwise make.

Is this approach sustainable long-term? Currently, we're at a T-junction. Some would argue that as vibe coding evolves, the security around it will evolve as well. Over time, the models might become more secure and the generated code could be safer, maybe even to the point where you don't need security vendors to fix it post-generation.

However, that ideal scenario never really materializes. Take open source, for example: Even in well-maintained projects where people do care about security, the outcome isn't always secure. Vulnerabilities can arise. More importantly, no one can afford to wait for this ideal scenario; once again, the software development horses are leaving the stable well before security can close the stable doors.
Some might say, 'Won't developers use AI to generate code, and then security teams will just use their own AI to secure it?' That's a bit of a joke. First, it's extremely wasteful: you're spinning up multiple AI engines just to cancel each other out. Second, there's no guarantee it'll work. It's always better to secure from the start, to preempt rather than react and not to rely on some imaginary AI army to clean up afterward. AI security is a cat-and-mouse game; the key is which one your organization wants to be.


Technical.ly
5 days ago
- Technical.ly
From prompts to prototypes: How to build with AI like the pros
Building your first AI assistant doesn't require a CS degree. All you need is a laptop and a few good prompts. That was the takeaway from the 'AI Tools Basics' workshop at the 2025 Builders Conference, where Jason Michael Perry of PerryLabs and the University of Pennsylvania's Jeremy Gatens led a hands-on session showing how to turn generative AI from a novelty into a practical productivity partner.

'Today's AI is the worst AI you're ever going to use,' Perry said, noting that the technology is evolving quickly.

Perry walked the classroom-sized crowd through how to customize ChatGPT using what's known as agentic AI. Giving the model more detail about yourself yields dramatically more relevant, useful and human-like results. As he demonstrated, agentic AI can not only generate text but also act as a sounding board, content strategist and even campaign composer. It's a reminder that the key to AI is not just what you ask, but also what the system knows about you when you ask it.

'I can tell it if I have kids, where I live, my neighborhood, and by giving it this information, ChatGPT starts to get context,' Perry said. 'It starts to become a little bit more aware of who I am.'

That setup paves the way for a suite of other generative tools, Perry demonstrated. He created a campaign-ready song using Suno, an AI-powered music generator, and built a competitive media plan using Perplexity's research capabilities.

AI is for builders, not just technologists

Gatens, who heads IT for Penn's Division of Human Resources, focused on how to apply the same logic to building applications using a generative AI platform designed to transform written requirements into working prototypes. He emphasized the importance of thoughtful preparation: feeding an AI builder better inputs leads to better outputs.
Gatens also shared a prompt-building workflow rooted in the RACE framework (Role, Action, Context, Expected outcome), moving ideas from vague to actionable in just a few steps with the help of the generative AI tools Claude and Gemini. This process makes products like Loveable not only more effective but also more accessible to non-developers, he said.

'We could start with Loveable — we could have done all this in Loveable,' Gatens said. 'But by refining the idea with good technical requirements, it gives you a better chance to be successful.'

For all the technical details, the session's bigger message was about redefining the relationship between people and machines. Perry described his AI assistants as creative colleagues, with personalities, perspectives and specific jobs to do.

How we got here and what's next

This year's Builders Conference leaned into the practical applications of AI, and the 'AI Tools Basics' session was one of the clearest examples. Perry, a longtime technologist and CTO, has built a reputation for demystifying AI through accessible frameworks and real-world use cases. Gatens brings a user-focused perspective rooted in public-sector IT and workforce development. Both emphasized the importance of play and experimentation.

The session was lively, filled with audience interaction, live demos and even original music. The point wasn't just to teach tools, but to spark ideas. Both presenters fielded questions about AI's role in education, the future of AGI (artificial general intelligence) and the need for workplace readiness. Perry warned that most organizations aren't structurally ready to take full advantage of AI, with legacy systems and data silos creating major hurdles. But he also framed it as an opportunity.

'I always like to remind folks that we are in the AOL days of AI,' he said. 'That's how early this is.'

That accessibility and sense of possibility seemed to resonate with the crowd.
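The RACE workflow Gatens described amounts to a prompt template that forces all four elements to be stated explicitly. A minimal sketch follows; the section labels, wording and example values are illustrative, not taken from the session.

```python
# Hedged sketch of a RACE-style prompt builder (Role, Action, Context,
# Expected outcome). The labels and example content are illustrative; the
# framework only prescribes making all four elements explicit.

def race_prompt(role: str, action: str, context: str, expected: str) -> str:
    """Assemble the four RACE elements into one labeled prompt string."""
    return (
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Expected outcome: {expected}"
    )

# Hypothetical example: refining an app idea before handing it to an AI builder.
prompt = race_prompt(
    role="You are a product manager writing requirements for a web app.",
    action="Draft a one-page technical requirements document.",
    context="The app helps writers organize their thoughts before drafting.",
    expected="A numbered list of features an AI app builder can implement.",
)
```

The resulting prompt, rather than the raw idea, is what would be pasted into a tool like Claude or Gemini for refinement before reaching an app builder.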
As the session ended, attendees clustered around Perry and Gatens, eager for links, prompts and follow-ups. 'It's not crazy to think that we're going to have ChatGPT-style AI and robots roaming the world in households under $20,000 within the next two to three years,' Perry said. 'We're about to see much more transformation in ways that we may not be expecting, quicker than we expect.'
Yahoo
28-04-2025
- Entertainment
- Yahoo
BBC Review Finds No 'Toxic Culture' But 'Minority of People Whose Behavior Is Simply Not Acceptable'
A BBC review has found no evidence of a 'toxic culture' but a 'minority of people whose behavior is simply not acceptable,' the U.K. public broadcaster said on Monday. It vowed to 'take immediate action to improve workplace culture' after publishing the comprehensive independent report that its board had commissioned amid allegations of bullying.

The review and report from Change Associates, led by executive chairman and founder Grahame Russell, 'found no evidence of a toxic culture, but in a series of detailed findings and recommendations, it highlighted key areas for improvement,' the broadcaster said.

The report also found that 'the majority of people who work for the BBC are proud to do so and describe loving their jobs,' it said. 'Some staff, however, thought there [was] a minority of people at the BBC – both on and off-air – who were able to behave unacceptably without it being addressed.' Concluded the report: 'Even though they are small in number, their behavior creates large ripples which negatively impact the BBC's culture and external reputation.'

The BBC board and management have fully accepted the report and its findings, with both calling it 'a catalyst for meaningful change.'

'There is a minority of people whose behavior is simply not acceptable. And there are still places where powerful individuals – on and off screen – can abuse that power to make life for their colleagues unbearable,' said BBC chair Samir Shah. 'The report makes several recommendations that prioritize action over procedural change – which is exactly right.
It also addresses some deep-seated issues: for example, the need to make sure everyone can feel confident and not cowed about speaking up.' He concluded: 'In the end, it's quite simple: if you are a person who is prepared to abuse power or punch down or behave badly, there is no place for you at the BBC.'

BBC director-general Tim Davie said that the report 'represents an important moment for the BBC and the wider industry. It provides clear, practical recommendations that we are committed to implementing at pace.' He added: 'The action we are taking today is designed to change the experience of what it is to be at the BBC for everyone and to ensure the values we all sign up to when we arrive here – the values that, for most of us, are what made us want to come to the BBC in the first place – are lived and championed by the whole organization each and every day.'

The BBC said its immediate actions include launching 'a refreshed and strengthened' Code of Conduct, with specific guidance for on-air presenters; implementing 'a more robust' disciplinary policy, with updated examples of misconduct and clear consequences; requiring all TV production partners to meet Creative Industries Independent Standards Authority (CIISA) industry standards; rolling out a new 'Call It Out' campaign to 'promote positive behavior, empower informal resolution where appropriate, and challenge poor conduct;' and introducing clear pledges for anyone raising concerns and setting out what they can expect from the BBC.
The review also came up with other recommendations, including investment in leadership and HR capabilities, such as defining the leadership skills the BBC values most and ensuring they are being embedded at all levels; enhancing succession planning to 'create more transparent and inclusive processes for identifying and preparing all talent, particularly in on-air roles;' and establishing a dedicated and independent Response Team 'to rebuild trust and confidence in how issues are raised, addressed and anonymously reported.'
Yahoo
24-03-2025
- Business
- Yahoo
Does 'vibe coding' make everyone a programmer?
Can a complete tech novice create a website using everyday language on ChatGPT? That's the promise, misleading for some, of "vibe coding," the latest Silicon Valley catchphrase for an advance in generative AI that some say makes computer programming as simple as chatting online.

"You fully give in to the vibes, embrace exponentials, and forget that the code even exists," OpenAI co-founder and former Tesla AI director Andrej Karpathy wrote in early February in a message posted on X (formerly Twitter), using the term for the first time. "I'm building a project or web app, but it's not really coding -- I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works," he said.

The developer and entrepreneur was referring to the new generative AI models that produce lines of code on demand in everyday language, through writing or speech. The concept of "vibe coding" remained confined to the AI community until New York Times columnist Kevin Roose claimed to have created websites and apps without any knowledge of programming. "Just having an idea, and a little patience, is usually enough," he wrote.

The ChatGPT and Claude interfaces can write an entire program line by line on demand, as can Gemini, which launched its dedicated version, Gemini Canvas, on Tuesday. Other generative AI platforms specifically dedicated to coding have also made their mark in recent months, from Cursor to Lovable, Bolt, Replit and Windsurf.

"Maybe, just maybe, we're looking at a fundamental shift in how software is created and who creates it," said online marketing specialist Mattheo Cellini on Substack. "It's unlikely to make coding irrelevant, but it may change the way developers work," suggested Yangfeng Ji, professor of computer science at the University of Virginia. "This could lead to some job displacement, particularly for those focused solely on basic coding tasks."
Even before "vibe coding," some were seeing a downturn in IT employment as the first effects of generative AI began to be felt. The sector shed nearly 10,000 jobs in the US in February, according to the Department of Labor, and its headcount is at a three-year low.

- Expertise needed? -

Among code novices, many find it hard to catch the vibe. "People who do not have programming expertise often struggle to use these kinds of models because they don't have the right kinds of tools or knowledge to actually evaluate the output," said Nikola Banovic, professor of computer science at the University of Michigan.

On social media, the few newbies who report on their "vibe coding" quickly complain that it's not as easy as some want to believe. Without mastering computing complexities like digital directories, runtime environments or application programming interfaces (APIs), it's hard to create an app that works.

Despite his coding know-how, Claude Rubinson, a professor of sociology at the University of Houston-Downtown, wanted to create an application for his students two years ago without tinkering with the code generated by ChatGPT. After a lot of trial and error, the app finally worked, but "I'm convinced it wouldn't have worked if I hadn't understood the code," which allowed him to guide the interface using the appropriate language.

This brought home the importance of the "prompt": mastering the request submitted to obtain the desired result. "Programmers have certain levels of AI literacy that allows them to get what they want out of the models," said Banovic. Everyday users "will not know how to prompt," he warned.