Latest news with #Redis

Business Insider
2 days ago
- Business
- Business Insider
Vibe coding is the future — just don't trust it (yet)
In late June, the CEO of a tech startup halted all software development at the company. He wanted to ensure every team member who worked with code was up to speed on the latest trend: vibe coding. "You start to realize, wow, these things could move way faster," Rowan Trollope, the CEO of Redis, a software company, told Business Insider. He immediately approved the use of all AI-assisted coding tools. Then the company launched a weeklong hackathon that challenged teams of employees to "use all the latest and greatest AI technologies to do something cool," Trollope said.

Companies have found, however, that despite the excitement — and the millions in funding pouring into vibe coding companies — the technology is still limited. So, many CEOs are developing new policies and tools to maximize the benefits of vibe coding while mitigating the pitfalls.

Vibe coding is when developers (or anyone, really) prompt AI to generate code. In a survey of hundreds of engineers in May, Jellyfish, a software intelligence platform, found that 90% of them had integrated AI into their work, up from 61% just a year ago. Vibe coding is now a marketable skill in Silicon Valley. Companies from Visa to Reddit to DoorDash are posting jobs that require vibe coding experience or familiarity with AI coding tools. Meta now allows job candidates to use an AI assistant in their coding interviews.

The term was coined by OpenAI cofounder Andrej Karpathy in February. "There's a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials, and forget that the code even exists," Karpathy wrote in a post on X. "I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works." Limits to the technology remain, however.
Though vibe coding promises quick productivity gains and allows people with little coding experience to create software, tech executives say AI is still prone to mistakes, often writes unnecessarily long code, or lacks the proper architecture. So Trollope and other tech CEOs have had to introduce parameters for its use. Trollope said vibe coding is best for building proofs of concept, writing tests, and validating existing code, but not necessarily for developing any of the company's core software. "It's still not in a place yet where we would trust it with our core technology," he said.

While still limited, this new, more freewheeling approach has gained momentum in part thanks to the money flowing into vibe coding platforms. Last month, Anysphere, the company behind Cursor, an AI-assisted code editor, announced a $900 million Series C fundraise at a $9.9 billion valuation. Wix, a web-development platform, announced that it had acquired the vibe coding platform Base44 for $80 million. Replit, a code editor, saw revenue grow fivefold after it released Agent, a coding assistant that works with natural language prompts, in September. By June, the company had landed a new $250 million funding round that brought its valuation to $3 billion, according to Forbes. Swedish vibe coding startup Lovable, one of Europe's fastest-growing startups, raised $200 million in Series A funding in July at a $1.8 billion valuation, according to PitchBook.

The funding frenzy pushed AirTable, a database development platform, to relaunch last month as a fully AI-native platform. As part of the overhaul, the company created an app-building assistant called Omni for vibe coders. It allows developers to "conversationally vibe generate the app they want, but understand what's been generated all the way down to the data and logic layer as well," AirTable said in a blog post announcing the overhaul.

"There's still a question of: Is that an AI tourist attraction? Is it going to be durable? Is it going to be high churn?" AirTable CEO Howie Liu told Business Insider. But "all these people are coming in and pulling out their credit cards to try it out." For AirTable, "not fully reinventing ourselves is kind of like a guaranteed path to obsolescence," Liu said.

With the launch of Omni, Liu also saw an opportunity to correct some of the problems that come with vibe coding. With vibe coding, "you're not really inspecting the code, you're not really thinking about the technical architecture, you're just telling it what you want it to build and kind of like clicking the 'I'm feeling lucky' button," Liu said. "The magical thing is like the AI has gotten good enough that it seemingly just works some of the time." But even apps that "seemingly work" can be riddled with errors and security vulnerabilities at their deeper, infrastructure layers.

In a perfect world, developers could leverage artificial general intelligence to code apps, given the breadth of information that developers need to reason through, Liu said. Until then, developers need a "two-way feedback loop between the agent that is building the code, or building the app, and the user, the human, who's guiding it and saying, 'here's what I actually want you to build.'"

At Redis, humans are convening internal groups to share best prompting practices to improve their part of the equation, Trollope said. "I think people do go on a journey where you start very small and you very quickly realize the prompts can get longer and longer and more and more complicated," he said.


Techday NZ
30-07-2025
- Business
- Techday NZ
AI deployment creates new cybersecurity risks, warns report
Trend Micro has published its latest State of AI Security Report, highlighting how the pace of artificial intelligence development is contributing to new cybersecurity vulnerabilities in critical infrastructure. The report details a range of security challenges faced by organisations as they deploy AI technologies, including vulnerabilities in key components, accidental internet exposure, weaknesses in open-source software, and issues with container-based systems.

Critical vulnerabilities

The research identifies vulnerabilities and exploits in vital parts of AI infrastructure. Many AI applications rely on a blend of specialised software, some of which is susceptible to the same flaws as traditional software. The report notes the discovery of zero-day vulnerabilities in components such as ChromaDB, Redis, NVIDIA Triton, and NVIDIA Container Toolkit, posing significant risks if left unpatched. In addition, the report draws attention to the exposure of servers hosting AI infrastructure to the public internet, often as a result of rapid deployment and inadequate security measures. According to Trend Micro, more than 200 ChromaDB servers, 2,000 Redis servers, and over 10,000 Ollama servers have been found exposed without authentication, leaving them open to malicious probing.

Open-source and container concerns

The reliance on open-source components in AI frameworks is another focus for security risks. Vulnerabilities may go unnoticed when these components are integrated into production systems, as demonstrated at the recent Pwn2Own Berlin event, where researchers identified an exploit in the Redis vector database attributed to an outdated Lua component. Continuing the theme of infrastructure risk, the report discusses the widespread use of containers in AI deployments. Containers, while commonly used to improve efficiency, are vulnerable to the same security issues that plague broader cloud and container environments.
Pwn2Own researchers also discovered an exploit targeting the NVIDIA Container Toolkit, raising concerns about container management practices in the deployment of AI technologies.

Expert perspectives

"AI may represent the opportunity of the century for ANZ businesses. But those rushing in too fast without taking adequate security precautions may end up causing more harm than good. As our report reveals, too much global AI infrastructure is already being built from unsecured and/or unpatched components, creating an open door for threat actors."

This statement from Mick McCluney, Field CTO for ANZ at Trend Micro, underscores the importance of balancing innovation in AI with a robust approach to cybersecurity.

Stuart MacLellan, Chief Technology Officer at NHS SLAM, also shared perspectives on the organisational implications of these findings: "There are still lots of questions around AI models and how they could and should be used. We now get much more information than we ever did about the visibility of devices and what applications are being used. It's interesting to collate that data and get dynamic, risk-based alerts on people and what they're doing depending on policies and processes. That's going to really empower the decisions that are made organisationally around certain products."

Recommended actions

The report sets out several practical steps organisations can take to mitigate risk. These include enhanced patch management, regular vulnerability scanning, maintaining a comprehensive inventory of all software components, and adopting best practices for container management. The report also advises that configuration checks be undertaken to ensure that critical AI infrastructure is not inadvertently exposed to the internet. The findings highlight the need for the developer community and users of AI to better balance security with speed to market. Trend Micro recommends that organisations exercise due diligence, particularly as the adoption of AI continues to rise across various sectors.
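The report's advice about accidental internet exposure is straightforward to act on for Redis specifically: an unprotected instance will answer a bare PING without any credentials, while one configured with `requirepass` replies with an error instead. As a hedged illustration (not taken from the report), here is a minimal Python sketch of that check; the `probe` helper is an assumption about how one might test a server, and it should only ever be pointed at hosts you are authorized to assess.

```python
# Minimal sketch: detect whether a Redis server answers commands without
# authentication. Illustrative only — run solely against hosts you own.
import socket

def reply_indicates_open(reply: bytes) -> bool:
    """True if the server answered PING without demanding authentication.

    An unprotected Redis answers with a +PONG simple string; one protected
    by `requirepass` answers with a -NOAUTH (or, on older versions, -ERR)
    error instead.
    """
    return reply.startswith(b"+PONG")

def probe(host: str, port: int = 6379, timeout: float = 2.0) -> bool:
    """Send a bare RESP PING and report whether the server is open."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"PING\r\n")
        return reply_indicates_open(s.recv(64))
```

Hardening is the inverse of this check: setting `requirepass` (or ACLs) and binding Redis to a private interface ensures a probe like this never succeeds from the public internet.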

Time of India
11-07-2025
- Entertainment
- Time of India
How Redis powers SonyLIV's seamless streaming experience
SonyLIV has played a pioneering role in shaping India's OTT landscape—streaming live sports, original series, and global content to millions of users across 55+ countries. But what goes on behind the scenes to ensure such a smooth and scalable experience? In this exclusive conversation, Sumon Mal, Head of Backend Engineering at SonyLIV, takes us through the platform's technological journey—from modernizing legacy infrastructure to enabling real-time interactivity during live shows. He explains how SonyLIV manages massive volumes of concurrent traffic, optimizes content delivery, and continuously improves user experience through intelligent engineering choices. Redis, in combination with AWS and other cloud-native tools, plays a crucial role in handling over 22,000 QPS and powering features like real-time voting and automated publishing workflows. Get an insider view of how SonyLIV stays ahead in India's dynamic OTT race.
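The interview does not disclose SonyLIV's actual implementation, but the real-time voting it alludes to is commonly built on Redis hash counters, where each vote is a single atomic HINCRBY and a tally is one HGETALL round trip. The sketch below is an assumption-laden illustration of that general pattern: `VoteStore` is an in-memory stand-in that mirrors the two Redis commands a client would issue, and the key name `vote:episode42` and contestant labels are invented.

```python
# Illustrative pattern behind real-time vote tallies on a Redis hash.
# VoteStore is an in-memory stand-in, not SonyLIV's code or a Redis client.
from collections import defaultdict

class VoteStore:
    def __init__(self):
        self._hashes = defaultdict(lambda: defaultdict(int))

    def hincrby(self, key: str, field: str, amount: int = 1) -> int:
        # Mirrors Redis HINCRBY: atomic increment of one field in a hash,
        # which is what makes concurrent voting safe without locks.
        self._hashes[key][field] += amount
        return self._hashes[key][field]

    def hgetall(self, key: str) -> dict:
        # Mirrors Redis HGETALL: read the full tally in one round trip.
        return dict(self._hashes[key])

store = VoteStore()
poll = "vote:episode42"  # hypothetical key-naming scheme
for contestant in ["A", "B", "A", "C", "A", "B"]:
    store.hincrby(poll, contestant)

print(store.hgetall(poll))  # {'A': 3, 'B': 2, 'C': 1}
```

With a real Redis client the two methods map one-to-one onto the HINCRBY and HGETALL commands, and the same hash can feed a live on-screen leaderboard.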

Time of India
30-06-2025
- Business
- Time of India
The Billion Agent Revolution: Building an Agent-Ready AI Stack Today
In this session, Manvinder Singh, VP of Product Management at Redis, spoke about building an agent-ready AI stack. The talk was part of the ETCIO Annual Conclave 2025, held in Goa from May 29 to June 1.


The Star
07-06-2025
- Business
- The Star
Human coders are still better than AI, says this expert developer
In the complex 'will AI steal my job?' debate, software developers are among the workers most immediately at risk from powerful AI tools. It certainly looks like the tech sector wants to reduce the number of humans working those jobs. Bold statements from the likes of Meta's Mark Zuckerberg and Anthropic's Dario Amodei support this: both say AI is already able to take over some code-writing roles. But a new blog post from a prominent coding expert strongly disputes their arguments, and supports some AI critics' position that AI really can't code.

Salvatore Sanfilippo, an Italian developer who created Redis (an online database which calls itself the 'world's fastest data platform' and is beloved by coders building real-time apps), published a blog post this week, provocatively titled 'Human coders are still better than LLMs.' His title refers to the large language model systems that power AI chatbots like OpenAI's ChatGPT and Anthropic's Claude.

Sanfilippo said he's 'not anti-AI' and actually does 'use LLMs routinely,' and explained some specific interactions he'd had with Google's Gemini AI about writing code. These left him convinced that AIs are 'incredibly behind human intelligence,' so he wanted to make a point about it. The billions invested in the technology and the potential upending of the workforce mean it's 'impossible to have balanced conversations' on the matter, he wrote.

Sanfilippo blogged that he was trying to 'fix a complicated bug' in Redis's systems. He made an attempt himself, and then asked Gemini, 'hey, what we can do here? Is there a super fast way' to implement his fix?
Then, using detailed examples of the kind of software he was working with and the problem he was trying to fix, he blogged about the back-and-forth dialogue he had with Gemini as he tried to coax it toward an acceptable answer. After numerous interactions where the AI couldn't improve on his idea or really help much, he said he 'asked Gemini to do an analysis of' his last idea, 'and it was finally happy.'

We can ignore the detailed code itself and just concentrate on Sanfilippo's final paragraph. 'All this to say: I just finished the analysis and stopped to write this blog post, I'm not sure if I'm going to use this system (but likely yes), but, the creativity of humans still have an edge, we are capable of really thinking out of the box, envisioning strange and imprecise solutions that can work better than others,' he wrote. 'This is something that is extremely hard for LLMs.'

Gemini was useful, he admitted, to simply 'verify' his bug-fix ideas, but it couldn't outperform him and actually solve the problem itself.

This stance from an expert coder goes up against some other pro-AI statements. Zuckerberg has said he plans to fire mid-level coders from Meta to save money, employing AI instead. In March, Amodei hit the headlines when he boldly predicted that all code would be written by AIs inside a year. Meanwhile, on the flip side, a February report from Microsoft warned that young coders coming out of college were already so reliant on AI to help them that they failed to understand the hard computer science behind the systems they were working on – something that may trip them up if they encounter a complex issue like Sanfilippo's bug.

Commenters on a piece talking about Sanfilippo's blog post on the coding news site Hacker News broadly agreed with his argument. One commenter likened the issue to a popular meme about social media: 'You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.'
Another writer noted that AIs were useful because even though they give pretty terrible coding advice, 'It still saves me time, because even 50 percent accuracy is still half that I don't have to write myself.' Lastly, another coder pointed out a very human benefit from using AI: 'I have ADHD and starting is the hardest part for me. With an LLM it gets me from 0 to 20% (or more) and I can nail it for the rest. It's way less stressful for me to start now.'

Why should you care about this? At first glance, it looks like a very inside-baseball discussion about specific coding issues. You should care because your team members may be tempted to rely on AI to help them write code for your company, either for cost or speed rationales or because they lack particular expertise. But you should be wary. AIs are known to be unreliable, and Sanfilippo's argument, supported by other coders' comments, points out that AI really isn't capable of certain key coding tasks. For now, at least, coders' jobs may be safe… and if your team does use AI to code, they should double- and triple-check the AI's advice before implementing it in your IT system. – Inc./Tribune News Service