
Gemini Code Assist launches for all, delivers 2.5x boost
Gemini Code Assist for individuals and for GitHub is now generally available and powered by Gemini 2.5.
Gemini Code Assist, Google's free AI coding assistant, entered public preview for individuals a few months ago, alongside a code review agent for GitHub. According to Damith Karunaratne, Group Product Manager for Gemini Code Assist, since the February preview announcement the company has been "requesting input, listening to feedback and shipping capabilities developers are asking for."
Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist. The tool is designed to deliver stronger coding performance and to help developers with tasks such as creating visually compelling web applications, as well as code transformation and editing. Both Gemini Code Assist for individuals and Gemini Code Assist for GitHub are now generally available, and developers can get started quickly.
Karunaratne said, "Now we're announcing that Gemini Code Assist for individuals and Gemini Code Assist for GitHub are generally available, and developers can get started in less than a minute. Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist, features advanced coding performance, and helps developers excel at tasks like creating visually compelling web apps, along with code transformation and editing."
Recognising that developers often spend significant time personalising their coding environments for efficiency and collaborative purposes, the latest updates to Gemini Code Assist focus on expanded customisation. The company states that all versions now offer more options to accommodate individual and team preferences, including workflow customisation, the option to resume tasks from where they were paused, and new tools to enforce team coding standards, style guides and architectural patterns.
"We know developers spend a lot of time personalizing their coding environment so they can be more efficient and work better in team settings. Our latest updates to Gemini Code Assist, across all versions, give more customization options for you and your team's preferences. This includes more ways to customize workflows to fit different project needs, the ability to more easily pick up tasks exactly from where you were left off, and new tooling to enforce a team's coding standards, style guides and architectural patterns," said Karunaratne.
Some recent updates to Gemini Code Assist include the ability to resume work and explore new directions using chat history and threads, shaping the AI's responses by specifying persistent rules such as "always add unit tests," and automating repetitive tasks with custom commands like "generate exception handling logic." Other features allow developers to review and accept chat code suggestions in parts, across files, or all at once, with improvements aimed at streamlining the code review and suggestion process.
Karunaratne outlined, "Here are some examples of recent updates you can explore in Gemini Code Assist: Quickly resume where you left off and jump into new directions with chat history and threads. Shape Gemini's responses by specifying rules (i.e., 'always add unit tests') that you want applied to every AI generation in the chat. Automate repetitive tasks by creating custom commands (i.e., 'generate exception handling logic'). Save time by choosing to review and accept chat code suggestions in parts, across files, or accept all together. Reviewing and accepting code suggestions is now significantly improved."
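To make those customisation features concrete, here is a minimal, hypothetical sketch of the kind of output a rule such as "always add unit tests" or a custom command such as "generate exception handling logic" might steer the assistant towards. The function, file name and test below are illustrative assumptions, not actual output from Gemini Code Assist.

```python
# Hypothetical illustration: the sort of exception handling and unit test a
# rule- or command-driven assistant might generate for a small JSON helper.
# Function names and structure are assumptions for illustration only.
import json
import unittest


def load_config(path: str) -> dict:
    """Read a JSON config file, returning {} if the file is missing."""
    try:
        with open(path, encoding="utf-8") as fh:
            return json.load(fh)
    except FileNotFoundError:
        # A missing file is treated as an empty configuration.
        return {}
    except json.JSONDecodeError as exc:
        # Surface malformed JSON with context rather than a bare traceback.
        raise ValueError(f"Config file {path!r} is not valid JSON") from exc


class LoadConfigTests(unittest.TestCase):
    def test_missing_file_returns_empty_dict(self):
        self.assertEqual(load_config("does_not_exist.json"), {})


if __name__ == "__main__":
    unittest.main()
```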
The company also confirmed that when a 2 million token context window becomes available on Vertex AI, Gemini Code Assist Standard and Enterprise customers will have access to it as well. This expanded capability is intended to aid those working on complex, large-scale development challenges, such as bug tracing, code transformations, and compiling extensive onboarding materials for new members of sizeable codebases.
"And when we make a 2 million token context window available on Vertex AI, Gemini Code Assist Standard and Enterprise developers will get it too. This expanded context window will help developers with complex tasks at large scale, like bug tracing, code transformations, and generating comprehensive onboarding guides for people new to a vast codebase," Karunaratne explained.
New data provided by the company shows notable improvements in developer productivity when using Gemini Code Assist. In an internal experiment comparing developers using the tool with developers who had no coding assistance software, Gemini Code Assist increased the odds of successfully completing common development tasks by 2.5 times.
"New data shows that Gemini Code Assist significantly helps developers get things done. In an experiment comparing developers using Gemini Code Assist to developers without any coding assistance tools, we found that Gemini Code Assist significantly boosts developers' odds of success in completing common development tasks by 2.5 times," Karunaratne noted.
The Gemini Code Assist extension is available for download in both Visual Studio Code and JetBrains integrated development environments, and the code review agent is accessible through the GitHub app. The service is now also available in Android Studio, allowing businesses to utilise Gemini at every phase of the Android development lifecycle.
Related Articles


Techday NZ
29-05-2025
Kurrent unveils open-source MCP Server for AI-driven databases
Kurrent has released its open-source MCP Server for KurrentDB, enabling developers to interact with data in the KurrentDB database using natural language and AI agents rather than traditional coding methods. The Kurrent MCP Server offers new functionalities, allowing developers not only to query data but also to create, test, and debug projections directly through conversational commands.

This feature is not available in other MCP server implementations, establishing a novel approach to database interaction by integrating AI-driven workflows into the database layer. Central to this release is the introduction of a self-correcting engine, which assists in automatically identifying and fixing logic errors during the prototyping phase. This reduces the need for manual debugging loops, streamlining the development process significantly for users building or modifying projections.

The software is fully open-source and released under the MIT license, with documentation and a development roadmap available on GitHub. This permits both enterprise users and open-source contributors to adopt, customise, and improve the KurrentDB MCP Server without licensing restrictions.

Kurrent MCP Server supports natural language prompts for tasks such as reading streams, listing streams within the database, building and updating projections, writing events to streams, and retrieving projection status for debugging. These capabilities aim to make the visual and analytical exploration of data more accessible and conversational for users with varying levels of technical expertise.

The MCP Server is compatible with a broad range of frontier AI models, such as Claude, GPT-4, and Gemini. It can be integrated with popular IDEs and agent frameworks, including Cursor and Windsurf. This compatibility enables developers to leverage their preferred tools while reducing friction points typically associated with traditional database interactions.

Addressing the new approach, Kirk Dunn, CEO of Kurrent, said, "Our new MCP Server makes it possible to use the main features of the KurrentDB database, like reading and writing events to streams and using projections, in a way that's as simple as having a conversation. The system's ability to test and fix itself reduces the need for debugging and increases reliability. Copilots and AI assistants become productive database partners rather than just code generators, seamlessly interfacing with KurrentDB."

The server's key functions are designed to reduce development times for database tasks, enabling a focus on higher-value project work. Eight core capabilities are available: Read_stream, List_streams, Build_projection, Create_projection, Update_projection, Test_projection, Write_events_to_stream, and Get_projections_status. Each of these responds directly to natural language instructions provided by the developer or AI agent.

Kurrent has highlighted opportunities for the open source community to participate in the MCP Server's ongoing development. Developers can contribute code, report or tackle issues, and suggest new features through the project's GitHub repository and discussion forums. Comprehensive educational resources and installation guides are intended to help developers quickly integrate the MCP Server with KurrentDB for various use cases.

Lokhesh Ujhoodha, Lead Architect at Kurrent, commented, "Before, database interactions required developers to master complex query languages, understand intricate data structures, and spend significant time debugging projections and data flows. Now, everything agentic can interface with KurrentDB through this MCP Server. We're not just connecting to today's AI tools, but we're positioning for a future where AI agents autonomously manage data workflows, make analytical decisions and create business insights with minimal human intervention."

Kurrent emphasises that its MCP Server aims to remove barriers historically associated with database development by supporting conversational, agent-driven workflows. This aligns with broader trends towards AI-native infrastructure in enterprise environments, where human and algorithmic agents increasingly collaborate to deliver data-driven business outcomes.
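As an illustration of how an AI agent might invoke one of the capabilities listed above, the sketch below builds a Model Context Protocol style tools/call request for a hypothetical read_stream tool. The tool name, argument names and stream identifier are assumptions made for illustration; Kurrent's GitHub documentation defines the actual schema.

```python
# Minimal sketch of an MCP-style JSON-RPC tool call, loosely modelled on one
# of the capabilities listed above (Read_stream). Tool name, arguments and
# the stream identifier are hypothetical, not Kurrent's published schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP method for invoking a tool
    "params": {
        "name": "read_stream",        # hypothetical tool name
        "arguments": {
            "stream": "orders-2025",  # hypothetical stream identifier
            "max_count": 50,
        },
    },
}

print(json.dumps(request, indent=2))
```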


Techday NZ
29-05-2025
LexisNexis data breach exposes 364,000 personal records
LexisNexis, a prominent global data analytics and legal intelligence provider, has confirmed a data breach impacting more than 364,000 individuals, raising significant concerns over the security of personal information held by data brokers. The breach, reportedly executed through a third-party platform used for software development, exposed a wide array of sensitive data, including names, dates of birth, phone numbers, addresses, email and postal details, driver's license numbers, and Social Security information.

The exposure of such comprehensive personal data has triggered alarm among both customers and cybersecurity experts. LexisNexis serves a varied clientele, ranging from law enforcement agencies to automotive manufacturers, which means the implications of the breach extend across numerous industries and organisations. The breadth and depth of the data held by LexisNexis amplify the potential fallout from the incident.

Andrew Costis, Engineering Manager of the Adversary Research Team at AttackIQ, commented on the breach, highlighting its origins and wider impact: "Legal AI and data analytics company LexisNexis has disclosed a data breach that has affected at least 364,000 people. An unknown hacker accessed customer data through a third-party platform that LexisNexis utilises for software development. The stolen data includes names, dates of birth, phone numbers, postal and email addresses, driver's license numbers, and Social Security information. Given the range of LexisNexis' customer base, which spans law enforcement agencies to vehicle manufacturers, the scope of individuals and organisations impacted is substantial."

Costis further stressed the critical importance of security for data brokers: "Protecting the information of its customers is a necessity for any successful company. However, for data brokers like LexisNexis, who profit from collecting and selling huge amounts of personal and financial customer data, the need for airtight security measures is exponentially greater. One breach can often set off a chain reaction of mistrust from their client base, putting not just the company at risk, but their massive stockpile of customer data as well. A recent example of this effect can be seen in the recent 23andMe breach and subsequent bankruptcy."

He called for more proactive defence strategies: "To protect valuable customer data, organisations must prioritise proactive defense, with a strong focus on threat detection and response. By utilising techniques like adversarial exposure validation, organisations can test their system's response to identify and address any vulnerabilities before they can be exploited."

Steve Cobb, Chief Information Security Officer at SecurityScorecard, added analysis on the risks associated with third-party platforms: "The breach at LexisNexis Risk Solutions, involving unauthorised access via GitHub and the exposure of over 360,000 individuals' personal data, highlights a critical blind spot in third-party risk management."

He pointed out the ongoing challenges LexisNexis faces with its data broker role: "LexisNexis has already faced scrutiny over data sharing relationships and has faced multiple lawsuits for its role as a data broker that collects and sells sensitive information. The immense volume of sensitive data that the company holds makes the integrity of every access point, including software development platforms, non-negotiable."

Cobb emphasised the importance of treating third-party platforms with the same security rigour as core systems: "Third-party platforms are high-value assets used by organisations that demand the same level of security oversight as any core system. When enterprises treat them as afterthoughts, they open the door to cascading risk. In today's ecosystem, third-party risk isn't an external issue, but an internal vulnerability. The future of cyber defence hinges on operationalising visibility and integrating supply chain detection and response into the heart of security operations."

LexisNexis has historically faced scrutiny over its data collection practices and the sharing of sensitive information. This latest breach may reinvigorate debate around the accountability of data brokers and the regulatory frameworks designed to protect individuals' privacy. As the volume and value of digital information continue to rise, the incident serves as a stark reminder of the responsibility data custodians bear to maintain the highest standards of security across all facets of their operations, including those managed by third-party suppliers.

