
New Relic & GitHub Copilot integration boosts DevOps speed
New Relic has announced an integration between its AI-powered observability platform and the GitHub Copilot coding agent to streamline software development workflows and improve application reliability.
This integration aims to automate the traditional processes of change validation and incident response, enabling enterprises to accelerate software deployment cycles while maintaining system stability. The unified approach brings together proactive monitoring by New Relic and automated code remediation through GitHub Copilot's agentic capabilities.
Describing the significance of the new technology, New Relic Chief Product Officer Manav Khurana said: "Agentic AI is poised to be a transformative technology for enterprise software developers and engineers, who are facing intense pressure to ship more innovations at a faster pace without sacrificing quality and reliability."
He added: "With the innovative integration of New Relic's intelligent observability technology with GitHub Copilot coding agent, we are closing the loop on ensuring continued application health. Together with our long-time partner GitHub, we are providing a new, agentic way for modern software development that uses the power of agentic AI to transform the way enterprises innovate."
The combined solution leverages New Relic's continuous code deployment monitoring to automatically detect performance issues arising from recent changes. If an issue is identified, New Relic pinpoints the root cause and generates a comprehensive GitHub issue, complete with context for developers. Developers can then review this automated issue and, if deemed sufficient, delegate it to GitHub Copilot. Copilot analyses the issue, drafts a suggested code fix, and initiates a pull request for human review. After the fix is merged, New Relic validates the correction, completing the issue resolution cycle.
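The issue-creation step in this loop can be sketched in a few lines. The following is a rough illustration, not New Relic's implementation: it builds a GitHub issue from an alert payload and posts it via GitHub's public Issues API (`POST /repos/{owner}/{repo}/issues`). The alert field names (`title`, `rootCause`, `deploymentId`) are hypothetical stand-ins; real New Relic webhook payloads differ, and the actual integration handles this automatically.

```python
import json
import urllib.request

def build_issue_payload(alert: dict) -> dict:
    """Turn an alert dict into a GitHub issue payload.

    The alert keys used here (title, rootCause, deploymentId) are
    illustrative placeholders, not New Relic's real schema.
    """
    body = (
        "**Detected by observability tooling**\n\n"
        f"Root cause: {alert.get('rootCause', 'unknown')}\n"
        f"Deployment: {alert.get('deploymentId', 'n/a')}\n"
    )
    return {
        "title": f"[New Relic] {alert['title']}",
        "body": body,
        # Labels let a human (or a triage rule) decide whether to
        # hand the issue to the Copilot coding agent.
        "labels": ["observability", "auto-generated"],
    }

def create_issue(repo: str, token: str, payload: dict) -> int:
    """POST the payload to GitHub's Issues endpoint; returns HTTP status."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In this sketch, the human review step the article describes would happen on the resulting issue and on the pull request Copilot later opens.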
This process aims to reduce the time required to resolve coding issues, allowing developers to dedicate more attention to impactful projects rather than repetitive troubleshooting tasks. Key benefits outlined by New Relic include automation of detection and validation processes, empowerment of engineers to focus on strategic work, quicker resolution of performance issues, and safer, faster deployment cycles.
Julia Liuson, President, Developer Division at Microsoft, commented on the importance of collaborative integrations: "Millions of organisations rely on GitHub every day for software delivery. Our integrations with key partners like New Relic are instrumental in making our tools that much more scalable, reliable and intelligent. Together with key partners like New Relic we provide developers valuable insights and automation to enhance Agentic DevOps, driving innovation and efficiency across the software lifecycle."
The GitHub Copilot coding agent integration builds upon the existing partnership between New Relic and both Microsoft and GitHub. According to New Relic, this expansion brings its agentic AI capabilities and critical observability data deeper into the developer workflow, delivering faster feedback and reducing the risk of code-related issues affecting business operations.
The integration is currently available via New Relic as a limited preview, accessible to Copilot Pro+ and Copilot Enterprise account holders. GitHub Copilot coding agent is also available in preview for GitHub Copilot Enterprise and Pro+ customers.

Related Articles


Techday NZ
2 days ago
Kurrent unveils open-source MCP Server for AI-driven databases
Kurrent has released its open-source MCP Server for KurrentDB, enabling developers to interact with data in the KurrentDB database using natural language and AI agents rather than traditional coding methods.

The Kurrent MCP Server allows developers not only to query data but also to create, test, and debug projections directly through conversational commands. According to Kurrent, this capability is not available in other MCP server implementations, establishing a novel approach to database interaction by integrating AI-driven workflows into the database layer.

Central to this release is a self-correcting engine, which assists in automatically identifying and fixing logic errors during the prototyping phase. This reduces the need for manual debugging loops, significantly streamlining the development process for users building or modifying projections.

The software is fully open-source and released under the MIT license, with documentation and a development roadmap available on GitHub. This permits both enterprise users and open-source contributors to adopt, customise, and improve the KurrentDB MCP Server without licensing restrictions.

Kurrent MCP Server supports natural language prompts for tasks such as reading streams, listing streams within the database, building and updating projections, writing events to streams, and retrieving projection status for debugging. These capabilities aim to make the visual and analytical exploration of data more accessible and conversational for users with varying levels of technical expertise.

The MCP Server is compatible with a broad range of frontier AI models, such as Claude, GPT-4, and Gemini, and can be integrated with popular IDEs and agent frameworks, including Cursor and Windsurf. This compatibility enables developers to use their preferred tools while reducing the friction typically associated with traditional database interactions.
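The self-correcting engine described above amounts to a test-and-repair loop. The toy sketch below illustrates that pattern only; it is not Kurrent's implementation. The `test` and `fix` callables stand in for the projection test harness and for the AI model that proposes a revision.

```python
def self_correct(candidate: str, test, fix, max_rounds: int = 3) -> str:
    """Toy test-and-repair loop.

    candidate: a draft projection (here just a string).
    test: returns (ok, error) for a candidate.
    fix: returns a revised candidate given the candidate and error;
         in practice this role is played by an AI model.
    """
    for _ in range(max_rounds):
        ok, error = test(candidate)
        if ok:
            return candidate
        # Feed the failure back to the fixer and try again.
        candidate = fix(candidate, error)
    raise RuntimeError("could not repair candidate within budget")
```

The point of such a loop is that the developer only sees the final, passing candidate rather than each intermediate debugging round.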
Addressing the new approach, Kirk Dunn, CEO of Kurrent, said: "Our new MCP Server makes it possible to use the main features of the KurrentDB database, like reading and writing events to streams and using projections, in a way that's as simple as having a conversation. The system's ability to test and fix itself reduces the need for debugging and increases reliability. Copilots and AI assistants become productive database partners rather than just code generators, seamlessly interfacing with KurrentDB."

The server's key functions are designed to reduce development times for database tasks, enabling a focus on higher-value project work. Eight core capabilities are available: Read_stream, List_streams, Build_projection, Create_projection, Update_projection, Test_projection, Write_events_to_stream, and Get_projections_status. Each responds directly to natural language instructions provided by the developer or AI agent.

Kurrent has highlighted opportunities for the open source community to participate in the MCP Server's ongoing development. Developers can contribute code, report or tackle issues, and suggest new features through the project's GitHub repository and discussion forums. Comprehensive educational resources and installation guides are intended to help developers quickly integrate the MCP Server with KurrentDB for various use cases.

Lokhesh Ujhoodha, Lead Architect at Kurrent, commented: "Before, database interactions required developers to master complex query languages, understand intricate data structures, and spend significant time debugging projections and data flows. Now, everything agentic can interface with KurrentDB through this MCP Server. We're not just connecting to today's AI tools, but we're positioning for a future where AI agents autonomously manage data workflows, make analytical decisions and create business insights with minimal human intervention."
Kurrent emphasises that its MCP Server aims to remove barriers historically associated with database development by supporting conversational, agent-driven workflows. This aligns with broader trends towards AI-native infrastructure in enterprise environments, where human and algorithmic agents increasingly collaborate to deliver data-driven business outcomes.


Techday NZ
3 days ago
LexisNexis data breach exposes 364,000 personal records
LexisNexis, a prominent global data analytics and legal intelligence provider, has confirmed a data breach impacting more than 364,000 individuals, raising significant concerns over the security of personal information held by data brokers. The breach, reportedly executed through a third-party platform used for software development, exposed a wide array of sensitive data, including names, dates of birth, phone numbers, addresses, email and postal details, driver's license numbers, and Social Security information.

The exposure of such comprehensive personal data has triggered alarm among both customers and cybersecurity experts. LexisNexis serves a varied clientele, ranging from law enforcement agencies to automotive manufacturers, which means the implications of the breach extend across numerous industries and organisations. The breadth and depth of the data held by LexisNexis amplify the potential fallout from the incident.

Andrew Costis, Engineering Manager of the Adversary Research Team at AttackIQ, commented on the breach, highlighting its origins and wider impact: "Legal AI and data analytics company LexisNexis has disclosed a data breach that has affected at least 364,000 people. An unknown hacker accessed customer data through a third-party platform that LexisNexis utilises for software development. The stolen data includes names, dates of birth, phone numbers, postal and email addresses, driver's license numbers, and Social Security information. Given the range of LexisNexis' customer base, which spans law enforcement agencies to vehicle manufacturers, the scope of individuals and organisations impacted is substantial."

Costis further stressed the critical importance of security for data brokers: "Protecting the information of its customers is a necessity for any successful company. However, for data brokers like LexisNexis, who profit from collecting and selling huge amounts of personal and financial customer data, the need for airtight security measures is exponentially greater. One breach can often set off a chain reaction of mistrust from their client base, putting not just the company at risk, but their massive stockpile of customer data as well. A recent example of this effect can be seen in the 23andMe breach and subsequent bankruptcy."

He called for more proactive defence strategies: "To protect valuable customer data, organisations must prioritise proactive defense, with a strong focus on threat detection and response. By utilising techniques like adversarial exposure validation, organisations can test their system's response to identify and address any vulnerabilities before they can be exploited."

Steve Cobb, Chief Information Security Officer at SecurityScorecard, added analysis on the risks associated with third-party platforms: "The breach at LexisNexis Risk Solutions, involving unauthorised access via GitHub and the exposure of over 360,000 individuals' personal data, highlights a critical blind spot in third-party risk management." He pointed out the ongoing challenges LexisNexis faces in its role as a data broker: "LexisNexis has already faced scrutiny over data sharing relationships and has faced multiple lawsuits for its role as a data broker that collects and sells sensitive information. The immense volume of sensitive data that the company holds makes the integrity of every access point, including software development platforms, non-negotiable."

Cobb emphasised the importance of treating third-party platforms with the same security rigour as core systems: "Third-party platforms are high-value assets used by organisations that demand the same level of security oversight as any core system. When enterprises treat them as afterthoughts, they open the door to cascading risk. In today's ecosystem, third-party risk isn't an external issue, but an internal vulnerability. The future of cyber defence hinges on operationalising visibility and integrating supply chain detection and response into the heart of security operations."

LexisNexis has historically faced scrutiny over its data collection practices and the sharing of sensitive information. This latest breach may reinvigorate debate around the accountability of data brokers and the regulatory frameworks designed to protect individuals' privacy. As the volume and value of digital information continue to rise, the incident serves as a stark reminder of the responsibility data custodians bear to maintain the highest standards of security across all facets of their operations, including those managed by third-party suppliers.

