BrowserStack launches AI agent suite to automate, simplify software testing

Time of India, 3 days ago
Accel-backed BrowserStack has launched a suite of artificial intelligence (AI)-powered agents integrated across its software testing platform, aimed at helping software teams accelerate release cycles, improve test coverage, and boost productivity.

The product suite, called BrowserStack AI, comprises five agents that address key pain points in the software testing life cycle: test planning, authoring, maintenance, accessibility, and visual review. The company claims these tools can increase productivity by up to 50% and cut test creation time by over 90%.

'We mapped the entire testing journey to identify where teams spend the most time and manual effort and reimagined it with AI at the core,' said Ritesh Arora, CEO and cofounder of BrowserStack. 'Early results are game-changing; our test case generator delivers 90% faster test creation with 91% accuracy and 92% coverage, results that generic LLMs can't match.'

Unlike generic copilots or disconnected plugins, BrowserStack AI agents are built directly into BrowserStack products, drawing context-aware insights from a unified data store across the testing lifecycle, the company said.

The suite includes the test case generator agent, which creates detailed test cases from product documents, and the low-code authoring agent, which turns them into automated tests using natural language. It also includes the self-healing agent, which automatically adapts and remediates tests during execution, preventing failures caused by user interface (UI) changes, while the A11y issue detection agent uses AI to surface accessibility issues across websites and apps.
Also, the visual review agent highlights only meaningful changes, making reviews faster.

The company also has an integration layer, called BrowserStack MCP Server, that enables developers and testers to test directly from their integrated development environments (IDEs), large language models (LLMs), or any other MCP-enabled client.

'AI is only useful if it delivers meaningful, context-rich outcomes,' said Arora. 'That's why we've invested in building AI agents that understand test environments, real-world execution data, and user behaviour across thousands of teams.'

Founded in 2011 by Ritesh Arora and Nakul Aggarwal, BrowserStack is a cloud-based platform for developers to test websites and mobile apps across different devices, operating systems, and browsers. It operates across 21 data centres worldwide and provides access to more than 30,000 real devices and browsers for testing.

In February, the company announced the launch of an AI-powered test platform that consolidates the entire quality assurance toolchain, from planning and creating tests to executing and debugging them, with the aim of helping development teams deliver applications faster and smarter.

BrowserStack, which said that over 700 engineers are now working on its AI-powered test platform, has more than 20 additional agents in development. The company's tools currently power more than three million tests daily for over 50,000 teams, including companies like Amazon, Microsoft, and Nvidia.
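BrowserStack has not published the internals of its self-healing agent, but the general pattern it describes, tests that survive UI changes by falling back to alternative element locators, can be sketched as below. This is an illustrative sketch only, not BrowserStack's implementation; the `HealingLocator` class, the selector strings, and the dictionary stand-in for a browser driver are all invented for the example.

```python
# Illustrative sketch of a self-healing locator strategy, NOT BrowserStack's
# actual implementation. A healing locator keeps several candidate selectors
# for the same element; when the primary one stops matching (e.g. after a UI
# change), it falls back to the next candidate and promotes it for future runs.

class HealingLocator:
    def __init__(self, selectors):
        # Candidate selectors, ordered from most to least preferred.
        self.selectors = list(selectors)

    def resolve(self, dom):
        """Return (element, selector_used); reorder candidates on fallback."""
        for i, sel in enumerate(self.selectors):
            element = dom.get(sel)  # stand-in for a real driver lookup
            if element is not None:
                if i > 0:
                    # "Heal": promote the selector that worked so the next
                    # run tries it first.
                    self.selectors.insert(0, self.selectors.pop(i))
                return element, sel
        raise LookupError("no candidate selector matched the page")


# Simulated page before and after a UI change that replaced the button's id
# with a data attribute.
page_v1 = {"#submit-btn": "<button id='submit-btn'>"}
page_v2 = {"[data-test=submit]": "<button data-test='submit'>"}

locator = HealingLocator(["#submit-btn", "[data-test=submit]"])
print(locator.resolve(page_v1)[1])  # -> #submit-btn
print(locator.resolve(page_v2)[1])  # -> [data-test=submit] (healed)
```

In a real test suite the fallback candidates would come from recorded execution data (text content, attributes, DOM position), which is the kind of context the article says BrowserStack's unified data store provides.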

Related Articles

MCP servers: Lure of sharing your data with AI, and a likely security nightmare

Hindustan Times

an hour ago


After generative AI, large language models, multi-modal intelligence, artificial general intelligence, and agentic AI, the artificial intelligence (AI) space is beginning to write another chapter. The phraseology we must wrap our heads around, and you'll increasingly hear about it, is MCP, or Model Context Protocol. It is supposed to solve an integration bottleneck by allowing AI systems to interact with external data sources and tools. But is this insulated against security risks while handling personal data?

(Clockwise from left: Canva's deep research connector in ChatGPT, Microsoft's illustration of how MCP servers work, and the 11ai voice assistant. Official images)

It may have gone under the radar, but AI company Anthropic first mooted the idea of a singular connection language between AI assistants and the other apps and systems users access late last year, dubbing it the 'USB-C for AI'. Claude 3.5 Sonnet, the company says, is adept at building MCP implementations that connect AI with the datasets a user may want to tap.

Indian fintech Zerodha launched an MCP integration with Anthropic's Claude. Among the things it can do: curate portfolio insights, plan trades, backtest investment strategies, and generate personal finance dashboards. For users who aren't proficient with the workings of the stock market, these insights may prove useful. 'MCPs are a new way for AI systems to interact with real-world services like trading accounts,' says Nithin Kamath, Founder and CEO of Zerodha, pointing out that all the functionality is free to access.

Globally, companies are rushing to build MCP integrations, and there's a core rationale for this sudden momentum. 'AI agents and assistants have become indispensable creative partners, yet current workflows require users to manually add context or references, creating complexity,' explains Anwar Haneef, GM and Head of Ecosystem at Canva.
11Labs, which has built the 11ai personal voice assistant, has bolted on MCP connections with platforms including Perplexity and Slack. Autonomous coding agent Cline, too, can combine MCP servers from Perplexity and others to create research workflows.

Amazon Web Services (AWS), in a technical document, explains that MCP is an open standard that creates a universal language for AI systems to communicate with external data sources, tools, and services. Conceptually, MCP functions as a universal translator, enabling seamless dialogue between language models and diverse systems, the company says.

For users, this may open up a scenario where AI tools can connect with different platforms, enabling a single-window workflow instead of manually copying data between applications or switching between multiple tools to complete tasks.

Take, for example, Canva, which became the first company to launch a deep research connector with OpenAI's ChatGPT, giving users access to designs and content created in Canva from within their ChatGPT conversations. This includes Canva Docs and presentations. The advantage? Summarising reports or documents, asking AI to analyse data, and a more contextual conversation; AI will be able to use these tools to create content depending on what a user asks. 'This is a major step in our vision to make the complex simple and build an all-in-one AI workflow that's secure and accessible to all,' adds Haneef.

OpenAI, which announced MCP support earlier, says popular remote MCP servers include Cloudflare, HubSpot, Intercom, PayPal, Plaid, Shopify, Stripe, and Twilio, encompassing various consumer- and enterprise-focused domains. Microsoft has made substantial investments in MCP infrastructure, integrating the protocol with Azure OpenAI Services to allow GPT models to interact with external services and fetch live data.
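Concretely, the MCP specification frames this "universal language" on top of JSON-RPC 2.0: a client sends requests such as `tools/list` (what can this server do?) and `tools/call` (do it), and the server replies with a matching `id`. The sketch below shows only that message shape; the `get_quote` tool name and its `symbol` argument are hypothetical, real servers advertise their own tools.

```python
import json

# Minimal sketch of the JSON-RPC 2.0 message shape that MCP is specified on.
# The "get_quote" tool and its arguments are invented for this example.

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request as an MCP client would."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Ask the server which tools it exposes.
list_tools = make_request(1, "tools/list")

# 2. Invoke one of the advertised tools with arguments.
call_tool = make_request(2, "tools/call", {
    "name": "get_quote",
    "arguments": {"symbol": "INFY"},
})

print(json.dumps(call_tool, indent=2))
```

The transport underneath (stdio for local servers, HTTP for remote ones) varies, but every integration mentioned in the article, from Zerodha to Canva, ultimately exchanges requests of this form.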
The company has released multiple MCP servers. Anthropic, though an early mover, has had to change its approach to offering MCP to developers. The result, released a few days ago, is the new Desktop Extensions format, which simplifies MCP installations. 'We kept hearing the same feedback: installation was too complex. Users needed developer tools, had to manually edit configuration files, and often got stuck on dependency issues,' the company says in a statement. Developers will still need help with integration, and AWS has released its open-source AWS Serverless MCP Server, a tool that combines AI assistance with streamlined development to help developers build modern applications.

Uncharted territory?

Risks, particularly around how a user's data is shared between two distinct digital entities, are something tech companies must remain cognisant of. As Kailash Nadh, Zerodha's Chief Technology Officer, explains: 'Strictly from a user perspective, it feels liberating to be able to access services outside of their walled gardens and bloated UIs riddled with dark patterns. It moves a considerable amount of control from service providers to users, but at the same time, it concentrates decision-making and mediation in the hands of AI blackboxes.' He is yet to find an answer to what happens in case of errors and failures with real-world implications, how accountability would be traced, and the inevitable regulatory questions. 'Whether the long-term implications of MCP's viral, cross-cutting spread will be net positive or not, is unclear to me,' he adds.

AI security researcher Simon Willison is worried about users going overboard in 'mixing and matching MCP servers'. Particularly concerning is an attack method called prompt injection. 'Any time you combine access to private data, exposure to untrusted content and the ability to externally communicate, an attacker can trick the system into stealing your data,' he explains in a Mastodon post.
He points to the core of this approach, labelling it a 'lethal trifecta': access to private data, exposure to untrusted content, and an ability to communicate externally.

'Be careful with which custom MCP servers you add to your ChatGPT workspace. Currently, we only support deep research with custom MCP servers in ChatGPT, meaning the only tools intended to be available within the remote MCP servers are search and document retrieval. However, risks still apply even with this narrow scope,' OpenAI warns developers in a technical note. Microsoft, too, has noted specific risks around misconfigured authorisation logic in MCP servers leading to sensitive data exposure, and authentication tokens being stolen and then used to impersonate users and access resources inappropriately.
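The 'lethal trifecta' reduces to a simple policy check: a combination of MCP servers only enables data exfiltration via prompt injection when all three capabilities are present at once. The sketch below illustrates that idea; the capability names and example server sets are invented, real MCP servers describe tools, not risk flags.

```python
# Illustration of the "lethal trifecta" heuristic: an AI-agent setup is at
# risk of prompt-injection data theft only when it combines ALL THREE of
# these capabilities. Capability names are invented for this example.

TRIFECTA = {
    "private_data",       # e.g. reads files, mail, or a trading account
    "untrusted_content",  # e.g. ingests web pages or inbound messages
    "external_comms",     # e.g. can send requests or messages outward
}

def lethal_trifecta(capabilities):
    """True if the combined tool set covers all three risky capabilities."""
    return TRIFECTA <= set(capabilities)

# Mixing servers merges their capabilities, which is how two individually
# tolerable servers become a dangerous combination.
browser = {"untrusted_content"}
mailbox = {"private_data", "external_comms"}

print(lethal_trifecta(browser))            # False: no private data, no comms
print(lethal_trifecta(browser | mailbox))  # True: all three now present
```

This is why Willison's warning targets the mixing and matching itself: each server may be safe in isolation while the union of their capabilities is not.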

Explained: AI & copyright law

Indian Express

3 hours ago


In two key copyright cases last week, US courts ruled in favour of tech companies developing artificial intelligence (AI) models. While the two judgments arrived at their conclusions differently, they are the first to address a central question around generative AI models: are these built on stolen creative work?

At a very basic level, AI models such as ChatGPT and Gemini identify patterns from massive amounts of data. Their ability to generate passages, scenes, videos, and songs in response to prompts depends on the quality of the data they have been trained on. This training data has thus far come from a wide range of sources, from books and articles to images, sounds, and other material available on the Internet.

There are at the moment at least 21 ongoing lawsuits in the US, filed by writers, music labels, and news agencies, among others, against tech companies for training AI models on copyrighted work. This, the petitioners have argued, amounts to 'theft'. In their defence, tech companies say they are using the data to create 'transformative' AI models, which falls within the ambit of 'fair use', a concept in law that permits use of copyrighted material in limited capacities for larger public interests (for instance, quoting a paragraph from a book in a review). Here's what happened in the two cases, and why the judgments matter.

In August 2024, journalist-writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class action complaint (a case filed on behalf of a large group of people who were, or could be, similarly harmed) against Anthropic, the company behind the Claude family of Large Language Models (LLMs). The petitioners argued that Anthropic downloaded pirated versions of their works, made copies of them, and 'fed these pirated copies into its models'.
They said that Anthropic has 'not compensated the authors' and has 'compromised their ability to make a living as the LLMs allow anyone to generate — automatically and freely (or very cheaply) — texts that writers would otherwise be paid to create and sell'. Anthropic downloaded and used Books3 (an online shadow library of pirated books, with about seven million copies) to train its models. That said, it also spent millions of dollars to purchase millions of printed books and scan them digitally to create a general 'research library' or 'generalised data area'.

Judge William Alsup of the District Court for the Northern District of California ruled on June 23 that Anthropic's use of copyrighted data was 'fair use', centring his arguments on the 'transformative' potential of AI. Alsup wrote: 'Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.'

Thirteen published authors, including comedian Sarah Silverman and Ta-Nehisi Coates of Black Panther fame, filed a class action suit against Meta, arguing they were 'entitled to statutory damages, actual damages, restitution of profits, and other remedies provided by law'. The thrust of their reasoning was similar to what the petitioners in the Anthropic case had argued: Meta's Llama LLMs 'copied' massive amounts of text, with their responses being derived from a training dataset comprising the authors' work. Meta too trained its models on data from Books3, as well as on two other shadow libraries, Anna's Archive and Libgen. However, Meta argued in court that it had 'post-trained' its models to prevent them from 'memorising' and 'outputting certain text from their training data, including copyrighted material'.
Calling these efforts 'mitigations', Meta said it 'could get no model to generate more than 50 words and punctuation marks…' from the books of the authors that had sued it.

In a ruling given on June 25, Judge Vince Chhabria of the Northern District of California noted that the plaintiffs were unable to prove that Llama's outputs diluted their markets. Explaining market dilution in this context, he cited the example of biographies: if an LLM were to use copyrighted biographies to train itself, it could, in theory, generate an endless number of biographies, which would severely harm the market for biographies. But this does not seem to be the case thus far. However, while Chhabria agreed with Alsup that AI is groundbreaking technology, he also said that tech companies that have minted billions of dollars in the AI boom should figure out a way to compensate copyright holders.

Significance of rulings

These judgments are a win for Anthropic and Meta. That said, both companies are not entirely scot-free: they still face questions regarding the legality of downloading content from pirated databases. Anthropic also faces another suit from music publishers who say Claude was trained on their copyrighted lyrics. And there are many more such cases in the pipeline.

Twelve separate copyright lawsuits filed by authors, newspapers, and other publishers, including one high-profile lawsuit filed by The New York Times, against OpenAI and Microsoft have now been clubbed into a single case. OpenAI is also being separately sued by publishing giant Ziff Davis. A group of visual artists is suing the makers of the image-generating tools Stability AI, Runway AI, DeviantArt, and Midjourney for training those tools on the artists' work. Stability AI is also being sued by Getty Images, which says the company violated its copyright by taking more than 12 million of its photographs. In 2024, news agency ANI filed a case against OpenAI for unlawfully using Indian copyrighted material to train its AI models.
The Digital News Publishers Association (DNPA), along with some of its members, including The Indian Express, Hindustan Times, and NDTV, later joined the proceedings. Going forward, this is likely to be a major issue in India too.

Thus, while significant, the judgments last week do not settle questions surrounding AI and copyright; far from it. And as AI models keep getting better and generate more and more content, there is also the larger question at hand: where does AI leave creators, their livelihoods, and, more importantly, creativity itself?

AI, drones to check illegal mining in UP

Time of India

9 hours ago


Lucknow: To curb illegal mining and mineral transportation, the Yogi govt has deployed advanced technologies, including artificial intelligence (AI), drones, and satellite-based monitoring systems. Demonstrating an unprecedented level of vigilance, the state has blacklisted over 21,477 vehicles found involved in unlawful transportation activities.

As part of this crackdown, 57 AI- and IoT-enabled check gates were established across the state to monitor vehicles engaged in mining operations. These automated checkpoints, set up with the support of the transport department, utilise weigh-in-motion (WIM) technology to detect overloaded vehicles.

The directorate of geology and mining is also using advanced satellite imagery and mapping tools such as Google Earth, ArcGIS and LISS-IV data to detect illegal mining sites and identify untapped mineral zones. The department's remote sensing laboratory (PGRS lab) is preparing geological maps and monitoring approved mining leases.

The use of drone technology has made it possible to measure the length, width and depth of mining areas. Volumetric analysis through drones helps accurately estimate the amount of mining done, and action is taken based on these findings. Drones are also being used to analyse the volume of stored minerals and to mark out mineable areas for proper lease management. Regular inspections and tech-based monitoring have improved transparency and curbed the activities of mining mafias.
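The volumetric analysis described above typically works by comparing elevation surfaces: a drone survey yields an elevation grid of the pit, and subtracting it from a pre-mining baseline, cell by cell, estimates how much material was removed. The sketch below illustrates that arithmetic only; it is not the directorate's actual pipeline, and the grids, resolution, and function name are invented for the example.

```python
# Illustrative sketch of drone-based volumetric estimation, NOT the UP
# mining directorate's actual workflow. Two elevation grids (metres) are
# compared cell by cell; positive drops in elevation are summed and
# multiplied by the grid cell's ground area to estimate excavated volume.

def excavated_volume(baseline, surveyed, cell_area):
    """Estimate excavated volume (m^3) from elevation grids and cell area (m^2)."""
    total = 0.0
    for base_row, now_row in zip(baseline, surveyed):
        for base, now in zip(base_row, now_row):
            depth = base - now  # positive where material was removed
            if depth > 0:
                total += depth * cell_area
    return total

# Hypothetical 3x3 survey at 5 m resolution (each cell covers 25 m^2).
before = [[100.0, 100.0, 100.0],
          [100.0, 100.0, 100.0],
          [100.0, 100.0, 100.0]]
after_ = [[100.0,  98.0, 100.0],
          [ 98.0,  96.0,  98.0],
          [100.0,  98.0, 100.0]]

print(excavated_volume(before, after_, 25.0))  # -> 300.0 cubic metres
```

Comparing this estimate against the volume a leaseholder is permitted to extract is what lets inspectors flag over-mining from survey data alone.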
