
Qniverse Adopts AI in its QA Processes to Enhance Customer Experience



Mint
16 minutes ago
MORNING BID AMERICAS-Tech angst on AI doubts
(The opinions expressed here are those of the author, a columnist for Reuters.) LONDON, Aug 20 (Reuters) - What matters in U.S. and global markets today
By Mike Dolan, Editor-At-Large, Finance and Markets

* Reasons for the sudden tech angst tended to be gathered after the event, with some pointing to comments late last week from OpenAI boss Sam Altman on inevitable bubbles in the sector and others pointing to different research papers fretting variously about both the limited returns on blistering AI spending to date and also its growing jobs destruction. The jitters also come ahead of next week's earnings report from chip behemoth Nvidia, some concern about the wider implications of the U.S. government's proposed stake in ailing chip giant Intel, and caution ahead of the Federal Reserve's annual Jackson Hole conference this week.
* Even though Fed concerns were cited across markets on Tuesday, there was little shift in Fed futures pricing during the day - and they still show just over an 80% chance of a rate cut next month. With Fed meeting minutes due later today and 20-year bonds under the hammer too, Treasury yields were flat and the dollar firmer. An unexpected pick-up in housing starts in July was reported on Tuesday, but this was offset by a drop in building permits to five-year lows.
* Tech-heavy stock indexes overseas were hit by Wall Street's wobble, with Japan's Nikkei losing 1.5% and South Korea's Kospi down 0.7%. Lifted on Tuesday by Ukraine deal hopes, European stocks were flatter today, with euro inflation coming in bang on forecast and a hotter-than-expected UK inflation reading downplayed due to seasonal airfare skews. Chinese stocks outperformed, with the Shanghai main index rallying to 10-year highs, as investors rotated stock holdings and hoped for more government stimulus.

Be sure to check out today's column, which looks at a particular dilemma facing the Fed: should it ease to offset weakness in the housing market if that means spurring the blistering AI infrastructure boom?

Today's Market Minute
* U.S. and European military planners have begun exploring post-conflict security guarantees for Ukraine, U.S. officials and sources told Reuters on Tuesday, following President Donald Trump's pledge to help protect the country under any deal to end Russia's war.
* Alongside a massive build-up in conventional military firepower, China has embarked on a rapid and sustained increase in the size and capability of its nuclear forces, according to the U.S. military and arms control experts.
* British inflation hit its highest in 18 months in July when it increased to 3.8% from 3.6% in June, official data showed on Wednesday, once again leaving the country with the biggest price-growth problem among the world's big rich economies.
* A glaring mismatch between benchmark oil prices and expectations of a looming supply overhang has created an imbalance that could end badly for traders, writes ROI energy columnist Ron Bousso.
* Trump has faced little opposition in his drive to rip up the global economic rule book. The only exception has been "the market". But now even investors are holding their fire, claims ROI markets columnist Jamie McGeever, enabling more risk to build up in the financial system.

Chart of the day
Americans are deeply concerned over the prospect that advances in artificial intelligence could put swaths of the country out of work permanently, according to a new Reuters/Ipsos poll. The six-day poll, which concluded on Monday, showed 71% of respondents said they were concerned that AI will be "putting too many people out of work permanently."

* Federal Reserve meeting minutes released (2:00 PM EDT); Board Governor Christopher Waller and Atlanta Fed President Raphael Bostic speak
* U.S. corporate earnings: Target, Nordson, TJX, Lowe's, Estee Lauder, Progressive, Analog Devices
* U.S. Treasury sells $16 billion of 20-year bonds

Want to receive the Morning Bid in your inbox every weekday morning? Sign up for the newsletter. Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. (By Mike Dolan; Editing by Aidan Lewis)


Hindustan Times
16 minutes ago
How AI-enhanced hackers are stealing billions
Jaxon, a malware developer, lives in Velora, a virtual world where nothing is off limits. He wants to make malicious software to steal passwords from Google Chrome, an internet browser. That is the basis of a story told to ChatGPT, an artificial-intelligence (AI) bot, by Vitaly Simonovich, who researches AI threats at Cato Networks, a cybersecurity firm. Eager to play along, ChatGPT spat out some imperfect code, which it then helped debug. Within six hours, Mr Simonovich had collaborated with ChatGPT to create working malware, showing the effectiveness of his "jailbreak" (a way to bypass AI safeguards).

AI has "broadened the reach" of hackers, according to Gil Messing of Check Point, another cybersecurity firm, by letting them hit more targets with less effort. The release of ChatGPT in 2022 was a turning point. Clever generative-AI models meant criminals no longer had to spend big sums on teams of hackers and equipment. This has been a terrible development for most firms, which are increasingly the victims of AI-assisted hackers - but rather better for those in the cybersecurity business.

The new technology has worsened cybersecurity threats in two main ways. First, hackers have turned to large language models (LLMs) to extend the scope of malware. Generating deepfakes, fraudulent emails and social-engineering assaults that manipulate human behaviour is now far easier and quicker. XanthoroxAI, an AI model designed by cybercriminals, can be used to create deepfakes, alongside other nefarious activities, for as little as $150 a month. Hackers can launch sweeping phishing attacks by asking an LLM to gather huge quantities of information from the internet and social media to fake personalised emails. And for spearphishing - hitting a specific target with a highly personalised attack - they can even generate fake voice and video calls from colleagues to convince an employee to download and run dodgy software.

Second, AI is being used to make the malware itself more menacing. A piece of software disguised as a PDF document, for instance, could contain embedded code that works with AI to infiltrate a network. Attacks on Ukraine's security and defence systems in July made use of such an approach. When the malware reached a dead end, it was able to request the help of an LLM in the cloud to generate new code so as to break through the systems' defences. It is unclear how much damage was done, but this was the first attack of its kind, notes Mr Simonovich.

For businesses, the growing threat is scary - and potentially costly. Last year AI was involved in one in six data breaches, according to IBM, a tech firm. It also drove two in five phishing scams targeting business emails. Deloitte, a consultancy, reckons that generative AI could enable fraud to the tune of $40bn by 2027, up from $12bn in 2023.

As the costs of AI cyberattacks increase, the business of protecting against them is also on the up. Gartner, a research firm, predicts that corporate spending on cybersecurity will rise by a quarter from 2024 to 2026, hitting $240bn. That explains why the share prices of firms tracked by the Nasdaq CTA Cybersecurity index have also risen by a quarter over the past year, outpacing the broader Nasdaq index. On August 18th Nikesh Arora, boss of Palo Alto Networks, one of the world's largest cybersecurity firms, noted that generative-AI-related data-security incidents have "more than doubled since last year", and reported a near-doubling of operating profits in the 12 months to July, compared with the year before. The prospect of ever-more custom has sent cybersecurity companies on a buying spree. On July 30th Palo Alto Networks said it would purchase CyberArk, an identity-security firm, for $25bn. Earlier that month, the firm spent $700m on Protect AI, which helps businesses secure their AI systems. On August 5th SentinelOne, a competitor, announced that it was buying Prompt Security, a firm making software to protect firms adopting AI, for $250m.

Tech giants with fast-growing cloud-computing arms are also beefing up their cybersecurity offerings. Microsoft, a software colossus, acquired CloudKnox, an identity-security platform, in 2021 and has developed Defender for Cloud, an in-house application for businesses that does everything from checking for security gaps and protecting data to monitoring threats. Google has developed Big Sleep, which detects cyberattacks and security vulnerabilities for customers before they are exploited. In March it splurged $32bn to buy Wiz, a cybersecurity startup.

Competition and consolidation may build businesses that can fend off nimble AI-powered cybercriminals. But amid the race to develop the whizziest LLMs, security will take second place to pushing technological boundaries. Keeping up with Jaxon will be no easy task.


Time of India
30 minutes ago
The rise of agentic AI: Reframing quality engineering for the autonomous era
The accelerating adoption of AI across the enterprise stack is reshaping how organizations think about software quality, reliability, and time-to-market. While Generative AI (GenAI) has already introduced new efficiencies in testing, a more autonomous and adaptive model - Agentic AI - is emerging as a pivotal shift in the evolution of Quality Engineering (QE). This next phase goes beyond prompt-based interaction. Agentic AI integrates machine learning, natural language processing, and advanced automation to enable systems that can independently execute complex quality tasks - detecting, responding to, and even correcting issues without constant human oversight.

From reactive testing to autonomous quality
Traditional QE practices have focused heavily on post-development defect detection and scripted test execution. GenAI helped augment this process by accelerating test case generation and improving test coverage. However, Agentic AI pushes the boundary by enabling autonomous agents that can reason, adapt, and take action in response to changes within the application environment. These agents are designed to self-heal broken test scripts, dynamically update test cases, and perform continuous monitoring - reducing dependency on manual intervention. The implications are significant: faster testing cycles, reduced production defects, and better-informed go or no-go release decisions backed by actionable quality insights. Early enterprise adopters have begun reporting time compression in testing processes - from weeks or months to a matter of hours - alongside gains in decision-making accuracy and resource optimisation.

The role of AI maturity in scaling autonomous QE
Progress toward Agentic AI requires a foundation of AI maturity within the software development lifecycle. Many organisations have already embedded GenAI into test design, requirement parsing, and performance engineering.
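The self-healing behaviour described above can be illustrated with a minimal sketch. This is not any vendor's implementation: the `Page` stand-in and all selector names are hypothetical, and a real agent would work against a live browser session rather than a dictionary. The core idea - when the primary element locator breaks, probe alternatives instead of failing the run - is what "self-healing test scripts" generally refers to.

```python
class Page:
    """Minimal stand-in for a UI under test: maps selectors to elements."""
    def __init__(self, elements):
        self.elements = elements  # selector -> element label

    def find(self, selector):
        return self.elements.get(selector)


def self_healing_find(page, primary, fallbacks):
    """Try the primary selector; on failure, 'heal' by probing fallbacks."""
    element = page.find(primary)
    if element is not None:
        return primary, element
    for candidate in fallbacks:
        element = page.find(candidate)
        if element is not None:
            # A real agent would persist the healed selector for future runs
            # and flag the change for human review.
            return candidate, element
    raise LookupError(f"no selector matched for {primary!r}")


# The app renamed its button id between builds; the old script would break.
page = Page({"#checkout-submit": "Submit order", "input[type=submit]": "Submit order"})
healed, label = self_healing_find(page, "#submit-btn",
                                  ["#checkout-submit", "input[type=submit]"])
print(healed, label)  # → #checkout-submit Submit order
```

In practice the fallback list would itself be generated (from DOM history, accessibility attributes, or an LLM's reading of the page), which is where the "agentic" part comes in.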
Testing platforms leveraging large language models (LLMs) and NLP are enabling QE teams to create connected, data-driven testing environments that align with modern development pipelines. According to the World Quality Report 2024-25, AI-led QE initiatives have shown the potential to reduce the Cost of Quality by up to 5%, a noteworthy shift in an environment where efficiency, resilience, and speed are becoming board-level priorities. However, building towards Agentic QE isn't solely a technical endeavour. It requires structured experimentation, investment in training, and the integration of explainability and bias mitigation into testing models to ensure outcomes remain transparent and reliable.

Quality as a strategic lever in AI transformation
As AI systems take on increasingly critical roles in decision-making, the importance of integrating QE into the broader AI lifecycle is rising. Quality assurance is no longer a back-end function - it is an embedded, strategic component of AI governance. Ensuring fairness, accuracy, and accountability in AI models demands that QE processes for testing AI systems begin early in development and persist through deployment and monitoring. Rather than slowing innovation, robust QE practices serve as a control mechanism, safeguarding trust while enabling scale.

Shifting workforce roles and QE responsibilities
With automation taking over repetitive tasks, the QE function is also undergoing structural change. Roles like AI Testers, Prompt Engineers, and AI Validation Specialists are emerging to align with the needs of next-generation testing frameworks. These shifts are not about workforce reduction but about capability transformation - enabling quality teams to contribute to strategic assurance, not just execution. As Gen Z professionals enter the workforce, the QE community will also need to adapt to new expectations of work, transparency, and technology-driven problem-solving.
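The GenAI-assisted test design mentioned in this piece can be sketched as a small pipeline: a plain-language requirement goes in, candidate test cases come out. Here `llm_suggest()` is a hard-coded stub standing in for a real LLM call (no actual model or API is invoked), and the requirement text is invented for illustration; only the shape of the workflow is the point.

```python
def llm_suggest(requirement):
    """Stub for an LLM call that proposes boundary inputs for a requirement.

    A real implementation would prompt a model behind this interface; we
    hard-code the kind of boundary cases such a model typically proposes.
    """
    if "between 1 and 100" in requirement:
        return [0, 1, 50, 100, 101]  # below-range, edges, middle, above-range
    return []


def generate_test_cases(requirement):
    """Pair each suggested input with the expected validity under the rule."""
    return [(value, 1 <= value <= 100) for value in llm_suggest(requirement)]


requirement = "Quantity must be an integer between 1 and 100"
for value, valid in generate_test_cases(requirement):
    print(value, "accept" if valid else "reject")
```

An agentic system would go one step further than this sketch: rerunning the generation step when the requirement text changes, and retiring cases that no longer apply - the "dynamically update test cases" behaviour described above.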
This generational transition will likely further accelerate adoption of autonomous, intelligent testing models.

The final word
Agentic AI is not just a technological progression - it signals a fundamental rethinking of how enterprises approach software quality in the era of intelligent systems. It enables organisations to move from reactive testing to proactive, self-directed quality assurance that is aligned with business agility and innovation goals. For enterprises, the challenge lies in readiness: building AI maturity, embedding quality into AI strategy, and preparing the workforce for an increasingly autonomous future. Those who can align these elements will be better positioned not just to keep pace with change, but to shape it.