
Organisations ramp up AI tool blocks to counter shadow AI risks
DNSFilter has reported that it blocked over 60 million generative AI queries during March, representing approximately 12% of all such queries processed by its DNS security platform that month.
Notion accounted for the vast majority of these blocked requests at 93%, exceeding the combined total of blocked Microsoft Copilot, SwishApps, Quillbot and OpenAI queries.
According to the company's analysis, the average monthly number of generative AI-related queries processed has been over 330 million since January 2024, indicating growing usage and interest in these tools within professional environments.
Alongside increasing usage, organisations are developing policies to manage and regulate the adoption of generative AI technologies among their employees. Many have opted to block specific domains by employing DNS filtering, aiming to exert greater control and comply with internal policies designed to reduce the prevalence of shadow AI - the use of AI tools that operate outside the awareness or control of IT and security teams.
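As a rough illustration of how such DNS-level policy enforcement works, the Python sketch below checks each queried name against allow and block lists, walking up the subdomain hierarchy so a rule for a parent domain also covers its subdomains. This is an assumption-laden sketch, not DNSFilter's implementation; the domain lists merely echo tool names from this article and are hypothetical policy data.

```python
# Minimal sketch (illustrative, not DNSFilter's implementation) of the
# per-query decision a DNS filter makes when enforcing an AI-tool policy.

BLOCKED_AI_DOMAINS = {"notion.so", "quillbot.com"}   # hypothetical block list
ALLOWED_AI_DOMAINS = {"copilot.microsoft.com"}       # hypothetical allow list

def policy_decision(qname: str) -> str:
    """Return 'allow' or 'block' for a DNS query name.

    Walks up the label hierarchy so that a rule for 'notion.so'
    also covers 'www.notion.so' and any other subdomain.
    """
    labels = qname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in ALLOWED_AI_DOMAINS:
            return "allow"   # explicitly sanctioned AI tool
        if candidate in BLOCKED_AI_DOMAINS:
            return "block"   # unsanctioned generative AI service
    return "allow"           # default: domain not covered by the AI policy

if __name__ == "__main__":
    for q in ["www.notion.so", "copilot.microsoft.com", "example.com"]:
        print(q, "->", policy_decision(q))
```

Because the check happens at resolution time, the same mechanism serves both security blocking and internal policy enforcement without any endpoint software changes.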
While the presence of malicious and fake generative AI domains such as those impersonating ChatGPT has seen a significant decrease - down 92% from April 2024 to April 2025 - DNSFilter's data shows a notable shift by threat actors towards domains containing "openai" in their names. There has been a 2,000% increase in such malicious sites during the same period, highlighting the evolving threat landscape related to generative AI.
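A hedged sketch of how such lookalike registrations might be flagged: the heuristic below marks a domain as suspicious when it embeds a brand keyword such as "openai" but does not belong to a set of known official domains. The keyword list and official-domain set are assumptions for illustration, not DNSFilter's detection logic.

```python
# Illustrative lookalike-domain heuristic; lists below are assumptions.
BRAND_KEYWORDS = ("openai", "chatgpt")
OFFICIAL_DOMAINS = {"openai.com", "chatgpt.com"}

def is_suspicious(domain: str) -> bool:
    d = domain.lower().rstrip(".")
    # Naive registrable-domain guess (last two labels); a production
    # system would consult the Public Suffix List instead.
    registrable = ".".join(d.split(".")[-2:])
    if registrable in OFFICIAL_DOMAINS:
        return False
    # Strip hyphens so "open-ai" style spellings are also caught.
    flattened = d.replace("-", "")
    return any(k in flattened for k in BRAND_KEYWORDS)

if __name__ == "__main__":
    for d in ["api.openai.com", "openai-login.example.xyz", "open-ai-gpt.xyz"]:
        print(d, "->", is_suspicious(d))
```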
The use of DNS-based filtering allows businesses to manage not just cyber threats but also the internal adoption of AI tools, ensuring that only approved solutions are accessible. This helps mitigate risks associated with unsanctioned adoption of generative AI, particularly when such adoption takes place without oversight from IT or security professionals.
Ken Carnesi, Chief Executive Officer and co-founder of DNSFilter, commented: "Companies know the benefits that generative AI offers, but they also know the potential cybersecurity risks. More organisations are now proactively choosing which tools to block and which to allow. That way, employees can't 'sneak' AI tools into the corporate network or inadvertently use a malicious one. In this way, a DNS filtering solution helps companies enforce policies, block possible threats and enable greater productivity all at the same time."
DNSFilter's data underscores the tension between the drive to leverage generative AI for workplace productivity and the imperative to maintain robust security controls against emerging threats linked to shadow AI and domain-based impersonation.
Related Articles


Techday NZ, 12 hours ago
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis finding that a significant proportion of data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools
The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform
Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks
The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.

Less visible sources of risk
GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs), often without clear labelling as GenAI, are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit was involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure
Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
Preventative measures
Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson said: "The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant; it was prevented from doing so. But had organizations not had browser-based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data."

Harmonic Security advises enterprises to seek visibility into all tool usage, including tools available on free tiers and those with embedded AI, to monitor the types of data being entered into GenAI systems and to enforce context-aware controls at the data level. The recent analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used in the analysis.
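Applying controls at the data level starts with recognising sensitive content in outbound prompts. The sketch below is an illustrative stand-in, not Harmonic Security's actual method: it scans prompt text with a few regular expressions for common credential and PII patterns, and the pattern set is an assumption chosen for demonstration.

```python
# Illustrative prompt scanner (not Harmonic Security's method): flag
# outbound prompt text that matches common sensitive-data patterns.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # naive digit-run check
    "email_address":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

if __name__ == "__main__":
    prompt = "Summarise this: contact jane@example.com, key AKIAABCDEFGHIJKLMNOP"
    print(scan_prompt(prompt))  # -> ['aws_access_key', 'email_address']
```

A real deployment would add many more detectors (source code, M&A terms, customer records) and act on matches by blocking, redacting, or warning before the data leaves the browser.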


Scoop, 16 hours ago
Statement On AI In Universities From Aotearoa Communication & Media Scholars Network
We speak as a network of Aotearoa academics working in the inter-disciplines of Communication and Media Studies across our universities. Among us we have shared expertise in the political, social and economic impacts of commercially distributed and circulated generative artificial intelligence ('AI') in our university workplaces. While there is a tendency in our universities to be resigned to AI as an unstoppable and unquestionable technological force, our aim is to level the playing field to promote open, critical and democratic debate. With this in mind, we make the following points:

For universities…
· AI is not an inevitable technological development which must be incorporated into higher education; rather, it is the result of particular techno-capitalist ventures, a context which needs to be recognised and considered;
· AI, as a corporate product of private companies such as OpenAI, Google, etc., encroaches on the public role of the university and its role as critic and conscience, and marginalises voices which might critique business interests;

For researchers…
· AI impedes rather than supports productive intellectual work because it erodes important critical thinking skills; instead, it devolves human scholarly work and critical engagement with ideas (elements vital to our cultural and social life) to software that produces 'ready-made', formulaic and backward-looking 'results' that do not advance knowledge;
· AI promotes an unethical, reckless approach to research which can promote 'hallucinations' and over-valorise disruption for its own sake rather than support quality research;
· AI normalises industrial-scale theft of intellectual property as our written work is fed into AI datasets largely without citation or compensation;
· AI limits the productivity of academic staff by requiring them to invent new forms of assessment which subvert AI, police students and their use of AI, or assess lengthy 'chat logs', rather than engage with students in activities and assessments that require deep, critical thinking and sharing, questioning and articulating ideas with peers;

For students…
· AI tools create anxiety for students; some are falsely accused of using generative AI when they haven't, or are very stressed that it could happen to them;
· AI tools such as ChatGPT are contributing to mental-health crises and delusions in various ways; promoting the use of generative AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care;
· AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust;

For Aotearoa New Zealand…
· AI clashes with Te Tiriti obligations around data sovereignty and threatens the possibility of data colonialism regarding te reo itself;
· AI is devastating for the environment in terms of energy and water use and the extraction of natural resources needed for the processors that AI requires.
Signed by:
Rosemary Overell, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Olivier Jutel, Lecturer, Media, Film & Communications Programme, The University of Otago
Emma Tennent, Senior Lecturer, Media & Communication, Te Herenga Waka Victoria University of Wellington
Rachel Billington, Lecturer, Media, Film & Communications Programme, The University of Otago
Brett Nicholls, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Yuki Watanabe, Lecturer, Media, Film & Communications Programme, The University of Otago
Sy Taffel, Senior Lecturer, Media Studies Programme, Massey University
Leon Salter, Senior Lecturer, Communications Programme, University of Auckland
Angela Feekery, Senior Lecturer, Communications Programme, Massey University
Ian Huffer, Senior Lecturer, Media Studies Programme, Massey University
Pansy Duncan, Senior Lecturer, Media Studies Programme, Massey University
Kevin Veale, Senior Lecturer, Media Studies Programme, Massey University
Peter A. Thompson, Associate Professor, Media & Communication Programme, Te Herenga Waka Victoria University of Wellington
Nicholas Holm, Associate Professor, Media Studies Programme, Massey University
Sean Phelan, Associate Professor, Massey University
Yuan Gong, Senior Lecturer, Media Studies Programme, Massey University
Chris McMillan, Teaching Fellow, Sociology Programme, University of Auckland
Cherie Lacey, Researcher, Centre for Addiction Research, University of Auckland
Thierry Jutel, Associate Professor, Film, Te Herenga Waka Victoria University of Wellington
Max Soar, Teaching Fellow, Political Communication, Te Herenga Waka Victoria University of Wellington
Lewis Rarm, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Tim Groves, Senior Lecturer, Film, Te Herenga Waka Victoria University of Wellington
Valerie Cooper, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Wayne Hope, Professor, Faculty of Design & Creative Technologies, Auckland University of Technology
Greg Treadwell, Senior Lecturer in Journalism, School of Communication Studies, Auckland University of Technology
Christina Vogels, Senior Lecturer, Critical Media Studies, School of Communication Studies, Auckland University of Technology


Techday NZ, 2 days ago
Rising cyber threats fuel surge in malicious domain activity in Q2
DNSFilter has released its latest quarterly security report, highlighting the ongoing use of new domains and certain island nation domains in malicious online activity. The report, which analysed threat traffic between April and June 2025, identifies a continued trend of bad actors leveraging fresh domains, as well as an increased use of country code Top Level Domains (ccTLDs) from smaller island nations in attempts to evade detection.

Increase in malicious activity
According to the report, DNSFilter processed billions more DNS queries in the second quarter of 2025 compared to the previous quarter. June marked the highest volume of DNS traffic for the period. Almost 4% of this traffic was blocked, the highest proportion recorded by the company to date. While not all blocked queries were confirmed as malicious, the data suggests users are increasingly using DNS filtering to prevent access to both potential cyber threats and content considered time-wasting or inappropriate. The analysis found that malware and phishing attempts continue to rise. Malware accounted for the second most trafficked threat category on the network, indicating a persistent and growing threat environment.

Role of new domains in threat campaigns
Newly registered domains remain a significant challenge for security professionals. The report found that nearly 40% of requests associated with malicious activity targeted new domains. While this figure shows a slight decrease from the previous quarter, such domains still represent the main tactic for threat actors, who seek to exploit the period before these sites are identified and added to block lists. The report stated, "When domains are new, they've not yet had time to appear on block lists, which gives bad actors time for exploitation."

Phishing trends
After a temporary reduction in activity, phishing and deception accounted for 31.6% of malicious traffic observed on DNSFilter's network in the quarter. This translated to over 750 million queries, attributed in part to more sophisticated Phishing-as-a-Service offerings, including Tycoon 2FA. These tools and techniques provide attackers with the means to bypass security controls and target victims with convincing fraudulent schemes.

Island nations' domains under scrutiny
A notable trend identified in the report is the increased use of domains linked to island nations by threat actors. Of the five ccTLDs most likely to be associated with malicious activity, four belonged to small island territories. Domains associated with the Faroe Islands (.fo) topped the list, with 27% of traffic from these domains deemed malicious. Also prominent were domains from Grenada, Mayotte, and Wallis and Futuna. The report noted that "Threat actors adopt new TLDs for use in their campaigns and often choose TLDs and registries that are cheaper or even free in some cases, allowing them to quickly move on from domains and register new ones without cost concerns."

Response from DNSFilter
Ken Carnesi, CEO and Co-founder at DNSFilter, said: "Bad actors are agile, and the volume and variation of threats we saw in Q2 underscore that defenders must move as quickly and flexibly as attackers. Blocking new domains, which continue to drive threat traffic, remains a key defensive approach that can mitigate risk from emerging domains that bad actors are trying to weaponize quickly. We're seeing a structural shift in how modern attacks are launched and sustained and defenders must take notice."
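To make the report's per-TLD analysis concrete, the following self-contained sketch shows one way to compute the share of blocked queries per top-level domain from (domain, blocked) log records. The record format and sample data are assumptions for illustration, not DNSFilter's telemetry.

```python
# Illustrative aggregation: fraction of blocked queries per TLD,
# computed from hypothetical (domain, blocked) log records.
from collections import defaultdict

def block_rate_by_tld(records):
    """Return {tld: blocked_fraction} for a list of (domain, blocked) tuples."""
    totals = defaultdict(int)    # queries seen per TLD
    blocked = defaultdict(int)   # blocked queries per TLD
    for domain, was_blocked in records:
        tld = domain.lower().rstrip(".").rsplit(".", 1)[-1]
        totals[tld] += 1
        if was_blocked:
            blocked[tld] += 1
    return {tld: blocked[tld] / totals[tld] for tld in totals}

if __name__ == "__main__":
    sample = [                        # made-up log records
        ("shop.example.fo", True),
        ("news.example.fo", False),
        ("mail.example.com", False),
        ("cdn.example.com", False),
    ]
    print(block_rate_by_tld(sample))  # -> {'fo': 0.5, 'com': 0.0}
```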