Integral Ad Science earns first AAM ethical AI certification

Techday NZ | 6 days ago
Integral Ad Science has received the first Ethical Artificial Intelligence Certification from the Alliance for Audited Media.
The certification is regarded as a milestone as artificial intelligence becomes more prevalent in the digital advertising sector. The Alliance for Audited Media's framework assesses a company's AI governance, data quality, risk mitigation, bias controls, and oversight processes.
Certification process
The certification is based on the Alliance for Audited Media's Ethical AI Framework. This framework covers areas such as disclosure, human oversight, privacy, bias mitigation, and risk management. The evaluation involved a comprehensive audit of Integral Ad Science's AI governance, including policies, AI risk management procedures, and oversight controls at multiple organisational levels. The auditors also examined the company's product-level methodologies and checked whether effective quality control mechanisms were in place for both the supporting data and the AI models' overall performance.
AI is a central component of Integral Ad Science's approach to digital advertising. The company's AI and machine learning platforms process up to 280 billion interactions each day, integrating AI into products for tasks such as real-time prediction, decision-making, fraud protection, brand safety, and attention measurement. These AI capabilities support solutions such as Total Media Quality, Quality Attention, and Fraud Solutions.
Industry recognition
Integral Ad Science also holds TrustArc's Responsible AI certification and is certified to ISO 42001, the international standard for AI management systems. According to the company, it is one of the few firms globally to hold both certifications.
Kevin Alvero, Chief Compliance Officer at Integral Ad Science, said, "As the first company to receive AAM's certification for ethical AI use, we are paving the way for the responsible use of AI within the advertising industry as a whole. AAM has a long history of providing transparency and assurance to the media and advertising industries, and we are pleased to be recognised as a leader in this area."
The recognition underscores transparency and the responsible implementation of AI in an industry that increasingly relies on automated, data-driven solutions for media measurement and optimisation.
AI in practice
Integral Ad Science's use of AI is built into its long-term strategy, enabling enhanced analytical capabilities for its customers and partners. The company's proprietary digital advertising platform is designed to leverage large-scale data analytics, which supports its offering of actionable media insight for global brands, publishers, and digital platforms.
Richard Murphy, Chief Executive Officer, President, and Managing Director at the Alliance for Audited Media, commented, "We congratulate IAS for becoming the first organisation to achieve AAM's Ethical AI Certification. By certifying to AAM's framework, IAS is demonstrating how AI can be implemented to drive innovation and efficiency while maintaining trust with advertisers and partners. Their commitment to responsible AI practices backed by independent validation sets a new standard for accountability in the industry."
Broader context
The certification comes at a time of growing concern about AI's role in critical sectors such as digital advertising. Businesses across the industry are advancing the adoption of algorithmic solutions and machine learning methods to improve operational efficiency and advertising outcomes. In tandem, regulators and industry bodies are calling for strengthened oversight, transparency, and accountability in the use of such systems.
The Ethical AI Certification from the Alliance for Audited Media is designed to recognise and encourage industry practices that align with responsible AI governance, transparency, and bias mitigation, with the intention of setting a benchmark for ethical standards in media and advertising.
Related Articles

EY & ACCA urge trustworthy AI with robust assessment frameworks

Techday NZ | 4 hours ago

EY and the Association of Chartered Certified Accountants (ACCA) have released a joint policy paper offering practical guidance aimed at strengthening confidence in artificial intelligence (AI) systems through effective assessments. The report, titled "AI Assessments: Enhancing Confidence in AI", examines the expanding field of AI assessments and their role in helping organisations ensure their AI technologies are well governed, compliant, and reliable. The paper is positioned as a resource for business leaders and policymakers amid rapid AI adoption across global industries.
Boosting trust in AI
According to the paper, comprehensive AI assessments address a pressing challenge for organisations: boosting trust in AI deployments. The report outlines how governance, conformity, and performance assessments can help businesses ensure their AI systems perform as intended, meet legal and ethical standards, and align with organisational objectives.
The guidance comes as recent research highlights an ongoing trust gap in AI. The EY Response AI Pulse survey found that 58% of consumers are concerned that companies are not holding themselves accountable for potential negative uses of the technology. This concern has underscored the need for greater transparency and assurance around AI applications.
Marie-Laure Delarue, EY's Global Vice-Chair, Assurance, expressed the significance of the current moment for AI: "AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks. It is hard to overstate the importance of ensuring safe and effective adoption of AI. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI's full potential as a driver of growth and prosperity."
She continued, "As businesses navigate the complexities of AI deployment, they are asking fundamental questions about the meaning and impact of their AI initiatives. This reflects a growing demand for trust services that align with EY's existing capabilities in assessments, readiness evaluations, and compliance."
Types of assessments
The report categorises AI assessments into three main areas: governance assessments, which evaluate the internal governance structures around AI; conformity assessments, which determine compliance with laws, regulations, and standards; and performance assessments, which measure AI systems against specific quality and performance metrics.
The paper provides recommendations for businesses and policymakers alike. It calls for business leaders to consider both mandatory and voluntary AI assessments as part of their corporate governance and risk management frameworks. For policymakers, it advocates clear definitions of assessment purposes, methodologies, and criteria, as well as support for internationally compatible assessment standards and market capacity-building.
Public interest and skills gap
Helen Brand, Chief Executive of ACCA, commented on the wider societal significance of trustworthy AI systems: "As AI scales across the economy, the ability to trust the technology is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business. We look forward to collaborating with policymakers and others in this fascinating and important area."
The ACCA and EY guidance addresses several challenges related to the current robustness and reliability of AI assessments. It notes that well-specified objectives, clear assessment criteria, and professional, objective assessment providers are essential to meaningful scrutiny of AI systems.
Policy landscape
The publication coincides with ongoing changes in the policy environment on AI evaluation. The report references recent developments such as the AI Action Plan released by the Trump administration, which highlighted the importance of rigorous evaluations for defining and measuring AI reliability and performance, particularly in regulated sectors.
As AI technologies continue to proliferate across industries, the report argues that meaningful and standardised assessments could support the broader goal of safe and responsible AI adoption in both the private and public sectors. In outlining a potential way forward, the authors suggest both businesses and governments have roles to play in developing robust assessment frameworks that secure public confidence and deliver on the promise of emerging technologies.

AI-driven DNS threats & malicious adtech surge worldwide

Techday NZ | 4 hours ago

Infoblox has published its 2025 DNS Threat Landscape Report, revealing increases in artificial intelligence-driven threats and widespread malicious adtech activity impacting organisations worldwide.
DNS exploits rising
The report draws on real-time analysis of more than 70 billion daily DNS queries across thousands of customer environments, providing data on how adversaries exploit DNS infrastructure to deceive users, evade detection, and undermine brand trust. Infoblox Threat Intel has identified over 660 unique threat actors and more than 204,000 suspicious domain clusters to date, with 10 new actors highlighted in the past year alone.
The findings detail how malicious actors are registering unprecedented numbers of domains, using automation to enable large-scale campaigns and circumvent traditional cyber defences. In the past 12 months, 100.8 million newly observed domains were identified, with 25.1% classed as malicious or suspicious by researchers. According to Infoblox, the vast majority of these threat-related domains (95%) were unique to a single customer environment, making it harder for the wider industry to detect and stop them.
Malicious adtech and evasive tactics
The analysis highlights the growing influence of malicious adtech, with 82% of customer environments reportedly querying domains associated with blacklisted advertising services. Malicious adtech schemes frequently rely on traffic distribution systems (TDS) to serve harmful content and mask the true nature of destination sites. Nearly 500,000 TDS domains were recorded within Infoblox networks over the year.
Attackers are also harnessing DNS misconfigurations and deploying advanced techniques such as AI-enabled deepfakes and high-speed domain rotation. These tactics allow adversaries to hijack existing domains or impersonate prominent brands for phishing, malware delivery, drive-by downloads, or scams such as fraudulent cryptocurrency investment schemes. TDS enables threats to be redirected or disguised rapidly, hindering detection and response efforts.
"This year's findings highlight the many ways in which threat actors are taking advantage of DNS to operate their campaigns, both in terms of registering large volumes of domain names and also leveraging DNS misconfigurations to hijack existing domains and impersonate major brands. The report exposes the widespread use of traffic distribution systems (TDS) to help disguise these crimes, among other trends security teams must look out for to stay ahead of attackers," said Dr. Renée Burton, head of Infoblox Threat Intel.
Infoblox notes that traditional forensic-based, post-incident detection - also termed a "patient zero" approach - has proven less effective as attackers increase their use of new infrastructure and frequently rotate domains. As threats emerge and evolve at pace, reactive techniques may leave organisations exposed before threats are fully understood or shared across the security industry.
AI, tunnelling and the threat intelligence gap
DNS is also being leveraged for tunnelling, data exfiltration, and command-and-control activities. The report documents daily detections of activity involving tools such as Cobalt Strike, Sliver, and custom-built malware, which typically require machine learning algorithms to identify because of their obfuscation methods.
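The report does not detail Infoblox's detection methods, but a common first-pass heuristic for spotting DNS tunnelling is to flag queries whose leftmost label is unusually long or high in entropy, since encoded payloads look statistically unlike ordinary hostnames. The Python sketch below illustrates only that general idea; the thresholds and example domains are illustrative assumptions, not figures from the report.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnelling(qname: str,
                          entropy_threshold: float = 3.5,   # assumed cut-off
                          length_threshold: int = 50) -> bool:
    """Flag queries whose leftmost label is unusually long or
    high-entropy, a common signature of data encoded into DNS."""
    label = qname.split(".")[0]
    if len(label) < 10:  # very short labels carry too little signal
        return False
    return (len(label) > length_threshold
            or shannon_entropy(label) > entropy_threshold)

# Hypothetical examples: an encoded payload vs. an ordinary hostname.
print(looks_like_tunnelling("mzxw6ytboi4dsmrrgezdgnbv.example.com"))  # True
print(looks_like_tunnelling("www.example.com"))                       # False
```

Real detectors, including the machine learning systems the report alludes to, weigh many more features (query timing, volume per client, record types), but simple statistical outliers like these are usually where investigation starts.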
Infoblox Threat Intel's research suggests that domain clusters - groups of interrelated domains operated by the same actor - are a significant trend. During the past year, security teams uncovered new actors and observed the continued growth of domain sets used for malicious activities.
Proactive security recommended
The report advocates a shift towards preemptive protection and predictive threat intelligence, emphasising the limitations of relying solely on detection after the fact. The data indicates that using Infoblox's protective DNS solution, 82% of threat-related queries were blocked before they could have a harmful impact, suggesting that proactive monitoring and early intervention can help counter adversarial tactics (a simplified sketch of this blocking idea appears below).
Infoblox researchers argue that combining protective solutions with continuous monitoring of emerging threats is essential to providing security teams the necessary resources and intelligence to disrupt malicious campaigns before significant damage occurs. The report brings together research insights from the past twelve months to map out attack patterns and equip organisations with up-to-date knowledge on DNS-based threats, with a particular focus on the evolving role of harmful adtech in the modern threat landscape.
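To make "block before harm" concrete, here is a minimal sketch of a protective lookup that refuses to resolve any domain matching a local blocklist. It stands in for what a protective DNS service does with continuously updated threat-intelligence feeds; the blocklist entries and the function are hypothetical illustrations, not part of Infoblox's product.

```python
import socket

# Hypothetical blocklist; a real protective DNS service would consult
# continuously updated threat-intelligence feeds instead.
BLOCKLIST = {"malicious-tds.example", "fake-brand-login.example"}

def protective_resolve(hostname: str):
    """Resolve a hostname only if neither it nor any parent domain is
    blocklisted, stopping a threat before any connection is made."""
    name = hostname.lower().rstrip(".")
    if any(name == d or name.endswith("." + d) for d in BLOCKLIST):
        print(f"blocked: {hostname}")
        return None
    # Names that pass the check resolve normally via the system resolver.
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

print(protective_resolve("phish.malicious-tds.example"))  # blocked -> None
print(protective_resolve("example.com"))                  # resolves normally
```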

Dire need for AI support in primary, intermediate schools survey shows

RNZ News | 7 hours ago

A NZ Council for Education Research survey of teachers and students found that there was "a dire need" for guidance on best practice for AI in schools. Photo: Unsplash / Taylor Flowe
Primary school children say using AI sometimes feels like cheating, and teachers warn their "Luddite" colleagues are "freaking out" about the technology.
The insights come from an NZ Council for Education Research survey that warns primary and intermediate schools need urgent support for using artificial intelligence in the classroom. The council said its survey of 266 teachers and 147 pupils showed "a dire need" for guidance on best practice.
It found teachers were experimenting with generative AI tools such as ChatGPT for tasks like lesson planning and personalising learning materials to match children's interests and skills. Many of their students were using AI too, though generally at home rather than in the classroom. But the survey also found most primary schools did not have AI policies.
"Teachers often don't have the appropriate training, they are often using the free models that are more prone to error and bias, and there is a dire need for guidance on best practice for using AI in the primary classroom," report author David Coblentz said.
Coblentz said schools needed national guidance, and students needed lessons in critical literacy so they understood the tools they were using and their in-built biases. He said in the meantime schools could immediately improve the quality of AI use, and teacher and student privacy, by avoiding free AI tools and using more reliable models.
The report said most of the teachers who responded to the survey had noted mistakes in AI-generated information. Most believed less than a third of their pupils, or none at all, were using AI for learning, but 66 percent were worried their students might become too reliant on the technology.
Most of the students surveyed, mainly Year 7-8 pupils at four schools, had heard of AI, and fewer than half said they had never used it. Those who did use AI mostly did so outside of school.
"Between one-eighth and one-half of users at each school said they asked AI to answer questions 'for school or fun' (12%-50%). Checking or fixing writing attracted moderate proportions everywhere (29%-45%). Smaller proportions used AI for idea generation on projects or homework (6%-32%) and for gaming assistance (12%-41%). Talking to AI 'like a friend' showed wide variation, from one in eight (12%) at Case A to nearly half (47%) at the all-girls' Case D," the survey report said.
Across the four schools, between 55 and 72 percent agreed "Using AI sometimes feels like cheating", and between 38 and 74 percent agreed "Using AI too much can make it hard for kids to learn on their own". Roughly a quarter said they were better at using AI tools than the grown-ups they knew.
