AI Is Moving Fast. It Needs Smarter Regulation, Like Climate Policy, To Keep It In Check

NDTV | 19-05-2025

Sydney:
Artificial intelligence (AI) might not have been created to enable new forms of sexual violence such as deepfake pornography. But that has been an unfortunate byproduct of the rapidly advancing technology.
This is just one example of AI's many unintended uses.
AI's intended uses are not without their own problems, including serious copyright concerns. But beyond this, there is much experimentation happening with the rapidly advancing technology. Models and code are shared, repurposed and remixed in public online spaces.
These collaborative, loosely networked communities - what we call "underspheres" in our recently published paper in New Media & Society - are where users experiment with AI rather than simply consume it. These spaces are where generative AI is pushed into unpredictable and experimental directions. And they show why a new approach to regulating AI and mitigating its risks is urgently needed. Climate policy offers some useful lessons.
A limited approach
As AI advances, so do concerns about risk. Policymakers have responded quickly. For example, the European Union's AI Act, which came into force in 2024, classifies systems by risk: banning "unacceptable" ones, regulating "high-risk" uses, and requiring transparency for lower-risk tools.
Other governments - including those of the United Kingdom, United States and China - are taking similar directions. However, their regulatory approaches differ in scope, stage of development, and enforcement.
But these efforts share a limitation: they're built around intended use, not the messy, creative and often unintended ways AI is actually being used - especially in fringe spaces.
So, what risks can emerge from creative deviance in AI? And can risk-based frameworks handle technologies that are fluid, remixable and fast-moving?
Experimentation outside of regulation
There are several online spaces where members of the undersphere gather. They include GitHub (a web-based platform for collaborative software development), Hugging Face (a platform that offers ready-to-use machine learning models, datasets, and tools for developers to easily build and launch AI apps) and subreddits (individual communities or forums within the larger Reddit platform).
These environments encourage creative experimentation with generative AI outside regulated frameworks. This experimentation can include instructing models to avoid intended behaviours - or do the opposite. It can also include creating mashups or more powerful variations of generative AI by remixing software code that is made publicly available for anyone to view, use, modify and distribute.
The potential harms of this experimentation are highlighted by the proliferation of deepfake pornography. So too are the limits of the current approach to regulating rapidly advancing technologies such as AI.
Deepfake technology wasn't originally developed to create non-consensual pornographic videos and images. But this is ultimately what happened within subreddit communities, beginning in 2017. Deepfake pornography then quickly spread from this undersphere into the mainstream; a recent analysis of more than 95,000 deepfake videos online found that 98% of them were deepfake pornography.
It was not until 2019 - years after deepfake pornography first emerged - that attempts to regulate it began to appear globally. But these attempts were too rigid to capture the new ways deepfake technology was by then being used to cause harm. What's more, the regulatory efforts were sporadic and inconsistent between jurisdictions. This impeded efforts to protect people - and democracies - from the impacts of deepfakes globally.
This is why we need regulation that can march in step with emerging technologies and act quickly when unintended use prevails.
Embracing uncertainty, complexity and change
One way to look at AI governance is through the prism of climate change. Climate change is also the result of many interconnected systems interacting in ways we can't fully control - and its impacts can only be understood with a degree of uncertainty.
Over the past three decades, climate governance frameworks have evolved to confront this challenge: to manage complex, emerging, and often unpredictable risks. And although this framework has yet to demonstrate its ability to meaningfully reduce greenhouse gas emissions, it has succeeded in sustaining global attention over the years on emerging climate risks and their complex impacts.
At the same time it has provided a forum where responsibilities and potential solutions can be publicly debated.
A similar governance framework should also be adopted to manage the spread of AI. This framework should consider the interconnected risks caused by generative AI tools linking with social media platforms. It should also consider cascading risks, as content and code are reused and adapted. And it should consider systemic risks, such as declining public trust or polarised debate.
Importantly, this framework must also involve diverse voices. Like climate change, generative AI won't affect just one part of society - it will ripple through many. The challenge is how to adapt alongside it.
Applied to AI, climate governance approaches could help prompt early action when unforeseen uses emerge (as in the case of deepfake pornography), before the harm becomes widespread.
Avoiding the pitfalls of climate governance
While climate governance offers a useful model for adaptive, flexible regulation, it also comes with important warnings about pitfalls to avoid.
Climate politics has been mired in loopholes, competing interests and sluggish policymaking. From Australia's shortcomings in implementing its renewable energy strategy, to policy reversals in Scotland and political gridlock in the United States, implementation has often been the proverbial wrench in the gears of environmental law.
But this all-too-familiar climate stalemate carries important lessons for AI governance.
First, we need to find ways to align public oversight with self-regulation and transparency on the part of AI developers and suppliers.
Second, we need to think about generative AI risks at a global scale. International cooperation and coordination are essential.
Finally, we need to accept that AI development and experimentation will persist, and craft regulations that respond to this in order to keep our societies safe.
(Author: Milica Stilinovic, PhD Candidate, School of Media and Communications; Managing Editor, Policy & Internet journal, University of Sydney; Francesco Bailo, Lecturer in Data Analytics in the Social Sciences, University of Sydney, and Jonathon Hutchinson, Chair of Discipline, Media and Communications, University of Sydney)
(Disclaimer Statement: Francesco Bailo has received funding from Meta and from Australia's Department of Defence.
Jonathon Hutchinson and Milica Stilinovic do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.)
