
AI Is Moving Fast. It Needs Smarter Regulation, Like Climate Policy, To Keep It in Check
Artificial intelligence (AI) might not have been created to enable new forms of sexual violence such as deepfake pornography. But that has been an unfortunate byproduct of the rapidly advancing technology.
This is just one example of AI's many unintended uses.
AI's intended uses are not without their own problems, including serious copyright concerns. But beyond this, there is much experimentation happening with the rapidly advancing technology. Models and code are shared, repurposed and remixed in public online spaces.
These collaborative, loosely networked communities - what we call "underspheres" in our recently published paper in New Media & Society - are where users experiment with AI rather than simply consume it. These spaces are where generative AI is pushed into unpredictable and experimental directions. And they show why a new approach to regulating AI and mitigating its risks is urgently needed. Climate policy offers some useful lessons.
A limited approach
As AI advances, so do concerns about risk. Policymakers have responded quickly. For example, the European Union's AI Act, which came into force in 2024, classifies systems by risk: banning "unacceptable" ones, regulating "high-risk" uses, and requiring transparency for lower-risk tools.
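As a purely illustrative sketch - not a legal classification - the Act's tiering logic can be pictured in a few lines of Python. The example systems below are hypothetical stand-ins for categories commonly cited in discussions of the Act:

# Illustrative only: a toy representation of the EU AI Act's risk tiers.
# The tier names follow the Act's broad categories; the example systems
# and their mapping are hypothetical, not a legal determination.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "regulated: strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical examples of how systems might be tiered.
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")

The point of the sketch is the framework's shape: each system is assigned a tier based on its intended purpose, which is precisely the assumption the rest of this article questions.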
Other governments - including those of the United Kingdom, United States and China - are taking similar directions. However, their regulatory approaches differ in scope, stage of development, and enforcement.
But these efforts share a limitation: they're built around intended use, not the messy, creative and often unintended ways AI is actually being used - especially in fringe spaces.
So, what risks can emerge from creative deviance in AI? And can risk-based frameworks handle technologies that are fluid, remixable and fast-moving?
Experimentation outside of regulation
There are several online spaces where members of the undersphere gather. They include GitHub (a web-based platform for collaborative software development), Hugging Face (a platform that offers ready-to-use machine learning models, datasets, and tools for developers to easily build and launch AI apps) and subreddits (individual communities or forums within the larger Reddit platform).
These environments encourage creative experimentation with generative AI outside regulated frameworks. This experimentation can include instructing models to avoid intended behaviours - or do the opposite. It can also include creating mashups or more powerful variations of generative AI by remixing software code that is made publicly available for anyone to view, use, modify and distribute.
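To see how low the barrier to this kind of experimentation is, consider a minimal sketch, assuming the openly available Hugging Face transformers library and a small public model (the model name here is just an example, not one singled out by the article):

# A minimal sketch of how easily a publicly shared model can be obtained
# and run. Assumes the Hugging Face transformers library is installed
# (pip install transformers torch); "gpt2" is simply a small, openly
# available example model.
from transformers import pipeline

# The first call downloads the model weights from the public hub.
generator = pipeline("text-generation", model="gpt2")

# From these few lines, a user is free to fine-tune the model, steer it
# with prompts, or redistribute a modified variant - the kind of remixing
# described above, none of which passes through a regulatory checkpoint.
print(generator("Generative AI regulation should", max_new_tokens=30)[0]["generated_text"])

A handful of lines is enough to go from a public repository to a running model, and the same few lines are the starting point for the mashups and variations described above.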
The potential harms of this experimentation are highlighted by the proliferation of deepfake pornography. So too are the limits of the current approach to regulating rapidly advancing technologies such as AI.
Deepfake technology wasn't originally developed to create non-consensual pornographic videos and images. But this is ultimately what happened within subreddit communities, beginning in 2017. Deepfake pornography then quickly spread from this undersphere into the mainstream: a recent analysis of more than 95,000 deepfake videos online found 98% of them were deepfake pornography.
It was not until 2019 - years after deepfake pornography first emerged - that attempts to regulate it began globally. But these attempts were too rigid to capture the new ways deepfake technology was by then being used to cause harm. What's more, the regulatory efforts were sporadic and inconsistent between states. This impeded efforts to protect people - and democracies - from the impacts of deepfakes globally.
This is why we need regulation that can march in step with emerging technologies and act quickly when unintended use prevails.
Embracing uncertainty, complexity and change
One way to look at AI governance is through the prism of climate change. Climate change is also the result of many interconnected systems interacting in ways we can't fully control - and its impacts can only be understood with a degree of uncertainty.
Over the past three decades, climate governance frameworks have evolved to confront this challenge: to manage complex, emerging, and often unpredictable risks. And although this framework has yet to demonstrate its ability to meaningfully reduce greenhouse gas emissions, it has succeeded in sustaining global attention over the years on emerging climate risks and their complex impacts.
At the same time, it has provided a forum where responsibilities and potential solutions can be publicly debated.
A similar governance framework should also be adopted to manage the spread of AI. This framework should consider the interconnected risks caused by generative AI tools linking with social media platforms. It should also consider cascading risks, as content and code are reused and adapted. And it should consider systemic risks, such as declining public trust or polarised debate.
Importantly, this framework must also involve diverse voices. Like climate change, generative AI won't affect just one part of society - it will ripple through many. And the challenge is how to adapt with it.
Applied to AI, climate governance approaches could encourage rapid action when unforeseen uses emerge (as in the case of deepfake pornography), before the harm becomes widespread.
Avoiding the pitfalls of climate governance
While climate governance offers a useful model for adaptive, flexible regulation, it also offers important warnings about pitfalls that must be avoided.
Climate politics has been mired in loopholes, competing interests and sluggish policymaking. From Australia's shortcomings in implementing its renewable energy strategy, to policy reversals in Scotland and political gridlock in the United States, climate policy implementation has often been the proverbial wrench in the gears of environmental law.
This all-too-familiar climate stalemate holds important lessons for AI governance.
First, we need to find ways to align public oversight with self-regulation and transparency on the part of AI developers and suppliers.
Second, we need to think about generative AI risks at a global scale. International cooperation and coordination are essential.
Finally, we need to accept that AI development and experimentation will persist, and craft regulations that respond to this in order to keep our societies safe.
(Author: Milica Stilinovic, PhD Candidate, School of Media and Communications; Managing Editor, Policy & Internet journal, University of Sydney; Francesco Bailo, Lecturer in Data Analytics in the Social Sciences, University of Sydney, and Jonathon Hutchinson, Chair of Discipline, Media and Communications, University of Sydney)
(Disclaimer Statement: Francesco Bailo has received funding from Meta and from Australia's Department of Defence.
Jonathon Hutchinson and Milica Stilinovic do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.)
