Latest news with #NewMedia&Society


NDTV
22-05-2025
- Health
- NDTV
How Editing Apps Make Perfect Body Image Among Young People: Study
Callaghan: Like many of her peers, Abigail (21) takes a lot of selfies, tweaks them with purpose-made apps, and posts them on social media. But, she says, the selfie-editing apps do more than they were designed for:

You look at that idealised version of yourself and you just want it - you just want it to be real [...] the more you do it, the better you get at it and the more subtle your editing is the easier it is to actually see yourself as that version.

Abigail was one of nearly 80 young people my colleagues and I interviewed as part of research into selfie-editing technologies. The findings, recently published in New Media & Society, are cause for alarm. They show selfie-editing technologies have significant impacts on young people's body image and wellbeing.

Carefully curating an online image

Many young people carefully curate how they appear online. One reason for this is to negotiate the intense pressures of visibility in a digitally networked world. Selfie-editing technologies enable this careful curation.

The most popular selfie-editing apps include Facetune, FaceApp and Meitu. They offer in-phone editing tools ranging from lighting, colour and photo adjustments to "touch-ups" such as removing blemishes. These apps also offer "structural" edits, which mimic cosmetic surgery procedures such as rhinoplasty (more commonly known as a nose job) and facelifts. They also offer filters, including an "ageing" filter, a "gender swap" tool, and "make-up" and hairstyle try-ons.

The range of editing options these apps offer - and their incredible attention to detail and correction of so-called "flaws" - encourages users to forensically analyse their faces and bodies, making a series of micro-changes with the tap of a finger.

A wide range of editing practices

The research team I led included Amy Dobson (Curtin University), Akane Kanai (Monash University), Rosalind Gill (University of London) and Niamh White (Monash University).
We wanted to understand how image-altering technologies were experienced by young people, and whether these tools affected how they viewed themselves.

We conducted in-depth, semi-structured interviews with 33 young people aged 18 to 24. We also ran 13 "selfie-editing" group workshops with 56 young people aged 18 to 24 who take selfies and use editing apps, in Melbourne and Newcastle, Australia.

Most participants (56) identified as either "female" or "cis woman". There were 12 who identified as "non-binary", "genderfluid" or "questioning", and 11 who identified as "male" or "cis man". They identified as from a range of ethnic, racial and cultural backgrounds.

Facetune was the most widely used facial-editing app. Participants also used Snapseed, Meitu, VSCO, Lightroom and the built-in beauty filters that are now standard on newer Apple and Samsung smartphones.

Editing practices varied from those who only irregularly made minor edits such as lighting and cropping, to those who regularly used beauty apps and altered their faces and bodies in forensic detail, mimicking cosmetic surgical interventions. Approximately one third of participants described currently or previously making dramatic or "structural" edits by changing the dimensions of facial features. These edits included reshaping noses, cheeks, head size, shoulders or waist "cinching".

Showcasing your 'best self'

Young people told us that selfie-taking and editing was an important way of showing "who they are" to the world. As one participant told us, it's a way of saying "I'm here, I exist". But they also said the price of being online and posting photos of themselves was being seen alongside a set of images showing "perfect bodies and perfect lives". Participants told us they assume "everyone's photos have been edited".
To keep up with this high standard, they felt they also needed to be adept at editing photos to display their "best self", in line with gendered and racialised beauty ideals. Photo-editing apps and filters were seen as a normal and expected way to achieve this.

However, using these apps was described as a "slippery slope" or a "Pandora's box", where "once you start editing it's hard to stop". Young women in particular described feeling that the "baseline standard to just feel normal" is higher than ever, and that appearance pressures are intensifying.

Many felt image-altering technologies such as beauty filters and editing apps are encouraging them to want to change their appearance "in real life" through non-surgical cosmetic procedures such as fillers and Botox. As one participant, Amber (19), told us:

I feel like a lot of plastic surgeries are now one step further than a filter.

Another participant, Freya (20), described a direct link between editing photos and cosmetic enhancement procedures:

Ever since I started [editing my body in photos], I wanted to change it in real life [...] That's why I decided to start getting lip and cheek filler.

Altering the relationship between technology and the human experience

These findings suggest image-editing technologies, including artificial intelligence (AI) filters and selfie-editing apps, have significant impacts on young people's body image and wellbeing.

The rapid expansion of generative AI in "beauty cam" technologies in the cosmetic and beauty retail industries makes it imperative to study these impacts, as well as how young people experience these new technologies. These cameras are able to visualise "before and after" on a user's face in minute, forensic detail.

Through their potential to alter the relationship between technology and the human experience at the deepest level, these technologies may have devastating impacts on key youth mental health concerns such as body image.


Forbes
21-05-2025
- Health
- Forbes
4 Clear Signs You're Experiencing 'Dating Burnout,' By A Psychologist
A 2024 Forbes Health/OnePoll survey found that 79% of Gen Z and 80% of Millennials report feeling mentally and emotionally exhausted from using dating apps. That exhaustion isn't just imagined. A 2024 longitudinal study published in New Media & Society tracked nearly 500 dating app users over 12 weeks and found that emotional exhaustion and feelings of inefficacy increased the longer people stayed active on these apps. Users who were already experiencing depression, anxiety or loneliness were especially vulnerable to these damaging effects.

The study also found that compulsive dating app use predicted more burnout, even though it made some users feel they were 'trying harder.' Over time, the effort often stopped feeling hopeful and started feeling hollow. Here are four signs you're likely experiencing dating burnout, and how to recover from it without losing hope.

When you're burned out, you may start losing interest in dating. While you still want to find love, you might be too depleted to enjoy the process. Even when a date goes well, you may feel detached or unimpressed.

A qualitative study published this January in SN Social Sciences explored how dating app users become more emotionally desensitized over time. Participants described a sharp drop in excitement, replaced by fatigue, emptiness and a sense of just going through the motions after consistently using these apps. A 27-year-old female participant said, 'I go on dates, but when I am honest, I am tired before getting there, tired of telling the same stories and hearing the same stories.'

The study found that repetitive, non-committal interactions gradually wore down users' emotional engagement, leading to feelings of sadness, self-doubt and disconnection even while actively dating. Another female user mentioned, 'I was looking for fun and to experience something, instead, I feel nothing, and that concerns me.' This type of burnout isn't always obvious at first.
You might not realize how emotionally checked out you've become until you're halfway through another date, already hoping it ends soon.

You delete the app, then download it again. You swipe for a while, close it and return the next day. The routine is familiar, perhaps even instinctive by now, but it often leaves you feeling more disengaged than connected. Researchers of the January study found that many dating app users described this pattern as repetitive and difficult to break, even when it no longer brought enjoyment. A 32-year-old male participant shared, 'All the swiping, payments, unmatching and writing the same repeatedly, it depresses me.' Another added, 'I regret swiping and chatting over a weekend. It feels like a waste of time with zero value.'

This kind of usage can reflect a deeper sense of fatigue. When such behavior continues without a sense of meaningful progress or connection, it may be a sign that what began as intentional effort has shifted into burnout.

When you're experiencing dating burnout, your ability to regulate your emotions may be affected. Messages left on read, slow replies or canceled plans can begin to feel disproportionately significant. Even minor dating setbacks may trigger self-doubt or discouragement.

A 2025 systematic review published in Computers in Human Behavior found that dating app use was associated with increased symptoms of depression, anxiety and lower self-esteem in nearly half of the studies examined. One key driver of this was constant exposure to judgment and perceived rejection. Even low match rates or being ghosted can trigger distress and self-doubt, especially in users who engage frequently.

The review also introduces the idea of 'quantified popularity,' where likes, matches and responses become metrics of self-worth. This dynamic encourages users to monitor their 'performance' and appearance closely, particularly after experiencing rejection.
To add to feelings of being ignored, dating app use can highlight both external validation and rejection, perhaps at a rate we were never meant to witness. Over time, this cycle of constant evaluation and perceived rejection can wear down your emotional resilience, until dating no longer feels like an opportunity but like a test you keep failing.

Burnout doesn't just affect how you feel, but also how you act. You might notice yourself saying things you don't fully mean, tolerating behavior you wouldn't normally accept or trying to impress people who don't align with your values. After a while, the dating process might feel less like showing up as your authentic self and more like shape-shifting into someone else.

In the January 2025 study, several participants described this gradual loss of self. They reported feeling detached from their own personalities, as if they were performing for the sake of being liked. The researchers noted, 'Many do regret these interactions, yet often proceed — either by agreeing to dates against interest and intuition, staying in uncomfortable settings, or engaging in intimacy contrary to their own desires.'

This shift doesn't happen overnight. But when dating starts to feel like a burden you're struggling to let go of, it may be a sign that you're no longer intentionally choosing connection, just trying not to be alone.

Once you recognize the signs of dating burnout, the next step isn't necessarily to quit, but to pause with intention. Here are a few ways to reset and recharge before stepping back in:

Dating is meant to bring connection, not depletion. If it starts feeling like pressure, or like you're performing a version of yourself you can't relate to, it's important to pause. You don't have to earn your rest or prove your resilience by pushing through something that's wearing you down. If your dating experience starts feeling like too much, it's not a failure to step back.
It's an act of self-awareness and self-care. After all, the most important relationship to protect is the one you have with yourself. Are you overusing dating apps? Take this science-backed test to find out: Problematic Tinder Use Scale


NDTV
19-05-2025
- Science
- NDTV
AI Is Moving Fast. It Needs Smarter Regulation, Like Climate Policy, To Keep It In Check
Sydney: Artificial intelligence (AI) might not have been created to enable new forms of sexual violence such as deepfake pornography. But that has been an unfortunate byproduct of the rapidly advancing technology.

This is just one example of AI's many unintended uses. AI's intended uses are not without their own problems, including serious copyright concerns. But beyond this, there is much experimentation happening with the rapidly advancing technology. Models and code are shared, repurposed and remixed in public online spaces.

These collaborative, loosely networked communities - what we call "underspheres" in our recently published paper in New Media & Society - are where users experiment with AI rather than simply consume it. These spaces are where generative AI is pushed in unpredictable and experimental directions. And they show why a new approach to regulating AI and mitigating its risks is urgently needed. Climate policy offers some useful lessons.

A limited approach

As AI advances, so do concerns about risk. Policymakers have responded quickly. For example, the European Union's AI Act, which came into force in 2024, classifies systems by risk: banning "unacceptable" ones, regulating "high-risk" uses, and requiring transparency for lower-risk tools.

Other governments - including those of the United Kingdom, United States and China - are taking similar directions, although their regulatory approaches differ in scope, stage of development and enforcement. But these efforts share a limitation: they're built around intended use, not the messy, creative and often unintended ways AI is actually being used - especially in fringe spaces.

So, what risks can emerge from creative deviance in AI? And can risk-based frameworks handle technologies that are fluid, remixable and fast-moving?

Experimentation outside of regulation

There are several online spaces where members of the undersphere gather.
They include GitHub (a web-based platform for collaborative software development), Hugging Face (a platform offering ready-to-use machine learning models, datasets and tools for developers to build and launch AI apps) and subreddits (individual communities or forums within the larger Reddit platform).

These environments encourage creative experimentation with generative AI outside regulated frameworks. This experimentation can include instructing models to avoid their intended behaviours - or to do the opposite. It can also include creating mashups, or more powerful variations of generative AI, by remixing software code that is made publicly available for anyone to view, use, modify and distribute.

The potential harms of this experimentation are highlighted by the proliferation of deepfake pornography. So too are the limits of the current approach to regulating rapidly advancing technologies such as AI.

Deepfake technology wasn't originally developed to create non-consensual pornographic videos and images. But this is ultimately what happened within subreddit communities, beginning in 2017. Deepfake pornography then quickly spread from this undersphere into the mainstream; a recent analysis of more than 95,000 deepfake videos online found 98% of them were deepfake pornography.

It was not until 2019 - years after deepfake pornography first emerged - that attempts to regulate it began to appear globally. But these attempts were too rigid to capture the new ways deepfake technology was by then being used to cause harm. What's more, the regulatory efforts were sporadic and inconsistent between states. This impeded efforts to protect people - and democracies - from the impacts of deepfakes globally.

This is why we need regulation that can march in step with emerging technologies and act quickly when unintended use prevails.

Embracing uncertainty, complexity and change

A way to look at AI governance is through the prism of climate change.
Climate change is also the result of many interconnected systems interacting in ways we can't fully control - and its impacts can only be understood with a degree of uncertainty. Over the past three decades, climate governance frameworks have evolved to confront this challenge: to manage complex, emerging and often unpredictable risks.

And although this framework has yet to demonstrate its ability to meaningfully reduce greenhouse gas emissions, it has succeeded in sustaining global attention on emerging climate risks and their complex impacts. At the same time, it has provided a forum where responsibilities and potential solutions can be publicly debated.

A similar governance framework should be adopted to manage the spread of AI. It should consider the interconnected risks created as generative AI tools link with social media platforms. It should also consider cascading risks, as content and code are reused and adapted. And it should consider systemic risks, such as declining public trust or polarised debate.

Importantly, this framework must also involve diverse voices. Like climate change, generative AI won't affect just one part of society - it will ripple through many. The challenge is how to adapt with it. Applied to AI, climate governance approaches could help promote preemptive action against unforeseen uses (such as deepfake porn) before the problem becomes widespread.

Avoiding the pitfalls of climate governance

While climate governance offers a useful model for adaptive, flexible regulation, it also carries important warnings. Climate politics has been mired in loopholes, competing interests and sluggish policymaking. From Australia's shortcomings in implementing its renewable energy strategy, to policy reversals in Scotland and political gridlock in the United States, implementation has often been the proverbial wrench in the gears of environmental law.
This all-too-familiar climate stalemate holds important lessons for AI governance.

First, we need to find ways to align public oversight with self-regulation and transparency on the part of AI developers and suppliers. Second, we need to think about generative AI risks at a global scale; international cooperation and coordination are essential. Finally, we need to accept that AI development and experimentation will persist, and craft regulations that respond to this in order to keep our societies safe.

(Author: Milica Stilinovic, PhD Candidate, School of Media and Communications; Managing Editor, Policy & Internet journal, University of Sydney; Francesco Bailo, Lecturer in Data Analytics in the Social Sciences, University of Sydney; and Jonathon Hutchinson, Chair of Discipline, Media and Communications, University of Sydney)

(Disclaimer Statement: Francesco Bailo has received funding from Meta and from Australia's Department of Defence. Jonathon Hutchinson and Milica Stilinovic do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.)