
Latest news with #ShannonVallor

Alive > Automated: The case for living brands in the age of artificial everything

Campaign ME

26-05-2025

  • Entertainment
  • Campaign ME

Alive > Automated: The case for living brands in the age of artificial everything

Albert Einstein once said, 'The intuitive mind is a sacred gift and the rational mind is a faithful servant.' Intuition – that unexplainable gut feeling, the spark that seems to come from nowhere – is central to human creativity. It can't be reasoned or replicated. It's not efficient or logical. But it's often where our best ideas come from.

In a world shaped by automation and algorithms, something is getting lost: the human voice. AI can mimic human creativity, analyse data, and generate content with remarkable speed – but it doesn't dream or feel. It can assist, enhance, even surprise – but it can't create meaning. That remains a deeply human act. This isn't a call to reject AI – it is a valuable tool – but we must stay grounded in what makes us human. To collaborate with AI, without surrendering the wheel. Because the brands that truly endure aren't the most polished; they're the ones that feel alive.

The AI mirror

We don't always know where our creative ideas come from, but we do always know where the ideas of AI come from: us. As philosopher Shannon Vallor writes, 'It is these machines that now tell us the story of our own past, and project our futures. They do so without living even one day of that history, or knowing a single moment of the human condition.'

A resurfaced interview with Studio Ghibli's Hayao Miyazaki went viral after the rise of AI tools that mimic his art. When shown an AI-generated animation of a grotesque figure dragging itself by its head, he recoiled: 'Whoever creates this stuff has no idea what pain is… I feel this is an insult to life itself.' His response raises a critical point: can AI, which has never felt pain, ethically depict emotion? In branding, this matters deeply. Emotional storytelling is a powerful tool for brands – but when that emotion is manufactured, it risks being manipulative, even exploitative. It becomes a performance of a feeling.

Perfectly imperfect

AI produces flawless output – but perfection isn't impressive anymore. It's expected. What truly captivates us now is imperfection: the unique fingerprint that says 'a human was here.' Mark Schaefer, in Audacious: How Humans Win in an AI Marketing World, argues that while AI will reshape business, it's up to humans to 'own crazy' and take creative risks. The human advantage lies in bold storytelling and the unpolished emotional nuance that only we can offer. As Deepti Velury, Global COO of Tag, puts it: 'The core of humanity is having beauty and imperfection together.'

We want real

Algorithm-driven feeds have trapped consumers in a loop. Everything feels relevant, but nothing feels surprising. Now, consumers are craving realness. This is an opportunity for bold, authentic brands. Take Oatly, known for its quirky, anti-corporate tone that feels human and unfiltered. Or Liquid Death, which built an entire brand on satire, poking fun at wellness culture and packaging water like a hardcore energy drink. It's absurd, unexpected – and consumers love it. These brands don't follow trends – they interrupt them. That's how we escape the loop: by creating work that doesn't necessarily align with data, but resonates emotionally. The kind of work that makes someone pause and feel something unexpected.

Let brands live

So where do we go from here? How can brands stay human – and stay relevant? The answer lies with living brands – ones that evolve, and, above all, feel. These aren't just businesses with clever slogans. They listen and reflect the realities of the people they serve.
Much of what makes culture meaningful is intangible – feelings, memories and instincts. AI might recite facts about love or grief, but like Will in Good Will Hunting, it doesn't know them. As Robin Williams' character reflects: 'I'll bet you can't tell me what it smells like in the Sistine Chapel.' The same can be said of AI; it can catalogue every detail, but it will never feel the air inside. That feeling – that ineffable sensory detail – is the thread that connects us to meaning. Living brands stand as a counterforce. They remind us of what it means to feel something. And we carry the responsibility of representing that messy, passionate truth with care, courage, and humanity.

By Mark Rollinson, Chairman, All About Brands

‘Dangerous nonsense': AI-authored books about ADHD for sale on Amazon

The Guardian

04-05-2025

  • Health
  • The Guardian

‘Dangerous nonsense': AI-authored books about ADHD for sale on Amazon

Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT.

Amazon's marketplace has been deluged with AI-produced works that are easy and cheap to publish, but which include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting. A number of books have appeared on the online retailer's site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis, Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety, and Men with Adult ADHD Diet & Fitness.

Samples from eight books were examined for the Guardian by a US company that detects AI content. The company said each had a rating of 100% on its AI detection score, meaning that its systems are highly confident that the books were written by a chatbot.

Experts said online marketplaces are a 'wild west' owing to the lack of regulation around AI-produced work – and dangerous misinformation risks spreading as a result. Michael Cook, a computer science researcher at King's College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing together dangerous chemicals or ignoring health guidelines. As such, it is 'frustrating and depressing to see AI-authored books increasingly popping up on digital marketplaces', particularly on health and medical topics, which can result in misdiagnosis or worsen conditions, he said.

'Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they've also been trained on pseudoscience, conspiracy theories and fiction. They also can't be relied on to critically analyse or reliably reproduce the knowledge they've previously read – it's not as simple as having the AI "remember" things that they've seen in their training data. Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert,' he said.

Yet he noted Amazon's business model incentivises this type of practice, as it 'makes money every time you buy a book, whether the book is trustworthy or not', while the generative AI companies that create the products are not held accountable.

Prof Shannon Vallor, the director of the University of Edinburgh's Centre for Technomoral Futures, said Amazon had 'an ethical responsibility to not knowingly facilitate harm to their customers and to society', although it would be 'absurd' to make a bookseller responsible for the contents of all its books. Problems are arising because the guardrails previously deployed in the publishing industry – such as reputational concerns and the vetting of authors and manuscripts – have been completely transformed by AI, she noted. This is compounded by a 'wild west' regulatory environment in which there are no 'meaningful consequences for those who enable harms', fuelling a 'race to the bottom', she said.

At present, there is no legislation that requires AI-authored books to be labelled as such. Copyright law only applies if a specific author's content has been reproduced, although Vallor noted that tort law should impose 'basic duties of care and due diligence'.
The Advertising Standards Authority said AI-authored books cannot be advertised in a way that gives a misleading impression that they are written by a human, enabling people who have seen such books to submit a complaint.

Richard Wordsworth was hoping to learn about his recent adult ADHD diagnosis when his father recommended a book he found on Amazon after searching 'ADHD adult men'. When Wordsworth sat down to read it, 'immediately, it sounded strange', he said. The book opened with a quote from the conservative psychologist Jordan Peterson and then contained a string of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, he observed. For example, one chapter discussing emotional dysregulation warned that friends and family 'don't forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars.'

When Wordsworth researched the author, he spotted a headshot that looked AI-generated, plus a lack of qualifications. He searched several other titles in the Amazon marketplace and was shocked to encounter warnings that his condition was 'catastrophic' and that he was 'four times more likely to die significantly earlier'. He felt immediately 'upset', as did his father, who is highly educated. 'If he can be taken in by this type of book, anyone could be – and so well-meaning and desperate people have their heads filled with dangerous nonsense by profiteering scam artists while Amazon takes its cut,' Wordsworth said.

An Amazon spokesperson said: 'We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines. We continue to enhance our protections against non-compliant content and our process and guidelines will keep evolving as we see changes in publishing.'
