
Artificial intelligence — an aid to thought, not a replacement
'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over.'
When ChatGPT briefly went offline last week, it felt, as journalist and writer Gus Silber put it, 'as if the sun had fallen from the sky'.
Speaking on a Jive Media Africa webinar on the subject of 'Machines are writing – do we trust the message?', Silber and other panellists tossed around concepts of 'Uberisation', 'forklifting' and 'outsourcing' to get to grips with AI technology and its ethical pitfalls.
Silber noted that in just a few years, AI has morphed from novelty to necessity and is now deeply woven into daily work across media, academia and science communication.
Its seductive convenience allows us to 'outsource thinking to a machine', said Silber, while noting both the potential and the perils of doing so.
Fellow panellists Sibusiso Biyela, a science communicator and champion of language equity in science, and Michelle Riedlinger, an associate professor in the school of communication at the Queensland University of Technology, agreed with him in a discussion peppered with metaphors highlighting the division of labour in the partnership between technology and humans.
Introducing the webinar, Jive Media director Robert Inglis said that 'artificial intelligence, particularly generative AI, is reshaping both the practice of research and the craft of science communication. This impact is felt by researchers, by science communicators and by others working at the intersection of science, society and media, especially those who are grappling with how AI tools influence credibility, ethics and public trust.'
While many fret over the elimination of jobs and the technological encroachment on the preserve of what it means to be human, Silber readily calls himself a utopian on the subject, believing 'it's ultimately going to be good for humanity'.
Silber noted that AI, 'originally a niche technology, has expanded dramatically, driven by advances like fibre, broadband and always-on connectivity. Tools such as ChatGPT now serve as default knowledge engines, sometimes even surpassing Google.'
Being able to 'outsource a lot of your thinking to, effectively, a machine', he said, tempts users to let AI handle increasingly complex tasks.
In academia and media, some rely heavily on AI-generated content, resulting in a sameness of voice: 'It sounds human, but it sounds human in a very kind of generic and samey way.' While AI offers powerful assistance in tasks like transcription – 'you can transcribe two hours' worth of interviews in five or ten minutes' – the risk is that its convenience leads to 'creative atrophy'. It's 'a real temptation, a kind of "tyranny of ease", where you can just prompt the AI to write essays or theses. That scares me because it risks giving up your creative energy.'
Collaborative use
He nevertheless enthuses about the rise of multimodal AI, mentioning tools like Whisper, NotebookLM and Genspark AI, which are already revolutionising research, communication and creative industries. But he draws clear boundaries: 'I draw the line at outsourcing full creative processes to AI.' Instead, he advocates using AI collaboratively, augmenting human thought rather than replacing it.
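To make that 'forklifting' concrete: the rapid transcription Silber describes can be done with OpenAI's open-source Whisper model. The snippet below is a minimal sketch, assuming the openai-whisper Python package and ffmpeg are installed; 'interview.mp3' is a hypothetical file name standing in for a recorded interview.

```python
# Minimal sketch: transcribe a long interview locally with Whisper.
# Assumes: pip install openai-whisper (plus ffmpeg on the system path).
# "interview.mp3" is a hypothetical input file.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("interview.mp3")  # returns text plus timed segments

print(result["text"])                       # full transcript, for human review
for seg in result["segments"]:              # timestamped segments
    print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] {seg['text']}")
```

The machine does the heavy lifting in minutes, but the output still needs the human check the panellists return to throughout: names, technical terms and quotes must be verified against the recording.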
'We're lucky to live in this creative technical renaissance. We can't go back to how things were before. My advice: explore these tools, break them, have fun and find ways to use them collaboratively. Let machines do the heavy lifting while we focus on human creativity.'
Anxieties, however, are pervasive, said Riedlinger. Her research on news audiences found 'familiar concerns: misinformation, copyright, elections, job displacement'. But people weren't rejecting AI outright: 85% wanted transparency – visible labels, a kind of 'nutritional label' for AI-generated content.
She said a growing 'authenticity infrastructure' is emerging, with companies like Adobe working on labelling multimodal content. Audiences want AI to support, not replace, human journalists and science communicators. 'The key is to keep humans in the loop, to ensure creativity, empathy and accountability remain central.'
To help navigate this, Riedlinger reached for metaphors.
First, she said, contrast 'forklifting versus weightlifting. Forklifting covers repetitive, heavy tasks – transcription, translation, drafting – where AI helps move things efficiently but under human guidance. Weightlifting represents skills that build strength: framing stories, interpreting data, learning audiences. These are areas we risk weakening if we outsource too much to AI.'
The second is the 'Uber metaphor'. 'You can make coffee yourself or order it through Uber. It's convenient, but hides labour behind the scenes: the barista, the driver, data centres. Generative AI feels equally magical but isn't free; there are hidden costs in energy use, data scraping and ethical concerns. Before outsourcing, we must consider these unseen consequences.'
Hallucinations and bias
'In global studies, people increasingly recognise AI's limits: hallucinations, biases in gender, race, geography and class. Some see AI as a calculator, improving over time, but that's misleading. Calculators give fixed answers; generative AI doesn't.'
Reaching for yet another metaphor, she said 'it's more like a talking mirror from a fairy tale', generating fluent, tailored and sometimes flattering responses, but blending truth and invention in a way that can flatten creativity and make unique ideas more generic.
'Authenticity, trust and disclosure are vital. We need consistent labels, audience control and clear public policies.'
This, said Riedlinger, will build trust over time. 'Science communicators must reflect on each task: Is this forklifting or weightlifting? Am I calling an Uber for something I should craft myself? Science communication deserves thoughtful tools and thoughtful users. We need to ensure that our publics have authentic interactions.'
The watchwords, when dealing with AI, are: 'Disclose. Collaborate. Stay in the loop as a human. Design for trust.'
Picking up on the trust, or mistrust, of the machine, Biyela said 'there's a lot of antagonism around AI, especially with articles not disclosing if they're AI-assisted. When audiences hear something was generated by AI, they often turn away. It becomes less of an achievement if it wasn't really done by a human.'
But, he said, 'audiences (and ourselves) need to understand AI's limitations and how it actually works. We call it artificial intelligence, but it's in no way intelligent. It's an automaton that looks like it's thinking, but it's not. It's a clever prediction model using computing power to make it seem like it's thinking for us. But it's not. The thinking is always being done by people. AI never does anything; it's always us. What it produces has been trained to give us what we want.'
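Biyela's 'clever prediction model' can be made concrete with a toy. The sketch below is an illustrative simplification, not how production LLMs are built: real systems use neural networks trained on vast token corpora, but the underlying move, choosing a statistically likely continuation rather than 'thinking', is the same. The corpus here is invented for illustration.

```python
# Toy next-word predictor: fluent-sounding output from pure statistics.
import random
from collections import Counter, defaultdict

# A tiny invented "training corpus"; real models ingest trillions of tokens.
corpus = ("the model predicts the next word and the next word "
          "follows the last word it has seen").split()

# Count which word tends to follow which one.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample a likely next word: no understanding involved, only counts."""
    counts = following[word]
    if not counts:                        # dead end: fall back to any word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(10):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))   # plausible-sounding, meaning-free text
```

Scaled up by many orders of magnitude, this is why generated text 'sounds human' while, as Biyela puts it, the thinking is always being done by people.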
Biyela emphasises that 'you're the human in the loop', accountable for every line an LLM is asked to produce. 'If it summarises something you haven't seen, you have to check it. It makes the job easier, but it doesn't perform it.'
Caveats aside, Biyela says 'generative AI also offers potential in communicating science in underserved languages, like African languages'.
Driving AI
In his conclusion, Inglis, too, reached for a metaphor to guide how science communicators and other professionals and students should engage with AI: 'We would never jump into a car without having learnt to drive the thing. Now you've got these tools at our disposal and we'll use them, but we've got to be aware of the dangers that using them for the wrong things can bring about in the world.'
In short, the panel agreed that in the partnership between AI and people, AI is good at the 'forklifting' work – sorting, calculating, transcribing, processing vast amounts of data quickly – but that humans still carry the mental load: setting priorities, interpreting meaning, understanding context, reading emotions, anticipating unintended consequences and ultimately taking responsibility for decisions.
Inglis further reflected: 'Our work in science communication is to play a part in solving the complex challenges we face and to ensure we do so in ways that build a better future for society and for the planet.' He cited a recent Apple study on the limits of large reasoning models, which were found to suffer a 'complete accuracy collapse beyond certain complexities'.
'This underlines the need for human operators to use these tools as an aid to thinking, not as a replacement for thinking. That grappling with complex ideas is exactly what we're doing with this webinar series – these kinds of answers can't be scraped from the web, they need to be generated and discovered through exploration, conversation, dialogue and skilful engagement.
'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over because it's easier than engaging with tough issues. It's lazy and at this time in the history of our planet, we can't afford to be lazy.' DM
