Artificial intelligence — an aid to thought, not a replacement
Daily Maverick, 17-06-2025
'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over.'
When ChatGPT briefly went offline last week, it felt, as journalist and writer Gus Silber put it, 'as if the sun had fallen from the sky'.
Speaking on a Jive Media Africa webinar on the subject of 'Machines are writing – do we trust the message?', Silber and other panellists tossed around concepts of 'Uberisation', 'forklifting' and 'outsourcing' to get to grips with AI technology and its ethical pitfalls.
Silber noted that in just a few years, AI had morphed from novelty to necessity and is now deeply woven into daily work across media, academia and science communication.
Its seductive convenience allows us to 'outsource thinking to a machine', said Silber, while noting both the potential and the perils of doing so.
Fellow panellists Sibusiso Biyela, a science communicator and champion of language equity in science, and Michelle Riedlinger, associate professor in the school of communications at the Queensland University of Technology, agreed, in a discussion peppered with metaphors highlighting the division of labour in the partnership between technology and humans.
Introducing the webinar, Jive Media director Robert Inglis said that 'artificial intelligence, particularly generative AI, is reshaping both the practice of research and the craft of science communication. This impact is felt by researchers, by science communicators and by others working at the intersection of science, society and media and especially those who are grappling with how AI tools influence credibility, ethics and public trust.'
While many fret over the elimination of jobs and the technological encroachment on the preserve of what it means to be human, Silber readily calls himself a utopian on the subject, believing 'it's ultimately going to be good for humanity'.
Silber notes that the reach of AI, 'originally a niche technology, has expanded dramatically, driven by advances like fibre, broadband and always-on connectivity. Tools such as ChatGPT now serve as default knowledge engines, sometimes even surpassing Google.'
Being able to 'outsource a lot of your thinking to, effectively, a machine', he said, tempts users to let AI handle increasingly complex tasks.
In academia and media, some rely heavily on AI-generated content, resulting in a sameness of voice: 'It sounds human, but it sounds human in a very kind of generic and samey way.' While AI offers powerful assistance in tasks like transcription – 'you can transcribe two hours' worth of interviews in five or ten minutes' – the risk is that its convenience leads to 'creative atrophy'. It's 'a real temptation, a kind of "tyranny of ease", where you can just prompt the AI to write essays or theses. That scares me because it risks giving up your creative energy.'
Collaborative use
He nevertheless enthuses about the rise of multimodal AI, mentioning tools like Whisper, NotebookLM and Genspark AI, which are already revolutionising research, communication and the creative industries. But he draws clear boundaries: 'I draw the line at outsourcing full creative processes to AI.' Instead, he advocates using AI collaboratively, augmenting human thought rather than replacing it.
'We're lucky to live in this creative technical renaissance. We can't go back to how things were before. My advice: explore these tools, break them, have fun and find ways to use them collaboratively. Let machines do the heavy lifting while we focus on human creativity.'
Anxieties, however, are pervasive, said Riedlinger. Her research into news audiences 'found familiar concerns: misinformation, copyright, elections, job displacement'. But people weren't rejecting AI outright: 85% wanted transparency – visible labels, a kind of 'nutritional label' for AI-generated content.
She said there's a growing 'authenticity infrastructure' emerging, with companies like Adobe working on labelling multimodal content. Audiences want AI to support, not replace, human journalists and science communicators. 'The key is to keep humans in the loop, to ensure creativity, empathy and accountability remain central.'
To help navigate this, Riedlinger reached for metaphors.
First, she said, contrast 'forklifting versus weightlifting. Forklifting covers repetitive, heavy tasks – transcription, translation, drafting – where AI helps move things efficiently but under human guidance. Weightlifting represents skills that build strength: framing stories, interpreting data, learning audiences. These are areas we risk weakening if we outsource too much to AI.'
The second is the 'Uber metaphor'. 'You can make coffee yourself or order it through Uber. It's convenient, but hides labour behind the scenes: the barista, the driver, data centres. Generative AI feels equally magical but isn't free; there are hidden costs in energy use, data scraping and ethical concerns. Before outsourcing, we must consider these unseen consequences.'
Hallucinations and bias
'In global studies, people increasingly recognise AI's limits: hallucinations, biases in gender, race, geography and class. Some see AI as a calculator, improving over time, but that's misleading. Calculators give fixed answers; generative AI doesn't.'
Reaching for yet another metaphor, she said 'it's more like a talking mirror from a fairy tale', generating fluent, tailored and sometimes flattering responses, but blending truth and invention in a way that can flatten creativity and make unique ideas more generic.
'Authenticity, trust and disclosure are vital. We need consistent labels, audience control and clear public policies.'
This, said Riedlinger, will build trust over time. 'Science communicators must reflect on each task: Is this forklifting or weightlifting? Am I calling an Uber for something I should craft myself? Science communication deserves thoughtful tools and thoughtful users. We need to ensure that our publics have authentic interactions.'
The watchwords, when dealing with AI, are: 'Disclose. Collaborate. Stay in the loop as a human. Design for trust.'
Picking up on the trust, or mistrust, of the machine, Biyela said 'there's a lot of antagonism around AI, especially with articles not disclosing if they're AI-assisted. When audiences hear something was generated by AI, they often turn away. It becomes less of an achievement if it wasn't really done by a human.'
But, he said, 'audiences (and ourselves) need to understand AI's limitations and how it actually works. We call it artificial intelligence, but it's in no way intelligent. It's an automaton that looks like it's thinking, but it's not. It's a clever prediction model using computing power to make it seem like it's thinking for us. But it's not. The thinking is always being done by people. AI never does anything; it's always us. What it produces has been trained to give us what we want.'
Biyela emphasises that 'You're the human in the loop' and have to account for every line an LLM is asked to produce. 'If it summarises something you haven't seen, you have to check it. It makes the job easier, but it doesn't perform it.'
Caveats aside, Biyela says generative AI 'also offers potential in communicating science in underserved languages, like African languages'.
Driving AI
In his conclusion, Inglis, too, reached for a metaphor to guide how science communicators, other professionals and students should engage with AI: 'We would never jump into a car without having learnt to drive the thing. Now we've got these tools at our disposal and we'll use them, but we've got to be aware of the dangers that using them for the wrong things can bring about in the world.'
In short, the panel agreed that in the partnership between AI and people, AI is good at the 'forklifting' work – sorting, calculating, transcribing, processing vast amounts of data quickly – but that humans still carry the mental load: setting priorities, interpreting meaning, understanding context, reading emotions, anticipating unintended consequences and ultimately taking responsibility for decisions.
Inglis further reflected: 'Our work in science communication is to play a part in solving the complex challenges we face and to ensure we do so in ways that build a better future for society and for the planet.' He cited a recent study by Apple that exposed the limits of large reasoning models' capacity for deep reasoning, finding that they face a 'complete accuracy collapse beyond certain complexities'.
'This underlines the need for human operators to use these tools as an aid to thinking, not as a replacement for thinking. That grappling with complex ideas is exactly what we're doing with this webinar series – these kinds of answers can't be scraped from the web, they need to be generated and discovered through exploration, conversation, dialogue and skilful engagement.
'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over because it's easier than engaging with tough issues. It's lazy and at this time in the history of our planet, we can't afford to be lazy.' DM