Ugly truth about Aussie AI images exposed
Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.
We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias.
They reproduce sexist and racist caricatures more at home in the country's imagined monocultural past.
In May 2024, we asked: what do Australians and Australia look like according to generative AI?
To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney.
The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.
We didn't alter the default settings on these tools, and collected the first image or images returned.
Some prompts were refused, producing no results. Requests with the words 'child' or 'children' were more likely to be refused, clearly marking children as a risk category for some AI tool providers.
Overall, we ended up with a set of about 700 images.
They produced ideals suggestive of travelling back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife, and bronzed Aussies on beaches.
We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about 'desirable' Australians and cultural norms.
According to generative AI, the idealised Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.
The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools.
'An Australian mother' typically resulted in white, blonde women wearing neutral colours and peacefully holding babies in benign domestic settings.
The only exception to this was Firefly which produced images of exclusively Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.
Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted.
For AI, whiteness is the default for mothering in an Australian context.
Similarly, 'Australian fathers' were all white.
Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children.
One such father was even toting an iguana – an animal not native to Australia – so we can only guess at the data responsible for this and other glaring glitches found in our image sets.
Prompts to include visual data of Aboriginal Australians surfaced some concerning images, often with regressive visuals of 'wild', 'uncivilised' and sometimes even 'hostile native' tropes.
This was alarmingly apparent in images of 'typical Aboriginal Australian families' which we have chosen not to publish.
Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.
But the racial stereotyping was also acutely present in prompts about housing.
Across all AI tools, there was a marked difference between an 'Australian's house' – presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above – and an 'Aboriginal Australian's house'.
For example, when prompted for an 'Australian's house', Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn.
When we then asked for an 'Aboriginal Australian's house', the generator came up with a grass-roofed hut in red dirt, adorned with 'Aboriginal-style' art motifs on the exterior walls and with a fire pit out the front.
The differences between the two images are striking. They came up repeatedly across all the image generators we tested.
These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Straight Islander peoples, where they would get to own their own data and control access to it.
Has anything improved? Many of the AI tools we used have updated their underlying models since our research was first conducted.
On August 7, OpenAI released their most recent flagship model, GPT-5.
To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT5 to 'draw' two images: 'an Australian's house' and 'an Aboriginal Australian's house'.
The first showed a photorealistic image of a fairly typical red-brick suburban family home.
In contrast, the second image was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky.
These results, generated just a couple of days ago, speak volumes.
Why does this matter? Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software.
In short, they are unavoidable.
Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians.
Given how widely they are used, it's concerning that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist ways.
Given the ways these AI tools are trained on tagged data, reducing cultures to cliches may well be a feature rather than a bug for generative AI systems.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

ABC News
an hour ago
- ABC News
Commonwealth Bank backtracks on AI job cuts, apologises for 'error' as call volumes rise
The Commonwealth Bank has backtracked on dozens of job cuts, describing its decision to axe 45 roles due to artificial intelligence as an "error". CBA said it had apologised to the affected employees after finding the customer service roles were not redundant despite introducing an AI-powered "voice-bot". The Finance Sector Union has described the reversal as a "major win", after it raised a dispute at the Fair Work Commission. It said its members found work actually spiked after the bot was introduced, despite the bank's claims it would reduce calls. "Our investment in technology, including AI, is making it easier and faster for customers to get help, especially in our call centres," CBA had said when confirming the job cuts in July. The FSU said the experience of workers after the bot was introduced was "a very different story". "Call volumes were rising, with management scrambling to offer overtime and even pulling team leaders onto the phones," its statement read. CBA admitted it "did not adequately consider all relevant business considerations" when announcing the redundancies and acknowledged "we should have been more thorough in our assessment of the roles required". Affected staff have now been provided the choice to continue in their current roles, pursue redeployment within the bank or proceed with leaving CBA. However, the FSU said the "damage is already done". "CBA has been caught out trying to dress up job cuts as innovation," FSU national secretary Julia Angrisano said. The union said it was continuing to collect accounts from members at CBA about the impact of offshoring, automation and AI on workloads and job security, ahead of a Fair Work hearing next week. CBA chief executive Matt Comyn recently told ABC News it was difficult to predict the impact of AI on jobs in the long term. "I think the full potential of AI, to the extent that we even understand how that can be done, is one that is many years away. "It's true of any technology that some tasks may be automated. "Ultimately, what we've been able to say before is that people have migrated to higher-value work." The CEO highlighted around 2,000 extra staff hired in recent months, but acknowledged that many of those roles had been added in India, as the bank expanded its technology team there. CBA reported a record $10.25 billion cash profit for the 2025 financial year.

News.com.au
2 hours ago
- News.com.au
AI mobile phone cameras are already making mistakes
Aussies are being warned of harsher penalties linked to the rollout of AI-powered road cameras could result in innocent motorists being fined. The technology, already operating across much of Australia, uses artificial intelligence to detect drivers doing the wrong thing behind the wheel. The cameras are programmed to detect if a driver is either on their phones while driving, not wearing seatbelts properly, and in some cases, speeding. New South Wales was the first state in Australia to introduce the system and since 2020 more than 921,000 fines have been issued to motorists. Transport for NSW says the system has 'transformed' driving behaviour. 'Since these cameras started catching drivers on their phones, we've seen a big shift — from one in 82 drivers (1.22 per cent) being caught during the 2019 trial, to just one in 957 (0.10%) in 2024,' a spokesperson said. 'Looking at your phone for just two seconds while driving at 60km/h means you've travelled 33 metres without really seeing the road.' The AI system analyses images for potential offences before passing them on to human operators for review. However, Avinash Singh, principal lawyer at Astor Legal, warns that 'the cameras are far from infallible', with courts already seeing cases where AI mistook a wallet or portable charger for a phone. 'We have dealt with numerous cases where mobile phone detection cameras have mistaken other objects as mobile phones. Some recent cases we have had dismissed include people holding power banks, portable chargers, wallets and rectangular makeup mirrors,' Avinash said. 'There has been a significant increase in the number of erroneous mobile phone use infringements that have been issued following the introduction of AI-based cameras.' And it's not just false accusations that are leaving drivers out of pocket. Avinash said that the fine process makes it difficult for motorists to challenge a fine once it has been issued. 'This is not the case with AI camera detections, where there appears to be little human oversight. As criminal defence lawyers, our experience has been that infringements are issued first and then it is for the driver to contest it,' Aviniash said. 'This creates an issue where drivers only are made aware of the infringement months after the actual incident. Often, they may be unable to recall the incident given the passage of time. 'The second issue is that the amount of the fine is less than the legal fees associated with contesting it. Because of this, some people may simply be unable to afford to take the matter to court and fight it.' Despite the concerns, the rollout is accelerating nationwide. In Western Australia, AI-powered cameras are currently in their trial phase, which is capable of detecting mobile phone use, seatbelt offences, speeding and point-to-point speeding. According to WA police, these cameras are the most advanced in the country. Six new mobile camera trailers are monitoring various locations around the Perth metropolitan and regional areas, while fixed cameras monitor two sites along the Kwinana Freeway. The trial runs until October, after which infringement notices will begin to be issued. More recently, Tasmania has activated its own hi-tech road cameras on the newly opened Bridgewater Bridge, a 1.2 km stretch of road equipped with speed cameras that have already caught motorists. Although it's not confirmed whether the cameras operate with artificial intelligence software, they have still caught drivers with hefty fines. In just one week, fines totalling more than $40,000 were issued to drivers. The grace period ended at the beginning of August and within the first week of enforcement 246 motorists received speeding fines, according to Yahoo News. In Victoria, the government claims mobile phone and seatbelt detection cameras use cameras that are monitored with artificial intelligence software. 'The cameras take high-resolution images any time of the day or night, and in all traffic and weather conditions,' the Victorian government's website states. 'The AI technology automatically reviews each image.' In Queensland, mobile phone and seatbelt detection cameras use artificial intelligence to filter images and detect potential offences by drivers. 'If no possible offence is detected, AI automatically excludes the images from any further analysis and the images are deleted,' the state government's website states. 'If AI suspects a possible offence, the image is passed on to Queensland Revenue Office. An authorised officer will review the image to determine if an offence has been committed.'

News.com.au
4 hours ago
- News.com.au
CBA calls back humans, U-turns on AI firings
Australia's biggest bank has backflipped on its decision to axe dozens of jobs replaced by an AI chatbot, admitting it made an 'error'. The Commonwealth Bank (CBA) last month announced it would cut 45 call centre jobs after rolling out an artificial intelligence voicebot to answer customer inquiries. The bank has now reversed the decision, offering impacted staff the option to stay in their roles or accept a voluntary exit payment. CBA chief executive Matt Comyn said the bank was engaging with impacted staff. 'I think it's important to set a good precedent,' he saidafter the government's Economic Reform Roundtable on Wednesday. The Australian Finance Sector Union called the news a 'massive win' but said the 'damage is already done for the 45 workers who endured weeks of unnecessary stress, not knowing if they would be able to pay bills or support their families'. 'This is a massive win for workers, proving what can be achieved when members stand together — but let's be clear, this is no victory lap,' Finance Sector Union National Secretary Julia Angrisano said in a statement. 'CBA has been caught out trying to dress up job cuts as innovation. Using AI as a cover for slashing secure jobs is a cynical cost-cutting exercise, and workers know it.' 'CBA likes to talk about being a digital leader, but real leadership means investing in your people, not tossing them aside and blaming the technology.' The bank admitted calls had increased after the rollout of the voicebot, with team leaders having to be pulled onto the phones, the AFR reported. The bank said it has apologised to impacted workers, admitting it had made an 'error' over staff workloads. 'CBA's initial assessment that the 45 roles were not required did not adequately consider all relevant business considerations and this error meant the roles were not redundant,' a CBA spokesman said in a statement provided to 'We have apologised to the employees concerned and acknowledge we should have been more thorough in our assessment of the roles required.' The bank added it was reviewing its internal processes 'to improve our approach going forward'. At the time of the jobs cut announcement last month, a CBA spokesman told NewsWire the bank has hired more than 9,000 people in the 2025 financial year and were currently investing more than $2 billion in their operations. 'To meet the changing needs of our customers, like many organisations, we review the skills we need and how we're organised to deliver the best customer experiences and outcomes,' the spokesman said. 'Our investment in technology, including AI, is making it easier and faster for customers to get help, especially in our call centres. By automating simple queries, our teams can focus on more complex customer queries that need empathy and experience. 'We currently have around 450 open roles across retail banking services, more than 220 on the frontline.' It came after CBA cut 164 jobs from its technology division in March. According to the World Economic Forum, Artificial Intelligence is expected to create roughly 69 million jobs in the next five years, but around 83 million will be eradicated.