Reclaiming the future: Māori voices leading in the age of AI and quantum tech


The Spinoff | 7 hours ago

As artificial intelligence reshapes our world, Māori technologists and creatives are embedding tikanga and tino rangatiratanga into the digital foundations of Aotearoa.
When Te Hiku Media launched its Māori language artificial intelligence (AI) tool last year – capable of transcribing te reo Māori with 92% accuracy – it marked more than a technological milestone. It was a reclamation. In an era when overseas apps routinely mistranslate te reo and karakia, the question isn't just technical: it's cultural. Who should shape the digital future of mātauranga Māori – tech giants, the government, or the people who hold that knowledge as taonga?
'Engaging with global AI is vital to our ongoing economic, social, and cultural wellbeing,' says Jannat Maqbool, executive director of the NZ Artificial Intelligence Researchers Association. She notes the Māori creative and cultural sector contributed $1.6 billion to the economy in 2024, with more than 3,400 Māori businesses – many of which are reimagining elements of te ao Māori through digital tools.
But with that innovation comes risk. Quantum computing, a rapidly advancing field now intersecting with AI, poses serious concerns for data sovereignty. As the Maryland Division of Information Technology explains, quantum computers could break RSA encryption – a widely used data security standard – in a fraction of the time it would take traditional computers. Without clear structures, Maqbool warns increased AI adoption could 'exacerbate existing inequities or create new harms'.
At the heart of AI is data and how it's gathered, protected and governed. Lawsuits have been filed against major tech companies like Meta and OpenAI for allegedly scraping people's creative work to train their models. In Aotearoa, with more than 100 iwi, each with their own dialects and knowledge systems, Māori data is deeply contextual, relational and important.
Kevin Shedlock (Ngāpuhi, Ngāti Porou, Whakatōhea), a computer science lecturer at Victoria University, says this new digital age raises critical questions for Māori and indigenous peoples worldwide – especially around who is 'authenticising' indigenous knowledge. 'AI doesn't understand what respect looks like to us – how it's rooted in ceremonial processes like pōwhiri,' he explains.
Shedlock believes learning is open to all but says meaning shifts when knowledge isn't 'underwritten by someone in the community to credentialise it as tika, pono, or truthful.' He adds: 'The idea that data can be owned by an individual is a very Pākehā one. Information about a whānau, hapū or iwi is inherently collective. At any one time, there are many people who hold that knowledge.'
Unlike many AI tools trained on scraped internet data, Te Hiku's models are built exclusively from material contributed with full consent. Its archive includes more than 30 years of digitised recordings – around 1,000 hours of te reo Māori speakers – and all data contributors retain ownership. Their bespoke 'kaitiakitanga licence' prohibits the use of these tools for discrimination, surveillance or tracking.
Computer-assisted influence is already prevalent in the visual arts. Some carvings at the award-winning Te Rau Karamu marae at Massey University in Wellington were shaped with CNC routing (computer numerical control). Ngataiharuru Taepa (Te Ātiawa, Te Roro o Te Rangi), one of the contributing artists, compares it to the introduction of steel chisels, which 'had implications on the tōhunga who were still using stone chisels'. Digital tools are now prompting similar conversations, especially with AI.
It's important to remember te reo doesn't live in isolation. It's bound to tikanga, kawa and pūrākau. If we sever that link, we lose more than just language. Māori-led AI development ensures cultural nuance is not lost – that values like kaitiakitanga and the living presence of ngā atua are embedded within the systems we build.
Shedlock supports this view. While he admits personal data leaks may be unavoidable, he says we have to hold on to 'the atomic habits that we have, kaitiakitanga, being stewards of our environment, tika and pono – being truthful and honest'.
Maqbool believes safeguarding Māori data sovereignty requires 'embedding te ao Māori' into AI development itself – and supporting Māori-governed research centres to lead the way. She believes this would ensure indigenous knowledge is not lost as government policy adapts and our digital world is restructured.
As AI and quantum technologies accelerate, Māori leaders are clear: it's not just about keeping up – it's about leading. In a world where data builds the foundations of our future, who controls that data will shape the wairua of Aotearoa.
'I think about something I once heard from a Ngāi Tahu rangatira,' says Shedlock. '"We must remember to remember, because that is where our future lies."'


Related Articles


‘I wanted to be a teacher not a cop': the reality of teaching in the world of AI

The Spinoff | 13 hours ago

As AI becomes more ubiquitous, teachers across the board are struggling to adapt. Hera Lindsay Bird reports. *All names in this article have been changed.

Julia*, a New Zealand teacher working at an American university, doesn't explicitly forbid her students from using AI. The top-ranked public university where she teaches academic writing has told her not to. Instead, the students in her classroom must abide by two simple rules: they must ask for permission, and they must provide the LLM prompts, transcripts, and a written reflection on how AI contributed to their work.

Of her class of approximately 20 students, several have acknowledged using ChatGPT. It's possible others have managed to sneak AI past her undetected, but she's not so worried about that, because it means 'on some level they got the point of the class, which was about adapting writing to different audiences, and as the audience for their assignments I was an openly seething AI hater.'

But there are two repeat offenders who continue to hand in essay after essay of LLM-generated writing. The essays are 'bloviating nonsense', complete with fake quotes and made-up sources. The two students repeatedly dodge her requests for in-person meetings and only reluctantly agree to meet over Zoom. When she questions them about their use of ChatGPT, they lie, saying they only ever used it as a brainstorming tool. Only when she manages to 'circle them in conversation' do they grudgingly give her a fuller account of how they produced the work.

Julia is sympathetic to the pressures her students are under. She's patient with them. She painstakingly walks them through the assignment step by step, offering suggestions for how they might resubmit their work. 'What was really sad was the fact it didn't seem to stick,' she says.
'I got final AI assignments in the end, and had to again chase them for versions they had at least edited using their own language, righteously incandescent with frustration like Hades in Disney's Hercules.' She passes the students with low marks. She's not convinced the university would back her up if her students contested their failing grades. 'My main beef with AI is that it made me into a grown adult asshole who had 18-year-old enemies,' she says. 'I wanted to be a teacher not a cop.'

In the last few years, LLMs and generative AI have graduated from novelty tools that generate attractive women with more than the medically recommended number of thumbs to something that can write a convincingly mediocre essay on the role of ambiguity in Henry James's The Turn of the Screw. Two weeks ago, James D. Walsh published a piece in New York Magazine called 'Everyone is Cheating Their Way Through College' about how students and teachers in the US college system are dealing with education in a post-LLM world. The article painted a grim picture of the ubiquity of students using generative AI to cheat on their assignments, claiming, as per the subtitle, 'ChatGPT has unraveled the entire academic project'.

New Zealand universities are facing similar problems. In May, Victoria University of Wellington made the decision to enforce a handwritten exam policy for two third-year law papers in an attempt to prevent student cheating. Days after the announcement, the university backpedalled. The Post reported that, in response to student concerns, the university had settled on a compromise: half of the exam would comprise a handwritten multi-choice test, while students would still be permitted to use their laptops for the second half. Use of AI was still strictly forbidden and would be 'enforced through active and enhanced invigilation.'

Universities aren't the only ones struggling with the new reality.
Almost every educator I spoke with, from primary school teachers to those supervising postgraduate dissertations, raised serious concerns, with some teachers estimating that up to 80% of their students relied on ChatGPT to complete assignments. I spoke to MA supervisors whose history students' theses were riddled with fictitious sources and 'archival' Midjourney photographs, and primary and intermediate school teachers who said students as young as 11 were using it to answer simple personal prompts, such as 'what did you do in the summer holidays?' and 'what was your favourite penguin in the text?'

All of this raises the question: what's the point of an education? Is this the inevitable result of an increasingly consumerist tertiary sector, where students are simply paying for credentials in exchange for entry into the job market? Should we learn to embrace AI as an educational inevitability, in the way that we've come to accept the pocket calculator? Are we being unnecessarily catastrophist?

Ben*, an English teacher at a North Island all-boys school, says, 'I find it really frustrating to read boosterish pieces about it in the press because they all assume that students will use it to learn. AI is seen as offering kids autonomy over their own learning, but this is not an autonomy they know what to do with! Students are using it to solve their problems and get answers, not to think with.'

Ben worries that information literacy is falling by the wayside. 'Traditional research and evaluation skills are totally bypassed by AI,' he says. 'If [a student] asks 'what is evidence about how veganism is unhealthy', it will supply them with exactly that, without them having to do any sleuthing, comparison or evaluation'.

The greatest concern from teachers was the atrophying effect it was having on students' critical thinking capacities.
Julia from the undergraduate writing programme says, 'They can't get their head around why authorship matters, so they can't tell slop from scientific journals.' This problem is further complicated by the fact that 'reputable journals are accepting GPT bollocks that slips through'. Lily*, a lecturer in the arts at a large New Zealand university, predicts that her colleagues will be shocked come exam season. 'There is going to be a vast difference between what has been submitted for take-home assignments and what students will be able to produce in exams.'

There is an argument to be made that ChatGPT can level the playing field, especially for students who struggle to translate their learning into academic language. But as other teachers were quick to point out, over-reliance on this technology actually has the potential to widen the gap. Mark*, a special education teacher working at a US high school, uses the example of a student with a reading disability. 'If a student has difficulty reading, difficult reading frustrates them. They can just tell the bot to read the article and answer the questions. They'll get the correct answers… and of course their special ed documentation will say they're making progress when actually they aren't'.

Ben is adamant that AI is a tool being utilised in unhelpful ways. 'When all students have AI, why would they choose to do the hard task when they can make it three times as easy in an instant? Again we assume students understand learning in a more holistic sense and that they know what's best for their own academic progress. But that just isn't true. Students will (like all humans!) take the easy route out most of the time, and by enabling this, they're missing out.'

So what, precisely, are teachers supposed to do about this problem?
Unlike the plagiarism detectors of the previous academic generation, there doesn't seem to be any surefire way to prove that LLMs have been used, even though most teachers felt confident in their ability to detect the linguistic hallmarks of ChatGPT, citing essays that sound like 'management reports' chequered with phrases like 'these themes converge' or 'a conceptual alignment is revealed'. One teacher noted the 'Degrassi-era syntax'.

Preventing students from using AI is becoming increasingly difficult as the technology becomes more ubiquitous. Even if teachers ask for handwritten essays, or check over a Google document's revision history in an attempt to rumble obvious copy-paste jobs, students are quick to adapt. 'They could get AI to rewrite it in the style of a 15-year-old, complete with spelling errors,' says Ben. Or painstakingly copy the LLM's essay by hand. So far, the only reliable method of preventing students from using AI is to revert to technology-free classrooms, with handwritten or oral examinations, but even this solution is potentially short-lived as technology like Google Glass becomes more mainstream.

Some teachers have gone to great lengths to identify students using AI, like the lecturers in the New York Magazine piece, who hid 'gotcha' instructions in their assignments, such as including 'mention Ariana Grande' in white font to identify students who pasted the assignment brief directly into ChatGPT. But strategies like this have a limited shelf life, as students communicate and learn to adapt.

The efforts of teachers to prevent the use of LLMs are often undermined by the educational institutions themselves. Many of the assessors I spoke to said that even when they had identified something they were 99% sure was written by generative AI, they were often powerless to do anything about it. Some teachers were explicitly discouraged from banning AI.
Others had banned AI, but the policy was effectively unenforceable, as the use of AI is notoriously hard to prove and universities feared complaints or litigation. Many teachers I spoke to felt as if they had been left to navigate the grey areas alone.

Unsurprisingly, many teachers are becoming increasingly disillusioned. Julia said dealing with AI took up half of her mental load, 'due to the cat-and-mouse of it all'. Another teacher, Jo*, who has been teaching undergrad at a large New Zealand university for around a decade, says policing it is a nightmare. 'It takes up so much time and our workload calculations haven't changed to account for this'. It's not just a physical toll for Jo either. 'Dealing with student work which is entirely AI written is by a wide margin the most discouraging thing I've experienced working in academia, and given how many very discouraging things have happened lately that's really saying something.'

Concerns over AI aren't limited to educators. I spoke to a few students, who were dismayed to discover their essays were being marked by AI. 'There is a trend for teachers to use it for marking but I'm not a fan,' says Ben. 'Marking is about finding out where your students are at so you can sequence your lessons. Taking the teacher out of the equation is antithetical to good teaching. It's like driving with your eyes closed.'

Other teachers have begun using AI as a lesson-planning tool. For some, this is an effective time-saving measure that eases the burden on a sector that is notoriously under-resourced and overworked. As Ben is quick to point out, teachers using AI to plan lessons is hardly equivalent to students using it to answer questions. Teachers, he says, have the competence and skill base required to use AI 'selectively and thoughtfully'. But the outcomes aren't necessarily positive.
One witness reported a new entrant teacher getting students to write descriptions of monsters, before entering the descriptions into ChatGPT to create illustrations. But the generated pictures were inappropriately terrifying. To quote the observer: 'Why can't they draw the fucking monsters?'

The news isn't all bad. I spoke to some teachers who said they'd observed positive use of LLMs by students to further their learning, like a Romanian sociology lecturer who saw students practising statistical analysis by generating fictitious interview subjects. Others, like Rachel*, a teacher at an international primary school in Malaysia, say that LLMs are a useful tool for translation. 'For some kids, it has really improved their creativity. If it's used effectively and taught well, it could revolutionise teaching.'

But by and large, the mood was depressing. Are we staring down the barrel of a future in which we're simply teaching one robot how to talk to another? Ben, the high school teacher, suggested a 'de-teching' of the classroom was inevitable, and said he and his colleagues have already seen positive interpersonal results from the government's school cellphone ban.

'We need to start thinking about what skills we want humans to have, regardless of their utility in an economy that will slowly be replaced by AI. There's no point trying to predict the jobs of the future. Let's think: what are the things we want everyone in our society to know and understand? What are the key skills, human and disciplinary, that we want everyone in our society to work on? What kind of traits and characteristics do we want to encourage for the good of the individual and the community? That's where we need to start,' says Ben. 'It's in our hands to make a more robust and more human education system. In fact, we have to. If we don't, and if we allow kids to outsource to AI how to think, write, imagine, and work, we'd be doing them a generational disservice.'
Julia, despite her struggles, is quick to caution against hyperbole. She believes claims that everyone is using AI are 'highly exaggerated', and that the people pushing these products are incentivised to promote the idea that 'not to cheat is to get left behind'. Most of her students, she says, are 'honestly trying to get by on their own thought and expression. So it's certainly not a lost battle we should just give up on due to perceived ubiquity of the tech'.

Most teachers agree that the Rubicon has been crossed, and complete abstinence from AI tech is no longer possible. But providing students with the tools to think critically and parse fact from fiction is perhaps more urgent than ever, in a world that is increasingly full of bad actors and disinformation. As Julia says, 'eventually they'll need to rub two brain cells together in a circumstance where the stakes feel real to them, and the less they've practised thinking and expressing their thoughts the more cringe their failures will be.'
