
‘I wanted to be a teacher not a cop': the reality of teaching in the world of AI

The Spinoff

As AI becomes more ubiquitous, teachers across the board are struggling to adapt. Hera Lindsay Bird reports.
*All names in this article have been changed.
Julia*, a New Zealand teacher working at an American university, doesn't explicitly forbid her students from using AI. The top-ranked public university where she teaches academic writing has told her not to. Instead, the students in her classroom must abide by two simple rules. They must ask for permission. And they must provide the LLM prompts, transcripts, and a written reflection on how AI contributed to their work.
Of her class of approximately 20 students, several have acknowledged using ChatGPT. It's possible others have managed to sneak AI past her undetected, but she's not so worried about that, because it means 'on some level they got the point of the class, which was about adapting writing to different audiences, and as the audience for their assignments I was an openly seething AI hater.'
But there are two repeat offenders who continue to hand in essay after essay of LLM-generated writing. The essays are 'bloviating nonsense,' complete with fake quotes and made-up sources.
The two students repeatedly dodge her requests for in-person meetings. They only reluctantly agree to meet through Zoom. When she questions them about their use of ChatGPT, they lie, saying they only ever used it as a brainstorming tool. Only when she manages to 'circle them in conversation' do they grudgingly give her a fuller account of how they produced the work.
Julia is sympathetic to the pressures her students are under. She's patient with them. She painstakingly walks them through the assignment step by step, offering suggestions for how they might resubmit their work. 'What was really sad was the fact it didn't seem to stick,' she says. 'I got final AI assignments in the end, and had to again chase them for versions they had at least edited using their own language, righteously incandescent with frustration like Hades in Disney's Hercules.' She passes the students with low marks. She's not convinced the university would back her up if her students contested their failing grade.
'My main beef with AI is that it made me into a grown adult asshole who had 18-year-old enemies,' she says. 'I wanted to be a teacher not a cop.'
In the last few years, LLMs and generative AI have graduated from novelty tools that generate attractive women with more than the medically recommended number of thumbs to ones that can write a convincingly mediocre essay on the role of ambiguity in Henry James's The Turn of the Screw.
Two weeks ago, James D. Walsh published a piece in New York Magazine called 'Everyone is Cheating Their Way Through College' about how students and teachers in the US college system are dealing with education in a post-LLM world. The article painted a grim picture of the ubiquity of students using generative AI to cheat on their assignments, claiming, as per the subtitle, that 'ChatGPT has unraveled the entire academic project'.
New Zealand universities are facing similar problems. In May, Victoria University of Wellington made the decision to enforce a handwritten exam policy for two third-year law papers in an attempt to prevent student cheating. Days after the announcement, the university backpedalled. The Post reported that, in response to student concerns, the university had settled on a compromise. Half of the exam would comprise a handwritten multi-choice test. Students would still be permitted to use their laptops for the second half of the exam. Use of AI was still strictly forbidden and would be 'enforced through active and enhanced invigilation.'
Universities aren't the only ones struggling with the new reality. Almost every educator I spoke with, from primary school teachers to those supervising postgraduate dissertations, raised serious concerns, with some teachers estimating that up to 80% of their students relied on ChatGPT to complete assignments. I spoke to MA supervisors whose history students' theses were riddled with fictitious sources and 'archival' Midjourney photographs, and primary and intermediate school teachers, who said students as young as 11 were using it to answer simple personal prompts, such as 'what did you do in the summer holidays?' and 'what was your favourite penguin in the text?'
All of this raises the question: What's the point of an education? Is this the inevitable result of an increasingly consumerist tertiary sector, where students are simply paying for credentials in exchange for entry into the job market? Should we learn to embrace AI as an educational inevitability, in the way that we've come to accept the pocket calculator? Are we being unnecessarily catastrophist?
Ben*, an English teacher at a North Island all-boys school, says, 'I find it really frustrating to read boosterish pieces about it in the press because they all assume that students will use it to learn. AI is seen as offering kids autonomy over their own learning, but this is not an autonomy they know what to do with! Students are using it to solve their problems and get answers, not to think with.'
Ben worries that information literacy is falling by the wayside. 'Traditional research and evaluation skills are totally bypassed by AI,' he says. 'If [a student] asks 'what is evidence about how veganism is unhealthy', it will supply them with exactly that, without them having to do any sleuthing, comparison or evaluation'.
The greatest concern among teachers was the atrophying effect AI was having on students' critical thinking capacities. Julia from the undergraduate writing programme says, 'They can't get their head around why authorship matters, so they can't tell slop from scientific journals.' This problem is further complicated by the fact that 'reputable journals are accepting GPT bollocks that slips through'.
Lily*, a lecturer in the arts at a large New Zealand university, predicts that her colleagues will be shocked come exam season. 'There is going to be a vast difference between what has been submitted for take home assignments and what students will be able to produce in exams.'
There is an argument to be made that ChatGPT can level the playing field, especially for students who struggle to translate their learning into academic language. But as other teachers were quick to point out, over-reliance on this technology actually has the potential to widen the gap.
Mark*, a special education teacher working at a US high school, uses the example of a student with a reading disability. 'If a student has difficulty reading, difficult reading frustrates them. They can just tell the bot to read the article and answer the questions. They'll get the correct answers… and of course their special ed documentation will say they're making progress when actually they aren't'.
Ben is adamant that AI is a tool being utilised in unhelpful ways. 'When all students have AI, why would they choose to do the hard task when they can make it three times as easy in an instant? Again we assume students understand learning in a more holistic sense and that they know what's best for their own academic progress. But that just isn't true. Students will (like all humans!) take the easy route out most of the time, and by enabling this, they're missing out.'
So what, precisely, are teachers supposed to do about this problem? Unlike with the plagiarism detectors of the previous academic generation, there doesn't seem to be any surefire way to prove that LLMs have been used, even though most teachers felt confident in their ability to detect the linguistic hallmarks of ChatGPT, citing essays that sound like 'management reports' chequered with phrases like 'these themes converge' or 'a conceptual alignment is revealed'. One teacher noted the 'Degrassi-era syntax'.
Preventing students from using AI is becoming increasingly difficult as the technology becomes more ubiquitous. Even if teachers ask for handwritten essays, or check over a Google document's revision history in an attempt to rumble obvious copy-paste jobs, students are quick to adapt.
'They could get AI to rewrite it in the style of a 15-year-old, complete with spelling errors,' says Ben. Or painstakingly copy the LLM's essay by hand. So far, the only reliable method of preventing students from using AI is to revert to technology-free classrooms, with handwritten or oral examinations, but even this solution is potentially short-lived, as technology like Google Glass becomes more mainstream.
Some teachers have gone to great lengths to identify students using AI, like the lecturers in the New York Magazine piece, who hid 'gotcha' instructions in their assignments, such as including 'mention Ariana Grande' in white font to identify students who pasted the assignment brief directly into ChatGPT. But strategies like this have a limited shelf life, as students communicate and learn to adapt.
The efforts of teachers to prevent the use of LLMs are often undermined by the educational institutions themselves. Many of the assessors I spoke to said that even when they had identified something they were 99% sure was written by generative AI, they were often powerless to do anything about it. Some teachers were explicitly discouraged from banning AI. Others had banned AI, but the policy was effectively unenforceable, as the use of AI is notoriously hard to prove, and the universities feared complaints or litigation. Many teachers I spoke to felt as if they had been left to navigate the grey areas alone.
Unsurprisingly, many teachers are becoming increasingly disillusioned. Julia said dealing with AI took up half of her mental load, 'due to the cat-and-mouse of it all'.
Another teacher, Jo*, who has been teaching undergrad at a large New Zealand university for around a decade, says policing it is a nightmare. 'It takes up so much time and our workload calculations haven't changed to account for this'.
It's not just a physical toll for Jo either. 'Dealing with student work which is entirely AI written is by a wide margin the most discouraging thing I've experienced working in academia, and given how many very discouraging things have happened lately that's really saying something.'
Concerns over AI aren't limited to educators. I spoke to a few students, who were dismayed to discover their essays were being marked by AI. 'There is a trend for teachers to use it for marking but I'm not a fan,' says Ben. 'Marking is about finding out where your students are at so you can sequence your lessons. Taking the teacher out of the equation is antithetical to good teaching. It's like driving with your eyes closed.'
Other teachers have begun using AI as a lesson planning tool. For some, this is an effective time-saving measure and eases the burden on a sector that is notoriously under-resourced and overworked. As Ben is quick to point out, teachers using AI to plan lessons is hardly equivalent to students using it to answer questions. Teachers, he says, have the competence and skill base required to use AI 'selectively and thoughtfully'. But the outcomes aren't necessarily positive. One witness reported a new entrant teacher getting students to write descriptions of monsters, before entering the descriptions into ChatGPT to create illustrations. But the generated pictures were inappropriately terrifying. To quote the observer: 'Why can't they draw the fucking monsters?'
The news isn't all bad. I spoke to some teachers who said they'd observed positive use of LLMs by students to further their learning, like a Romanian sociology lecturer who saw students practising statistical analysis by generating fictitious interview subjects.
Others, like Rachel*, a teacher at an international primary school in Malaysia, say that LLMs are a useful tool for translation. 'For some kids, it has really improved their creativity. If it's used effectively and taught well, it could revolutionise teaching.' But by and large, the mood was depressing. Are we staring down the barrel of the future, in which we're simply teaching one robot how to talk to another?
Ben, the high school teacher, suggested a 'de-teching' of the classroom was inevitable, and that he and his colleagues have already seen positive interpersonal results from the government's school cellphone ban. 'We need to start thinking about what skills we want humans to have, regardless of their utility in an economy that will slowly be replaced by AI. There's no point trying to predict the jobs of the future. Let's think: what are the things we want everyone in our society to know and understand? What are the key skills, human and disciplinary, that we want everyone in our society to work on? What kind of traits and characteristics do we want to encourage for the good of the individual and the community? That's where we need to start,' says Ben.
'It's in our hands to make a more robust and more human education system. In fact, we have to. If we don't, and if we allow kids to outsource to AI how to think, write, imagine, and work, we'd be doing them a generational disservice.'
Julia, despite her struggles, is quick to caution against hyperbole. She believes claims that everyone is using AI are 'highly exaggerated', and the people pushing these products are incentivised to promote the idea that 'not to cheat is to get left behind'. Most of her students, she says, are 'honestly trying to get by on their own thought and expression. So it's certainly not a lost battle we should just give up on due to perceived ubiquity of the tech'.
Most teachers agree that the Rubicon has been crossed, and complete abstinence from AI tech is no longer possible. But providing students with the tools to think critically and parse fact from fiction is perhaps more urgent than ever, in a world that is increasingly full of bad actors and disinformation. As Julia says, 'eventually they'll need to rub two brain cells together in a circumstance where the stakes feel real to them, and the less they've practiced thinking and expressing their thoughts the more cringe their failures will be.'



