
NZ and ROK should work together on AI issues
Co-operation between Korea and New Zealand on AI is essential, Choontae Park writes.
Artificial Intelligence (AI) has now emerged as a key force, not only for technological innovation, but also as a variable that will shape the future of human civilisation.
As of 2025, AI is actively integrated into nearly every field — daily life, industry, economy, education, healthcare and beyond. However, the changes brought by AI are not entirely positive.
Challenges such as job displacement, invasion of privacy, deepening social inequality and ethical dilemmas lie ahead. These are complex issues that cannot be resolved by any single nation alone, as the impact of AI transcends national borders and spreads globally.
Therefore, international co-operation and close alliances are more important than ever. The declaration announced at the 3rd AI Smart Forum held in Seoul on May 16 provided important direction for the future of AI.
It emphasised AI was "not a replacement for humans, but a collaborator in building the future".
It also asserted that technology could not "be completed by science alone; true civilisation [was] only possible when accompanied by humanistic reflection and philosophy."
The declaration clearly stated that AI should enrich human life and contribute to world peace and universal values. This message offers profound insights into the path that Korea and New Zealand should walk together in the AI era.
Korea has been experiencing rapid growth in the AI field on a global scale. In December 2024, the Korean National Assembly passed the Basic Act on Artificial Intelligence, one of the world's first comprehensive AI laws. This law institutionalises the transparency, safety and ethical use of AI.
Korea is taking the lead in shaping international norms through collaboration. It continues to pursue a balance between technological innovation and ethical responsibility, ensuring that AI development earns public trust and remains accountable.
Meanwhile, New Zealand emphasises inclusive and sustainable AI development through its AI Blueprint for Aotearoa New Zealand (2020) and the Algorithm Charter for Aotearoa New Zealand (introduced in 2020). The country's AI policies prioritise core values such as human rights, diversity, inclusion, transparency, safety and accountability.
Notably, New Zealand places indigenous data sovereignty, welfare and the protection of minority rights at the heart of its AI policies. This approach ensures that AI does not harm any specific group and earns broad public trust and participation.
Korea and New Zealand possess complementary strengths in their AI strategies: Korea excels in technological innovation and policy leadership, while New Zealand offers a strong human-rights-centred and inclusive approach.
There are clear reasons why the two countries should collaborate. First, to strengthen ethical standards and humanistic reflection in AI. By jointly promoting AI research and education that combine science, humanities and philosophy, we can ensure that AI does not undermine human dignity or social values.
Second, to build a global AI governance system together. To address risks such as algorithmic bias and privacy infringement, the two nations must collaborate in international organisations and multilateral forums.
Third, to create an inclusive AI ecosystem. Korea and New Zealand should work to increase participation by women, indigenous peoples and minorities in AI development, and share policies that guarantee data sovereignty and fairness.
Fourth, joint research and standardisation are essential to secure public trust. Only AI that citizens can trust will achieve broad social acceptance.
Fifth, AI should be actively applied to realise social values. Collaborative projects should focus on addressing global issues such as climate change, public health, welfare and education.
Korea and New Zealand have already laid the groundwork for diverse AI co-operation. Going forward, the two countries should deepen their partnership in the following ways: conducting joint research and developing policies in AI ethics and governance; collaborating in the creation of international AI standards and actively participating in global regulatory discussions; and expanding AI education and talent exchange programmes to help nurture global AI leaders from the younger generation.
The countries should also develop and implement inclusive AI policies that protect vulnerable populations and guarantee data rights, and promote joint projects that apply AI to areas such as climate action, healthcare, welfare and education.
Ultimately, AI must enrich human life and help achieve global peace and shared values. Technology is not completed by science alone — true civilisation is possible only when paired with reflection and philosophy.
By pursuing a harmonious blend of innovation, ethics, inclusion, trust and international co-operation, Korea and New Zealand can become exemplary partners in the AI era.
— Dr Choontae Park is a former university dean who now lives in New Zealand.
Related Articles


Techday NZ
7 hours ago
- Techday NZ
AI drives 80 percent of phishing with USD $112 million lost in India
Artificial intelligence has become the predominant tool in cybercrime, according to recent research and data from law enforcement and the cybersecurity sector.

AI's growing influence

A June 2025 report revealed that AI is now utilised in 80 percent of all phishing campaigns analysed this year. This marks a shift from traditional, manually created scams to attacks fuelled by machine-generated deception. Concurrently, Indian police recorded that criminals stole the equivalent of US$112 million in a single state between January and May 2025, attributing the sharp rise in financial losses to AI-assisted fraudulent operations.

These findings are reflected in the daily experiences of security professionals, who observe an increasing use of automation in social engineering, malware development and reconnaissance. The pace at which cyber attackers are operating is a significant challenge for current defensive strategies.

Methods of attack

Large language models are now being deployed to analyse public-facing employee data and construct highly personalised phishing messages. These emails replicate a victim's communication style, job role and business context.

Additionally, deepfake technology has enabled attackers to create convincing audio and video content. Notably, an incident in Hong Kong this year saw a finance officer send HK$200 million after participating in a deepfake video call bearing the likeness of their chief executive.

Generative AI is also powering the development of malware capable of altering its own code and behaviour within hours. This constant mutation enables it to bypass traditional defences like endpoint detection and sandboxing solutions.

Another tactic, platform impersonation, was highlighted by Check Point, which identified fake online ads for a popular AI image generator. These ads redirected users to malicious software disguised as legitimate installers, merging advanced loader techniques with sophisticated social engineering.
The overall result is a landscape where AI lowers the barriers to entry for cyber criminals while amplifying the reach and accuracy of their attacks.

Regulatory landscape

Regulators are under pressure to keep pace with the changing threat environment. The European Union's AI Act, described as the first horizontal regulation of its kind, became effective last year. However, significant obligations affecting general-purpose AI systems will begin from August 2025. Industry groups in Brussels have requested a delay on compliance deadlines due to uncertainty over some of the rules, but firms developing or deploying AI will soon be subject to financial penalties for not adhering to the regulations.

Guidance issued under the Act directly links the risks posed by advanced AI models to cybersecurity, including the creation of adaptive malware and the automation of phishing. This has created an expectation that security and responsible AI management are now interrelated priorities for organisations. Company boards are expected to treat the risks associated with generative models with the same seriousness as data protection or financial governance risks.

Defensive measures

A number of strategies have been recommended in response to the evolving threat environment. Top of the list is the deployment of behaviour-based detection systems that use machine learning in conjunction with threat intelligence, as traditional signature-based tools struggle against ever-changing AI-generated malware. Regular vulnerability assessments and penetration testing, ideally by CREST-accredited experts, are also regarded as essential to expose weaknesses overlooked by both automated and manual processes.

Verification protocols for audio and video content are another priority. Using additional communication channels or biometric checks can help prevent fraudulent transactions initiated by synthetic media.
Adopting zero-trust architectures, which strictly limit user privileges and segment networks, is advised to contain potential breaches. Teams managing AI-related projects should map inputs and outputs, track possible abuse cases, and retain detailed logs in order to meet audit obligations under the forthcoming EU regulations.

Staff training programmes are also shifting focus. Employees are being taught to recognise subtle cues and nuanced context, rather than relying on spotting poor grammar or spelling mistakes as indicators of phishing attempts. Training simulations must evolve alongside the sophistication of modern cyber attacks.

The human factor

Despite advancements in technology, experts reiterate that people remain a core part of the defence against AI-driven cybercrime. Attackers are leveraging speed and scale, but defenders can rely on creativity, expertise, and interdisciplinary collaboration.

"Technology alone will not solve AI-enabled cybercrime. Attackers rely on speed and scale, but defenders can leverage creativity, domain expertise and cross-disciplinary thinking. Pair seasoned red-teamers with automated fuzzers; combine SOC analysts' intuition with real-time ML insights; empower finance and HR staff to challenge 'urgent' requests no matter how realistic the voice on the call," said Himali Dhande, Cybersecurity Operations Lead at Borderless CS.

The path ahead

There is a consensus among experts that the landscape has been permanently altered by the widespread adoption of AI. It is increasingly seen as necessary for organisations to shift from responding to known threats to anticipating future methods of attack. Proactive security, embedded into every project and process, is viewed as essential not only for compliance but also for continued protection.

Borderless CS stated it "continues to track AI-driven attack vectors and integrate them into our penetration-testing methodology, ensuring our clients stay ahead of a rapidly accelerating adversary.
Let's shift from reacting to yesterday's exploits to pre-empting tomorrow's."
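The audit-logging advice above (mapping an AI system's inputs and outputs and retaining detailed logs) can be sketched as a thin wrapper around a model call. This is a minimal illustration only: `call_model` stands in for whatever model invocation an organisation actually uses, and the record fields are an assumption, not a format prescribed by the EU AI Act or by any vendor quoted here.

```python
import json
import time
import uuid

def audited_call(model_fn, prompt, log_path="ai_audit.log"):
    """Invoke an AI model function and append a structured audit record
    of its input and output to a JSON-lines log file.

    model_fn is a hypothetical stand-in for any model call; the record
    schema below is illustrative, not a regulatory standard.
    """
    record = {
        "id": str(uuid.uuid4()),      # unique identifier for this call
        "timestamp": time.time(),     # when the call was made
        "input": prompt,              # what was sent to the model
    }
    output = model_fn(prompt)
    record["output"] = output         # what the model returned
    # Append one JSON object per line so logs are easy to scan and audit.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

In practice a team would extend the record with the fields its auditors require (model version, user identity, abuse-case flags), but the core idea is the same: every input/output pair leaves a durable, timestamped trace.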

RNZ News
9 hours ago
- RNZ News
Why Mark Zuckerberg is offering millions to work for him
Mark Zuckerberg, the man behind Facebook and Instagram, is on the hunt for new staff, and his approach is wild. In the race to snap up AI talent, Zuckerberg is reportedly going on house visits and dangling big incentives, in some cases offering people a signing bonus of US$100 million.


Techday NZ
9 hours ago
- Techday NZ
APAC leaders show lowest AI confidence but see steady ROI gains
Business leaders across the Asia-Pacific (APAC) region display lower levels of confidence in their organisations' ability to use artificial intelligence (AI) effectively compared to counterparts in other global regions, according to the 2025 Celonis AI Process Optimisation Report.

The report, based on a survey of 1,620 global business leaders with a quarter of respondents from APAC and 10% from Australia, highlights that just 82% of APAC executives feel confident in their ability to leverage AI to drive business value. This figure represents the lowest confidence level among the surveyed regions, trailing the United States, where 92% express confidence.

Despite this confidence gap, AI adoption in APAC is making measurable progress. Seventy percent of business leaders in the region indicate that their current AI investments are delivering the return on investment expected, a result only slightly behind the US (82%) and ahead of Europe (69%).

"The data shows that APAC is not lagging in capability, but more so in confidence," said Pascal Coubard, APAC Lead at Celonis. "AI has no borders. Businesses in Australia and across the region shouldn't see their geography as a disadvantage. The real ROI from AI will come when companies apply it to the operational core of their business, not just on the surface, but across processes like payments, collections, and supply chain execution."

The research found high awareness of generative AI (GenAI) globally, with 81% of business leaders now utilising foundational GenAI models for functions such as developer productivity, knowledge management, and customer service. GenAI-powered chatbots or virtual assistants have been deployed by 61% of surveyed organisations worldwide. The United States leads with 75% adoption, followed by APAC at 63% and Europe at 57%. While these use cases are expanding, most organisations see themselves in the early stages of integrating AI.
Sixty-four percent of leaders surveyed globally believe AI will generate significant return on investment in the coming year, while nearly three quarters (74%) plan to increase their AI budgets. Expectations are rising with this investment, and 73% of companies are aiming to launch department-specific AI use cases.

Process intelligence concerns

Confidence in the potential of AI is moderated by process challenges. More than half (58%) of business leaders surveyed feel that the current state of their processes may restrict the benefits AI can deliver, with nearly a quarter (24%) expressing strong agreement on this point. A large majority (89%) state that effective AI deployment requires a deep understanding of organisational processes, described as "process context".

"Leaders recognise that you can't optimise what you don't understand," said Coubard. "They're increasingly aware that process visibility and intelligence are essential for unlocking the full value of enterprise AI."

In APAC, these concerns are particularly pronounced. Sixty-two percent of APAC leaders express worry over a lack of process understanding potentially limiting AI success. This is higher than the figures reported in Europe (60%) and the US (55%). Ninety-three percent of US leaders say AI requires detailed operational context to reach its potential, the highest proportion of any region surveyed.

Process mining and departmental perspectives

To tackle process visibility challenges, businesses are increasingly turning to process mining technologies. Thirty-nine percent of companies now use process mining, with over half (52%) planning to adopt such tools in the next 12 months. This technology underpins process intelligence initiatives, aiming to provide the context AI systems require.

The report notes that confidence in AI varies across organisational departments.
Process and Operations leaders are the most confident in current AI use cases (76%), closely followed by Finance & Shared Services (75%), IT (73%), and Supply Chain (68%). However, only 61% of Supply Chain leaders believe AI will deliver significant return on investment in the next year, compared to 66% in Process & Operations and in IT.

Year ahead

The survey's findings point to 2025 as a pivotal period for enterprise AI. Business leaders are set to increase investment and seek refined strategies to pair AI with appropriate process context, using technologies such as process intelligence to underwrite these efforts.

"AI isn't just a tech upgrade, it's a new operating model," said Coubard. "But to maximise the ROI of their AI deployments, businesses need AI powered with the process knowledge and business context provided by Celonis Process Intelligence."