
Secure Code Warrior unveils free AI security rules for developers
Secure Code Warrior has released a free set of AI Security Rules designed for use with a variety of AI coding tools, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf. The newly available rulesets are structured to provide security-focused guidance to developers who increasingly use AI to assist with code generation and development.
Secure Code Warrior's ongoing goal is to enable developers to produce more secure code from the outset when leveraging AI, in line with broader efforts to embed security awareness and best practices across development workflows. The company emphasises that developers with a strong grounding in security can produce markedly safer, higher-quality code with AI assistance than those without it.
Security within workflow
"These guardrails add a meaningful layer of defence, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much," said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. "We've kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning."
The AI Security Rules offer what the company describes as a pragmatic and lightweight baseline that can be adopted by any developer or organisation, regardless of whether they are a Secure Code Warrior customer. The rules are presented in a way that reduces reliance on language- or framework-specific advice, allowing broad applicability.
Features and flexibility
The rulesets function as secure defaults, steering AI tools away from hazardous coding patterns and well-known security pitfalls such as unsafe use of eval, insecure authentication methods, or database queries built without parameterisation. The rules are grouped by development domain, including web frontend, backend, and mobile, so that developers in varied environments can benefit. They are designed to be adaptable and can be used with any AI coding tool that supports external rule files.
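To make the intent concrete, the short sketch below illustrates the kind of contrast such secure-default rules aim to encode: a query built by string concatenation versus a parameterised one, and eval versus a literal-only parser. This is an illustrative example written for this article, not an excerpt from Secure Code Warrior's published rulesets, and the function names are hypothetical.

```python
# Illustrative only: the sort of pattern security rules steer AI-generated code
# away from, and the safer default they encourage instead.
import ast
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Anti-pattern: SQL assembled from user input is open to injection.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Preferred: a parameterised query keeps user input out of the SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()

def parse_config_value(raw: str):
    # Preferred: ast.literal_eval accepts only Python literals, unlike eval(),
    # which would execute arbitrary expressions embedded in the input.
    return ast.literal_eval(raw)
```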
Another feature highlighted is the public availability and ease of adjustment, meaning development teams of any size or configuration can tailor the rules to their workflow, technology stack, or project requirements. This is intended to foster consistency and collaboration within and between development teams when reviewing or generating AI-assisted code.
Supplementary content
The introduction of the AI Security Rules follows several recent releases from Secure Code Warrior centred around artificial intelligence and large language model (LLM) security. These include four new courses—such as "Coding With AI" and "OWASP Top 10 for LLMs"—along with six interactive walkthrough missions, upwards of 40 new AI Challenges, and an expanded set of guidelines and video content. All resources are available on-demand within the Secure Code Warrior platform.
This rollout represents the initial phase of a broader initiative to provide ongoing training and up-to-date resources supporting secure development as AI technologies continue to be integrated into software engineering practices. The company states that additional related content is already in development and is expected to be released in the near future.
Secure Code Warrior's efforts align with increasing industry focus on the intersection of AI and cybersecurity, as the adoption of AI coding assistants becomes widespread. The emphasis on clear, practical security rules is intended to help mitigate common vulnerabilities that can be introduced through both manual and AI-assisted programming.
The AI Security Rules are publicly available on GitHub for any developers or organisations wishing to incorporate the guidance into their existing development operations using compatible AI tools.
Related Articles

RNZ News
Book Critic: The implications of AI on writing
Pip Adam has been reading and thinking a lot about how AI could affect writing. She shares books that offer insights into AI and that show how important human writers are:
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI - Karen Hao
Blame it on the Rain (no more poetry 2025) - Hana Pera Aoake
Show you're working out - Liz Breslin
Hana Pera Aoake. Photo: Supplied

1News
Dissolving the nuclear taboo would benefit NZ hugely – but do we have the guts?
OPINION: The AI future will require unprecedented amounts of power, and embracing nuclear energy is an obvious, clean solution that could boost the NZ economy for generations. But do we have a leader with the courage? By Thomas Scrimgeour

Artificial Intelligence is transforming our world, though not in the way most people imagine. While the knowledge industry revolution is still around the corner, the warehouse-sized computers driving this innovation can't be built fast enough. Data centres already consume roughly 2% of global electricity, which is more than 10 times New Zealand's annual generation, and this figure is projected to double by 2026.

The COL4 AI-ready data centre on a seven-acre campus in Columbus, Ohio. COL4 spans 256,000 square feet with 50 MW of power across three data halls. (Source: Getty)

Elon Musk's xAI recently built the world's largest supercomputer, 'Colossus', in 122 days. They then doubled its size in just 92. It now requires the power of a small city to operate, and xAI is turning to non-renewables to supply this insatiable need. Around the world, AI's energy demands are rising faster than clean energy capacity can keep up. In Northern Virginia, a major data centre hub, AI-driven power use is expected to triple by 2029, while clean energy capacity will only double.

An opportunity for New Zealand

This problem is our opportunity. Countries able to deliver clean, reliable, and affordable energy will be best placed to attract billions in data centre investment. The good news is that we already have a distinct competitive advantage. Nearly 90% of New Zealand's electricity generation is renewable, our temperate climate lowers cooling costs, and we're politically stable with strong privacy protections. The sales pitch writes itself.

Global hyperscalers have already noticed. Microsoft has invested $1 billion in New Zealand data centres, and Amazon Web Services plans to spend $7.5 billion on their new data centre in northwest Auckland. But here's the hitch: we might have what the world wants, but we don't have enough of it.

Manapōuri hydro power station. (Source: Meridian)

Hydroelectricity is great, but we're not about to dam another river. Wind and solar are neat, but in midwinter they contribute very little. When renewables fall short, coal and gas fire up, bringing last winter's power price headlines back to haunt us. We need more generation and innovation.

The big four power companies, known as gentailers, both produce and sell electricity to consumers. They aren't investing enough in new generation, and critics argue the market incentives aren't there to expand capacity. Paul Fuge from Consumer NZ site Powerswitch puts it bluntly: 'the results we're seeing aren't what you'd expect from a thriving competitive market.' Market reform could help. But the real opportunity lies in increasing power production.

Conventional geothermal is our best near-term lever. It already supplies nearly 20% of our electricity and operates 24/7, unlike weather-dependent renewable energy. The best estimates suggest that we have enough active geothermal zones to double our output. I believe supercritical geothermal is the natural next step. It involves drilling five kilometres into the Earth's crust to unlock ten times the power of conventional geothermal.

However, supercritical geothermal still faces significant technological hurdles. Commercialisation isn't expected until the late 2030s, and it's unclear how quickly it could scale.

So we need a second pillar of clean energy generation, which brings us to the last swear word in New Zealand politics. Nuclear. (I can already smell the uranium.)

David Lange at the Oxford Union debate on nuclear weapons, 1985, where the then prime minister quipped to his American opponent that he could 'smell the uranium' on his breath. (Source: TVNZ)

Can we turn around the taboo?

It's only a strange quirk of history that nuclear power is controversial in New Zealand. It got bundled together with the protest backlash of the 1980s, and we've never quite moved on. The 1978 New Zealand Royal Commission on Nuclear Power was expecting a 'significant nuclear power program in the early part of next century.' Better late than never, I suppose.

Although traditional nuclear power is brilliant, high upfront costs and a long build time put it in the too-hard basket, especially given New Zealand's basic revulsion. But nuclear technology is rapidly evolving. Small Modular Reactors (SMRs), one-tenth the size of conventional plants, are on the horizon and could be installed in a fraction of the time. A bold government could break the nuclear taboo in a single term.

The Americans want SMRs by the end of the decade. One company, NuScale, already has regulatory approval. Canada will build four 300-megawatt reactors by the mid-2030s, and Japan is reversing plans to decommission its nuclear power plants. All we need is a leader with the courage to take the first step. A feasibility study to work out the who, when, and where of SMRs could be started today. The first politician to raise the issue will take some heat, but Kiwi voters will reward conviction and enjoy the benefits for generations.

Energy abundance is the foundation of every productive economy, and the only road to lasting prosperity. We have the chance to do two big, good things: create a data centre industry for New Zealand and generate enough power to bring down costs for everyday Kiwis. But opportunities like this don't wait around. The time to act is now.

Thomas Scrimgeour is a researcher at the Maxim Institute, an independent think tank based in Auckland.


Techday NZ
AI adoption boosts productivity across New Zealand businesses
Research from Datacom's latest State of AI Index has found 88 percent of New Zealand organisations using artificial intelligence report a positive impact on their operations. The survey, which covered 200 senior business leaders, indicated that 87 percent of New Zealand businesses now use some form of AI in their operations, compared to 66 percent in 2024 and 48 percent in 2023. Among larger organisations with over 200 employees, AI usage was found to be even higher at 92 percent.

Productivity gains reported

Productivity improvements were cited as the most common benefit of AI adoption. According to the study, 89 percent of AI users reported 'productivity gains'. Digging deeper, 20 percent of organisations said they achieved significant productivity gains - defined as 25 percent or more time saved or increased output - while a further 28 percent saw moderate gains between 10 and 25 percent, and 35 percent reported minor improvements. Other reported benefits included enhanced decision-making and insights (42 percent), cost reduction (30 percent), staff enablement and retention (29 percent), and improved customer experience (26 percent).

The most widespread applications of AI in New Zealand organisations are automation of repetitive tasks (68 percent), data analytics and reporting (54 percent), workflow optimisation (51 percent), and customer or employee experience enhancement (32 percent). Sixteen percent of organisations surveyed said they are using AI to transform a core aspect of their operations or services.

"The business case for AI is increasingly clear and it is encouraging to see New Zealand organisations capitalising on the benefits AI offers," says Datacom New Zealand MD Justin Gray.

Gray noted a shift in focus among organisations towards long-term preparedness, stating, "We're also seeing organisations starting to think in a more long-term way about AI, so they are having conversations with our team about data readiness, whether they have the right cloud environment to manage the increasing data demands, and about the interfaces between their existing applications and AI."

Within Datacom itself, Gray reported that more than 90 internal AI productivity tools have been integrated. The company has also restructured its digital engineering services to deploy a hybrid workforce of AI agents and human software engineers. This hybrid approach, he said, has enabled legacy systems to be rebuilt within shorter timelines and delivered cost savings for customers ranging from 30 to 50 percent.

Challenges in scaling AI

Despite rising adoption rates, the research highlights challenges in scaling AI implementations beyond pilots and departmental use. While a third of respondents have deployed AI at the departmental level, only 12 percent have managed to scale it across their entire organisation. Eight percent reported using AI to transform core operations, and nearly half (46 percent) are still in an exploratory phase, using pilot projects to assess AI's potential.

Datacom Director of AI Lou Compagnone commented on the pace of change, stating, "We have seen significant progress in the past year, with some organisations moving from experimenting with genAI to rolling out agentic solutions in the space of 12 months." He said many organisations face challenges moving from pilots to large-scale deployment. "There is a difference between being able to pilot AI and scale it successfully across your organisation. Creating a proof of concept with today's consumer AI tools is relatively straightforward, but productionising these solutions reveals critical challenges around data readiness, system integration, security and long-term maintainability."

Compagnone suggested effective scaling should focus on developing operational capability for AI: "That might look like setting up an 'AI Centre of Enablement' or an AI council that has cross-functional representation across the organisation, so they have visibility and coordination over their AI initiatives." He added, "Success in AI implementation requires having a clear vision for the role AI will play in achieving business objectives, backed by a comprehensive AI strategy with clearly defined initiatives. This strategy should address key pillars such as optimisation of business functions, AI technical foundations, data governance, and talent development."

"Organisations that move beyond experimental projects to establish these strategic frameworks are the ones that will truly transform their operations with AI. Rather than isolated use cases, they create an ecosystem where AI solutions can be developed, deployed and managed at scale, with appropriate governance and measurable business outcomes."

Barriers and concerns

Barriers to broader AI adoption include lack of internal capability or skills (32 percent), issues with data quality or integration (22 percent), and uncertainty over governance or regulation (16 percent). Respondents also cited staff resistance and a lack of internal buy-in as obstacles.

Despite the increasing use of AI, skills training appears limited, with 46 percent of organisations providing AI training in the past six months and 10 percent in the past year. Another 28 percent are planning to provide training. Fifty-five percent of organisations indicated they want best-practice frameworks from the industry, while 40 percent are seeking external training support. Internally, 55 percent have an AI policy, but only 29 percent have formal ethics or safety guidelines in place.

Risks around AI are a notable concern, with 52 percent of leaders identifying "shadow AI" - the use of unapproved tools - as a problem. Other concerns included uncertainty about the implications of AI (80 percent) and loss of control (57 percent).