
Developers adopt AI tools but trust issues persist, survey finds
The AI trust gap
Stack Overflow's annual Developer Survey indicates a significant disparity between AI adoption and trust among developers. While 84% of respondents use or plan to use artificial intelligence tools in their workflow, nearly half (46%) say they do not trust the accuracy of AI-generated output, a substantial rise from the 31% who reported a lack of trust the previous year.
This year's expanded artificial intelligence section included 15 new questions, addressing topics such as the utility of AI agent tools, the impact of AI on developers' jobs, and the phenomenon of "vibe coding".
"The growing lack of trust in AI tools stood out to us as the key data point in this year's survey, especially given the increased pace of growth and adoption of these AI tools. AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance. With the use of AI now ubiquitous and 'AI slop' rapidly replacing the content we see online, an approach that leans heavily on trustworthy, responsible use of data from curated knowledge bases is critical. By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value to build the AI technologies and products of tomorrow," said Prashanth Chandrasekar, CEO of Stack Overflow.
The survey also found that 75% of respondents would still turn to another person when they do not trust an AI-generated answer, and 45% find debugging AI-generated code time-consuming. Ethical and security concerns are prevalent, with 61.7% citing these as reasons for hesitancy, while 61.3% wish to maintain a full understanding of their code.
AI use and productivity
Despite low overall adoption, AI agents are associated with productivity improvements. Only 31% of developers currently use AI agents, but among those, 69% report increased workplace productivity. Meanwhile, 17% are planning to adopt such tools, while 38% are not planning to use them at all.
A majority (64%) of developers do not see AI as a threat to their employment, though this figure has declined slightly from the previous year's 68%.
Platforms and tools
Visual Studio Code and Visual Studio remain the most used Integrated Development Environments (IDEs). New AI-enabled IDEs have entered the market, with Cursor at an 18% usage rate, Claude Code at 10%, and Windsurf at 5% among respondents.
Among large language models (LLMs), OpenAI's GPT series is the most popular, used by 81% of developers surveyed, followed by Claude Sonnet at 43% and Gemini Flash at 35%.
Vibe coding and new ways of learning
'Vibe coding', defined as generating software from LLM prompts, was explored for the first time. While AI tools are being adopted for learning and development, nearly 77% of developers indicated that vibe coding is not part of their professional workflow. The trend is more relevant for less experienced developers seeking a rapid start, but it comes at the cost of lower trust and confidence in the output.
Community platforms continue to play an important role. Stack Overflow is the most common platform, used or planned to be used by 84% of respondents, followed by GitHub at 67%, and YouTube at 61%. Notably, 35% of respondents reported consulting Stack Overflow when confronted with AI-related issues.
The survey shows that 69% of developers have learned a new technology or programming language in the past year, with 36% focusing specifically on AI-enabled tools. Usage of AI tools for learning to code has risen to 44%, up from 37% last year. Top resources remain technical documentation (68%), online resources (59%), and Stack Overflow (51%). For those learning AI-specific skills, 53% used AI tools.
Gen Z developers (aged 18-24) are more likely to engage with coding challenges, with 15% participating compared to an overall average of 12%. Additionally, a higher proportion of this age group prefers chat-based and challenge-based learning approaches than other cohorts.
International responses and technology adoption
The United States, Germany, India, the United Kingdom, France, Canada, Ukraine, Poland, the Netherlands, and Italy were the top ten countries by survey participation. Trust in AI tools differs by region: India saw the highest proportion of developers expressing some or significant trust in AI at 56%, followed by Ukraine at 41%. Other countries showed lower levels of trust, including Italy (31%), the Netherlands and the United States (28%), Poland (26%), Canada and France (25%), the United Kingdom (23%), and Germany (22%).
Python continues to gain in popularity, with a seven percentage point increase since 2024. JavaScript (66%), HTML/CSS (62%), and SQL (59%) remain popular programming languages. Docker usage grew by 17 percentage points to 71%, marking it as a widely adopted tool in cloud and infrastructure development.
PostgreSQL remains the most sought-after database technology, with 47% of respondents using it or planning to use it in the next year, marking its third consecutive year at the top of this category. For documentation and collaboration, GitHub leads at 81%, followed by Jira (46%) and GitLab (36%).