
Extend unveils open-source AI toolkit for smarter finance
The toolkit supports multiple frameworks, including Anthropic's Model Context Protocol (MCP) and OpenAI, offers native integration with LangChain, and is compatible with CrewAI to facilitate complex multi-agent workflows. The company states that this versatility allows businesses to incorporate Extend's API seamlessly into their existing AI-driven systems, enabling more advanced spend analysis and automated finance processes.
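For illustration, here is a minimal sketch of one way such an integration could look in practice: wrapping a spend-lookup call as a LangChain tool that an agent can invoke. The base URL, route, authentication header, and response fields below are assumptions made for the example, not Extend's documented API; the toolkit itself presumably ships ready-made tools for this.

# A minimal sketch of exposing an Extend-style spend lookup to a LangChain agent.
# The base URL, route, auth header, and response fields are illustrative assumptions,
# not Extend's documented API.
import os
import requests
from langchain_core.tools import tool

EXTEND_API_BASE = "https://api.example-extend.com"  # placeholder, not a real endpoint

@tool
def get_card_spend(card_id: str) -> str:
    """Summarise recent spend on a virtual card."""
    resp = requests.get(
        f"{EXTEND_API_BASE}/v1/cards/{card_id}/transactions",  # hypothetical route
        headers={"Authorization": f"Bearer {os.environ['EXTEND_API_KEY']}"},
        timeout=10,
    )
    resp.raise_for_status()
    txns = resp.json().get("transactions", [])
    total_cents = sum(t.get("amount", 0) for t in txns)
    return f"{len(txns)} transactions totalling {total_cents / 100:.2f} USD"

# The decorated function is a standard LangChain tool and can be handed to an agent
# alongside other tools; MCP or CrewAI integrations would expose the same call
# through their own tool interfaces.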
The toolkit is designed to offer businesses flexibility, allowing them to interact with Extend while continuing to use their preferred banks or credit cards. It is intended to help organisations adopt AI solutions tailored to their needs, supporting functions such as intelligent financial queries, custom reporting, and workflow automation.
Jonathan Bailey, Extend's Chief Technology Officer, commented on the motivation behind the toolkit: "When I started to explore the multitude of use cases for AI in our industry, I zeroed in on the power of 'agentic frameworks', and realised we could enable tools like Claude to interact directly with Extend via our APIs and immediately unlock extensive AI functionality for our customers."
Through these framework integrations, users can query financial data using natural language, conduct advanced analytics, and generate custom reports. Automation powered by AI agents can manage tasks such as expense categorisation and budget tracking. Businesses can also analyse spending patterns, identify cost-saving opportunities, and gain greater insight into areas such as cash flow, team spending, and overall budget allocations.
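As a rough sketch of what AI-assisted expense categorisation might look like on top of transaction data like this (the category list, transaction format, and model choice are illustrative assumptions rather than Extend's schema or tooling):

# Illustrative sketch: letting an LLM assign a category to each transaction.
# The category list, transaction format, and model name are assumptions for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["Travel", "Software", "Meals", "Office supplies", "Other"]

def categorise(description: str, amount_usd: float) -> str:
    """Ask the model for exactly one category for a single transaction."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the expense into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the category name only."},
            {"role": "user", "content": f"{description} (${amount_usd:.2f})"},
        ],
    )
    return completion.choices[0].message.content.strip()

print(categorise("Uber to client meeting", 24.50))  # e.g. "Travel"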
Andrew Jamison, Extend's Chief Executive Officer and co-founder, explained the broader company strategy: "At Extend, we believe in empowering businesses to do more with what they already have - whether that's credit lines, banking relationships, or software investments. With this AI toolkit, we're taking that mission to the next level, giving our customers the tools they need to make smarter, faster, and more informed decisions."
Extend indicated that development efforts will continue to focus on expanding AI automation features within its platform, in response to increasing demand from companies seeking more streamlined financial management solutions.
Extend is a modern spend and expense management platform that helps businesses gain control over spending - without changing their existing bank or credit card programs. Thousands of companies use Extend to create and manage virtual cards, streamline payment workflows, and get real-time visibility into team and vendor spend. According to the company, Extend powers billions of dollars in transactions while partnering with the financial institutions businesses already trust.