Latest news with #AIPrinciples


Techday NZ
08-07-2025
- Business
- Techday NZ
New Zealand government unveils national strategy for AI adoption
The New Zealand government has released its first national artificial intelligence (AI) strategy, setting out a roadmap to foster innovation and productivity while ensuring responsible and safe AI development. The strategy, titled "New Zealand's Strategy for Artificial Intelligence: Investing with Confidence," aims to provide clarity and confidence for businesses looking to integrate AI into their operations and signals a step-change in the country's approach to emerging technologies.

A response to international trends and local needs
Until now, New Zealand had been the only country in the OECD without a formal AI strategy, despite rapidly advancing AI uptake internationally. The new strategy is designed to help New Zealand catch up with other small, advanced economies and to address a growing gap in AI readiness among local organisations. Government analysis has found that most New Zealand businesses are still in the early stages of adopting AI, with many lacking a clear plan for integrating the technology. The government's announcement highlights the potential for AI to drive significant economic growth, with estimates suggesting the technology could add up to NZ$76 billion to the country's GDP by 2038. Recognising both the opportunities and the challenges, the strategy focuses on building local capability, promoting innovation, and managing risks responsibly.

Light-touch regulation and clear guidance
A key feature of the strategy is its commitment to light-touch regulation. The government has opted not to introduce new AI-specific laws at this stage, instead relying on existing legal frameworks around privacy, consumer protection, and human rights. The aim is to reduce barriers to AI adoption and provide clear regulatory guidance that enables innovation, while still protecting New Zealanders' rights and building public trust. To support safe and responsible use of AI, the government has released voluntary Responsible AI Guidance alongside the strategy. This guidance is intended to help organisations use, develop, and innovate with AI in a way that is ethical, transparent, and aligned with international best practice. The approach is informed by the OECD's AI Principles and emphasises transparency, accountability, and the importance of maintaining public confidence. The strategy also commits to monitoring international developments closely and participating in global efforts to develop consistent approaches to the governance of AI.

Focus on business adoption and workforce skills
Unlike some strategies that prioritise AI research and development, New Zealand's approach is focused primarily on enabling the adoption and application of AI by businesses. The government sees the greatest opportunity in supporting local firms to take up AI technologies, adapt them to New Zealand's needs, and use them to create value in key sectors such as agriculture, healthcare, and education. The strategy acknowledges several barriers to wider AI adoption, including a shortage of skilled workers, a lack of understanding about how to deploy AI effectively, and uncertainty about the regulatory environment. To address these issues, the government is supporting a range of initiatives to build AI skills within the workforce and is providing advice and support to help businesses prepare for and benefit from AI. The strategy is designed to give businesses the confidence to invest in AI, with the government promising to identify and remove any legal or practical obstacles that may hinder innovation. There is also a commitment to ensuring that AI is developed and used in a way that is inclusive and reflects the needs of Māori and other communities.

Supporting both public and private sector innovation
While the new strategy is primarily focused on the private sector, the government has signalled its intention to lead by example in the public sector as well. A separate stream of work, led by the Minister for Digitising Government, is underway to explore how AI can improve public services and support digital transformation across government agencies. By taking a coordinated and enabling approach, the government hopes to position New Zealand as a leader among smaller advanced economies in the responsible adoption of AI. The strategy sets out clear expectations for businesses and government agencies alike, encouraging investment in AI technologies that drive productivity, deliver better services, and help New Zealand compete on the global stage.

Next steps
The government will continue to monitor the rollout of the AI strategy and engage with industry, academia, and communities to ensure that New Zealand's approach remains responsive to technological and social change. The Responsible AI Guidance will be updated as required, and officials will keep a close watch on international developments to ensure New Zealand's regulatory environment remains fit for purpose. With this announcement, New Zealand signals its intention to embrace the opportunities of artificial intelligence, with a focus on responsible, inclusive, and innovation-driven adoption for the benefit of all New Zealanders.


Techday NZ
26-06-2025
- Health
- Techday NZ
Elsevier unveils Embase AI to transform biomedical data research
Elsevier has launched Embase AI, a generative artificial intelligence tool aimed at changing how researchers and medical professionals access and analyse biomedical data. The tool has been developed in collaboration with the scientific community and is built upon Elsevier's Embase platform, a widely used biomedical literature database. According to feedback from beta users, Embase AI can reduce the time spent on reviewing biomedical data by as much as 50%.

Natural language features
Among its central features, Embase AI allows users to conduct searches in natural language, ranging from basic to complex scientific queries. The system then provides instant summaries of the relevant data and research insights. Each answer comes with a list of linked citations to assist users in evaluating the evidence and meeting expectations around medical regulation. Unlike some other AI solutions that may obscure data provenance, Embase AI delivers transparency by presenting citations and ensuring that the underlying sources can be cross-checked. The database underpinning Embase AI is updated continuously and includes records such as adverse drug reaction reports, peer-reviewed journal articles, and around 500,000 clinical trial listings. This makes it suitable for a range of professional needs, including medical research, pharmacovigilance, regulatory submissions and the generation of market insights.

Expanded access
By enabling natural language querying, Embase AI seeks to open up biomedical data analysis to a broader group of users, including those who may lack advanced technical experience with literature reviews. Information is summarised for swift consumption while retaining the supporting references, limiting the likelihood that important findings go overlooked. The AI solution uses a dual-stage ranking system to generate summary responses with inline citations. This approach is designed to ensure transparency and help users trust the results. A human-curated hierarchy of medical concepts and their synonyms underpins the system, contributing to the precision and transparency of its outputs. Embase AI's records are updated daily, and its architecture allows the tool to function in real time, searching the platform's full content, including peer-reviewed research, clinical trials, preprints and conference abstracts.

Security and privacy
Elsevier has stated that Embase AI was developed in accordance with its Responsible AI Principles and Privacy Principles to ensure robust data privacy and security. The company notes that the model's use of third-party large language models (LLMs) is private, with no user information being stored or employed to train public versions of these models. All data is retained solely within Elsevier's protected environment.

"Embase AI is changing the way researchers and other users go about solving problems and helps them save valuable time searching for answers, digesting information, and avoiding the risk of missing valuable insights. Every user should have access to trusted research tools that help them advance human progress, and we remain committed to working in partnership with scientists across academia, life sciences and other innovative industries to ensure that our solutions address their needs. We know that our users seek solutions that they can trust, and we built Embase AI in a way that ensures transparency, explainability and accuracy," said Mirit Eldor, Managing Director, Life Sciences at Elsevier.
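The dual-stage ranking with inline citations described above can be pictured with a small sketch. The Python example below is purely illustrative and assumes nothing about Elsevier's actual implementation: the corpus, record IDs, and lexical scoring are invented, and it only preserves the two-stage shape (broad first-stage retrieval, finer second-stage rerank) and the cited-summary output.

```python
# Hypothetical sketch of a dual-stage ranking pipeline that returns a
# summary with inline citations. Not Elsevier's implementation; all data
# and scoring heuristics here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Record:
    doc_id: str
    title: str
    text: str


def stage_one_retrieve(query: str, corpus: list[Record], k: int = 10) -> list[Record]:
    """First stage: cheap lexical recall over the whole corpus (term overlap)."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(r.text.lower().split())), r) for r in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored[:k] if score > 0]


def stage_two_rerank(query: str, candidates: list[Record], k: int = 3) -> list[Record]:
    """Second stage: finer-grained rerank of the shortlist.

    A toy heuristic (overlap weighted by brevity) stands in for the learned
    reranker and curated concept hierarchy a production system would use.
    """
    terms = set(query.lower().split())

    def score(r: Record) -> float:
        overlap = len(terms & set(r.text.lower().split()))
        return overlap / (1 + len(r.text.split()) / 100)

    return sorted(candidates, key=score, reverse=True)[:k]


def summarise_with_citations(query: str, corpus: list[Record]) -> str:
    """Compose an answer whose lines carry inline [doc_id] citations."""
    top = stage_two_rerank(query, stage_one_retrieve(query, corpus))
    lines = [f"Summary for: {query!r}"]
    for r in top:
        first_sentence = r.text.split(".")[0].strip()
        lines.append(f"- {first_sentence}. [{r.doc_id}]")
    return "\n".join(lines)


if __name__ == "__main__":
    corpus = [
        Record("EMB-001", "Drug X trial", "Drug X reduced symptoms in a phase 3 trial. Adverse events were mild."),
        Record("EMB-002", "Drug X safety", "Adverse drug reactions to Drug X were rare in post-market surveillance."),
        Record("EMB-003", "Unrelated", "A study of crop yields under drought conditions."),
    ]
    print(summarise_with_citations("adverse reactions to Drug X", corpus))
```

In a production system the first stage would typically be an index lookup and the second a learned reranker informed by the curated concept hierarchy the article mentions; the sketch keeps only the two-stage structure and the inline-citation output.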
Ongoing development
Embase AI is the latest addition to Elsevier's suite of products aimed at supporting the biomedical research community by facilitating discovery, analysis, and evidence synthesis using responsibly developed AI tools underpinned by trusted content. The platform is designed to meet the needs of professionals in roles such as research and development, medical affairs, academic research, knowledge management, and medical education.


Business Journals
02-05-2025
- Health
- Business Journals
Artificial intelligence and virtual care: Transforming healthcare delivery
Artificial Intelligence (AI) has the power to improve patient outcomes while driving down costs, and emerging AI systems have already changed doctor-patient interaction by making virtual visits and remote care significantly more convenient. But adoption of every new technology requires adherence to regulation and a measured, thoughtful approach to ensure that it can deliver what it promises. At Intersect 2025, three leading minds in healthcare, information, and legal best practices sat down to discuss their personal views of the challenges of AI adoption and the regulatory landscape facing AI-enabled solutions.

Daniel Cody is a Health Care and Life Sciences Member at leading law firm Mintz, and he spoke about the pressing need for AI-driven solutions in medical care, saying, 'Hospitals are stressed, especially with ongoing threats to Medicaid and other programs. So, the twin goals of improving outcomes and reducing costs are universal.' Cody went on to list key ways that AI is already improving the experience of providers and patients. 'Remote monitoring devices are more advanced, with AI capabilities. It's not just about helping folks with diabetes and chronic disease track their conditions but being predictive and giving information to their PCPs on a 24/7 basis. AI tools are also fantastic for helping radiologists evaluate images so they can diagnose and start treatment earlier.'

The tools we now call AI have actually been in use for years, giving organizations a long runway to find the ideal approach. 'Five years ago, AI was called clinical decision support,' says Adnan E. Hamid, Regional Vice President and Chief Information Officer at CommonSpirit Health. 'As one of the larger Catholic healthcare systems in the nation, CommonSpirit makes sure that when we select technology, it's human centric and mission centric. The goal is to not replace but augment the human interaction between the clinician and patient.'

To reach this goal, medical organizations must navigate an ever-evolving field of regulations. 'We have a systemwide UC AI Council and similar oversight committees, and a chief AI officer at each medical center. The UC AI Council sponsored the development of the UC Responsible AI Principles, and a publicly-available model risk assessment guide with procurement process questions built in. We offer an AI primer, and many of our education webinars are open to the public. Twenty UC policies connect to UC AI guidance, considering the many privacy and security requirements on the campus and health side,' says Noelle Vidal, Healthcare Compliance and Privacy Officer for the Office of the President of the University of California.

Regulations such as HIPAA are all-important when considering whether to use an AI tool, especially since the better-known apps add user data to their own algorithms. 'When ChatGPT was released, our providers were interested in the power of generative AI,' Hamid says. 'But if you enter patient information, it's no longer private but resides as part of the tool. To ensure nobody was accessing ChatGPT from our systems, we accelerated efforts to produce our own internal generative AI tool using Google Gemini on the back end. Data and security are our IT cornerstones.'

AI adds a new layer to assess. As Vidal says, 'A thorough assessment can take a while. Whenever we get a request, it goes through a multi-team scan for privacy, security, and other UC requirements, including the new AI assessment questions. An AI tool's use of data continues to evolve and change how impactful the tool will be. Every change in the technology could contradict what we negotiated in an earlier contract with the same vendor. We've got different teams to rank the risk of using a tool. I know it frustrates our stakeholders who want to get every innovation in quickly, but we try to balance adoption with risk prevention.'

Ultimately, only the AI applications with the most practical uses are going to clear the vetting and regulatory process to change how practitioners improve the patient experience and the efficacy of healthcare. 'The targeted tools that solve real problems are going to win,' Cody says. 'They're going to ensure security and privacy compliance.' As noted by Hamid, 'the fastest way to get technology approved is to have a really good use case. If you can provide the details of how the tool will solve a problem, then users will complete that process faster. Ultimately, AI adoption is influenced by the structure and mission of the organization.'


Yahoo
24-03-2025
- Business
- Yahoo
6 ways Silicon Valley is getting close with the Pentagon
OpenAI's release of ChatGPT in November 2022 spurred a race to develop advanced generative artificial intelligence models — one that has seen some companies shell out tens of billions of dollars on AI infrastructure. But that's not the only place major spending is happening. Since then, the U.S. government has paid companies $700 million for AI-enabled defense and security, according to an analysis by Fortune in November. Before ChatGPT came out, the Defense Department was already working on more than 685 AI projects, according to C4ISRNET. Tech companies working with U.S. defense and intelligence agencies isn't new — some semiconductor companies worked with the U.S. government from the industry's earliest days, for example. However, some tech companies shifted away from working with the U.S. government as their focus moved toward consumers. Now, some AI companies are getting closer to the federal government — forming partnerships to provide defense and intelligence agencies with AI in the name of national security. Here are just a few ways AI companies are working with the U.S. government.

In 2017, the Pentagon established an AI program called Project Maven for processing drone footage to find potential drone strike targets. Google (GOOGL) was tapped for its AI — a contract that received backlash from thousands of its employees. 'Building this technology to assist the U.S. Government in military surveillance — and potentially lethal outcomes — is not acceptable,' Google employees said in a letter to Alphabet chief executive Sundar Pichai. Despite not renewing its Project Maven contract, Google has pursued other partnerships with the U.S. government. In February, the company updated its AI Principles to remove a pledge to 'not pursue' AI that could be used for applications such as 'weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people' and 'technologies that gather or use information for surveillance violating internationally accepted norms.'

Data annotation startup Scale AI announced in March that it had won a Defense Department contract for a project called Thunderforge. The program aims to integrate AI into U.S. military planning and operations, and is the department's 'first foray into integrating AI agents in and across military workflows to provide advanced decision-making support systems for military leaders,' Scale said. The startup added that Anduril and Microsoft (MSFT) will initially develop and deploy the AI agents — 'always under human oversight' — for the Indo-Pacific Command (INDOPACOM) and European Command (EUCOM). Anduril, which develops autonomous systems used by the military, will integrate the startup's large language models into its modeling and simulation infrastructure for planning, while Microsoft will provide multimodal models.

Data analytics platform Palantir (PLTR) announced that it was delivering 'AI-defined vehicles' to the U.S. Army in March. The AI-enabled TITAN vehicles are part of a $178 million contract the company signed with the U.S. Army in 2024. The TITAN system, which stands for Tactical Intelligence Targeting Access Node, has deep-sensing capabilities and 'seeks to enhance the automation of target recognition and geolocation from multiple sensors to reduce the sensor-to-shooter (S2S) timelines through target nominations and fuse the common intelligence picture,' according to Palantir. TITAN was developed with partners including Northrop Grumman (NOC) and Anduril.
AI startup Anthropic and Palantir announced a partnership with Amazon Web Services (AMZN) in November to provide the startup's Claude AI models to U.S. intelligence and defense agencies. Anthropic's Claude 3 and 3.5 families of AI models will be accessible through Palantir's AI Platform, while AWS will provide security and other benefits. 'The partnership facilitates the responsible application of AI, enabling the use of Claude within Palantir's products to support government operations such as processing vast amounts of complex data rapidly, elevating data driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities,' Palantir said in a statement.

OpenAI launched a version of its chatbot called ChatGPT Gov in January to give U.S. government agencies another way to access its frontier AI models. Through ChatGPT Gov, U.S. agencies can save and share conversations within their workspaces, use the flagship GPT-4o model, and build custom GPTs for use in government workspaces. OpenAI said the infrastructure would 'expedite internal authorization of OpenAI's tools for the handling of non-public sensitive data.' 'We believe the U.S. government's adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America's global leadership in this technology,' the startup said.

Palantir and Anduril announced a 'consortium' in December to combine technologies to provide the Defense Department with AI infrastructure, such as Anduril's Lattice software system and Palantir's AI Platform. 'This partnership is focused on solving two main problems that limit the adoption of AI for national security purposes,' the companies said in a statement — data readiness and processing large amounts of data. Both companies have been awarded large contracts with the Defense Department, including Palantir's $480 million deal with the U.S. Army for a prototype of its Maven Smart System in May, and Anduril's involvement with the Replicator initiative of thousands of drones and anti-drone systems in places such as the Indo-Pacific.


NBC News
14-02-2025
- Business
- NBC News
Google AI chief tells employees company has 'all the ingredients' to hold AI lead over China's DeepSeek
Google's AI chief told employees that he's not worried about China's DeepSeek and said the search giant has superior artificial intelligence technology, according to audio of an all-hands meeting in Paris on Wednesday.

At the meeting, Alphabet CEO Sundar Pichai read aloud a question about DeepSeek, the Chinese start-up lab that roiled U.S. markets recently when its app shot to the top of Apple's App Store, supplanting ChatGPT. DeepSeek released a research paper last month claiming its AI model was trained at a fraction of the cost of other leading models. The question, which was an AI summary of submissions from employees, asked 'what lessons and implications' Google can glean from DeepSeek's success as the company trains future models. Google DeepMind CEO Demis Hassabis was called on to provide the answer.

'When you look into the details,' Hassabis said, some of DeepSeek's claims are 'exaggerated.' Hassabis added that DeepSeek's reported cost of its AI training was likely 'only a tiny fraction' of the total cost of developing its systems. He said DeepSeek probably used a lot more hardware than it let on, and relied on Western AI models. 'We actually have more efficient, more performant models than DeepSeek,' Hassabis said. 'So we're very calm and confident in our strategy and we have all the ingredients to maintain our leadership into this year.' But he admitted that DeepSeek's accomplishments are impressive. 'It's definitely also the best team I think I've seen come out of China, so something to be taken seriously,' Hassabis said, noting that there are 'security' and 'geopolitical' implications. Several U.S. agencies have barred staffers from using DeepSeek, citing security concerns. Google declined to comment. DeepSeek didn't respond to a request for comment.

Google executives also received a number of employee questions about the company's recent decision to change its 'AI Principles' to no longer include a pledge against using AI for weapons or surveillance. Pichai read aloud an AI-summarized version of the questions, ending with 'Why did we remove this section?' Pichai directed the question to Kent Walker, Google's president of global affairs, who said he had worked with Hassabis, James Manyika, a senior vice president at the company, and others on an effort that 'shifted our approach,' starting last year. Google established its AI principles in 2018 after declining to renew a government contract called Project Maven, which helped to analyze and interpret drone videos using AI. 'Some of the strict prohibitions that were in v1 of the AI principles don't jibe well with the more nuanced conversations that we're having now,' Walker said, referring to the rules from 2018. Walker said 'an awful lot has changed in those seven years,' and that the technology has advanced to the point where 'it's used in lots of very nuanced scenarios.'