
Latest news with #AIPrinciples

Artificial intelligence and virtual care: Transforming healthcare delivery

Business Journals

02-05-2025


Artificial Intelligence (AI) has the power to improve patient outcomes while driving down costs, and emerging AI systems have already changed doctor-patient interaction by making virtual visits and remote care significantly more convenient. But adopting any new technology requires adherence to regulation and a measured, thoughtful approach to ensure it delivers what it promises. At Intersect 2025, three leading minds in healthcare, information, and legal best practices sat down to discuss their personal views on the challenges of AI adoption and the regulatory landscape facing AI-enabled solutions.

Daniel Cody, a Health Care and Life Sciences Member at leading law firm Mintz, spoke about the pressing need for AI-driven solutions in medical care: 'Hospitals are stressed, especially with ongoing threats to Medicaid and other programs. So, the twin goals of improving outcomes and reducing costs are universal.' Cody went on to list key ways that AI is already improving the experience of providers and patients. 'Remote monitoring devices are more advanced, with AI capabilities. It's not just about helping folks with diabetes and chronic disease track their conditions but being predictive and giving information to their PCPs on a 24/7 basis. AI tools are also fantastic for helping radiologists evaluate images so they can diagnose and start treatment earlier.'

The tools we now call AI have actually been in use for years, giving organizations a long runway to find the ideal approach. 'Five years ago, AI was called clinical decision support,' says Adnan E. Hamid, Regional Vice President and Chief Information Officer at CommonSpirit Health. 'As one of the larger Catholic healthcare systems in the nation, CommonSpirit makes sure that when we select technology, it's human centric and mission centric. The goal is to not replace but augment the human interaction between the clinician and patient.'

To reach this goal, medical organizations must navigate an ever-evolving field of regulations. 'We have a systemwide UC AI Council and similar oversight committees, and a chief AI officer at each medical center. The UC AI Council sponsored the development of the UC Responsible AI Principles, and a publicly available model risk assessment guide with procurement process questions built in. We offer an AI primer, and many of our education webinars are open to the public. Twenty UC policies connect to UC AI guidance, considering the many privacy and security requirements on the campus and health side,' says Noelle Vidal, Healthcare Compliance and Privacy Officer for the Office of the President of the University of California.

Regulations such as HIPAA are all-important when considering whether to use an AI tool, especially since the better-known apps add user data to their own algorithms. 'When ChatGPT was released, our providers were interested in the power of generative AI,' Hamid says. 'But if you enter patient information, it's no longer private but resides as part of the tool. To ensure nobody was accessing ChatGPT from our systems, we accelerated efforts to produce our own internal generative AI tool using Google Gemini on the back end. Data and security are our IT cornerstones.'

AI adds a new layer to assess. As Vidal says, 'A thorough assessment can take a while. Whenever we get a request, it goes through a multi-team scan for privacy, security, and other UC requirements, including the new AI assessment questions. An AI tool's use of data continues to evolve and change how impactful the tool will be. Every change in the technology could contradict what we negotiated in a prior contract with the same vendor. We've got different teams to rank the risk of using a tool. I know it frustrates our stakeholders who want to get every innovation in quickly, but we try to balance adoption with risk prevention.'

Ultimately, only the AI applications with the most practical uses are going to clear the vetting and regulatory process to change how practitioners improve the patient experience and the efficacy of healthcare. 'The targeted tools that solve real problems are going to win,' Cody says. 'They're going to ensure security and privacy compliance.' As Hamid noted, 'the fastest way to get technology approved is to have a really good use case. If you can provide the details of how the tool will solve a problem, then users will complete that process faster. Ultimately, AI adoption is influenced by the structure and mission of the organization.'

6 ways Silicon Valley is getting close with the Pentagon

Yahoo

24-03-2025


OpenAI's release of ChatGPT in November 2022 spurred a race to develop advanced generative artificial intelligence models — one that has seen some companies shell out tens of billions of dollars on AI infrastructure. But that's not the only place major spending is happening. Since then, the U.S. government has paid companies $700 million for AI-enabled defense and security, according to an analysis by Fortune in November. Before ChatGPT came out, the Defense Department was already working on more than 685 AI projects, according to C4ISRNET.

Tech companies working with U.S. defense and intelligence agencies isn't new — some semiconductor companies worked with the U.S. government from the start, for example. However, some tech companies later moved away from government work as their focus shifted toward consumers. Now, some AI companies are getting closer to the federal government, forming partnerships to provide defense and intelligence agencies with AI in the name of national security. Here are just a few ways AI companies are working with the U.S. government.

In 2017, the Pentagon established an AI program called Project Maven to process drone footage and identify potential drone strike targets. Google (GOOGL) was tapped for its AI — a contract that received backlash from thousands of its employees. 'Building this technology to assist the U.S. Government in military surveillance — and potentially lethal outcomes — is not acceptable,' Google employees said in a letter to Alphabet chief executive Sundar Pichai. Despite not renewing its Project Maven contract, Google has pursued other partnerships with the U.S. government. In February, the company updated its AI Principles to remove a pledge to 'not pursue' AI that could be used for applications such as 'weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people' and 'technologies that gather or use information for surveillance violating internationally accepted norms.'

Data annotation startup Scale AI announced in March that it had won a Defense Department contract for a project called Thunderforge. The program aims to integrate AI into U.S. military planning and operations and is the department's 'first foray into integrating AI agents in and across military workflows to provide advanced decision-making support systems for military leaders,' Scale said. The startup added that Anduril and Microsoft (MSFT) will initially develop and deploy the AI agents — 'always under human oversight' — for the Indo-Pacific Command (INDOPACOM) and European Command (EUCOM). Anduril, which develops autonomous systems used by the military, will integrate the startup's large language models into its modeling and simulation infrastructure for planning, while Microsoft will provide multimodal models.

Data analytics platform Palantir (PLTR) announced in March that it was delivering 'AI-defined vehicles' to the U.S. Army. The AI-enabled TITAN vehicles are part of a $178 million contract the company signed with the U.S. Army in 2024. The TITAN system, which stands for Tactical Intelligence Targeting Access Node, has deep-sensing capabilities and 'seeks to enhance the automation of target recognition and geolocation from multiple sensors to reduce the sensor-to-shooter (S2S) timelines through target nominations and fuse the common intelligence picture,' according to Palantir. TITAN was developed with partners including Northrop Grumman (NOC) and Anduril.

AI startup Anthropic and Palantir announced a partnership with Amazon Web Services (AMZN) in November to provide the startup's Claude AI models to U.S. intelligence and defense agencies. Anthropic's Claude 3 and 3.5 model families will be accessible through Palantir's AI Platform, while AWS will provide security and other benefits. 'The partnership facilitates the responsible application of AI, enabling the use of Claude within Palantir's products to support government operations such as processing vast amounts of complex data rapidly, elevating data driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities,' Palantir said in a statement.

OpenAI launched a version of its chatbot called ChatGPT Gov in January to give U.S. government agencies another way to access its frontier AI models. Through ChatGPT Gov, U.S. agencies can save and share conversations within their workspaces, use the flagship GPT-4o model, and build custom GPTs for use in government workspaces. OpenAI said the infrastructure would 'expedite internal authorization of OpenAI's tools for the handling of non-public sensitive data.' 'We believe the U.S. government's adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America's global leadership in this technology,' the startup said.

Palantir and Anduril announced a 'consortium' in December to combine technologies and provide the Defense Department with AI infrastructure, such as Anduril's Lattice software system and Palantir's AI Platform. 'This partnership is focused on solving two main problems that limit the adoption of AI for national security purposes,' the companies said in a statement — data readiness and processing large amounts of data. Both companies have been awarded large contracts with the Defense Department, including Palantir's $480 million deal with the U.S. Army in May for a prototype of its Maven Smart System, and Anduril's involvement in the Replicator initiative to field thousands of drones and anti-drone systems in places such as the Indo-Pacific.

Google AI chief tells employees company has 'all the ingredients' to hold AI lead over China's DeepSeek

NBC News

14-02-2025


Google's AI chief told employees that he's not worried about China's DeepSeek and said the search giant has superior artificial intelligence technology, according to audio of an all-hands meeting in Paris on Wednesday. At the meeting, Alphabet CEO Sundar Pichai read aloud a question about DeepSeek, the Chinese startup lab that roiled U.S. markets recently when its app shot to the top of Apple's App Store, supplanting ChatGPT. DeepSeek released a research paper last month claiming its AI model was trained at a fraction of the cost of other leading models.

The question, which was an AI summary of submissions from employees, asked 'what lessons and implications' Google can glean from DeepSeek's success as the company trains future models. Google DeepMind CEO Demis Hassabis was called on to provide the answer. 'When you look into the details,' Hassabis said, some of DeepSeek's claims are 'exaggerated.' Hassabis added that the reported cost of DeepSeek's AI training was likely 'only a tiny fraction' of the total cost of developing its systems. He said DeepSeek probably used a lot more hardware than it let on and relied on Western AI models.

'We actually have more efficient, more performant models than DeepSeek,' Hassabis said. 'So we're very calm and confident in our strategy and we have all the ingredients to maintain our leadership into this year.' But he admitted that DeepSeek's accomplishments are impressive. 'It's definitely also the best team I think I've seen come out of China so something to be taken seriously,' Hassabis said, noting that there are 'security' and 'geopolitical' implications. Several U.S. agencies have barred staffers from using DeepSeek, citing security concerns. Google declined to comment. DeepSeek didn't respond to a request for comment.

Google executives also received a number of employee questions about the company's recent decision to change its 'AI Principles' to no longer include a pledge against using AI for weapons or surveillance. Pichai read aloud an AI-summarized version of the questions, ending with 'Why did we remove this section?' Pichai directed the question to Kent Walker, Google's president of global affairs, who said he had worked with Hassabis, James Manyika, a senior vice president at the company, and others on an effort that 'shifted our approach,' starting last year.

Google established its AI principles in 2018 after declining to renew a government contract called Project Maven, which helped analyze and interpret drone videos using AI. 'Some of the strict prohibitions that were in v1 of the AI principles don't jibe well with the more nuanced conversations that we're having now,' Walker said, referring to the rules from 2018. Walker said 'an awful lot has changed in those seven years' and that the technology has advanced to the point where 'it's used in lots of very nuanced scenarios.'

Google just opened the door for people to do evil with AI

Yahoo

09-02-2025


Google quietly became more evil this past week. The company has changed its promise of AI responsibility and no longer promises not to develop AI for use in dangerous tech. Prior versions of Google's AI Principles promised the company wouldn't develop AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" or "technologies that gather or use information for surveillance violating internationally accepted norms." Those promises are now gone.

If you're not great at deciphering technobabble public relations pseudo-language, that means making AI for weapons and spy "stuff." It suggests that Google is willing to develop or aid in the development of software that could be used for war. Instead of Gemini just drawing pictures of AI-powered death robots, it could essentially be used to help build them.

This is a slow but steady change from just a few years ago. In 2018, the company declined to renew the "Project Maven" contract with the government, which analyzed drone surveillance, and declined to bid on a cloud contract for the Pentagon because it wasn't sure these could align with the company's AI principles and ethics. Then in 2022, it was discovered that Google's participation in "Project Nimbus" gave some executives at the company concerns that 'Google Cloud services could be used for, or linked to, the facilitation of human rights violations.' Google's response was to force employees to stop discussing political conflicts like the one in Palestine. That didn't go well, leading to protests, mass layoffs, and further policy changes. In 2025, Google isn't shying away from the warfare potential of its cloud AI.

This isn't too surprising. There's plenty of money to be made working for the Department of Defense, and executives and shareholders really like plenty of money. However, there's also the more sinister thought that we're in an AI arms race and have to win it. Demis Hassabis, CEO of Google DeepMind, says in a blog post that "democracies should lead in AI development." That's not a dangerous idea — until you read it alongside comments like those of Palantir CTO Shyam Sankar, who says that an AI arms race must be a "whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win." These ideas can bring us to the brink of World War III. A winner-take-all AI arms race between the U.S. and China seems only good for the well-protected leaders of the winning side.

We all knew that AI would eventually be used this way. While joking about the Rise of the Machines, we were half-serious, knowing there is a real possibility that AI could turn into some kind of super soldier that never needs to sleep or eat, stopping only to change its battery and refill its ammunition. What is a video game idea today can become a reality in the future.

And there isn't a damn thing we can do about it. We could stop using all of Google's (and Nvidia's, Tesla's, Amazon's, and Microsoft's ... you get the idea) products and services as a way to protest and force a change. That might have an impact, but it's not a solution. If Google stops doing it, another company will take its place and hire the same people because it can offer more money. Or Google could simply stop making consumer products and have more time to work on very lucrative DoD contracts.
Technology should make the world a better place — that's what we are promised. Nobody ever talks about the evils and carnage it also enables. Let's hope someone in charge likes the betterment of mankind more than the money.

Google removes published promise not to use AI tech for weapons

Yahoo

05-02-2025


(NewsNation) — Google has updated its ethical policies on artificial intelligence, eliminating a pledge not to use AI technology for weapons development and surveillance. According to a now-archived version of Google's AI principles seen on the digital archive Wayback Machine and reported by NewsNation partner The Hill, the section titled 'Applications we will not pursue' included weapons and other technology aimed at injuring people, along with technologies that 'gather or use information for surveillance.' As of Tuesday, the section was no longer listed on Google's AI principles page.

'Since we first published our AI Principles in 2018, the technology has evolved rapidly. Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers,' according to Google's published 2024 report.

As Washington increasingly embraces the use of AI, some policymakers have expressed concerns that the technology could be used for harm in the hands of bad actors. The Defense Department announced late last year a new office focused on accelerating the adoption of AI technology so the military can deploy autonomous weapons in the near future. NewsNation partner The Hill contributed to this report.
