
Trump's new AI action plan

Politico | July 23, 2025

OPERATING ROOM

President Donald Trump's announcement Wednesday about his plans for artificial intelligence includes a push to grow AI adoption in health care and across the federal government by testing it in regulation-free zones. The White House AI Action Plan says the government should set up regulatory sandboxes, or regulation-free environments, where AI can be tested in real-world scenarios under heavy oversight. Kev Coleman, a fellow at the Trump-aligned Paragon Health Institute, suggested such an approach last year. The strategy could allow developers to demonstrate their products' utility, he told Ruth at the time, while also giving policymakers insight that could help shape future policy.

Outside the government: AI Centers of Excellence around the country will enable 'researchers, startups, and established enterprises' to test AI tools with the understanding that they will have to publicly share the data and results of their experiments. The Food and Drug Administration will oversee testing of tools related to health care, with support from the National Institute of Standards and Technology.

The action plan also charges NIST with convening a broad range of health care industry stakeholders — academics, company executives, nonprofits, and industry groups — to develop national standards for AI systems, including measurements for understanding how much AI increases productivity. It calls on NIST, the National Science Foundation, and federal agencies to develop methods for evaluating the performance and reliability of AI systems using regulatory sandboxes.

Inside the agencies: The action plan establishes a Chief Artificial Intelligence Officer Council to coordinate interagency collaboration on AI. The group would work with the Office of Personnel Management to create a talent exchange program that would allow federal employees to be quickly detailed to other agencies in need of expertise. It would also develop an AI procurement toolbox, managed by the General Services Administration in coordination with the White House Office of Management and Budget, that would allow any federal agency to adopt a model already in use within the federal government and customize it for its own purposes. The new council is also supposed to set up a technology and capability transfer program so that agencies can more easily share knowledge and tools.

Finally, the plan requires agencies to ensure that employees who could benefit from AI tools have access to them, and it asks agencies to facilitate uses of AI that could improve delivery of services to the public.

The big picture: Health systems want to be sure AI tools are safe before deploying them, but there is no established framework for doing so. Several industry groups are trying to build consensus on the issue. Trump has largely pursued a deregulatory approach to advancing AI, but his new plan acknowledges the industry's desire for guardrails.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care.

According to Science, researchers developing a new type of dental floss to protect against the flu ran into a challenge while testing their needleless vaccine: trying to floss a mouse.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@, Ruth Reader at rreader@, or Erin Schumaker at eschumaker@. Want to share a tip securely? Message us on Signal: CarmenP.82, RuthReader.02 or ErinSchumaker.01.
AROUND THE AGENCIES

The National Institutes of Health is capping the number of grant applications researchers can submit each year. The agency posted a notice last week about the new restrictions, which limit principal investigators to six new, renewal, resubmission or revision applications each calendar year.

The stated reason behind the change: the risk of researchers overwhelming reviewers with artificial intelligence-generated applications. The NIH said it had identified instances of principal investigators who submitted large numbers of applications that might have leaned heavily on AI. In one instance, an investigator submitted more than 40 different applications in one submission round.

'While AI may be a helpful tool in reducing the burden of preparing applications, the rapid submission of large numbers of research applications from a single Principal Investigator may unfairly strain NIH's application review processes,' the notice says. Since NIH policy requires that grant applications be the original work of the applicants, the agency won't consider applications 'substantially developed' by AI or with sections that are AI-generated.

Reality check: The percentage of investigators submitting an average of more than six applications has been low, according to NIH. Carrie Wolinetz, a lobbyist at Lewis-Burke Associates and former senior adviser to NIH director Francis Collins, told Erin that she thinks the impact of the cap will vary by institution.

'I don't think it's a bad idea as a matter of policy. If funding is robust, it could increase the quality of applications,' Wolinetz said. 'I am a little skeptical that limiting applications somehow disincentivizes the use of AI,' she said, adding, 'Although I also don't think limiting the use of AI for application writing is a bad idea.'

Do as I say: The White House acknowledged in May that a Make America Healthy Again report spearheaded by HHS Secretary Robert F. Kennedy Jr. contained 'formatting issues' and pledged to correct them. The acknowledgment came after the news outlet NOTUS reported that the MAHA report cited sources that didn't exist, a hallmark of AI use.

What's next: The policy goes into effect on Sept. 25.
