
Latest news with #ChathamHouseRules

Hill Roundtable: What's next for AI infrastructure

Yahoo

10-05-2025

  • Business
  • Yahoo

The following is an executive summary of a roundtable breakfast focused on the roadblocks and solutions surrounding the integration and implementation of artificial intelligence (AI) into our everyday lives. The roundtable was held under Chatham House Rules, prior to The Hill's Energy & Environment Summit on May 6 in Washington, D.C. Participants cautioned against oversimplifying AI as 'magic' and emphasized the importance of understanding its actual capabilities and limitations. The discussion featured a diverse group of more than 20 attendees, including U.S. Sen. Mike Rounds (R-S.D.), co-chair of the AI Caucus, and Rep. Ted Lieu (D-Calif.), vice chair of the Democratic Caucus, as well as business and philanthropy leaders, researchers and policy advisers. The discussion was moderated by The Hill's technology reporter Miranda Nazzaro and Bill Sammon, SVP of Editorial Content for The Hill.

Introduction

Artificial intelligence is becoming increasingly ubiquitous, so much so that there's even a term for it. 'Ubiquitous AI' refers to the concept of AI being integrated into every aspect of our lives, from everyday devices to complex systems, making it accessible and beneficial to everyone. The concept sounds wonderful; putting it into practice is a different story. As lawmakers continue to grapple with how to regulate the technology, companies are scrambling to weigh in on the best way to create rules of the road for AI.

How will AI continue to transform our society? How should we balance AI innovation and its potential risks? What will it take to reach 'Ubiquitous AI'? Using the energy sector as an example, which areas will be most impacted? And what does an informed, collaborative, and evidence-based approach to AI regulation and governance look like?

1. The Imperative of Public-Private Partnerships and Collaboration
  • Essential for Progress: There was strong consensus that collaboration between the public and private sectors is not just beneficial but critical for the future development and responsible deployment of AI. This includes sharing expertise, resources, and understanding.
  • Balancing Roles: The discussion explored the balance of leadership in AI policy. While industry drives innovation, government has a crucial role in setting top-line best practices, ensuring accountability, and establishing standards to foster trust. No single entity should operate in isolation.
  • Government as an Enabler and Agenda Setter: Participants highlighted the government's potential to proactively shape the direction of AI development through funding initiatives, prize structures, and identifying areas where AI can address societal goals (e.g., wildfire prevention).
  • Addressing the Government's Knowledge Gap: Lawmakers acknowledged the rapid pace of AI development and the need for Congress to learn from experts in the field. Initiatives like the Senate AI Caucus's events and the ASAP project aim to bridge this gap.

2. The Challenge of Pace and the Need for Adaptive Governance
  • Industry's Breakneck Speed vs. Government's Deliberation: A significant concern was the stark contrast between the rapid advancements in AI and the often slower pace of governmental processes. This raises questions about the government's ability to keep up and regulate effectively.
  • Regulation vs. Shaping: Participants suggested that instead of focusing solely on traditional regulation, the government should also focus on 'shaping' the market through incentives and strategic investments.
  • The Need for Adaptability: Given the constant evolution of AI, rigid rules may quickly become outdated. The discussion emphasized the importance of adaptive systems, continuous information sharing, and iterative approaches to governance.
  • Challenges for Higher Education: The rapid pace also presents challenges for educational institutions in developing curricula that keep pace with industry changes.

3. Sectoral Regulation as a Preferred Approach
  • Targeted Expertise: Both Congressman Lieu and Senator Rounds advocated for a sectoral approach to regulation, in which existing agencies with specific expertise (e.g., the FAA, FDA, and Department of Agriculture) tailor AI oversight to their respective domains, rather than a single, overarching AI bill.

4. The Profound Implications of Emerging AI Capabilities
  • AI Agents: The potential of AI agents capable of autonomously executing complex tasks based on simple prompts was highlighted as both 'amazing and alarming,' with uncertain societal and economic consequences.
  • Artificial General Intelligence (AGI): While considered further out, the long-term implications of AGI, including potential widespread unemployment across all skill levels, were raised as critical considerations for future planning.

5. The Critical Intersection of AI and Energy
  • Growing Energy Demands: The increasing energy consumption of AI, particularly for large language models and data centers, was identified as a significant challenge. Projections suggest a substantial increase in national electricity demand due to AI.
  • AI for Energy Solutions: Conversely, the potential of AI to drive advancements in energy efficiency, material science, and the development of new energy sources was also acknowledged.
  • Infrastructure and Permitting: The need for significant investment in energy infrastructure, including transmission and new generation capacity (potentially including small nuclear reactors), was discussed, along with the challenges of permitting and local acceptance.
  • Resilience and National Security: Concerns were raised about the concentration of data centers in specific geographic areas and the need for a resilient and secure energy supply to support both the AI industry and national defense.

6. The Importance of Data Regulation
  • Enabling Innovation and Avoiding Fragmentation: The lack of comprehensive data regulation in the U.S. was identified as a hindrance to innovation and a potential driver of market fragmentation, as companies are forced to comply with varying international standards (e.g., GDPR in Europe).

7. The Uncertainty of Timelines and Adoption
  • Adoption S-Curve: The discussion acknowledged that widespread adoption of AI may follow an S-curve, potentially taking longer than current rapid advancements might suggest.
  • AGI Timeline Uncertainty: The timeline for achieving AGI remains highly uncertain, which affects the relevance of different policy approaches.
  • Focus on Foundational Elements: Despite timeline uncertainties, investments in fundamental research, digital infrastructure, and energy solutions were deemed crucial regardless of the specific pace of AI development.
  • Comparative and Absolute Advantage: The discussion highlighted that even where AI holds an absolute advantage in certain tasks, adoption will also be influenced by factors like cost, practicality, and the comparative advantages of existing solutions.

In summary, the roundtable highlighted the urgent need for proactive and collaborative approaches to AI governance, focusing on sectoral expertise, adaptability, and addressing the significant implications for energy infrastructure and the future of work. The rapid pace of innovation necessitates continuous learning and engagement between policymakers, industry leaders, and researchers.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

UAE participates in IMFC Deputies Meeting in Riyadh

Zawya

07-04-2025

  • Business
  • Zawya

  • Addressing the agenda of the upcoming Spring Meetings of the IMF and World Bank
  • Exploring strategies to overcome sluggish economic growth
  • Discussing solutions to high debt levels and ways to strengthen the global financial safety net

Riyadh: The UAE took part in the International Monetary and Financial Committee (IMFC) Deputies Meeting, held in Riyadh on April 6 and 7, 2025. The UAE delegation was led by H.E. Younis Haji AlKhoori, Undersecretary of the Ministry of Finance, and included Ali Abdullah Sharafi, Acting Assistant Undersecretary for International Financial Relations at the Ministry. The Emirati delegation joined high-level discussions addressing global economic challenges and ways to enhance international financial coordination. On the first day, participants explored strategies to overcome sluggish global economic growth and rising debt levels in a session held under the theme 'Breaking from the low-growth, high-debt path.'

Sessions

The first day's agenda also featured a high-level panel discussion themed 'Strengthening Global Financial Safety Net: Enhancing Coordination Between the IMF and Regional Financial Arrangements'. The Emirati delegation also took part in an informal, closed-door session among deputies, held under Chatham House Rules, to discuss IMF operations and governance in a changing world. On the second day, discussions turned to the Managing Director's global policy agenda and medium-term direction. This was followed by a review of the first draft of the IMFC's official statement and outcome document, which summarizes key insights from the committee's recent survey and strategic discussions at the highest level. The two-day event concluded with a session on topics set to be addressed at the upcoming IMF and World Bank Spring Meetings, including global economic priorities and projected fiscal policies in response to ongoing developments.

Enhancing economic resilience

'The UAE is committed to supporting global financial stability and contributing to the development of financial and economic policies,' said His Excellency Younis Haji AlKhoori. 'During the meeting of IMFC deputies, we shared with the committee our balanced and forward-looking perspective—one that focuses on resilience, sustainability, and inclusive growth to help address global challenges and support a sustainable economic recovery.

'The economic challenges we're facing today make it more important than ever to strengthen international cooperation and enhance coordination between countries and financial institutions at every level. During such meetings, we share the UAE's successful experiences and learn from global best practices to design financial policies that support national economies and drive inclusive, sustainable growth—thereby reinforcing global financial stability and helping build confidence in the global economy,' AlKhoori said.

His Excellency also noted that the meeting provided an ideal platform to exchange insights with representatives from other member states on global economic priorities, particularly ahead of the upcoming Spring Meetings of the IMF and World Bank. 'We are committed to supporting international coordination, ensuring that financial policies are well-integrated and effective in strengthening economies at both the national and global levels, while promoting long-term stability and inclusive growth,' he added.

How news organizations should overhaul their operations as gen AI threatens their livelihoods

Yahoo

18-03-2025

  • Business
  • Yahoo

Hello and welcome to Eye on AI. In this edition…the news media grapples with AI; Trump orders U.S. AI Safety efforts to refocus on combating 'ideological bias'; distributed training is gaining traction; increasingly powerful AI could tip the scales toward…

Generative AI is potentially disruptive to many organizations' business models. In few sectors, however, is the threat as seemingly existential as the news business. That happens to be the business I'm in, so I hope you will forgive a somewhat self-indulgent newsletter. But news ought to matter to all of us, since a functioning free press performs an essential role in democracy—informing the public and helping to hold power to account. And there are some similarities between how news executives are—and, critically, are not—addressing the challenges and opportunities AI presents that business leaders in other sectors can learn from, too.

Last week, I spent a day at an Aspen Institute conference entitled 'AI & News: Charting the Course,' hosted at Reuters' headquarters in London. The conference was attended by top executives from a number of U.K. and European news organizations. It was held under Chatham House Rules, so I can't tell you who exactly said what, but I can relay what was said.

News executives spoke about using AI primarily in internally facing products to make their teams more efficient. AI is helping write search engine-optimized headlines and translate content—potentially letting organizations reach new audiences in places they haven't traditionally served—though most emphasized keeping humans in the loop to monitor accuracy. One editor described using AI to automatically produce short articles from press releases, freeing journalists for more original reporting, while maintaining human editors for quality control. Journalists are also using AI to summarize documents and analyze large datasets—like government document dumps and satellite imagery—enabling investigative journalism that would be difficult without these tools. These are good use cases, but they result in modest impact—mostly making existing workflows more efficient.

There was active debate among the newsroom leaders and techies present about whether news organizations should take a bottom-up approach—putting generative AI tools in the hands of every journalist and editor, allowing these folks to run their own data analysis or 'vibe code' AI-powered widgets to help them in their jobs—or whether efforts should be top-down, with management prioritizing projects. The bottom-up approach has merits: it democratizes access to AI, empowers frontline employees who often know the pain points and can often spot good use cases before high-level execs can, and frees limited AI developer talent to be spent only on projects that are bigger, more complex, and potentially more strategically important. The downside of the bottom-up approach is that it can be chaotic, making it hard for the organization to ensure compliance with ethical and legal policies. It can create technical debt, with tools being built on the fly that can't be easily maintained or updated. One editor worried about creating a two-tiered newsroom, with some editors embracing the new tech and others falling behind. Bottom-up also doesn't ensure that solutions generate the best return on investment—a key consideration, as AI models can quickly get expensive. Many called for a balanced approach, though there was no consensus on how to achieve it.
From conversations I've had with execs in other sectors, this dilemma is familiar across industries.

News outfits are also being cautious about building audience-facing AI tools. Many have begun using AI to produce bullet-point summaries of articles that can help busy and increasingly impatient readers. Some have built AI chatbots that can answer questions about a particular, narrow subset of their coverage—like stories about the Olympics or climate change—but they have tended to label these as 'experiments' to flag to readers that the answers may not always be accurate. Few have gone further with AI-generated content. They worry that gen AI-produced hallucinations will undercut trust in the accuracy of their journalism. Their brands and their businesses ultimately depend on that trust.

This caution, while understandable, is itself a colossal risk. If news organizations themselves aren't using AI to summarize the news and make it more interactive, technology companies are. People are increasingly turning to AI search engines and chatbots, including Perplexity, OpenAI's ChatGPT, Google's Gemini and the 'AI Overviews' Google now provides in response to many searches, and many others. Several news executives at the conference said 'disintermediation'—the loss of a direct connection with their audience—was their biggest fear. They have cause to be worried. Many news organizations (including Fortune) are at least partly dependent on Google search to bring in audiences. A recent study by Tollbit—which sells software that helps protect websites from web crawlers—found that click-through rates from Google AI Overviews were 91% lower than from a traditional Google search. (Google has not yet used AI Overviews for news queries, although many think it is only a matter of time.) Other studies of click-through rates from chatbot conversations are equally abysmal. Cloudflare, which is also offering to help protect news publishers from web scraping, found that OpenAI scraped a news site 250 times for every one referral page view it sent that site.

So far, news organizations have responded to this potentially existential threat through a mix of legal pushback—the New York Times has sued OpenAI for copyright violations, while Dow Jones and the New York Post have sued Perplexity—and partnerships. Those partnerships have involved multiyear, seven-figure licensing deals for news content. (Fortune has a partnership with both Perplexity and ProRata.) Many of the execs at the conference said the licensing deals were a way to make revenue from content the tech companies had most likely already 'stolen' anyway. They also saw the partnerships as a way to build relationships with the tech companies and tap their expertise to help them build AI products or train their staffs. None saw the relationships as particularly stable. They were all aware of the risk of becoming overly reliant on AI licensing revenue, having been burned previously when the media industry let Facebook become a major driver of traffic and ad revenue. That money vanished practically overnight when Meta CEO Mark Zuckerberg decided, after the 2016 U.S. presidential election, to de-emphasize news in people's feeds. Executives acknowledged needing to build direct audience relationships that can't be disintermediated by AI companies, but few had clear strategies for doing so.
One expert at the conference said bluntly that "the news industry is not taking AI seriously," focusing on "incremental adaptation rather than structural transformation." He likened current approaches to a three-step process that had "an AI-powered Ferrari" at both ends, but "a horse and cart in the middle." He and another media industry advisor urged news organizations to get away from structuring their approach to news around 'articles.' Instead, they encouraged the news execs to think about ways in which source material (public data, interview transcripts, documents obtained from sources, raw video footage, audio recordings, and archival news stories) could be turned by generative AI technology into a variety of outputs—podcasts, short-form video, bullet-point summaries, or, yes, a traditional news article—to suit audience tastes on the fly. They also urged news organizations to stop thinking of the production of news as a linear process and begin thinking of it more as a circular loop, perhaps one in which there was no human in the middle.

One person at the conference said that news organizations needed to become less insular and look more closely at insights and lessons from other industries and how they were adapting to AI. Others said that it might require startups—perhaps incubated by the news organizations themselves—to pioneer new business models for the AI age.

The stakes couldn't be higher. While AI poses existential challenges to traditional journalism, it also offers unprecedented opportunities to expand reach and potentially reconnect with audiences who have "turned off news"—if leaders are bold enough to reimagine what news can be in the AI era.

With that, here's more AI news.

Jeremy

Correction: Last week's Tuesday edition of Eye on AI misidentified the country where Trustpilot is headquartered. It is Denmark. Also, a news item in that edition misidentified the name of the Chinese startup behind the viral AI model Manus. The name of the startup is Butterfly Effect.

This story was originally featured on

MAR 10-12: Cyber Leaders from States, Localities, Federal Government, Corporate, and Academia to Speak at Billington State and Local CyberSecurity Summit in D.C.

Yahoo

28-02-2025

  • Business
  • Yahoo

Nearly 1K Attendees to Gain Insights from Cyber Experts about Protecting Nation

February 28, 2025--(BUSINESS WIRE)--Billington CyberSecurity:

WHAT: Leaders from states and cities, including CISOs from Florida, California, Texas, Maryland, Kansas, New York, Chicago, New Jersey, Virginia, and North Dakota, as well as experts from DOD, NSA, FBI, and the National Guard, will share key insights about protecting the nation's states and municipalities from cyberattacks during the 2nd Billington State and Local CyberSecurity Summit. The summit will convene nearly 1,000 government officials, tech leaders, and academics who will explore topics such as ransomware, AI, China risk, cyber criminals, legacy systems, Zero Trust, the energy grid, and more. Additionally, Billington is partnering with StateRAMP to host the 2025 StateRAMP Symposium on Cybersecurity Framework Harmonization on March 10, on site and immediately prior to the summit.

WHEN: March 10-12, 2025. Check the agenda for times.

WHO: More than 36 cyber leaders will be speaking during 25 sessions, including:
  • Amanda Crawford, Executive Director and CIO, State of Texas
  • Vitaliy Panych, CISO, State of California
  • Jeremy Rodgers, CISO, State of Florida
  • Tony Sauerhoff, CISO, State of Texas
  • Katie Savage, Secretary, Department of IT, State of Maryland
  • Michael Watson, CISO, Commonwealth of Virginia
  • Bruce Coffing, CISO, City of Chicago
  • Dr. Brian Gardner, Interim CIO, City of Dallas, Texas
  • Michael Geraghty, CISO and Director, State of New Jersey
  • Colin Ahern, Chief Cyber Officer, State of New York
  • Michael Gregg, CISO, State of North Dakota
  • JR Sloan, CIO, State of Arizona
  • Miguel Penaranda, State of Wyoming
  • Jeff Maxon, CITO, State of Kansas
  • Hemant Jain, CISO, State of Indiana

WHERE: Ronald Reagan Building, 1300 Pennsylvania Avenue NW, Washington, DC

WHY: State and local governments and the critical infrastructure they support are facing significant cyberattacks. The summit brings together leaders to discuss issues and solutions for state and local government entities. The summit is presented by more than three dozen sponsors, led by Armis, AWS, Carahsoft Technology Corp., and NightDragon.

HOW: Credentialed working media are invited to cover in person or online. All sessions are open to credentialed working press except the workshops and sessions noted on the agenda that follow Chatham House Rules. Reporters are encouraged to register in advance to cover the event.

ABOUT: Founded in 2010, Billington CyberSecurity is the leading cyber education company for executives, hosting high-caliber summits and workshops that explore key cyber issues that advance the cybersecurity of the U.S. government, its allied partners, and critical infrastructure. These events convene the seniormost government officials and industry partners, highlighted by its signature conference, the annual Billington CyberSecurity Summit held each fall.

Contacts: Shawn Flaherty, 703-554-3609
