
Latest news with #FederalPrivacyAct

Culture War on Harvard Spells Disaster for America's AI Future

Newsweek

03-07-2025

  • Politics
  • Newsweek

Culture War on Harvard Spells Disaster for America's AI Future

The battle between the White House and Harvard University over a $2.2 billion federal funding freeze and demands to ban international students is no isolated attack. It's part of a broader war on liberal higher education—and a harbinger of a wider global struggle. A federal court ruling may have temporarily blocked the student ban, but the message is clear: these attacks are ideological, deliberate, and dangerous. The 24 universities backing Harvard's lawsuit know this is bigger than campus politics. Undermining academia weakens one of the last independent institutions shaping AI's impact on society.

By weakening the institutions that embed human knowledge and ethical reasoning into AI, we risk creating a vacuum where technological power advances without meaningful checks, shaped by those who move fastest, not necessarily those with the best intentions. The language used in discussions about ethical AI—terms like "procedural justice," "informed consent," and "structural bias"—originates not from engineering labs, but from the humanities and social sciences. In the 1970s, philosopher Tom Beauchamp helped author the Belmont Report, the basis for modern medical ethics. Legal scholar Alan Westin's work at Columbia shaped the 1974 Federal Privacy Act and the very notion that individuals should control their own data.

This intellectual infrastructure now underpins the world's most important AI governance frameworks. Liberal arts scholars helped shape the EU's Trustworthy AI initiative and the OECD's 2019 AI Principles—global standards for rule of law, transparency, and accountability. U.S. universities have briefed lawmakers, scored AI companies on ethics, and championed democratized access to datasets through the bipartisan CREATE AI Act.

But American universities face an onslaught. Since his inauguration, Trump has banned international students, slashed humanities and human rights programs, and frozen more than $5 billion in federal funding to leading universities like Harvard. These policies are driving us into a future shaped by those who move fastest and break the most. Left to their own devices, private AI companies pay lip service to ethical safeguards but tend not to implement them. And several, like Google, Meta, and Amazon, are covertly lobbying against government regulation.

Photo: Harvard banners hang in front of Widener Library during the 374th Harvard Commencement in Harvard Yard in Cambridge, Massachusetts, on May 29, 2025. Rick Friedman/AFP/Getty Images

This is already creating real-world harm. Facial recognition software routinely discriminates against women and people of color. Denmark's AI-powered welfare system discriminates against the most vulnerable. In Florida, a 14-year-old boy died by suicide after bonding with a chatbot that reportedly included sexual content.

The risks compound when AI intersects with disinformation, militarization, or ideological extremism. Around the world, state and non-state actors are exploring how AI can be harnessed for influence and control, sometimes beyond public scrutiny. The Muslim World League (MWL) has warned that groups like ISIS are using AI to recruit a new generation of terrorists. Just last month, the FBI warned of scammers using AI-generated voice clones to impersonate senior U.S. officials.

What's needed is a broader, more inclusive AI ecosystem—one that fuses technical knowledge with ethical reasoning, diverse cultural voices, and global cooperation. Such models already exist. The Vatican's Rome Call for AI Ethics unites tech leaders and faith groups around shared values. In Latin America and Africa, grassroots coalitions like the Mozilla Foundation have helped embed community voices into national AI strategies. MWL Secretary-General Mohammad Al-Issa recently signed a landmark long-term memorandum of understanding with the president of Duke University, aimed at strengthening interfaith academic cooperation around shared global challenges. During the visit, Al-Issa also delivered a keynote speech on education, warning of the risks posed by extremists exploiting AI. Drawing on his work confronting digital radicalization by groups like ISIS, he has emerged as one of the few global religious figures urging faith leaders to be directly involved in shaping the ethical development of AI.

The United States has long been a global AI leader because it draws on diverse intellectual and cultural resources. But that edge is fading. China has tripled its universities since 1998 and poured billions into state-led AI research. The EU's newly passed AI Act is already reshaping the global regulatory landscape.

The world needs not just engineers, but ethicists; not just coders, but critics. The tech industry may have the tools to build AI, but it is academia that holds the moral compass to guide it. If America continues undermining its universities, it won't just lose the tech race. It will forfeit its ability to lead the future of AI.

Professor Yu Xiong is Associate Vice President at the University of Surrey and founder of the Surrey Academy for Blockchain and Metaverse Applications. He chaired the advisory board of the UK All-Party Parliamentary Group on Metaverse and Web 3.0. The views expressed in this article are the writer's own.

Mass. Sen. Warren: DOGE accessed ‘sensitive' student loan data at Education Dept., calls for probe

Yahoo

09-06-2025

  • Business
  • Yahoo

Mass. Sen. Warren: DOGE accessed ‘sensitive' student loan data at Education Dept., calls for probe

U.S. Sen. Elizabeth Warren says she wants to know how the quasi-governmental Department of Government Efficiency (DOGE) gained access to 'sensitive' student loan information at the U.S. Department of Education. On Monday, Warren and U.S. Sen. Ed Markey, both Democrats, called for the agency's acting inspector general to find out how that breach happened. They were joined by Democratic senators from eight states, including U.S. Sen. Richard Blumenthal of Connecticut.

Warren said lawmakers learned of the potential breach of systems at Federal Student Aid (FSA) after DOGE, which was helmed until recently by tech titan Elon Musk, infiltrated the agency. In response, Education Department officials revealed that DOGE workers 'supported' a review of the FSA's contracts. As part of that review, one employee was granted 'read-only' access to two internal systems that held sensitive personal information about borrowers. The agency said it had since revoked that access. But, according to Warren, it did not explain why that access had been revoked, or whether the employee had continued access to other databases.

'Because of the [Education] department's refusal to provide full and complete information, the full extent of DOGE's role and influence at ED remains unknown,' the lawmakers wrote in a June 8 letter to René L. Rocque, the agency's acting inspector general. That 'lack of clarity is not only frustrating for borrowers but also dangerous for the future of an agency that handles an extensive student loan portfolio and a range of federal aid programs for higher education,' the lawmakers continued.

Warren, Markey and their colleagues have called on Rocque's office to determine whether the department adhered to the Federal Privacy Act, which dictates how the government can collect and use personal information. They also asked Rocque to 'determine the impact of DOGE's new plans to consolidate Americans' personal information across government databases.'

Read the original article on MassLive.

Michelle King resigns as SSA commissioner after clash with Musk's DOGE over Social Security data access

Express Tribune

18-02-2025

  • Business
  • Express Tribune

Michelle King resigns as SSA commissioner after clash with Musk's DOGE over Social Security data access

Michelle King, the acting commissioner of the Social Security Administration (SSA), has resigned following a dispute with Elon Musk's Department of Government Efficiency (DOGE) over access to confidential Social Security recipient data. King, a veteran official with over 30 years at the SSA, reportedly stepped down after resisting DOGE's request, citing privacy, security, and legal concerns under the Federal Privacy Act.

DOGE, established to streamline federal operations and reduce government spending, sought SSA data access to detect fraudulent claims and enhance efficiency. However, the SSA enforces strict privacy protocols to safeguard millions of Americans' sensitive personal and financial data. Critics argue that expanding access to such records could set a dangerous precedent and open the door for potential misuse of information.

A similar push by DOGE to access the Internal Revenue Service's (IRS) Integrated Data Retrieval System previously sparked controversy. Privacy advocates and IRS officials warned against potential confidentiality breaches, emphasizing the need for strict oversight in government data-sharing practices. (Source: IRS Official Website)

Following King's resignation, Leland Dudek was appointed as the new acting commissioner of the SSA. Dudek, known for advocating data-driven efficiency measures, has previously worked on government modernization initiatives. His appointment suggests a potential shift in SSA's stance, as he is expected to facilitate smoother collaboration with DOGE. However, no formal policy changes have been announced yet.

The resignation has triggered political debate over the balance between government efficiency and citizen privacy. Some officials support DOGE's reform efforts, arguing that data access can curb fraud, while privacy advocates and lawmakers warn against expanding government surveillance without proper safeguards. As the federal government continues pushing modernization efforts, the tension between privacy and efficiency is unlikely to subside.
