
US Senate Judiciary Committee examines impact of AI-generated deepfakes in 2025
The Senate Judiciary Committee holds a hearing titled 'The Good, the Bad, and the Ugly: AI-Generated Deepfakes in 2025'. Witnesses include Justin Brookman, director of technology policy at Consumer Reports; Suzana Carlos, head of music policy at YouTube; Mitch Glazier, CEO of the Recording Industry Association of America; Martina McBride, country music singer-songwriter; and Christen Price, senior legal counsel at the National Center on Sexual Exploitation.
Related Articles


Mint
an hour ago
India prepares reporting standard as AI failures may hold clues to managing risks
India is framing guidelines for companies, developers and public institutions to report artificial intelligence-related incidents as the government seeks to create a database to understand and manage the risks AI poses to critical infrastructure. The proposed standard aims to record and classify problems such as AI system failures, unexpected results, or harmful effects of automated decisions, according to a new draft from the Telecommunications Engineering Centre (TEC). Mint has reviewed the document released by the technical arm of the Department of Telecommunications (DoT).

The guidelines will ask stakeholders to report events such as telecom network outages, power grid failures, security breaches, and AI mismanagement, and document their impact, according to the draft.

"Consultations with stakeholders are going on pertaining to the draft standard to document such AI-related incidents. TEC's focus is primarily on the telecom and other critical digital infrastructure sectors such as energy and power," said a government official, speaking on the condition of anonymity. "However, once a standard to record such incidents is framed, it can be used interoperably in other sectors as AI is being used everywhere."

The plan is to create a central repository and pitch the standard globally to the United Nations' International Telecommunication Union, the official said. Recording and analysing AI incidents is important because system failures, bias, privacy breaches, and unexpected results have raised concerns about how the technology affects people and society.

"AI systems are now instrumental in making decisions that affect individuals and society at large," TEC said in the document proposing the draft standard. "Despite their numerous benefits, these systems are not without risks and challenges."

Queries emailed to TEC didn't elicit a response till press time.
Incidents similar to the recent CrowdStrike outage, the largest IT outage in history, could be reported under India's proposed standard. Any malfunction in chatbots, cyber breaches, telecom service quality degradation, IoT sensor failures, and the like would also be covered. The draft requires developers, companies, regulators, and other entities to report the name of the AI application involved in an incident, the cause, location, and industry/sector affected, as well as the severity and kind of harm it caused.

Like the OECD AI Monitor

The TEC's proposal builds on a recommendation from a MeitY sub-committee on 'AI Governance and Guidelines Development'. The panel's report in January had called for the creation of a national AI incident database to improve transparency, oversight, and accountability. MeitY is also developing a comprehensive governance framework for the country, with a focus on fostering innovation while ensuring responsible and ethical development and deployment of AI.

According to the TEC, the draft defines a standardized scheme for AI incident databases in telecommunications and critical digital infrastructure. "It also establishes a structured taxonomy for classifying AI incidents systematically. The schema ensures consistency in how incidents are recorded, making data collection and exchange more uniform across different systems," the draft document said.

India's proposed framework is similar to the AI Incidents Monitor of the Organisation for Economic Co-operation and Development (OECD), which documents incidents to help policymakers, AI practitioners, and other stakeholders worldwide gain valuable information about the real-world risks and harms posed by the technology.

"So far, most of the conversations have been primarily around first principles of ethical and responsible AI.
However, there is a need to have domain and sector-specific discussions around AI safety," said Dhruv Garg, a tech policy lawyer and partner at the Indian Governance and Policy Project (IGAP). "We need domain-specialist technical bodies like TEC for setting up a standardized approach to AI incidents and risks of AI for their own sectoral use cases," Garg said. "Ideally, the sectoral approach may feed into the objective of the proposed AI Safety Institute at the national level and may also be discussed internationally through the network of AI Safety Institutes."

Need for self-regulation

In January, MeitY announced the IndiaAI Safety Institute under the ₹10,000 crore IndiaAI Mission to address AI risks and safety challenges. The institute focuses on risk assessment and management, ethical frameworks, deepfake detection tools, and stress-testing tools.

"Standardisation is always beneficial as it has generic advantages," said Satya N. Gupta, former principal advisor at the Telecom Regulatory Authority of India (Trai). "Telecom and Information and Communication Technology (ICT) cut across all sectors and, therefore, once standards to mitigate AI risks are formed here, other sectors can also take a cue."

According to Gupta, recording AI issues should start with guidelines and self-regulation, as enforcing these norms would increase the compliance burden on telecom operators and other companies. The MeitY sub-committee had recommended that the AI incident database should not be started as an enforcement tool and that its objective should not be to penalise people who report AI incidents.

"There is clarity within the government that the plan is not to do fault-finding with this exercise but to help policymakers, researchers, AI practitioners, etc., learn from the incidents to minimize or prevent future AI harms," the official cited above said.
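The reporting attributes the draft describes (the AI application's name, the cause, location, sector affected, severity, and kind of harm) could be captured in a record along the lines of the following minimal sketch. The field names, severity scale, and example values here are illustrative assumptions, not the TEC's actual schema.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(Enum):
    # Hypothetical severity scale; the draft only says severity must be reported.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class AIIncidentRecord:
    # Fields mirror the reporting attributes described in the draft:
    # application name, cause, location, sector, severity, and kind of harm.
    application: str
    cause: str
    location: str
    sector: str
    severity: Severity
    harm_type: str

    def to_dict(self) -> dict:
        """Serialize to a plain dict suitable for exchange between systems."""
        d = asdict(self)
        d["severity"] = self.severity.value
        return d

# Hypothetical telecom-sector incident of the kind the draft covers.
record = AIIncidentRecord(
    application="network-traffic-predictor",
    cause="model drift after topology change",
    location="IN-MH",
    sector="telecom",
    severity=Severity.HIGH,
    harm_type="service degradation",
)
print(record.to_dict())
```

A fixed record shape like this is what makes "data collection and exchange more uniform across different systems," since every reporting entity fills in the same fields.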


Time of India
5 hours ago
Byju's sells US subsidiaries at steep discount
BENGALURU: Byju's has sold its US-based subsidiaries, Epic and Tynker, as part of US bankruptcy proceedings, in what appears to be a fire sale. This marks the latest step in the Indian edtech company's asset liquidation following its financial collapse.

Epic was acquired for $95 million by the Chinese education firm TAL Education Group, while CodeHS purchased Tynker for $2.2 million in cash, according to court filings. Both transactions were approved by US Bankruptcy Judge Brendan Shannon on May 20 and are intended to help lenders recoup losses from a $1.2 billion term loan extended to Byju's.

Tynker was acquired by Byju's in 2021 for a reported $200 million, while Epic was bought the same year for about $500 million. The latest sale values underscore the sharp write-downs now facing the company's global portfolio.

According to a report by EdWeek Market Brief, Tynker's sale followed 48 rounds of competitive bidding between CodeHS, operating through a newly formed entity called Tynker Holdings, and another party, Future Minds. CodeHS CEO Jeremy Keeshin, identified in court as the sole member of Tynker Holdings, said the acquisition would allow the company to support learners as they progress from basic coding tools to advanced computer science content.

Epic's sale faced an eleventh-hour intervention from the US Department of Justice, which flagged the potential need for a CFIUS (Committee on Foreign Investment in the United States) review due to the buyer's Chinese ownership, court records show. Judge Shannon described the episode as a 'fire drill,' though the transaction ultimately received approval.

Both sales are being overseen by a court-appointed trustee managing the asset disposal on behalf of creditors. Byju's, once valued at $22 billion, is now facing insolvency proceedings in India over non-payment of dues, while its international operations are being dismantled through US bankruptcy court.
TOI previously reported that the asset sales form part of a larger restructuring effort as Byju's attempts to navigate legal, regulatory, and financial pressures following its aggressive global acquisition spree. Other subsidiaries, such as Aakash, remain under scrutiny amid separate legal proceedings.


Economic Times
6 hours ago
Tesla hit by major leadership shakeup as key engineering executive quits during critical time
Optimus head Milan Kovac announces exit

"Over the past 9+ years, I've had the immense privilege to work with some of the most brilliant minds in AI & engineering. I've built friendships that will last a lifetime. This week, I've had to make the most difficult decision of my life and will be moving out of my position." — Milan Kovac (@_milankovac_) June 6, 2025

While Tesla is already facing challenges due to CEO Elon Musk's political controversies, the EV maker is preparing for another leadership change, as per a report by The Street. The company has expanded from just a car company to programmes like Tesla Optimus, a division that develops robotic humanoid technology, and now Musk's robotic ambitions may be facing a challenge as the programme's leader is leaving the EV giant.

According to the report, Milan Kovac recently announced that he would step down from his role as vice president of engineering at Tesla. Kovac shared in a post on his social media platform X (previously Twitter) that he decided to step down as he felt the need to spend more time with his family abroad. He wrote, "I've been far away from home for too long, and will need to spend more time with family abroad. I want to make it clear that this is the only reason, and has absolutely nothing to do with anything else. My support for @elonmusk and the team is ironclad - Tesla team forever."

Kovac joined Tesla in 2016 as a staff software engineer in the Autopilot division and spent years advancing through roles, being named director and later vice president of Optimus, as per The Street.

While Kovac worked at Tesla, he played an important role in shaping both its Autopilot and robotic humanoid technology. His LinkedIn profile mentions that his responsibilities included "driving the engineering teams responsible for all the software foundations and infrastructure common between Optimus and Autopilot," helping guide both divisions forward, The Street reported.

The sudden departure comes as the EV maker gears up to launch its self-driving robotaxis in Austin, Texas, on June 12. Autopilot and AI software team VP Ashok Elluswamy is expected to take over Kovac's position, The Street reported.

Kovac was the vice president of engineering for Tesla's Optimus robotics programme. He said he wants to spend more time with family abroad, and emphasized that his departure is not due to internal issues.