Latest news with #AIforGoodSummit


Deccan Herald
2 hours ago
- Business
- Deccan Herald
UN telecom agency urges tools to detect and stamp out AI deepfakes
Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union (ITU) urged in a recent report.

Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its "AI for Good Summit" in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing.

"Trust in social media has dropped significantly because people don't know what's true and what's fake," Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combating deepfakes was a top challenge due to generative AI's ability to fabricate realistic multimedia, he said.

Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. "We need more of the places where users consume their content to show this information… When you are scrolling through your feeds you want to know: 'can I trust this image, this video…'" Rosenthol said.

Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focused on detecting manipulated material. "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective," she told Reuters.

The ITU is currently developing standards for watermarking videos - which make up 80% of internet traffic - to embed provenance data such as creator identity and timestamps.


Borneo Post
2 days ago
- Politics
- Borneo Post
Fahmi: Govt considering mandatory 'AI generated' label under online safety act
KUALA LUMPUR (July 13): The government is considering making it a requirement to label artificial intelligence (AI)-generated content as 'AI generated' under the Online Safety Act 2024, which is expected to come into force by the end of this year.

Communications Minister Datuk Fahmi Fadzil said the move is crucial to address the misuse of AI, especially on social media platforms, for purposes such as scams, defamation and identity impersonation.

'We may consider this requirement, for example, under the Online Safety Act, which is expected to come into effect, Insya-Allah, by the end of this year.

'We also believe platforms must be proactive in labelling AI-generated content as such,' he said at a press conference after attending the Institute of Public Relations Malaysia's (IPRM) programme YOU & AI: MEET@BANGSAR here today.

Also present were Communications Ministry Deputy Secretary-General (Strategic Communications and Creative Industry) Nik Kamaruzaman Nik Husin, Tun Abdul Razak Broadcasting and Information Institute (IPPTAR) director Roslan Ariffin, and IPRM president Jaffri Amin.

Fahmi noted that several social media platforms have already begun voluntarily labelling AI-generated content, and that such initiatives could be expanded regionally through cooperation among ASEAN countries.

On concerns over the spread of fake videos and images generated by AI, he said there are currently no globally satisfactory regulatory guidelines in place. However, he added that active discussions are ongoing, including at the level of the United Nations (UN) and the International Telecommunication Union (ITU).

'I recently attended the AI for Good Summit in Geneva, Switzerland. Indeed, at both the UN and ITU levels, there is ongoing debate over who should be responsible for AI regulation.
'Certainly, at the national level, Parliament and ministries such as the Ministry of Digital must lead. But we also recognise that every ministry has a role in assessing and evaluating AI use within its scope,' he said.

Earlier, in his speech, Fahmi stressed that AI cannot fully replace human roles. He also urged the younger generation, especially Gen Alpha, to understand the benefits, challenges, and limitations of AI, given that they are growing up in a world increasingly shaped by artificial intelligence. – Bernama


The Sun
2 days ago
- Politics
- The Sun
Malaysia may require AI-generated content labels under new Online Safety Act
KUALA LUMPUR: The government is exploring the possibility of making it compulsory to label artificial intelligence (AI)-generated content under the upcoming Online Safety Act 2024. Communications Minister Datuk Fahmi Fadzil stated that this measure aims to tackle the misuse of AI, particularly in scams, defamation, and identity fraud on social media.

Fahmi mentioned that the Online Safety Act is expected to be enforced by the end of this year. He emphasised that platforms should also take proactive steps in identifying AI-generated content. The minister shared these remarks during a press conference after attending the Institute of Public Relations Malaysia's (IPRM) event titled 'YOU & AI: MEET@BANGSAR'.

Several social media platforms have already started voluntarily labelling AI-generated content. Fahmi suggested that such efforts could be expanded regionally through cooperation among ASEAN nations.

Regarding concerns over AI-generated fake videos and images, Fahmi acknowledged the lack of global regulatory standards. However, discussions are ongoing at international levels, including within the United Nations (UN) and the International Telecommunication Union (ITU). Fahmi recently attended the AI for Good Summit in Geneva, where debates on AI regulation responsibilities took place.

He stressed that while national bodies like Parliament and the Ministry of Digital must lead regulatory efforts, every ministry has a role in assessing AI's impact within its jurisdiction.

Earlier in his speech, Fahmi highlighted that AI cannot fully replace human roles. He also encouraged younger generations, especially Gen Alpha, to understand AI's benefits and limitations as they grow up in an AI-driven world. – Bernama


The Star
2 days ago
- Politics
- The Star
Govt considering mandatory 'AI generated' label under Online Safety Act, says Fahmi
KUALA LUMPUR: The government is considering making it a requirement to label artificial intelligence (AI)-generated content as "AI generated" under the Online Safety Act 2024, which is expected to come into force by the end of this year.

Communications Minister Datuk Fahmi Fadzil said the move is crucial to address the misuse of AI, especially on social media platforms, for purposes such as scams, defamation and impersonation.

"We believe platforms must be proactive in labelling AI-generated content as such," he told a press conference after attending the Institute of Public Relations Malaysia's (IPRM) programme YOU & AI: MEET@BANGSAR here on Sunday (July 13).

Also present were ministry deputy secretary-general (Strategic Communications and Creative Industry) Nik Kamaruzaman Nik Husin, Tun Abdul Razak Broadcasting and Information Institute (Ipptar) director Roslan Ariffin, and IPRM president Jaffri Amin.

Fahmi noted that several social media platforms have already begun voluntarily labelling AI-generated content, and that such initiatives could be expanded regionally through cooperation among Asean countries.

On concerns over the spread of fake videos and images generated by AI, he said there are currently no globally satisfactory regulatory guidelines in place. However, he added that active discussions are ongoing, including at the level of the United Nations (UN) and the International Telecommunication Union (ITU).

"I recently attended the AI for Good Summit in Geneva, Switzerland. Indeed, at both the UN and ITU levels, there is ongoing debate over who should be responsible for AI regulation.

"Certainly, at the national level, Parliament and ministries such as the Digital Ministry must lead.

"We also recognise that every ministry has a role in assessing and evaluating AI use within its scope," he said.

Earlier, in his speech, Fahmi stressed that AI cannot fully replace human roles.
He also urged the younger generation, especially Gen Alpha, to understand the benefits, challenges, and limitations of AI, given that they are growing up in a world increasingly shaped by artificial intelligence. – Bernama


Indian Express
3 days ago
- Business
- Indian Express
UN report urges stronger measures to detect AI-driven deepfakes
Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union urged in a report on Friday.

Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its 'AI for Good Summit' in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing.

'Trust in social media has dropped significantly because people don't know what's true and what's fake,' Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combating deepfakes was a top challenge due to generative AI's ability to fabricate realistic multimedia, he said.

Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. 'We need more of the places where users consume their content to show this information…When you are scrolling through your feeds you want to know: 'can I trust this image, this video…'' Rosenthol said.

Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material. 'If we have patchworks of standards and solutions, then the harmful deepfake can be more effective,' she told Reuters.

The ITU is currently developing standards for watermarking videos – which make up 80% of internet traffic – to embed provenance data such as creator identity and timestamps.
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. 'AI will only get more powerful, faster or smarter… We'll need to upskill people to make sure that they are not victims of the systems,' he said.
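The provenance idea the report describes (binding a creator identity and timestamp to a piece of media so distributors can verify it before sharing) can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions: the function names and the shared HMAC key are invented for the sketch. Real provenance standards such as the Adobe-backed C2PA use certificate-chain signatures and manifests embedded in the asset itself, not a shared secret.

```python
import hashlib
import hmac
import json

# Assumption for the sketch only: a shared signing key. Real systems
# (e.g. C2PA) use public-key certificates, not a shared secret.
SECRET = b"hypothetical-shared-key"

def make_manifest(media_bytes: bytes, creator: str, timestamp: int) -> dict:
    """Bind creator identity and a timestamp to the media's hash."""
    payload = {
        "creator": creator,
        "timestamp": timestamp,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the media itself is unmodified."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    blob = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"\x00\x01fake-video-bytes"
m = make_manifest(video, creator="newsroom@example.org", timestamp=1720900000)
print(verify_manifest(video, m))         # True: signed and untampered
print(verify_manifest(video + b"x", m))  # False: media was altered
```

A real deployment would also need key distribution, revocation, and agreement on the manifest format across platforms, which is exactly where the "patchwork of standards" concern raised above applies.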