Latest news with #BletchleyDeclaration


South China Morning Post
21-05-2025
- Business
Hong Kong firms must take initiative on safe AI practices
As artificial intelligence (AI) develops rapidly, an increasing number of organisations are leveraging this technology to streamline operations, improve quality and enhance competitiveness. However, AI poses security risks, including personal data privacy risks, that cannot be ignored. For instance, organisations developing or using AI systems often collect, use and process personal data, posing privacy risks such as excessive collection, unauthorised use and breaches of personal data.

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. The declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasised the urgent need to address these risks. In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting 'safe, secure and trustworthy' AI systems. At the AI Action Summit in Paris in February, more than 60 countries, including China, signed a statement emphasising that leveraging the benefits of AI for economic and societal growth depends on advancing AI safety and trust.

Concerning technological and industrial innovation, China has emphasised both development and security. In 2023, the Chinese mainland launched the Global AI Governance Initiative, proposing principles such as taking a people-centred approach and developing AI for good.


Indian Express
08-05-2025
- Business
Understanding the shift from AI Safety to Security, and India's opportunities
Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what the change means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of the increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with the potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society, leading to the Bletchley Declaration.

The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation.

Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation. India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrime, and child sexual abuse. It clarified that the Institute would not prioritise issues such as bias or free speech but would focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety, that is, making AI systems safe, we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.'
The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security, all of which are relevant to AI safety. Thus, adherence to the RAI principles could be one way of operationalising AI safety.

Secondly, safety and security should not be seen as mutually exclusive. We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.

Lastly, AI safety must align with AI governance and be viewed through a risk-mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model or system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored.

Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety, both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & CeRAI; Mithal is Associate Research Fellow, CeRAI (& Associate Partner, Anand and Anand); and Kumar is Policy Analyst, CeRAI. CeRAI is the Centre for Responsible AI, IIT Madras.