
Latest news with #AISafetySummit

Opinion: AI sometimes deceives to survive. Does anybody care?

The Star

16 hours ago

  • Politics
  • The Star


You'd think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI. None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behaviour described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control. Yoshua Bengio, a professor of computer science at the University of Montreal widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio tells me. ChatGPT was a landmark moment that showed machines had mastered language, he says, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behaviour, deception, hacking, cheating and lying by AI, Bengio says. 'What's worrisome for me is that these behaviours increase with the reasoning capabilities of these systems.' 
One April 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking.' (Alignment refers to the practice of aligning AI models with human values.) In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.) More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought' or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio says. It's hard to resist the urge to anthropomorphise sophisticated AI models as 'wanting' to deceive humans and preserve their existence. But AI doesn't have desires, merely outcomes to its programming. 
More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and that is why when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation. Anthropic CEO Dario Amodei – whose company has raised more than US$20bil (RM85.13bil) to build powerful AI models – has pointed out that an unintended consequence of optimising AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.' In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI research, a German nonprofit research organisation, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that. 
Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control. Bengio is arguing for greater attention to the issue by governments and potentially insurance companies down the line. If liability insurance were mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he adds. It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way. – Bloomberg Opinion/Tribune News Service



Understanding shift from AI Safety to Security, and India's opportunities

Indian Express

08-05-2025

  • Business
  • Indian Express


Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what this means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society—leading to the Bletchley Declaration. The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation. Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation. India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in January 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrimes, and child sexual abuse. It clarified that the Institute would not prioritise issues like bias or free speech but focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety — i.e., making AI systems safe — we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security — all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety. Secondly, safety and security should not be seen as mutually exclusive. We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient. Lastly, AI safety must align with AI governance and be viewed through a risk mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model/system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored. Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety — both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & CeRAI; Mithal is Associate Research Fellow, CeRAI (& Associate Partner, Anand and Anand); and Kumar is Policy Analyst, CeRAI. CeRAI – Centre for Responsible AI, IIT Madras

The UK tries to shape the AI world order — again

Politico

10-03-2025

  • Business
  • Politico


With help from John Hendel

Not that long ago, with the world panicking about potential runaway AI, the U.K. stepped up to lead on reining in the new technology. Former Prime Minister Rishi Sunak convened an AI Safety Summit in Bletchley Park — the first major global AI policy summit anywhere — featuring former Vice President Kamala Harris touting the risks of algorithmic bias in the technology. What a difference an election — or two — makes. With President Donald Trump's White House all-in on accelerating AI technology and dropping safety regulations, and a fresh Labour government in the U.K. anxious to keep good relations with the United States, a new AI world order is quickly emerging — one that Britain wants to help build. During his recent visit to the White House, British Prime Minister Keir Starmer previewed a tech-focused deal between the two nations — in language that seemed very tuned to a pitch Vice President JD Vance had just made at the Paris AI Action Summit. Now, our POLITICO U.K. colleague Tom Bristow has gotten a peek at a British government document with new details of London's ideas for a trade pact with the U.S. It offers a look at how a new global AI consensus could take shape — with much less worry about safety, and much more concern about security and tech dominance.

What's in the document? The paper outlines the pitch the U.K. plans to make to the U.S., and it echoes rhetoric used by Vance and Trump that countries must choose whether to side with or against the U.S. on tech policy. It talks about combining British and American 'strengths' so that Western democracies can win the tech race — language that British Technology Secretary Peter Kyle has increasingly started to use in recent weeks — and signals ever-closer alignment with the U.S. on tech. The document outlines Britain's ambitions for an 'economic partnership' on technology. It pitches the case by pointing out that the U.S. and U.K. are the only two allies in the world with trillion-dollar tech industries, and emphasizes the importance of Western democracies beating rivals to cutting-edge breakthroughs. It leans into 'moonshot missions' in three areas relevant to national security — AI, quantum and space — as an initial phase of the deal, but doesn't go into detail. It also mentions collaboration on R&D, talent and procurement without going into the terms. British officials see it as a long-term play, with this document reflecting its early pitch.

What is not in there? Britain's pitch avoids mention of thorny issues like tariffs and regulation. Tariffs could come to a head as soon as Wednesday, when 25 percent steel and aluminum tariffs are due to come into effect. U.K. negotiators are pressing for a last-minute exemption. Also not in it: there is nothing in the document on nearer-term wins like a data deal, a digital trade agreement or specific investments. But by discussing procurement, the British pitch document opens the door to deals between the U.K. government and U.S. tech firms. Both Scale AI and Anthropic are hiring U.K. staff to sell their technology to the public sector.

And a national rebrand: Republicans and friendly Big Tech executives have attacked the U.K. and Europe's content moderation regulation as 'censorship'. In late February, House Judiciary Chair Jim Jordan of Ohio sent Britain a sternly worded letter over its Online Safety Act. Activists in the U.K. fear London will water down the law to secure a deal with the U.S., despite the government insisting it is not up for negotiation. To sidestep the issue, Britain is pitching its legislation to the White House as a move against pedophiles, terrorists and online criminals rather than anything to do with freedom of speech. (While the pitch document has little to say about the Online Safety Act, the law is already making an impact in Britain: from Monday, companies will be required to remove illegal content or risk high fines. Kyle, the tech secretary, told LBC radio Monday he's already thinking of additional legislation and pushed back against suggestions that the U.S. might force the U.K. to water down its tech legislation. 'Our online safety standards are not up for negotiation,' he said.)

Have we seen this before? The pitch echoes some of the Atlantic Declaration that Sunak and former President Joe Biden signed in June 2023. That agreement resolved 'to partner to build resilient, diversified, and secure supply chains and reduce strategic dependencies.' The latest iteration drops clean energy and health from the agenda.

Where do we go from here? Nothing in the deal is final or public, and it may take months for London and Washington to find agreement. Some British observers are getting nervous their government may roll over too fast to American tech interests. Last week the BBC wrote to the Competition and Markets Authority (CMA), the U.K. antitrust regulator, asking it to intervene so Apple and Google have less of a chokehold on app stores and cautioning that the companies' use of AI could bite into the BBC's bottom line. The complaint came days after the CMA closed an inquiry into Microsoft and OpenAI's partnership. And the deal could spell trouble for Brussels. Alongside his note to London, House Judiciary Chair Jordan also sent a howler to the EU over its Digital Services Act, which he called 'censorship'. Federal Communications Commission chair Brendan Carr blasted the DSA last week in a speech before Barcelona's Mobile World Congress. Trump has threatened to hit the U.K. and the EU with retaliatory tariffs for tech regulation he believes might unfairly target U.S. tech companies. Brussels tech chief Henna Virkkunen defended the EU's regulation, saying it was 'content-agnostic'. But if the U.K. offers to slim down its tech rules to please Washington, Europe will be left to make its defense alone.

CALL WAITING: Robert Heinlein's old adage that 'the moon is a harsh mistress' proved especially true for Nokia this month. The telecom company's Bell Labs division has been attempting to make the first cellular phone call on the moon as part of a partnership with NASA but sadly fell short during a recent lunar mission. 'Unfortunately, Nokia was unable to make the first cellular call on the Moon due to factors beyond our control that resulted in extreme cold temperatures on our user device modules,' Nokia wrote in an update over the weekend. Still, Nokia 'delivered the first cellular network to the Moon and validated key aspects of the network's operation,' the company added. It argued the mission entailed 'important steps toward proving that cellular technologies can meet the mission-critical communications needs of future lunar missions and space exploration.' NASA gave a fuller breakdown of the lunar mission on Friday.
