Latest news with #AIrisks


Malay Mail
2 days ago
- Politics
Fearing AI misuse, tech godfather Bengio sets up group to monitor rogue agents that could ‘greatly harm humanity'
NEW YORK, June 4 — Concerned about the rapid spread of generative AI, a pioneering researcher is developing software to keep tabs on a technology that is increasingly taking over human tasks.

Canadian computer science professor Yoshua Bengio, considered one of the godfathers of the artificial intelligence revolution, on Tuesday announced the launch of LawZero, a non-profit organisation intended to mitigate the technology's inherent risks. The winner of the Turing Award, often described as the Nobel Prize of computer science, has been warning for several years of the risks of AI, whether through malicious use or the software itself going awry.

Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost.

'Currently, AI is developed to maximise profit,' Bengio said, adding that it is being deployed even as it continues to show flaws.

'Every frontier AI system should be grounded in a core commitment: to protect human joy and endeavour. Today, we launch @LawZero_, a nonprofit dedicated to advancing safe-by-design AI,' he posted on June 3, 2025.

Moreover, Bengio fears that AI with human-like agency could easily be turned to malicious purposes such as disinformation, bioweapons, and cyberattacks. 'If we lose control of rogue super-intelligent AIs, they could greatly harm humanity,' he said.

One of the first objectives at LawZero will be to develop Scientist AI, a form of specially trained AI that can be used as a guardrail to ensure other AIs are behaving properly, the organisation said. LawZero already has more than 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy.
The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy while still showing significant problems. These include models that deceive and fabricate false information even as they increase productivity. In a recent example, AI company Anthropic said that during safety testing, its latest AI model tried to blackmail an engineer to avoid being replaced by another system. — AFP


Forbes
23-05-2025
Do Not Call These Numbers On Your iPhone, Android Phone
Never make these calls. You have been warned. This threat is surging and has more than doubled in 2025. Google and Microsoft have updated their platforms to better protect users, but that has not yet stopped this plague of attacks. The FBI has issued multiple warnings urging users to avoid the simple mistake that could see data, money, even identities stolen.

The latest warning comes from Guardio's security team, which says it has 'spotted a 137% surge in tech support scams between November 2024 and April 2025. According to our data, these dangerous scams have more than doubled over the past few months.'

Such scams are not new — cybercriminals reach out to victims to fix made-up issues with PCs or phones. What is new is 'a spike closely tied to scammers using AI tools to scale their operations, allowing them to create convincing scams at scale.' That surge is one new AI risk. The other is that such scams are now 'more convincing, personal, and far more effective, catching more people off guard. These scams lead to significant financial losses, and the money usually can't be recovered.'

Guardio found that whatever the lure — a frozen browser, a popup, an alert — the attack will likely push users to make a phone call: a 1-888 number purporting to reach Microsoft, Google or Apple support directly. There the victim will be convinced to install software, grant direct access to their device, or hand over personal or financial information. As ever with phone scams, remember this is what the scammers do for a living, and they do it well.

Tech support scams have been making headlines in recent weeks, with Google highlighting the risks and updating its platforms to help users flag them in real time. Do not call these numbers. 'Tech support scams are an increasingly prevalent form of cybercrime,' Google warns, 'aimed at extorting money or gaining unauthorized access to sensitive data… Tech support scams on the web often employ alarming pop-up warnings mimicking legitimate security alerts.
We've also observed them to use full-screen takeovers and disable keyboard and mouse input to create a sense of crisis.'

Google has updated Safe Browsing with on-device analysis to monitor signals that might flag this type of attack in real time. 'If Safe Browsing determines that the page is likely to be a scam,' Google says, 'Chrome will show a warning.'

The company is also updating Android. 'Our research shows that phone scammers often try to trick people into performing specific actions to initiate a scam, like changing default device security settings or granting elevated permissions to an app. These actions can result in spying, fraud, and other abuse by giving an attacker deeper access to your device and data. To combat phone scammers, we're working to block specific actions and warn you of these sophisticated attempts.'

Google warns that it does not call or proactively reach out to users to resolve technical or account issues. Never.

Microsoft echoes these warnings. 'Tech support scams are an industry-wide issue… If you allow them to remote into your computer to perform a "fix," they will often install malware, ransomware, or other unwanted programs that can steal your information or damage your data or device.' Critically, Microsoft warns users that its error and warning messages 'never include phone numbers.' Never.

Meanwhile, the surge continues. 'We think the huge rise in tech support scams we're seeing is really concerning and worth highlighting,' Guardio told me.