Microsoft Confirms Critical 10/10 Cloud Security Vulnerability

Forbes, 09-05-2025
Microsoft confirms 10/10 Azure vulnerability. (SOPA Images/LightRocket via Getty Images)
It's not often that a truly critical security vulnerability emerges that hits the maximum Common Vulnerability Scoring System severity rating of 10. This is one of those times.
Microsoft has confirmed multiple vulnerabilities rated as critical and impacting core cloud services, one of which has reached the unwelcome heights of that 10/10 criticality rating. The good news is that none are known to have been exploited in the wild, none have already been publicly disclosed, and as a user, there's nothing you need to do to protect your environment.
A total of four cloud security vulnerabilities have been confirmed by Microsoft. One hit the 10/10 rating, and two more aren't a million miles short, both rated 9.9. The final vulnerability is still critical, with a CVSS severity rating of 9.1. Let's look at them in order of criticality.
CVE-2025-29813
Critical Rating: 10.0
Azure DevOps Elevation of Privilege Vulnerability
Microsoft confirmed that this Azure DevOps pipeline token hijacking vulnerability is caused by an issue whereby Visual Studio improperly handles the pipeline job tokens, enabling an attacker to potentially extend their access to a project. 'To exploit this vulnerability,' Microsoft said, 'an attacker would first have to have access to the project and swap the short-term token for a long-term one.'
CVE-2025-29972
Critical Rating: 9.9
Azure Storage Resource Provider Spoofing Vulnerability
Microsoft said that this Azure server-side request forgery vulnerability could allow an authorized attacker to perform 'spoofing' over a network. In other words, a successful threat actor could exploit this vulnerability to distribute malicious requests that impersonate legitimate services and users.
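The article doesn't detail how Microsoft mitigated this server-side request forgery (SSRF) flaw, but the general shape of the bug class is a service fetching a URL on a caller's behalf without checking where that URL actually points. Below is a minimal, purely illustrative sketch of the kind of defensive check an SSRF mitigation typically involves; the helper name and blocklist are assumptions for illustration, not Microsoft's actual fix.

```python
from urllib.parse import urlparse
import ipaddress

# Hypothetical illustration: before a service fetches a caller-supplied URL,
# reject schemes other than HTTP(S) and literal IPs inside private, loopback,
# or link-local ranges (the link-local range includes cloud metadata endpoints
# such as 169.254.169.254, a classic SSRF target).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("169.254.0.0/16"),
]

def is_safe_url(url: str) -> bool:
    """Return False for URLs an SSRF-conscious service should refuse to fetch."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP; a real implementation would also
        # resolve it and re-check the resulting address before connecting.
        return True
    return not any(addr in net for net in BLOCKED_NETWORKS)

print(is_safe_url("https://example.com/report"))       # True
print(is_safe_url("http://169.254.169.254/metadata"))  # False
```

This is only a sketch of the bug class, not the patched code path; real mitigations also have to handle DNS rebinding, redirects, and IPv6 forms of the same addresses.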
CVE-2025-29827
Critical Rating: 9.9
Azure Automation Elevation of Privilege Vulnerability
Yet another Azure security vulnerability with a remarkably high official severity rating of 9.9, this one would enable a successful hacker to elevate privileges across the network thanks to an improper authorization issue in Azure Automation.
CVE-2025-47733
Critical Rating: 9.1
Microsoft Power Apps Information Disclosure Vulnerability
Hooray, not Azure this time, and dropping on the criticality rating scale to a 9.1 as well. This vulnerability, as the name suggests, would allow an attacker to disclose information over the network. It's another server-side request forgery vulnerability but this time impacting Microsoft Power Apps.
Here's the really good news among the bad critical vulnerability disclosure stuff: there is no patch to install, no updates to deploy, and no action required by the user at all. 'This vulnerability has already been fully mitigated by Microsoft. There is no action for users of this service to take,' Microsoft said of each of the cloud security issues mentioned.

That's because these disclosures fall under what the Microsoft Security Response Center describes as a commitment to provide comprehensive vulnerability information to customers by detailing cloud service CVEs once they have been patched internally. 'In the past,' Microsoft said, 'cloud service providers refrained from disclosing information about vulnerabilities found and resolved in cloud services, unless customer action was required.' With the value of full transparency now properly understood, that has changed. 'We will issue CVEs for critical cloud service vulnerabilities,' Microsoft confirmed, 'regardless of whether customers need to install a patch or to take other actions to protect themselves.'

Related Articles

Your Face Could Cost You The Job: The Dangerous Rise Of Facial Recognition At Work

Forbes, 22 minutes ago

The Covid-19 pandemic ushered in a new era of remote work. Many employers, desperate to track and monitor employees working away from the office, implemented different technology tools to surveil them. According to one 2025 report, more than half of Fortune 100 employees were required to return to the office full-time. With many back-to-office mandates in place, remnants of surveillance culture have remained, and many companies now use facial recognition software to manage employees. A recent survey by ExpressVPN indicated that 74% of U.S. employers use online monitoring and surveillance tools, with 67% using biometric tracking such as facial recognition and fingerprint scans.

Employers use facial recognition software in a number of ways: to track employee attendance, to identify employees, to interview and screen job candidates, to reduce the number of employee touchpoints, and to track employees (common for delivery and gig workers). What are the vulnerabilities and limitations of using facial recognition software in the workplace, and how does it reinforce biases?

There have been several cases where facial recognition software has caused harm, reinforcing biases in the workplace. In 2025, a complaint was filed with the Colorado Civil Rights Division and the Equal Employment Opportunity Commission (EEOC) against the software company Intuit and the human resources assessment software vendor HireVue. The complaint alleges that the AI used by HireVue resulted in an Indigenous and Deaf woman being denied a promotion based on her race and disability. In a separate case, a makeup artist at a leading brand claimed to have been fired in 2020 because of a video interview through HireVue, where the facial recognition software marked her poorly for her body language. And in 2024, an Uber Eats driver won a case in which he alleged that the company fired him because of racist facial recognition software: he claimed he was fired after the company's verification checks, which use facial recognition software, failed to recognize his face.

Scholar and writer Dr. Joy Buolamwini has focused much of her research on the flaws of facial recognition technology, discussing in her book Unmasking AI, as well as in the documentary Coded Bias, how the technology is less accurate at identifying darker skin tones. There is a wealth of evidence that facial recognition technology disproportionately impacts marginalized communities. The technology frequently misidentifies Black people, leading to wrongful arrests. One 2025 study indicated that facial recognition tools had higher error rates for adults with Down syndrome. Researchers also note that facial recognition tools are less accurate for transgender individuals and struggle to identify non-binary people.

Integrating facial recognition tools into the workplace can have deleterious effects on employees. A 2023 assessment of feedback shared with the White House Office of Science and Technology Policy indicated that digital surveillance in the workplace creates a sense of distrust among employees, making them feel constantly monitored and leading to a decline in productivity and morale. Workers also noted that digital surveillance could deter unionizing, and they raised concerns about data privacy and how the data collected would be used.

Employers should think twice about implementing facial recognition software in the workplace. Not only is this type of technology prone to bias, it can also erode employee trust and morale. Organizations that already have such technology in place should request more information from the vendor about audits and what accountability measures exist to ensure accuracy and mitigate bias. Employees should know their rights, and there must be transparency around how data is collected, stored, and used. We must deeply consider the future we are creating when our face holds the key to whether we are praised or punished.

Law enforcement department in Denver metro area begins using AI to assist with police reports

CBS News, 22 minutes ago

Police in Wheat Ridge are getting a technology upgrade. As of Aug. 1, the department has begun using artificial intelligence to help officers write their reports. "It was hand notes. You could go through a notepad in two shifts because you're writing everything down," said Sgt. Jamie Watson.

The department's AI technology works directly with body-worn cameras. "I don't have to sit there and go back in my memory and go now 'What did she say about this?' It's right there -- it's transcribed for you," Watson said. To demonstrate, Watson simulated a traffic stop: "Speed limit there is 40 (mph). And I hate to tell you, you were doing 52," she said during the demonstration. Once the body camera is activated, the system begins running and listening to the interaction in real time. Within seconds, a report is generated for the officer to review. "It says, 'On August 5, 2025, at approximately 2:29 p.m., I conducted a traffic stop in the parking lot of the Wheat Ridge Police Department.' Now, I never said that -- it just picks up your location based on where you are," Watson explained.

Wheat Ridge police spokesperson Alex Rose said the program costs about $100,000 a year and is an addition to the department's existing contract with Axon for body cameras. "In a net-net, you ask our officers, it's saving about 40% to 60% of their time behind a computer writing reports," Rose said. Rose says that means officers can now spend more time out in the community. The department's new AI tool joins a growing list of technologies used by law enforcement agencies, including body-worn cameras and automatic license plate readers.

Anaya Robertson, policy director at ACLU Colorado, said the use of AI in police reporting raises legal and ethical questions. "When we're talking about incident reports and case notes -- those are often admissible in court," Robertson said. "There's a general bias that AI doesn't make mistakes, which is problematic when you think about something being used as evidence." She emphasized the need for transparency: "(It's) needed so the community knows their law enforcement agency wants to utilize this kind of technology," she said.

In Wheat Ridge, the AI-generated report is only considered a first draft. Officers are prompted to include additional observations and personal reflections the AI may not capture. Built-in safeguards require officers to review the report carefully: errors are intentionally placed in the drafts and must be corrected before submission. Watson, who participated in the testing phase, said the technology allows her to spend less time behind a desk and more time in the field. "It is an incredible benefit to the officer," she said. According to Rose, the department consulted with the district attorney's office, which approved the use of AI-generated reports. Each report includes a disclaimer noting the use of artificial intelligence in its creation.

Will Both Parties Support Trump's Domestic Investment?

Bloomberg, 22 minutes ago

Jim Messina, The Messina Group Founder & CEO and Democratic strategist, and Sarah Chamberlain, Republican Main Street Partnership President & CEO, discuss President Trump hosting Apple CEO Tim Cook in the Oval Office to talk about Apple's domestic investment, and whether Democrats and Republicans will see eye to eye on the program. They also address the President's claim that there will be a 100% tariff on chips and semiconductors, explaining that the claim is more of a negotiation tactic. Both speak with Joe Mathieu on the late edition of Bloomberg's "Balance of Power." (Source: Bloomberg)