Urgent call to update laws amid rise of 'silent threat' on Aussie roads: 'We need to catch up'
Smart glasses are emerging as a "silent threat" on Australian roads, with experts warning their use behind the wheel poses serious risks to safety and privacy, all while slipping through existing legal cracks.
Often designed to resemble regular eyewear, smart glasses come equipped with discreet cameras, microphones and even display screens built into the lenses. While they're marketed as a blend of fashion and function, their increasing use in Australian society, including behind the wheel, is triggering concern among industry specialists.
Monash University research found that 13.5 per cent of Australian smart glasses owners had used the technology while driving or cycling. Even more had recorded others without their consent, often in spaces where filming is restricted.
Now, Professor Hussein Dia, an expert in future urban mobility, has joined a growing chorus of voices calling for tighter regulation before smart glasses become a subtle but widespread threat to both road safety and personal privacy. Speaking to Yahoo News, Dia admitted the situation is worse than he'd anticipated.
"It is more concerning than I initially thought," Dia said. "These glasses can not only record and store video on the device itself, but when paired with a mobile app, the videos can also be sent to the cloud or even streamed live for everyone to watch."
The potential risks, he said, are far-reaching. "So indeed, they are a 'silent threat' to road safety and privacy. Their rising use, including while driving, presents serious risks to road safety and personal privacy, and their use seems to be outpacing current Australian regulations. They can distract drivers, stream video in real time, and record people without them ever knowing.
"It's a legal blind spot, and we need to catch up quickly."
Unlike dashcams or built-in vehicle cameras, which are fixed in place, clearly visible, and generally accepted as tools for recording evidence, smart glasses are different — they're wearable, can move with the person, and often look just like normal glasses, making them hard to detect.
Major brands have all released versions of the product, including Meta (in partnership with Ray-Ban), Amazon, and Google.
"I thought to myself, well, how do they compare to dashcams or built-in cameras that record surroundings in some new vehicles on the market today?" Dia said.
"This is where it gets interesting, dashcams [are] used for insurance or enforcement purposes mainly, but smart glasses are mobile and discreet, making them far more intrusive and especially, they could be mistaken for ordinary eyewear. They can store footage, upload it to cloud platforms, or even livestream content without the subject's knowledge, creating significant privacy concerns.
"Dashcams seem to have some regulations around them, and they are visible, but smart glasses allow real-time recording of anyone, anywhere, without warning. That's a major red flag for both road use and everyday public settings."
Dia explained that while dashcams are accepted because they record the road, smart glasses raise serious privacy concerns because they can go unnoticed in places like shops, schools, and restrooms, capturing personal interactions without consent.
He also warned that they act as a visual and cognitive distraction on the roads. Their ability to stream, record, and display content in a driver's line of sight, Dia said, also makes them potentially more dangerous than mobile phones.
On both fronts, the law offers little protection. "While it's illegal to touch your phone while driving, there's no clear law banning smart glasses use behind the wheel. Likewise, Australian privacy laws do little to restrict individuals recording in public spaces, unless audio is involved or the act is deemed harassing," he said.
"Current privacy laws offer limited protection — in Australia, individuals are generally allowed to record in public spaces, though state-based surveillance laws vary. But most do not adequately address personal use of wearable cameras, particularly where the recording is silent, visual-only, and not clearly signposted." To address the growing risk, he's urging lawmakers to act — not only through clearer legislation but by encouraging manufacturers to build in better safeguards.
"With no uniform national rules around personal surveillance in public and limited road safety provisions for wearable tech, I think we need some urgent regulatory updates that could include amendment of Australian road laws to explicitly address wearable visual displays while driving; strengthening privacy laws to account for wearable and mobile surveillance devices; and getting manufacturers involved to introduce clearer recording indicators, and restrict usage in sensitive areas," he said.
"Currently, the industry seems largely relying on users to self-regulate, which is not sufficient given the increasing sophistication and miniaturisation of these devices.'
"As smart glasses become more common, this won't be just a tech issue, but more so a public safety issue. Without stronger rules, action and more public awareness, we would be heading into risky territory where smart glasses risk becoming a widespread yet largely unregulated form of mobile surveillance with implications not just for road users, but for public life more broadly."
Do you have a story tip? Email: newsroomau@yahoonews.com.