Latest news with #TechTonic


Hindustan Times
a day ago
Decoding UK's OSA, ChatGPT adds emotional support, and pondering Windows' future
The UK's Online Safety Act (OSA) and its newly enforced provisions are worth a conversation this week. Most of the online media coverage would lead you to believe that a new act has just been passed into law, but that isn't so — the Online Safety Act received Royal Assent on 26 October 2023, some provisions came into force on 10 January 2024, and additional elements took effect on 1 April 2024. We're talking about this now because the critical age verification requirements took effect on 25 July 2025, which means all online platforms accessible in that part of the world are legally required to implement "highly effective" age assurance measures. This will not have a UK-only fallout; it could potentially reshape the digital landscape globally, much in the way the GDPR, or General Data Protection Regulation of 2016, reshaped how online platforms and services collect and handle user data and echoed through subsequent regulations worldwide.

The mandatory age verification measures that came into place late last month are meant to provide a substantial legal assurance of a user's age and consent, the idea being to reduce access to content such as pornography, or anything that encourages self-harm, suicide or eating disorders, on the World Wide Web.

There are two sides to this coin. Tech companies and content creators are alarmed by the OSA's sweeping new requirements. Any site accessible in the UK — including social media, search engines, music sites, and adult content providers — that does not enforce age checks to prevent children from seeing harmful content now faces potential fines of up to 10% of its revenue for non-compliance. This could very well pressure platforms into implementing invasive verification systems. Depending on how a specific platform does it, methods include scanning your face, your credit card, or an identity document before you can access content. The UK's regulators have been at it for a while; a recent case in point is the Investigatory Powers Act, which we decoded in my Tech Tonic column recently, and which would have forced tech companies to disable active encryption methods, putting user data at significant risk.

There are privacy and access implications to this, something digital rights advocates warn about, detailing that these measures have the potential to create an unprecedented surveillance infrastructure, with massive databases of personal and biometric information inevitably vulnerable to breaches and misuse. Users must now choose between privacy and access, fundamentally altering the internet's traditionally open nature.

'The Act, which is now coming in enforcement stages, exemplifies how well-intended laws can cause unintended consequences on other aspects of technologies. The mandatory use of accredited technology is bound to weaken end-to-end encryption, which is the hallmark of a free digital society, without which commerce or personal communications systems cannot work. Any of the current age verification methods cannot be imposed without addressing biometric surveillance creep, data breaches and misuse, and increased centralization of user data,' explains a spokesperson of Software Freedom Law Centre India, in a conversation with us.

'The OSA's age assurance rules require platforms to use facial scans, upload IDs, or verify age through banking or telecom data. These measures raise serious privacy concerns and discourage online anonymity.
Larger platforms are testing third-party software for this, but the risk does not disappear, it spreads. User data could now sit with multiple external vendors, increasing the chances of leaks or misuse,' points out Vikram Jeet Singh, Partner at BTG Advaya, a law firm.

Possible global implications cannot be ignored, considering the OSA's impact extends far beyond British borders, potentially influencing online speech frameworks worldwide. There is an argument that while the Act may be effective in some form, it breaks the right to privacy and free speech, while also compromising cybersecurity. Countries such as India, already grappling with content regulation challenges, are likely watching the UK's approach closely, as either a potential model or a cautionary tale. The precedent set by Britain's age verification requirements could normalise similar measures globally, creating a fragmented internet where access to information depends not just on geography, but also on a willingness to submit to digital surveillance. This is something the spokesperson details: 'Such laws generally have global ripple effects, like the GDPR. Companies may choose to adopt UK-compliant policies to avoid the costs related to fragmentation. Countries will emulate such provisions to curb dissent and justify surveillance under the guise of child protection or moral regulation by the state.'

What's the way forward? The UK's now complete Online Safety Act effectively represents a watershed moment for internet governance, one that pits fundamental questions about digital rights against any government's commitment to protecting children. The intent of the UK government is commendable, in terms of what it is trying to achieve — the internet as a safe space for children. However, the immediate surge in VPN downloads in the UK on the Apple App Store and Google Play Store suggests citizens aren't likely to play along. Does that potentially undermine the Act's effectiveness?

EMOTIONAL SUPPORT

OpenAI says it is updating ChatGPT (irrespective of which model you use) to give it an ability to detect mental or emotional distress. The AI company wants ChatGPT to work better for users when they want guidance, and perhaps a pep talk, rather than pure facts or information. 'I'm feeling stuck—help me untangle my thoughts' is an example OpenAI mentions, among others, to indicate the GPT models will be more capable of listening to the reasoning behind a user's thoughts, rather than just tokenising those words into a response. Newly added, too, are gentle reminders during long sessions to encourage breaks. OpenAI isn't pulling this out of a hat; it says it has worked with over 90 physicians across over 30 countries (including psychiatrists, paediatricians, and general practitioners) to build custom rubrics for evaluating complex, multi-turn conversations, engaged human-computer interaction (HCI) researchers and clinicians for feedback on how well it identifies concerning behaviours, and convened an advisory group of experts in mental health, youth development, and HCI. The company admits it hasn't always gotten this right; case in point, an update earlier this year that made the model respond in a tone that was too agreeable, bordering on saying what sounded nice instead of what was actually helpful.
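To make the two described behaviours concrete, here is a minimal, entirely hypothetical sketch of what 'detect distress, then listen rather than lecture' and 'nudge a break during long sessions' could look like. This is not OpenAI's implementation; the keyword list, threshold, and function names are illustrative assumptions.

# A toy illustration only, not OpenAI's implementation. The markers,
# threshold, and routing logic below are assumptions made for this sketch.
import time

BREAK_REMINDER_AFTER_SECONDS = 45 * 60   # hypothetical "long session" threshold
DISTRESS_MARKERS = {"stuck", "hopeless", "overwhelmed", "no point"}  # naive proxy

def classify_intent(message: str) -> str:
    """Crude stand-in for a trained classifier: route to a supportive,
    reflective mode when distress markers appear, else answer factually."""
    lowered = message.lower()
    if any(marker in lowered for marker in DISTRESS_MARKERS):
        return "support"   # listen, reflect, help untangle thoughts
    return "factual"       # answer with plain information

def maybe_break_reminder(session_start: float) -> str | None:
    """Gentle nudge during a long session, as described above."""
    if time.monotonic() - session_start > BREAK_REMINDER_AFTER_SECONDS:
        return "You've been chatting a while. This might be a good moment for a short break."
    return None

# OpenAI's own example prompt would route to the supportive mode here:
print(classify_intent("I'm feeling stuck, help me untangle my thoughts"))  # "support"

In any real system, the keyword check would be a trained classifier built on the kind of clinician-designed rubrics OpenAI describes; the sketch only shows the routing, not the detection quality.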
MAKING NOTES

In what is usually the most difficult quarter for iPhone sales, with the spectre of the traditional September refresh looming large, Apple has reported Q3 earnings higher than expected. The company posted quarterly revenue of $94 billion, up 10 percent year over year, a June quarter revenue record that beat expectations of $89.3 billion. Apple CEO Tim Cook again emphasised the importance of India for Apple's growth trajectory.

'The world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS.' Those are the words of David Weston, Microsoft's Corporate Vice President of Enterprise & Security, in what is apparently the first of Microsoft's 'Windows 2030 Vision' video series. What does this mean? Since he doesn't elaborate any further than this breadcrumb, I'll lay out the possibility for you: another attempt at a Windows overloaded with AI, perhaps even more of it within the OS itself and the apps you use on the computing device, with some element of agentic features that utilise natural language understanding and context from a user's data as well as what's on the screen. Ready for the future?

MEET'S GETTING SMART

Google Meet is getting a rather interesting new feature, and while it may seem like there's some sorcery to it, it is really attention to detail. Google says that if you join a Meet call from your own device, such as a laptop, the video meeting platform can detect when you are doing so in a large, conference room-esque physical space. In an attempt to reduce, or totally eliminate, the problem of sound echo on such a call, Meet will suggest joining the call using something called 'Companion Mode'. Mind you, this presently only works if you are joining a Meet call from your laptop in the Google Chrome web browser, and it rolls out for all Google Workspace customers with Google Meet hardware devices. Meet uses your laptop's microphone to listen for an ultrasonic signal emitted by the room's Meet hardware, which is how it knows you are in that room (a rough sketch of how this sort of detection works follows below). 'This wayfinding feature helps ensure a seamless, echo-free start to your meeting. When you join using the highlighted Companion mode button, you will also be automatically checked into the correct room,' says Google, in an official post. This will require your Google Workspace admins (basically, your organisation's IT folks) to enable 'Proximity Detection' on the Google Meet hardware installed in a conference room, allowing that hardware to detect nearby devices. For this, I am sure there will be the typical inertia, reasoned around 'compatibility' and 'security', to mask ineptitude. At this point, based on my experiences with IT folks, easier said than done.
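On that ultrasonic detection: Google hasn't published Meet's actual signal design, but the general technique is well understood. Below is a minimal sketch, assuming a beacon tone at around 19 kHz and a pre-captured buffer of microphone samples; the frequency, threshold, and names are illustrative assumptions, not Meet's real parameters.

# Sketch of ultrasonic proximity detection: the room hardware emits a
# near-ultrasonic tone, and the laptop checks whether its microphone hears
# meaningful energy at that frequency. All parameters here are assumed.
import numpy as np

SAMPLE_RATE = 48_000      # Hz, a typical laptop microphone sample rate
BEACON_FREQ = 19_000      # Hz, assumed near-ultrasonic beacon frequency
ENERGY_THRESHOLD = 0.01   # assumed detection threshold (relative energy)

def beacon_energy(samples: np.ndarray) -> float:
    """Fraction of spectral energy sitting at the beacon frequency."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    beacon_bin = int(np.argmin(np.abs(freqs - BEACON_FREQ)))
    return float(spectrum[beacon_bin] / (spectrum.sum() + 1e-12))

def in_meeting_room(samples: np.ndarray) -> bool:
    """True if the beacon is audible, meaning the laptop is probably in the
    room, at which point a client would suggest Companion Mode."""
    return beacon_energy(samples) > ENERGY_THRESHOLD

Capturing the audio, and in practice decoding a modulated room identifier rather than a bare tone (so the client knows which room to check you into), is left out; the point is simply that a microphone plus a Fourier transform is enough to detect co-location, with no network-side magic required.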


Hindustan Times
26-07-2025
- Business
Tech Tonic: God complex is why AI chiefs can't see the humans they'll displace
'Every job will be affected. Some jobs will be lost, some jobs will be created. But every job will be affected. And immediately, it is unquestionable, you're not going to lose your job to an AI, but you're going to lose your job to somebody who uses AI,' Nvidia CEO Jensen Huang said at the recent Milken Institute Global Conference. I respect Huang, as many of us do, but with this one comment, he has joined the long line of ivory tower millionaires and billionaires trying to play God. Much like the rest of the Silicon Valley folks who love to be seen giving a quote, when pressed about what these new jobs (the ones we'll likely have to 'change to') might be, he didn't have a clue.

I'm being brusque. He doesn't have a clue, and I've heard that before. Many times. We're potentially hurtling toward a humanitarian catastrophe, and the levels of hallucination are astounding. Think about it — would you be reading this warning (and about this insincere lip service) if artificial intelligence (AI) had replaced this human? If I were to have AI write this for your reading pleasure, I'd expect to be summoned by my editor-in-chief and my executive editor, and get roundly ticked off — rightly so. Human perception, irrespective of the millions of dollars being thrown toward it, is not something artificial intelligence, artificial general intelligence, or superintelligence — whatever you call it — will be able to replicate. I've said this before, and I'll say it again for every million the tech billionaires spend toward that perceived mission.

Not everyone's in the same boat of delusion, and that is why there's newfound respect for Anthropic CEO Dario Amodei. At the company's first developer conference earlier this summer, he made two things very clear: AI usage could spike unemployment among entry-level jobs to as high as 20% in the next five years, and AI companies as well as governments must stop 'sugar-coating' things. His hope, when he said this, was to jolt fellow AI companies and the government into preparing for and protecting the humans who work jobs for a living. Mind you, Huang's comments came as he bristled at the exact warning Amodei expressed. Axios reported that Huang told them, 'I don't know why AI companies are trying to scare us. We should advance the technology safely just as we advance cars safely. ... But scaring people goes too far.'

Fine, Amodei shouldn't scare people; we can sort of agree, for a moment, for the sake of it. But what would Huang suggest instead? That is exactly where leaders like him come up short. Very, very short. Ambiguous lines such as 'new jobs will be created' don't say anything, despite words being cobbled together into a semblance of a sentence. I was once told by a tech executive that we must think of the coming of AI as the evolution of the car all those years ago, when everyone said the job of the horse carriage driver would be eliminated. Wrong comparison. Cars still needed drivers to work. AI, with all its claimed super-intelligence, will go about its work in isolation. Even when it is wrong (which it will be, much of the time), it'll be wrong with some level of confidence. Very Gen Z vibes, if I may take the liberty to infuriate another demographic. Any and all comparison with perceived historical technological disruption is intellectually dishonest, at best. Take the Industrial Revolution, for example, which unfolded over decades, allowing generations of humans to adapt.
Previous waves of automation primarily affected manual labour, while creating new opportunities for cognitive work. AI's approach is different, since it targets precisely those knowledge-based jobs that were supposed to be resistant to machines taking over — from radiology to legal research, financial analysis, writing code, managing a company's tech infrastructure, or creating art.

The Nvidia CEO's confidence stems from the fact that he's perched atop the AI revolution's most profitable enterprise — valued at around $4 trillion. 'AI agents are the new digital workforce,' he declared at CES 2025, seemingly oblivious to the irony of celebrating the replacement of human workers while preaching job creation. For Huang and his fellow tech titans, AI represents a multi-trillion-dollar opportunity. For millions of workers and job roles, it represents an existential threat to their livelihoods.

Perhaps most galling is the tech industry's wilful blindness to the speed of AI displacement versus job creation. LinkedIn's 2025 Work Change Report — make of it what you will — suggests that 70% of the skills used in most jobs could change by 2030. This isn't gradual evolution, but economic whiplash that will displace humans in unimaginable numbers. While Huang speaks airily of workers adapting and learning new skills, he ignores a very simple yet harsh reality: retraining takes months or years, but job losses usually happen overnight. An inward look at the happenings in Silicon Valley — if they bother to take one — will illustrate that. OpenAI CEO Sam Altman says the human workforce should treat AI as 'friends'. Shopify CEO Tobias Lütke blessed the world with a long post on X in April to say, 'Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.' When Micha Kaufman, CEO of Fiverr, an online marketplace, says 'AI is coming for your jobs. Heck, it's coming for my job too. This is a wake-up call,' he forgets that his financial reality and the financial reality of millions of families aren't at all the same.

Tech companies and their leaders see business and money. They don't see humans. Unfortunately, we seem to have handed over the reins of any conversation about our futures to the same people. The future will be littered with economic and humanitarian catastrophes, whether AI succeeds or fails. If you think Silicon Valley cares — beyond their series of reassuring bromides — you're very wrong. As they say, Father Time makes no round trips.

Vishal Mathur is the Technology Editor at HT. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice versa. The views expressed are personal.