Dear AI, how am I doing?


Observer · 29-07-2025
The ever-growing role of AI recalls the era when the word 'automatic' entered everyday speech and was used to describe everything from watches to vehicles.
Where one once had to wind a watch, it became an automatic quartz. Applied to a device, 'automatic' meant it could switch itself off without human intervention, like a microwave with a timer or a kettle. Of course, 'automatically' also means spontaneously.
Of all that became automatic, which has been a boon for industrialisation, Artificial Intelligence is not just making work easier; it might also be taking away jobs. So the debate goes on.
What we may not realise is how much people might already be depending on AI in their mobile phones and tablets.
A friend of mine could not solve her skin ailment even after seeing a few skin specialists. The next thing she did seems to have given her comfort. She asked the AI what she should do, and there she was given suggestions, including lifestyle changes. She felt, finally, that someone was giving her hope and not just prescriptions and medications.
The sentiment was similar for another friend, who likes to ask AI how she is doing once in a while. So AI functions as a counsellor.
The journal app on the phone often prompts, "Want to reflect on your day?"
It is quite fun, actually, until we begin to wonder who else could be reading the journal.
After all, the AI apps read everything that is available online to summarise or come to an opinion.
There is another friend who likes to ask the AI what it has to say about him. He is often amazed at the details the AI has on him, including his gym membership.
So does it mean our online lives are all recorded? Data science continues to grow in importance, as does AI's role in global communication and in the interconnectivity of the business world. AI has made it all easy, whether natural language processing or image and audio recognition. The world of media has changed so much that we are at times unsure of the authenticity of videos. AI videos and images can be intriguing, but they have also opened doors to fake news, and their validity is debatable. AI has certainly made us smarter in how we run our lives, from applying energy-efficiency measures to saving time.
And now it might just be able to break into loneliness. But is there a danger of being overdependent on AI?
One obvious concern would be the lack of emotional intelligence. AI might be able to provide probabilities and logical answers, but emotion is very much human. No technology can substitute for that.
Time is so precious that it is probably the most expensive commodity, one that people may not have enough of to spare. Maybe it is this lack of time that makes some turn towards technology for a quiet conversation.
Sparing some time for someone who is looking for a listener might seem too much to ask while we rush through the daily challenges of life. But maybe it is time to consider giving time to others as an act of kindness.
In today's world, everything has to make sense - especially economic sense. Wages are based on hours spent working, and I wonder if that is when time became a precious commodity. The reality, though, is that time became the most precious commodity because even money cannot buy time.
Time lost is gone forever.
As a friend put it, now is the time to start - to start everything you have kept for tomorrow.


Related Articles

India well-equipped to tackle evolving online harms, cyber crimes: Govt tells Parliament

Mint · 24 minutes ago

New Delhi [India], August 9 (ANI): India's cyber legal framework, backed by the IT Act, BNS, and institutions like GAC, CERT-In, and I4C, is well-equipped to tackle evolving online harms and cyber crimes, said Union Minister of State for Electronics and Information Technology Jitin Prasada in Rajya Sabha. MoS Prasada said on Friday that the Government is conscious of the threats posed by deepfakes powered by Artificial Intelligence (AI), including synthetic audio, video and text. Such content can seriously impact a person's dignity, reputation, and right to privacy. It also raises concerns about platform accountability, he added, according to the Ministry of Electronics & IT. The MoS informed that to combat the rising threat of deepfakes and ensure an open, safe, and accountable cyberspace, the Government of India has established a comprehensive legal and institutional framework. Several existing laws address various aspects of AI-generated harms, including the Information Technology Act, 2000 (IT Act), which criminalises identity theft, impersonation, privacy violations, and the circulation of obscene content. It also empowers authorities to issue blocking orders (Section 69A) and takedown notices (Section 79) to intermediaries. Complementing this are the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated in 2022 and 2023, which mandate digital platforms to exercise due diligence, prevent the hosting of unlawful content, and ensure accountability. The Digital Personal Data Protection Act, 2023 (DPDP Act) adds another layer by requiring data fiduciaries, including AI firms, to process personal data lawfully and with user consent. Deepfakes using personal data without consent can be penalised under this Act. Further, the Bharatiya Nyaya Sanhita, 2023 (BNS) addresses misinformation and organised cybercrimes through Sections 353 and 111. 
The Government has also issued advisories (in December 2023 and March 2024) directing intermediaries to remove impersonation and deepfake content, inform users about misleading content, and ensure timely compliance with the Grievance Appellate Committee (GAC) orders. Intermediaries are also advised to label AI-generated content when outputs are potentially unreliable. India's broader cyber ecosystem includes platforms like the Indian Cyber Crime Coordination Centre (I4C) and its SAHYOG Portal, which enable coordinated removal of unlawful content. Citizens can report incidents through the National Cyber Crime Reporting Portal or by calling the helpline 1930. Additionally, CERT-In provides guidance on emerging AI threats, including deepfakes, and conducts public awareness campaigns through initiatives such as Cyber Jagrookta Diwas, National Cyber Security Awareness Month, and Safer Internet Day. (ANI)

Nvidia gets green light: US clears chip exports to China; company avoids billion dollars hit after CEO's meeting with Trump

Time of India · an hour ago

The United States Commerce Department has begun issuing licenses to Nvidia to export its H20 chips to China, a US official told Reuters on Friday, marking a major breakthrough for the AI giant's access to one of its most important markets. The move follows Washington's decision last month to reverse an April ban on sales of the H20 chip to China. Nvidia had designed the processor specifically to comply with Biden-era AI chip export rules. The restrictions were expected to wipe $8 billion off its July-quarter sales, the company had warned. Nvidia CEO Jensen Huang met with US President Donald Trump on Wednesday, two sources familiar with the matter told Reuters. In July, the company said it was applying for licenses to resume H20 sales and had been assured of speedy approvals. It remains unclear how many licenses have been granted, which Chinese companies can receive the chips, or the total value of approved shipments. Nvidia previously disclosed a $5.5 billion hit from the export curbs but later revised the figure down by $1 billion after reusing some materials. The H20 alone generated $4.6 billion in first-quarter sales, with China accounting for 12.5% of the company's overall revenue. The company has denied Chinese claims that the H20 poses national security risks, insisting the chip contains no 'backdoors' for remote access. 'Cybersecurity is critically important to us. NVIDIA does not have 'backdoors' in our chips that would give anyone a remote way to access or control them,' a Nvidia spokesperson earlier said in a statement. CNBC reported that China's Cyberspace Administration (CAC) has asked Nvidia to 'to clarify and submit relevant supporting documentation regarding security risks, including potential vulnerabilities and backdoors, associated with its H20 computing chips sold to China.' 
The regulator said Nvidia's AI chips have been reported to contain serious security flaws and noted that some US lawmakers are pushing for mandatory tracking features on advanced chip exports. Other high-end Nvidia AI chips remain barred from sale to China under longstanding US export controls aimed at slowing Beijing's AI and defense advancements. Huang has warned that Nvidia's industry leadership could be at risk without access to the Chinese market, where domestic rivals like Huawei are rapidly expanding their chip offerings.

Karen Hao on AI tech bosses: ‘Many choose not to have children because they don't think the world is going to be around much longer'

Irish Times · an hour ago

Scarlett Johansson never intended to take on the might of Silicon Valley. But last summer the Hollywood star discovered a ChatGPT model had been developed whose voice – husky, with a hint of vocal fry – bore an uncanny resemblance to the AI assistant voiced by Johansson in the 2013 Spike Jonze movie Her. On the day of the launch, OpenAI chief executive Sam Altman, maker of ChatGPT, posted on X a one-word comment: 'her'. Later Johansson released a furious statement revealing she had been asked to voice the new aide but had declined. Soon the model was scrapped. Johansson and a phalanx of lawyers had defeated the tech behemoths. That skirmish is one among the many related in Karen Hao's new book Empire of AI: Inside the Reckless Race for Total Domination, a 482-page volume that, in telling the story of San Francisco company OpenAI and its founder, Altman, concerns itself with large and worrying truths. Could AI steal your job, destabilise your mental health and, via its energy-guzzling servers, plunge the environment into catastrophe? Yes to all of the above, and more. As Hao puts it in the book: 'How do we govern artificial intelligence? AI is one of the most consequential technologies of this era. In a little over a decade, it has reformed the backbone of the Internet. It is now on track to rewire a great many other critical functions in society, from healthcare to education, from law to finance, from journalism to government. The future of AI – the shape this technology takes – is inextricably tied to our future.' It's a rainy day in Dublin when I travel to Dalkey to meet Hao, a Hong Kong-dwelling, New Jersey-raised journalist who has become a thorn in Altman's side. Educated at MIT, she writes for the Atlantic and leads the Pulitzer Centre AI Spotlight series, a programme that trains journalists in covering AI matters.
Among families grabbing a bite to eat in a local hotel, the boisterous kids running around tables in the lobby and tourists checking in and out, Hao, neat and professional in a cream blazer with her hair tied back, radiates an air of calm authority. 'AI is such an urgent story,' she says. 'The pursuit of AI becomes dangerous as an idea because it's eroding people's data privacy. It's eroding people's fundamental rights. It's exploiting labour, but it's humans that are doing that, in the name of AI.' Whether you're in Dublin or San Diego, AI is hurtling into our lives. ChatGPT has 400 million weekly users. You can't go on to WhatsApp, Google or Meta without encountering an AI bot. A recent UK Internet Matters survey revealed that 12 per cent of kids and teens use chatbots to offset feelings of loneliness. Secondary school students are changing their CAO forms to give themselves the best chance of thwarting the broken career ladder that AI has created. The impact of AI on the environment is extraordinary. Just one ChatGPT search about something as simple as the weather consumes vast energy, 10 times more than a Google search. Or, as Des Traynor of Intercom put it at Dalkey Book Festival recently, it's like using a 'massive diesel generator to power a calculator'. It's far from the utopian ideal of a medical solutions-focused, climate-improving enterprise that was first trumpeted to Hao when she began investigating OpenAI and Altman in 2019. As a 20-something reporter at MIT Technology Review covering artificial intelligence, Hao became intrigued by the company. Founded as a non-profit, OpenAI claimed not to chase commercialisation. Even its revamp into a partially for-profit model didn't alter its mission statement: to safely build artificial intelligence for the benefit of humanity. And to be open and transparent while doing it.
But when Hao arrived at the plush headquarters on San Francisco's 18th and Folsom Streets, all exposed wood-beam ceilings and comfy couches, she noticed that nobody seemed to be allowed to talk to her casually. Her photograph had been sent to security. She couldn't even eat lunch in the canteen with the employees. 'They were really secretive, even though they kept saying they were transparent,' Hao says. 'Later on, I started sourcing my own interviews. People started telling me: this is the most secretive organisation I've ever worked for.' The meetings Hao had with OpenAI executives did not impress her. 'In the first meeting, they could not articulate what the mission was. I was like, well, this organisation has consistently been positioning itself as anti-Silicon Valley. But this feels exactly like Silicon Valley, where men are thrown boatloads of money when they don't yet have a clear idea of what they're even doing.' Simple questions appeared to wrong-foot the executives. They spoke about AGI (artificial general intelligence), the theoretical notion that silicon chips could one day give rise to a human-like consciousness. AGI would help solve complex problems in medicine and climate change, they enthused. But how would they achieve this, and how would AGI technology be successfully distributed? They hedged. 'Fire is another example,' Hao was told. 'It's also got some real drawbacks to it.' Since that time, AGI has not been developed, but billions have been pumped into large language models such as ChatGPT, which can perform tasks such as question answering and translation. Built by consuming vast amounts of often garbage data from the bottom drawer of the Internet, AI chatbots are frequently unreliable. An AI assistant might give you the right answer. Or it might, as Elon Musk's AI bot Grok did recently, praise Adolf Hitler and cast doubt on people with Jewish surnames.
'Quality information and misinformation are being mixed together constantly,' Hao says, 'and no one can tell any more what are the sources of truth.' It didn't have to be this way. 'Before ChatGPT and before OpenAI took the scaling approach, the original trend in AI research was towards tiny AI models and small data sets,' Hao says. 'The idea was that you could have really powerful AI systems with highly curated data sets that were only a couple of hundred images or data points. But the key was you needed to do the curation on the way in. When it's the other way around, you're culling the gunk and toxicity and that becomes content moderation.' One particularly moving section of Hao's book is when she journeys to poorer countries to look at how people who work on the content moderation side of OpenAI cope day-to-day. Meagre incomes, job instability and exposure to hate speech, child sex abuse and rape fantasies online are just some of the realities contractors face. In Kenya, one worker's sanity became so frayed his wife and daughter left him. When he told Hao his story, the author says she felt like she'd been punched in the gut. 'I went back to my hotel, and I cried because I was like, this is tearing people's families apart.' Hao nearly didn't get her book out. She had thought she would have some collaboration with Altman and OpenAI, but the participation didn't happen. 'I was devastated,' she admits. 'Fortunately I had a lot of amazing people in my life who were like, 'Are you going to let them win or are you going to continue being the excellent journalist you know you can be, and report it without them?'' Understanding companies such as OpenAI is becoming more important for everyone.
In recent weeks, Meta, Microsoft, Amazon and Alphabet, Google's parent company, delivered their quarterly public financial reports, disclosing that their year-to-date capital expenditure ran into tens of billions, much of it required for the creation and maintenance of data centres to power AI's services. In Ireland, there are more than 80 data centres, gobbling up 50 per cent of the electricity in the Dublin region and more than 20 per cent nationally, as they work to process and distribute huge quantities of digital information. Hao believes governments must force tech companies to be more transparent about the energy their data centres consume. 'If you're going to build data centres, you have to report to the public what the actual energy consumed is, how much water is actually used. That enables the public and the government to decide if this is a trade-off worth continuing. And they need to invest more in independent institutions for cultivating AI expertise.' While governments have to play their part, it's difficult, reading the book, not to find yourself asking the simple question: why aren't tech bosses themselves concerned about what they're doing? Tech behemoths may be making billions – AI researchers are negotiating pay packages of $250 million from companies such as Meta – but surely they've given a thought to their children's future? And their children's children? Wouldn't they prefer them to live in a world that still has flowers and polar bears and untainted water? 'What's interesting is many of them choose not to have children because they don't think the world is going to be around much longer,' Hao says.
'With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution.' 'It's like a very intense version of utilitarianism,' she adds. 'You'd maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth.' Offering a more positive outlook, there are many in the AI community who would say that the work they are doing will result in delivering solutions that benefit the planet. AI has the potential to accelerate scientific discoveries: its possibilities are exciting because they are potentially paradigm-shifting. Is that enough to justify the actions being taken? Not according to Hao. 'The problem is: we don't have time to continue destroying our planet with the hope that one day maybe all of it will be solved by this thing that we're creating,' she says. 'They're taking real world harm today and offsetting it with a possible future tomorrow. That possible future could go in the opposite direction.' 'They can make these trade-offs because they're the ones that are going to be fine. They're the ones with the wealth to build the bunkers. If climate change comes, they have everything ready.' Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao is published by Allen Lane
