Comfort or isolation: Pakistanis weigh pros and cons of ChatGPT as confidant

Arab News | 7 hours ago

LAHORE: When Mehak Rashid looks back on a restless, emotionally fragile phase of her life earlier this year, an unlikely confidant comes to mind.
'When nobody else was listening to you and everybody else thought you were crazy, ChatGPT was there,' Rashid, a metallurgy and materials engineer from Lahore, told Arab News.
'I just wanted to be heard… It will not give you a judgment and that's so beautiful.'
Rashid began using the chatbot after noticing her children experimenting with it for schoolwork. Now, she often turns to it for 'answers' and 'different perspectives.'
'It helps me in every way,' she said.
Since its launch in November 2022, ChatGPT has attracted hundreds of millions of users and, by mid-2025, logged nearly 800 million weekly active users. Many in Pakistan, among the top 20 countries for ChatGPT traffic, use it daily for emotional support, venting feelings, or late-night reassurance when friends aren't available.
Globally, an estimated 40 percent of ChatGPT conversations relate to mental well-being, and a Sentio University survey found nearly half of users with ongoing mental health issues rely on it for support: 73 percent for anxiety, 63 percent for advice, and 60 percent for help with depression.
While this instant comfort helps some cope, psychologists warn that heavy reliance on AI can weaken real human connections and deepen social isolation in a country already short on mental health resources.
A March 2025 study by OpenAI and MIT found that frequent users reported increased dependence and loneliness, suggesting that AI companionship can erode human bonds and intensify feelings of isolation rather than resolve them.
For Lahore-based designer Khizer Iftikhar, ChatGPT began as a professional aid but gradually crept into his personal life and started affecting his relationships, especially with his wife.
'I have a very avoidant attachment style,' he said. 'Instead of confronting someone, I can just talk about the good part with people and let the chatbots handle the negative part.'
Iftikhar described ChatGPT as 'a multiple personality tool' that lacked the balance of real human interaction.
Many experts say relying on AI models can weaken bonds over time, reduce empathy, and make people more emotionally self-contained, leading them to prefer the predictable reassurance of a machine over the give-and-take of genuine human connection.
'With humans, relationships are about give and take. With chatbots, it's not like that,' Iftikhar said.
Despite once trying therapy, he now uses ChatGPT to process emotions and trusts people only for practical advice.
'I would trust a chatbot more when it comes to the feelings part,' Iftikhar said. 'But when it comes to the work part, I will trust humans more.'
In Islamabad, 26-year-old Tehreem Ahmed initially used ChatGPT for office transcriptions and calorie tracking, but it eventually became an emotional lifeline.
One night, overwhelmed by troubling news and unable to reach friends, she turned to the chatbot.
'It was around 3am and none of my friends were awake,' she said. 'So, I went on ChatGPT and I typed in all that I got.'
The chatbot encouraged her to pause and reflect before reacting.
'I feel like it responded well because I gave it a smarter prompt… Had I just said, 'Hey, this has happened. What should I do?' I guess it would have just given me all the options… I could have self-sabotaged.'
While Ahmed does not fully trust the bot, she said she prefers it to people who might dismiss her feelings.
'If I know my friend is not going to validate me, I'd rather go to the bot first.'
'DETERIORATING HUMAN CONNECTIONS'
For one anonymous Lahore-based tech professional, ChatGPT quickly shifted from a practical helper to an emotional crutch during a difficult relationship and the ongoing war in Gaza.
She first used it in late 2023 to navigate a job change, edit CVs, and prepare for assessments. But emotional upheaval deepened her reliance on the bot.
'That [romantic] relationship didn't progress,' she said. 'And the platform helped me a lot emotionally in navigating it.'
Her sessions became so layered and spiritual that some ended in 'prostration from spiritual overwhelm.'
Still, she was careful not to project too much onto the tool:
'It's a mirror of my flawed self… I try not to let the tool simply reflect my ego.'
Psychologists caution that without the challenges and messiness of real interactions, people using chatbots may lose vital social skills and drift further into isolation.
Mahnoor Khan, who runs MSK Clinics in Islamabad, agreed, saying the search for emotional safety in AI was becoming increasingly common as people feared judgment from others.
'Over a period of time, human connections have deteriorated,' the psychologist said. 'When people share something vulnerable with a friend, they often feel judged or lectured.'
To avoid that, many turn to chatbots. But Khan warned that AI's constant affirmation could have unintended consequences.
'It will tell you what you want to listen to… If you're happy, it's your companion; if you're sad, it instantly talks to you. The downside is that you are getting away from socialization.'
The trend is especially troubling in a country where mental health care remains deeply under-resourced: Pakistan has fewer than 500 psychiatrists for a population of over 240 million, according to WHO estimates.
No wonder, then, that even people with clinical mental health issues are turning to AI.
Khan recalled the case of a young woman who used ChatGPT so often that it replaced nearly all her social interaction.
'She had a lot of suicidal ideations,' Khan said. 'She kept feeding ChatGPT: 'I feel very depressed today… you tell me what I should do?' ChatGPT kept telling her to avoid friends like that.'
Eventually, she cut everyone off.
One day, she asked the chatbot what would happen if she overdosed on phenyl.
'ChatGPT said, 'There are no consequences. In case you overdose yourself, you might get paralyzed,'' Khan recalled.
The girl read only the first half and attempted suicide.
She survived.

