OpenAI Disables ChatGPT Search Sharing Feature Amid Privacy Concerns


Hans India | 5 days ago
OpenAI has decided to permanently shut down a controversial feature that allowed users to make their ChatGPT conversations publicly searchable on platforms like Google. The move comes in response to growing concerns over privacy and the unintentional exposure of sensitive user information online.
This feature, rolled out earlier in the year, was designed with the intention of letting users voluntarily share useful ChatGPT conversations on the web. It was strictly optional—users had to select a specific chat and then check a box allowing it to be indexed by search engines. However, despite this two-step opt-in process, several users inadvertently ended up sharing highly personal information.
According to OpenAI's Chief Information Security Officer, Dane Stuckey, the decision to discontinue the feature was made after realizing the potential for accidental oversharing far outweighed the benefits.
"We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines (see below)," Stuckey shared in a post on X (formerly Twitter).
Although the intention behind the feature was to help users discover valuable AI-generated content and insights, it resulted in unintended consequences. A report by Fast Company highlighted that over 4,500 ChatGPT chats had become visible through Google Search. While many of these were harmless, others contained sensitive details such as real names, locations, or emotionally intimate content users had disclosed to the chatbot in confidence.
Even after users deleted the shared chat or its link, those conversations could still be found through search engines until their indexes were updated—a delay that left private content exposed far longer than expected.
OpenAI CEO Sam Altman acknowledged the issue in a recent podcast, noting the unique relationship users have with ChatGPT.
He said that people often treat the AI assistant like a confidante, discussing topics they might not share even with close friends.
This deep sense of trust, while validating for the platform, also made the chat-sharing feature inherently risky.
The company has now removed the sharing option altogether and is actively coordinating with search engines to eliminate previously indexed chats. The update is being seen as a necessary corrective step that prioritizes user trust and privacy.
The removal of the feature marks the end of what OpenAI has referred to as an "experiment" to make AI conversations discoverable on the web—a well-intentioned idea that couldn't outweigh the privacy risks in the real world.

Related Articles

Decoding UK's OSA, ChatGPT adds emotional support, and pondering Windows' future

Hindustan Times | 9 minutes ago

The UK's newly enforced provisions of the Online Safety Act (OSA) are worth a conversation this week. Most online media coverage would lead you to believe a new act has just been passed into law, but that isn't the case: the Online Safety Act received Royal Assent on 26 October 2023, some provisions came into force on 10 January 2024, and additional elements took effect on 1 April 2024. We're talking about it now because the critical age verification requirements took effect on 25 July 2025, which means all online platforms accessible in that part of the world are legally required to implement "highly effective" age assurance measures. In fact, this will not have a UK-only fallout: it could potentially reshape the digital landscape globally, much as the GDPR, or General Data Protection Regulation of 2016, influenced how online platforms and services collect and handle user data, and shaped subsequent regulations worldwide.

The mandatory age verification measures that came into place late last month are meant to provide a substantial legal assurance of a user's age and consent, the idea being to reduce access to content such as pornography, or anything that encourages self-harm, suicide or eating disorders, on the World Wide Web.

There are two sides to this coin. Tech companies and content creators are alarmed by the OSA's sweeping new requirements. Any site accessible in the UK, including social media, search engines, music sites, and adult content providers, that does not enforce age checks to prevent children from seeing harmful content now faces potential fines of up to 10% of its revenue for non-compliance. This could very well pressure platforms into implementing invasive verification systems. Depending on how a specific platform does it, methods include scanning your face, a credit card, or an identity document before you can access content.
The UK's regulators have been at it for a while; a recent case in point is the Investigatory Powers Act, which we decoded in my Tech Tonic column recently, and which would have forced tech companies to disable active encryption methods, putting user data at significant risk. There are privacy and access implications here, something digital rights advocates warn about, detailing that these measures have the potential to create an unprecedented surveillance infrastructure, with massive databases of personal and biometric information inevitably vulnerable to breaches and misuse. Users must now choose between privacy and access, fundamentally altering the internet's traditionally open nature.

'The Act, which is now coming in enforcement stages, exemplifies how well-intended laws can cause unintended consequences for other aspects of technology. The mandatory use of accredited technology is bound to weaken end-to-end encryption, which is the hallmark of a free digital society, without which commerce or personal communication systems cannot work. None of the current age verification methods can be imposed without addressing biometric surveillance creep, data breaches and misuse, and increased centralisation of user data,' explains a spokesperson of Software Freedom Law Centre India in a conversation with us.

'The OSA's age assurance rules require platforms to use facial scans, upload IDs, or verify age through banking or telecom data. These measures raise serious privacy concerns and discourage online anonymity. Larger platforms are testing third-party software for this, but the risk does not disappear, it spreads. User data could now sit with multiple external vendors, increasing the chances of leaks or misuse,' points out Vikram Jeet Singh, Partner at law firm BTG Advaya.

The possible global implications cannot be ignored, considering the OSA's impact extends far beyond British borders, potentially influencing online speech frameworks worldwide.
One can argue that while the Act may be effective in some form, it erodes the right to privacy and free speech while also compromising cybersecurity. Countries such as India, already grappling with content regulation challenges, are likely to be watching the UK's approach closely, as a potential model or a cautionary tale. The precedent set by Britain's age verification requirements could normalise similar measures globally, pointing towards a fragmented internet where access to information depends on geography, and on a willingness to submit to digital surveillance. This is something the spokesperson details: 'Such laws generally have global ripple effects, like the GDPR. Companies may choose to adopt UK-compliant policies to avoid the costs related to fragmentation. Countries will emulate such provisions to curb dissent and justify surveillance under the guise of child protection or moral regulation by the state.'

What's the way forward? The UK's now fully-in-force Online Safety Act represents a watershed moment for internet governance, confronting often opposing fundamental questions about digital rights and a government's commitment to protecting children. The intent of the UK government is commendable in terms of what it is trying to achieve: the internet as a safe space for children. However, the immediate surge in VPN downloads in the UK on the Apple App Store and Google Play Store suggests citizens aren't likely to play along. Does that potentially undermine the Act's effectiveness?

EMOTIONAL SUPPORT

OpenAI says it is updating ChatGPT (irrespective of which model you use) to give it the ability to detect mental or emotional distress. The AI company wants ChatGPT to work better for users who want guidance and perhaps a pep talk, rather than pure facts or information.
'I'm feeling stuck—help me untangle my thoughts' is an example OpenAI mentions, among others, to indicate the GPT models will be more capable of listening to the reasoning behind a user's thoughts, rather than just tokenising those words into a response. Also newly added are gentle reminders during long sessions to encourage breaks. OpenAI isn't conjuring this out of thin air; it says it has worked with over 90 physicians across more than 30 countries (including psychiatrists, paediatricians, and general practitioners) to build custom rubrics for evaluating complex, multi-turn conversations. It has also engaged human-computer interaction (HCI) researchers and clinicians to give feedback on how well it has identified concerning behaviours, and convened an advisory group of experts in mental health, youth development, and HCI. The company admits it hasn't always got this right, a case in point being an update earlier this year that made the model respond in a tone that was too agreeable, bordering on saying what sounded nice instead of what was actually helpful.

MAKING NOTES

In what is usually the most difficult quarter for iPhone sales, with the spectre of the traditional September refresh looming large, Apple has reported Q3 earnings higher than expectations. The company posted quarterly revenue of $94 billion, up 10 percent year over year, a June quarter revenue record, beating expectations of $89.3 billion. Apple CEO Tim Cook again emphasised the importance of India for Apple's growth trajectory.

'The world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS,' in the words of David Weston, Microsoft's Corporate Vice President of Enterprise & Security, in what is apparently the first of Microsoft's 'Windows 2030 Vision' video series. What does this mean?
Since he doesn't elaborate any further than this breadcrumb, I'll lay out the possibility for you: another attempt at a Windows overloaded with AI, perhaps even more deeply within the OS itself and the apps you use on the computing device, with some element of agentic features that will utilise natural language understanding and context from a user's data as well as what's on the screen. Ready for the future?

MEET'S GETTING SMART

Google Meet is getting a rather interesting new feature, and while it may seem like sorcery, it is really attention to detail. Google says that if you join a Meet call from your own device, such as a laptop, the video meeting platform can detect when you may be doing so in a large, conference room-esque physical space. To reduce or entirely eliminate the problem of echo on such a call, Meet will suggest joining via something called 'Companion Mode'. Mind you, this presently only works if you are joining a Meet call from your laptop in the Google Chrome web browser, and it rolls out for all Google Workspace customers with Google Meet hardware devices. Meet uses your laptop's microphone and an ultrasonic signal to intelligently know when you are in such a room. 'This wayfinding feature helps ensure a seamless, echo-free start to your meeting. When you join using the highlighted Companion mode button, you will also be automatically checked into the correct room,' says Google in an official post. Basically, this will require your Google Workspace admins (basically, your organisation's IT folks) to enable 'Proximity Detection' on the Google Meet hardware installed in a conference room, allowing that hardware to detect nearby devices (for this, I am sure there will be the typical inertia reasoned around 'compatibility' and 'security' to mask ineptitude). At this point, based on my experiences with IT folks, easier said than done.

Officials look to make A.I. summit bigger than G20

Hindustan Times | 2 hours ago

Preparations are in full swing as the government gets ready to host the India Artificial Intelligence (AI) Summit next year in New Delhi, with people aware of the matter saying the government wants it to be 'bigger than the G20 Summit.' The Summit will be held from 19 to 20 February 2026, following similar high-profile meetings in Paris (2025), Seoul (2024) and Bletchley Park (2023).

'This will be a grand event, built around the model of G20. All global CEOs, along with heads of state, have been invited,' said a government official at the ministry of electronics and information technology (MeitY), which is leading the preparations for the Summit. 'Lots of moving parts right now. The government is still working around the programme schedule, smaller pre-events, and slowly sending out invitations.'

Based on the central theme of 'People, Planet, and Progress,' the Summit's invitation, sent to a select few last month, read, 'The event will bring together world leaders, policymakers, innovators, and experts to discuss AI's role in driving equitable development, improving public services, and tackling global challenges while prioritizing ethics, safety, and inclusivity.'

People familiar with the planning told HT that the Prime Minister's Office is closely involved, with clear instructions to make it a landmark global event bigger than the G20. At the G7 Summit held in Kananaskis, Canada, in June 2025, PM Narendra Modi invited all G7 leaders to India's AI Summit. Ahead of the summit, MeitY has issued a call for proposals inviting government agencies, academia, intergovernmental organisations, private firms and non-profits from all over the world to host pre-Summit events between 11 August 2025 and 31 January 2026.
These pre-events can take the form of workshops, panels, hackathons, conferences and the like, and must be held in person or in hybrid mode, but not fully virtually. 'By hosting Pre-Summit Events, organisations can play an active role in shaping the Summit's deliberations and outcomes. Selected events will receive official co-branding and visibility as part of the Summit's global programme,' said MeitY. The pre-events must align with the seven smaller summit themes, namely human capital, inclusion, safe and trusted AI, resilience, science, democratising AI resources, and social good.

In the run-up to the preparations, MeitY also held five virtual stakeholder consultations from June 6 to 8, chaired by the ministry's additional secretary Abhishek Singh. These consultations included representatives from international organisations, industry, startups, academia, and civil society. The ministry also opened a public consultation forum from June 6 to 30, inviting citizens to share their suggestions for shaping the Summit's agenda.

The previous AI Summit was held in Paris in February 2025, where 58 countries, including India, endorsed a joint statement on 'Inclusive and Sustainable Artificial Intelligence for People and the Planet.' The United States and the United Kingdom did not sign the declaration, which emphasised making AI more accessible while ensuring trust and safety in its deployment.

MCG & MCM now take AI route to boost property tax collection

Time of India | 2 hours ago

Gurgaon: MCG and MCM have introduced artificial intelligence (AI) to boost revenue from property tax. MCG has so far generated Rs 200 crore, around 72% of its target for FY26, and MCM's collection has already crossed the Rs 29 crore mark, matching the revenue it generated in the entire last fiscal year. Using AI, residents with outstanding dues are identified and approached, and their queries resolved; the AI also assists them in paying their property tax dues.

'We are using AI to boost our property tax collection. We have collected a revenue of Rs 200 crore so far in this financial year. Using AI, we generated a revenue of Rs 95 crore in July alone,' MCG additional commissioner Yash Jaluka told TOI.

'We took the data of self-certified properties with the highest dues and used generative AI, which is a bot, to call the owners. This bot answers in real time. If you did not pay property tax, it will ask you what difficulties you faced and subsequently address your concerns. Before reaching out to people, we segregated the data into categories of those who promised to pay or refused to pay the taxes and those who faced difficulties in paying them. Most of the people contacted ended up paying the taxes,' he added.

Using AI for property tax collection was started as a pilot project in Yamunanagar, where Sinha served as the civic body commissioner before heading MCM. 'We were successful in replicating the AI model here in Manesar and Gurgaon. We have collected Rs 29 crore so far, which is our income from property tax for the entire last fiscal,' Sinha said.

Meanwhile, residents pointed out that since MCG was generating so much revenue, it should invest the money in the city's development. 'The city is clearly lacking the kind of manpower and machinery it needs to function.
MCG should learn from other municipal corporations and invest in the right kind of machinery," said Gauri Sarin, convener of the group Making Model Gurugram.
