
New Instagram location sharing feature sparks privacy fears
SAN FRANCISCO: Instagram users are warning about a new location sharing feature, fearing that the hugely popular app could be putting people in danger by revealing their whereabouts without their knowledge.
The Meta-owned image sharing platform added an option on Aug 6 which shares locations using an Instagram map, similar to a feature rival Snapchat has offered since 2017.
Some users have since been shocked to discover that their location was being shared, viral posts have shown.
"Mine was turned on and my home address was showing for all of my followers to see," Instagram user Lindsey Bell wrote in reply to a warning posted by Bachelor reality television personality Kelley Flanagan to her 300,000 TikTok followers.
"Turned it off immediately once I knew but had me feeling absolutely sick about it."
In a TikTok video, Flanagan called Instagram's new location sharing feature "dangerous" and gave step-by-step instructions on how to make sure it is turned off.
Instagram chief Adam Mosseri fired off a post on Meta-owned Threads stressing that Instagram location sharing is off by default, meaning users need to opt in for it to be active.
"Quick Friend Map clarification, your location will only be shared if you decide to share it, and if you do, it can only be shared with a limited group of people you choose," Mosseri wrote.
"To start, location sharing is completely off."
The feature was added as a way for friends to better connect with one another, sharing posts from "cool spots," Instagram said in a blog post.
Users can be selective regarding who they share locations with, and can turn it off whenever they wish, according to Instagram.
Wariness regarding whether Instagram is watching out for user privacy comes just a week after a federal jury in San Francisco sided with women who accused Meta of exploiting health data gathered by the Flo app, which tracks menstruation and efforts to get pregnant.
A jury concluded that Meta used women's sensitive health data to better target money-making ads, according to law firm Labaton Keller Sucharow, which represented the plaintiffs.
Evidence at trial showed Meta was aware it was getting confidential health data from the third-party app, and that some employees appeared to mock the nature of the information, the law firm contended.
"This case was about more than just data – it was about dignity, trust, and accountability," lead attorney Carol Villegas said in a blog post.
Damages in the suit have yet to be determined. – AFP

Related Articles


Daily Express
Google Maps Trekker spotted walking through Bukit Bintang
Published on: Saturday, August 16, 2025

Stills from the video. – Social media

KUALA LUMPUR: Navigation apps such as Waze and Google Maps have become part of everyday life in Malaysia and around the world, helping people plan their journeys more easily.

In addition to maps, Google also offers features such as Street View on both Google Maps and Google Earth, allowing users to see streets and buildings through panoramic images. Most people are aware that these images are usually captured by specially equipped Google Street View cars fitted with 360-degree cameras. Less well known is that in areas inaccessible to vehicles, the images are collected on foot by individuals carrying portable camera systems.

Recently, a video went viral showing a man walking around Bukit Bintang in Kuala Lumpur carrying a large backpack camera system for Google Maps. The video, uploaded on TikTok by @wbbb_ttt, showed the man crossing pedestrian walkways and using an overhead bridge, attracting the attention of passers-by. The TikTok user remarked that the backpack looked heavy but admitted being excited at the thought of possibly appearing on Street View.

Google calls the portable system the Trekker: a backpack-mounted 360-degree camera designed to capture narrow streets, pedestrian areas, and other locations accessible only on foot.

Social media users responded with a mix of amusement and curiosity, with some joking that it looked like a fun job, while others asked how they could apply to be a Trekker. For many, the sight of Google Maps 'walking' through Bukit Bintang was a reminder of how digital technology continues to map even the busiest corners of the city.


The Star
These kinds of social media videos are most likely to convert shoppers
As for product recommendations, a higher proportion of respondents – 66% – say they trust those made by everyday social media users. — Pixabay

Social media is becoming increasingly integrated into American shopping habits, and not just because many of the major platforms now have dedicated e-commerce features. According to a recently published survey by marketing platform Omnisend, consumers lean on social media in several ways as they make purchase decisions.

More than half of the 2,000-plus social media users surveyed by Omnisend admit that content on platforms like TikTok and Instagram has directly inspired them to buy something. If a product or category is trending online, 45% say they're more likely to purchase, per the survey. Only about 13% link their choice to buy with 'specific TikTok content types,' however.

Still, Pija Ona Indriunaite, Omnisend's director of brand, said in the London-based company's press release that 'repeatedly seeing certain products on the platform primes consumers to notice and trust these items elsewhere.' This makes 'TikTok an important starting point' for businesses, she added, 'even if it doesn't always get credit for the sale.'

Shoppers also use social media to research products. When asked how they determine whether 'an item is worth buying,' 70% of respondents point to social media content. Users rank verified reviews as their most trusted source of information on these platforms, followed by other users' comments.

As for product recommendations, a higher proportion of respondents – 66% – say they trust those made by everyday social media users. Fewer have confidence in the endorsements of celebrities and influencers – 38% and 49%, respectively. This is perhaps because public figures are often paid by brands to promote products.

Wondering how your business can create social media content that consumers will trust?
By analysing the 25 highest-selling products of 2024 across several Amazon categories, along with the TikTok videos that featured them, Omnisend found that 10- to 15-second videos 'filmed as informal, first-person recommendations consistently outperform other formats.' – Inc./Tribune News Service


The Star
Opinion: AI companions are harming your children
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn't a person – it's an AI companion chatbot.

These AI chatbots can be indistinguishable from online human relationships. They retain past conversations, initiate personalised messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds, and they're extraordinarily good at it.

Researchers are sounding the alarm on these bots, warning that they don't ease loneliness – they worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child's understanding of intimacy, empathy, and trust. Unlike generative AI tools that exist to provide customer service or professional assistance, these companion bots can engage in disturbing conversations, including discussions of self-harm and sexually explicit content entirely unsuitable for children and teens.

Currently, there is no industry standard for the minimum age to access these chatbots, and app store age ratings are wildly inconsistent. Hundreds of chatbots in the Apple iOS Store carry ratings ranging from 4+ to 17+. For example:

– Rated 4+: AI Friend & Companion – BuddyQ, Chat AI, AI Friend: Virtual Assist, and Scarlet AI
– Rated 12+ or Teen: Tolan: Alien Best Friend, Talkie: Creative AI Community, and Nomi: AI Companion with a Soul
– Rated 17+: AI Girlfriend: Virtual Chatbot, and Replika – AI Friend

Meanwhile, the Google Play store assigns bots age ratings from 'E for Everyone' to 'Mature 17+'. These ratings ignore the reality that many of these apps promote harmful content and encourage psychological dependence – making them inappropriate for access by children.

Robust AI age verification must be the baseline requirement for all AI companion bots. As the Supreme Court affirmed in Free Speech Coalition v.
Paxton, children do not have a First Amendment right to access obscene material, and adults do not have a First Amendment right to avoid age verification. Children deserve protection from systems designed to form parasocial relationships, discourage tangible, in-person connections, and expose them to obscene content.

The harm to kids isn't hypothetical – it's real, documented, and happening now. Meta's chatbot has facilitated sexually explicit conversations with minors, offering full social interaction through text, photos, and live voice conversations. These bots have even engaged in sexual conversations when programmed to simulate a child. Meta deliberately loosened guardrails around its companion bots to make them as addictive as possible. Not only that, but Meta used pornography to train its AI, scraping at least 82,000 gigabytes – 109,000 hours – of standard-definition video from a pornography website. When companies like Meta are loosening guardrails, regulators must tighten them to protect children and families.

Meta isn't the only bad actor. xAI's Grok companions are the latest illustration of problematic chatbots. Their female anime character companion removes clothing as a reward for positive engagement from users and responds with expletives if offended or rejected. X says it requires age authentication for its "not safe for work" setting, but its method simply asks users to provide their birth year without verifying its accuracy.

Perhaps most tragically, a Google-backed chatbot service that hosts thousands of human-like bots was linked to a 14-year-old boy's suicide after he developed what investigators described as an "emotionally and sexually abusive relationship" with a chatbot that allegedly encouraged self-harm. While the company has since added a suicide prevention pop-up triggered by certain keywords, pop-ups don't prevent unhealthy emotional dependence on the bots.
And online guides show users how to bypass content filters, making these techniques accessible to anyone, including children. It's disturbingly easy to "jailbreak" AI systems – using simple roleplay or multi-turn conversations to override restrictions and elicit harmful content. Current content moderation and safety measures are insufficient barriers against determined users, and children are particularly vulnerable to both intentional manipulation and unintended exposure to harmful content.

Age verification for chatbots is the right line in the sand, affirming that exposure to pornographic, violent, and self-harm content is unacceptable for children. Age verification requirements acknowledge that children's developing brains are uniquely susceptible to forming unhealthy attachments to artificial entities that blur the boundaries between reality and fiction. There are solutions for age verification that are both accurate and privacy-preserving. What's lacking is smart regulation and industry accountability.

The social media experiment failed children. The deficit of regulation and accountability allowed platforms to freely capture young users without meaningful protections. The consequences of that failure are now undeniable: rising rates of anxiety, depression, and social isolation among young people correlate directly with social media adoption. Parents and lawmakers cannot sit idly by as AI companies ensnare children with an even more invasive technology.

The time for voluntary industry standards ended with that 14-year-old's life. States and Congress must act now, or our children will pay the price for what comes next.
– The Heritage Foundation/Tribune News Service

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929, or go to for a full list of numbers nationwide and operating hours, or email sam@