
What to know about Instagram Map, a new feature drawing backlash
A new Instagram feature rolled out in the United States this past week stirred strong feelings: Users can now share and view the locations of others on a map. Meta, which owns Instagram, said in a blog post Wednesday that the feature was an opt-in service to help people 'stay up to date with friends.' Some users, however, reacted with confusion and panic, voicing concerns about privacy and safety.
Here's what to know about the feature.
What is Instagram Map?
The new map, found at the top of Instagram's message inbox, allows users to share their live location while they are using the app.
It also allows people to see the locations of users who share that information in recent posts on their feeds.
Who can see you on it?
By default, nobody. Meta said in its blog post that the location sharing option was inactive by default, and users would have to opt in. The company said people could limit who could see that information, or turn it off entirely.
Meta called it a 'new, lightweight way to connect with each other.' Similar features exist in other apps: Snapchat has a personalized map feature, and Apple devices allow users to share their locations with one another. Meta's other platforms like Facebook and WhatsApp also offer live location sharing.
How did people react?
Broadly, not well. The news quickly raised questions about the possible dangers of location sharing on one of the world's most popular social media platforms.
As the feature reached smartphones in the United States, it caused confusion, and even panic, for some users.
Many people, including professional content creators, called for it to be rolled back, arguing that it could be used to stalk and harass.
U.S. Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., wrote to Meta on Friday, according to NBC News, urging its chief executive, Mark Zuckerberg, to abandon the feature.
Why did people see themselves on the map unexpectedly?
Some users said they were unhappy to see their prior posts plotted on the map even though they had never opted into the location-sharing feature.
One explanation, offered by the head of Instagram, Adam Mosseri, is that the map was populated not only with real-time locations but also with earlier posts tagged with a location.
Those location tags existed before, but were not collated on a prominent map.
Allie Taylor, an educator who posts content about disability on Instagram, was at work Wednesday when they shared a video on the app with a location tag for the city of Cincinnati.
Soon, Taylor began receiving messages from their followers, including strangers, saying that their location was visible on Instagram's new map. It appeared accurate enough to show the street they worked on, Taylor said.
'It was terrifying,' Taylor said, adding, 'Why was this even a feature?'
How do you turn it off?
To check location sharing permissions for Instagram, users can try several options.
In the app itself, users can head to their messages inbox, open the map, view the settings, and change location sharing to 'no one.'
Phone users can also go directly to the location services section of their device's settings and allow or deny location access for Instagram.
Instagram has promised 'improvements.'
Mosseri appeared taken aback by how the feature was received and made a series of posts seeking to answer the criticism.
'We're never going to share your location without someone actively asking to do so,' he said in one post on Friday. He conceded that there had been 'confusion' around the rollout.
A statement from Meta, sent Saturday, said, 'Instagram Map is off by default, and your live location is never shared unless you choose to turn it on.
'If you do, only people you follow back — or a private, custom list you select — can see your location.'
Mosseri, in his posts, said Instagram needed to do a 'better job' of explaining what would appear on the map.
'We can, and will, make it easier to understand exactly what's happening,' he said, adding that Instagram was hoping to make improvements early next week.
New Instagram location sharing feature sparks privacy fears
San Francisco - Instagram users are warning about a new location sharing feature, fearing that the hugely popular app could be putting people in danger by revealing their whereabouts without their knowledge.

The Meta-owned image-sharing platform added an option on Wednesday that shares locations using an Instagram map, similar to a feature rival Snapchat has offered since 2017.

Some users have since been shocked to discover that their location was being shared, as viral posts have shown.

"Mine was turned on and my home address was showing for all of my followers to see," Instagram user Lindsey Bell wrote in reply to a warning posted by "Bachelor" reality television personality Kelley Flanagan to her 300,000 TikTok followers. "Turned it off immediately once I knew, but had me feeling sick about it."

In a TikTok video, Flanagan called Instagram's new location-sharing feature "dangerous" and gave step-by-step instructions on how to make sure it is turned off.

Instagram chief Adam Mosseri fired off a post on Meta-owned Threads stressing that Instagram location sharing is off by default, meaning users need to opt in for it to be active.

"Quick Friend Map clarification, your location will only be shared if you decide to share it, and if you do, it can only be shared with a limited group of people you choose," Mosseri wrote. "To start, location sharing is completely off."

The feature was added as a way for friends to better connect, sharing posts from "cool spots," Instagram said in a blog post. Users can be selective regarding who they share locations with, and can turn it off whenever they wish, according to Instagram.

Wariness regarding whether Instagram is watching out for user privacy comes just a week after a federal jury in San Francisco sided with women who accused Meta of exploiting health data gathered by the Flo app, which tracks menstruation and efforts to get pregnant.

The jury concluded that Meta used women's sensitive health data to better target money-making ads, according to law firm Labaton Keller Sucharow, which represented the plaintiffs. Evidence at trial showed Meta was aware it was getting confidential health data from the third-party app, and that some employees appeared to mock the nature of the information, the law firm contended.

"This case was about more than just data -- it was about dignity, trust, and accountability," lead attorney Carol Villegas said in a blog post.

Damages in the suit have yet to be determined.


AI is not your friend
Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone — children included — should form relationships with AI 'friends' or 'companions'. Meanwhile, multinational tech companies are pushing the concept of 'AI agents' designed to assist us in our personal and professional lives, handle routine tasks and guide decision-making.

But the reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.

The most deceptive term of all is 'artificial intelligence'. These systems are not truly intelligent, and what we call 'AI' today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair, nor neutral.

Nor are they becoming any smarter. AI systems rely on data to function, and increasingly that includes data generated by tools like ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding.

More fundamentally, intelligence is not just about solving tasks; it's also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large datasets, performing logical deductions and making calculations.

When it comes to social intelligence, however, machines can only simulate emotions, interactions and relationships. A medical robot, for example, could be programmed to cry when a patient cries, yet no one would argue that it feels genuine sadness. The same robot could just as easily be programmed to slap the patient, and it would carry out that command with equal precision – and with the same lack of authenticity and self-awareness. The machine doesn't 'care'; it simply follows instructions. And no matter how advanced such systems become, that is not going to change.

Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy — the capacity to recognise ethical norms and behave accordingly. By contrast, AI systems are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning.

Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimise travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people.

This is partly because machines are incapable of grasping the principle of generalisability — the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept. These are what we often refer to as 'good reasons'. Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.

The term 'data-based systems' (DS) is thus more appropriate than 'artificial intelligence', as it reflects what AI can actually do: generate, collect, process and evaluate data to make observations and predictions. It also clarifies the strengths and limitations of today's emerging technologies. At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data — nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are 'doing' or of anything happening around them.

This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Developing human-rights-based DS and establishing an International Data-Based Systems Agency at the United Nations would be important first steps in that direction.

Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media — more accurately described as 'anti-social media', given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI 'friends' and 'companions'.

At the same time, these companies continue to ignore the so-called 'black box problem': the untraceability, unpredictability and lack of transparency in the algorithmic processes behind automated evaluations, predictions and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.

The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions.

To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.

@Project Syndicate, 2025