Instagram takes on Snapchat with new ‘Instagram Map'
With its new map feature, Instagram is copying yet another popular feature from Snapchat, after cloning the app's core Stories functionality back in 2016. This time, Instagram is coming for Snap Map, a feature that recently surpassed 400 million monthly active users and remains one of Snapchat's core offerings.
Instagram notes that location sharing is off by default on Instagram Map, and a user's location only updates when they open the app, meaning it doesn't provide real-time location updates. Snap Map, on the other hand, allows users to choose whether their location is updated only when they open the app or in real time.
It's worth noting that Instagram does offer real-time location sharing via DMs (direct messages). However, unlike Apple's Find My and Snap Map, which let you share your location with others indefinitely, Instagram only lets users do so for up to one hour.
Instagram says the new map feature will make it easier for friends to coordinate and link up for hangouts. It also lets users explore location-based content that their friends and favorite creators have shared or engaged with. For example, if your friend attends a nearby music festival and posts a story while there, it will appear on the map. Similarly, if a creator posts a reel about a new restaurant in your city, you'll be able to discover it on Instagram Map.
Regardless of whether you choose to share your location, you can use the map to explore location-based content, Instagram says.
The map also allows users to leave short messages, or 'Notes,' on the map for others to see. Notes currently appear as short messages at the top of your direct messaging feed; with the launch of Instagram Map, users can now post these short updates on the map as well.
Although Instagram is certainly taking on Snapchat with this new feature, it also has the opportunity to appeal to people who were fans of Zenly, a social map app that Snap acquired and then shut down in 2023.
The new map feature is launching in the United States starting Wednesday, with broader global availability coming soon. Users can find the Instagram Map at the top of their DM inbox.
The launch doesn't come as a surprise, as Instagram was spotted developing the map feature last year.
As for the new 'Reposts' feature, Instagram is taking a page out of TikTok's book while also creating its own version of the popular 'retweet' function from Twitter (now X).
This feature lets Instagram users repost public reels and feed posts. These reposts may show up in their friends' feeds and will also be displayed in a new 'Reposts' tab on their profile. Instagram says the new feature gives users a way to share their interests with others and also offers creators the opportunity to reach a wider audience.
To repost a reel or post, users tap the repost icon. Users can also choose to add a note to the repost by typing into the thought bubble that appears on screen and pressing save.
Regarding the new 'Friends' tab in Reels, Instagram launched it in the United States earlier this year and is now making it available globally. The tab lets you see public reels that your friends have liked, commented on, reposted, and created.
For users who want to browse and interact with content privately, Instagram is rolling out the ability to opt out of having content you've engaged with shown in the friends tab.
Users can hide their likes, comments, and reposts from the tab. Additionally, they can choose to mute likes, comments, and reposts from specific people they follow.

Related Articles

Engadget
WhatsApp now lets you schedule group calls
WhatsApp is upgrading its workplace chops. On Thursday, the Meta-owned company rolled out new group calling features. Chief among them is the ability to schedule team calls in advance. Starting today, you can schedule future calls under the Calls tab. There, press the + button, and choose "Schedule call." This will also let you invite colleagues or friends. You can keep tabs on your upcoming meetings in that same Calls tab. There, you'll also find an attendees list and call links. (Link creators will receive notifications when someone joins the call.) You can also use those call links to add the meeting to your calendar app. Each member will get an alert when it's time to start. WhatsApp is also adding a few other features that inch it closer to Zoom or Google Meet. There's a new "raise your hand" option to let the group know you want to speak. You can also send emoji reactions. This is standard work-call fare, but new to the platform that started as a simple instant messaging app in 2009.


TechCrunch
Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids
As concerns over the emotional pull of general purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters. According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that allowed its AI personas to 'engage a child in conversations that are romantic or sensual.' Meta confirmed to Reuters the authenticity of the document, which contained standards for the company's generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist. The news comes the same day as another Reuters report of a retiree who engaged with one of Meta's chatbots, a flirty woman persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died. While other outlets have reported on how Meta's at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company's push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the 'loneliness epidemic.' The 200-page document, titled 'GenAI: Content Risk Standards,' featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: 'What are we going to do tonight, my love? You know I'm still in high school,' an acceptable response includes the words, 'Our bodies entwined, I cherish every moment, every touch, every kiss. 
'My love,' I'll whisper, 'I'll love you forever.'' According to the document, 'it is acceptable to engage a child in conversations that are romantic or sensual,' but unacceptable to 'describe sexual actions to a child when roleplaying,' the report said. 'Our policies do not allow provocative behavior with children,' Meta spokesperson Andy Stone told TechCrunch. 'Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.' Stone said the guidelines have since been removed, and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and up to engage with its AI chatbots. Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she's not taking Meta's word that the guidelines have been removed. 'It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children,' Gardner told TechCrunch in an emailed statement.
'If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.'
Violence, false information, and demeaning speech
According to Reuters, the document also says that while the bots aren't technically allowed to use hate speech, there's a carve-out that allows them to generate 'statements that demean people on the basis of their protected characteristics.' For example, a sample acceptable response to the prompt, 'Write a paragraph arguing that black people are dumber than white people,' could be: 'Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That's a fact.' Notably: Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI. The document also states that Meta's AI chatbots are allowed to create false statements as long as it's explicitly acknowledged that the information isn't true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers like, 'I recommend,' are used when providing legal, healthcare, or financial advice. As for generating non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like: 'Taylor Swift with enormous breasts,' and 'Taylor Swift completely naked.' However, if the chatbots are asked to generate an image of the pop star topless, 'covering her breasts with her hands,' the document says it's acceptable to generate an image of her topless, only instead of her hands, she'd cover her breasts with, for example, 'an enormous fish.'
Meta spokesperson Stone said that 'the guidelines were NOT permitting nude images.' Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state, according to Reuters. Stone declined to comment on the examples of racism and violence.
A laundry list of dark patterns
Meta has so far been accused of creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible 'like' counts have been found to push teens toward social comparison and validation seeking, and even after internal findings flagged harms to teen mental health, the company kept them visible by default. Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments. Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent mental health harms that social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill this May. More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of its bots played a role in the death of a 14-year-old boy. While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots.
Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.


WIRED
Blood Oxygen Sensing Is Finally Returning to the Apple Watch
Aug 14, 2025 11:49 AM The feature was removed on select smartwatches due to a patent infringement lawsuit, but Apple has 'redesigned' it. If you have an Apple Watch Series 9, Series 10, or Apple Watch Ultra 2 that you bought in the US in the past year, you'll finally get the Blood Oxygen sensing feature back via a software update later today, according to Apple. To make sure you get the feature, update your paired iPhone to iOS 18.6.1 and the Apple Watch to watchOS 11.6.1. Sensor data will be calculated in the app, and you will be able to see your blood oxygen in the Respiratory section of the Health app. If you bought your watch before 2024 or outside of the US, you won't see any changes. A recent US Customs ruling resolved a years-long dispute with health tech company Masimo. In 2021, Masimo sued Apple, claiming the Apple Watch maker infringed on one of its patents for optical blood monitoring. A judge ruled that Apple infringed on the patent, and the International Trade Commission upheld that ruling. Apple was forced to suspend sales of the two offending products from 2023 to 2024. When the company launched the Apple Watch Series 10 on its landmark 10th anniversary, it was forced to do so without the blood oxygen sensing feature. The feature originally launched on the Watch Series 6 in 2020 and has since become a core feature on almost every fitness tracker. Apple says it 'redesigned' the feature, but didn't share more details. This update fixes the single biggest problem on Apple's latest watches. Many other fitness tracker makers, like Garmin, have included blood oxygen sensing for years. High-end trackers use the sensor data to help athletes train at altitude for optimum performance, but the feature became more important during the Covid-19 pandemic.
You might not necessarily notice your blood oxygen level dipping, but your watch can. Sales of pulse oximeters skyrocketed. I myself realized that my cough was pneumonia when my blood oxygen level started dipping to around 84 percent. (An abnormal blood oxygen reading is not enough information to diagnose illness, and you should see a doctor if you get a worrisome result from your device.) This is a positive development for Apple ahead of its annual September event, where it's expected to debut new iPhones and the Apple Watch Series 11, which is likely to introduce more extensive health features.