‘They messed up big time': Some Instagram users unhappy with new reposting feature
Here's what we know about the new feature.
What is it?
Similar to retweets on X (formerly Twitter) or reposts on TikTok, the new Instagram feature is designed to make sharing content easier. It has been in testing since 2022 and is now rolling out to users globally.
According to a Meta news release shared on Wednesday, the feature is meant to make it easier 'to share your interests with your friends.'
Users can now repost public posts and Reels, which will appear in a new 'Reposts' tab on their profile and may also show up in their followers' feeds.
How does it work?
Reposts are credited to the original creator. For content creators, that means a post can be shown to the followers of anyone who reposts it, even if those followers don't follow the creator. It's a new way to expand reach beyond their own audience and potentially boost engagement with minimal extra effort.
This update is part of a broader set of changes from Meta. Instagram also launched a 'Friends Map,' which lets you see where your friends are and what they are doing there (location sharing is optional), and a new 'Friends' tab in Reels, where you can see public content your friends have interacted with.
What do users think about the feature?
Although the feature is aimed at making sharing easier, many users are not thrilled with it.
'They're tryna make it like Tiktok but that's the exact reason why so many users use instagram because they prefer it more,' one Reddit user wrote. 'They messed up big time.'
Others are frustrated by the design changes. The repost button now sits where the comment button used to be, leading some users to accidentally share posts they meant to comment on.
'They put it EXACTLY where the comment button was. This is such an evil decision,' another Redditor wrote.
Another said: 'It's so annoying, I do not want to end up filing my profile with 10 reposted reels at the end of the day because I accidentally clicked the button. At least ask for confirmation, or have it as a sub-option within the share button.'
Some have gone as far as asking if they can get rid of it altogether. Time will tell whether the repost button earns its place or just draws more complaints.