Gen Z and boomers are both FaceTiming in public — but for different reasons
The other day, I was waiting for the subway, standing next to a woman in her pajamas making breakfast. She wasn't actually next to me but on the screen of another rider's iPhone. The flashes of movement on the screen and their loud conversation caught my attention. I wasn't trying to be nosy, but I (and several other commuters around me) was suddenly involved in what would have been a private, intimate moment.
I'm not the only one getting annoyed. Social media posts abound with people distracted and flustered by the prevalence of public video calls. "Am I insane for thinking it's extremely rude to FaceTime without headphones in a public space?" one Threads poster asked last year. "I find this to be so inconsiderate, entitled and obnoxious, honestly. I will never understand." The more than 350 comments that followed revealed a divide about whether we should be turning the whole world into our living room. Some questioned how FaceTiming was any different from chatting with a friend in person. Others deemed public FaceTimers "arrogant individuals with no care for others."
This isn't a new phenomenon. FaceTime debuted with the iPhone 4 in 2010, but it took a few more years for enough people to get iPhones and grow accustomed to — and eventually feel entitled to — constant connection. The feature became available not just over WiFi but also via cellular data in 2012. People began to complain to etiquette experts, who gave their takes on the nuisance in newspaper columns. Video calls became even more normalized in 2020, when many of us started working remotely and stacking our calendars with Zoom meetings from 9 to 5, followed by virtual happy hours. Now, many of us have taken that comfort with chatting on camera into the real world. Our smartphones have blurred the line between what we do at home and what we do in public, and the digital world now has a tangible place in the public sphere.
Pamela Rutledge, the director of the Media Psychology Research Center, says FaceTiming and talking on speakerphone in public are symptoms of broader shifts in social norms over the past two decades. It's common to check your phone at the dinner table or seclude yourself from public interactions with headphones. When people start a video call with someone, even in a crowded area, "our brains create that sense of social presence, which takes us someplace else," she says. We're taken out of the environment and are less likely to be aware of the annoyed people around us. Despite the ire, people continue to take these video calls because the benefits, like reading social cues from the person they're calling, "are greater than the violation of privacy that they apparently are not feeling," she says.
For the people on the call, FaceTiming may be screen time that sits apart from "bad" screen time. Video calls make it easier to read social cues, which can help us avoid communication breakdowns that can happen over texts. One case study conducted during the pandemic lockdowns found that FaceTiming with family improved an Alzheimer's patient's behavior; he was less anxious and agitated after the calls and ate better than in the earliest days of lockdown. Even parents who keep young kids away from screens may give in for a video call with grandma and grandpa. A study from 2016 found that children under the age of 2 can learn words and patterns from interactive screen time like FaceTime calls, and even start to recognize people they repeatedly speak to, like a grandparent. But they don't absorb as much from prerecorded videos.
But for all the benefits of FaceTime, any tech we use to communicate "can also detract from in-person interaction experiences," Juliana Schroeder, a professor at the University of California, Berkeley's Haas School of Business, tells me in an email. Loud public calls can sour the in-person interactions of everyone around the caller — be it fellow commuters, restaurant diners, or the people working out at the next machine at the gym.
Gen Z hates phone calls, but they grew up on video calls. FaceTime calls feel like hanging out, while phone calls can feel like work. Boomers, meanwhile, didn't grow up talking on the phone in public, but they're likely to rush to answer (remembering the pre-voicemail days), and may happily pick up video calls from family, even in crowded spaces without headphones at the ready. Smartphones have increased the pressure for us to be always available, and we've become more comfortable disrupting public spaces or texting during meetings and conversations to meet that demand.
Of course, we don't know the reasons behind any individual FaceTime or speakerphone call, and so may be quick to judge. Caroline Lidz, a 23-year-old in Boston working in tech public relations, admits she's operated with a double standard. She's irritated when she encounters a person on a video call in public with no headphones, but she'll answer any time her twin sister calls, which is usually on FaceTime (though she says she does use headphones). In speaking to me for this story, Lidz realized she tended to think, "It's OK if I do it, because I know my reasons." But when she doesn't know someone else's reasons, "I'm less forgiving with other people." FaceTime calls are more engaging, she says; she can't be distractedly scrolling through her phone or working on her laptop. But Lidz also thinks a lot about what the frenzy of public FaceTime calls means for privacy. Generally, she says, to avoid being rude, people should respect the privacy of the person on the other end of the call, letting them know they may be broadcast to the public, and try not to show too much of the bystanders around them on camera.
Part of the public-call shaming likely arises from the fear that we're too connected and even addicted to our phones. The average American spends almost seven hours a day staring at screens. Three in four US adults who use FaceTime make calls at least once a week, with 14% of people using it multiple times a day, a 2023 survey from the University of Southern California's Neely Center Social Media Index found. A lot of that screen time happens in public spaces, and it's changing our social etiquette; the more people film TikToks or FaceTime in public, the more we let down our guard and accept the behavior as normal.
I'm guilty of FaceTiming my best friend in public when I need her advice on an outfit or gift I'm looking to buy. I try to be quick, feeling justified that I need to be on a video call because I've got something I need to show her. I answered a FaceTime call on a train once and screeched as quietly as possible — a friend had just gotten engaged, and I jumped on the call expecting to see the ring held up to the camera. My grandpa always puts his iPhone on speaker (he says it's hard to hear through the phone's tiny ear speaker) and will take these calls anywhere. We've all learned that if we call him, we could be on the line with anyone in the living room.
It's as easy to justify these loud calls as it is to condemn them. We've gotten used to connecting with one another anytime and anywhere, mentally checking out of unpleasant places like airport terminals in favor of chatting with friends. That's not necessarily bad. But please, for all of our sanity, put some headphones in.
Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.