
Here's Why I Always Make Sure My Phone Screen is Face Down
You've got a lunch date. They're on their phone.
At some point, we've all been phone snubbed. That's what happens when the person you're spending time with seems more interested in their phone than in you. Your friend might be sitting right across from you, but when they're laughing at a video or meme that only they can see, it feels like they're a million miles away.
I've been guilty of paying more attention to my screen than to my companion, and I feel bad about it afterward. There's nothing wrong with replying to an urgent Slack message or pulling up a funny TikTok to share. But I know I probably spend too much time staring at screens, and a lot of that time is unhealthy doomscrolling. These days, when I'm not using my phone, I try to be more deliberate about keeping it out of sight and out of mind. If I do need to keep my phone at hand, I always have it face down.
It can protect your phone screen
I have a few reasons for making sure my phone screen is turned away. The first one is practical: When my phone isn't in my pocket, it's probably sitting on a desk or table -- which means it's probably not far from a glass of water or mug of coffee.
As a somewhat clumsy person, I've spilled beverages on my phone plenty of times. And even though most modern phones are water-resistant, why take chances? With my screen hidden, I can keep the most important part of my phone protected from splashes and other mishaps.
For extra protection, I have a phone case with raised edges. This helps prevent the screen from coming in direct contact with crumbs and debris that might be left on the table.
My colleague David Carnoy told me about an incident where he was charging his phone on his kitchen counter with the screen face up. Someone dropped a mug on top of it and cracked the screen. Unfortunately, he didn't have a screen protector on this device (he knows better now).
It could help save your phone battery
Another good reason to keep my phone face down is that the screen won't light up each time I get a notification. That means I can save a little bit of battery charge.
A single notification won't mean the difference between my phone lasting the whole day or dying in the afternoon, but notifications can add up, especially if I've enabled them across all of my apps. If I'm in a lot of group chats, my screen might end up turning on dozens of times throughout the day (and that's on the low side -- many teenagers have hundreds of notifications a day).
It also shows that you pay attention
Keeping my phone face down is also a good rule of social etiquette: If I'm hanging out with someone, I keep my screen hidden from view as a subtle way of showing that I won't be distracted by it. I don't want incoming notifications to light up my screen every few seconds, especially if I'm in a bar or other dimly lit setting. I want to keep my eyes on the person I'm talking to.
"Eye contact is one of the most powerful forms of human connection. Neuroscience research indicates that when two people make direct eye contact, their brain activity begins to synchronize, supporting more effective communication and increasing empathy. This synchrony can be disrupted when attention shifts to a phone, even briefly," says Michelle Davis, clinical psychologist at Headspace.
When I'm with the people I've chosen to spend time with, I want to be fully present with them. A sudden notification will tempt me to glance at my phone or, worse, pick it up in the middle of a conversation.
It minimizes your phone's presence
I also have a more personal reason for keeping my phone face down and I suspect that other people have had this same thought: My phone takes up too much space in my life.
I mean that quite literally. My phone is bigger than it needs to be. That's been especially true since I upgraded from my iPhone Mini to a "normal-sized" iPhone. Yes, I got a much-needed boost in battery life, but I also got a screen with more pixels to lure me into the next news headline or autoplaying Instagram reel.
Truly small smartphones don't really exist anymore. My phone is bigger and better at grabbing my attention. It competes against my friends and family, books and movies, the entire world outside of its 6-inch screen. It often wins. But there's still one small thing I can do to minimize its presence: I can keep the screen turned away from me whenever possible.
It can sometimes feel like there's no escaping my phone. Whether that ever changes, or phones evolve into some new form factor, I can't say. I can't control everything about my phone, but I can control whether the screen stares at me when I'm not staring at it.