Apple Watch comes to the rescue as man faces breathing trouble

Economic Times | 05-05-2025

How the Apple Watch's Fall Detection and Emergency SOS work
How to set up the emergency and health features on the Apple Watch
In the long list of health scares averted by the Apple Watch, we now have a new one to add. A Reddit user named Taylor recently shared a life-saving moment in which his Apple Watch came to the rescue during a severe breathing episode.

As Taylor recounts, he was suddenly hit with intense shortness of breath, causing him to black out and fall face-down on the pavement. At that critical moment, the Apple Watch's fall detection feature kicked in, sensing the fall and the lack of movement. It immediately contacted emergency services, ensuring help arrived just in time.

Taylor shared his experience in a Reddit post, explaining that he had been dealing with a cold and cough, initially brushing it off as seasonal or possibly even COVID-19 symptoms. 'I thought it was just a lingering cold or some strange COVID symptoms—mostly shortness of breath,' he wrote. However, what started as an ordinary day soon took a dangerous turn. 'Nothing unusual until one evening after work.'

While walking through a nearly empty parking lot, Taylor suddenly felt dizzy and lightheaded before losing consciousness. 'I barely made it to the back of my car before I blacked out. The next thing I know, I'm waking up, face-down on the pavement, and my Apple Watch is buzzing, flashing SOS,' he recalled.

The watch's fall detection feature had kicked in, automatically dialing Emergency SOS. In his disoriented state, Taylor accidentally ended the call, but by then emergency services had already been alerted. 'I mistakenly ended the call (don't do what I did), but thankfully, 911 called me back right away. I was able to tell them what happened and that I needed help,' he shared.

Taylor was rushed to the emergency room, where doctors discovered multiple blood clots in his lungs, one of which was blocking oxygen to his heart. 'The ER doctor told me that if help hadn't arrived so quickly, I had about a 50/50 chance of not making it,' he shared.

The incident highlights just how crucial the health and emergency features in wearables, especially the Apple Watch, have become—evolving from helpful tools to potential life-savers. With features like satellite calling, heart rate alerts, and SOS, the Apple Watch has integrated powerful emergency functionality over the years. In this case, it was the Fall Detection and Emergency SOS capabilities that came to the rescue, detecting Taylor's hard fall and automatically connecting him with emergency services.

Apple's Fall Detection feature uses sensors such as the accelerometer and gyroscope, together with motion algorithms, to monitor movement. If a user experiences a hard fall, the watch sounds an alarm and displays an alert. If the user remains unresponsive for about a minute, it automatically calls emergency services and shares their location.

The Emergency SOS function allows users to quickly contact emergency services by pressing and holding the side button on the watch. If configured, it also sends a message to the user's emergency contacts with their location. As Apple explains, 'When the call ends, your Apple Watch will send your emergency contacts a text message with your current location, unless you choose to cancel.'

In Taylor's case, once Emergency SOS was triggered, his wife—who was 35 miles away—received an alert showing exactly where he was in real time.
'She knew something was wrong and knew exactly where I was,' Taylor recalled.

To ensure these features are enabled:

- Turn on Fall Detection: Open the Apple Watch app on your iPhone, go to Emergency SOS, and toggle Fall Detection on. It's automatically activated for users over 55, but younger users can enable it manually.
- Add Emergency Contacts: Open the Health app on your iPhone, tap your profile picture, then tap Medical ID > Edit. Scroll down to Emergency Contacts to add or update them.
- Fill Out Your Medical ID: Open the Medical ID section in the Health app and add vital information such as blood type, allergies, and medical conditions. First responders can access this information from your locked screen in an emergency.

With these features set up, you can have peace of mind knowing that your Apple Watch is ready to help in an emergency.
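Apple has not published the exact algorithm behind Fall Detection, and the watch's real implementation is far more sophisticated than anything shown here. Purely as an illustration of the general idea described above (a sharp acceleration spike followed by a period of little movement, after which the watch escalates), here is a minimal, self-contained Swift sketch. The AccelSample type, NaiveFallDetector, and all threshold values are invented for this example and are not Apple APIs.

```swift
import Foundation

/// One accelerometer reading: total acceleration magnitude in g's, plus a timestamp in seconds.
struct AccelSample {
    let timestamp: TimeInterval
    let magnitude: Double
}

/// A toy fall detector: flags a "hard fall" when a large acceleration spike
/// (the impact) is followed by a sustained period of near-stillness.
/// The threshold values are made up for illustration; they are not Apple's.
struct NaiveFallDetector {
    let impactThreshold: Double = 3.0       // spike, in g's, suggesting an impact
    let stillnessThreshold: Double = 1.2    // close to resting gravity (about 1 g)
    let stillnessWindow: TimeInterval = 60  // seconds of inactivity before escalating

    func shouldTriggerSOS(samples: [AccelSample]) -> Bool {
        // Find the most recent impact-sized spike.
        guard let impact = samples.last(where: { $0.magnitude >= impactThreshold }) else {
            return false
        }
        // Look at everything recorded after the impact.
        let afterImpact = samples.filter { $0.timestamp > impact.timestamp }
        guard let latest = afterImpact.last else { return false }

        // Require a full window of data after the impact...
        let windowCovered = latest.timestamp - impact.timestamp >= stillnessWindow
        // ...during which the wearer barely moved.
        let stayedStill = afterImpact.allSatisfy { $0.magnitude <= stillnessThreshold }
        return windowCovered && stayedStill
    }
}

// Example: a 3.8 g impact at t = 5 s, then a minute of near-stillness at about 1 g.
var readings = stride(from: 6.0, through: 66.0, by: 1.0).map {
    AccelSample(timestamp: $0, magnitude: 1.0)   // near-still readings after the fall
}
readings.insert(AccelSample(timestamp: 5, magnitude: 3.8), at: 0)  // the impact spike
print(NaiveFallDetector().shouldTriggerSOS(samples: readings))     // true
```

In this sketch the one-minute stillness window plays the role of the "unresponsive for about a minute" check described above; only when both the spike and the quiet window are present would an app escalate to an emergency call.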


Related Articles

Engineer gets job offer. From his own company. And for his own position. How it happened?

Time of India | 10 hours ago

An engineering manager working at a mid-sized environmental consulting firm recently found himself in an unusual situation—he was offered his own job by his own company. The twist? A third-party recruiting agency was behind the mix-up, and they used the engineer's resume as the job description for the position.

The incident came to light after the engineer shared the story on Reddit, where it quickly gained attention. According to his post, the company had recently opened a new office and needed to hire someone to fill a managerial role similar to his own. Without an in-house HR team, the company turned to an external recruiting agency to source candidates. To streamline the process, the company shared his resume with the agency as a reference for the ideal candidate. However, instead of creating a job listing with distinct criteria, the recruiters reportedly used the resume nearly verbatim and relied on bots to scan LinkedIn for matches. Unsurprisingly, the system identified the engineer himself as the top candidate—and automatically reached out with an offer.

Recruiting Firm Relied Heavily on Automation

The recruiting firm, which is based in the U.S. but outsources much of its work overseas, appears to have used AI-driven methods to scan resumes and source applicants. As the engineer explained, these bots matched his qualifications so closely that he was considered a perfect fit for the job—without the system realizing he already held it. The firm's approach drew criticism online, with commenters highlighting the growing reliance on automation in recruitment and its often comical results. One user remarked that the engineer should at least check whether he could get a raise, joking that he was clearly the company's top candidate. Another added that he could 'rubber stamp his own offer.'

This is not the only instance of recruiters mistakenly contacting current employees. One Reddit user shared a similar experience from a startup, where a recruiter tried to fill a junior position using an outdated resume of a current team lead. Another described being offered $1.50 more for the same role they already occupied. In that case, the employee informed their boss, who didn't find it amusing but ended up giving them a raise. Such incidents have exposed issues in the recruitment industry, especially among agencies that prioritize volume over quality. One commenter, a former recruiter, criticized the business model that outsources critical hiring functions to poorly trained or indifferent staff overseas. Another noted that some agencies even advertise misleading job conditions to attract candidates faster.

Company Ends Contract with Recruiting Firm

Following the incident, the engineering manager confirmed that his company would no longer work with the recruiting firm responsible. The recruiter's failure to properly vet candidates—and the embarrassment of mistakenly pitching a job to someone already employed in that very position—proved enough to sever ties. The broader takeaway, however, reflects a growing frustration with recruitment practices in today's job market. With increasing reliance on automation, combined with limited oversight, even straightforward tasks like hiring a new employee can go unexpectedly wrong.

The Digital Shoulder: How AI chatbots are built to 'understand' you

Mint | 13 hours ago

As artificial intelligence (AI) chatbots become an inherent part of people's lives, more and more users are spending time chatting with these bots not just to streamline their professional or academic work but also to seek mental health advice. Some people have positive experiences that make AI seem like a low-cost therapist.

AI models are programmed to be smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When a person asks a question (called a prompt) such as 'How can I stay calm during a stressful work meeting?', the AI forms a response by randomly choosing words that are as close as possible to the data it saw during training. This happens very fast, and the responses often seem relevant enough that it can feel like talking to a real person, according to a PTI report. But these models are far from thinking like humans. They are definitely not trained mental health professionals who work under professional guidelines, follow a code of ethics, or hold professional registration, the report says.

When you prompt an AI system such as ChatGPT, it draws on three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

To develop an AI language model, developers teach the model by having it read vast quantities of data in a process called 'training'. This information comes from publicly scraped sources, including everything from academic papers, eBooks, reports, and free news articles to blogs, YouTube transcripts, and comments from discussion forums such as Reddit. Since the information is captured at a single point in time when the AI is built, it may also be out of date. Many details also need to be discarded to squeeze them into the AI's 'memory'. This is partly why AI models are prone to hallucination and getting details wrong, as reported by PTI.

The AI developers might also connect the chatbot with external tools or knowledge sources, such as Google for searches or a curated database. Some dedicated mental health chatbots access therapy guides and materials to help direct conversations along helpful lines.

AI platforms also have access to information you have previously supplied in conversations or when signing up for the platform. On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference. All of these details can be accessed by the AI and referenced when it responds.

These AI chatbots are overly friendly and validate all your thoughts, desires and dreams. They also tend to steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw on training and experience to help challenge or redirect your thinking where needed, PTI reported.

Most people are familiar with big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models, not limited to specific topics or trained to answer any particular kind of question. Developers have also built specialised AIs trained to discuss specific topics such as mental health, including Woebot and Wysa. According to PTI, some studies show that these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms.

There is also some evidence that AI therapy and professional therapy deliver roughly equivalent mental health outcomes in the short term. An important caveat is that these studies exclude participants who are suicidal or who have a severe psychotic disorder, and many are reportedly funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide, according to the PTI report. At this stage, it is hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option, but they may be a useful place to start when you're having a bad day and just need a chat. When the bad days keep happening, though, it's time to talk to a professional as well. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It is also unclear whether we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.
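The "auto-complete on steroids" description above can be made concrete with a toy example. A real model works over tokens and billions of learned weights, but the core move is the same: given what came before, pick the next word at random in proportion to weights learned from training text. The Swift sketch below uses a tiny, hand-written table of invented words and weights purely for illustration; it does not reflect how ChatGPT or any real model is implemented.

```swift
// A toy next-word table: for each preceding word, candidate continuations with
// weights standing in for the probabilities a model would have learned from text.
// The vocabulary and weights are invented purely for illustration.
let nextWord: [String: [(word: String, weight: Double)]] = [
    "stay":   [("calm", 0.7), ("focused", 0.2), ("home", 0.1)],
    "calm":   [("during", 0.6), ("today", 0.4)],
    "during": [("meetings", 0.5), ("stress", 0.5)],
]

/// Pick one continuation at random, in proportion to its weight.
func sample(after word: String) -> String? {
    guard let candidates = nextWord[word] else { return nil }
    let total = candidates.reduce(0.0) { $0 + $1.weight }
    var roll = Double.random(in: 0..<total)
    for candidate in candidates {
        roll -= candidate.weight
        if roll < 0 { return candidate.word }
    }
    return candidates.last?.word
}

// Generate a short continuation word by word, auto-complete style,
// stopping once the table has no entry for the current word.
var current = "stay"
var sentence = [current]
while let next = sample(after: current) {
    sentence.append(next)
    current = next
}
print(sentence.joined(separator: " "))  // e.g. "stay calm during meetings"
```

The weighted random choice is why the same prompt can produce different answers on different runs, and it is also why a fluent-sounding reply is not the same thing as professional judgement: the model is continuing text, not reasoning about your situation.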
