Alarming number of Lasik eye surgery patients who took their own lives revealed after police officer's suicide

Daily Mail · 23-05-2025
The recent suicide of a young Pennsylvania police officer has reignited concerns over Lasik eye surgery, as dozens more patients describe being pushed to the brink by similar complications.
Ryan Kingerski, a 26-year-old officer with the Penn Hills Police Department, took his own life after months of excruciating pain, double vision and persistent headaches.
He claimed the Lasik eye surgery he underwent five months earlier was the source of his suffering.
Now, as more horror stories of agonizing symptoms surface, it's becoming increasingly clear that Kingerski's case is not isolated.
'Everyone has different problems when it comes to Lasik,' Edward Boshnick, a Miami-based eye doctor, told The New York Post.
'It's the biggest scam ever put on the American public... and it's a multi-billion dollar business.'
Lasik eye surgery, or laser vision correction, is marketed by providers as 95 to 99 percent safe. The so-called 'simple' procedure uses an ultraviolet laser to reshape the cornea, improving vision without glasses or contacts.
Morris Waxler, 89, formerly headed the Food and Drug Administration branch responsible for reviewing the data and approving the Lasik operation decades ago - a decision he now regrets.
'It didn't matter what questions and concerns I had, because the surgeons were very powerful and still are,' he told The Post.
He petitioned the FDA to revoke its approval of the Lasik procedure after his own analysis revealed complication rates between 10 and 30 percent - a staggering contrast to the 'less than one percent' figure cited by providers.
In 2018, Detroit TV meteorologist Jessica Starr hanged herself at just 35 years old, leaving behind a 30-page suicide note and videos blaming her tragic decision on the elective surgery.
She documented her struggles in video diary entries. In one recording, she spoke about feeling mad at herself for deciding to go through with the procedure.
According to her family, Starr reached out to various eye doctors and even sought help with a therapist, but her emotional state continued deteriorating.
The young mother ultimately took her own life after struggling with intense pain and vision problems in the two months following her surgery.
'Prior to the procedure, Jessica was completely normal, very healthy,' Dan Rose, Starr's widower, told The Post. 'There was no depression... no underlying issue.'
Also in 2018, Paul Fitzpatrick, a Canadian father-of-two, killed himself and blamed 20 years of post-Lasik pain in his suicide note.
In the years following his operation, Fitzpatrick suffered headaches and described the feeling of needles in his eyes as well as an unbearable dry, burning sensation.
His family said that in the months leading up to his death the pain was so unbearable he kept his eyes closed most of the time, walked with a cane and planned to move in with his parents.
He left a suicide note when he took his life in October 2018, describing the pain that had driven him to his death.
'I cannot experience any type of pleasure anymore,' Fitzpatrick wrote.
'Just the pain of burning eyes inside my head and throughout myself… Since 1996 Pain, pain and more pain, please forgive me for not being strong enough to cope. The past few months have been unbearable.'
Gloria McConnell had two Lasik procedures to fix her short-sightedness in 2019.
Serious complications arose a few weeks after the surgery, including eyes so dry they burned, as well as mites and ingrown hairs in her eyelashes.
Four years later, she was barely able to leave her bed.
She died by suicide aged 60. Her son said she left a note to her family in which she explained that the pain from the bungled surgery formed part of her decision to end her life.
McConnell even submitted a comment on the FDA's draft Lasik recommendations, writing: '[LASIK] has destroyed my life.'
In August of 2024, Kingerski took some time off his dream job as a police officer to get Lasik and improve his vision - a decision that seemed safe given the demands of his career.
However, he would never wear his uniform again: what his parents described as a 'tragically unsuccessful surgery' transformed him from a smiling, vibrant person into someone unrecognizable.
The operation left him with debilitating side effects - headaches, dark spots floating in his eyesight, double vision and extreme sensitivity.
In January, still without relief or answers, Kingerski ended his life.
In a heartbreaking suicide note, he wrote: 'I can't take this anymore. Lasik took everything from me.'
Paula Cofer, a Lasik survivor, endured two years of suicidal thoughts following her 'disastrous' procedure back in 2000, The Post reported.
'The Lasik lobby and the surgeons will tell you only one percent of patients have issues afterward,' the 66-year-old woman told the outlet. 'That's not true. There are multiple studies that indicate otherwise.'
'The percentage of those with poor outcomes are in the double digits, not one percent,' she added. 'And they know it.'
Cofer said she has known of at least 40 people who took their own lives after Lasik, unable to go on living with the constant pain and vision problems that developed after the procedure, The Post reported.
As a way to spread awareness, Cofer runs the Lasik Complications Support Group on Facebook - just one of many organizations on social media created in response to the unspoken dangers of Lasik.
'I really didn't want to stick around at times, but I decided I would to get the word out about how dangerous this surgery can be,' she told the outlet.
'If you understand Lasik and what it does to the eyes and cornea, you realize you can't do it on a healthy eye and not expect complications,' she added.
In Lasik and similar surgeries, a small flap is cut into the cornea, which is then raised slightly.
This reshaping changes the way light is refracted, compensating for the nearsightedness or farsightedness that occurs when light doesn't hit the proper spot on the retina.
'Not everyone has severe complications but a lot more people are suffering than you know,' Cofer said. 'I got floaters, severe dry eyes, induced astigmatism and severe night vision problems.'
More than 10 million Americans have undergone the procedure since Lasik was FDA approved in 1999, according to the medical journal Clinical Ophthalmology, which reports that 700,000 to 800,000 opt for laser vision correction each year.
Abraham Rutner, a 43-year-old Brooklyn electrician, was one of the lucky ones - miraculously finding a sliver of hope after his failed Lasik procedure five years ago.
'It's like you have a layer of oil on top of your eye - it was so hazy and terrible,' he told The Post. 'I couldn't work. I couldn't drive. I felt like I was still a young man and I lost my life.'
However, he heard about the work of Dr. Boshnick, whose optometric practice offers vision and comfort restoration to patients affected by a variety of eye conditions and surgeries - including Lasik.
Rutner was eventually fitted with a scleral lens, a specialized contact that covers and protects corneas damaged by Lasik. Cofer said she was also fitted with the lens - and that it has provided significant relief.
The FDA does warn on its website that the procedure carries risks, including vision loss, glare, halos, double vision and other 'debilitating visual symptoms'.
According to The American Refractive Surgery Council's website, 'Lasik is safe and is one of the most studied elective surgical procedures available today... the rate of sight-threatening complications from Lasik eye surgery is estimated to be well below one percent.'
However, for some experts like Boshnick, Lasik is nothing more than a 'BS procedure', according to The Post.
'People come in with healthy eyes and all they need is eyeglasses,' Waxler told the outlet.
'But when surgeons cut the cornea they are removing nerves and leaving the corneas with odd shapes and some patients will have intractable pain.'