I Loved Driving at Night, Until a Rare Condition Changed Everything

The Drive | May 23, 2025

The night my vision changed, I was driving home from my girlfriend's place. Lights became cloudy streaks; letters and numbers on green highway signs repeated in trails. It was a lot like waking up really dehydrated, when you're acutely aware of how dry your eyes are. In such situations, I'd typically rub them, but I had contacts in and no backup pair of glasses, so that wasn't going to work here. No amount of blinking set things back in alignment. I drove for about 40 minutes, got home, and got into bed. My eyes would feel better in the morning, I thought. Why wouldn't they?
It's been three-and-a-half years since then, and my eyes still don't feel right. Fixing them has entailed so many doctor's visits that I've lost count, and eye drops made from my own blood that insurance doesn't cover. But that's skipping ahead. Ever since I began driving, I've loved doing it at night. That's become less fun over time due to the onset of LEDs and everyone driving trucks, but this condition has pretty much ruined one of my greatest pleasures.
When I woke up the morning after that drive home, I couldn't immediately tell anything was amiss until I sat down at my desk to write. Had my monitor always been this bright? And why did simply looking at it give me a headache? Clearly, I needed to rest my eyes, so I went for a walk outside. Breaks didn't help, and after a few days of this, I realized something was seriously wrong.

Driving home on that very first night, my vision kind of felt like this. Mint Images via Getty Images
What I'd come to learn (but not before several erroneous diagnoses) was that I had something called corneal neuropathy. Confocal imaging, using a special microscope that could get a look at the nerves that sheet the surface of my eye, showed an alarming lack of them. Nerve endings are supposed to show up like straight-ish squiggles, and mine were faint, kinked, or marred with fuzzy, balled-up clusters; these had responded to whatever trauma my eyes had undergone by regrowing malformed. But in a less technical sense, and in the words of one of my ophthalmologists, the nerves in my eyes were 'pissed off.'
Corneal neuropathy is like any kind of neuropathy, in that it presents in strange, unique ways for every individual because the human body is a pseudopredictable mess. If you're looking for yet more ways to depress yourself on the daily, peruse the r/dryeye subreddit. Some folks there have classic dry eye syndrome, which can be debilitating enough on its own; three doctors diagnosed me with the condition while noting that I was far too young to have it. Others in the community have unrelenting, excruciating pain, despite tear ducts that behave perfectly normally, which sometimes goes hand-in-hand with cluster migraines. Corneal neuropathy happens to go by many names, and one of them is neuropathic dry eye; a patient might feel like their eyes are dry, when every possible form of examination indicates that they aren't.
Fortunately, my neuropathy does not present as debilitating pain. My eyes feel gritty much of the time, sure, but for the most part, it's a minor annoyance I can deal with by wearing glasses instead of contact lenses and liberally applying over-the-counter tears. Unfortunately, it presents as perpetual sensitivity to highly concentrated, artificial sources of light, more so than broad sunlight. And this brings us to why my experience is here on The Drive, rather than in a case study in the American Journal of Ophthalmology. Though if you look hard enough on Reddit, you might be able to find my story there, too.
You may have noticed that modern headlights are bright. They're so bright that even people with healthier eyes than mine are fed up. The problem is twofold. On one hand, today's LED headlights are indeed brighter and emit cooler light than the halogen lamps of 20 years ago. But, and this part tends to get lost in the conversation, car design also plays a role. As vehicles get larger and ride taller, headlights that used to point mostly downward, illuminating the path ahead, now project directly into the retinas of anyone driving something smaller and lower. You could fight fire with fire and replace your daily with something equally elevated, but that doesn't really fix the problem, and besides, we enthusiasts like to drive what we like.
All that is to say that right now is a seriously frustrating time to drive at night for many people. For some dry-eye sufferers, phantom or otherwise, it's harder still. Teenagers appreciate the freedom of driving when they get their learner's permits, but of course, after a while, you take it as a given that a car enables you to go anywhere, at any time, limited only by distance and fatigue.
But when every streetlamp has a hazy glow to it; when every road sign seems just a touch less sharp; when you can't seem to make the interior lights dim enough; when you have to start positioning your car with a generous buffer zone before oncoming traffic passes because you know you're about to be effectively blind for a second or two; when the night seems darker than you can ever recall, you start avoiding things.

One strange side-effect for me through all this is that driving in the rain actually makes my eyes feel more normal. An expert might be able to tell me why, but I'd guess that rain gives my brain an explanation for its cloudy or distorted vision. RifatHasina via Getty Images
Sometimes I'd be aware of my avoidance, and sometimes I wouldn't. If I needed to run around the corner to a grocery store to get that one ingredient we'd forgotten for dinner, I might ask my partner to drive. It was the same for long trips through the night. Sometimes, my eyes might feel a little more comfortable than usual, and I'd be more willing to try. Other times, I'd wonder if I was a danger to myself, anyone riding with me, and anyone I shared the road with.
Those are depressing questions to ask yourself. But the especially insidious part was how early on in this journey, dread would set in every night, and I never knew why. It might hit me with the passing of the day, or when I'd go to take out the trash. Of course, I didn't realize what I'd actually been dreading—the loss of freedom and the inability to easily do something I love.
Through most of my 20s, I'd guess more than half of my driving happened after the sun went down. I honestly preferred it that way. That was partially down to having a job at a newspaper production office, where I wouldn't go home until we sent content to the presses. But those first few years out of college were full of late hangs with friends, impromptu Wawa runs, and trips to and from basement shows. I usually had the most fun when I was going somewhere at night. And when I had the car to myself, it was therapeutic.
There's still no greater solitude to me than being alone on a back road; that's when I most deeply feel the joy of driving. I don't necessarily have to be going fast either, and trust me, along the Delaware River, that's the perfect way to inadvertently control the deer population. At night, the world is only ever as large as what my headlights can see, and that's a pretty comforting feeling.
Corneal neuropathy almost destroyed it, and for the last several years, I doubted I'd ever get it back. When I was diagnosed with this condition, a doctor told me the only thing that was likely to help was autologous serum eye drops (ASEDs). These drops are a combination of serum from the patient's own blood and saline. Doctors prescribe different concentrations of serum and recommend different regimens for every patient (for what it's worth, I'm on a 20% concentration eight times a day), but the principle here is that, unlike artificial tears, ASEDs 'share many of the same biochemical properties as real tears,' per Medical News Today, and contain even higher concentrations of biological nutrients like vitamin A, proteins, and transforming growth factor than natural tears do. That stimulates healing when nerves in the eye struggle to heal on their own.
I tried ASEDs, alongside fancy glasses, occasional steroids, and a host of different drugs that target chronic nerve pain, on and off, for two years. Serum tears are expensive: I pay $400 for a three-month dose, and insurance doesn't cover them, because why would they? I'd pass on refills because I wasn't seeing the results I hoped for, and couldn't stomach the expense. It already angered me that I was ripping through $20 bottles of normal eye drops every three weeks; $400 for a treatment I wasn't sure was helping and made travel an absolute pain (you've got to keep them cold all the time, and I fly a lot) was an indignity for someone who used to pride themselves on needing nothing but coffee and Advil in the morning.

Every three months, I get a box of these little vials full of eye drops made from my own blood serum. They arrive frozen, and when I travel, I put a bottle or two in that insulated tumbler and fill up the rest of the space with plastic ice cubes. TSA hasn't given me grief yet! Adam Ismail
It took a long time to admit that this was just my life now, and that I might see results if I just stayed the course of treatment. I shifted to a different ASED and drug regimen, and today, I feel like I'm doing a little better. Imaging of my corneas backs that up: more squiggles, fewer fuzzy balls. If you asked me precisely what 'a little better' feels like, I'd tell you it's definitely not 'healthy.' Headlights still have clouds and feel like they take up too much space and create too much noise in my visual field. But it's all a smidge less overwhelming. My doctor isn't even satisfied with my pace of improvement and believes I should be further along than I am now. At this point, I'm just content to be improving at all.
All this has been a tremendous inconvenience at best, and a deeply personal, often unrelatable-feeling source of anxiety and panic at worst. It's impacted every facet of my life, but it's notably reshaped my relationship with something I love to do, which I've also essentially built a career around. I regret that I didn't take better care of my eyes, or somehow enjoy those late drives more than I knew to at the time, but those feelings are illogical and unrealistic. Being grateful is good, but it isn't natural; it's learned, and it's work.
It's also taken me years to get to this point of acceptance, and I still haven't perfected it. To anyone who enjoys driving and, for whatever reason, finds it more difficult now than ever before, my heart goes out to you. So too if you know exactly the treatment you need and can't afford it. There are plenty of people suffering from the same condition I am, but it's still not terribly well researched, and 'many clinicians are unfamiliar with [its] existence,' let alone how to manage it, per The Scientific Journal of The Royal College of Ophthalmologists. Also, it should go without saying that none of this is medical advice; I encourage you to see a doctor if you have similar concerns.
What you've just read is something I've wanted to write for a long time. Whenever I tried, I'd get stuck on what purpose it'd serve. Frankly, I'm still not sure, but it's always cathartic to vent. And if it gets even a few more people talking about things like this—hell, if it gets more attention on how scorchingly bright today's headlights are—I'll take it. We could all use some relief.
Have your own story about struggling with night driving? Comment below or contact the author directly: adam.ismail@thedrive.com
Adam Ismail is the News Editor at The Drive, coordinating the site's slate of daily stories as well as reporting his own and contributing the occasional car or racing game review. He lives in the suburbs outside Philly, where there's ample road for his hot hatch to stretch its legs, and ample space in his condo for his dusty retro game consoles.
