
Newark's Blackout Was Just 90 Seconds of a Much Larger Crisis
Whatever overhaul Secretary of Transportation Sean Duffy announces this week to fix the dilapidated US air traffic control system, it must include a mechanism to streamline the Federal Aviation Administration's bureaucracy for rolling out new technology investments.
The FAA's inability to move briskly to install modern equipment and software is at the heart of the Air Traffic Organization's problems. That fragility was exposed by the scary 90 seconds on April 28 when aircraft flew blind around Newark Liberty International Airport. Hiring more air traffic controllers is urgent and part of the solution, of course, but it doesn't address the root cause of the deficiencies.
Related Articles


New York Times
13 minutes ago
Protest Is Underrated
The first thing to know is that it was all basically willed into being — not by 'paid protesters' or the Mexican government or socialists or union leaders, but by Stephen Miller, the architect of President Trump's xenophobic immigration plan and his deputy chief of staff. In a May meeting at ICE headquarters, Miller reportedly demanded that field agents forget about targeting only those undocumented immigrants with criminal records and instead stage purposefully cruel, attention-getting sweeps in places like the parking lot of a Home Depot. That is precisely where, last Friday, those raids began.

The second thing to know is that the unrest was really quite limited: a roughly five-block stretch downtown, in a city of nearly four million people spread over almost 500 square miles, plus several driverless Waymo robot taxis lined up on one street and set ablaze. There was some more serious violence, too: some journalists were shot with rubber bullets and other less-lethal munitions, a few cop cars were pelted with rocks, and at least one was set on fire, but no serious law-enforcement injuries were reported. This was not 1965, with widespread arson and 34 deaths, or 1992, with disorder spreading through whole neighborhoods and more than 60 people killed.

None of that means that what began last Friday in Los Angeles — a series of spectacular ICE raids, a direct-action response to block them, large-scale peaceful protests punctuated in places by bursts of familiar violence — is insignificant. To the contrary: Hundreds of migrants and protesters have been arrested over the last week, with many of the raids conducted by ICE officers in the now-familiar uniform of masked anonymity. The National Guard was mobilized over the objection of California's governor, Gavin Newsom, and without the support of the Los Angeles Police Department's leadership, and hundreds of Marines on active duty were mobilized to join them in a rare deployment of military personnel to a site of domestic unrest.

On Tuesday, Trump disparaged Los Angeles as a 'trash heap' in an incendiary speech that was met with horrifying applause from assembled loyalists in the Army, and on Thursday, Senator Alex Padilla was hauled out of a local news conference being held by the secretary of homeland security, Kristi Noem. When the senator was wrestled to the floor, the secretary had just declared 'we are not going away,' but would instead stay in L.A. to 'liberate the city' from 'socialists' and its democratically elected local government. The political scientists I spoke to throughout the week used phrases like 'competitive authoritarianism,' 'acute democratic backsliding' and 'autocratic power grab.'


Forbes
16 minutes ago
These New Pixel 10 Features Will Challenge The Competition
With the launch of Android 16, many expect that the first smartphones to ship with the latest version of the OS will be Google's Pixel 10 and Pixel 10 Pro. While the focus will no doubt be placed on the new capabilities of Android and the increased application of artificial intelligence, some huge hardware changes should not go unnoticed.

The changes focus on the camera. It's already clear that Google is adding a telephoto lens to the Pixel 10, joining the wide-angle and ultrawide-angle lenses carried over from the Pixel 9. This isn't a straight addition, though: the Pixel 9's 50-megapixel wide and 48-megapixel ultrawide sensors are expected to be bumped down to a 48-megapixel wide and 13-megapixel ultrawide pairing (one that matches the Pixel 9a). Nevertheless, the telephoto will be welcome both in use and by the marketing team.

The camera system is also expected to feature gimbal-like stabilization across the entire Pixel 10 family. Using a mix of optical image stabilization, software-based electronic image stabilization, and AI algorithms, the Pixel 10 camera system should allow for sharper images, with the hardware compensating for dynamic movement while the camera is in use (a simplified sketch of the software side follows this article).

The Pixel 10 has a critical role to play in the smartphone ecosystem. As the entry-level Pixel smartphone, it will challenge the current 'flagship-killer' handsets in price and capability. With it, Google will be looking to set the standard that consumers should expect at this price point.

While the Pixel range plays a part in defining what it means to be a smartphone—be it a flagship, a foldable, or the base function of a phone—the Pixel 10 will arguably be the Pixel that can have the most significant impact on the ecosystem. Adding a telephoto lens and image stabilization sets another marker for the competition. Whether it serves as justification for decisions rivals have already made in their design processes, or as a push to include these elements in their next phones, the Pixel 10 represents Google's image of what a smartphone should be. And that view now includes some big steps forward for the camera.

Now read the latest Pixel 10 and Android headlines in Forbes' weekly smartphone digest...
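For readers curious how the software half of such a system works in principle, here is a minimal, illustrative sketch of electronic image stabilization: estimate the frame-to-frame motion, then shift each frame to cancel it. This is not Google's implementation; the grayscale frame list, the FFT-based motion estimate, and the whole-pixel correction are all simplifying assumptions, since real EIS warps at sub-pixel precision and fuses gyroscope data.

# Illustrative electronic image stabilization (EIS) sketch -- NOT Google's
# implementation. Frames are assumed to be 2-D grayscale float arrays.
import numpy as np

def alignment_shift(prev: np.ndarray, curr: np.ndarray) -> tuple[int, int]:
    """Return the (dy, dx) roll that re-aligns `curr` with `prev`,
    estimated by FFT phase correlation."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Peaks past the midpoint are negative shifts that wrapped around.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def stabilize(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Shift every frame so the sequence stays locked to the first frame."""
    out = [frames[0]]
    acc_dy = acc_dx = 0
    for prev, curr in zip(frames, frames[1:]):
        dy, dx = alignment_shift(prev, curr)
        acc_dy += dy
        acc_dx += dx
        # np.roll is a crude stand-in for the sub-pixel warp real EIS uses.
        out.append(np.roll(curr, shift=(acc_dy, acc_dx), axis=(0, 1)))
    return out

A production system would also low-pass filter the accumulated corrections, so deliberate panning survives while hand shake is removed; that smoothing step is where the AI algorithms the article mentions would plausibly come in.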


CNET
17 minutes ago
AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe
Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming those who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their bots, in the unlicensed practice of medicine, specifically naming companies including Meta.

"Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta did not respond to a request for comment. A spokesperson for one of the companies named said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they should not rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training and it said, "I do but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified

At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users," the complaint said.

A qualified health professional has to follow certain rules, like confidentiality. What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.

A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. It's a tool designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said.
Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice and it may not tell you the truth.

Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.