
Southwest Airlines to require portable chargers be kept out while in use due to battery fire concerns
Under a new policy that other airlines may adopt, passengers on Southwest Airlines flights will soon be required to keep portable chargers in plain sight while using them, a response to the growing number of lithium-ion battery fires aboard planes.
The Dallas-based airline announced that the policy will take effect May 28 and said passengers may have already seen notifications about the rule in the airline's app.
Concern about lithium-ion battery fires on planes is mounting because the number of incidents keeps rising and the devices powered by those batteries are ubiquitous. There have already been 19 incidents involving the batteries this year, after last year's record high of 89, according to Federal Aviation Administration statistics.
Incidents have climbed every year since the pandemic-era low of 39 in 2020, more than doubling over that span.
Some research suggests portable chargers may be the second-leading cause of battery fires on planes, behind only electronic cigarettes.
Compared with the roughly 180,000 flights U.S. airlines operate each week, the number of incidents remains relatively small, and lithium batteries can overheat anywhere, not just on planes. Still, it is a growing concern for the airlines.
"It's definitely a serious risk," said David Wroth, who studies the risks for UL Standards & Engagement and works with 37 airlines and battery manufacturers to minimize them. At least a couple of airlines UL is working with are reevaluating the risks associated with rechargeable batteries, so additional rule changes could be coming.
Asian airlines enforce similar rules
While Southwest is the first U.S. airline to restrict the use of portable lithium-ion chargers, several Asian airlines took action earlier this year after a devastating fire aboard an Air Busan plane waiting to take off from an airport in South Korea in January.
All 176 people aboard had to be evacuated as the blaze burned through the plane's roof. The cause of that fire hasn't been officially determined, but several airlines and Korean regulators cracked down on portable chargers afterward.
Korean airlines won't allow the chargers to be stored in overhead bins anymore; they must either be packed in a plastic bag or have their ports covered with insulating tape to keep them from touching metal.
In addition, Singapore Airlines and Thai Airways both prohibit using or charging portable power banks during flights.
Officials want passengers to be responsible about packing
Last summer, a smoking laptop in a passenger's bag led to the evacuation of a plane awaiting takeoff at San Francisco International Airport. In 2023, a flight from Dallas to Orlando, Florida, made an emergency landing in Jacksonville, Florida, after a battery caught fire in an overhead bin.
Southwest said requiring the chargers to be kept in the open while in use will help because "in the rare event a lithium battery overheats or catches fire, quick access is critical and keeping power banks in plain sight allow for faster intervention and helps protect everyone onboard."
Experts have long recommended keeping rechargeable devices in reach during flights so they can be monitored for any signs of problems like becoming too hot to touch or starting to bulge or smoke. But the airlines have to rely on educating consumers and encouraging them to take precautions.
"Ultimately, it comes down to a lot of personal responsibility that we as passengers have to take," Wroth said.
Southwest will allow the chargers to be stored inside carry-on bags when they aren't in use. But a spokeswoman said the airline is just alerting customers about the policy before their flight and asking for their compliance. Wroth said that approach is probably best.
"We have enough problems with unruly passengers already. And having cabin crew confront somebody over bringing something on board is not likely to be a good situation as well," Wroth said.
The Transportation Security Administration has long prohibited e-cigarettes, as well as chargers and power banks containing lithium-ion batteries, from checked bags, but it allows them in carry-on bags. The rule exists precisely because fires in the cargo hold might be harder to detect and extinguish.
The FAA recommends passengers keep cell phones and other devices nearby on planes so they can access them quickly. The agency said flight crews are trained to recognize and respond to lithium battery fires. Passengers should notify the flight crew immediately if their lithium battery or device is overheating, expanding, smoking or burning.
The latest research from UL Standards & Engagement found that 2024 data suggests portable chargers were to blame in 19% of the incidents, only slightly ahead of cell phones. E-cigarettes accounted for 28% of the problems.
Nearly one-third of all passengers carried portable chargers on flights last year.
More than one-quarter of passengers surveyed last year said they had put vaping devices or portable chargers in checked bags. That violates federal rules, but Wroth said the problem may stem as much from passengers not understanding the dangers as from trying to hide the devices.