
This Summer's Extreme Weather Explained: Flash Floods and Corn Sweat
By Andrea Thompson, Fonda Mwangi & Alex Sugiura
Rachel Feltman: For Scientific American's Science Quickly, I'm Rachel Feltman.
With summer heat domes slamming down on parts of the U.S. and hurricane season ramping up, you've no doubt seen plenty of extreme weather stories in your feed over the last few weeks. Joining me today to demystify a few of those headlines is Andrea Thompson, a senior news editor for sustainability at Scientific American.
Thanks so much for coming on to chat with us.
Andrea Thompson: Thanks for having me.
Feltman: So let's go over some of the topics that people might see trending in the headlines a lot, you know, during this time of year.
We'll start with flash flooding. Could you tell us a little bit about what happened in Texas and how it was possible for these floods to become so dangerous so quickly?
Thompson: Yeah, so flash flood, it's, you know, sort of in the name—it happens really quickly and often takes people by surprise. It happens when you have really intense rains over a fairly small area, usually, over a relatively short time span. And that's basically what happened in Texas. There was between six and 10 inches of rain in three hours, which is [laughs] a lot of rain. And basically, the ground just can't absorb that much water that quickly.
And it can be exacerbated by other aspects. You know, in cities you have a lot of pavement and a lot of asphalt, and those are impermeable to water, so water is going to collect even more than it would on, you know, soil. And then topography can play a role, too, and in Texas this was an area with a lot of riverbeds, a lot of steep topography that basically funnels all that water down into one area. And in this case, you know, in one spot, in Hunt, Texas, the water rose 26 feet in 45 minutes on the Guadalupe River ...
Feltman: Wow.
Thompson: Which is just an incredible amount. And that's because there's just so much rain and it's all being funneled into sort of this one riverbed. And people just don't expect water to rise that much that quickly. And, you know, for reference, 26 feet is more than two stories in a building.
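To put those figures in perspective, here is a minimal arithmetic sketch in Python using only the numbers cited above (six to 10 inches of rain over roughly three hours, and a 26-foot rise in 45 minutes); the conversions are illustrative and are not part of the episode.

```python
# Illustrative unit conversions using the figures cited in the episode.

rain_inches_low, rain_inches_high = 6, 10   # rainfall total
rain_hours = 3                              # over roughly three hours

rise_feet = 26                              # rise of the Guadalupe River at Hunt, Texas
rise_minutes = 45

print(f"Rain rate: {rain_inches_low / rain_hours:.1f}-{rain_inches_high / rain_hours:.1f} inches per hour")
print(f"River rise: {rise_feet / rise_minutes:.2f} feet (~{12 * rise_feet / rise_minutes:.0f} inches) per minute")
```

That works out to roughly two to three inches of rain per hour, with the river climbing about seven inches every minute.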
And water is also extremely powerful. Just six inches of quickly moving water can knock a person off their feet.
Feltman: Mm.
Thompson: And the faster the water is moving—the force increases faster than the water's actual velocity ...
Feltman: Mm.
Thompson: So it's not exponential, but you're getting much more force even for every little step in velocity ...
Feltman: I see, yeah.
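Thompson's point that the force climbs faster than the speed follows from the standard hydrodynamic drag relation, in which force scales with the square of the flow velocity (F = ½ρC_dAv²). The Python sketch below is a rough illustration only: the drag coefficient and the submerged cross-sectional area of a person's lower legs are assumed, round-number values, not figures from the episode.

```python
# Rough sketch of hydrodynamic drag on a person standing in fast-moving water.
# F = 0.5 * rho * Cd * A * v**2  (standard drag equation)
# The drag coefficient and submerged area below are assumed, illustrative values.

RHO_WATER = 1000.0        # kg/m^3, density of fresh water
DRAG_COEFF = 1.2          # assumed drag coefficient for a blunt body
SUBMERGED_AREA = 0.045    # m^2: ~0.3 m of leg width x ~0.15 m (six inches) of water depth

def drag_force(velocity: float) -> float:
    """Drag force in newtons at a given flow speed in meters per second."""
    return 0.5 * RHO_WATER * DRAG_COEFF * SUBMERGED_AREA * velocity ** 2

for v in (0.5, 1.0, 2.0, 3.0):
    print(f"{v:.1f} m/s -> {drag_force(v):6.1f} N")
```

Doubling the flow speed quadruples the force, which is the faster-than-linear (though not exponential) growth described above; actual knock-down thresholds also depend on water depth, footing and buoyancy.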
Thompson: They're really hard to forecast, and that also takes people by surprise. So we can say, 'It's gonna rain in this area on this day, and pockets will have, potentially, big downpours like this,' but you can't even say, usually, a few hours out, 'It's going to bring exactly this much in exactly this place,' because these are such small features in the atmosphere that, you know, weather models just can't pick them out that far in advance. So that also is an aspect in terms of people sort of being caught unawares.
Feltman: Well, let's end on—not a fun note for people who are experiencing it but something that at least [laughs] feels more fun to talk about. Everyone is Googling 'corn sweat.' Everybody was talking about corn sweat last summer, and now corn sweat is back. So what is corn sweat actually [laughs]?
Thompson: Yes, and it's, it's not just the actor who's in the new Superman movie [laughs], which—I've had lots of jokes about that [laughs].
So basically, there are heat waves in the summer. They happen all the time. And some heat waves, especially if you're in, say, the western half of the country, they tend to be a drier heat; in the eastern half of the country, where it's wetter, you have a lot more humidity.
Feltman: Swampy.
Thompson: Yes [laughs]. You know, this is especially true around the Gulf Coast, where you have this really abundant source of warm, moist air from the Gulf of Mexico. You know, the level of humidity can be affected by how wet a season has been—so we've had a pretty wet summer in the East, so everything is just really saturated with water, so when it's hot there's a lot of water to evaporate, or transpire, from plants.
And that's what's happening with corn and some other crops in the Midwest. You know, these crops cover huge amounts of land, and when there's heat they transpire water vapor into the air, and that raises the humidity, and they call it 'corn sweat,' which is a very funny term but [laughs] very grabby. But the Midwest is kind of notorious for these really high humidity levels, whereas when we think of humidity, we think of, like, 'Oh, Florida,' or places like that ...
Feltman: Mm-hmm.
Thompson: But no, the Midwest can get really humid in the summer because of this phenomenon.
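For a sense of scale, coverage of corn sweat often cites a figure of roughly 3,000 to 4,000 gallons of water released per acre of corn per day at peak growth. The sketch below multiplies that commonly cited range by an illustrative round-number acreage; neither number comes from the episode.

```python
# Back-of-envelope scale of "corn sweat" (crop evapotranspiration).
# The per-acre range is the figure often cited in corn-sweat coverage;
# the acreage is an illustrative round number, not a measured value.

gallons_per_acre_per_day_low = 3_000
gallons_per_acre_per_day_high = 4_000
acres_of_corn = 10_000_000   # illustrative round number for a large corn-growing state

low = gallons_per_acre_per_day_low * acres_of_corn
high = gallons_per_acre_per_day_high * acres_of_corn

print(f"Roughly {low / 1e9:.0f}-{high / 1e9:.0f} billion gallons of water vapor per day")
```

That is tens of billions of gallons of water vapor entering the air over the Corn Belt each day, which is why Midwestern humidity during a heat wave can rival places like Florida or the Gulf Coast.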
Feltman: Well, and I feel like I ask you about this almost every time you come on, but it hasn't stopped [laughs] being important and useful: What can people do to keep cool in the summer and stay safe?
Thompson: Absolutely, so one of the keys is sort of being aware of the level of risk for you or your loved ones, neighbors. Young children, older people, people who take certain medications or have certain illnesses, especially heart disease, are more susceptible. People who work outside are much more susceptible to heat illness. So it's important to be particularly aware for those people.
Generally, you want to avoid any strenuous activity outside in the middle of the day, when the sun is at its highest and temperatures are at their highest. Staying hydrated and wearing loose, light-colored clothing is really helpful. Being in the shade as much as possible. You know, if you have access to air-conditioning, being in that [laughs] as much as possible.
And we actually also have a story on how to keep your home cool that includes—you know, air-conditioning is obviously kind of the gold standard in terms of keeping things comfortable; it also has the added benefit of pulling humidity out of the air. But there's a lot you can do with fans in terms of keeping a home relatively cool, and part of that is because the motion, the air currents that it generates, means there's more air moving over the surface of your skin, so that is carrying heat away from your body, and it's also carrying sweat away, and sweat is basically the way our body naturally cools itself. So it's helping that process along.
You can also do things like making sure to seal any drafts, making sure your, like, your windows are very nice and sealed. You can put up blackout blinds, or if you don't even have those, you can even just do good old-fashioned aluminum foil on the outside to reflect some of the solar heat. I've done a little bit of that myself in my apartment [laughs]. You know, and there are other tips like that to basically just minimize the amount of heat coming into your apartment and maximize the amount of cooling that is happening for you.
Feltman: Well, thank you for that advice and for filling us in on these important issues in weather, and thanks so much for coming on to chat.
Thompson: Thanks for having me!
Feltman: That's all for today's episode. If you have any questions about the weather you'd like Andrea to answer for us in a future episode, let us know by sending us an email at ScienceQuickly@sciam.com. We'll be back on Friday with a fascinating conversation on the future of artificial intelligence—and why you shouldn't freak out if your favorite chatbot starts talking about its own sentience.
Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.
For Scientific American, this is Rachel Feltman. See you next time!


