ChatGPT's Rising Traffic Versus Other Top Websites [Infographic]

Yahoo · 28-05-2025

This story was originally published on Social Media Today.
No matter how you look at it, ChatGPT has had a transformative impact on web usage trends and on the broader tech industry.
While generative AI tools similar to ChatGPT had been in development for years, with Meta, Google, and many others exploring the technology's possibilities, no one had launched a user-friendly, conversational AI interface like ChatGPT, which quickly made the app synonymous with the new tech.
And while other AI models are improving, and some likely already surpass ChatGPT in capacity and capability, that first-mover advantage has been huge for the platform, as you can see in this latest overview from Visual Capitalist.
The chart below compares the web traffic data of the top 10 websites in April, as recorded by SimilarWeb.
As per Visual Capitalist:
'The ChatGPT.com site saw a 13% increase in monthly traffic in April 2025, with features like image generation picking up popularity. Meanwhile, other most-visited websites like Google, YouTube, and Facebook all saw their monthly traffic decline in April. Wikipedia saw the sharpest fall at -6.1%, potentially signaling a slowdown in research activity or competition from AI-powered tools like ChatGPT. Social media site X.com (formerly Twitter) followed with a -5.2% change in monthly traffic.'
So even though many other AI alternatives are now available, ChatGPT is still gaining traffic, and becoming a key source of information for many users.
That points to the shifting paths of online discovery, and the rising use of AI for many purposes.
It's worth considering what that means for user behaviors, and how that relates to marketing approaches.


Related Articles

Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio

CNN · 32 minutes ago

Audie Cornish 00:00:00 Elon Musk has told this story a couple of times, but it's good context.

Elon Musk clip 00:00:04 Larry Page and I used to be very close friends and I would stay at his house and I'd talk to Larry into the late hours of the night about AI safety.

Audie Cornish 00:00:13 It's the story of what motivated him to invest in AI. So many, many years ago, Larry Page of Google had just scooped up DeepMind, a cutting-edge AI company founded by Mustafa Suleyman, who I have interviewed on this podcast. It's a good conversation. I'm gonna stick it in our show notes. But here's how Musk told the story at the New York Times DealBook Summit last year.

Elon Musk clip 00:00:37 And it became apparent to me that Larry did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciesist for being pro-humanity. As in, you know, like a racist, but for species. So I'm like, wait a second, what side are you on, Larry?

Audie Cornish 00:01:00 Google had the talent and the computing power and what seemed like infinite amounts of money to spend on both. Musk considered AI a double-edged sword, and he worried people were not worried enough about the sharper edge. He was hosting dinners talking about this. He was doing the university circuit talking about this. He met multiple times with President Obama to talk about it. Now, at the same time, Sam Altman, top executive at the famous Y Combinator, that's a tech company incubator, well, he was also looking to take a big swing with a company focused on artificial intelligence.

Karen Hao 00:01:37 Altman is a very strategic person. He plays the long game. And one of the things he's very, very good at is getting talent and getting capital towards a specific objective. And so early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk.
Audie Cornish 00:02:01 Now the company they co-founded, OpenAI, launched in December of 2015 as a non-profit committed to open collaboration, making its patents and research publicly available. It didn't stay that way.

Sky News clip 00:02:18 I think I just spotted behind this man here, Sam Altman, who is the boss of OpenAI, a tech powerhouse himself in the United States, one of many tech leaders who has come here to Saudi Arabia as part of the American delegation to meet the Crown Prince. There he is, Sam Altman, meeting Donald Trump and the Crown Prince.

Audie Cornish 00:02:39 OpenAI is now a hybrid company, meaning it's a for-profit and a non-profit entity. They call it capped profit. However they slice it, two things are true: they stopped sharing their open source code and they are monetizing the technology.

Sky News clip 00:02:55 But interesting, of course, that he's not the only highly, highly wealthy tech bro, shall we say, in this line-up. Going before him was the world's richest man, Elon Musk. And these images are remarkable. And these opportunities for these tech billionaires and multimillionaires are hugely important.

Audie Cornish 00:03:16 OpenAI stands at the center of the AI revolution, and the questions raised by its co-founders, now rivals, remain. Who should control AI? What are the hidden costs of the AI revolution? And as these companies become the new empires, what power do we, ordinary people, have to shape the future? I'm Audie Cornish, and this is The Assignment.

Audie Cornish 00:03:44 The story of OpenAI, the company, is not the story of artificial intelligence, but of how that science became a product that you're seeing everywhere, from customer service bots to your company HR department. Tech journalist Karen Hao has spent years chronicling how the story of AI shifted from big hopes to serve humanity to a scramble for power, profit, and influence.

Karen Hao 00:04:09 My background is in tech.
I was a mechanical engineer for undergrad. I went to work in Silicon Valley after graduating from MIT and...

Audie Cornish 00:04:19 And you're like, I don't want to make any money. I think I'm going to start doing journalism. Is that how it went? Because that's how it sounds like.

Karen Hao 00:04:27 Basically.

Audie Cornish 00:04:28 Behind the scenes, she saw something familiar: companies promising the world, then bending to the pressure of growth and scale. In her new book, Empire of AI, Hao writes about how the industry's founding ideals gave way to secrecy and rivalry and a breakneck arms race and the rise of a quasi-religious movement. The person at the center: Sam Altman.

Karen Hao 00:04:54 He was the president of Y Combinator, which is one of the most acclaimed startup accelerators in Silicon Valley. It's launched many famous startups. And he had this idea that he wanted to take big swings around hard technology. So he was investing more in quantum, in self-driving cars, in nuclear fusion, and...

Audie Cornish 00:05:18 And we should say, at that time, Silicon Valley's sort of churning out things that we can be describing as the Uber of X, the Airbnb of Y. Yeah, exactly. They were sort of iterations on, I'm not going to say novelty, but social media. They were not taking big swings, so to speak. And so here he is, this guy who has, like, all the connections in the world. Because literally, that's what Y Combinator is. It is just connection and networking and connecting the money to the ideas.

Karen Hao 00:05:49 Yes, it was a very, very dense network. Exactly.

Audie Cornish 00:05:50 He's the guy at the heart of that. And he decides to take a big swing.

Karen Hao 00:05:55 Early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk. And Musk at the time was talking very publicly, very often about his fears around AI.
And so Altman, to recruit Musk, starts saying to him: I am like-minded in your views about AI. I'm also worried about AI being very powerful and going wrong, and it seems to me that if it stays within Google it will go wrong, so the best way to counteract it is to create an AI lab of our own.

Audie Cornish 00:06:39 For the good guys. Google is somehow evil. And it's interesting because we were talking to, I think, the founder of DeepMind, Mustafa Suleyman, he's now at Microsoft, and like, he started his companies because he wanted to be the good guy, right?

Karen Hao 00:06:54 Yeah, yeah.

Audie Cornish 00:06:55 Everyone's the hero in the tech story of how they make things.

Karen Hao 00:06:59 Yes.

Audie Cornish 00:06:59 And in this case, yeah, same thing.

Karen Hao 00:07:01 Yeah, it's a very, very common theme in AI. Also, just a very common theme in everyone's human experience. Everyone is the main character of their story. Everyone is the one trying to do it better. You did hit upon something that is, I think, really key to understanding the AI world and how AI is being developed today. There's a lot of ideological clash that happens where everyone fashions themselves as morally superior to the other, and therefore they have to be the one that is the CEO of a new company, and they're going to do it better than the previous one.

Audie Cornish 00:07:39 And over time, I feel like OpenAI becomes like a cautionary tale, in a way, of like how those ideologies can come to a head. And people may not remember this, but a while back, the company went through a transition where it moved from being kind of non-profit oriented with a non-profit board to that board essentially rebelling and saying, Sam Altman, actually, you're not such a good guy. In how you treat us, but also, are you taking all the sort of safety mechanisms seriously that you could? This blows up and becomes an international news story, even though none of us really know why or how.
We just know that there are some people saying that AI is super, super bad and that he's not, you know, heeding the guardrails, and him being like, it's fine. And these people, they're just... I mean, who knows what could happen? And I see an echo of this conversation over and over again between the people you talk about, the boomers and the doomers.

Karen Hao 00:08:45 Yes.

Audie Cornish 00:08:46 Who are those two factions and how did they surface in OpenAI?

Karen Hao 00:08:50 Yeah. So, going back to this idea that there is an ideological clash that really shapes these technologies: one thing that's happened in the last few years within the AI world and within the Silicon Valley world is there are really what can only be described as quasi-religious movements that have been born.

Audie Cornish 00:09:10 Did you say quasi-religious?

Karen Hao 00:09:12 Quasi-religious, yes. Quasi-religious movements that have been born. And the reason I say this is because there are a lot of people within this world who believe in what I call the artificial general intelligence religion. This is a religion where you think that it is possible to recreate human intelligence. There isn't actually scientific consensus on this. So even the people who talk about this, they themselves talk about it as a belief. You either have the belief or you don't. And if you do have the belief, then the second tenet is you believe it's going to create civilizational transformation. And there are two factions within this religion. There are the boomers, who think that civilizational transformation will be hugely positive: AGI will bring us to utopia. And the other faction believes that transformation will be hugely devastating: AGI could potentially kill all of humanity.

Audie Cornish 00:10:16 Just to be clear, this is not hyperbole. Like, I have seen this very public language.
It comes out sometimes in groups of meetings of AI scientists, where you're right. It's either heaven or hell.

Karen Hao 00:10:32 Exactly, and I couch it by saying quasi-religious, but you could actually even argue that it is just full-blown religious, because there's no evidence for either of these things. It is just a deep-seated belief, a spiritual belief. They do use spiritual language. Sometimes they even talk about AGI as them recreating digital gods and demons. This is the explicit language that they use. And the reason why I say the boomers and the doomers are factions in the same religion is because they both then conclude the same thing, which is: oh, we are the good guys. We are the ones that have to be in control of developing this technology. And so there is an inherently anti-democratic conclusion that they arrive at, which is: we should not make this a democratic process. We should not be getting lots of input from various people. We should not be opening up this technology to lots of people. We need to keep a tight fist, clamp down on this technology, and be secretive about it. In the early days of OpenAI, when it was founded in late 2015, there were very few people that believed in the AGI religion. To even speak about the idea that artificial general intelligence was possible meant you were not a serious scientist, because this is not based in science. And so the people that were drawn to the explicit premise that OpenAI set out, which was to be bold and ambitious and claim, yes, we are in fact trying to achieve AGI, it only attracted the people who already believed that. And it just so happened that it attracted both of the factions: it attracted the boomers and the doomers. So throughout the history of OpenAI, there have always been these two factions constantly in tension with one another and fighting over: how do we actually develop this technology? How should we approach deploying the technology?
And one of the things that I concluded through my reporting was it was this tension that really led to the acceleration, the massive acceleration, of AI development.

Audie Cornish 00:12:49 I see. So it feels like if we're thinking, jeez, where'd this come from? Why is AI everywhere? It's not in our heads. There really was an explosion of productivity because it became a kind of arms race in the industry.

Karen Hao 00:13:02 Yes, and the specific moment was when OpenAI decided... you know, ChatGPT was based on a model called GPT-3.5. GPT-3, which was the previous generation, was a giant step change from GPT-2 in terms of the sheer scale of the model. And OpenAI made a very explicit decision at the time where they thought: we have these AI models in the world today; we have never tried just blowing one up by multiple orders of magnitude. Multiple orders of magnitude more data, multiple orders of magnitude more computer chips for training these models. And so whereas GPT-2, I think, was originally trained on maybe a few dozen chips, they decided to train GPT-3 on 10,000 chips, an entire supercomputer, one of the largest supercomputers that had ever been built at the time. And it was that jump that kicked off within the industry the very first race. That was the opening shot that then led a lot of other companies to start swarming around this concept that OpenAI hit on, which is scale at all costs. And after ChatGPT came out, the race rose to the next level.

Audie Cornish 00:14:28 Coming up: in the Church of AI, what role does Sam Altman play?

Audie Cornish 00:14:34 Somebody said that if you dropped Sam Altman on an island of cannibals and came back in five years, he'd be king.

Karen Hao 00:14:42 Yes, that was Paul Graham.

Audie Cornish 00:14:43 I can never un-hear that. His mentor said...

Karen Hao 00:14:47 As a compliment.

Audie Cornish 00:14:47 As a compliment?

Audie Cornish 00:14:51 Stay with us.
Audie Cornish 00:14:55 I know you're saying quasi-religion, but if we're taking the metaphor all the way, who is Sam Altman in this world then, right? Like, is he a pope? Is he a cult leader? Is he, you know what I mean? Where does he start to fall on the spectrum between boomer and doomer, first of all, since we know he started out cautious? And then second of all, yeah, is it a charismatic leader situation? Like, what are we looking at when we see him in the public space really selling us on his vision?

Karen Hao 00:15:33 Can I read you a quote from the opening of my book?

Audie Cornish 00:15:36 Yes, oh my gosh, please do. I think I know the one you're going to read, actually.

Karen Hao 00:15:40 So I start my book with two quotes side by side. And the one from Sam Altman goes like this: "Successful people create companies. More successful people create countries. The most successful people create religions." And this is in quotations; Altman is himself quoting it: "I heard this from Qi Lu, I'm not sure what the source is. It got me thinking, though: the most successful founders do not set out to create companies. They're on a mission to create something closer to a religion. And at some point, it turns out that forming a company is the easiest way to do so." And so the thing about this religion that's so interesting is they do not pray to a higher power. They are the ones that believe they're creating a higher power. And Altman, I would say, is sort of like Paul Atreides' mom in Dune.

Audie Cornish 00:16:36 I love this reference. Keep it going. Yes.

Karen Hao 00:16:40 She was the one that created the myth, that created a religion around Paul Atreides, right? And when people encountered that myth, they didn't understand that it was a creation, so they just believed it. I think that's who Altman is. This is just, after doing a lot of reporting and understanding who he is, my own conclusion.
This is not like I saw some document where he was talking about these things. I think he understood very, very early on in his career, as evidenced by this quote, that to mobilize people, to mobilize extraordinary resources, you have to create a religious fervor around a quest. And he figured out how to create that by evoking this idea of: we are going to create this artificial general intelligence. And to your question of, is he a boomer or a doomer? No one really knows. And this was something that was quite interesting when interviewing people for the book: regardless of how long they had worked with Altman, how closely they had worked with Altman, no one could really say what he believes. And if they were doomers themselves, they thought that maybe Altman was more of a doomer...

Audie Cornish 00:18:03 Meaning that he cared, that he was concerned, but also the implication was, would care about safety and care about these things they're worried about.

Karen Hao 00:18:12 Yes. And if they were boomers, they believed that Altman was certainly in their camp and most likely a boomer. And what I realized when I asked people, I would always ask, what did Sam tell you in this conversation about what he believed and what the company was doing? I realized he always said different things to different people, depending on what they wanted to hear. And so ultimately, I think he will choose whether to embody a more boomer ethos or a more doomer ethos based on what is convenient and what he needs to continue mobilizing, not just his employees, but also the public, regulators, policymakers, to move in a direction that is in the best interest of OpenAI.

Audie Cornish 00:19:02 The quote I thought you were gonna read was the one that, was it Paul Graham who said this? Somebody said that if you dropped Sam Altman on an island of cannibals and came back in five years, he'd be king.

Karen Hao 00:19:16 Yes, that was Paul Graham. Yeah.
His mentor.

Audie Cornish 00:19:18 I can never unhear that. His mentor said this?

Karen Hao 00:19:21 His mentor said this.

Audie Cornish 00:19:23 As a compliment?

Karen Hao 00:19:26 As a compliment.

Audie Cornish 00:19:28 OK, so Karen, I want to move on to something else, which is, the way you just described Altman is actually similar to the way people have described Donald Trump. And I'm bringing this up because we are seeing Altman travel in those circles now.

Karen Hao 00:19:44 Mm-hmm.

Audie Cornish 00:19:44 In part because of the president's embrace of technology. So just for example, Trump does this three-country tour of Saudi Arabia, UAE, Qatar, brings 50 CEOs, and I'm watching from my desk on screen and I see Sam Altman there shaking hands. The AI czar is there, David Sacks; he calls it this, like, game changer in the global AI race. Which the administration thinks, like, look, if the US can cement the position before anyone else, they don't have to worry about China, et cetera. But it was just so wild seeing Altman in the court of Saudi Arabia helping make this deal happen, and, as we learned in the background reporting, upsetting Elon Musk in the process, who felt that OpenAI was getting more attention than he was. It felt like I was watching a moment where world powers were divvying up something. Yeah. Only those world powers were tech companies.

Karen Hao 00:20:52 Yeah. So, I mean, the reason why I call my book Empire of AI is a nod to this argument that I make in the book that these companies need to be thought of as new forms of empire. And the reason is because empires of old and empires of AI share all the same features. First, they lay claim to resources that are not their own, but they redesign the rules to suggest that it was always their own.
So they're scraping the internet saying, this was free for the taking, but people did not give their informed consent to the idea that just because you post on social media, you're suddenly going to be fodder for training models that could potentially restrict your economic opportunity. The empires also exploit labor all around the world, and with these AI companies, that not only refers to the fact that they contract a lot of workers around the world that then work in extremely poor conditions to do data cleaning, data annotation, and then content moderation for these companies, but also the fact that these organizations are ultimately building labor-automating technologies. OpenAI's definition of AGI is highly autonomous systems that outperform humans in most economically valuable work. So not only is it exploiting labor on the way in, the product is also exploiting labor.

Audie Cornish 00:22:20 Ah, Karen, okay, that's a lot to unpack, but there are a couple things in there I wanna jump in and ask about. You mentioned the labor part of it. We've been talking about AI scientists and venture capitalists and all the people at the top level, and then I'm reading your book and I'm learning it's data workers in Kenya who are, I don't know what, stripping disturbing content out of the responses. Like, how does this even work? Tell me about the humans in AI.

Karen Hao 00:22:51 There was a part in OpenAI's history, as we talked about, when they started shifting from being more research oriented to realizing they needed some kind of commercialization. And that meant placing their text generation tool, which can spew anything, into the hands of millions of users. There have been some really infamous scandals in which companies have done this and it has been a spectacular failure, because the chatbot suddenly starts saying racist, abusive, toxic things in the hands of those users. And so OpenAI was like, we can't have that happen.
We need to build a content moderation filter that we wrap around all of our models so that when a user says something, if the model does start to generate something, the filter blocks it. It never reaches back to the user. And so they contracted workers in Kenya to build this filter. And what these workers did was, they had to, day in and day out, read reams of the worst text on the internet, as well as AI-generated text where OpenAI prompted models to imagine the worst text on the internet. And then those workers had to very carefully annotate it into a detailed taxonomy: is this violent content, or is this sexual content? Is this extremely graphic violent content? Is this sexual abuse content? And does that involve children? For these workers, we see exactly what happened with the social media era and content moderators in social media: they were left deeply traumatized. And it wasn't just the individuals that were left traumatized. It was their communities, their families, people that depended on them, that lost a person that they depended on. And these models would not be possible, they would not have the success as products that they do, without this labor. And it is the most taxing, harmful labor, and they are paid two dollars an hour. The only justification is an ideological one. It is a belief that underpins all empires: that there are superior groups of people that have some nature-given or God-given right to rule over and subjugate inferior people.

Audie Cornish 00:25:11 And this is before we get to the workers in Colombia, right, who are labeling images for AI training, or, and I'm not gonna go down the rabbit hole here, the environmental impact that is required by these data centers. We've talked about that in other contexts and suggest our listeners actually check out Terms of Service, which is our podcast on this stuff, and they've gone into it. But it gets to your point that, like, I think we see such a cleaned-up version of ChatGPT, you know, to, whatever, help you do kind of like nonsense tasks, that we don't have a real sense of the cost, the human cost.

Karen Hao 00:25:50 It's not just the human cost; it becomes a country cost, where you literally are being dispossessed of the natural resources and the human resources to develop your country further. I mean, that is colonialism, and that is the legacy that we still live with.

Audie Cornish 00:26:07 We see something similar with a couple countries that have tried to say, like, look, we're going to get in on the AI game, and it's cost them.

Karen Hao 00:26:14 Led them to repeat the same thing. So I spoke with activists in Chile and Uruguay. They were aggressively trying to fight data center expansion within their countries, because the governments thought, we really want the foreign direct investment; we want to welcome these companies in to build these data centers. But then there are the communities that actually have to host the data centers, and these data centers have to use fresh water to cool the overheating computers. Both of those countries, when those data centers came in, were experiencing historic droughts. And ultimately, it was in that moment that Google said, okay, we are going to come in and build a data center that now uses the fresh drinking water that you do not get to have.

Audie Cornish 00:27:00 There's a recent poll that found that more than 75% of Americans basically want to slow down AI development, to make sure it's done safely or ethically, when they're sort of presented with that option. Do you think there is a version of critical public pressure that could have an effect on this conversation?

Karen Hao 00:27:27 Absolutely.

Audie Cornish 00:27:27 Are you sure? Because when I see the tech people with Trump, you know, getting sovereign wealth money, I'm like, the jig is up.

Karen Hao 00:27:37 Is it going to be incredibly hard? Absolutely.
But you know, one of the features of empires is that they are made to feel inevitable, but historically every single empire has fallen. So it really is possible; it's just going to take an extraordinary amount of work. And I like to think about the full AI supply chain. You know, with these technologies, there are all of these ingredients that the companies need to build these models, and there are all of the spaces to which they then have to have access to deploy those models. The ingredients include the data; they include the land, energy, and water for hosting and powering the supercomputers; they include labor; they include talent; they include all of the intellectual property that writers, artists, and creators have produced over time. And the spaces in which they need to deploy are schools, businesses, the healthcare industry, all of these other types of industries. But one of the things that Silicon Valley has done incredibly well in the last decade is to convince everyone that their resources are in fact Silicon Valley's resources. But we need to remember...

Audie Cornish 00:28:53 What do you mean by that?

Karen Hao 00:28:53 You know, I have friends that will say, there's no data privacy anymore, whatever, they have all of my data anyway, so I'll just continue giving them my data. That's your resource. You have ownership over that. We have ownership over our data, our land, our energy, our water. We have ownership over our schools, our healthcare system. These are actually collectively owned or individually owned resources and spaces. And these companies actually need, they need, access to it. And we are the ones that grant them access to it.

Audie Cornish 00:29:28 But we're also told, like, this is gonna cure cancer. This is gonna help us get to Mars. This is going to, like, the AI is going, you know, AI is gonna open, dot dot dot, open doors to this or that. And the Industrial Revolution had its benefits. You know what I mean?
Like, it had its downsides, but it had its benefits. And I guess the reason why I'm wrestling with this is because, yeah, like, I just had this long conversation with Mustafa Suleyman, and he's very, like, this could really help people, you know? I think, like any tool. And then I talk to you and you're just like, power to the people, fight these colonizers. And then I Google somewhere else and they're like, well, Skynet's coming, so I don't know what to tell y'all. You know what I mean? Like, you're going to be attacked by drones. Find a cave. I hear such conflicting information about how I feel about this industry, and I don't know if it's about the technology itself or about the people involved in it.

Karen Hao 00:30:23 Here's what I'll say. When Mustafa Suleyman says it could help people, could is the operative word. And when people say it could be Skynet, could is still the operative word. And what I try to do with my book is say: this is what is literally happening right now. It's not a could scenario; it is the reality. And this reality should be the best evidence that we have in understanding how AI is impacting people now, and how it will continue to impact people in the future, because it demonstrates to us the logic of how this technology is being built and how it's gonna work its way through, ultimately, the fault lines of our society. And right now, it is not looking good. And what I argue is that we need to, and we can actually, turn the ship around. But it cannot be: it's too late, there's no role for us, we should just wait for these overlords to hopefully be nice. You know, there are artists and writers that are now suing these companies saying, we don't like the fact that you just trained on our intellectual property; you don't get to do that. No, like, this is something that we need to aggressively, collectively shape by taking ownership of that data, taking ownership of that land. And I think everyone wants to be in control of it.
It's just most people don't know how. And I hope that through reading the book, people will start to figure out how.

Audie Cornish 00:31:57 Journalist Karen Hao. Her new book is called "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI."

Audie Cornish 00:32:08 The Assignment is a production of CNN Audio, and this episode was produced by an actual person, Lori Galarreta. Our senior producer is Matt Martinez, Dan Dzula is our technical director, and Steve Lickteig is executive producer of CNN Audio. We had support from Dan Bloom, Haley Thomas, Alex Manassari, Robert Mathers, Jon Dionora, Leni Steinhardt, Jamus Anderus, Nichole Pesaru, and Lisa Namerow. As always, thank you so much for listening. We know you can spend your time in a lot of other places. Please hit that subscribe button, share with a friend, and we'll talk next week.

NATO set to approve new military purchases as part of a major defense spending hike

Hamilton Spectator

32 minutes ago


BRUSSELS (AP) — NATO defense ministers are set Thursday to approve plans to buy more weapons and military equipment to better defend Europe, the Arctic and the North Atlantic, as part of a U.S. push to ramp up security spending.

The 'capability targets' lay out plans for each of the 32 nations to purchase priority equipment like air and missile defense systems, artillery, ammunition, drones and 'strategic enablers' such as air-to-air refueling, heavy air transport and logistics.

'Today we decide on the capability targets. From there, we will assess the gaps we have, not only to be able to defend ourselves today, but also three, five, seven years from now,' NATO Secretary-General Mark Rutte said. 'All these investments have to be financed,' he told reporters before chairing the meeting at NATO's Brussels headquarters.

U.S. President Donald Trump and his NATO counterparts will meet on June 24-25 to agree to new military spending targets. U.S. Defense Secretary Pete Hegseth said that 'to be an alliance, you've got to be more than flags. You got to be more than conferences. You need to keep combat ready capabilities.'

The targets are assigned by NATO based on a blueprint agreed upon in 2023, the military organization's biggest planning shakeup since the Cold War, to defend its territory from an attack by Russia or another major adversary. Under the plans, NATO would aim to have up to 300,000 troops ready to move to its eastern flank within 30 days, although experts suggest the allies would struggle to muster those kinds of numbers.

The member countries are assigned roles in defending NATO territory across three major zones: the high north and Atlantic area, a zone north of the Alps, and another in southern Europe.

Google Confirms Almost All Gmail Users Must Upgrade Accounts

Forbes

An hour ago


Change your account now. Your email is under attack. You know this, of course, but Google has just confirmed that 61% of email users have been targeted by attacks. If that sounds alarming, the situation with text messaging is even worse, hitting almost all American phone users.

Google warns that more than 60% of U.S. users saw 'an increase in scams over the past year,' with more than half 'personally experiencing a data breach.' While these numbers are 'far from surprising,' Google says, what is surprising is that almost all users have yet to upgrade their accounts to make them safer and more secure.

Most users, Google says, 'still rely on older sign-in methods like passwords and two-factor authentication (2FA),' despite the push to upgrade accounts to passkeys and social sign-ins, which use authenticated platforms like 'Sign in with Google.'

The situation is slightly more promising with younger users. 'Digitally-native Gen Z users are bypassing outdated security norms like passwords, opting for more advanced authentication tools.' This younger generation is 'more reliant on passkeys or social sign-ins,' though they're also more likely to reuse passwords and less likely to change them.

Google warns that 'passwords are not only painful to maintain, but are also more prone to phishing and often leaked through data breaches.' And that's the real issue. 'It's important to use tools that automatically secure your account and protect you from scams,' Google tells users, and that means upgrading accounts now.

Google says 'we want to move beyond passwords altogether, while keeping sign-ins as easy as possible.' That includes social sign-ins, but mainly it means passkeys. 'Passkeys are phishing-resistant and can log you in simply with the method you use to unlock your device (like your fingerprint or face ID) — no password required.'
Adding a passkey to your Google account also means 'you can rely on just your Google Account to log in to your favorite websites and apps — limiting the number of accounts you have to maintain.' Put more simply, because passkeys are linked to your hardware (primarily your phone), that secure device becomes a digital key for critical accounts.

Microsoft has gone further than Google and is pushing users to delete passwords altogether, given that they present an account vulnerability if left in place. While you can't do that with your Google account today, you can avoid using your password, and you can change your 2FA settings to remove SMS texts and only use options linked to your devices: authenticator apps or Google prompts. Upgrade your account today.
