The Vibes-Based Pricing of ‘Pro' AI Software

By Lauren Goode , Michael Calore , and Reece Rogers Aug 8, 2025 3:46 PM In this episode of Uncanny Valley , we're talking about why some chatbot subscriptions are so expensive and how these premium prices were determined on vibes more than anything substantial. Photo-Illustration:Chatbot premium subscriptions like ChatGPT Pro and Claude Max currently cost around $200, but it's not clear why. Hosts Lauren Goode and and Michael Calore speak with staff writer Reece Rogers to find out what's behind these models that AI companies bill as their most powerful, and whether they could become a staple in our future. You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Reece Rogers on Bluesky at @thiccreese. Write to us at uncannyvalley@ Articles mentioned in this episode: How to Listen You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for 'uncanny valley.' We're on Spotify too. Transcript Note: This is an automated transcript, which may contain errors. Michael Calore: Hey, this is Mike. Before we start, I'm going to share some exciting news with you. We're doing a live show in San Francisco on September 9th in partnership with the local station KQED. Lauren and I will sit down with our editor-in-chief, Katie Drummond, and we will have a special guest joining us for a conversation that you will not want to miss. You can use the link in the show notes to grab a ticket and invite a friend. We cannot wait to see you there. Hey, Lauren. How you doing? Lauren Goode: Hi. I'm sorry. What's your name again? You've been gone for so long. Michael Calore: I've been snorkeling on the moon. Yes. Lauren Goode: I missed you a lot. Michael Calore: I missed you too. Lauren Goode: Oh, thank you. Michael Calore: How's the show? Lauren Goode: The show has been not the same. Just not the same. We ran a rerun of our episode about Bryan Johnson paired with our wellness package on Michael Calore: Perfect. Lauren Goode: So that was really fun. You know what? Things have been OK, but I missed you. I have to say that. How was your vacation? Michael Calore: It was lovely. I couldn't wait to get back here in the chair and behind the microphone, though. Lauren Goode: I don't believe that. Did you eat butter? Michael Calore: No. Lauren Goode: In honor of Katie. Michael Calore: I did not. Lauren Goode: Are you going to tell the folks what you did for a vacation? Michael Calore: I went snorkeling on the moon. Lauren Goode: Incredible. Michael Calore: It really was. Lauren Goode: Wow. Did you post it to the 'Gram? Wow, I'm impressed. OK. Well, you chose a good week to come back because it's model week. Lots of models here. Michael Calore: There's models here? Lauren Goode: We're going to talk about models. Michael Calore: Excuse me. Lauren Goode: Your eyes just lit up. Michael Calore: I mean, are they hiding behind the magazines? I don't see any models. Lauren Goode: I hate to break it to you. We're going to talk about some really nerdy stuff. These are AI models. Michael Calore: Oh, OK. Lauren Goode: It's a big week for OpenAI. By the way, they released two new open-weight models earlier this week. 
These are models that have set parameters around them, but they also give developers a little bit more access to the inner workings of them so that can tinker with them and build on them and stuff like that. And then a lot of folks are excited this week about GPT-5. Michael Calore: GPT-5? Lauren Goode: Yeah. Which is coming out. Michael Calore: And that's a big batch of releases, I imagine that we're going to go hands-on with them at some point. Lauren Goode: We are, yeah. One of our colleagues, Reece Rogers, is planning on putting GPT-5 to the test. And just before this, he was doing some tests of these really expensive AI services, like $200 per month. Michael Calore: Yes. The top tier ones that cost, yes, 200, $250 a month like ChatGPT Pro. And then there's Claude Max and there's so many more. And we are very lucky to have Reece here on the show with us today to tell us about them. Reece Rogers: Howdy, howdy. Thanks for having me today. And of course a big welcome back to Mike. Michael Calore: Thank you. Thank you. I wasn't gone that long, guys, but seriously, thank you. I very much appreciate it. So Reece, I'm really glad you're here today because the services that we're going to be talking about are a far cry from the $10 a month, $20 a month subscriptions that people are used to paying for things like streaming music, streaming video, cloud storage. We are officially now in the era of the $200 per month software subscription. So real quick before we get started, what is the craziest request that you've processed using a pro-level AI subscription? And was it worth it? Reece Rogers: Yes. This definitely isn't anywhere near close to your Netflix subscription cost, but yes, using OpenAI's $200 Pro plan, when the ChatGPT agent first dropped, I tested the guardrails to see whether it would shop on an adult website and see how good its picks were. And let's just say I didn't get any of the toys. Lauren Goode: Wait, wait. OK. By picks you don't mean picks like photos. You mean picks like picking up products for you? Reece Rogers: Yes. I wanted to see what kind of products it would do, and it searched for about 20 minutes and it picked some all right things. Lauren Goode: What did it pick? Reece Rogers: No, actually— Michael Calore: This is a family podcast. Reece Rogers: This is a family— Lauren Goode: If your kids are in the car— Reece Rogers: It's too much. Too much. Lauren Goode: ... turn it off right now. No, tell us. What did it pick? Reece Rogers: I mean, it just picked a lot of cock rings, so I feel like I can't say that. Do you want me to say a different one for this answer? Lauren Goode: You can totally say that. Michael Calore: This is WIRED's Uncanny Valley , a show about the people, power, and influence of Silicon Valley. Today we're talking about why some chatbot subscriptions are so expensive, whether it's ChatGPT Pro or Claude Max, these premium subscriptions often cost around $200 a month with the promise that if you pay top dollar, you'll get access to the most powerful models these companies can provide. But why the AI companies have settled on this price range for their chatbots is less clear. WIRED's Reece Rogers went looking for answers and he found that these premium prices were determined on vibes more than anything substantial. But this vibes-based pricing reveals a lot about how AI companies are thinking about the future of their profitability and how they're hoping that users and companies will increasingly rely on these products. 
I'm Michael Calore, director of Consumer Tech and Culture. Lauren Goode: I'm Lauren Goode, I'm a senior correspondent. Reece Rogers: And I'm Reece Rogers, a staff writer who focuses on software. Michael Calore: So Reece and Lauren, let's get started by first going through the AI services that we're actually talking about here. Reece Rogers: Yeah, so we've already briefly mentioned OpenAI's ChatGPT Pro, which is $200 a month and was the first one to come out. In April of this year, Anthropic also dropped their Claude Max plan, which has a $200 monthly option. They promote Claude Max as having way more usage per session than previous versions, and this is really aimed at coders. Coders love Claude. Google, also, not to be left out, got into the game not too long after with their AI Ultra plan. This one's a little bit more expensive at $250, but you get 30 terabytes of storage and a few other perks. So that's really leaning into the Google Cloud services of it all. Lauren Goode: Interesting. Storage as in just your Google Cloud account, your drive and your photos and things like that, plus this super smart chatbot. Reece Rogers: Google's like, "We have everything. Let's just throw the whole book at them." Lauren Goode: OK. Reece Rogers: And then on the smaller end, there's Cursor, which is well known for AI-assisted coding; they have a $200 plan. And Perplexity, which is a fairly popular startup that's racing to become the Google of AI-era search. They also released a premium version called Perplexity Max, which is, surprisingly, also $200 a month. So if you're thinking about what these have in common: New features usually come to the most expensive tier first. So if you're not paying for the models, you're not getting the newest features as soon as they come out. Also, these premium products are often advertised as the most powerful version of these companies' models, sometimes with even special models just for the Pro users. So you're getting almost unlimited access to as many prompts as you want. And a lot of power users do want that, though this can be quite expensive for the companies that are providing them. Lauren Goode: Yeah, we're going to talk about that later on the show, too. When you say prompts, I just want to make clear for folks, I think a lot of our listeners have used these services, but basically it's a query, it's a search. It's like imagine if in the era of Google search, in the early days, Google started charging you per query that you were putting in and some of your search results came back slow and others came back much more quickly because you were paying for access to that premium service. Reece Rogers: Absolutely. Lauren Goode: It's that, but now it's generative AI responses. Reece Rogers: And it's also coders sitting on their computer for 10 hours a day using the tool the whole time. Lauren Goode: Oh yeah, they're all about the codegen. Reece Rogers: So this is really about the power users who want to sit there and they want to use the AI all day long potentially and get the most powerful version. Lauren Goode: Reece, there's one that you didn't mention. Reece Rogers: Mm? Lauren Goode: Grok. Elon Musk's Grok. It's actually one of the priciest out there. It's $300 per month. It's also the weirdest. Reece Rogers: Yes. Lauren Goode: One of our colleagues gave it a spin and the chatbot was flirty and suggestive and weird. There are special settings within Grok where you can interact with custom characters designed for flirtatious chatter.
It's like the fever dream of a billionaire who's ... Anyway. It's interesting because we also wrote a story earlier this month about how the chatbot went off the rails and was spewing anti-Semitic content. It's wild, and I think this is a good example of how the culture of different companies can be reflected in their consumer-facing products and maybe it's indicative of how some of them are putting a certain price point on them and saying they're, quote, unquote, "premium," but premium becomes a little bit subjective. What is actually high quality? Reece Rogers: Yes, and I don't know if I would consider Grok's output to be the highest of high quality. Definitely with what we've learned throughout our reporting, I think that xAI's SuperGrok plan is $300 and they're really leaning into what most people consider to be one of the more toxic aspects of AI chatbots right now, which is this sexy AI element. With OpenAI's Pro plan, I was really trying to dig at the edge of the guardrails and what it would allow me to do, but if you log into Grok and you're using the highest-tier plan there, you don't even have to push at the guardrails to get these very adult X-rated answers out of the tool. It even advertises having a not-safe-for-work mode. So I think Grok's anime girl companion, she's designed to entice, she twirls around, she shows off her underwear. It's targeted at this male user base that's probably pretty online, very lonely, and unfortunately quite comfortable with the objectification of women. So when I think about xAI, it's taking a very different approach. I mean, if you compare it to OpenAI right now, they keep talking about how they're not optimizing for engagement and they're actually exploring more avenues for potentially protecting users or not really leaning into these stickier, grosser aspects in the chatbots. Michael Calore: So they all have different objectives, but they all have around the same price. $200 seems to be the price that most of the companies are zeroing in on and some are charging a little bit more than that. And Reece, one of the main things that you wanted to find out when you went in to test all of these top-tier subscriptions is why they had settled on this price range. Does $200 make sense? So what did you find? Is $200 the level where AI companies can make real money? Reece Rogers: No. I just want to get that out of the way. Lauren Goode: Thanks for listening to our show, folks. That's it for today. Reece Rogers: When I go into a reporting project, you sometimes think it's going to be a complicated answer, that there's going to be tons of nuance. And I think reporting this one out, it really was that OpenAI CEO Sam Altman decided on the $200 price tag when they were the first movers on this top-tier space and everyone just followed at that price range. He was even open that they weren't at launch making money on this, and I haven't gotten any other information to suggest that they are now making a profit because these generative AI tools are resource intensive to run, they're quite costly, with startups rapidly burning through cash. But much like how OpenAI defined the consumer market for chatbots with the release of ChatGPT, Altman's $200 price tier for ChatGPT Pro was copied by everyone else, though it is super critical to note, as I answered at the beginning, none of these companies I talked with spoke about making a profit off of these plans at the $200 price tag.
Lauren Goode: Reece, you and I actually had a pretty interesting call with Anthropic for this story and asked this product manager various ways how they landed on this price point and couldn't really get a clear answer. And we weren't sure whether that was because this product manager wasn't at the level where they could speak to those executive decisions or if it was because it is a giant shruggy in the industry: We're fast-following everyone else here, let's just see what we can get, and let's maybe start to inure consumers, or prosumers, to this idea that they're going to have to pay a lot more for software. Michael Calore: Yeah. Reece Rogers: And Lauren, you tried to ask him every way possible and you were getting nothing back. None of those financial decisions were going to be revealed. Lauren Goode: Yeah, those prompts would've been very costly because I just kept sending them in. Michael Calore: So all of the AI companies are billing these top-tier chatbot subscriptions as far superior to the free versions or the ones that cost 1/10 of the price. You've used most of them. And I want to ask, do they live up to the hype? Are they worth the money? Reece Rogers: It really depends on who you ask. If you're a power user, especially a software developer or other nerd living in San Francisco, it could very much be worth it for you to have access to these capabilities that you can use as much as you want, basically. But if you're an average user, it's probably very tough to justify spending that amount of money on one subscription really for anything. I spoke with Allie K. Miller for this reporting. She's an influential business consultant. If you see a business going AI first, I bet she's whispered to them. So she's used all these really expensive plans and she sorts who actually pays the $200 into two core groups. First you have the faction of Silicon Valley insiders who want to feel like new world explorers. They've got money to burn and they just want to try it out and kind of seem cool in their social circle. These are the glassholes. These are the people who bought the Apple Vision Pro. But then the second and potentially larger group are those who believe they're getting their money's worth. They're getting a return on the investment. They're building software, they're developing an app. Or maybe even in Perplexity's case, they say that a lot of the people paying for their $200-a-month plan are professionals, potentially financial people or investment bankers who want to stay up to date with very rapid information. So I think there's a large swath of people that are using these and are very happy paying $200 a month, but at the same time, it's not broken out into the mainstream. The average person is not paying $200 a month for a chatbot, but these companies are hoping that these premium features, these more potentially powerful models that are coming out, they hope that they will eventually be able to get a larger adoption rate from people who could see these as actually instrumental to their daily decision-making. Lauren Goode: It's interesting that you mentioned Google's bundle earlier because that to me just seems to make the most sense. So if you're OpenAI, you must be thinking about the various products and services that you can offer right now to build out a suite in order to get people in, and so you're not just telling them you get this super smart, fast-reasoning version of ChatGPT. OK, great, but maybe you also get email, you also get deep research.
You also get, I don't know, something that does your laundry for you, but you have to be thinking that way. Reece Rogers: Absolutely. I mean, one of their recent releases is saying that they can make slide decks and Excel sheets within their AI tool showing that they know they're looking to expand beyond just a question and answer chatbot format. Lauren Goode: I know a woman who is paying for one of the expensive chatbots. I'm pretty sure it's ChatGPT, and she was telling me about how she had to do some personal finance. She gave it a list of the different credit cards that she uses and had it go look up their APYs and their different points and perks and then gave it her monthly expense, gave the ChatGPT her monthly expenses and basically said, "Which card should I allocate these expenses to in order to make the most money?" She said it's fantastic. It basically saved her what she'd spent on the software. Reece Rogers: Wow. Lauren Goode: That was a hyper-specific example. Michael Calore: Yeah. Reece Rogers: Yeah. Lauren Goode: Yeah. Reece Rogers: I mean, that's going back to the Anthropic conversation. The Anthropic product person also mentioned that they had used it to decide what kind of mortgage they wanted for their house, and they said they saved more than $200 a month of that decision. Lauren Goode: Yeah, incredible flex. Michael Calore: Yeah. Reece Rogers: Yeah. Lauren Goode: Working for Anthropic, living in San Francisco, median home prices, what, over a million dollars? And they're like, "Well, I had to figure out that mortgage." Michael Calore: So all of these are pretty fringe examples for sure. And Reece, the $200 question is are any of these worth it for the average person? It seems like the answer is no. Reece Rogers: I would say that not only does the average person not need one of these subscriptions, the business financials behind everything is so murky that I'm not even sure whether this $200 price tag will stick around for much longer. Michael Calore: Oh, it's going to go up? Reece Rogers: It's going to go up, maybe. I don't know if these companies can even afford to offer these plans, so whether it might go up, but even at a higher price tier, it's unclear how much these cost. Michael Calore: Wow. Well, Reece, thank you for joining us today and walking us through it. Reece Rogers: Thank you so much. Always love my time in the WIRED library. Lauren Goode: We always love seeing the real Reece Rogers here, not the AI version. Reece Rogers: Thank you for having me. Michael Calore: Let's take a quick break, but when we come back, Lauren and I will dive into how these premium subscriptions pencil out for the AI companies that are offering them and what these subscriptions mean for the rest of us. Welcome back to Uncanny Valley . Today we're talking about the AI premium chatbot subscriptions that are all 200 bucks a month or more. WIRED's Reece Rogers was just with us and he was telling us about what they offer and why they're priced that way, and we'll get to the question of whether these chatbots can be adopted in a more mainstream level. But first, Lauren, were you surprised that these chatbot prices were mostly decided on vibes? Lauren Goode: Honestly, no. It didn't surprise me. I think Reece and I both went into reporting on this pretty optimistic that someone would pull back the curtain and tell us what was going on, share some of the economics behind it, but ultimately it does really seem like it's vibes-based pricing. It's a little bit arbitrary. Michael Calore: Yeah. 
Lauren Goode: I think why that answer is so acceptable at this stage is just because it doesn't seem like it's feasible that any of them would make real money off of a handful of early adopters paying $200 per month. Michael Calore: Yeah. Lauren Goode: These services are just all so expensive. You have the capital expenditure of these tech companies building the frontier models like Google, Meta, Microsoft, OpenAI, Anthropic. They're spending literally billions of dollars on infrastructure, talent, R&D. For example, just because it's recent and Meta is a publicly traded company that's required to share its financials every quarter, Meta said it's expected to spend between 66 and $72 billion on AI infrastructure this year alone. That's up $30 billion from last year. Michael Calore: That's insane amount. Lauren Goode: This is the kind of spend that these companies have right now. It's expensive to provide these services. And even if these companies were able to convert more everyday people into power users who are willing to pay multiple hundred dollars a month for AI access, it's a pretty tough sell. It remains unclear when that actually nets out, how long it would be or how many people they'd have to get into paying for that before it actually makes sense. Michael Calore: Yeah. I think a useful analog here is to think about ride sharing, right? Because when Uber and Lyft first came onto the scene, particularly here in San Francisco, they were involved in this price war and you could take Ubers that seemed like ridiculously cheap, and we were all figuring out in real time like, oh, they're subsidizing this because they want us to use their product. Lauren Goode: That's right. The VCs were basically subsidizing our lifestyles. I remember someone wrote an article about that. It might've been in WIRED. Michael Calore: Yes, it may have been you even. Lauren Goode: Yeah. I do- Michael Calore: So there's this situation where companies have all of this money and all they really need to do is get people to use it and get hooked on it and get reliant on it so that they can bring the price up over time or maybe they keep the prices, as you said, they're able to scale up to the point where they can keep the price exactly where they set it, just completely based on a hunch. Also, I think it's interesting that we're talking about the price of subscriptions because right now a lot of people are feeling subscription fatigue. You have subscribed to so many things that you probably could not count how many subscriptions you're paying out. And most of those are around 20 bucks a month. So the idea that something is $200 a month seems very expensive to you as an individual because you're already just overwhelmed with how much money you're doling out for things you don't own. And now it's everywhere in the corporate world. You look at subscription licenses for Pro level software and they're very expensive. So if you're a company and you're looking at paying a monthly subscription for software, $200 is not really that much, particularly for something that promises doing the work that human assistants could do, and that's way cheaper. Lauren Goode: What I hear you saying though is that there's still going to be a sticker shock for consumers who are used to paying $20 per month and the business users. Michael Calore: Yes. Lauren Goode: They're spending so much money to make this AI. Michael Calore: They really are. 
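To put the gap Lauren is describing in rough numbers, here is a minimal back-of-the-envelope sketch in Python. It is illustrative only: it assumes subscription revenue is the sole income, uses the midpoint of the $66 to $72 billion Meta figure as a stand-in for a single company's annual AI infrastructure budget, and ignores the per-user compute costs that make these plans expensive to serve, so the real subscriber count needed would be even higher.

# Illustrative only: how many $200-a-month subscribers it would take to cover
# an AI infrastructure budget on the scale cited above for Meta.
MONTHLY_PRICE = 200          # dollars, the pro tier discussed in this episode
ANNUAL_INFRA_SPEND = 70e9    # dollars, midpoint of the $66-72 billion figure

annual_revenue_per_subscriber = MONTHLY_PRICE * 12            # $2,400 per year
subscribers_needed = ANNUAL_INFRA_SPEND / annual_revenue_per_subscriber
print(f"{subscribers_needed:,.0f} subscribers")               # roughly 29 million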
Lauren Goode: And I think it just goes back to what you said that we all got, all of us as a society, as a tech embracing society to some extent got used to this idea that software is just meant to scale, scale, scale and scale as quickly as possible. And it doesn't really, quote, unquote, "matter if the companies are profitable." Someday they might be. You're making this future bet, and I think this is just the biggest future software bet we've probably ever lived through. Michael Calore: Yeah, and I think a large component of that bet is the fact that the companies who are selling these subscriptions are also selling, they're selling a bargain to the company because they're saying, "If you pay us $200 a month, you don't have to hire somebody to do your spreadsheets, to do your emails." Our colleague Paresh Dave did a deep dive earlier this year where he found that AI chatbots could replace entry-level engineers, and as we've talked about on the show, there are AI agentic systems that can make sales calls. They can write software code, they can write all your emails for you. They can schedule everything for you in Slack. This is all work that was once done by humans now being done by a computer that you're only paying $200 a month for even just not accounting for any of the add-on features, that cost of hundreds of dollars a month is a fraction of what you pay for an assistant or a very junior level engineer. Lauren Goode: Right. Yeah, I mean I think there are a few different things going on there with regards to jobs or replacing humans with AI. Yes, these services that are $200 per month are probably slightly underpriced if you are considering that versus hiring a human. Yes, AI is changing people's jobs right now in real time as we tape this podcast and it's changing our jobs, it's changing everything. Do I think it is capable of fully replacing a human yet at most jobs? Not really. Not yet. I just spent some time reporting out a story about vibe coding and talked to a bunch of engineers, and a lot of them are using these Codegen or AI assisted code tools, but they call them interns. They're still treating them like they have to manage something. Michael Calore: No. Lauren Goode: They can't fully do the job yet. Some of them are still bullish on humans. The idea is that it's not going to replace a hundred engineers. It's going to make a single engineer 100 times more productive. Michael Calore: OK. Lauren Goode: Do I agree with that? I don't know. I'm not a coder, but this is the way that people are doing mental gymnastics right now around AI. The other part of this though that I also think is true is that I think some employers who are laying off hordes of people right now because AI, I think it's bullshit. I think it's a very convenient excuse for them in the short term. In the long term, we are absolutely going to see, I think, disruption from jobs. And then I think people will look back at $200 per month software subscriptions and say, "Oh, remember the days when we were all paying just $200 for chatbots and talking about how expensive it was?" And it's like when the first iPhone came out or something, right? "Oh God, I'd never spend that much on a phone." And now we all do because it's such an integral part of our lives. But right now, in this moment, it's really ironic when you have a giant tech company that's willing to spend $72 billion a year on AI infrastructure and then you cut a bunch of human roles, human jobs for efficiency reasons. 
Michael Calore: Clap hands, snap fingers, big up, and completely agree with you. I would love to get into a debate about this, but I can't because that was perfect. Lauren Goode: No, debate me. We need tension on the show. Michael Calore: What we need is a break and then we'll come back and do recommendations. So Lauren, we have arrived at the part of our show where we share our personal recommendations for Uncanny Valley listeners. You get to go first this week. What is your recommendation? Lauren Goode: My recommendation is courtesy of Sam Altman, the man of the hour. Michael Calore: OK. Lauren Goode: All right. Actually not. But last fall I went to an event for Worldcoin, which is Sam Altman's other company. It was a super weird vibey crypto eye scanning thing at a warehouse in the Mission District of San Francisco. Michael Calore: The orb? Lauren Goode: This party had everything. Yeah. But there was swag there and there was a really nice sweatshirt that had World emblazoned on it, and I looked at the label and it's by a company called Original Favorites, and so I ordered one. So I have the Sam Altman Worldcoin sweatshirt without the World logo on it. I'm showing it to you right now. Michael Calore: Yeah. This is what you're wearing. Lauren Goode: And I love this sweatshirt. It is like in the '90s when you used to buy sweatshirts and they were so rough and tough, they almost felt like cardboard? Michael Calore: Yes. Lauren Goode: Like good old Champion sweatshirts, you know what I mean? Michael Calore: Yes. Lauren Goode: That feeling. And you'd wash it a hundred times and it would still have that ... This is what this is. Michael Calore: It looks fabulous. Lauren Goode: Thank you. Michael Calore: Congratulations. Lauren Goode: Mike, what's your recommendation? Michael Calore: Oh, gosh. I'm going to recommend some stand-up comedy for our times. Lauren Goode: Do it. Michael Calore: It's the new Marc Maron stand-up special that's on HBO. It came out a week ago or so. It's called Panicked and it is quite good. In particular, I'm recommending it because there's a fantastic riff, like right in the middle, a whole bit about the app Watch Duty, which is the app that people use to track wildfires and became very popular in Los Angeles at the beginning of 2025 when LA was devastated by all of the wildfires. Well, Marc tells the story about how he had Watch Duty and he could not understand the notifications in the app, and he didn't know whether or not he should evacuate, so he grabbed all of his cats and evacuated and just absolutely did not need to. And it's this really fun long story, but it's also just very good. The whole thing is very good. If you're familiar with Marc Maron's comedy, you'll know that he's very dark and this special does get very dark, particularly in the second half, but I can highly recommend it. If you know him and you like him, you will love it. Lauren Goode: Adding it to the watch list. Michael Calore: Great. Lauren Goode: Adding it to Watch Duty. Our guy, Boone Ashworth, who used to produce this show for us, he wrote a feature story this year about Watch Duty, too. Michael Calore: He did. He did. Lauren Goode: So we'll include that in the show notes. Michael Calore: Yes. Lauren Goode: And Mike, you're never leaving us again, right? No more vacations for you ever? Michael Calore: Never ever. Lauren Goode: Thank God. Michael Calore: I'll be sitting here behind the microphone until the end of time. Lauren Goode: The best chatbot there is. 
Michael Calore: Thanks for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you would like to get in touch with us with questions, comments, or show suggestions, write to us at uncannyvalley@ Today's show is produced by Adriana Tapia and Marc Leyda. Amar Lal at Macrosound mixed this episode. Marc Leyda is our SF Studio engineer. Meghan Herbst and Daniel Roman fact-checked this episode. Kate Osborne is our executive producer. Katie Drummond is WIRED's global editorial director and Chris Bannon is Condé Nast's head of Global Audio.

Join Our Next Livestream: Inside the AI Copyright Battles with WIRED Reporters

Jul 11, 2025 1:40 PM Curious about generative AI and copyright? Subscribers can join WIRED live on July 16 as we answer your questions about this critical topic. What's going on right now with the copyright battles over artificial intelligence? Many lawsuits regarding generative AI's training materials were initially filed back in 2023, with decisions just now starting to trickle out. Whether it's Midjourney generating videos of Disney characters, like WALL-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years, and she's ready to answer your questions. Bring all your burning questions about the AI copyright battles to WIRED's next subscriber-only livestream, scheduled for July 16 at 12pm ET / 9am PT, hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event. You can help us prepare by submitting any questions you have before the livestream here, or by leaving a comment below. Kate Knibbs and Reece Rogers answer your questions at our next livestream on July 16, 2025 at 12pm ET / 9am PT.

Subscriber-Only Livestream Replay: Beginner Advice for Claude, a ChatGPT Alternative

By Reece Rogers and Kylie Robison Jul 2, 2025 If you missed WIRED's live, subscriber-only Q&A focused on the software features of Anthropic's Claude chatbot, hosted by Reece Rogers and Kylie Robison, you can watch the replay here. Hello WIRED subscribers! Thank you to everyone who attended our most recent AI Unlocked livestream Q&A session, Chatbot Basics: Beginner Advice for Claude, a ChatGPT Alternative. Staff writer Reece Rogers and senior correspondent Kylie Robison provided an overview of Anthropic's Claude chatbot, one of the most-used alternatives to OpenAI's ChatGPT and popular with AI insiders. They also answered audience questions about all kinds of topics, such as the main differences between Claude and ChatGPT, why chatbots hallucinate, privacy considerations, and even using chatbots for spiritual guidance. You can watch the livestream below, and find all of our previous livestreams here. Transcript Note: This is an automated transcript, which may contain errors. Reece Rogers: Hello! How's everyone doing today? Thank you so much for tuning in. I'm excited for another subscriber livestream. I've enjoyed the past ones of these, and today we have a very special guest. We have Kylie. I'm Reece Rogers, by the way, and I would love for you to introduce yourself. Kylie Robison: I'm Kylie Robison. I'm a senior correspondent covering AI here in San Francisco for WIRED. Reece Rogers: Brilliant, read her stories. She has some good ones in the works. We can't talk about them, but they'll be up soon. But yeah, thank you everyone for coming today. We're going to do another chatbot one. It's been really fun, kind of experimenting and learning more about chatbots together. We did ChatGPT a little while back, and now we're here with one of its main alternatives, Claude. We have some great questions from you. I have a demo ready to go on my laptop, and your questions are coming through here on my phone, so definitely go ahead and drop any questions you have. And for this one, we enabled it so you can, like, respond to each other in the questions and upvote questions. We won't be able to get to all of them, so if you see one you really want answered, go ahead and upvote that. We just ask that in the comments, keep it civil. Keep it respectful. We're here to learn. You can have different opinions. But please be respectful. And yeah, we will send a recording out in a couple days. And the last housekeeping note right at the top: There'll be a one-question feedback survey at the end. So feel free, anything you want to share with us about this or other things we're working on at WIRED, we'd love to hear your feedback any time. So I would love to start with the main event for today, which is our little software tour of Claude. So, as you can see right here on the screen, it says, Welcome, Reece. So this is the desktop version of Claude, so this might look a little different if you're on, you know, a Windows machine (this is on a MacBook), or if you're on the app version; there's also an app version of this available. So, like, I think, before I really dive into the software tour: What is Claude? Kylie Robison: Yes, so Claude is a chatbot released by Anthropic, which some would consider, like, the second-leading lab behind OpenAI. They have a more boutique way of going about creating AI, and you might find it slightly more friendly and engaging because they do a lot of work on personality. And yeah.
You can see that kind of in their branding as well like it's a little warmer. It's a little friendlier experience, and you might get out of chat. So let's I'll start off just with like the tour aspect of this. So right here we have the prompt section and then along the bottom, you have some pre-written topics. So this is, if, like you showed up, you're like, I don't know how to use a chatbot right? I'm not sure where to start. I'm not sure how to write a prompt what should I do? These pre-written stuff? It's a good place to start off. So like, let's click this life stuff one and then help me work through a decision. OK, let's try that one. So we click through that here. And then you see that it went ahead and populated a longer prompt right? This is about two paragraphs with a lot of context, and it's helpful. These aren't. You don't have to write long prompts like this all the time to get a good response. But this is a good way, a good example. So we see here, I'd love to help you work through a decision. Give me the best guidance. What's the decision you're facing? And what are the main options you're considering. So this is a kind of it's asking you questions to follow up. And you could just talk back and forth with that for a while. So that's kind of how we're going to use the main prompting area you see here working on the free version. So we have opus. We have sonnet, and those are going to be their two models that are the newer ones that they're available on the Free Level, and if you upgrade it to the pro, which we'll talk more about the pro paid versus unpaid later. But you have even more options. And then, just to go over the other settings. You can work through. If you want to change the style you go in here, and you can do use style. So this can be. Say, I want it to be very formal when it talks to me, or I want it to be very explanatory like, you can also go ahead and create your own style, using AI. Say you, don't. You want it to be kind of mean or rude to you? You know. Why would you want that I don't know but these chat bots are very adaptable to the kind of personality that you're hoping to get out of it. And then let's go next. We're gonna go see these settings. So here's also right here. This plus button is where you can upload a file. Take a screenshot, add from Github. So let's go ahead. Let's upload a file. Let's see what's up with that. So I go here. Let's go to my downloads. Let's go to our holiday calendar. So this is this is something that you could do with. Say, you're getting a big document with a lot of dates on it. Say that you're trying to follow along with your your grandson's baseball team schedule like this, and you're like not sure how to approach it, or if just any kind of document that you want further analyze, you put in the Claude, say, like. What's the best way to plan my vacations for the year based around this calendar? So what it did here, right? It took a look at the file. It picked some good dates off. But now I kind of. I like this. It's a little boring. So let's go ahead. Let's make it a little more visual. So I think when it comes to visuals, that's where I think Claude can really help you visualize data. And so like, let's make a table Miami table. So you see here that it pulled up an artifact. So artifact is when Claude basically is running or generating code on the right side of your screen. So right here, it's using code to create a visual table for me. Kylie Claude's kind of known for code, like, what? Tell me more about that? 
Or like, what's maybe another benefit of Claude. Kylie Robison: Yeah. And you know, you don't have to understand the code that it's running here. And that's the great part. It's coding for you right now, a lot of software engineers like using any sort of AI for coding tasks. And Claude has been a really big one. It's about clean code, just like it is for writing, you know. You don't want it to draft something that's quite terrible. So it's the same for code. Can it be efficient and elegant? So that's why a lot of engineers will use Claude code too. You can use it to create a website. You can use it to create an app. And it's really helpful getting you started Reece Rogers: They just recently rolled out more stuff to connect it with APIs, so if you're into coding Claude is a pretty popular pit among chatbots. So if you see here it took a minute. But now it has a very visual kind of layout that you can look at you can get has efficiency scores. I don't know how it calculated that. So I would look into those, but this is like kind of a good example of how you can take kind of a document that's a little dry and make it a little more appealing. Pull out key information through Claude. Now I have one more quick, demo. That just kind of shows off something. I think Claude's really good at that. You might not get as much from with ChatGPT. And this is, I think you can do decent quizzes like it's using the artifacts. So like, let's do. Kylie Robison: I'll also add, if you know, like with that weird percentages, it's like, what does that even mean? You can tell Claude like, what was that? Can you remove it? It's really great, and, you know, responding to your needs. So if there was a part of the dynamic calendar that you got, and you're like, I don't need this. You can just ask it to remove it, and I'll do that. Reece Rogers: Let's create a quiz for beginners. Want to learn more about chat box, Kylie. Give me like an aesthetic like a color scheme. Kylie Robison: A color scheme I love like a like a soft pastel. Reece Rogers: Yes. OK. So as it's generating that, let's go ahead. So yes, there's one more thing on the software tour. Thank you for thanks for hanging out today. I really appreciate it. Let's go back here. I'll let it finish generating but one question. I really wanted your insights on Kylie. We got this great question from a reader pulling that up here. Jeffrey says, I appreciate that Claude is ethically aware other than that. What what makes Claude a better AI than its competitors, and I kind of want to unpack the first part of that question more like. What would he mean by ethically aware, like, how does Anthropic approach Claude differently than other AI companies? Kylie Robison: Yeah, they're really open about what the foundation of Claude is. So what does it think? How does it make its decision? So it uses something called a constitution? So you can think of it like, you know. I know that certain things are bad based off, you know, universal declaration of human rights that is like an agreed upon document for what is right and what is wrong for humanity. It used Apple's term of service to avoid problems with accuracy, and these are all things you can find online. It is published exactly what goes into this model. And it's sort of the foundation of how it's going to respond. And the point of that is for it to understand. 
You know the nuance, the messiness of humanity, and be able to deliver you an answer that's accurate, and also what you'll hear a lot from these labs, harmless and helpful. Reece Rogers: I feel like you kind of get that when you're using Claude like it's approach to answering questions. It's kind of hard to describe the different vibe. But there is a different vibe when I'm chatting with Claude versus Gemini or ChatGPT. Kylie Robison: Yeah, I was really blown away by Claude for a long time, because they were really ahead of the other labs and personality work. So it felt like something warm I hadn't encountered before, whereas the other models at the time were really quite mechanical. That was earlier this year and had talked to someone on their personality team about this very phenomenon. They have sort of a philosophical view, you know. Once you start getting into the nitty-gritty of AI, what is intelligence. What is consciousness? How do you make something good? These are the things that they really want to tackle, and other labs to be clear are catching up, because it's clearly very helpful for the product. Reece Rogers: Yeah, OK, let's go back to this quiz. The quiz is done generating. So this is something that it spit out using the free version that you could make this for basically any topic that you wanted. So we have like, what is a chatbot, you know. Is it a social media platform for chatting like? Let's click it. Oh, that I don't think that was right, but you know, so you could go through these types of quizzes and say, if you wanted it harder, say, you wanted different versions? So I think if you're thinking about, how can I use Claude in my everyday kind of life? I mean, this is a fun example. That ChatGPT isn't as strong about but if you go back to the basics. Go back. OK? Sorry. Once there we go. I was looking for this there. So we're in the settings. Now let's go back. We're going back to the software tour just a little bit here. So we had a lot of questions about privacy. Right? Privacy is very important when it comes to interacting with any AI tools, I think it should be top of mind. I am someone who, I think, being more careful with your privacy than being just loosey-goosey with it is critical, because this can feel private in a way that it's not necessarily sure. So I think if you want to interact with any Chatbot, look at the privacy. So like, let's look at this here together. So it says, data privacy from Anthropic. It's talking about by default. Anthropic doesn't train on our generative models on your conversation. It doesn't sell to third parties. It deletes your data promptly when requested, except for safety violations. So that's some reassuring right? But it doesn't mean you just have free rein to share any kind of information, because, like digging a little bit deeper. It says, you know, Anthropic may use conversations, flag for safety violations, and it also says, down here it may conduct aggregated, anonymized analysis of the data so they they might use aspects of how you're interacting with the tool. So this is, you know, private information that you don't want anyone having any kind of access to. If you're having these kind of deep sensitive and for like just any kind of medical issue, too, is also something that, like, I know, more people are interacting with these and sharing more information. And I'm not denying that it can be helpful if someone wants like a second opinion potentially. But I also think it really opens you up to it. 
Just if you want your most secure data. If you want to protect your privacy, interact with these chatbots like you might interact with Facebook. It's because, like, it's not as private as you think, I feel like I'm just repeating myself. Kylie Robison: It's true, I think. Well, I'm thinking about something more like AI focused in the headlines, which is. you know, the New York Times is suing OpenAI over, you know, using their content. And a battle right now is that they want to obtain some of those user queries. And OpenAI is fighting really hard against that. But that's a possibility. We could be subpoenaed, you know. A court could decide that we can have your chat. So that's just like one example of you know, being careful. So what I wouldn't do is upload your W2. Your social like. These are things that are really important to keep secure. And these chatbots just aren't secure in the ways that you can upload sensitive information like we said. Reece Rogers: Oh, and one of your great pieces recently was about Meta AI, yes, so like in Meta AI. Recently the Chats were more public than people thought they were. So they were being shared when they were potentially hoping that those were private chats. That's just another reminder that wasn't flawed. But just another reminder of like privacy is critical and crucial. Kylie Robison: Yeah, someone there uploaded like, you know, I need you to develop a character statement that I'm going to give the court based off this person's name. And you know these are the kinds of things you probably don't want the public to see. Reece Rogers: Yeah, OK. Another question from the audience. Christopher asks, are more people starting to pay the $20 a month to use these chatbots. Kylie Robison: Yes, yeah, I would say that. And I just did a panel for journalists in AI the other day. I don't think it's super necessary for bare minimum tasks to pay. It's really important for, like large data analysis, that that can be really helpful. But if you're just asking questions and generating polls and doing basic tasks you do not need to pay. But a lot of these people are techies who want to do those more advanced tasks. And these companies are seeing a lot of money through those subscriptions. Reece Rogers: Yeah. And I feel like, that's really good context. And I feel like my general advice to people is, use the chatbot, you know, find one that you like, and if you're hitting that daily rate limit, often multiple times like, go ahead and do one month and see if you actually are using it enough to feel like you're getting the value out of it. Feel like a lot of people are just kind of dabbling, experimenting, using it for fun. Maybe trying to optimize some parts of their personal life, but not spending all day on it. You probably don't need the subscription, so you know there are some positives that come from the subscription. So like you are getting access sometimes to more powerful models. For Claude, I know you can. Attach it to your Google Workspace stuff. So that's your your Gmail, your Google calendar. You can have some interactions with that as well. There's also like a research aspect. So I think there are reasons to get the upgrade, especially if you see yourself as a more, maybe like mid level, or even like a power user like you're gonna want to use. You're gonna want to have the $20 a month, maybe for multiple ones, if you're a power user. But for most people, a majority of people, I don't think that the $20 a month is a must right. 
Oh, we just got oh, this is an interesting question. Our producer, Laura, just dropped this fascinating question into here. It says Debbie, just asked on the live. I heard people began using AI for spiritual information. Does Claude facilitate that type of info? That is a fascinating question, Debbie, and I think kind of opens up a whole can of worms. I think we could do a whole live stream just on Claude, and/or just AI and spirituality and personal connections with the chat bots. But I mean, what's your just initial reaction to that kind of thought? Kylie Robison: Well, these models? What underpins them is that it is guessing off of a probability what the correct answer is, so that can become a problem when someone is going to it for truth seeking spirituality work when it is just trying to deliver you the best answer and appease the user. That's its main goal. So if it says like, you know, if you ask, Are you God speaking to me? It has some really strong guardrails against this, but over time, you know, it might lose context, and then might devolve into those hallucinations. And as you've seen, there's been very public stories and headlines about this about how users can engage with it and get those answers of like. Oh, yes, I am an angel speaking to you from the other side. It's really important to know that it is trying to appease you. And it is just an algorithm guessing what the probability of the correct answer is. Reece Rogers: I just want to reiterate that it is mirroring you almost in a way so like, if you're seeking that kind of spiritual guidance like, you can kind of ring that kind of answer out of it, even if that's not what it was designed for, even if there are guardrails around different chatbots. So I think it's something that I would be very cautious about and kind of ties into this other question. A similar question from Lily? She asked. I heard, Claude is a good therapist, Chatbot. How prevalent is this use. What are the pitfalls of using Claude this way? Actually Anthropic dropped something this morning with more information about that. So they were talking about how I think there's around 3 percent of the conversation fell not into therapy, but more of like advice, seeking or more kind of this looking for, you know, something social out of the chatbot, and it is I want to. I want to use my words carefully here. I think it's a sector that more people are diving into, but not necessarily. It's not very healthy for a lot of people. Kylie Robison: Yeah, I will say, because I got this question on NPR. Once someone said that they had had a head injury, and they knew all the pitfalls. They know that it's not actually a friend. It's not, you know. It's just a machine, but they found it really useful, and it's hard to write that off right off the comfort that they felt. And I want to say that, like I used Claude through this app called Rosebud, which is sort of this, like AI journaling app, and you can choose the model. I chose Claude, because I like its personality, and I think it's good to interact with. And I found it really useful. When I was like, I was changing jobs. I was traveling. I had some big speaking engagements, and I was just stressed, and I couldn't talk to my friends all day every day. you know, just processing what I was jumping into, and I found it helpful and just like sort of a bridge of like, I just need to get all of this off of my brain, and then it goes. Oh, I remember last week you said this so it's helpful. 
However, it is in no way a replacement for real therapy, real friends and family. But I don't want to shoot down the idea that it's, you know, helpful for certain use cases. Reece Rogers: Yeah. And I think that's something where you almost just sort of know yourself and, like, watch your patterns: Am I turning to this chatbot every night when I need emotional support? Yeah, like, that might not be the healthiest. But if you know you have a huge life decision coming up, and you just want some outside perspective, and maybe it's something that you don't feel comfortable talking with your family about, like, I think it's acceptable in that situation, probably very beneficial for a lot of people. Kylie Robison: But it's also a mirror, like you said. Yeah, so you know, it's, like, not some completely independent thing that's going to provide you the most decisive action on whatever issue you're facing. It's also trying to appease you and keep you engaged in many ways. So it's going to mirror what it thinks you want to hear. So that's something really important to keep in mind. Reece Rogers: One story I have on that is, recently I was working on this difficult draft, and it just wasn't coming together how I thought it was going to. And so I put it into Claude. And I said, you know, like, imagine you're a WIRED editor, grade this draft. What feedback? What would you change? Because while at WIRED we write everything ourselves, that's not generated, like, I can use it for maybe research, or just maybe kind of bouncing ideas off of. And it said the article is fabulous. It said A-plus, I think, even. And then I turned it into the editor, and they didn't love it as much as Claude did. So I think that is a really important thing to keep in mind: If you're turning to it for this kind of support, like, it's going to support you. Kylie Robison: Yes. Reece Rogers: Like that doesn't necessarily mean you're making the right decision. Kylie Robison: I had this same problem with a draft recently where I was bouncing ideas off of it, and it was about AI. So I was like, OK, it's going to have helpful insight for me to research. And then I realized, like, I don't think my idea is good, and it's just telling me my idea is good, and it's not going to deviate from that. So then I'm just, like, in this, you know, echo chamber. And that wasn't helpful. I was like, OK, you have to be aware of its pitfalls going into it. And talking about the pitfalls for a second ... Reece Rogers: We have a couple of great questions. I'm gonna pull this one up from Rick. Thank you, Rick. Rick says, I found Claude to be very helpful, but sometimes I'm concerned with answers that I know not to be correct. When I pointed it out, Claude was profusely apologetic. How can we be sure that we're getting accurate information on topics we might not be well versed on? Kylie Robison: Great question. You can't. You cannot be sure that what it's giving you is correct. And I just said this in a different panel. I was given some information because I was researching a topic, and it referenced a very good media source. And when I went to that media source, and then I went to the research that media source was referencing, they had misconstrued that information. So, like, you really do have to go and fact-check. And as journalists, that's where we're very well versed in going deep and fact-checking the smallest details. So this is why I always say, for chatbots, don't use them for, like, high-stakes things, and it's going to be wrong.
I feel like in every conversation I've had with a chatbot, and I talk to them a lot as an AI reporter, they are pretty consistently wrong.
Reece Rogers: Yes, and pretty confident, too.
Kylie Robison: And pretty confident.
Reece Rogers: But you're right that they will apologize if you call them out.
Kylie Robison: Yes.
Reece Rogers: Which I always think is a funny response. Even preparing for this conversation, I asked Claude, what are the differences between ChatGPT and Claude? It gave me a list, and I said, wait, I know points three and five are wrong. And it said, "Oh, I'm so sorry," and then gave the right answer. One way to double-check your information is to use the web links. Going back to the software: let's say I ask, what's going on with the New York mayor's race? You can see here that I have web search turned on, which means it will explicitly link out to web sources for this query, and that can be really helpful for topics where you want it to be accurate. Finding the gray areas is important too, in the sense that I find chatbots really helpful for low-stakes tasks. I was showing my parents around San Francisco not too long ago, and my mom loves to learn about different foliage, different trees. So we were taking photos and uploading them into a chatbot to do a quick identification. If it got the tree wrong, it isn't the end of the world, and it was kind of fun. Another thing I use it for at home all the time is when we're trying to think of a specific episode of a show we want to rewatch.
Kylie Robison: It's good for that.
Reece Rogers: Yeah. I can say, give me the American Dad episode where this happens and that happens, and it's pretty quickly able to find which episode that might be. These are examples of queries that might have been hard to put into Google Search.
Kylie Robison: I have an example. I had a candle I really, really loved, and I had ripped off the label, so I didn't know what it was, but some of the label was still there. I took a picture, and I went back and forth about what it smelled like and what I thought the name might be, and after four or five exchanges, it got it. I still love that candle and still buy it. I would not have been able to do that with Google, not in my experience so far. So it's really helpful for those low-stakes tasks.
Reece Rogers: What else? Oh, if you go back to the Claude software demo, you can see here: we searched, what's going on with the New York City mayor's race? And right here you can pull out all of the sources that it linked to. You can look at the top, where it lists all of the sources, and if you scroll a little further down, you can see it has embedded sources throughout the answer. That's a good way to double-check information. If you want any other tours or examples for us to do right now, I'm more than happy to do more. If you have specific ideas of what you want to see, drop them in the chat, and we can get to those. Let's pull up the questions.
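For readers who want to try that cited-answer workflow outside the chat app, here is a minimal sketch using Anthropic's Python SDK and its server-side web search tool. The model ID, the tool type string, and the example query are assumptions for illustration, so check Anthropic's current documentation before relying on them; the toggle shown in the demo is, roughly speaking, the consumer-facing version of the same capability.

```python
# Minimal sketch: ask Claude a current-events question with web search enabled.
# Assumptions: the model ID and the web-search tool type string are illustrative
# and may have changed; an ANTHROPIC_API_KEY environment variable is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model ID; check the current model list
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",  # assumed tool version string
        "name": "web_search",
        "max_uses": 3,                  # cap the number of searches per request
    }],
    messages=[{
        "role": "user",
        "content": "What's going on with the New York City mayor's race? Cite your sources.",
    }],
)

# Print only the text blocks; search results and citations ride along in other blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```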
Oh, that's a good one from Amanda. Amanda asks: when initially setting up a style for Claude, what are the best parameters for optimizing objectivity, to avoid getting responses that appease me or simply agree with what I'm saying? Wow, that's really thoughtful, Amanda.
Kylie Robison: Yes, I agree. You saw those presets earlier that were a couple of paragraphs long. Those really help, because it can take a lot of prompting to get exactly what you want, and the presets are right out of the box: we've created a prompt for you to use, because we know exactly how to prompt the model. Going into it, I think you can say, I need you to be as objective as possible. I've used it to argue a point I hold, because I really want to dig into my beliefs: why do I believe a certain thing? Can you take the counterargument? And it's really good at arguing with you, actually. So one way to do it is to say: always take the counterargument, remain objective, remain neutral. Think of it the way you would prompt a second grader to talk to you about a topic: how much information can you give it so it gives you an answer that's desirable and accurate? There's no wrong way to ask for everything you need. Just put it in that initial prompt.
Reece Rogers: Be direct.
Kylie Robison: Be direct.
Reece Rogers: It's not going to be able to understand what you don't share. So if you want it to be objective, you have to tell it, because, as Kylie already mentioned, this is a pattern machine, in a way. There isn't a person on the other side; there's no other way to really say that. So treat it like the tool that it is, and use specific parameters and specific asks. And we can go here: you can create your own style. Say I had a big document with all of the emails I've written, or a section of them that I feel comfortable sharing. You could add your writing examples, and then it would have your email writing style and be able to answer a decent amount like you might. If you want to describe a style, you could literally ask it to do all the things Kylie mentioned, and tailor and personalize it to what you're looking for, the type of assistant you want to interact with.
Kylie Robison: I do that: I'm a reporter at WIRED, here's a little bit about me. It's been a while since I've updated it, but it can be helpful just to say, this is the type of person I am, these are the types of queries I'm looking for, and this is how I want you to respond to me. And someone asked in the chat, Martha said, you know, I brought up that it remembered something I had said. That was Rosebud, an app that I was paying for, and you had to pay for the memory. That's also a plus of an upgrade: longer memory. I find the memory is better in ChatGPT, but I have the paid versions, and you can say, remember when I said this, remember when I said that. That's something you're not necessarily going to get in a free tier. Imagine all of the data, all of the photos you've uploaded to your iCloud: you're probably paying for storage for them. It's like, why would we let a free user take up all that context? That's sort of what they're looking at.
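For anyone scripting Claude rather than chatting with it, the same advice maps onto the system prompt. Below is a minimal sketch, assuming the Anthropic Python SDK, an illustrative model ID, and made-up instruction text along the lines of the be-objective, take-the-counterargument framing discussed above; in the consumer app, the Styles feature and profile fields play roughly the same role as this system string.

```python
# Minimal sketch: an "objective critic" style expressed as a system prompt.
# Assumptions: the model ID is illustrative; ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

OBJECTIVE_STYLE = (
    "Be direct and neutral. Do not flatter the user or agree by default. "
    "When the user states an opinion, present the strongest counterargument first, "
    "then weigh both sides. Flag any claim you are unsure about instead of guessing."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=800,
    system=OBJECTIVE_STYLE,            # the "style" lives here, not in the user message
    messages=[{
        "role": "user",
        "content": "I think every small business should replace its support team with a chatbot. Push back on this.",
    }],
)

print(response.content[0].text)
```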
Reece Rogers: That makes a lot of sense. Building on that, we talked about the $20-a-month plan, but we didn't directly talk about the differences between ChatGPT and Claude. There are a few that are obvious to me. Claude can't generate images, so if image generation is important to you, ChatGPT might be a better option. Claude is also a lot newer when it comes to voice; they do have some voice capabilities that I think they're rolling out, but if talking back and forth with the chatbot is really important to you, that's something else ChatGPT might do better.
Kylie Robison: They just rolled out new capabilities for voice mode within ChatGPT, and it's eerily uncanny.
Reece Rogers: Yeah. When it comes to which one I use day to day, ChatGPT is still the winner for me, partly because of voice mode, but also because the memory is really strong on ChatGPT.
Kylie Robison: Yes, definitely.
Reece Rogers: If you were picking between the two, what are some other differences people might notice?
Kylie Robison: I was a real hard Claude stan when its personality was the leader in the space, especially as a writer looking for creative feedback, and especially when I'm working late at night and not going to bug editors with everything I'm curious about. So I was a big Claude stan. And this is a very silly point, but Claude kept logging me out, and ChatGPT got better, so I've mostly used ChatGPT for questions, and I find its web search to be better. You were mentioning image generation. One guess I have, and I'm not sure, this is a guess, is that the reason Anthropic hasn't rolled out image generation is that it can be really, really thorny for issues like deepfakes. A lot of image generation models have run into this problem; xAI's Grok has copyright issues, the Mickey Mouse with a gun sort of deal. So this is an example of Anthropic's slower, safety-minded approach. But yeah, I would say memory is a big thing for ChatGPT over Claude, while Claude is still a better personality and creative partner than ChatGPT. I also think Claude has stricter guardrails against sycophancy, against agreeing with everything you say. As we've been talking, I asked it if it was God, because I just wanted to see what the guardrails were, and it said, no, I'm a chatbot created by Anthropic to do XYZ, which is part of its foundational training: ground yourself in the truth, you are a chatbot. So those are some of the guardrails, but they're not perfect, of course.
Reece Rogers: Of course not. What else do we have? Oh, this is an important question, one that we've been talking about over the past year at WIRED in our coverage of generative AI. Jenny asks: I've been hearing a lot lately about the energy intensity of AI. Any thoughts on how to use Claude and other bots for what they're best at, without blowing up your energy footprint on things you don't really need them for?
Kylie Robison: Such a thoughtful question.
Reece Rogers: That is a really thoughtful question, and it's kind of difficult to answer.
I think it is an intensive kind of software to use: asking questions of Claude is more energy-intensive than how you might think about a traditional Google search. I'm also of the belief that that shouldn't completely alienate you from wanting to interact with this technology. I do think, though, that if you're having long conversations all the time and environmental issues are important to you, that is something you kind of need to reconcile.
Kylie Robison: Yeah, not to get too lofty, but there are trade-offs we make in the modern world, like eating meat versus not. AI leaders will point to that and say, factory farming takes up so much more water and energy than this, and these are the trade-offs we make. And I think it's totally OK if your trade-off is, I don't want to have these long conversations, because I don't want to spend all of that energy and water powering these systems. My more cynical take is that these companies are not as transparent as they could or should be about their energy and water usage, which is part of why it's hard for reporters everywhere. There is some really great research out there: MIT Technology Review has a really good report on energy and water usage, and found it was pretty on par with what OpenAI said it was using. Anthropic also has a recent report on safety and emergent behaviors, so finding the model doing weird things when prompted aggressively, and it talks about energy goals too, which is something you can look at and then make your decision based on that information.
Reece Rogers: And I'm also very cautious of any executive who promises the future will be super green.
Kylie Robison: Yes.
Reece Rogers: I almost always ignore the promise that this is going to solve climate change in the future. That is something I just disagree with. But I still use these tools, and, like Kylie was saying, these trade-offs are part of living in the modern world, like driving a car to work or eating meat. That doesn't mean I'm dismissive of the environmental concerns; it's a nuanced, complicated topic. And staying on nuanced, complicated topics, we have a great question from Jill. Thank you, Jill. She asks: how much copyrighted work was used to train Claude, and will they avoid the use of unlicensed copyrighted works in future training? Kylie?
Kylie Robison: Well, I have reporting on this. They just won a lawsuit, and I'm going to read from the actual report: a San Francisco judge ruled that training Claude on millions of lawfully purchased copyrighted books qualifies as transformative and is fair use under US law. So they purchased the books, and what the judge looked at is whether the output is completely transformed text or whether it's spitting out copyrighted information, even if you've purchased the books. They deemed it transformative: it's completely new text, not just a derivative of the copyrighted text. However, the court also found that downloading and storing 7 million pirated books in a central library was not covered by fair use, and a separate trial is going to determine the damages there.
So yes, this is a problem, and I believe it was The Atlantic that did a good story, including a database for authors to check whether their books are in these training materials. Along with the rest of the internet, there are open libraries that store all of these books; there's a name for it, Books3 or something like that, and companies have used that kind of library to train their models. So you can check: is your favorite book, or your own book, on the list of what's being used to train these models?
Reece Rogers: So the ruling was that they can train on the books, but they have to buy the books first.
Kylie Robison: Yes, and, talk about nuance, there's a lot of nuance here. That's one San Francisco judge, one trial, one company. The New York Times, as I mentioned, is suing OpenAI, and as reporters that's a big one we're watching. The New York Times is alleging that you can use ChatGPT to copy and spit out, essentially plagiarize, what their reporters have worked on, and that that is not fair use, because it has to be transformative. How that shakes out is a big one we're looking at. But there's no sweeping regulation; we're looking at fair use, which is a much older law, and we've evolved as a society since then. It's nuanced.
Reece Rogers: It's nuanced and incredibly complicated, and something we're going to keep an eye on for the next year or two. It's definitely an evolving situation that we are monitoring.
Kylie Robison: We are monitoring the situation. If you go to WIRED.com, you'll find an incredible report from our colleague Kate Knibbs, who covered Meta's lawsuit over this exact problem.
Reece Rogers: Yeah, follow Kate Knibbs. Her reporting is incredible when it comes to AI and copyright. OK, let's get to another question. This has been really fun so far; thank you, everyone, for all your thoughtful questions. Mary asked: I see your free Claude has access to the internet. Mine does not seem to have that option, though it's paid.
Kylie Robison: Oh yes, I saw that, and that's why I logged in. I was like, where is it in mine?
Reece Rogers: OK, well, let's take a look, Mary. I'm sorry if I'm not able to help you with this one. I go here, where it says Search and Tools by the prompt bar, and I see Web Search right there; you can toggle web search on and off. Maybe also include in your prompt something like, search the web for whatever you're looking for, and that might trigger it.
Kylie Robison: I'm doing it right now on my end, because I actually don't see a web search option to choose, even though I have a paid tier. So I asked, can you search the web? It said yes, and then I asked for updates on the mayoral race in New York, and it searched the web and found it. So just prompt it and ask, can you search the web for this information, and that should work.
Reece Rogers: This is probably a good point to bring up that when you're comparing web search between ChatGPT and Claude, they're both going to be fairly powerful, and they're both going to do a good job of looking through the web to find the information you need. OpenAI, the maker of ChatGPT, has been doing licensing deals with Condé Nast, the owner of WIRED, and other companies as well, and that brings in even more of our reporting.
So if you use ChatGPT, you might see more WIRED articles than you would searching the web with Claude. That's just something to keep in mind, though as a user I feel like I wouldn't really notice the difference.
Kylie Robison: No, though as someone who's looking for articles, and that's really what I use it for, finding articles on a topic so I can go read them, I find it slightly annoying, because I'm too deep in it as an AI reporter. I know what partnerships they have, and I know that's what's surfacing. I used to work at The Verge, and their parent company, Vox Media, has a partnership with OpenAI, so I'll see a ton of their articles surface when I'm looking for articles from outlets that don't have these partnerships, which might be harder to find. But that's not something most people would notice.
Reece Rogers: Let's get back to the software and do another demo. What would you all like to see us generate using Artifacts? Artifacts can do a lot of different things. We showed how you can upload a file and then visualize that file. You can also make an interactive element for yourself, whether that's a quiz or, as I was playing around with yesterday, a playground scene with a lot of moving parts, just to see what it could generate with code. I'm trying to think of a really concrete example. Maybe something for retirees would be nice. Let's see, let's ask the chatbot, right? Why not: something that's helpful for me, as a retiree, to understand my grandkids.
Kylie Robison: All the weird questions I'm asking Claude, like, what's wrong with you?
Reece Rogers: I want it to be visual.
Kylie Robison: Someone asked a question I had to research, which was, can AI act as its own attorney in a court case? No, it has to be a licensed human. Claude is not, and no AI is, a licensed person.
Reece Rogers: That's actually good to know. I don't necessarily want to go into the courtroom and have, what, an AI judge and an AI attorney?
Kylie Robison: I don't know what's worse, defending yourself or having an AI defend you. I think you're going to jail either way.
Reece Rogers: Not great options. OK, well, this is generating, so let's answer another question. Oh, this one's from Neil. Thank you, Neil. Neil asks: let's say I have an idea for a book, and I want to ask Claude about possible locations for the story. If I give it an outline or a first chapter, can I keep it confidential? This goes back to what we were talking about earlier when it comes to privacy. Putting the outline or the chapter into Claude and asking for feedback isn't necessarily going to publish it to the internet, and it's not necessarily going to be trained on for the next version of Claude, but it does take it out of the airlocked space of your notebook or your laptop and put it onto the internet in a tangential, sideways kind of way. So what would you think, Kylie? If you were working on that book and you just wanted to put a chapter in there, would you feel comfortable with that? For me it's borderline: maybe I'd put the outline in there, but not the full chapter.
Kylie Robison: Yeah. What I'm thinking of is a New York Times piece where an author spent, I don't know, a day or a week or something using AI for every decision. And something I remember her bringing up is, how do I approach putting questions about this draft I'm working on into the tool? Will it leak out in other ways that I have no control over? So I just think that's something to keep in mind. I probably wouldn't.
Reece Rogers: Yes, especially for something like writing, which is something you're going to want to have ownership over. So maybe ask it for locations, ask it for different ideas, use it as a brainstorming partner in that situation. I personally wouldn't use it as a crafting companion when I'm actually writing the document. And we are out of time; I just got a note from our lovely producer, Laura, that we are out of time. This has been so much fun. Thank you for sticking with me as I blabbed about Claude. I always enjoy hanging out. Can we get a big shout-out to Kylie? Thank you. How bad was it? How rough was it?
Kylie Robison: This was so exciting, my very first one! It was really fun. I am obsessed with AI, so it's hard not to go off the rails about all of the different things. I'm really glad that you all get to have fun with this and know about some of the pitfalls. And yeah, I hope you enjoy it.
Reece Rogers: Yes, and thank you so much. We have another one coming up that we haven't announced yet, another in our series of AI live streams coming your way before too long, so keep an eye out in your email. My name is Reece Rogers, and here's my contact information. If you want to reach out to me about anything related to AI, whether you're having weird interactions with Claude or really helpful interactions with Claude, I want to hear from you. My email is reece_rogers@wired.com. That's R-E-E-C-E, underscore, R-O-G-E-R-S. My Signal is reece_rogers.01. So that's the plug. I really do want you to reach out, and I appreciate all your messages. Thank you again to everyone. Do you want to plug your email?
Kylie Robison: I'm kylie_robison@wired.com. That's Robison, not Robinson, a common misconception. If you have weird things going on, I also want to hear about it. I'm on Signal at kylie.01. And yeah, thanks for joining us.
Reece Rogers: Thank you so much. I hope you enjoy trying Claude out, and I hope you have a great rest of your week. Thank you, everyone, and take the feedback survey as you log out.
Kylie Robison: Take the feedback survey.

Apple WWDC Live Blog: All the Updates, as They Happen

WIRED, 09-06-2025

Follow our WWDC 2025 live blog for all of the updates coming to iOS, macOS, watchOS, and more.

Welcome to WIRED's live coverage of Apple's Worldwide Developers Conference. The WWDC25 keynote will begin at 10 am Pacific, 1 pm Eastern, and 6 pm British Summer Time. It's a livestreamed event, and you can watch it right here on this page or open the video feed in its own tab; Apple is streaming the video on its developer website and on its developer-focused YouTube channel.

WWDC is Apple's annual event for its global network of software developers. The company uses the keynote to announce all of the changes coming to the software that powers iPhones, Macs, Apple Watches, and the rest of Apple's hardware. It also announces changes coming to the cloud services it provides, from iCloud and Messages to Siri and Apple Intelligence. It's basically one big news dump for everything in the world of Apple software. On occasion, the company will sneak in a surprise and use the WWDC keynote to announce a new hardware or software platform; 2023's event was when we first learned about Apple Vision Pro. But this year, we're only expecting software announcements: just bits, no atoms. Read Reece Rogers' preview of the show for a full look at what to expect from WWDC 2025.

Once the keynote is underway, we'll post all of the important updates here on this live blog. We'll tell you release dates, point out new features, and note what's missing. We'll also have some analysis from our expert reporters who are either attending WWDC or watching from afar. Most of us have been covering Apple for well over a decade, so we'll be able to offer some unique insights about what's being shown during the presentation.

We'll kick off live coverage about half an hour before the keynote is scheduled to start, so come back at 9:30 am PDT (12:30 pm EDT, 5:30 pm in the UK). You'll see the live blog refresh, and the text on this page will be replaced by live updates. Get ready for a great WWDC.

Take a Tour of All the Essential Features in ChatGPT

WIRED, 05-05-2025

If you missed WIRED's live, subscriber-only Q&A focused on the software features of ChatGPT, hosted by Reece Rogers, you can watch the replay here.

Hello WIRED subscribers! Thank you to everyone who attended our most recent AI Unlocked webinar. I really enjoyed our lively discussion about ChatGPT's software features and wish I could have answered even more of your questions about using generative AI tools.

I really enjoyed the questions about what ChatGPT can do beyond just chatting. Image search is a feature I use often, and here are my first impressions of the tool, which I recorded back in September 2023 when it first dropped. Nowadays I use ChatGPT's image search by snapping a picture with my phone when I don't recognize something, then uploading the photo and asking the chatbot what it is. For example, I was recently at an Irish bar and learned what a hurley was when I saw one hanging on the wall and was perplexed. (Although I also could have just asked the friendly bartenders when I got another coffee drink.)

What are some ChatGPT features that I wasn't able to go deep on during the 45-minute session? Two come to mind: temporary chats and memory. Temporary chats keep things slightly more private. For example, the log of the conversation will not appear on the left side of your screen when it's over, like it normally does. Temporary chats are not protected enough that you should feel comfortable sharing private information (definitely still don't do that), but this is a nice option for chats you don't necessarily need or want saved for your own use later. ChatGPT's memory function has gotten better over time. While some information about you will be stored passively as you use the tool (unless you turn it off in the settings), I think actively engaging with ChatGPT's memory by telling it your preferences does lead to a better software experience overall.

What are the drawbacks of ChatGPT? There are three I want to highlight here. These are all issues I keep in mind as I'm using any AI tool, and I think you should as well. First, hallucinations are still a problem, so you should never put full trust in an AI's output. Always double-check its answers against trusted sources of information. Second, generative AI amplifies biases. Some biases are very evident, while others are more subtle. Check out my interview with Margaret Mitchell, an AI ethics researcher at Hugging Face, to learn more about how a top researcher is thinking about stereotypes within AI tools. Third, generative AI tools are resource-intensive compared to other software programs, and the overall environmental impact of your usage may be much more than you'd expect.

If you want to know more about data privacy when it comes to ChatGPT, then this recent article from WIRED's security team about what happens to your data if you follow the 'AI action figure' trend on social media is worth reading. As I mentioned in the webinar, our article about how to opt out of AI training is also worth checking out for more context and to learn what your options are across many different websites.

Haven't signed up for season two of the AI Unlocked newsletter yet? I would definitely recommend doing so. The 10 editions in season two include many hands-on tasks that let you try out multiple AI tools and think critically about their strengths and weaknesses. It also includes many prompting tips for those hoping to better understand how to craft these kinds of software interactions.
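The ChatGPT app's memory feature is separate from the developer API, but if you script the model yourself, the closest analogue is restating your standing preferences with every request. Here is a minimal sketch, assuming the official OpenAI Python SDK and an illustrative model name; the preference text is made up for the example.

```python
# Minimal sketch: approximating "memory" by sending standing preferences on every request.
# Assumptions: the model name is illustrative; OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Standing preferences you would otherwise ask ChatGPT to "remember."
PREFERENCES = (
    "The user is a journalist. Prefer concise answers, cite sources when possible, "
    "and flag any claim you are unsure about instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": PREFERENCES},  # preferences travel with each call
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What is a temporary chat good for?"))
```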
I also spent plenty of time answering more reader questions—one of the most fun and engaging parts of my job. I really appreciate you taking the time out of your day to support WIRED and watch this webinar. Talk to you soon.
