
Tell us your story: How well are you living in retirement?
A surge of Americans is reaching "Peak 65," with more than 4.1 million people turning 65 each year from 2024 through 2027, according to the Alliance for Lifetime Income. Factors like tariffs and inflation may affect those retirees and aspiring retirees differently.
If you've reached that life stage, are you living the retirement that you had envisioned? If so, how did you do it?
Are there ways in which you could be doing better financially? If so, what could be better, and how do you want it to change?
If you would be interested in sharing your story for an article for CNBC.com, please email lorie.konish@nbcuni.com.

Related Articles


Axios
Tariffs may push California car insurance costs higher by year's end
Car insurance rates in California could rise more sharply with prolonged tariffs, a new projection finds.
Why it matters: The analysis shows one more way that Americans pay for higher tariffs.
Driving the news: Car insurance rates in California remained "relatively flat" in the first half of 2025, but the average annual cost of full coverage in California could rise by nearly 7.4% between June and December if tariffs stay in effect, per a new report from insurance-comparison platform Insurify. That's compared to a roughly 4.4% increase without tariffs. The figures are based on the tariff picture as of Aug. 1, reflecting the Trump administration's recently lowered rates on cars and auto parts from Japan, South Korea and Europe.
Between the lines: Tariffs affect insurance rates by increasing the costs of imported parts needed for repairs. Inflation, accident frequency and claims due to extreme weather also impact rates, among other factors.
Caveat: Premiums may fluctuate as the fast-changing tariff landscape continues shifting, or if tariffs prove "less burdensome than expected," Insurify notes. Claims volume could also fall, and insurance regulators could fight price hikes.
How it works: Insurify's projections are based on over 97 million rates from the insurance companies it works with, and reflect median costs for drivers age 20-70 with clean records and at least average credit.


Los Angeles Times
Not running for president (yet), Ro Khanna visits the fire zone
U.S. Rep. Ro Khanna came to Altadena last week to hear about fire recovery challenges. He got an earful: about balky insurance companies, charred businesses still not cleared away and how President Trump's tariffs on Canada will drive up the price of lumber for rebuilding. Homeowner Julian Saucedo pointed the Democratic congressman from the Silicon Valley to one bright spot: the speedy home debris removal by the U.S. Army Corps of Engineers. 'They followed their orders. They were champions for Altadena,' said Saucedo, a retired aerospace engineer. Saucedo stepped in close and added in a near-whisper: 'You can be the next champion ... if you can deliver on any of the things I mentioned,' like relocation assistance. Replied Khanna: 'We're gonna fight hard.' Though he is just one of 435 members of the House of Representatives, the California congressman is having a moment, or series of them, as the 48-year-old lawmaker is mentioned as a possible 2028 presidential candidate. As Khanna listened to homeowners, business people, clergy and nonprofit leaders in Altadena, I couldn't help but wonder how he would be received in Peosta (Iowa) and Goose Creek (South Carolina) — states that will be among the first to vote in the next presidential primary season. Khanna's team coordinated the visit with the Department of Angels, a nonprofit supporting community solutions in fire recovery areas and assuring the government does not lose focus on the huge rebuilding task. Khanna arrived in the San Gabriel Valley fresh off a bipartisan congressional visit with Mexican President Claudia Sheinbaum and a tour of South Carolina, where he talked about Trump's slashing of the federal workforce and cuts that the Congressional Budget Office estimated would put at risk healthcare for more than 11 million Americans. Khanna told an audience in Goose Creek that he was focused on a 'new economic patriotism' of job creation, lowering costs, and ending 'dumb wars,' WCBD television reported. 
He also gained substantial cable news air time with a petition, supported by even some Republicans, to force the Trump administration to release more information about the Jeffrey Epstein case. 'Are you protecting the rich and the powerful, or are you protecting our children?' Khanna told the Post and Courier newspaper. In the past, Khanna has clearly relished being asked about a possible future presidential run. But during his Altadena visit, the topic never came up, and only one reporter came along. Unlike some politicians on a 'listening tour,' Khanna mostly listened when Altadenans talked. He praised business people as the 'lifeblood' of a community, embraced homeowners who wept as they told about their losses and promised to urge all 43 California Democrats in the House to keep pushing for recovery aid — all the stuff you might expect from a presidential candidate. Among the takeaways for Khanna: Try to help local businesses get government debris removal, just as homeowners did; help secure additional federal funding and see about lowering a Trump tariff that will drive up the price of Canadian lumber needed for reconstruction. Khanna said it would be 'unconscionable' if Trump and Republicans did not move expeditiously to approve the $40 billion in supplemental disaster recovery money requested by Gov. Gavin Newsom. He said that he supports federal relief for North Carolina and Republican-leaning states slammed by Mother Nature. A representative of Sen. Alex Padilla's office said that federal funds continue to flow into the fire zones in Altadena and Pacific Palisades and that 'we continue to have good discussions with Republican colleagues in the Senate about the need for disaster aid.' The White House said in a statement that Democrats should focus on their own failures, including Gov. Gavin 'Newscum's total failure to prepare for the California fires' and L.A. Mayor Karen Bass's failure to do more to speed issuance of rebuilding permits. 
Altadenans told Khanna they appreciated his attention. With homes and lives to rebuild, most weren't ready to talk about a presidential race that is more than two years away. 'I was very impressed. He seems like a good and decent person,' said one woman whose home burned down. She asked not to be named, explaining, 'I'm just too busy, trying to get it together over here.' Today's great photo is from Times photographer Juliana Yamada, who captured a free plant stand that aims to make Altadena a little greener in the aftermath of the Eaton Fire.
Jim Rainey, staff reporter
Diamy Wang, homepage intern
Izzy Nunes, audience intern
Kevinisha Walker, multiplatform editor
Andrew Campa, Sunday writer
Karim Doumar, head of newsletters
How can we make this newsletter more useful? Send comments to essentialcalifornia@


CNBC
First on CNBC: Transcript: OpenAI CEO Sam Altman Speaks with CNBC's 'Squawk Box' Today
WHEN: Today, Friday, August 8, 2025
WHERE: CNBC's "Squawk Box"
Following is the unofficial transcript of a CNBC interview with OpenAI CEO Sam Altman on CNBC's "Squawk Box" (M-F, 6AM-9AM ET) today, Friday, August 8. Following are links to video on and All references must be sourced to CNBC.
ANDREW ROSS SORKIN: Welcome back to "Squawk Box." OpenAI launching its latest new model, GPT-5, offering faster performance, fewer hallucinations, improved coding capabilities, and customizable personalities. I've been playing with it for the last 24 hours. Joining us right now is the man who co-founded the company, CEO, Sam Altman. Sam, it's great to see you, sir.
SAM ALTMAN: Good to see you too. Thanks for having me.
SORKIN: We've been trying to build, by the way, an app all morning, I'm in GitHub right now, trying to build Joe Kernen an app unsuccessfully.
ALTMAN: How's it going?
SORKIN: Well, I think you guys have done the right thing so far. I mean, I think that the code looks good. The problem is I can't get it to actually effectively play for me, if you will. I got to do it in GitHub. I was actually trying to get OpenAI to actually build me a way to play it effectively in the browser, but I'm not there yet. So, I still got work to do.
ALTMAN: Well, we'll help you with that. I can do that.
SORKIN: But help us understand the distinction between where you think ChatGPT-5 is today versus where we were just even 48 hours ago.
ALTMAN: Yeah so, you said a bunch of things. The model is smarter, it's faster, it's more intuitive, it's more useful in a bunch of ways. One of the biggest things, though, is it's an integrated single experience. We have built a system that can answer easy questions quickly. It can think for a long time and answer hard questions. And it's -- you know, there's just one thing called GPT-5 now instead of the long list of models we used to have. It's much better for businesses and enterprises.
The coding in particular is something that people have been really waiting for a great model for. So, we're seeing super strong enterprise adoption of this and, yes.
SORKIN: Well, so let me ask you, some people have called it an Anthropic killer. Because one of the things that Anthropic, at least historically has done well, is the coding piece of this. How much of GPT-5 was focused on the coding side of things?
ALTMAN: We did really want to improve on coding. That was, you know, an area that we heard consistently from consumers, from users, businesses, enterprise, everything that we just weren't good enough. So, we did put in a lot of effort there, but it's improved at lots of things.
SORKIN: The other piece of it is healthcare. Lots of people ask it questions about their own healthcare, including myself. What's the difference between before and after? And I'm curious how much stuff do you put into the system? Because sometimes I think to myself I'll upload my, you know, reports from doctors and blood and this and that, and then I think, oh, I don't know if I should be doing that. What do you think?
ALTMAN: So, healthcare is maybe the area where there's the strongest improvement of any category. We have seen healthcare become a huge issue. People use it as you said, people are putting in all their reports, everything they can, and it's become really important to people as a sort of, you know, a way to get help with their entire healthcare journey. And I think people feel a very, very big difference there. Healthcare is a huge fraction of the usage of ChatGPT. That's one of the largest categories. So, I put everything in, you know? I know a lot of other people do too. I think getting people better information on their healthcare and sort of the ability to make better decisions, that seems really important. And I'm like thrilled that we were able to make this better and definitely a huge user of it personally.
SORKIN: You know, one of the reasons I asked the question is, and I think you made a mention of it maybe on a podcast recently, was just this idea that, ultimately, the information that we're inputting into systems like ChatGPT, and, by the way, into search, if you were to do this on Google, are all, quote/unquote, discoverable at some point, if somebody were to either sue you or something they could come to a company like yours and say, give me everything. And we're now putting so much more into these systems because we're asking oftentimes relatively intimate questions or providing pretty intimate data.
ALTMAN: Yeah, look, I personally believe that society is likely to and I think should come to the conclusion at some point that we need something -- some concept like we have, you know, medical privilege or legal privilege. And if you are asking for this kind of advice, even though it's like, you know, not a human doctor, but like an AI medical adviser, a lot of those same principles should apply. And that just the fact that you've asked ChatGPT to analyze your medical records does not mean they should then be discoverable by a court, but they should be protected by the same standards. And I think as AI becomes more and more a part of people's lives and the way we get this kind of information, I think I hope whatever that society will decide to extend similar legal protections.
SORKIN: Right. What do you think about safety? And I know that there's a lot of sort of different safety parameters built into this, certain types of questions that you'll answer, but you won't answer fully. Explain what that means.
ALTMAN: So, you know, there's this question of like if you ask ChatGPT to help you build a bomb, you know, should it, should it do that. And there's educational value in that. There's other ways to learn that. You could be trying to learn it for a good reason.
But if you're actually like, I'm going to go try to like, you know, really damage someone, obviously, we don't want ChatGPT to enable you to do, you know, cause like -- harm in the world in a way that you couldn't without it. But a new thing that GPT-5 does is try to be smarter and help you with parts of a question that might be okay and then not help you with parts that are that could be used for significant harm. This is an evolving area, like the calls here are difficult. There's like a lot of principles sound easy in theory, and then there's like a lot of edge cases. But as AI becomes very capable, society is really going to have to wrestle with these questions of we want to bring the tremendous value and enable people to do things they never could before. And then the amount of power and capability that brings people, you know, we don't want to enable really serious misuse.
JOE KERNEN: Hey, Sam, can you share any key architectural changes or training innovations that distinguish GPT-5 from GPT-4 and how they impact its reasoning or grounding capabilities?
SORKIN: And by the way, he got that question, Sam, from ChatGPT.
KERNEN: Is that okay? Can it look at it? Can it look at itself? You know, that's supposedly the brain can't look at it at itself. I did get that. It's a really good question, though, I thought.
SORKIN: I'm watching him, by the way, Sam. I'm literally looking at the—
KERNEN: I was. I asked you—
SORKIN: He literally asked GPT for a question that's good—
KERNEN: Yes for you. Well, it works.
ALTMAN: Well, it's an interesting and good question. So, you know—
KERNEN: Thank you. Thank you.
ALTMAN: Many improvements went into training GPT-5, but one that goes in the direction you touched on is we really started using synthetic data. So, the previous generation of the model is teaching the next generation of the model. And as these models get smarter, the data that they can create can be, you know, really quite interesting and helpful.
And we saw that with this model. We expect to see that more with every future model.
SORKIN: So, this is on -- you know, this gets to the AGI question, which is, at what point do you think where it becomes where the model starts to teach itself?
ALTMAN: That's definitely happening now to some degree. Like definitely part of the reason GPT-5 is better than previous models is those previous models helping to teach this new one.
SORKIN: And so where do you think we are on this AGI path then?
ALTMAN: What do you -- what's your personal definition of AGI, and then I'll answer.
SORKIN: Oh, that's a good question. Well, what is your personal definition of AGI?
ALTMAN: I have many, so I think it's not a super useful term. If -- I think one definition that people like is, you know, when it's doing like a significant amount of the work in the world, although that keeps changing because people do new jobs. Another that some people use is when it can do like really good AI research on its own. I think the point of all of this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things. Maybe another one that people like is when it discovers like an important new mathematical theorem or an important new piece of science. I would expect that we're maybe like two-ish years away from something like that.
SORKIN: Let me ask you a different question, just from a very practical perspective, in terms of productivity and maybe even the way you use it personally. How far are we away from it being able to be truly plugged in to just about every app, whether it's your email or your calendar, or as an agent buying stuff for you and really fully integrated? Obviously, the operator can do certain things, but it's not there yet. I think we would agree. I mean, when do you think that it sort of gets like supercharged?
ALTMAN: So, we announced yesterday that next week we're going to integrate with email and calendar, and it'll be able to look at your calendar and do stuff for you and read your emails. You know, one of the things I've always wanted ChatGPT to do is when I wake up every morning, every email that came in overnight, I'd like a drafted response. And then I can choose whether to send it or not or edit it. But, you know, that would be like -- I think that would just be like a great efficiency gain for me. And I think we're only maybe another -- maybe six months away from being able to do that really well.
SORKIN: In terms of the growth, this company is obviously just on a remarkable -- I mean, seeing the numbers from 500 million to 700 million active users, do you think that the growth is going to continue at that pace? Does it get faster? Does it get slower? And in terms of paid customers, how do you see that shifting?
ALTMAN: We are -- it's only been, you know, 20 hours or whatever since we launched GPT-5, but we are seeing crazy numbers from the level of GPT-5 adoption. It looks like we're going to have a significant acceleration in growth on the business side. The consumer side will keep growing very strong. Like at this point we have a -- you know, I think like a -- we got to keep working hard to earn the right to keep this, but a pretty good lead on the consumer side. On the business side, I think you'll see very strong growth from here. We've prioritized the consumer side first because we think that that is like a very valuable thing to get, but now we'll try to really grow at the same rate on the enterprise side.
KERNEN: Sam, do you -- can you tell me, and do you even know it, what to expect, let's say five years from now, do you know what -- do you -- would you even venture a guess at it? Because I think it starts -- you know, it is kind of a singularity-type thought where we don't even know when machines know so much more than we know.
We don't even know what to expect. You feel like you have a good idea five years from now, ten years from now what AI is going to be?
ALTMAN: No one knows, of course. It's -- what I would expect though and, you know, I say this a lot with humility, I can be totally wrong, is that five or ten years from now, AI is vastly smarter than people, and the rate of technological change in the world is astonishing. And yet the day-to-day way we live our lives and the way that society works, changes surprisingly a little. It's -- I think, in some sense, you could see this already. Like if you could go back five years and, you know, an oracle would tell you, hey, there's going to be like PhD-level intelligence available via like a chatbot. And it can't do long, complicated tasks but if you have like a question that would take, you know, some number of minutes or tens of minutes to answer, it'll just -- it can just tell you and it can like write a whole piece of software for you, you would probably say that's impossible. But if it does happen, the world will change like unrecognizably. And it did happen, and the world hasn't changed that much, or society hasn't changed that much, even there's an incredible technological change. So, my guess on a five to ten-year timeframe is crazy scientific change, crazy technology gain and society changes, of course, but not the impossible amount it sounds like it should.
SORKIN: But do you think -- so, okay, there's a philosophical one and then I want to go back to the business, but the agency that we're all going to have over our own lives, and the reason I mentioned this is this idea that I find myself taking advice from ChatGPT, oftentimes, right? I'll ask it a question. It'll give me an answer. And I could see a time in life where it, not that I become the robot, but it starts to prompt me. And then the question is whether I accept the prompt. And maybe that's the question about agency in terms of what I do, right?
If in fact all of our emails are written -- pre-written for us, yes, hopefully we'll have the agency to decide whether to send it or not, or change it or what have you, but over time, whether that sort of changes our own psychology about our own agency.
ALTMAN: Do you perceive any less agency now even though ChatGPT has given you advice for—
SORKIN: I don't. I also hope that I'm, you know, a mature person who's figured some stuff out. I just wonder if I was younger, maybe I would be more listening to certain answers over others, I don't know. I don't know. I don't -- I actually don't know the answer.
ALTMAN: One more question about that, if it got to the point where you were confident that ChatGPT was giving you the right advice every time, like better than the -- and even if you didn't quite understand, you were just like you came to believe that the advice it was giving you was better than any idea you think of on your own, but you still had chose whether to follow it or not, would that feel like less agency?
SORKIN: I don't know. I mean, I was saying earlier, there are times where, by the way, I look at the answer, I know the answer's not right, and I'll say, no, no, no, that's not what I'm looking for, or no, I don't think you're right. Can you check that again? So, clearly, I have agency today. I just wonder whether it changes the human condition. I mean, this is I think -- that's why I said, it's like a big philosophical question. We probably won't—
ALTMAN: My sense is it won't -- it subjectively won't feel like not having agency. Even if in some sense, you know that the ChatGPT's advice is what you're supposed to do, even if you don't understand it at the time. I do worry about this general case of people that just make their decisions based off of what the advice ChatGPT gives them. Like it does feel to me like something important gets lost in that process, even if the advice is always really good.
And today, it's not really good, just like talking forward to the future. But, you know, like we'll still be asking ChatGPT what to do. We'll still be telling it what our goals are and what we'd like to optimize for. And I think that will feel like all the agency, like if I let myself dream forward to someday far in the future, where, you know, I'm like retiring to an AI CEO for OpenAI, and I'm like writing that final prompt of like here's my advice to you, I would still feel like a lot of agency, I think.
SORKIN: Okay, got to go back to the business. Lots of questions about just the race for talent right now, these huge numbers. We hear, you know, Mark Zuckerberg trying to, you know, pay people a billion dollars to attract them from your place and others. What does that race look like at the moment? How many people, by the way, are there out there right this moment that you think are in this sort of like super league that are almost, you know, indispensable?
ALTMAN: Definitely this is the most intense talent market I have seen in my career. But if you think about the economic value being created by these people and how much we're all spending on compute, you know, maybe the market stays like this. I'm not totally sure what's going to happen. But it is like a crazy intense comp for a very small number of people right now.
SORKIN: But is it like 200 people? And what do they know? What do these 200 people know that nobody else does?
ALTMAN: I mean, the bet, the hope is they know how to like, discover the remaining ideas to get to super intelligence, that there are going to be a handful of algorithmic ideas and, you know, medium-sized handful of people who can figure them out.
SORKIN: How many people you think that really is?
ALTMAN: Like how many people are capable of doing it, or how many people will actually like make the discoveries that matter and like—
SORKIN: How many people are capable? Like what's -- we're talking about like the market size.
How big is the market right this minute for these people?
ALTMAN: I bet it's like much bigger than people think, but, you know, some companies in the space have decided that they're going to go after a few shiny names. But I think there's like many thousands of probably -- many thousands of people that we could find and probably tens of thousands or hundreds of thousands of people in the world that are capable of doing this kind of work.
SORKIN: Your company is reportedly now worth up to $500 billion. Is that accurate in terms of—
ALTMAN: I think I'm not supposed to like comment on currently in flight stuff.
SORKIN: Well, let me ask you this. There is an effort by Robinhood and, by the way, others to try to sell your shares into secondary market. Sometimes people are tokenizing these special purpose vehicles and other things like that. What do you make of what's happening? And do you think or want the public to be able to access these shares like that versus, by the way, there's some people are buying into Microsoft now and buying into SoftBank thinking that's like a public way backdoor into buying shares in your company?
ALTMAN: Look, I totally get why people wish we were just a public company now, and I have very conflicted -- not conflicted like I have like negative feelings about how much growth happens in private markets and how -- you know, not every investor gets access to this phase of growth. Whenever we do go public, if we ever go public, I think there will be tremendous upside left in front of the company, but I get why people would love for us to be public or sooner. And I'm sure people also get the reality of like we're in still a crazy position and it would be very hard for us to be public given just all of the realities of that. But I hate that people get pushed to like various degrees of sketchy ways to try to get exposure to OpenAI.
SORKIN: The company's still losing money and obviously you're building, building, building, Stargate among them.
What is the path to profitability? When do you see the lines crossed, if you will?
ALTMAN: I mean, I think it could happen sooner than I originally thought if we wanted it to, but it seems to me like the right thing to do is to just keep investing in compute and continuing to grow.
SORKIN: But is that—
ALTMAN: To continue to like grow our training compute budget.
SORKIN: Is that a couple years from now then, you think, or I mean, you're saying if you stopped investing—
ALTMAN: No, I think we should be willing to keep growing, to keep investing in training compute for a long time. Like as long as we're on this very steep curve of the models getting better and better, I think the rational thing to do is to just be willing to run at a loss for quite a while and continue to do that, which is also part of why I think it's nice not to be public.
SORKIN: Right. And then finally, one of the paths to getting public is to make a deal with Microsoft. I think as you're trying to -- when I say make a deal, you're going to create this private vehicle effectively out of a not-for-profit. What is the state of play there?
ALTMAN: Still trying -- you know, still like positive stuff, and we have lots of other things that are kind of higher priority at the moment, but like positive progress on it.
SORKIN: You knew I'd asked the question. I think you knew I'd asked the question. You probably saw Elon yesterday. He said, quote, OpenAI will eat Microsoft alive, and then Satya responding to that. What do you think when you read that?
ALTMAN: You know, I don't think about him that much.
SORKIN: What do you think of the larger idea though that he's effectively saying that OpenAI, long-term, will eat Microsoft alive?
ALTMAN: I don't even know what that means. Do you know what that means?
SORKIN: I'm not sure what he means except to say that he thinks in the grand scheme of the partnership, that, ultimately, you'll have more power and more influence and more leverage over them than they'll have over you.
ALTMAN: I thought he was most -- I mean, I -- someone was -- I thought he was just like tweeting all day about how much like OpenAI sucks and our model is bad and, you know, not being a good company and all of that. So, I don't know how you square those two things.
SORKIN: Fair enough. I am told that you have to go, sir. I wish you didn't and that you could stay because I could talk to you, as you know, for—
KERNEN: Needs help making my app—
SORKIN: Oh, and I do need help with the app, but we can help with—
ALTMAN: I'm happy to help you.
SORKIN: Well, maybe I'll send you a note later and I'll share the app with you. But, Sam, we appreciate you joining us this morning very, very much. And I know it's early for you on the West Coast.
ALTMAN: Thank you for having me.
SORKIN: Thanks so much.