OpenAI's CEO Sam Altman says in 10 years' time college graduates will be working ‘some completely new, exciting, super well-paid' job in space
As AI reshapes the workforce, many Gen Z college graduates are finding out the hard way that their degrees don't guarantee a smooth career launch.
Now, even OpenAI CEO Sam Altman—one of Silicon Valley's most prominent leaders driving the AI revolution—is acknowledging the elephant in the room: AI will wipe out some jobs entirely. However, the tech billionaire insists the coming decade could be the most exciting time in history to start a career, especially for anyone who's ever dreamed of working in space.
'In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job,' Altman told video journalist Cleo Abram last week.
Not only will they be raking in sky-high salaries, but Altman says they'll also be 'feeling so bad for you and I that we had to do this really boring, old work and everything is just better.'
Though it's unclear how far space exploration will expand in the coming years—NASA's broad goal is to reach Mars in the 2030s—employment of aerospace engineers is growing faster than the national average for all jobs, according to data from the U.S. Bureau of Labor Statistics. And they bring home an envy-inducing annual paycheck of over $130,000.
How AI will reshape the workplace
Other tech pioneers have AI predictions that are more grounded on Earth—but still alluring to workers. For example, billionaire Microsoft cofounder Bill Gates said earlier this year that the technology might dramatically reduce the length of the workweek thanks to humans no longer being needed 'for most things.'
'What will jobs be like? Should we just work like 2 or 3 days a week?' the tech billionaire told Jimmy Fallon on The Tonight Show earlier this year.
Nvidia CEO Jensen Huang has echoed the sentiment, saying AI has already given his workers 'superhuman' skills—something that will only increase as the technology advances.
'I'm surrounded by superhuman people and super intelligence, from my perspective, because they're the best in the world at what they do. And they do what they do way better than I can do it. And I'm surrounded by thousands of them. Yet it never one day caused me to think, all of a sudden, I'm no longer necessary,' he separately told Cleo Abram on her Huge Conversations podcast series.
While Altman admitted that his crystal ball remains foggy and the true direction of AI is unclear, he said he's envious of Gen Z professionals starting their careers: 'If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history,' he told Abram.
Fortune reached out to OpenAI for comment.
AI will enable one-person, billion-dollar companies
After last week's launch of OpenAI's latest model, GPT-5, Altman declared that everyone now has access to the equivalent of a 'team of Ph.D.-level experts' right in their pocket. As a result, the CEO said, it will be easier than ever for one person to build a business that used to take 'hundreds' of people; all it takes is a great idea and mastery of AI tools.
'It is probably possible now to start a company, that is a one-person company that will go on to be worth more than a billion dollars, and more importantly than that, deliver an amazing product and service to the world, and that is like a crazy thing,' he said.
Billionaire Mark Cuban has gone even further with his prediction, saying that AI could give Elon Musk a run for his money as the world's richest person.
'We haven't seen the best or the craziest of what [AI is] going to be able to do,' Cuban told the High Performance podcast earlier this summer. 'And not only do I think it'll create a trillionaire, but it could be just one dude in the basement. That's how crazy it could be.'
This story was originally featured on Fortune.com
Related Articles

Business Insider
I'm a high-school student who wants to be a coder. I'm betting some of my peers will rely too much on AI.
Joshua Karoly is a 17-year-old high school senior who lives near Sacramento, California, and wants to pursue a career as a software developer. He hopes that as more people rely on artificial intelligence, he can use his coding skills to land a job despite the technology taking on more work inside companies. The following has been edited for brevity and clarity.

When I was in second grade, I started programming with Scratch, which is super basic block-based programming. I realized I could make games from this. I was like, "That's awesome. I love games." Then, I got a book about Python at the library, and I would type in the code from there and was like, "Whoa, I drew a square and made a button that I can click." Then I moved on to Khan Academy. I've been working my way up from there in terms of complexity.

A lot of this was during distance learning during COVID. When I was supposed to be paying attention in class, I was programming. That's where I got a lot of my experience. It was very nerdy behavior.

AI might fix one thing, then break something else

When you're a kid, people always ask you, "What do you want to be when you grow up?" I was interested in computers. So, since the second grade, I would say, "I want to be a programmer."

As to AI, I knew neural networks existed for a while because I've seen people do cool things. Back before OpenAI was very big, they had a song generator that made songs in the styles of classical composers and stuff. I thought that was the coolest thing ever.

At the same time, because of how AI is trained, it's made to give you output that looks as accurate as possible, even when it's wrong. A lot of the time, it will tell you for sure that it works, and it looks like it will, but it doesn't, or it works somewhat. So, I spend a lot of time debugging AI code, which has been my experience with it. I have been using it now and then to debug my own code or help get an idea of how to figure something out, but generally, I don't find this code to be the highest quality.

At the beginning of the summer, I did a game jam, which is where you spend a week making a game. I kept having a problem where the code I wrote wasn't quite detecting where something was. I fed it to the AI, and it was like, "Oh, your problem is this." It gave me back code, and that was not my problem. I kept asking it over and over again, trying to help it out, and it didn't help me. I eventually had to figure it out myself, hours later.

You can get pretty far with just vibe coding, but usually it gets more complicated. As a project gets bigger and bigger, it gets more convoluted as to what the AI can work on. It might fix one thing and then break something else. That can create problems because when your project gets bigger, AI can only focus on one thing. It's even hard for humans, at least for me. I'm just a teenager.

I'm hoping other young people will focus too much on AI

I'm still not really worried about some big AI supercomputer taking over everything or taking potential jobs. It might play a role in how those jobs are carried out. Maybe there will be fewer of them. AI has given me some second thoughts, but there are already so many workers in programming that I don't know about job security.

I'm sort of hoping that everybody else my age will focus more on AI than they should. So when the bubble bursts a little bit, or maybe when there are jobs AI can't do as well, then I'll be the guy for the job because I know how to deal with it.
I've only been on this earth for 17 years, and I'm not so great at predicting the future yet. I'm kind of hoping that as people keep relying on AI, people who don't rely on AI will also be important. I can still use AI as a tool. I've done it, but I try not to rely on it like a lot of people do. For example, in classes, a lot of people use AI, so they don't really know how to do the thing without it.

AI still isn't so good at reasoning. It has a long way to go before it starts replacing larger chunks of work I'd like to do or that most programmers do.

The ultimate dream is to run my own company and be my own boss. I enjoy building things of all sorts, especially with code, because it's abstract. The future isn't stagnant. It's not going to stay the way it is now.

Business Insider
Why Anthropic is letting Claude walk away from you — but only in 'extreme cases'
Claude isn't here for your toxic conversations.

In a blog post on Saturday, Anthropic said it recently gave some of its AI models — Opus 4 and 4.1 — the ability to end a "rare subset" of conversations. The startup said this applies only to "extreme cases," such as requests for sexual content involving minors or instructions for mass violence, where Claude has already refused and tried to steer things back multiple times. It did not specify when the change went into effect.

It's not ghosting. Anthropic said users will see a notice when the conversation is terminated, and they can still start a new chat or branch off from old messages — but the specific thread is done.

Most people will never see Claude walk away, Anthropic said: "The vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues." The startup also said Claude won't end chats in situations where users may be at imminent risk of harming themselves or others.

Anthropic, which has positioned itself as the safety-first rival to OpenAI, said this feature was developed as part of its work on potential "AI welfare" — a concept that extends safety considerations to the AI itself. Anthropic was founded by former OpenAI staffers who left in 2020 after disagreements on AI safety. "Allowing models to end or exit potentially distressing interactions is one such intervention," it added.

Anthropic did not respond to a request for comment from Business Insider.

Big Tech in the red

Anthropic's move comes as some Big Tech firms face heat for letting extreme behavior slip through their AI safety nets.

Meta is under scrutiny after Reuters reported that internal documents showed its chatbots were allowed to engage in "sensual" chats with children. A Meta spokesman told Reuters the company is in the process of revising the document and that such interactions should never have been allowed.

Elon Musk's Grok made headlines last month after praising Hitler's leadership and linking Jewish-sounding surnames to "anti-white hate." xAI apologized for Grok's inflammatory posts and said the behavior was caused by new instructions for the chatbot.

Anthropic hasn't been spotless either. In May, the company said that during training, Claude Opus 4 threatened to expose an engineer's affair to avoid being shut down. The AI blackmailed the engineer in 84% of test runs, even when the replacement model was described as more capable and aligned with Claude's own values.
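For readers who want the mechanics spelled out, here is a minimal sketch of the client-side behavior the article describes: a terminated thread shows a notice and rejects further messages, but branching off an earlier turn still works. Everything in it is hypothetical; the Thread class, the ended flag, and the method names are invented for illustration and are not Anthropic's actual API.

```python
# Hypothetical sketch of the termination behavior described above.
# Names and structures are invented; this is not Anthropic's API.
from dataclasses import dataclass, field

@dataclass
class Thread:
    messages: list[str] = field(default_factory=list)
    ended: bool = False  # set when the model ends the conversation

    def send(self, text: str) -> None:
        if self.ended:
            # Per the article: the thread is done, and the user sees a notice.
            raise RuntimeError("Claude has ended this conversation. "
                               "Start a new chat or edit an earlier message.")
        self.messages.append(text)

    def branch_from(self, index: int) -> "Thread":
        # Branching off an old message yields a fresh, usable thread.
        return Thread(messages=self.messages[: index + 1])

thread = Thread()
thread.send("hello")
thread.ended = True            # model refused repeatedly, then walked away
fresh = thread.branch_from(0)  # still permitted, per Anthropic
fresh.send("a new direction")  # works: the branched thread is not ended
```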


Axios
OpenAI weighs encryption for temporary chats
Sam Altman says OpenAI is strongly considering adding encryption to ChatGPT, likely starting with temporary chats. Why it matters: Users are sharing sensitive data with ChatGPT, but those conversations lack the legal confidentiality of a doctor or lawyer. "We're, like, very serious about it," Altman said during a dinner with reporters last week. But, he added, "We don't have a timeline to ship something." An OpenAI spokesperson declined further comment. How it works: Temporary chats don't appear in history or train models, and OpenAI says it may keep a copy for up to 30 days for safety. That makes temporary chats a likely first step for encryption. Temporary and deleted chats are currently subject to a federal court order from May forcing OpenAI to retain the contents of these chats. Yes, but: Encrypted messaging keeps providers from reading content unless an endpoint holds the keys. With chatbots, the provider is often an endpoint, complicating true end-to-end encryption. In this case, OpenAI would be a party to the conversation. Encrypting the data while it is in transit isn't enough to keep OpenAI from having sensitive information available to share with law enforcement. Apple has addressed this challenge, at least in part, with its "Private Cloud Compute" for Apple Intelligence, which allows queries to run on Apple servers without making the data broadly available to the company. Adding full encryption to all of ChatGPT would also pose complications as many of its services, including long-term memory, require OpenAI to maintain access to user data. The big picture: Altman and OpenAI have advocated for some protection from government access to certain data, especially when people are relying on ChatGPT for medical and legal advice — protections that apply when you speak to a licensed professional. "If you can get better versions of those [medical and legal chats] from an AI, you ought to be able to have the same protections for the same reason," Altman said, echoing comments he has recently made. OpenAI hasn't yet seen a large number of demands for customer data from law enforcement. "The numbers are still very small for us, like double digits a year, but growing," he said. "It will only take one really big case for people to say, like, all right, we really do have to have a different approach here." Between the lines: Altman said this issue wasn't originally on his radar but that it has become a priority after realizing how ChatGPT is being used and how much sensitive data is being shared. "People pour their heart out about their most sensitive medical issues or whatever to ChatGPT," Altman said. "It has radicalized me into thinking that AI privilege is a very important thing to pursue." What to watch: Altman predicted some sort of protections will emerge, adding that lawmakers have been somewhat receptive and generally favor privacy protections.