
Not using AI yet? Your career may already be falling behind, warns Perplexity CEO Aravind Srinivas
That warning came from Aravind Srinivas, CEO of Perplexity, in a recent YouTube interview with Matthew Berman.
The discussion quickly turned from innovation to urgency. In a digital economy where generative artificial intelligence (AI) is evolving every few months, Srinivas delivered a message that should echo across classrooms, boardrooms, and job portals: if you're not learning how to work with AI, you're falling behind.
The frontier isn't theoretical anymore
For young professionals entering the workforce, the AI revolution isn't just an abstract trend. It's reshaping what employers expect from new hires right now. Those who've mastered AI tools are already outpacing their peers in productivity, value, and hiring potential.
"People who are really at the frontier of using AI are going to be way more employable than people who are not," Srinivas stated in the same video. He wasn't making a philosophical prediction; he was stating a professional reality.
From resume screenings to content generation and project management, AI is quietly absorbing the kind of cognitive labour that once demanded human time and training, and it's not waiting for anyone to catch up.
The paradox of fast tech and slow adoption
While AI is racing ahead, most professionals are still figuring out the basics. The pace at which this technology is evolving means the skills gap is widening, not just because of access, but because of how hard it is to keep up emotionally and mentally.
According to Srinivas, the real gap isn't just technological, it's psychological and systemic. Humanity, he pointed out, is inherently good at adaptation but not at speed. "We've never had a piece of technology evolve this fast," he remarked, highlighting how the breakneck pace of AI development is outstripping most people's capacity to keep up.
This has a direct bearing on employability. "You can tell people, 'Hey, go learn AI, be more useful to your team.' But it takes a toll. People give up," he said. Educational resources are outdated by the time they gain traction, and models get upgraded before manuals are printed. The result is a growing chasm between those who can speak the language of large language models and those still trying to locate the settings tab in ChatGPT.
AI won't replace you, but someone using AI will
This now-familiar phrase was subtly woven into the Perplexity CEO's statement. The gap between those who can integrate AI into daily workflows and those who can't is becoming a real factor in promotions, hiring decisions, and team relevance.
The threat, Srinivas suggests, isn't AI itself; it's the human who knows how to wield it. And the solution isn't fear, it's fluency. He urges job seekers, students, and working professionals alike to see AI tools not as competition but as companions.
"You need to be more useful to your team by being someone who can use AI and be faster and more efficient," he said. The employability edge, in 2025 and beyond, will be determined not by how hard you work but by how smartly you collaborate with machines.
The emotional cost of staying updated
Learning AI isn't just intellectually demanding; it's emotionally exhausting. For many professionals, especially those mid-career or switching industries, the fast-changing AI landscape creates burnout before results.
Interestingly, Srinivas doesn't sugarcoat the emotional fatigue associated with this rapid shift. "Some people are going to lose jobs because this is beyond their limits," he said frankly.
While that might sound bleak, he frames it as a challenge rather than a conclusion.
He acknowledges that even content designed to educate is often rendered obsolete within a few product cycles. This makes it harder for average users to stay in the know. "Whatever educational materials you can build for people around the current state-of-the-art models becomes irrelevant like six months from now," he said. But his message isn't one of resignation, it's one of responsibility.
New jobs will need new entrepreneurs
Traditional job roles are getting disrupted, but the upside is clear. There's room to build. Srinivas points out that the next wave of employment may come not from job boards, but from people creating value in entirely new ways.
While some leaders argue that AI-driven productivity could lead to more hiring, Srinivas is cautious. He agrees that teams who are hyper-productive with AI may attract more investment. However, this optimistic view assumes a large enough talent pool trained in the right tools.
"The flaw in that argument is that it assumes there's always going to be a big supply of people who know how to use AI," he explained.
So where do displaced workers go? According to Srinivas, one answer lies in entrepreneurship. "More entrepreneurs need to emerge to create new jobs," he said. Whether it's building new platforms or supporting AI-related services, the next wave of employment may be driven more by innovation than application.
Browsers, agents, and a new skill economy
The tasks being automated are often invisible: summarising research, filling out forms, sorting emails. These might seem small, but together they represent entire job categories, and they're already being absorbed.
The conversation also touched on how AI is set to take over tasks so routine we rarely reflect on them: browsing, form-filling, email writing. As AI agents begin automating entire workflows, Srinivas pointed out that some types of labour will become irrelevant.
That shouldn't cause panic, he stressed, but preparation. "Spend less time doom-scrolling on Instagram. Spend more time using the AI," he advised. Not because platforms like Perplexity want more users, but because this is the only way to remain valuable in a workforce being reshaped in real time.
The quiet urgency of learning fast
At one point, Srinivas noted that "most people are still stuck with GPT-4 on the default model," suggesting that even those who are using AI tools may not be tapping into their full potential.
"I hope people try their best. That's all I can say," he concluded. For students and job seekers navigating a shifting job market, the message is clear. The goal is not to fear the machine but to keep pace with it. As AI begins to absorb more cognitive tasks, staying employable will depend on how quickly one learns, adapts, and applies. In this new world of work, using AI isn't optional, it's fundamental.