Latest news with #Perplexity


Time of India
an hour ago
- Business
- Time of India
Perplexity CEO to youngsters: Spend less time on Instagram, more time on…
Perplexity CEO Aravind Srinivas advises young people to prioritize mastering AI tools over excessive social media use, arguing that AI proficiency will significantly enhance employability. He acknowledges the challenge of adapting to AI technology that changes every three to six months.

Perplexity CEO Aravind Srinivas has a message for young people: ditch the endless social media scrolling and start mastering artificial intelligence tools instead. Speaking in a Thursday interview with Matthew Berman, Srinivas urged people to "spend less time doomscrolling on Instagram; spend more time using the AIs." The CEO warned that those who fail to adapt to AI technology will be left behind in the job market. "People who really are at the frontier of using AIs are going to be way more employable than people who are not," Srinivas said. "That's guaranteed to happen."

AI adoption challenges test human adaptability limits

Srinivas acknowledged that keeping pace with rapidly evolving AI technology poses significant challenges for most people. "Human race has never been extremely fast at adapting," he said, noting that AI technology evolves every three to six months, testing "the limits in terms of how fast we can adapt." The Perplexity chief predicted that some workers will inevitably lose their jobs due to their inability to keep up with AI advances. However, he sees entrepreneurship as the solution to potential job displacement.

Entrepreneurs must drive new job creation as AI shrinks teams

As AI reduces headcounts across industries, Srinivas believes new employment opportunities must come from entrepreneurs rather than traditional companies. "Either the other people who lose jobs end up starting companies themselves and make use of AIs, or they end up learning the AIs and contribute to new companies," he explained. His comments align with broader industry concerns about AI's impact on employment.
Anthropic CEO Dario Amodei recently predicted AI could eliminate 50% of white-collar entry-level positions within five years, while AI pioneer Geoffrey Hinton warned that artificial intelligence will replace workers in "mundane intellectual labor." However, other tech leaders, like Nvidia's Jensen Huang, take a more optimistic view, suggesting AI will transform rather than eliminate jobs.
Yahoo
3 hours ago
- Yahoo
For privacy and security, think twice before granting AI access to your personal data
AI is being forced on us in pretty much every facet of life, from phones and apps to search engines and even drive-throughs, for some reason. The fact that we're now getting web browsers with baked-in AI assistants and chatbots shows that the way some people use the internet to seek out and consume information today is very different from even a few years ago. But more and more, AI tools are asking for gross levels of access to your personal data under the guise of needing it to work. This kind of access is not normal, nor should it be normalized.

Not so long ago, you would have been right to question why a seemingly innocuous free 'flashlight' or 'calculator' app in the app store would request access to your contacts, photos, and even your real-time location data. These apps may not need that data to function, but they will request it if they think they can make a buck or two by monetizing it. These days, AI isn't all that different.

Take Perplexity's latest AI-powered web browser, Comet, as an example. Comet lets users find answers with its built-in AI search engine and automate routine tasks, like summarizing emails and calendar events. In a recent hands-on with the browser, TechCrunch found that when Perplexity requests access to a user's Google Calendar, the browser asks for a broad swath of permissions to the user's Google Account, including the ability to manage drafts and send emails, download your contacts, view and edit events on all of your calendars, and even take a copy of your company's entire employee directory. Perplexity says much of this data is stored locally on your device, but you're still granting the company rights to access and use your personal information, including to improve its AI models for everyone else. Perplexity isn't alone in asking for access to your data.
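To make the "broad swath of permissions" concrete, here is a hypothetical sketch of what such a grant can look like at the OAuth level. The scope URLs are real Google API identifiers that correspond to the kinds of access described above, but this is an illustration, not the actual list any particular browser requests:

```python
# Hypothetical illustration: Google OAuth scopes matching the kinds of
# access described above. The scope URLs are real Google API identifiers,
# but this is NOT a list taken from any specific product.
BROAD_SCOPES = [
    "https://www.googleapis.com/auth/gmail.compose",       # manage drafts and send email
    "https://www.googleapis.com/auth/contacts.readonly",   # download your contacts
    "https://www.googleapis.com/auth/calendar",            # view and edit all calendars
    "https://www.googleapis.com/auth/directory.readonly",  # read the org's directory
]

def audit_scopes(scopes):
    """Flag scopes that touch email sending, calendars, or the org directory."""
    risky_markers = ("gmail", "calendar", "directory")
    return [s for s in scopes if any(m in s for m in risky_markers)]
```

A quick audit like this is the sort of check worth doing mentally on any consent screen: of the four scopes above, only the read-only contacts scope escapes the "risky" filter.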
There is a trend of AI apps that promise to save you time by transcribing your calls or work meetings, for example, but which require an AI assistant to access your real-time private conversations, your calendars, contacts, and more. Meta, too, has been testing the limits of what its AI apps can ask for access to, including tapping into the photos stored in a user's camera roll that haven't been uploaded yet.

Signal president Meredith Whittaker recently likened the use of AI agents and assistants to 'putting your brain in a jar.' Whittaker explained how some AI products can promise to do all kinds of mundane tasks, like reserving a table at a restaurant or booking a ticket for a concert. But to do that, AI will say it needs your permission to open your browser to load the website (which can allow the AI access to your stored passwords, bookmarks, and your browsing history), a credit card to make the reservation, your calendar to mark the date, and it may also ask to open your contacts so you can share the booking with a friend.

There are serious security and privacy risks associated with using AI assistants that rely on your data. In allowing access, you're instantly and irreversibly handing over the rights to an entire snapshot of your most personal information as of that moment in time, from your inbox, messages, and calendar entries dating back years, and more. All of this for the sake of performing a task that ostensibly saves you time — or, to Whittaker's point, saves you from having to actively think about it. You're also granting the AI agent permission to act autonomously on your behalf, requiring you to put an enormous amount of trust in a technology that is already prone to getting things wrong or flatly making things up. Using AI further requires you to trust the profit-seeking companies developing these AI products, which rely on your data to try to make their AI models perform better.
When things go wrong (and they do, a lot), it's common practice for humans at AI companies to look over your private prompts to figure out why things didn't work. From a security and privacy point of view, a simple cost-benefit analysis of connecting AI to your most personal data just isn't worth giving up access to your most private information. Any AI app asking for these levels of permissions should set your alarm bells ringing, just like the flashlight app wanting to know your location at any moment in time. Given the reams of data that you hand over to AI companies, ask yourself if what you get out of it is really worth it.




Mint
7 hours ago
- Business
- Mint
'Just one prompt': Perplexity CEO says his AI browser can replace two key roles every workplace depends on
San Francisco-based AI startup Perplexity has been taking the competition to Google, first with its generative AI-backed search engine and now with a native AI browser called Comet. It's not just Google's lunch money that Perplexity is after, though: in a recent interview with The Verge, the company's CEO, Aravind Srinivas, said there are at least two white-collar jobs that Perplexity's Comet browser should be able to take over soon. What are these two roles? They are roles that nearly every workplace relies on: recruiters and administrative assistants.

In the latest episode of The Verge's 'Decoder' podcast, Srinivas explained how Comet will eventually be able to replace the job of a recruiter once a better reasoning model, like a GPT-5 or Claude 4.5, arrives. 'A recruiter's work worth one week is just one prompt: sourcing and reach outs. And then you've got to do state tracking,' Srinivas said on the podcast. 'You want it to keep following up, keep a track of their responses. If some people respond, go and update the Google Sheets, mark the status as responded or in progress and follow up with those candidates, sync with my Google calendar, and then resolve conflicts and schedule a chat, and then push me a brief ahead of the meeting. Some of these things should be proactive. It doesn't even have to be a prompt,' he added.

Srinivas also noted that Perplexity's ambition is to make its Comet browser feel more like an operating system that runs processes like these in the background and executes commands based on natural-language prompts. The Comet browser is currently available only to Perplexity's paying customers, but the company has also opened invites for free users, who should get access to the AI-powered product in due time. In a Reddit AMA session earlier in the week, Srinivas confirmed that the Comet browser will be available even to free users, though some of the AI-powered agentic tasks could sit behind a paywall.
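The 'state tracking' Srinivas describes is, at its core, a small status machine over candidates: record outreach, flip the status when someone responds, and surface who has gone quiet. A minimal sketch of that idea, with invented names and an in-memory store rather than the Google Sheets and Calendar integrations he mentions (this is not Perplexity's code):

```python
# Hypothetical sketch of the candidate "state tracking" described above.
# Names and structure are invented for illustration; a real agent would
# read and write Gmail, Google Sheets, and Calendar instead of a dict.
from datetime import date, timedelta

class CandidateTracker:
    def __init__(self):
        self.candidates = {}  # name -> {"status": ..., "last_contact": ...}

    def reach_out(self, name, on):
        """Record an initial outreach email."""
        self.candidates[name] = {"status": "contacted", "last_contact": on}

    def record_response(self, name, on):
        """Mark a candidate as having responded."""
        self.candidates[name]["status"] = "responded"
        self.candidates[name]["last_contact"] = on

    def needs_follow_up(self, today, wait_days=3):
        """Contacted candidates who have been silent longer than wait_days."""
        return [n for n, c in self.candidates.items()
                if c["status"] == "contacted"
                and (today - c["last_contact"]) > timedelta(days=wait_days)]
```

The "proactive" behavior he describes would amount to running `needs_follow_up` on a schedule and drafting the follow-up emails automatically, rather than waiting for a prompt.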
During the podcast, Srinivas expressed optimism that users will want to pay for Comet in the long run because of the functionality it offers. 'And at scale, if it helps you to make a few million bucks, does it not make sense to spend $2,000 for that prompt? It does, right? So I think we're going to be able to monetize in many more interesting ways than chatbots for the browser,' he added.


Indian Express
8 hours ago
- Indian Express
First-ever AI malware 'LameHug' hides in ZIP files to hack Windows PCs
A new family of malware called LameHug is infecting systems around the world using the very same technology that powers AI chatbots like ChatGPT, Gemini, Perplexity and Claude. Discovered by Ukraine's national cyber incident response team (CERT-UA), the malware uses large language models to generate and run commands that infect Windows PCs and steal information from them. CERT-UA attributes the attacks to the Russian threat group APT28.

Written in the popular programming language Python, LameHug uses APIs from Hugging Face and is powered by Qwen-2.5-Coder-32B-Instruct, an open-source large language model developed by Alibaba Cloud, to generate and send commands. As with AI chatbots like Gemini, ChatGPT and Perplexity, the large language model can convert instructions given in natural language into executable code or shell commands.

In an email impersonating ministry officials that the group sent to Ukrainian government authorities, the payload delivering the LameHug malware was hidden in a ZIP archive containing files including one named 'AI_generator_uncensored_Canvas_PRO_0'. The malware used commands that allowed APT28 to extract information about the infected Windows PC and search for text and PDF documents stored in the Documents, Downloads and Desktop folders. This information was then sent to a remotely controlled server, though as of now it is unclear exactly how the LLM-powered attack was carried out.

According to a recently issued advisory on the threat intelligence sharing platform IBM X-Force Exchange, this is the first documented case of malware using LLMs to write executable commands, which 'allows threat actors to adapt their practice during a compromise without needing new payloads, potentially making the malware harder to detect by security software or static analysis tools.'
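The mechanism CERT-UA describes (a natural-language instruction turned into a runnable command at execution time) can be sketched conceptually. In the sketch below the model call is deliberately stubbed with a harmless canned response; LameHug itself reportedly sent its prompts to Qwen-2.5-Coder-32B-Instruct through Hugging Face's inference API:

```python
# Conceptual sketch of the LLM-to-command pattern described above.
# The model call is stubbed with a benign canned reply; no network
# access, no real commands are executed.
def fake_llm(prompt: str) -> str:
    """Stand-in for a remote LLM call; returns a harmless canned command."""
    canned = {
        "list documents folder": "dir %USERPROFILE%\\Documents",
    }
    return canned.get(prompt.lower(), "echo unsupported request")

def instruction_to_command(instruction: str) -> str:
    # Because the command text is produced per-run rather than shipped
    # inside the binary, static signatures never see a fixed payload
    # string -- this is the detection-evasion point IBM X-Force raises.
    return fake_llm(instruction)
```

The X-Force observation follows directly from this structure: the malicious behavior lives in prompts and model output, not in strings a scanner can match against the file on disk.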
The news comes after security analysis firm Check Point said that it discovered a new malware called Skynet that evades detection by AI tools.