Latest news with #ReidHoffman


India Today
21 hours ago
- Business
- India Today
LinkedIn follows Microsoft and lays off 281 employees, most are software engineers
The latest wave of job cuts in Silicon Valley has now reached LinkedIn, the professional networking giant best known for helping people find jobs. Ironically, it now finds itself in the position of handing out pink slips. According to a filing with California's Employment Development Department this week, LinkedIn is letting go of 281 employees in the state. These cuts come hot on the heels of Microsoft's broader announcement earlier this month that it would trim around 6,000 jobs globally, roughly 3 per cent of its total workforce. As a Microsoft-owned company, LinkedIn hasn't been spared. This time around, the pink slips landed particularly hard on software engineers, though others weren't fully immune: roles like senior product manager and talent account director were also affected. According to reports, the layoffs were communicated to employees on May 13, with many taking to LinkedIn itself to announce their new 'open to work' status.

It is a moment of deja vu for LinkedIn staff, although the tone is markedly different. In 2023, when the company let go of 716 employees, CEO Ryan Roslansky personally addressed the decision in a heartfelt internal memo. Fast forward to 2025, and there has been no word from him so far. With no public comment yet, speculation is rife about whether more job cuts are on the horizon.

Layoffs have become something of a gloomy tech-sector trend this year, with companies like Meta, Google, and Autodesk all slimming down their workforces. While reasons vary, from restructuring to AI investments to claims of underperformance, the message is clear: the industry is transforming rapidly, and not everyone's role is safe. One of the biggest disruptors? Artificial intelligence. With AI tools now capable of generating code, debugging, and even suggesting product improvements, many companies are quietly rethinking how many engineers they actually need. It's a bittersweet twist: engineers building tools so effective that they might be coding themselves out of a job.

On the subject of AI and how it affects jobs, LinkedIn co-founder Reid Hoffman has stated plainly that AI is not a friend, and that anyone who treats it as one will be harmed. He said, 'I don't think any AI tool today is capable of being a friend. And I think if it's pretending to be a friend, you're actually harming the person in so doing.'

For LinkedIn, which employs over 18,400 people worldwide, the layoffs may indicate a strategic pivot. Once seen as a dependable cog in Microsoft's machine, the platform might now be undergoing a rethink as automation and economic uncertainty prompt tech giants to 'streamline operations.' While AI promises efficiency and innovation, it also raises uncomfortable questions about job security, even in fields previously considered safe.


Mint
3 days ago
- Business
- Mint
'AI can't be your friend and pretending so is dangerous': LinkedIn co-founder Reid Hoffman
Reid Hoffman, co-founder of LinkedIn and a prominent investor in artificial intelligence, has cautioned against the growing trend of portraying AI systems as emotional companions, arguing that such framing risks undermining human relationships and emotional well-being, reported Business Insider. Speaking on a recent episode of the Possible podcast, Hoffman asserted that no current AI tool possesses the emotional depth required to qualify as a friend, and that suggesting otherwise could be psychologically harmful, added the publication. 'I don't think any AI tool today is capable of being a friend,' he said. 'And I think if it's pretending to be a friend, you're actually harming the person in so doing.'

As per the report, his remarks follow a wave of AI companion rollouts by Meta across its platforms, including Facebook, Instagram, WhatsApp and Ray-Ban smart glasses. Meta CEO Mark Zuckerberg recently claimed that AI chatbots might help address the United States' growing loneliness crisis, pointing to research indicating that many Americans have fewer than three close friends.

However, Hoffman drew a clear distinction between companionship and authentic friendship. He stressed that genuine relationships are built on mutuality: both giving and receiving emotional support. 'Friendship is a two-directional relationship,' he explained. 'It's not only, 'Are you there for me?', but also 'I am here for you'.' Reportedly, the tech entrepreneur warned that blurring the line between simulated and real emotional bonds risks diluting the value and understanding of human connection. While AI systems can imitate empathy and responsiveness, he noted that they lack the true reciprocity required to sustain meaningful relationships.

He praised Inflection AI's design of its 'Pi' assistant, which explicitly refers to itself as a companion rather than a friend. Hoffman commended the tool's emphasis on encouraging users to engage with real-world relationships. 'Helping you go out into your world of friends is, I think, an extremely important thing for a companion to do,' he added. Although he acknowledged that AI tools can offer utility and support, particularly for individuals facing social isolation, Hoffman warned against using them as a replacement for real-life human interaction. He expressed particular concern about the impact on children and vulnerable users who may struggle to differentiate between digital simulation and genuine emotion.

Hoffman also called for stronger industry standards and possible government intervention to address what he views as a growing ethical concern. 'We as a market should demand it, we as an industry should standardise around it,' he said. 'And if there's confusion around this, I think we as government should say, 'Hey, look, if you're not stepping up to this, we should.'' He concluded with a broader reflection on the implications of emotionally misleading AI systems. 'I think that's a degradation of the quality of elevation of human life,' Hoffman said. 'And that should not be what it's doing.'


India Today
3 days ago
- Business
- India Today
LinkedIn Cofounder says AI won't be your friend, you will be harmed if you pretend so
LinkedIn co-founder Reid Hoffman, who is also an active investor in AI, recently said that AI cannot be your friend. According to a Business Insider report, Hoffman shared his thoughts on the Possible podcast, where he warned people not to think of AI as something it's not. While many companies like Meta are working on AI chatbots that feel more personal, Hoffman said this could be harmful. 'I don't think any AI tool today is capable of being a friend,' he said. 'And I think if it's pretending to be a friend, you're actually harming the person in so doing.'

He explained that calling AI your friend is dangerous because it changes how we understand real human relationships. He said that friendship is not just about someone listening or talking to you. In his words, 'Friendship is a two-directional relationship.' He believes a true friend is someone who supports you, but also expects support in return. It's about growing together and helping each other become better people. He added, 'It's not only, 'Are you there for me?', but I am here for you.' That kind of connection, he said, is not something AI can offer, no matter how smart or advanced it gets.

His comments come at a time when Meta CEO Mark Zuckerberg is promoting AI companions as a way to help people feel less lonely. Zuckerberg mentioned that many Americans have fewer than three close friends and said that AI could help fill that gap. These AI companions are now being added to platforms like Instagram, WhatsApp, Facebook, and even smart glasses. But Hoffman strongly feels that this approach can confuse people about what friendship really means. He said that some AI tools, like the Pi chatbot from Inflection AI, do a better job because they clearly tell users they are just companions and not friends. Hoffman said, 'Helping you go out into your world of friends is, I think, an extremely important thing for a companion to do.'

I think what Hoffman is saying makes sense in many ways. It is kind of true that an AI companion cannot feel emotions or really care about you. It can reply and even sound friendly, but at the end of the day, it's still a machine. People, especially kids, might get too attached and think of the AI as a real friend, which could be harmful. But on the other hand, for people who have no one to talk to, having a chatbot might still be helpful in some situations. It can listen without judging and be available anytime. Basically, while AI can be useful and supportive in some ways, it cannot replace a real friend. Human connection is still something only real people can offer.

Hoffman believes tech companies should be clear and honest about what AI can and cannot do. 'We as a market should demand it, we as an industry, all MPAs, should standardise around it,' he said. 'And if there's confusion around this, I think we as government should say, 'Hey, look, if you're not stepping up to this, we should.'' Even OpenAI CEO Sam Altman agrees. He said in a Senate hearing that while adults might talk to AI for support, children should be protected from forming close personal bonds with it. When asked if he would want his own child to have an AI best friend, he said, 'I do not.'


Times of India
4 days ago
- Business
- Times of India
LinkedIn co-founder slammed for his reaction to Trump halting student visas: 'We've completely forgotten how...'
Reid Hoffman criticized Trump's decision to stall all visa interviews for the time being, and the LinkedIn co-founder was panned on social media for his reaction to the move. While the State Department said the pause was made as it weighs expanding social media vetting of visa applicants, Hoffman lamented the decision and said America has forgotten how it prospered. Hoffman was responding to a post that was also critical of the decision. "Blowing a $50 billion hole in the budget of every state flagship and top private university in America. Gunshot wound to the head of the best education system in the world and the crown jewel of American soft power. Absolute madness," the post read. "Unfortunately, the truth. Other countries are moving to fill the gap and acquire top talent. We've completely forgotten how America prospered," Hoffman wrote.

Social media users pointed out that LinkedIn depends on the H-1B visa program and accused Hoffman of not caring about America First. "America prospered because it was 90% White," one wrote. "Translation: LinkedIn needs a constant stream of grateful, compliant labor that won't challenge the system, just navigate it through his tool. I only care about money and don't care about how this hurts Americans," another post read, sarcastically paraphrasing Hoffman's stance. "Reid only cares about this because importing (indentured) talent allows him to keep wages lower - it's economic warfare against American Citizens whom Reid, like most far-left democrats, despise," one wrote. "There's no shortage of American talent — Hoffman just doesn't want to prioritize it. He profits from cheaper, visa-tied foreign labor. Over 60% of H-1B roles are certified below local median wages. It's about cutting costs, not fostering innovation."

Reid Hoffman is a backer of H-1B, the program that allows US companies to hire skilled foreign talent. Expanding it has been his long-standing demand; he urged the Obama administration as well to remove the cap on H-1B visas and bring in more foreign workers.

Business Insider
4 days ago
- Business
- Business Insider
Reid Hoffman says AI can't be your friend — and pretending it can is harmful
Reid Hoffman says AI can't be your friend, and pretending it can is "harming the person." Hoffman warned on a podcast that AI "friendships" erode human connections. "I don't think any AI tool today is capable of being a friend," Hoffman said in a Wednesday episode of the Possible podcast. "And I think if it's pretending to be a friend, you're actually harming the person in so doing."

His comments came amid Meta CEO Mark Zuckerberg's push to embed AI companions across Facebook, Instagram, WhatsApp, and even Ray-Ban smart glasses. Last month, Zuckerberg told podcaster Dwarkesh Patel he sees AI chatbots as part of the solution to America's so-called loneliness epidemic. He cited statistics suggesting that the average American has "fewer than three friends" but has the capacity for 15. According to a 2021 report from the Survey Center on American Life, 49% of Americans report having three or fewer friends.

But Hoffman drew a sharp distinction between companions and friends, saying that blurring that line erodes what it means to be human. "Friendship is a two-directional relationship," he said. "Companionship and many other kinds of interactions are not necessarily two-directional. And I think that's extremely important because it's the kind of subtle erosion of humanity." He said his theory of friendship was "two people agree to help each other become the best versions of themselves," a dynamic that involves not just emotional support but also accountability, something no chatbot can reciprocate. "It's not only, 'Are you there for me?', but I am here for you."

Hoffman praised design choices like Inflection AI's Pi assistant, which explicitly tells users, "I'm your companion," and encourages people to spend time with actual human friends. "Helping you go out into your world of friends is, I think, an extremely important thing for a companion to do," he said.

As tech companies race to deploy more emotionally intelligent bots, Hoffman argued for more transparency and regulation. "We as a market should demand it, we as an industry, all MPAs, should standardize around it," he said. "And if there's confusion around this, I think we as government should say, 'Hey, look, if you're not stepping up to this, we should.'" For Hoffman, the stakes are high. "I think that's a degradation of the quality of elevation of human life," he said. "And that should not be what it's doing."

Hoffman isn't alone in raising the alarm on AI companions. During Senate testimony earlier this month, OpenAI CEO Sam Altman voiced similar concerns about AI forming personal bonds with children. When asked whether he'd want his own child to form a best-friend bond with an AI bot, he said, "I do not." He said that while adults might seek emotionally supportive relationships with AI, children require a "much higher level of protection" in how these systems interact with them.