I led teams at Meta and Airbnb. My Big Tech career taught me an important lesson about dealing with chaos and crisis at work.
During that time, I led teams through rapid changes, such as layoffs.
I learned one secret to effective leadership: authentic clarity.
Over the course of my career in Big Tech, I've been lucky to work at several successful, fast-growing companies. At Meta and Airbnb, I helped scale research and design teams from 2012 to 2022, ultimately becoming Head of Design Studio at Airbnb.
I loved building teams of talented people, but it wasn't all roses. Rapid changes and looming crises were constants, as they seem to be at most companies.
For example, I was working at Airbnb when COVID-19 hit in 2020. The company lost 80% of its business in a matter of weeks, and by May, I was forced to lay off more than 25% of my team. Managing that crisis and recovery was one of the most difficult leadership crucibles of my career.
The everyday chaos of a fast-paced company was just as educational, though, and one strategy for managing it rose above the rest.
The secret to effective leadership in times of change — whether it's reorgs, strategy shifts, or layoffs — is to provide authentic clarity.
Clarity allows people to move forward calmly, even if they don't have all the answers. In a chaotic environment, providing clarity takes frequent communication in an authentic voice.
Clarity, not certainty
Early in my leadership career, I mistakenly assumed that being a decisive leader in a crisis meant projecting certainty. My logic was that people need to be reassured their leader knows exactly what to do.
I quickly realized that was a fantasy.
I can't remember a single time in my leadership career when I had all the answers.
Once, in an effort to project certainty, I confidently presented details that were largely guesswork. But things were moving fast, and the information I shared was proven wrong just days later. What I thought would be useful only made me look foolish and ultimately damaged my team's trust in me.
After a few failed efforts, I realized my team didn't need me to have all the answers; they just needed me to provide clarity about what was happening. I learned the importance of clarity in three key areas:
What is happening? It's essential to clearly state the facts as you know them, even if they're incomplete, to help people process what's happening.
Why is it happening? Sharing the "what" without the "why" is a key mistake. My understanding of the "why" was usually incomplete, but sharing any context I had helped my teams make sense of it.
What does it mean for me? It's usually hard for people to translate high-level changes down to their level. Even simple reminders like "Your day-to-day work won't change," or "Here's when we'll know how this will affect our road map," helped people feel calmer.
The best leaders I've worked with were proactive about answering these questions, reaching out to teams early and often. I made it a practice to hold frequent Q&A sessions with my teams and to say things like: "Good question, I don't know. Let me see if I can find out." I found that even a response like that could be clarifying.
Communicate like a human
People can tell immediately when leaders aren't being themselves, so it's important to communicate in your own voice.
Early in my career, I followed instructions from HR or internal comms teams and stuck to the talking points. I used templates for my emails and repeated the language I was given during leadership meetings. But my team quickly called me out, and I realized I was hurting my reputation by communicating like a corporate puppet.
Rather than relying on jargon or HR talking points, I started trying to speak honestly and vulnerably. The strategy I developed wasn't going rogue in a sensitive situation; it was translating the company's carefully chosen talking points into my own voice, using empathy.
In practice, this also meant I'd say things like: "I don't know what's going to happen either. The uncertainty isn't great, but I'll let you know as soon as I know more."
Or: "This sucks. Layoffs are hard for everyone, especially when it's good friends and talented colleagues we're saying goodbye to."
Acknowledging real things, like frustration or mistakes, helped build trust by signaling that we were all in the same boat.
Repeat yourself. Then repeat yourself.
The No. 1 mistake I've noticed leaders make during times of change isn't just poor communication; it's infrequent communication.
Even leaders who were good at providing authentic clarity weren't doing so consistently.
They'd communicate once and assume everyone understood. Or worse, they'd say nothing until they had all the answers, or there was something new to say. But that vacuum would often be filled with gossip and speculation.
I learned the solution was simply to repeat the message. I'd share the most important messages multiple times via several channels and in different words, because different framings might resonate with different people.
People have high anxiety and a short memory in times of crisis. Touching base often, even when there's little new information to share, builds confidence in a visible, highly present leader and reassures people that they haven't missed something.
Leading through change was never about having all the answers
During my Big Tech career, I observed that the most effective leaders in a crisis were rarely the ones with all the answers or the boldest vision. They were the ones who communicated clearly, showed up consistently, and were willing to be authentic. That's what builds trust and gets teams through chaos.
Representatives for Meta and Airbnb did not respond to a request for comment from Business Insider.
Do you have a story to share about managing teams through rough seas in Big Tech? Contact the editor, Charissa Cheong, at ccheong@businessinsider.com