
What Nearly Blowing My Promotion Taught Me About Leadership
A few years ago, I stepped into a new role through an internal promotion. I'd been with the company for a while, understood the people and had my finger on the pulse of the business. Naturally, I thought I was ready to hit the ground running.
Spoiler alert: I wasn't.
Almost immediately, I found myself creating tension. Not because I had bad ideas, but because I was too eager to act without fully understanding my new reality. I later picked up What Got You Here Won't Get You There by Marshall Goldsmith, and the title alone felt like it was written for me. It helped me reflect on why my instincts (which had served me well until that point) were suddenly working against me.
Here's what I wish I had done differently and what I eventually figured out the hard way.
In my prior role, I was valued for being proactive, decisive and solution-focused. When I moved up, I leaned on those same traits—but now they were landing differently. Instead of being seen as helpful, I came off as pushy. I was unintentionally steamrolling people who had more context than I did.
I remember diving into a team meeting and immediately suggesting changes to a process I thought was outdated. What I didn't realize was that this 'outdated' process had been carefully developed to meet very real constraints I hadn't yet uncovered.
That was my first big leadership lesson: Leading from the middle is very different from leading from above. Execution and influence are not the same thing. My job was no longer to drive every solution but to enable the team to solve the right problems.
One trap I fell into was assuming that my tenure gave me all the insight I needed. I'd been involved in cross-functional projects, sat in on leadership meetings and even contributed to some strategic planning. But I didn't realize how much nuance I was missing until I started asking more questions.
Eventually, I slowed down and started meeting with key stakeholders—not to tell them my vision, but to ask for theirs. I asked questions like:
• 'What's working well that you'd want to protect?'
• 'Where do you think I can be most helpful?'
• 'What do you want me to understand before I try to change anything?'
Those conversations gave me insight that no dashboard or report ever could. I realized I had a few puzzle pieces—but not the whole picture.
I came into the role with good intentions, but my early attempts to make improvements were met with hesitation. It wasn't until I stepped back and focused on building trust that things began to shift.
I started by owning what I didn't know. I became more transparent about my learning curve and started showing more appreciation for the team's existing efforts. I stopped assuming and started listening with patience.
And something powerful happened: the resistance faded. People became more open, more engaged and, ironically, more receptive to change. They felt respected and included in the process.
There was a process I was certain we needed to sunset. In my mind, it was inefficient and outdated. But when I spoke to the team that created it, I heard the backstory. That process wasn't built in a vacuum; it was born from constraints, limited resources and a lot of trial and error.
Instead of scrapping it immediately, I invited the team to revisit it with me. We explored what still served us and what we'd outgrown. Together, we built something better. That collaboration turned what could've been seen as a teardown into a shared success.
It taught me something critical: Honoring past work isn't about clinging to the old. It's about showing people that their efforts matter—and that progress is something we create together.
I had a new title, new responsibilities and formal decision-making power. But the moment I tried to rely on that power alone, I hit walls. Influence, I realized, comes from alignment, not hierarchy.
So I brought people into the conversation early. I started socializing ideas instead of announcing them. I gave space for feedback, even when it was hard to hear. Over time, something shifted: the team stopped seeing change as my agenda and started treating it as our direction.
Looking Back: Growth Requires A Gear Shift
That promotion was a turning point for me, not just professionally but personally. It challenged my assumptions, exposed some blind spots and ultimately reshaped how I lead today.
Goldsmith's premise still rings true: The skills that get you promoted won't necessarily make you successful in your new role. Transitioning into leadership requires letting go of old habits, embracing new ones and remembering that people don't follow titles—they follow trust.
If you've recently stepped into a new role or have one on the horizon, here's my advice: don't rush to prove yourself. Start by listening, aligning and showing people that you respect the journey they've been on. Because when people trust that you see them, they'll walk with you even when the path changes.
