July 24, 2025
How To Get The Most Out Of Your Early Product Metrics
Pallishree Panigrahi is Head of Data & Insights at Amazon Key.
In the early days of a new product, the numbers we track—the metrics—shouldn't be treated as verdicts. They're not there to crown your idea a success or condemn it as a failure. Early metrics are operational breadcrumbs—signals that reveal where users get stuck, what needs fixing and which assumptions need to be rethought.
It's easy to confuse early signals with signs of success—especially when dashboards look great on the surface.
I've seen this firsthand across multiple launches. On paper, a dashboard can look impressive: thousands of sign-ups, high activation rates, glowing ratings. But if you look closer, what seems like traction is often friction disguised as progress.
One example: In our smart access platform, we initially tracked how many times drivers successfully unlocked gates. The numbers were big and reassuring—until we realized many of those "successful" unlocks came from frustrated drivers tapping the button over and over because there was no clear confirmation the gate had opened.
This is why early metrics should be designed to help teams learn, not just look good in an investor update.
Five Metrics Every Early-Stage Product Should Track
I recommend every product team start with five foundational metrics. None of them declare success. All of them force you to ask better questions.
1. Time To First Value (TTFV)
How quickly does a user reach their first meaningful outcome? For an e-commerce platform, it's not when someone signs up; it's when they successfully place an order and receive their first product without issues. This metric reflects how smooth your onboarding, product discovery and checkout experience really are. If users don't get to value quickly, they often don't come back.
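As a minimal sketch, TTFV can be computed as the elapsed time between each user's sign-up and their first completed order, summarized with a median (the event names, data shapes and sample values here are illustrative assumptions, not a real schema):

```python
from datetime import datetime
from statistics import median

# Illustrative event log: (user_id, event, timestamp). Event names are assumptions.
events = [
    ("u1", "signup",          datetime(2025, 7, 1, 9, 0)),
    ("u1", "order_delivered", datetime(2025, 7, 1, 9, 45)),
    ("u2", "signup",          datetime(2025, 7, 1, 10, 0)),
    ("u2", "order_delivered", datetime(2025, 7, 3, 10, 0)),
]

def time_to_first_value(events):
    """Median minutes from sign-up to a user's first meaningful outcome."""
    signups, first_value = {}, {}
    for user, event, ts in sorted(events, key=lambda e: e[2]):
        if event == "signup":
            signups.setdefault(user, ts)
        elif event == "order_delivered" and user in signups and user not in first_value:
            first_value[user] = ts
    gaps = [(first_value[u] - signups[u]).total_seconds() / 60 for u in first_value]
    return median(gaps) if gaps else None

print(time_to_first_value(events))  # median minutes to first value: 1462.5
```

A median (rather than a mean) keeps a few very slow users from hiding the typical experience.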
2. Activation Rate
What percentage of new users hit a milestone that proves they've engaged meaningfully? A good activation metric might be: User adds an item to their cart and checks out within 24 hours of sign-up. This goes beyond passive browsing and captures real purchase intent. A low activation rate could point to friction in product search, unclear pricing, or trust barriers at checkout.
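The milestone above can be sketched as a simple window check: the share of new users who check out within 24 hours of signing up (user IDs and timestamps below are illustrative assumptions):

```python
from datetime import datetime, timedelta

# Illustrative sign-up and first-checkout timestamps per user.
signups = {
    "u1": datetime(2025, 7, 1, 9, 0),
    "u2": datetime(2025, 7, 1, 10, 0),
    "u3": datetime(2025, 7, 2, 8, 0),
}
checkouts = {
    "u1": datetime(2025, 7, 1, 12, 0),  # 3 hours after sign-up
    "u2": datetime(2025, 7, 4, 10, 0),  # 3 days after sign-up — outside the window
}

def activation_rate(signups, checkouts, window=timedelta(hours=24)):
    """Share of new users who reached the milestone within the window."""
    activated = sum(
        1 for user, signed_at in signups.items()
        if user in checkouts and checkouts[user] - signed_at <= window
    )
    return activated / len(signups)

print(activation_rate(signups, checkouts))  # 1 of 3 users activated
```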
3. Repeat Usage In A Short Window
Do users come back quickly to do the same task again? Traditional retention metrics are often too broad or slow to reveal early signals of value. Instead, track whether people return in seven or 14 days to do the same task again. Repeated use in context is a stronger indicator of product-market fit than one-time curiosity.
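One way to operationalize this, as a sketch with made-up data: count the users whose second occurrence of the same task falls within seven days of their first:

```python
from datetime import datetime, timedelta

# Illustrative: timestamps of the same core task per user (values are assumptions).
task_events = {
    "u1": [datetime(2025, 7, 1), datetime(2025, 7, 5)],   # returned within 7 days
    "u2": [datetime(2025, 7, 1)],                          # never returned
    "u3": [datetime(2025, 7, 1), datetime(2025, 7, 20)],   # returned, but too late
}

def repeat_rate(task_events, window=timedelta(days=7)):
    """Share of users who repeat the same task within the window of first use."""
    repeated = sum(
        1 for times in task_events.values()
        if len(times) > 1 and sorted(times)[1] - sorted(times)[0] <= window
    )
    return repeated / len(task_events)

print(repeat_rate(task_events))  # only u1 counts as a repeat user
```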
4. Top Drop-Off Point
Where in the journey do most users abandon the process and why? It's not enough to know your overall conversion rate. Pinpoint whether users are dropping off at the product page, after adding to cart or during payment. For instance, a high abandonment rate at checkout could reveal trust issues, hidden costs or UX problems with the payment flow.
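Finding the worst step is mechanical once the funnel is counted. A sketch, with assumed step names and counts, that flags the transition losing the largest share of remaining users:

```python
# Illustrative funnel counts at each step; names and numbers are assumptions.
funnel = [
    ("product_page", 1000),
    ("add_to_cart", 420),
    ("checkout", 260),
    ("payment", 110),
]

def top_drop_off(funnel):
    """Return the step transition that loses the largest fraction of users."""
    worst_step, worst_loss = None, 0.0
    for (prev, prev_n), (step, n) in zip(funnel, funnel[1:]):
        loss = 1 - n / prev_n  # fraction lost between consecutive steps
        if loss > worst_loss:
            worst_step, worst_loss = (prev, step), loss
    return worst_step, round(worst_loss, 2)

print(top_drop_off(funnel))  # the product_page -> add_to_cart step loses the most
```

Comparing fractional loss per step, rather than absolute counts, keeps later (smaller) steps from being unfairly ignored.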
5. Qualitative Feedback Volume And Themes
What are users telling you, in their own words, about what's not working? Numbers show you what happened, but feedback tells you why. Categorize support tickets, surveys and reviews into clear themes. Often, your biggest opportunity hides inside the smallest complaints.
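Once tickets are tagged, surfacing the dominant themes is a one-liner. A sketch with invented theme labels:

```python
from collections import Counter

# Illustrative support tickets already tagged with a theme; tags are assumptions.
tickets = [
    "unclear gate confirmation", "late delivery", "unclear gate confirmation",
    "payment failed", "unclear gate confirmation", "late delivery",
]

theme_counts = Counter(tickets)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")  # most frequent theme first
```

The hard part is the tagging itself, not the counting—keep the theme taxonomy small enough that two people would tag the same ticket the same way.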
These metrics aren't permanent. As your product matures, they should evolve with it. But picking the right metrics is only half the challenge; what really matters is how you interpret them.
Discovery Metrics Vs. Validation Metrics
One of the most common traps is treating early metrics as validation—proof that your idea is working. But discovery metrics have a different purpose: They break your assumptions so you can build the right thing faster.
I ask three questions to tell them apart:
1. Is this metric helping us learn, or is it just there for reporting?
2. What decision will this help us make in the next sprint?
3. Is it tied to a hypothesis we're testing?
Daily active users (DAUs), for example, often look like success. But if people aren't completing meaningful tasks, those DAUs are empty calories. A stronger early signal might be the percentage of users who complete a workflow without retrying or contacting support.
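That stronger signal is easy to compute once sessions carry the right flags. A sketch, with assumed field names and invented data:

```python
# Illustrative per-user workflow outcomes; the flag names are assumptions.
sessions = [
    {"user": "u1", "completed": True,  "retries": 0, "support_contact": False},
    {"user": "u2", "completed": True,  "retries": 2, "support_contact": False},
    {"user": "u3", "completed": False, "retries": 1, "support_contact": True},
    {"user": "u4", "completed": True,  "retries": 0, "support_contact": False},
]

# A "clean" completion: finished the workflow with no retries and no support contact.
clean = [s for s in sessions if s["completed"] and s["retries"] == 0
         and not s["support_contact"]]
clean_completion_rate = len(clean) / len(sessions)
print(clean_completion_rate)  # 0.5 — half the users succeeded without friction
```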
Discovery metrics influence what you build next. Validation metrics only reassure you that everything's fine. Metrics guide better decisions only if they're designed well. That's where the VET framework comes in.
The VET Metrics Framework
To design metrics that matter, I use what I call the VET Metrics Framework. It's a simple test:
• Value: Does this metric reflect an outcome that matters to users or the business? Counting clicks or pageviews doesn't help if nobody is completing the core task.
• Evolvability: Can the metric adapt as your product matures? A KPI that only works in a beta test will quickly become obsolete.
• Trustworthiness: Does this metric produce actionable insight? For example, average session time is ambiguous—longer isn't always better. You need clarity, not just data.
When we built early KPIs for Amazon Key, we didn't settle for tracking how many times someone pressed the unlock button. We measured the percentage of deliveries completed without intervention—a metric that showed whether the system was truly solving the access problem. Of course, tracking progress is only useful if you're focused on solving the right thing. So how do you know you are?
Five Ways To Know If You're Solving The Right Problem
No single metric can answer this question, but you can triangulate using five lenses:
1. Repeated Use In Context: Are people coming back to solve the same problem?
2. Feedback Alignment: Do users describe the value in their own words the way you intended?
3. Problem Substitution: What old workaround did your product replace?
4. Tolerance Of Friction: Do users keep going even when parts of the experience are clunky?
5. Value Hypothesis Testing: Can you validate demand and usability before building everything out?
These signals give you a clearer picture of real-world fit. Even with the right metrics and mindset, it's easy to fall into common traps, especially when data looks deceptively positive.
Measurement Traps That Sink Early Products
Even experienced teams fall into familiar pitfalls:
• Tracking What's Easy, Not What Matters: Just because you can measure it doesn't mean it helps.
• Measuring Too Much, But Learning Too Little: A crowded dashboard is usually a sign of unclear priorities.
• Overvalidating And Underexploring: Early KPIs should challenge your assumptions, not confirm them.
• Lagging Indicators Masquerading As Insight: NPS and 30-day retention are too slow to guide early decisions.
• Ignoring Qualitative Feedback: Pair behavioral data with user comments to see the full story.
If your metrics are only telling you everything is fine, you're not measuring deeply enough. Early metrics should make you a little uncomfortable. That's their job. They exist to surface what's not working so you can fix it before you scale.
This week, take a fresh look at your dashboard—not just for trends, but for blind spots.
If your metrics aren't teaching you something new, it's time to dig deeper.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.