We often release new features without really knowing whether they succeeded. Teams track all sorts of metrics, yet very few of them tell us if a feature actually changed something in the user's life. One framework in particular tends to cut through the noise because it shifts attention from surface-level signals to genuine engagement. TARS aims to measure the impact of product features by looking at how people behave and how satisfied they are, not how many people installed the app or left a one-click rating.
The idea is simple enough. If a feature brings value, people use it in a consistent and meaningful way. If it doesn't, they won't, no matter how much we promote it. TARS gives a structured way to track this pattern. It focuses on a clearly defined group of users, follows them across the right stages, and reveals whether the feature became part of their actual workflow. And that is exactly what most teams struggle to understand.
Why TARS matters
TARS is an acronym for Try, Adopt, Retain, and Satisfaction. The letters may look a bit mechanical at first, but they capture the four moments that usually decide whether a feature becomes part of someone's routine. Try is the initial spark. Adopt shows whether the feature fits naturally into the person's flow. Retain tells you if they keep coming back without being nudged. And Satisfaction reflects the quiet, subjective sense that the feature actually helps.
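If it helps to see the funnel written down, here is one way the four stages might be encoded. The stage names and their order come from TARS itself; the Python types and the comments are just illustration.

```python
from enum import Enum

class TarsStage(Enum):
    """The four TARS stages, in funnel order."""
    TRY = 1           # the initial spark: a first meaningful interaction
    ADOPT = 2         # the feature slots into the person's regular flow
    RETAIN = 3        # they keep coming back without being nudged
    SATISFACTION = 4  # they report that the feature actually helps
```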
Teams usually rely on numbers that look comforting at first glance. A feature might see thousands of impressions or a spike of early clicks, which may suggest momentum, yet none of that proves long-term usefulness. A few of my past projects taught me that even a glowing NPS score can hide the fact that no one is really adopting the feature in their day-to-day routine. TARS invites you to look at the moments when the feature must shine. It asks whether new users try it soon enough, whether returning users come back to it naturally, and whether the ones who rely on it feel satisfied with what it does.
Once you start seeing features through that lens, the noise around "engagement" fades quickly. You no longer chase the numbers that only mimic success. Instead, you get a much clearer view of what is actually happening, even when the results are a bit uncomfortable.
When to use TARS
TARS works best when you're introducing a feature that is supposed to become a habit or at least a regular stop in someone's workflow. A small cosmetic tweak won't need this level of scrutiny, but anything that changes core behavior benefits from being measured against a well-defined audience over time. It's particularly helpful during iterative releases, when you might be unsure whether to invest further. The framework shows whether the energy you're putting in is landing with the people it was meant for.
It's also surprisingly useful when a feature seems successful but something still feels off. Maybe adoption is high, but usage drops fast. Maybe people try it once and never return. TARS exposes these patterns early enough that you can adjust the product before investing months into polishing something no one will keep using.
How TARS actually works
The framework isn't complicated, though it asks for a bit of discipline. You start by defining the group of users who should realistically interact with the feature. This part matters more than people admit. If the audience is too broad, the signal dissolves. If it's too narrow, you end up guessing. Once that circle is drawn, you follow the users through the four stages: how many try the feature, how many adopt it, how many return to it over time, and whether the ones who stay are satisfied.
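To make "drawing the circle" concrete, here is a minimal sketch of what defining that group could look like in code. Everything specific in it — the eligibility criteria, the plan names, the field names — is a hypothetical example, not part of the framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TarsCohort:
    """The group of users who should realistically interact with the feature."""
    name: str
    user_ids: frozenset[str]

def build_cohort(users: list[dict]) -> TarsCohort:
    # Hypothetical criterion: active accounts on a plan that actually
    # includes the feature. Too broad and the signal dissolves; too
    # narrow and you end up guessing.
    eligible = frozenset(
        u["id"]
        for u in users
        if u.get("is_active") and u.get("plan") in {"pro", "team"}
    )
    return TarsCohort(name="analytics-dashboard-eligible", user_ids=eligible)
```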
The input is essentially two things. You need a clear feature event that marks each stage, and you need behavioral data for the user group you defined. For example, a try event could be opening a new analytics dashboard; an adopt event might be completing a full report for the first time; retain could be using it at least once a week; and satisfaction might be captured through a lightweight, in-context prompt after a few repeated uses.
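Staying with that dashboard example, the stage checks might look like the sketch below. The event names, the 1-to-5 survey scale, and the weekly retention window are all assumptions made for illustration; the framework only asks that each stage have a clear marker.

```python
from datetime import datetime, timedelta

# Hypothetical raw event: (user_id, event_name, timestamp).
Event = tuple[str, str, datetime]

def tried(events: list[Event], user: str) -> bool:
    # Try: the user opened the new analytics dashboard at least once.
    return any(u == user and e == "dashboard_opened" for u, e, _ in events)

def adopted(events: list[Event], user: str) -> bool:
    # Adopt: the user completed a full report at least once.
    return any(u == user and e == "report_completed" for u, e, _ in events)

def retained(events: list[Event], user: str, now: datetime, weeks: int = 4) -> bool:
    # Retain (assumed definition): at least one dashboard open in each
    # of the last `weeks` weekly windows -- "at least once a week".
    opens = [t for u, e, t in events if u == user and e == "dashboard_opened"]
    return all(
        any(now - timedelta(weeks=w + 1) <= t < now - timedelta(weeks=w) for t in opens)
        for w in range(weeks)
    )

def satisfied(survey_scores: dict[str, int], user: str, threshold: int = 4) -> bool:
    # Satisfaction: the lightweight in-context prompt, answered with
    # `threshold` or higher on an assumed 1-to-5 scale.
    return survey_scores.get(user, 0) >= threshold
```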
The output is a small but powerful picture of real engagement. You see where people drop off, which stage feels fragile, and whether the feature is worth polishing or rethinking. In practice, a product team usually ends up with a simple set of numbers tied to those four stages and a short narrative that explains what they mean. Something like: most users try the feature quickly, only a portion adopt it, retention is shaky, and satisfaction is surprisingly high among the small group that sticks around.
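Rolling those checks up over the cohort yields exactly that small picture: four numbers plus the narrative they support. The sketch below continues the hypothetical helpers above; measuring satisfaction only among retained users is one possible choice, made here to mirror the example in the text.

```python
def tars_report(
    cohort: TarsCohort,
    events: list[Event],
    survey_scores: dict[str, int],
    now: datetime,
) -> dict[str, float]:
    """Rates for each TARS stage, using the stage checks sketched above."""
    total = len(cohort.user_ids) or 1
    triers = {u for u in cohort.user_ids if tried(events, u)}
    adopters = {u for u in triers if adopted(events, u)}
    retainers = {u for u in adopters if retained(events, u, now)}
    happy = {u for u in retainers if satisfied(survey_scores, u)}
    return {
        "try": len(triers) / total,
        "adopt": len(adopters) / total,
        "retain": len(retainers) / total,
        # Satisfaction among the group that actually stuck around.
        "satisfaction": len(happy) / max(len(retainers), 1),
    }
```

A result like `{"try": 0.8, "adopt": 0.3, "retain": 0.1, "satisfaction": 0.9}` would read exactly like the narrative above: most people try it, few keep it, and the ones who stay are happy.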
This approach keeps you anchored to what matters. You avoid stretching yourself thin by tracking every possible signal, and you resist the temptation to pepper the product with surveys that only irritate the heaviest users. TARS helps you stay focused on the few moments that truly define success. By looking at the right group of users and following their experience across the right stages, you develop a steadier sense of whether the feature deserves more attention or needs a different direction.
In the end, it brings a bit of clarity to the messy reality of product development. You stop guessing. You start observing how real people behave. And over time, that habit tends to produce features that aren't just launched, but actually adopted.
- https://www.reforge.com/guides/evaluate-feature-performance — Reforge: TARS.
- https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a — Adrian H. Raudaschl: TARS, a product metric game changer.
- https://medium.com/@niklas2106_71245/tars-how-to-execute-and-evaluate-a-feature-strategy-f7a965cc1fb9 — Niklas Teichmann: TARS — how to execute and evaluate a feature strategy.
- https://coda.io/@john/product-playbook/tars-metrics-template-49 — John Scrugham: TARS metrics template, open-source product playbook.