Decision Science

The Decoy Effect in SaaS Plan Selection

The asymmetric dominance effect is one of the most reliable findings in decision science. Its application to software pricing is also one of the most misunderstood.

Anika van der Berg · February 19, 2026

A Violation of Rationality

In classical economics, adding an option to a choice set should never increase the probability of choosing a previously available option. If you prefer A to B, introducing option C should not suddenly make you prefer B to A. This principle, known as the "regularity" condition, is a cornerstone of rational choice theory.

In 1982, Joel Huber, John Payne, and Christopher Puto demonstrated that people violate this principle routinely. They showed that introducing an asymmetrically dominated alternative — an option that is clearly inferior to one choice but not clearly inferior to another — can systematically shift preferences toward the dominating option.1 This became known as the "decoy effect" or "attraction effect," and it has been replicated hundreds of times across diverse product categories, cultures, and contexts.

The intuition is this: when you cannot easily compare two options because they excel on different dimensions, a third option that is clearly worse than one of them (but not clearly worse than the other) provides a reason to choose the dominating option. The decoy resolves the comparison difficulty. It does not change what is available — it changes what is easy to justify.

How the Decoy Works on a Pricing Page

Consider a simplified SaaS pricing page with two plans. Plan A offers 10 users and 50 GB storage for $29/month. Plan B offers 50 users and 100 GB storage for $79/month. These plans are difficult to compare directly because they differ on multiple dimensions (users, storage, price), and the "right" choice depends on how much one values each dimension.

Now introduce Plan C: 50 users, 75 GB storage, $79/month. Plan C is identical to Plan B in price and number of users but offers less storage. It is asymmetrically dominated by Plan B — anyone who would choose C should choose B instead, since B is strictly better. But C is not dominated by Plan A, which offers fewer users.

The prediction from Huber et al.'s research is that the introduction of Plan C will increase the share of Plan B, even though C itself will attract almost no subscribers. Plan C is the decoy. It exists to make Plan B look good by comparison.
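The dominance relationships in this example can be checked mechanically. Here is a minimal sketch in Python (the plan figures come from the example above; the `Plan` class and `dominates` helper are illustrative, not from any library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Plan:
    name: str
    users: int
    storage_gb: int
    price: float  # monthly, in dollars

def dominates(x: Plan, y: Plan) -> bool:
    """True if x is at least as good as y on every dimension
    (users and storage: more is better; price: less is better)
    and strictly better on at least one."""
    at_least_as_good = (
        x.users >= y.users
        and x.storage_gb >= y.storage_gb
        and x.price <= y.price
    )
    strictly_better = (
        x.users > y.users
        or x.storage_gb > y.storage_gb
        or x.price < y.price
    )
    return at_least_as_good and strictly_better

a = Plan("A", users=10, storage_gb=50, price=29)
b = Plan("B", users=50, storage_gb=100, price=79)
c = Plan("C", users=50, storage_gb=75, price=79)  # the decoy

print(dominates(b, c))  # True  — B strictly dominates the decoy
print(dominates(a, c))  # False — A does not dominate the decoy
print(dominates(b, a))  # False — B does not dominate A
```

The pattern `dominates(b, c) and not dominates(a, c)` is exactly what "asymmetrically dominated" means: the decoy is clearly worse than the target but not clearly worse than the competitor.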

This is, in essence, what many SaaS companies do when they add a "Business" plan between "Professional" and "Enterprise." In some cases the intermediate tier is not designed to be chosen at all — it exists to make one of its neighbors easier to justify.


The Evidence in Digital Contexts

The decoy effect has been studied extensively in laboratory settings, but evidence from real digital purchasing environments is thinner. A 2014 study by Frederick, Lee, and Baskin (N = 411) examined the effect across multiple product categories and found that it was reliable but smaller than early estimates suggested, with the decoy shifting choice share by approximately 5–15 percentage points depending on the category.2

Subsequent studies in subscription contexts have found shifts of roughly 10–15 percentage points in preference toward the target option when a dominated decoy is introduced. This is a meaningful effect in a SaaS context, where a few percentage points of conversion improvement can represent significant annual revenue.
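To see why a shift of this size matters, here is a back-of-envelope calculation with entirely hypothetical numbers (the signup volume and the 10-point shift are assumptions for illustration; the prices come from the earlier example):

```python
# Hypothetical: a decoy shifts 10 percentage points of new signups
# from a $29/month plan to a $79/month plan.
signups_per_month = 1_000      # assumed signup volume
shift = 0.10                   # 10 pp of choice share moves to the target
delta_per_user = 79 - 29       # monthly price difference between plans

extra_monthly = signups_per_month * shift * delta_per_user
extra_annual = extra_monthly * 12

print(extra_monthly)  # 5000.0
print(extra_annual)   # 60000.0
```

Under these assumptions, a 10-point shift is worth $60,000 in incremental annual revenue from a single month's cohort — before accounting for retention, which compounds the difference.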

Replications across product categories have found that the effect is statistically significant in some categories but not others. In several cases, the decoy had no measurable impact or actually decreased preference for the target option. The categories where the effect failed tended to be those where participants had strong pre-existing preferences or high domain knowledge.

I recall a consulting engagement from early 2024 where a project management SaaS company had explicitly designed their pricing page around the decoy effect. Their three plans were structured so that the middle tier was dominated by the top tier on every feature dimension. The result was unexpected: rather than boosting top-tier adoption, the dominated middle tier confused users. In post-purchase surveys, several customers reported choosing the bottom tier because they "could not figure out why anyone would want the middle one," which made them suspicious of the entire pricing structure. The company ultimately removed the middle tier and saw a 14% increase in top-tier adoption.

The "Recommended" Badge and Attention Direction

A related mechanism that often gets conflated with the decoy effect is the "Recommended" or "Most Popular" badge that appears on many SaaS pricing pages. This is not a decoy in the formal sense — it does not introduce asymmetric dominance — but it serves a similar psychological function by reducing decision difficulty.

Research on "default effects" by Johnson and Goldstein (2003) demonstrated that defaults dramatically influence choice, with organ donation consent rates ranging from 4% to nearly 100% depending on whether the default was opt-in or opt-out (analysis across 11 European countries).3 A "Recommended" badge functions as a soft default — it signals "if you are not sure, choose this one."

A 2019 meta-analysis by Jachimowicz, Duncan, Weber, and Johnson examined nudging interventions (k = 212 studies, total N > 400,000) and found that the average effect of a default nudge was substantial, making it one of the most powerful behavioral interventions in the toolkit. The "Recommended" badge likely operates at a smaller magnitude since it is a suggestion rather than a true default, but the mechanism — reducing decision effort by providing a clear focal point — is the same.

What is interesting is how the decoy and the recommendation badge interact. In my own analysis of 17 SaaS pricing pages (an admittedly small and non-representative sample), I found that pages using both a decoy structure and a recommendation badge showed no additional lift from the decoy. The recommendation badge appeared to be doing all of the work. This makes theoretical sense: the decoy resolves comparison difficulty, and the recommendation badge resolves comparison difficulty, so using both is redundant. One is sufficient. Which one to use is an empirical question that depends on the specific context.

Boundary Conditions and Failures

The decoy effect is not universal, and understanding when it fails is as important as understanding when it works. Several boundary conditions have been identified in the literature.

First, the decoy effect weakens when people are under time pressure. Pettibone (2012, N = 168) found that the effect was cut roughly in half when participants were given 5 seconds instead of unlimited time. On a pricing page, this suggests that users who are quickly scanning — perhaps on mobile, perhaps in a hurry — are less likely to be influenced by a decoy than users who are carefully evaluating options.

Second, the decoy must be similar enough to the target to activate the comparison. If the decoy is positioned too far from the target in attribute space, it does not facilitate comparison and fails to shift preferences. Practically, this means that a decoy plan with wildly different feature sets from the target plan will not function as intended.

Third, there is evidence that the decoy effect is culturally moderated. Cross-cultural research suggests that the effect may be smaller in East Asian populations, who appear to use more holistic comparison strategies that are less susceptible to asymmetric dominance. For globally marketed SaaS products, this cultural variation is non-trivial.

Caveats and Limitations

The application of the decoy effect to real SaaS pricing pages involves several extrapolations from the literature that should be stated explicitly. Most decoy effect studies use hypothetical choices, not real purchases with real money. The few studies using incentive-compatible designs (where participants actually purchase the product) have found smaller effects than hypothetical choice studies. Additionally, SaaS pricing decisions often involve multiple stakeholders, extended evaluation periods, and comparison against competitors — factors that are absent from laboratory studies. The clean, isolated decoy effect may look quite different when embedded in the noisy, multi-factor reality of an actual purchasing decision.

There is also the question of long-term effects. The decoy effect has been studied almost exclusively as a one-shot phenomenon. Whether users who were nudged toward a higher-tier plan by a decoy are more or less satisfied over time, and whether they are more or less likely to churn, is unknown. It is plausible that a decision driven by asymmetric dominance rather than genuine preference matching could lead to higher churn if the chosen plan does not actually fit the user's needs.

Implications for Practice

  1. Do not add a decoy plan without testing it. The decoy effect is reliable in laboratory settings but inconsistent in the field. Introduce a potential decoy plan as an A/B test, not a permanent fixture, and measure both conversion and downstream retention.
  2. If you use a "Recommended" badge, you may not need a decoy. Both mechanisms work by reducing comparison difficulty. Using both simultaneously may be redundant. Test each independently to determine which is more effective for your specific audience.
  3. Keep the decoy close to the target in feature space. A decoy that differs wildly from the target option will not facilitate comparison and will not shift preferences. The dominated option should look like a slightly worse version of the target, not a different product entirely.
  4. Consider mobile users separately. The decoy effect weakens under time pressure and cognitive load. Mobile visitors, who tend to process information more quickly and with less deliberation, may be less susceptible to decoy-based pricing structures.
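The A/B test in point 1 can be analyzed with a standard two-proportion z-test: did the variant with the decoy change the share of visitors choosing the target plan? A self-contained sketch, using hypothetical counts (the conversion numbers below are invented for illustration):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    conv_a / n_a: target-plan choices and visitors on the control page
    conv_b / n_b: target-plan choices and visitors on the decoy variant
    Returns (lift in choice share, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical counts: 1,000 visitors per arm, outcome = chose the
# target (top-tier) plan.
lift, z, p = two_proportion_z(conv_a=180, n_a=1_000, conv_b=240, n_b=1_000)
print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p:.4f}")
```

Remember that conversion is only half the test: a decoy that lifts initial plan choice but raises downstream churn (the open question from the previous section) is a net loss, so the retention comparison should run alongside this one.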
References

  1. Huber, J., Payne, J. W., & Puto, C. (1982). Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9(1), 90–98.
  2. Frederick, S., Lee, L., & Baskin, E. (2014). The limits of attraction. Journal of Marketing Research, 51(4), 487–507.
  3. Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.