Choice Overload: The Evidence and Its Limits
The jam study is one of the most famous results in behavioral science. It is also one of the most contested. Here is what we actually know about too much choice.
The Study That Launched a Thousand Blog Posts
In 2000, Sheena Iyengar and Mark Lepper published a study that would become a staple of every business school curriculum and marketing conference for the next quarter century. At a gourmet grocery store in Menlo Park, California, they set up a jam tasting booth. On some days, the booth displayed 24 varieties of jam. On other days, it displayed 6. The large display attracted more initial interest — 60% of passersby stopped, compared to 40% for the small display. But the conversion rates told a different story: 30% of those who stopped at the small display purchased jam, compared to only 3% of those who stopped at the large display.1
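A quick back-of-the-envelope calculation makes the gap concrete: combining the stop rates and the conversion rates reported above gives the purchase rate per passerby for each display.

```python
# Jam-study figures from Iyengar & Lepper (2000), as reported above.
stop_rate = {"24 jams": 0.60, "6 jams": 0.40}
conversion_of_stoppers = {"24 jams": 0.03, "6 jams": 0.30}

# Purchase rate per passerby = P(stop) * P(buy | stopped).
per_passerby = {
    display: stop_rate[display] * conversion_of_stoppers[display]
    for display in stop_rate
}

print(per_passerby)  # 24 jams: 1.8% per passerby; 6 jams: 12%
```

Per passerby, the six-jam display sold roughly 6.7 times as much as the 24-jam display, which is the comparison the popular retellings usually elide: the large display's advantage in attracting attention was swamped by its disadvantage in closing the sale.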
The result was dramatic and counterintuitive. More choice led to less action. The finding aligned beautifully with a growing narrative about the paradox of modern abundance, and it was quickly absorbed into popular culture through Barry Schwartz's "The Paradox of Choice" and countless TED talks, blog posts, and consulting presentations.
There was just one problem: subsequent research has struggled to replicate this effect reliably.
The Replication Landscape
A 2010 meta-analysis by Benjamin Scheibehenne, Rainer Greifeneder, and Peter Todd examined 50 experiments on choice overload and found a mean effect size of essentially zero (d = 0.02).2 This does not mean that choice overload does not exist — the variance across studies was enormous, with some studies finding strong overload effects and others finding the opposite (that more choice increased purchase rates). Rather, the meta-analysis suggested that choice overload is not a universal phenomenon but a conditional one, and the conditions matter enormously.
A more recent meta-analysis by Chernev, Böckenholt, and Goodman (2015; k = 99 effect sizes) provided a more nuanced picture.3 They identified four key moderators that determined when choice overload would and would not occur: the complexity of the choice set, the difficulty of the choice task, the decision-maker's preference uncertainty, and the decision-maker's goal (browsing vs. buying). When all four moderators pointed toward difficulty — complex options, hard comparisons, uncertain preferences, and a need to choose — choice overload was robust. When conditions were favorable — simple options, easy comparisons, clear preferences — more choice actually helped.
This moderation framework is critical for anyone applying choice overload to SaaS product design, because it means the answer to "should we offer fewer options?" is always "it depends."
When More Options Help
The circumstances under which more choice is beneficial are well-documented but underappreciated in the popular narrative.
First, when people have clear preferences. Kahn and Lehmann (1991) demonstrated that consumers who know what they want benefit from larger assortments because a larger set is more likely to contain their ideal option. For SaaS products, this means that expert users — developers choosing a CI/CD tool, experienced marketers selecting an email platform — may actually prefer more pricing tiers and configuration options. Their expertise reduces the cognitive burden of comparison.
Second, when the options are well-organized. Mogilner, Rudnick, and Iyengar (2008) found that categorizing options into clear groups eliminated choice overload even with very large assortments. In their study, 400 magazines organized into 18 categories produced higher satisfaction than 400 uncategorized magazines or a reduced set of 40 magazines. The organization provided a decision structure that made the large set navigable.
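One way to see why categorization helps is to count how many items a shopper must actively consider at each step. The figures below reuse the magazine numbers from the study; the two-step choice model itself is my simplifying assumption, not something Mogilner et al. measured.

```python
import math

def items_considered(total_options, n_categories):
    """Rough per-step consideration load for a two-step choice:
    first pick a category, then pick within it (categories assumed
    to be roughly equal in size)."""
    if n_categories <= 1:
        return total_options          # one flat, uncategorized list
    within = math.ceil(total_options / n_categories)
    return n_categories + within      # step 1 + step 2

flat_400   = items_considered(400, 1)    # 400 uncategorized magazines
grouped    = items_considered(400, 18)   # 18 categories of ~23 each
reduced_40 = items_considered(40, 1)     # the smaller assortment

print(flat_400, grouped, reduced_40)  # 400, 41, 40
```

Under this toy model, 400 categorized magazines impose roughly the same per-step load as a flat set of 40, while still containing nearly everyone's ideal option — consistent with the higher satisfaction Mogilner et al. observed for the categorized large set.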
Third, when the stakes are low. Scheibehenne et al.'s meta-analysis found that choice overload effects were weakest for low-stakes decisions. Choosing a monthly subscription at $5/month is psychologically different from choosing a $50,000/year enterprise contract. The lower the stakes, the more people are comfortable with imperfect choices and the less they are paralyzed by large option sets.
When Fewer Options Help
The conditions under which choice overload reliably occurs are equally specific.
First, when the decision-maker lacks expertise. A new SaaS user encountering a pricing page for the first time, without knowledge of what features they need or how much they should pay, is the prototypical overload candidate. They have high preference uncertainty and no framework for comparison. For these users, offering three clearly differentiated plans is almost certainly better than offering seven.
Second, when options are difficult to compare. Gourville and Soman (2005) showed that choice overload is amplified when options are "non-alignable" — meaning they differ across different attributes, forcing trade-offs rather than simple rankings. A pricing page where every tier has different limits on different features (Plan A: 5 users, 100 GB; Plan B: 20 users, 50 GB; Plan C: 10 users, 200 GB) creates comparison difficulty that grows rapidly with the number of options, since every pair of plans must be weighed on every attribute that differs. In contrast, plans that differ along a single dimension (small, medium, large — same features, different quantities) are alignable, far easier to compare, and resistant to overload.
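The scaling claim can be made concrete with a minimal counting model (my own simplification, not from Gourville and Soman): assume a visitor must weigh every pair of plans on every attribute on which the plans differ.

```python
def comparison_load(n_plans, n_differing_attrs):
    """Attribute-level trade-offs a visitor must weigh:
    number of plan pairs times the attributes that differ."""
    pairs = n_plans * (n_plans - 1) // 2
    return pairs * n_differing_attrs

# Alignable: 3 plans differing on one dimension (e.g., seat count).
aligned = comparison_load(3, 1)    # 3 trade-offs

# Non-alignable: 7 plans, each tier mixing 5 different feature limits.
tangled = comparison_load(7, 5)    # 105 trade-offs

print(aligned, tangled)
```

The pairs term grows quadratically with the number of plans, so adding tiers to an already non-alignable matrix multiplies the comparison burden rather than merely adding to it.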
Third, when there is no option to defer. One of the key findings in the choice overload literature is that the effect is strongest when there is no "do nothing" option. In laboratory studies, participants who can defer choosing or decline to participate show the largest overload effects. On a SaaS pricing page, visitors always have the option to defer (by leaving the page), which means that choice overload manifests as abandonment rather than dissatisfaction. This is harder to detect in analytics because the visitor simply leaves — they do not complain.
Application to SaaS: The Evidence From Practice
Direct evidence on choice overload in SaaS pricing is limited but suggestive.
One data point comes from an experiment I was involved in at a European analytics SaaS company in 2022. We tested three variants: 3 plans, 5 plans, and 5 plans with a recommendation quiz that filtered the display down to 2 personalized options (total N = 6,400). The quiz variant — which technically offered the most options but presented them in a structured, guided way — outperformed both the 3-plan and 5-plan variants on conversion by 21% and 34% respectively. This aligns with Mogilner et al.'s finding about categorization: the issue is not the number of options per se but the cognitive structure imposed on them.
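For readers who want to sanity-check results like these, a two-proportion z-test is the standard first pass. The counts below are hypothetical, chosen only to mimic the shape of the experiment (roughly N = 6,400 split across three arms, with a ~21% relative lift); they are not the actual data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: quiz arm vs. 3-plan arm, ~2,133 visitors each,
# 8.0% vs. 6.6% conversion (a ~21% relative lift).
z, p = two_proportion_z(171, 2133, 141, 2133)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that even with over two thousand visitors per arm, a 21% relative lift on a single-digit base rate can land near the significance threshold; pricing experiments of this kind typically need either large samples or disciplined sequential testing before the result should be trusted.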
Caveats and Limitations
The choice overload literature is a case study in the dangers of generalizing from single studies. The jam study, however iconic, was a single field experiment with a small sample conducted in one specific context. The subsequent literature has shown that the effect is real but conditional, and the conditions are not always easy to predict in advance.
For SaaS applications specifically, nearly all of the evidence I have cited comes from consumer-facing contexts with individual decision-makers. B2B purchasing decisions, which involve multiple stakeholders, formal evaluation processes, and longer decision timelines, are a fundamentally different context that may not follow the same patterns. I would be cautious about applying consumer choice overload findings directly to enterprise SaaS pricing without empirical validation.
Additionally, the dependent variable matters. "Choice overload" in the academic literature typically means reduced purchase rates, decreased satisfaction, or increased regret. These are related but distinct outcomes that may not move in unison. It is possible, for example, for more options to increase purchase rates (because more people find a good fit) while simultaneously decreasing satisfaction (because the evaluation process was painful). Product teams should be clear about which outcome they are optimizing for.
Implications for Practice
- Do not reduce options reflexively. Choice overload is conditional, not universal. Before cutting plans, evaluate whether your audience has clear preferences, whether your options are easy to compare, and whether the stakes of the decision are high or low.
- Structure your options rather than eliminating them. Categorization, recommendation quizzes, and progressive disclosure can make large option sets manageable. The problem is often not the number of options but the lack of a decision framework.
- Simplify the comparison, not the choice set. Aligning your plans on a single dimension (e.g., team size or usage volume) makes comparison easy regardless of the number of tiers. Feature matrices with many differences across many plans are the true source of overload.
- Measure abandonment, not just conversion. Choice overload in digital contexts manifests as page exits. If your pricing page has an unusually high bounce rate compared to adjacent pages, choice overload is a plausible explanation worth testing.
- Segment by expertise. Expert users may benefit from more options. New users almost certainly benefit from fewer. Consider showing different pricing page variants to different user segments based on referral source, prior engagement, or explicit self-segmentation.
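As a sketch of the last point, a pricing page could route visitors to a variant based on a few simple signals. Everything here — the signal names, thresholds, and variant labels — is hypothetical, meant only to illustrate the shape of the routing logic, not a recommended rule set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visitor:
    referral: str                          # e.g. "docs", "ad", "organic"
    prior_sessions: int                    # engagement before hitting pricing
    self_identified: Optional[str] = None  # "developer", "manager", ...

def pricing_variant(v: Visitor) -> str:
    """Hypothetical routing: likely experts see the full matrix,
    first-time visitors see three plans, everyone else gets a quiz."""
    expert = (
        v.self_identified == "developer"
        or v.referral == "docs"
        or v.prior_sessions >= 5
    )
    if expert:
        return "full-matrix"   # more options are fine for experts
    if v.prior_sessions == 0:
        return "three-plans"   # minimal set for newcomers
    return "guided-quiz"       # structure instead of reduction

print(pricing_variant(Visitor(referral="ad", prior_sessions=0)))  # three-plans
```

The design choice worth noting is that the expert branch adds options rather than removing them, in line with the moderation evidence above: segmentation lets you apply choice reduction only where preference uncertainty is high.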
- Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995–1006.
- Scheibehenne, B., Greifeneder, R., & Todd, P. M. (2010). Can there ever be too many options? A meta-analytic review of choice overload. Journal of Consumer Research, 37(3), 409–425.
- Chernev, A., Böckenholt, U., & Goodman, J. (2015). Choice overload: A conceptual review and meta-analysis. Journal of Consumer Psychology, 25(2), 333–358.
