Can AI Optimize Pricing? A Behavioral Science Perspective
AI pricing tools are solving the wrong optimization problem. They optimize for what people will pay, not for what people feel about paying.
The market for AI-powered pricing optimization tools has grown rapidly. Platforms like Intelligems, Pricefx, Zilliant, and PROS Holdings promise to use machine learning to identify optimal price points, automate dynamic pricing, and maximize revenue. Intelligems, which focuses on Shopify merchants, claims to help brands "test and optimize pricing with confidence." Pricefx, targeting enterprise clients, offers "AI-powered price optimization and management." The value proposition across the category is consistent: prices should be set by algorithms, not by intuition.
From a neoclassical economics perspective, this makes perfect sense. If consumers have well-defined price sensitivities and the goal is to find the price that maximizes revenue (or profit, or volume), then a machine learning model trained on historical transaction data should outperform a human pricing analyst. And in certain narrow contexts—airline seats, hotel rooms, commodity products with high transaction volumes—it does.
But consumer pricing decisions are not made by the rational agents of economic theory. They are made by humans with reference prices, fairness intuitions, loss aversion, and long memories for feeling ripped off. AI pricing tools, as currently designed, largely ignore this psychological dimension. They optimize the economics of the transaction while overlooking the psychology of the relationship.
Reference Price Theory: The Number in Your Head
Perhaps the most important concept that AI pricing tools underweight is reference price theory. Consumers do not evaluate prices in absolute terms; they evaluate them relative to an internal reference point—what they expect to pay, what they have paid before, what they believe is "fair."
Kalyanaram and Winer (1995) synthesized two decades of reference price research and identified several robust findings.1 First, reference prices are heavily influenced by the most recently paid price. Second, deviations from the reference price produce asymmetric effects: a price increase (a loss relative to the reference) reduces utility by more than an equivalent price decrease (a gain relative to the reference) increases it. This asymmetry is consistent with Kahneman and Tversky's prospect theory (1979). Third, reference prices update sluggishly: they move slowly in response to new information, creating a lag between a price change and consumer acceptance of the new price.
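The asymmetry can be written down directly. Below is a minimal piecewise-linear sketch of the prospect-theory value of a posted price relative to the consumer's reference; the 2.0 loss-aversion coefficient is illustrative ("roughly twice"), not an estimate from data:

```python
def price_response_utility(price, reference_price, loss_aversion=2.0):
    """Piecewise-linear prospect-theory value of a price vs. a reference.

    A price below the reference is a gain; a price above it is a loss,
    weighted `loss_aversion` times more heavily. The 2.0 coefficient is
    an illustrative assumption, not a fitted parameter.
    """
    deviation = reference_price - price  # positive = cheaper than expected
    if deviation >= 0:
        return deviation                 # gain: weighted 1:1
    return loss_aversion * deviation     # loss: weighted ~2:1

# Asymmetry: a $10 increase hurts twice as much as a $10 cut helps.
print(price_response_utility(29, 39))  # 10   (gain of $10)
print(price_response_utility(49, 39))  # -20.0 (loss of $10, felt as $20)
```

The sluggish-updating finding would correspond to `reference_price` itself drifting slowly toward recently paid prices, rather than jumping to each new posted price.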
AI pricing tools typically model price sensitivity as a relatively stable attribute that can be estimated from transaction data. What they often miss is that price sensitivity is dynamically shaped by the reference price, which is itself shaped by the consumer's price history with the brand. A price that is optimal in a static, cross-sectional analysis may be suboptimal or even destructive in a dynamic, longitudinal context because it violates reference price expectations.
Consider a practical example. An AI pricing tool analyzes a Shopify store's transaction data and identifies that the price elasticity for a particular product suggests an optimal price of $49—$10 higher than the current price of $39. The model predicts that the price increase will reduce unit sales by 8% but increase revenue by 15%. On the basis of these projections, the price is raised.
What the model may not account for is that repeat customers—who constitute 40% of purchases for this product—have a reference price of $39. For these customers, the $49 price is not simply a point on a demand curve; it is a $10 loss relative to their expectation. Prospect theory predicts that this loss will be psychologically weighted at roughly twice its magnitude, producing a subjective "pain" equivalent to a $20 loss. Some of these customers will not merely decline to purchase; they will feel betrayed, reducing their lifetime value and their propensity to recommend the brand.
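The arithmetic in this example can be made explicit. A minimal sketch using the figures from the text (the 2:1 loss weighting is the illustrative prospect-theory coefficient, not an estimate):

```python
old_price, new_price = 39.0, 49.0
unit_change = -0.08  # model's projected drop in unit sales

# Static projection: the view the pricing model optimizes.
revenue_change = (new_price / old_price) * (1 + unit_change) - 1
print(f"revenue change: {revenue_change:+.1%}")  # revenue change: +15.6%

# What the static view omits: repeat buyers anchored at $39 experience
# the increase as a loss, weighted roughly 2:1 per prospect theory.
loss_aversion = 2.0  # illustrative coefficient
felt_loss = loss_aversion * (new_price - old_price)
print(f"subjective loss for repeat buyers: ${felt_loss:.0f}")  # $20
```

The +15.6% confirms the model's ~15% revenue projection; the point is that nothing in that projection prices in the $20-equivalent sting felt by the 40% of buyers with a $39 reference.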
The Fairness Constraint That Algorithms Ignore
Thaler (1985) argued that perceived fairness, as a component of transaction utility, acts as a binding constraint on pricing decisions, a constraint that is not captured in transaction-level demand data.2 The best-known demonstration comes from Kahneman, Knetsch, and Thaler (1986): 82% of respondents (n=107) judged a hardware store that raised the price of snow shovels after a blizzard as acting unfairly, even though the price increase was consistent with supply-and-demand economics.
The fairness constraint operates as a norm, not a preference. Consumers do not simply prefer lower prices; they believe that prices should not be raised in certain ways or certain contexts. Raising prices in response to increased demand (as opposed to increased costs) is perceived as exploitative. Charging different customers different prices for the same product is perceived as discriminatory, unless there is a transparent and accepted basis for the difference (student discounts, bulk pricing).
AI dynamic pricing tools are particularly prone to fairness violations because they are designed to respond to demand signals—precisely the trigger that consumers perceive as unfair. When Uber's surge pricing algorithm multiplied fares during a hostage crisis in Sydney in 2014, the public backlash was severe. The algorithm was operating exactly as designed: demand was high, supply was constrained, so prices rose. But consumers judged the outcome as profoundly unfair, and Uber's brand suffered meaningful damage.
Intelligems, to its credit, focuses on testing discrete price points rather than continuous dynamic pricing, which reduces some fairness risks. But even discrete price testing can run afoul of fairness norms if customers discover that they paid different prices for the same product during the same period—a near-inevitable consequence of price testing in an age when consumers share information on social media.
The Pain of Paying and Mental Accounting
Prelec and Loewenstein (1998) developed a theory of the "pain of paying"—the negative hedonic experience associated with spending money—that has direct implications for pricing optimization.3 Their work suggests that the timing, method, and framing of payment affect the subjective cost of a purchase, independent of the objective price.
For example, consumers experience less pain when paying with credit cards than with cash (because the pain is deferred and decoupled from consumption). They experience less pain when paying a subscription than a per-use fee (because the subscription is a flat "loss" that is amortized over many consumption episodes). They experience less pain when the price is bundled with other items than when it is itemized.
AI pricing tools optimize the number itself but rarely optimize the payment experience. A Pricefx implementation might identify that $149/month is the revenue-maximizing subscription price, but it will not tell you that framing the price as roughly "$4.90 per day" or "less than your daily coffee" reduces the pain of paying while leaving the objective price unchanged. Nor will it recommend annual billing at a modest discount as a strategy to collect revenue upfront while reducing monthly payment pain.
These framing effects are not marginal. In a study of gym membership pricing, consumers were significantly more likely to purchase an annual membership when the per-day price was displayed alongside the total ($1.63/day vs. $595/year), even though the total was identical. No AI pricing tool I have evaluated incorporates this dimension into its optimization.
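The reframing is pure arithmetic over a fixed total, which is exactly why it carries no reference-price risk. A minimal sketch, assuming a 365-day year:

```python
def per_day_framing(total_price, days=365):
    """Reframe a price as its per-day equivalent; the objective total is unchanged."""
    return total_price / days

# The gym example: $595/year reads as $1.63/day.
print(f"${per_day_framing(595):.2f} per day")       # $1.63 per day

# A $149/month subscription reads as about $4.90/day.
print(f"${per_day_framing(149 * 12):.2f} per day")  # $4.90 per day
```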
Dynamic Pricing and the Erosion of Trust
Perhaps the most concerning trend in AI pricing is the move toward real-time dynamic pricing in categories where it has not traditionally been applied: e-commerce, SaaS, and even physical retail. The technical capability exists. The question is whether it should be deployed.
Research on procedural justice suggests that the answer depends heavily on transparency and perceived legitimacy. Bolton, Warlop, and Alba (2003, n=480) found that consumers tolerated price variation when it was linked to cost variation (the cost-based justification) but reacted negatively when it was linked to demand variation (the demand-based justification), even when the resulting prices were identical.4
This creates a strategic dilemma for AI dynamic pricing. The entire value proposition is demand-based price adjustment: charging more when demand is high and less when demand is low. But demand-based pricing is precisely the type that consumers perceive as unfair. The tool adjusts price along exactly the dimension, demand, that triggers fairness judgments, while measuring nothing about fairness itself.
I should note a caveat here. There are contexts in which dynamic pricing is accepted and even expected: airlines, hotels, event tickets. These categories have established norms of price variation, and consumers have adapted their mental models accordingly. But extending this model to categories where stable pricing is the norm—consumer electronics, apparel, software subscriptions—carries significant psychological risk that the current generation of AI tools is not designed to assess.
What AI Pricing Tools Should Incorporate
The critique above is not an argument against AI-assisted pricing. It is an argument for expanding the optimization problem. Current tools optimize for revenue or profit in a single transaction or a short time window. What they should optimize for is long-term customer lifetime value, which requires incorporating reference price effects, fairness perceptions, pain of paying, and trust dynamics.
This is technically feasible but requires different data inputs and different loss functions. Instead of training solely on transaction data (prices and quantities), models should incorporate customer tenure, repeat purchase rates, NPS scores, and sentiment data. The objective function should penalize prices that maximize short-term revenue at the cost of reference price violations that reduce long-term loyalty.
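One hypothetical shape for such an objective is sketched below. The function, the linear toy demand model, and every weight (loss aversion, penalty weight, repeat share) are illustrative assumptions, not fitted values; the point is only that penalizing reference-price violations can flip the recommendation that raw revenue maximization would make:

```python
def pricing_objective(price, demand_at, reference_price, repeat_share,
                      loss_aversion=2.0, penalty_weight=1.0):
    """Score a candidate price as revenue minus a penalty for reference-price
    violations felt by repeat customers. All weights are illustrative
    assumptions; `demand_at` is any model mapping price -> expected units."""
    units = demand_at(price)
    revenue = price * units
    overshoot = max(0.0, price - reference_price)  # loss relative to reference
    felt_loss = loss_aversion * overshoot          # prospect-theory weighting
    penalty = penalty_weight * repeat_share * felt_loss * units
    return revenue - penalty

# Toy linear demand: 1000 units at $39, losing 8 units per extra dollar.
demand = lambda p: 1000 - 8 * (p - 39)

score_39 = pricing_objective(39, demand, reference_price=39, repeat_share=0.4)
score_49 = pricing_objective(49, demand, reference_price=39, repeat_share=0.4)
print(score_39, score_49)  # 39000.0 37720.0
```

On raw revenue, $49 wins ($45,080 vs. $39,000); once the reference-price penalty is applied, $39 wins. A production version would estimate the penalty from observed churn and repeat-rate responses rather than assuming it.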
Implications for Practice
- Segment price sensitivity by customer history. Before applying AI pricing recommendations, segment customers by tenure and purchase history. New customers have weak reference prices and are more tolerant of price variation. Long-term customers have strong reference prices and will react negatively to increases that the algorithm treats as equivalent.
- Never raise prices without a narrative. If AI-driven analysis indicates that a price increase is warranted, pair it with a cost-based justification ("our raw material costs have increased") or a value-based justification ("we've added these new features"). Demand-based justifications ("lots of people want this right now") trigger fairness violations.
- Optimize the frame, not just the number. Before testing a higher price, test alternative framings of the current price: per-day equivalents, annual savings, comparative value anchors. Framing effects are often larger than price effects and carry no risk of reference price violation.
- Monitor fairness signals alongside revenue. Track social media mentions, review sentiment, and support ticket themes when implementing AI-recommended price changes. Revenue may increase in the short term while trust erodes beneath it. The erosion is not visible in transaction data until it is too late.
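The first recommendation above can be sketched as a guardrail layer between the algorithm's output and the price a given cohort actually sees: new customers get the full recommendation, while increases for long-tenure customers are staged. The function name and all thresholds here are hypothetical, not from any vendor's API:

```python
def capped_price(current_price, recommended_price, tenure_days,
                 loyal_after_days=180, max_increase_for_loyal=0.05):
    """Guardrail: apply an AI-recommended increase in full only to newer
    customers with weak reference prices; cap it for long-tenure customers.
    All thresholds are illustrative assumptions."""
    if tenure_days < loyal_after_days:
        return recommended_price
    cap = current_price * (1 + max_increase_for_loyal)
    return min(recommended_price, round(cap, 2))

print(capped_price(39, 49, tenure_days=30))   # 49    (new customer)
print(capped_price(39, 49, tenure_days=400))  # 40.95 (loyal customer, staged)
```

Staging increases for loyal cohorts (a form of grandfathering) also sidesteps the fairness objection to differential pricing, since tenure-based treatment is a transparent and widely accepted basis for the difference.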
References
1. Kalyanaram, G., & Winer, R. S. (1995). Empirical generalizations from reference price research. Marketing Science, 14(3), G161-G169.
2. Thaler, R. (1985). Mental accounting and consumer choice. Marketing Science, 4(3), 199-214.
3. Prelec, D., & Loewenstein, G. (1998). The red and the black: Mental accounting of savings and debt. Marketing Science, 17(1), 4-28.
4. Bolton, L. E., Warlop, L., & Alba, J. W. (2003). Consumer perceptions of price (un)fairness. Journal of Consumer Research, 29(4), 474-491.
