The Cognitive Biases AI Marketing Tools Exploit (Including Their Own Users)
The tools designed to optimize your marketing are sold to you using the very biases they claim to neutralize.
There is a particular irony in the AI marketing tool industry that deserves closer examination. These platforms—tools like Persado, Phrasee, Jasper, and dozens of newer entrants—promise to help marketers craft messages that are more persuasive, more data-driven, and less reliant on gut instinct. They claim to reduce the role of cognitive bias in marketing decisions. And yet, the way these tools are marketed to their own customers relies heavily on the same cognitive biases they purport to overcome.
This is not necessarily cynical. It may simply be unavoidable. But it is worth understanding, because marketers who adopt these tools without recognizing the dynamics at play risk replacing one set of biases with another—and calling it progress.
Anchoring: The Big Number on the Landing Page
Nearly every AI marketing platform opens with an anchoring number. "Increase conversions by 41%." "Generate 10x more content." "Save 23 hours per week." These figures function as cognitive anchors—reference points that shape all subsequent evaluation of the product, even when the figures themselves are poorly substantiated.
Tversky and Kahneman's foundational work on anchoring (1974) demonstrated that even arbitrary numbers influence subsequent numerical estimates. In their classic wheel-of-fortune experiment (n=100), participants who saw a random high number subsequently estimated higher values for unrelated questions. The effect has been replicated hundreds of times across domains.
When a tool like Persado claims a "41% average uplift in conversions," the number anchors the prospect's expectations. Even a skeptical buyer who mentally discounts the figure—thinking perhaps the real number is 15-20%—is still anchored higher than they would be without the initial claim. The conditions of the original anchoring studies are faithfully reproduced: the number is presented early, prominently, and before the prospect has formed an independent estimate.
What is rarely disclosed is that these headline numbers typically come from best-case scenarios, cherry-picked client results, or aggregate figures that mask enormous variance. A 41% average uplift almost certainly includes clients who saw 200% gains (often those with very poor baseline copy) alongside clients who saw negligible or negative effects. The median result, which would be far more informative, is conspicuously absent.
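To see how a headline mean can coexist with a modest typical result, consider a short sketch with invented per-client uplift figures (nothing here comes from any vendor's actual data):

```python
# Hypothetical per-client uplift figures, in percent. A few extreme
# winners drag the mean far above what a typical client experiences.
uplifts = [210, 95, 40, 12, 8, 5, 3, 0, -2, -4]

mean = sum(uplifts) / len(uplifts)

ordered = sorted(uplifts)
mid = len(ordered) // 2
median = (ordered[mid - 1] + ordered[mid]) / 2  # even-length list

print(f"mean uplift:   {mean:.1f}%")    # 36.7%
print(f"median uplift: {median:.1f}%")  # 6.5%
```

A vendor quoting the mean here is not lying, but the number a prospect should plan around is the median, roughly one-sixth as large.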
Social Proof: The Logo Wall and the "Trusted By" Counter
Below the anchor number, one typically finds the logo wall: a grid of recognizable brand logos indicating that these companies use the product. This is textbook social proof, the principle identified by Cialdini (1984) in which people look to others' behavior to determine correct action under uncertainty.
The mechanism is well understood. Marketers evaluating AI tools face genuine uncertainty—these are novel technologies with limited track records, and the average marketing director lacks the technical expertise to independently evaluate their AI claims. Under conditions of uncertainty, social proof becomes particularly potent. If Unilever, JPMorgan Chase, and Airbnb use the tool, it must be legitimate.
But the logo wall obscures critical information. It does not tell you which division of JPMorgan uses the tool (a team of three in a subsidiary?), whether they renewed their contract, what results they achieved, or whether the "use" was a brief pilot that was subsequently abandoned. Research by Goldstein, Cialdini, and Griskevicius (2008, n=1,058) on social proof in hotel towel reuse showed that specificity matters enormously—generic social proof ("most guests reuse towels") was significantly less effective than specific, relevant social proof ("most guests who stayed in this room reused towels"). Yet AI tool marketing almost universally relies on the vaguest possible form: a logo, decontextualized.
I noticed this pattern acutely when evaluating tools for a consulting project last year. One platform displayed 47 enterprise logos on its homepage. When I contacted three of those companies through personal connections, one had never used the product (they had participated in a co-marketed webinar), one had trialed it for six weeks and discontinued, and one was a genuine active customer. This is an anecdotal sample of three, and I would not generalize from it. But it suggests the logo wall warrants skepticism.
Authority Bias: The "AI" Label Itself
Perhaps the most pervasive bias at work is authority bias—the tendency to attribute greater accuracy and reliability to the judgments of perceived authorities. In the context of marketing technology, "AI" has become an authority signal. Research by Logg, Minson, and Moore (2019, n=1,500 across six studies) found that people consistently preferred algorithmic advice over identical human advice, a phenomenon they termed "algorithm appreciation."
This has created an environment in which appending "AI-powered" to any feature provides an immediate credibility boost, regardless of the sophistication of the underlying technology. A simple rule-based system that sorts email subject lines by historical open rates becomes "AI-powered subject line optimization." A template library with merge fields becomes "AI-generated personalization." The label does real cognitive work, independent of the technology behind it.
The effect is compounded by the opacity of the technology. Most marketing professionals cannot distinguish between a sophisticated large language model, a basic natural language processing pipeline, and a set of if-then rules dressed up with machine learning terminology. This asymmetry of expertise is precisely the condition under which authority bias operates most powerfully. The customer cannot independently verify the authority's competence, so the authority signal—"AI"—does all the persuasive heavy lifting.
The Bandwagon Effect and Urgency Framing
AI marketing tools also exploit the bandwagon effect, closely related to social proof but distinct in its temporal dimension. The messaging frequently implies that adoption is accelerating and that non-adopters risk falling behind. "87% of marketing leaders are already using AI tools" (a statistic whose denominator and methodology are rarely specified). "Don't get left behind in the AI revolution."
This urgency framing interacts with loss aversion—Kahneman and Tversky's (1979) finding that losses are psychologically weighted roughly twice as heavily as equivalent gains. The implicit message is not "you could gain an advantage by adopting this tool" but rather "you are losing ground by not adopting it." The framing shifts from potential gain to potential loss, which prospect theory predicts will be substantially more motivating.
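The roughly two-to-one weighting can be made concrete with the standard prospect-theory value function. The parameter values below are the median estimates from Tversky and Kahneman's 1992 follow-up work on cumulative prospect theory, used here purely as an illustration:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function. alpha captures diminishing
    sensitivity; lam is the loss-aversion coefficient (Tversky &
    Kahneman, 1992, median parameter estimates)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain = value(100)    # subjective value of gaining 100
loss = value(-100)   # subjective value of losing 100
print(abs(loss) / gain)  # 2.25: the loss looms more than twice as large
```

Framing the same tool as "stop losing ground" rather than "gain an edge" moves the prospect from the gain branch to the loss branch of this function, which is exactly why the framing is so common.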
Several SaaS platforms have made this explicit. Copy.ai's growth marketing in 2024 featured the tagline "Your competitors are already using AI to write better copy." The statement may or may not be true for any given prospect, but its persuasive force does not depend on its accuracy. It activates competitive anxiety and loss aversion simultaneously.
The Availability Heuristic and Case Studies
The case studies that AI marketing tools publish are carefully selected for memorability and vividness—conditions that activate the availability heuristic, whereby people estimate the probability of an event based on how easily examples come to mind (Tversky & Kahneman, 1973). A case study describing how a mid-size e-commerce brand "increased email revenue by 317% in 90 days" is memorable precisely because the result is extreme.
This creates a systematic distortion in prospects' expectations. The available examples—the ones published, promoted, and presented at conferences—are the outlier successes. The modal outcome (modest improvement, no change, or negative results with implementation costs) is invisible, not because it is hidden maliciously but because no one writes case studies about average results.
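The selection effect is easy to simulate. The sketch below draws hypothetical client outcomes from an invented distribution, "publishes" only the top few as case studies, and compares what prospects see with the typical result:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population of client outcomes (% uplift): modest center,
# wide spread. The distribution is invented for illustration only.
outcomes = [random.gauss(5, 20) for _ in range(500)]

# Only the best results become case studies and conference talks.
published = sorted(outcomes, reverse=True)[:5]

typical = sorted(outcomes)[len(outcomes) // 2]  # median outcome
print(f"median outcome:        {typical:.0f}%")
print(f"mean published result: {sum(published) / len(published):.0f}%")
```

Every published figure is real, yet the set of available examples bears almost no resemblance to the distribution a new customer is actually sampling from.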
It is worth noting a caveat here. This dynamic is not unique to AI marketing tools. It characterizes virtually all B2B SaaS marketing. But the combination of the AI authority signal with the availability heuristic is particularly potent because prospects lack the technical expertise to calibrate their expectations independently.
The Meta-Irony: Bias-Aware Tools Sold Through Bias
The deepest irony is structural. Several of these platforms explicitly market themselves as tools that help marketers overcome cognitive biases. "Stop guessing, start knowing." "Remove human bias from your marketing decisions." "Data-driven creative, not gut-driven creative." The implicit promise is that the tool will make the buyer more rational.
And yet the purchase decision itself is driven by anchoring, social proof, authority bias, the bandwagon effect, loss aversion, and the availability heuristic. The buyer is supposed to become more rational after adoption, but the path to adoption is paved with irrationality.
This is not necessarily hypocritical. It may simply reflect a pragmatic recognition that you have to sell to people as they are, not as they aspire to be. But it does suggest that the transition from "biased marketing decisions" to "AI-optimized marketing decisions" is less of a clean break than the marketing copy implies. The biases do not disappear; they migrate from the content creation process to the tool selection process, and potentially to the interpretation of the tool's outputs.
There is also the question of whether the tools actually reduce bias in practice, or whether they simply replace human biases with algorithmic ones—training data biases, objective function biases, the bias toward optimizing measurable short-term metrics at the expense of unmeasurable long-term brand effects. That question deserves its own essay, but it is worth flagging here as the unexamined assumption underlying the entire value proposition.
Implications for Practice
- Demand base rates, not highlights. When evaluating an AI marketing tool, ask for the median outcome across all customers, not the mean and not the best case study. If the vendor cannot or will not provide this, treat their performance claims as anchors, not evidence.
- Verify social proof independently. If a logo wall or "trusted by" counter influences your evaluation, contact at least two of the named companies to confirm they are active, satisfied customers. The cost of a few LinkedIn messages is trivial compared to the cost of a poor tool selection.
- Distinguish "AI-powered" from "AI-dependent." Ask the vendor what the tool does that could not be accomplished by a competent analyst with a spreadsheet. If the answer is unclear, the "AI" label may be doing more persuasive work than technical work.
- Apply the same rigor to tool selection that the tool promises to apply to your marketing. If a platform claims to bring scientific rigor to your creative process, evaluate the platform's own claims by the same standards. Do their case studies include control groups? Are their uplift figures statistically significant? What is the confidence interval?
- Budget for the possibility that results will be average. Base your ROI projections on conservative estimates, not on the anchored figures from the vendor's landing page. If the tool is still worthwhile under pessimistic assumptions, it is likely a sound investment.
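The "same rigor" and "conservative estimates" points above can be made concrete. Given the raw counts behind a vendor case study (the figures below are invented), a normal-approximation confidence interval for the conversion difference takes only a few lines:

```python
import math

# Hypothetical raw figures a vendor case study might rest on.
n_control, conv_control = 4800, 192   # 4.0% baseline conversion
n_treat,   conv_treat   = 5100, 229   # ~4.5% with the tool

p_c = conv_control / n_control
p_t = conv_treat / n_treat
uplift = (p_t - p_c) / p_c  # relative uplift the headline would quote

# 95% CI for the absolute difference in proportions (normal approximation).
se = math.sqrt(p_c * (1 - p_c) / n_control + p_t * (1 - p_t) / n_treat)
lo = (p_t - p_c) - 1.96 * se
hi = (p_t - p_c) + 1.96 * se

print(f"relative uplift: {uplift:.1%}")
print(f"95% CI for absolute difference: [{lo:.2%}, {hi:.2%}]")
```

With these made-up counts, a "12% uplift" headline would be accurate as a point estimate, yet the interval straddles zero: the data cannot distinguish the uplift from no effect at all. That is precisely the check the vendor's own marketing invites you to skip.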
References
- Cialdini, R. B. (1984). Influence: The Psychology of Persuasion. New York: William Morrow.
- Goldstein, N. J., Cialdini, R. B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35(3), 472-482.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
- Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
