ONLY FOR SALES GEEKS

Moving the Goalposts

Recognize when buyer expectations shift mid-cycle, and manage those shifts to maintain momentum and close the sale effectively

Introduction

Moving the Goalposts is the tactic of changing the standard of proof after evidence has been presented, so that success keeps receding. It looks like due diligence, but it is actually a refusal to let evidence count. The fallacy misleads reasoners because it blends legitimate refinement with unfair escalation and often happens gradually across meetings and documents.

This guide defines the pattern, shows why it persuades despite being invalid, and gives practical tools to recognize, avoid, and counter it in media, analytics, and especially sales.

Sales connection: In sales cycles, the target can shift from “show a 10 percent lift in a pilot” to “show 20 percent across all sites,” then to “must be zero-touch and cheaper than current,” all before a decision. Such drift burns trust, reduces close rates, and seeds churn when expectations keep changing post-sale.

Formal Definition & Taxonomy

Crisp definition

Moving the Goalposts occurs when someone raises or alters the evidentiary standard after the initial standard has been met or while evidence is being evaluated, making the claim unfalsifiable or perpetually out of reach. It is an informal fallacy of presumption because it presumes that previously agreed criteria no longer apply without adequate reason (Copi, Cohen, & McMahon, 2016; Walton, 2015).

Taxonomy

Category: Informal
Type: Presumption and relevance
Family: Burden shifting and immunizing strategies that prevent disconfirmation

Commonly confused fallacies

No true Scotsman: Redefines category membership to dismiss counterexamples. Goalpost shifting is about changing standards of evidence or success, not redefining who counts.
False dilemma: Narrows options to two. Goalpost shifting can accompany it, but the core error is raising or altering criteria midstream.

Sales lens - where it shows up

Inbound qualification: From “simple discovery call” to “submit full RFP by Friday” after you accept the call.
Discovery: From “prove value on one workflow” to “prove on every workflow” after success on the first.
Demo: From “show feature parity” to “show parity plus three bespoke integrations.”
Proposal: From “pilot success equals purchase” to “pilot plus external audit plus discount parity.”
Negotiation or renewal: From “renew if NPS improves” to “renew only if NPS improves and cost decreases 20 percent.”

Mechanism: Why It Persuades Despite Being Invalid

The reasoning error

When criteria change after evidence arrives, the original hypothesis cannot be falsified or confirmed on fair terms. That makes the argument invalid as a method of testing claims. Even if stronger criteria might be sensible in a new context, applying them retroactively without acknowledging the change leads to unsound inferences about what existing evidence shows (Walton, 2015; Copi et al., 2016).

Cognitive principles that amplify it

Confirmation bias: People unconsciously raise the bar for disconfirming evidence and lower it for confirming evidence (Mercier & Sperber, 2017).
Loss aversion and reactance: Decision makers feel risk in committing, so they unconsciously invent extra hurdles to delay an uncomfortable choice (Kahneman, 2011).
Fluency and authority cues: Polished, reasonable sounding “next checks” feel like due diligence even when they silently rewrite the rules.

Sales mapping

Confirmation bias nudges evaluators to request “just one more proof” when results contradict prior preference.
Loss aversion in late stage committees encourages escalation to avoid blame.
Fluency makes added hurdles sound prudent rather than fallacious.

Surface cues in language, structure, or visuals

“Before we decide, we also need...” with no acknowledgment that the new demand differs from the agreed test.
Slide titles that subtly change KPI definitions across decks.
Milestones renamed post hoc as “preliminary” after being met.

Typical triggers

New stakeholders joining late and asking for fresh proofs without revisiting the decision framework.
External events that legitimately raise standards but are applied retroactively to old data.
Internal politics or budget freezes that make delay attractive.

Sales-specific cues

“Pilot was great, now prove the same across every region before approval.”
“Yes you hit 10 percent, but to be safe we need 20 percent and zero variance.”
“Security review passed, but please add a third-party red team and an on-prem option at parity pricing.”

Examples Across Contexts

Each example includes: claim, why it is fallacious, and a stronger alternative.

Public discourse or speech

Claim: “You proved the program reduced wait times by 12 percent in three cities, but until it works everywhere it proves nothing.”
Why fallacious: The standard moved from “works in pilots” to “works everywhere” after evidence arrived.
Stronger: “Pilot results show a 12 percent reduction with confidence intervals. The next decision is whether scaling conditions hold and what risks appear in non-pilot contexts.”

Marketing or product/UX

Claim: “Usability improved on the agreed tasks, but we will not accept improvement unless it covers every edge case.”
Why fallacious: Retroactive expansion from agreed core tasks to all edge cases.
Stronger: “Core tasks improved. For edge cases, here is a risk matrix and a follow-up experiment plan with thresholds.”

Workplace or analytics

Claim: “The forecasting model met accuracy targets for Q1 and Q2, but we now require perfect accuracy before rollout.”
Why fallacious: A feasible target was met, then replaced by an impossible one.
Stronger: “The model meets the 10 percent MAPE target. If risk tolerance changed, document the new target, trade-offs, and decision date.”
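To make the analytics case concrete, here is a minimal sketch of judging a forecast against a pre-registered MAPE target. The 10 percent threshold and the sample figures are illustrative assumptions, not data from any real deployment.

```python
# Minimal sketch: evaluate a forecast against a frozen, pre-registered
# MAPE threshold. All numbers below are illustrative assumptions.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

TARGET_MAPE = 10.0  # threshold agreed before the test; do not move it afterward

actual = [100, 120, 95, 110]
forecast = [104, 115, 99, 108]

score = mape(actual, forecast)
decision = "rollout" if score <= TARGET_MAPE else "remediate"
print(f"MAPE = {score:.2f}% -> {decision}")
```

Because the threshold is written down before the data arrive, a later request for “perfect accuracy” is visibly a new standard, not part of the original test.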

Sales - discovery, demo, proposal, objection

Claim: “We said 30 day pilot with 10 percent lift would trigger purchase. After you achieved it, we need 20 percent across all units before we consider it.”
Why fallacious: The evaluative threshold and scope changed post hoc.
Stronger: “Pilot reached the threshold. If we expand the requirement, let us amend the success criteria, timeline, and commercial terms accordingly.”

How to Counter the Fallacy (Respectfully)

Step by step rebuttal playbook

1. Surface the structure
2. Clarify burden of proof
3. Request the rationale and the scope
4. Offer a charitable reconstruction
5. Present a valid alternative

Reusable counter moves and phrases

“Let us freeze the current decision on the original criteria, then consider new criteria for the next phase.”
“If standards increase, let us adjust scope, time, and budget accordingly.”
“I want to separate due diligence from retroactive rule changes.”
“What evidence would make us decide now under the agreed test?”

Sales scripts that de-escalate

Discovery: “We will document the pilot KPIs and what triggers a purchase decision, plus what could legitimately pause that decision.”
Demo: “If additional integrations become required, let us add a phase 2 gating criterion and revised timeline.”
Proposal: “Since the pilot met the 10 percent threshold, we can sign for rollout while we plan a phase 2 test for multi-region parity.”
Negotiation: “If procurement requires two more controls after security sign-off, we will incorporate them as a condition for expansion, not for the already passed phase.”
Renewal: “Renewal hinges on the KPIs we set last year. If the bar needs to rise, we will set that for the next term and align price to the higher standard.”

Avoid Committing It Yourself

Drafting checklist

Claim scope: Write explicit, measurable success criteria before testing.
Evidence type: Choose methods that match those criteria, and pre-register data windows and formulas.
Warrant: State why the threshold is sufficient for the decision at hand.
Counter case: Document what would count as failure or partial success.
Uncertainty language: Use ranges and power analysis so risk is transparent and does not morph into fresh hurdles.

Sales guardrails

Use decision charters that include criteria, owners, stop rules, and decision dates.
Adopt pilot protocols with frozen KPIs, variance handling, and remediation steps.
Tie commercial terms to the agreed test, with milestone pricing for additional proofs.
Maintain a change log for any new standards, with effective dates.
Invite legal, security, and finance to co-own criteria before the test.
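The guardrails above can be made concrete in lightweight tooling. Below is a minimal Python sketch of a decision charter with frozen criteria and a prospective change log; the class and field names (`DecisionCharter`, `raise_bar`, and so on) are hypothetical illustrations, not an existing library.

```python
# Hypothetical sketch of a decision charter: criteria are frozen before the
# test, and any raised bar is logged prospectively for future phases only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CriterionChange:
    criterion: str
    new_threshold: float
    rationale: str
    effective: date  # applies only to phases starting on or after this date

@dataclass
class DecisionCharter:
    criteria: dict          # frozen KPI -> threshold, set before the test
    decision_date: date
    change_log: list = field(default_factory=list)

    def evaluate(self, results: dict) -> bool:
        """Judge results against the original frozen criteria only."""
        return all(results.get(k, 0) >= v for k, v in self.criteria.items())

    def raise_bar(self, change: CriterionChange):
        """Record a higher standard for future phases; never rewrite the past."""
        self.change_log.append(change)

charter = DecisionCharter(criteria={"lift_pct": 10.0}, decision_date=date(2025, 6, 30))
passed = charter.evaluate({"lift_pct": 12.0})  # pilot verdict stays intact
charter.raise_bar(CriterionChange("lift_pct", 15.0, "multi-region rollout", date(2025, 7, 1)))
```

The design choice is the point: `evaluate` can only see the original criteria, so a higher bar must go through `raise_bar` with a rationale and an effective date, which is exactly the non-fallacious, prospective move.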

Rewrite - weak to strong

Weak (moving goalposts): “Great pilot, but now we need global proof and 20 percent lift before we sign.”
Strong (valid and sound): “The pilot met the 10 percent threshold we set. Let us proceed with phase 1 rollout and schedule a phase 2 test for multi-region parity. Expansion to all sites will be contingent on achieving a 15 percent median lift with confidence interval X.”

Table: Quick Reference

| Pattern or template | Typical language cues | Root bias or mechanism | Counter move | Better alternative |
| --- | --- | --- | --- | --- |
| Post hoc escalation | “Before we decide, also show...” after hitting target | Confirmation bias | Freeze prior criteria, log changes | Amend charter prospectively with new thresholds |
| Scope creep as proof | “Not just pilot, prove across all regions first” | Loss aversion | Split phases, stage approvals | Phase 1 rollout plus phase 2 validation plan |
| KPI redefinition | “Success means 10 percent” becomes “must be 20 percent and zero variance” | Reactance, risk aversion | Ask for rationale and effective date | Keep original decision, set new standard for next term |
| Sales competitive trap | “Add these bespoke features to count as parity” | Availability of rival claims | Separate parity from differentiation | Parity test on agreed list, custom work as paid SOW |
| Security ratchet | “Passed review, now add 3 more audits” | Authority and fluency | Tie new controls to policy change | Adopt controls prospectively, adjust timeline and price |


Measurement & Review

Lightweight ways to audit comms for goalpost shifting

Peer prompts: “Do we have a charter that states success criteria and decision date?” “Is any new request a change to criteria or part of the original test?”
Logic linting checklist: Flag words like “also,” “just to be safe,” “before we decide” when they arrive after criteria are met.
Comprehension checks: Ask a neutral colleague to restate the current decision rule. If they cannot, your standard is drifting.
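The linting checklist above can even be roughly automated. This toy sketch flags escalation phrases that arrive after the agreed criteria are met; the phrase list mirrors the checklist and is an assumption, not an exhaustive detector.

```python
# Toy "logic lint" sketch: flag escalation cues that appear in messages sent
# after the agreed test was already passed. The cue list is illustrative.

ESCALATION_CUES = ["also need", "just to be safe", "before we decide", "one more"]

def lint_for_goalpost_shift(message: str, criteria_met: bool) -> list:
    """Return any escalation cues found once criteria are already met."""
    if not criteria_met:
        return []  # before the test, extra requests may be legitimate scoping
    lowered = message.lower()
    return [cue for cue in ESCALATION_CUES if cue in lowered]

flags = lint_for_goalpost_shift(
    "Great pilot! Just to be safe, before we decide we also need a global run.",
    criteria_met=True,
)
```

A non-empty result is not proof of bad faith; it is a prompt to ask whether the request is part of the original test or a new, prospective standard.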

Sales metrics tie in

Win rate vs deal health: Late stage losses with “no decision” and repeated asks for proofs are hallmarks of moving goalposts.
Objection trends: Track “need more proof” after agreed thresholds are met.
Pilot to contract conversion: Improves when criteria are pre-registered and tied to milestone pricing.
Churn risk: Drops when renewals reference frozen KPIs from the prior term instead of retroactive expectations.

Guardrails for analytics and causal claims

Use experimental or quasi-experimental designs with pre-specified windows and thresholds.
Publish power calculations so a single noisy run does not trigger endless new demands.
Distinguish invalidity of retroactive escalation from unsoundness when the new premise about risk is itself weak.
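To make the power-calculation guardrail concrete, here is a back-of-envelope sketch that sizes a pilot using the standard normal approximation for a two-sample test of means. The effect size, standard deviation, alpha, and power below are illustrative assumptions; publish whichever values you actually pre-register.

```python
# Back-of-envelope pilot sizing (normal approximation, two-sample test of
# means). All parameter values are illustrative assumptions.
import math
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm to detect a mean difference `delta` at the given
    two-sided alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# e.g. to detect a 10-point lift when the metric's standard deviation is 25
required = n_per_group(delta=10.0, sd=25.0)
```

Publishing a number like this up front means a single noisy run cannot quietly justify “one more test”: either the pilot was sized to answer the question, or the charter says what happens when it was not.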

Adjacent & Nested Patterns

Burden of proof shift: After evidence is presented, the burden is pushed back with new requirements.
Texas sharpshooter: Cherry picking can hide behind shifting standards, selecting only the windows that meet the new bar.
Sales boundary conditions: Sometimes standards must rise due to regulation or scope change. The non-fallacious move is to apply new criteria prospectively, document the rationale, adjust time and price, and keep the prior test’s verdict intact.

Conclusion

Moving the Goalposts is persuasive because it looks like prudence. In reality, it undermines fair evaluation by changing the rules midstream. Strong communicators and sellers freeze standards up front, log justified changes prospectively, and tie commercials to the agreed test.

Sales closer: When you replace drifting demands with decision charters, pilot protocols, and milestone pricing, you strengthen buyer trust, sharpen forecast accuracy, and grow accounts on evidence rather than delay.

End matter

Checklist - Do and Avoid

Do

Write and share success criteria, owners, and decision date before testing.
Pre-register KPIs, formulas, data windows, and variance handling.
Keep a change log with rationale and effective dates.
Tie commercial terms to the agreed test with milestone pricing for extra proofs.
Split rollout into phases, each with its own frozen criteria.
Invite legal, finance, and security to co-own standards early.
Use power analysis to size pilots and avoid endless “one more test.”
Celebrate and document when criteria are met, even if new standards arise for future phases.

Avoid

Accepting new demands as if they were always part of the plan.
Letting KPI definitions drift across decks.
Retroactively applying new risk rules to old data without acknowledgment.
Treating “just to be safe” as evidence.
Allowing scope proofs to masquerade as parity or due diligence.
Moving from achievable thresholds to impossible ones after success.

Mini quiz

Which statement contains Moving the Goalposts?

1. “Your pilot hit the 10 percent lift we set, but before approval you must hit 20 percent company-wide.” ✅
2. “The pilot met the target. For company-wide rollout we will set a new 15 percent bar prospectively, with a 90-day validation plan.”
3. “We agreed parity features A, B, C. Custom D and E will be a phase 2 SOW.”

References

Copi, I. M., Cohen, C., & McMahon, K. (2016). Introduction to Logic (14th ed.). Pearson.
Walton, D. (2015). Informal Logic: A Pragmatic Approach (2nd ed.). Cambridge University Press.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Mercier, H., & Sperber, D. (2017). The Enigma of Reason. Harvard University Press.

This explainer distinguishes logical invalidity - retroactive standard changes that block fair inference - from unsoundness, where the new risk premise is weak even if applied prospectively.

Last updated: 2025-11-13