Dark Patterns: A Critical Look at Deceptive Design Tactics
Let me begin by asking you this: Is a quick business win ever worth losing users’ trust?
Before we even think about using dark patterns, this is the question we should sit with.
Most of us have come across them.
Some of us have been tricked by them, maybe without even knowing.
Often called deceptive patterns, dark patterns are design tricks that coerce or mislead users into actions they might not otherwise take.
They often boost short-term conversions for businesses, but at a steep cost to user trust and experience.
This is Part 1 of a two-part series exploring the world of dark patterns in UX.
Today, I’ll cover the definition, break down the most common types of dark patterns in digital products, and explore the psychology behind how they work.
Part 2 (released next week) will dive into the ethics (or lack thereof) of deceptive design, global regulations pushing back against these patterns, and examples of ethical UX done right.
📌What’s Inside (Part 1)
What are dark patterns, and where did they come from?
Common types of dark patterns in UX (with real-world examples)
Psychology behind how they work
💬 So, once again:
Is any short-term business gain ever worth the long-term damage to user trust?
If you’re anything like me, the answer’s already clear — and it’s no.
As UX professionals, we’re constantly balancing business objectives with user needs.
We’re expected to drive engagement, support growth, and yes, sometimes boost monetisation.
But we also know that the most powerful driver of retention, loyalty, and advocacy is something far harder to quantify: trust.
And that’s where it gets messy.
We’ve all seen it…
Interfaces designed not to help the user, but to nudge, guilt, confuse or trap them.
Whether it’s sneaky subscriptions, consent buried under five layers of settings, or decline buttons that shame you for opting out, dark patterns are (sadly) becoming the norm.
And when they “work”, they really do…
for a moment.
But what happens after that moment?
What happens when the user realises they’ve been tricked, manipulated, or nudged into doing something they didn’t want?
They leave. And they don’t come back.
I know I don’t.
If I’ve been misled, subscribed without consent, or had my attention hijacked — that’s it.
It feels intrusive and wrong.
Think about this: especially now, in a market oversaturated with manipulation and short-term thinking, there’s an opportunity to stand out by NOT following the crowd.
Not just because it’s ethical, but because it works.
Users reward transparency. They remember respect.
They return to products that value their autonomy.
In Part 2, we’ll explore why that approach isn’t just better for people, but better for business, too.
We’ll unpack the ethical implications, look at legal regulations, and see what ethical, trust-centred design can really look like.
But for now, let’s start with the basics:
What are dark patterns, how do they work, and how can we recognise them before they do more harm than good?
🕯️Dark Patterns: A Quick Primer
In 2010, UX designer Harry Brignull coined the term “dark patterns” to shine a light on these deceptive design tactics.
He defined dark patterns simply as “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something.”
Unlike honest UX patterns that help users, dark patterns are carefully designed to benefit the business, often at the user’s expense.
They leverage our cognitive biases and push emotional buttons to drive actions like unintended purchases, subscriptions, or data sharing.
Why do dark patterns proliferate despite their shady nature?
One reason is short-term results.
These designs can be “phenomenally effective at boosting conversions” — whether that’s more sign-ups, higher cart totals, or additional data captured.
In a culture of growth hacking, even an unethical tweak that nudges metrics upward can be tempting.
Another factor is imitation: as Nielsen Norman Group notes, many companies copy competitors’ designs, inadvertently spreading deceptive patterns that have become “standardised” in some industries.
Regulatory gaps played a role too; for years, the legal system lagged behind these tricks, though recent laws are starting to catch up (more on that later).
But dark patterns carry heavy risks.
As UX designer Alex Hill puts it,
“Hidden costs and trick questions may boost your profits in the short-term, but the long-term effects … are hugely detrimental to your brand.”
Users eventually catch on, and the backlash can damage reputation and loyalty.
A quick win via deception can sow seeds of distrust that drive users away over time.
“A satisfied customer is the best business strategy of all.”
– Michael LeBoeuf, quoted by Ana Valjak.
Before we dive into the psychology that makes these tricks so effective, let’s tour some of the most common dark patterns in UX, with examples of how they appear in real products.
A Tour of Common Dark Patterns
🎣 Bait-and-Switch
This scheme promises one thing but delivers another.
The interface lures users with a desirable action or offer, then switches to a different outcome — essentially a digital bait-and-switch.
One infamous example was Microsoft’s 2016 Windows 10 upgrade prompt:
Users clicked the “X” to close the pop-up (assuming it meant cancel as usual), but Microsoft had redefined the red “X” to mean “OK, go ahead”, resulting in unwanted upgrades.
Another example: some tax software advertised “Free Filing!” but made the free version nearly impossible to find, steering users into paid products.
Regulators eventually intervened — in 2022, the U.S. FTC took action against Intuit (maker of TurboTax) for deceptive “free” advertising.
🪤 Roach Motel
Named after a cockroach trap (“roaches check in, but they don’t check out”), a roach motel design makes it easy to get in and ridiculously hard to get out.
Sign-up flows are simple and fast, but cancellation or opt-out flows are hidden, lengthy, or blocked.
Many subscription services have been guilty of this.
Amazon Prime’s cancellation process was a classic roach motel: a “multi-layered, friction-filled cancellation dance” full of obscure menus, confusing wording, and multiple “Are you sure?” confirmations — all intended to wear you down.
It was so bad that after EU consumer groups complained, Amazon was pressured to simplify Prime cancellation to just 2 clicks in Europe.
If you’ve ever hunted through account settings for an elusive “Delete account” button, only to be forced to email customer support, you’ve been in a roach motel.
😳 Confirmshaming
This tactic shames or guilts the user into compliance.
When offering a choice, the “decline” option is worded in an intentionally embarrassing way to nudge you to click the preferred (positive) option.
Email sign-up popups often do this: The “Yes” button says something enticing like “Yes, send me 15% discount!” while the “No” link says “No thanks, I hate saving money.”
The aim is to make users feel silly or wrong for opting out.
This plays on our emotions: nobody likes to feel foolish, and many will click “Yes” just to avoid the little pang of guilt.
Confirmshaming has become so widespread that it’s almost a dark-pattern cliché. (We’ve likely all seen the “No, I don’t like fun” or “No, I want to stay poor” style decline buttons.)
It’s effective in the moment, but it certainly doesn’t foster goodwill.
“Confirmshaming tricks users into opting in by making the alternative sound absurdly undesirable”.
🔐 Forced Continuity
You sign up for a “free trial” that quietly converts into a paid subscription, often without a clear reminder or easy way to cancel.
This dark pattern hooks users with an initial freebie (leveraging loss aversion) and banks on inertia or forgetfulness to start charging them.
Streaming services and online publications commonly do this: e.g., a 7-day free trial that auto-renews into a monthly plan if not cancelled by Day 7 at 11:59 PM.
The “force” comes in when cancellation is made cumbersome or when users aren’t adequately warned that billing will begin.
A notorious case was Audible’s subscription, which for years was hard to cancel online — many users ended up getting charged for months because they couldn’t find the way out easily.
Regulators are increasingly cracking down on forced continuity; e.g., several U.S. states now require clearer disclosures for auto-renewing subscriptions.
But if you’ve ever forgotten to cancel a “free” trial and then been hit with an unexpected charge, you’ve felt the sting of forced continuity.
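To make the mechanics concrete, here’s a minimal sketch of forced continuity in TypeScript. The names (`firstChargeDate`, `amountDue`) and the price are invented for illustration; the point is simply that doing nothing is treated as consent to be billed.

```typescript
// Hypothetical free-trial billing sketch: unless the user cancels before the
// trial ends, a charge is applied automatically, with no further confirmation.
const TRIAL_DAYS = 7;
const MONTHLY_PRICE = 9.99; // illustrative price

function firstChargeDate(signupDate: Date): Date {
  const charge = new Date(signupDate);
  charge.setDate(charge.getDate() + TRIAL_DAYS);
  return charge;
}

function amountDue(signupDate: Date, cancelledOn: Date | null, today: Date): number {
  const chargeDate = firstChargeDate(signupDate);
  const cancelledInTime = cancelledOn !== null && cancelledOn < chargeDate;
  // The "force": inertia alone is read as agreement to start paying.
  return !cancelledInTime && today >= chargeDate ? MONTHLY_PRICE : 0;
}

const signup = new Date("2025-01-01");
console.log(amountDue(signup, null, new Date("2025-01-08")));                   // 9.99: never cancelled
console.log(amountDue(signup, new Date("2025-01-06"), new Date("2025-01-08"))); // 0: cancelled in time
```

An honest flow would email a reminder before the charge date and make cancelling as quick as signing up.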
🛒 Sneak Into Basket (and Hidden Costs)
Imagine shopping online and, during checkout, an extra item or fee magically appears in your cart without your explicit consent.
That’s sneaking into the basket — a dark pattern where sites add products (or donations, warranties, etc.) to your order by default.
Perhaps there was a pre-checked box saying “Yes, add a $5 donation” that you overlooked, or a tiny opt-out for “premium packaging” buried at the bottom of the page.
A real example: in 2015, SportsDirect was caught automatically adding a £1 magazine to every online order; customers had to notice and remove it manually.
Many people didn’t, effectively paying a hidden surcharge.
Closely related are hidden costs: only revealing extra fees (shipping, taxes, “service fees”) at the last step of checkout.
By the time users see these surprise charges, they’ve already invested time and effort, making them less likely to abandon the purchase.
It’s a dark pattern if the site intentionally withholds these costs till the end.
Ethically run e-commerce sites disclose fees upfront; deceptive ones surprise you in hopes you’ll sigh and click “Complete Purchase” anyway (the sunk cost fallacy at work).
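As a minimal illustration (TypeScript, with hypothetical names like `orderTotal` and `preChecked`), here’s how a single pre-ticked add-on quietly inflates the total unless the shopper notices and unticks it:

```typescript
// Hypothetical checkout sketch: a pre-checked add-on is charged by default,
// so the burden is on the shopper to spot it and opt out.
interface LineItem {
  name: string;
  priceGBP: number;
}

interface AddOn extends LineItem {
  preChecked: boolean; // the dark-pattern lever: opt-out instead of opt-in
}

function orderTotal(cart: LineItem[], addOns: AddOn[], unticked: Set<string>): number {
  const cartTotal = cart.reduce((sum, item) => sum + item.priceGBP, 0);
  // Every pre-checked add-on the user did NOT untick is silently included.
  const addOnTotal = addOns
    .filter((a) => a.preChecked && !unticked.has(a.name))
    .reduce((sum, a) => sum + a.priceGBP, 0);
  return cartTotal + addOnTotal;
}

const cart: LineItem[] = [{ name: "Running shoes", priceGBP: 49.99 }];
const addOns: AddOn[] = [{ name: "Magazine", priceGBP: 1.0, preChecked: true }];

console.log(orderTotal(cart, addOns, new Set<string>()));     // 50.99: shopper never noticed
console.log(orderTotal(cart, addOns, new Set(["Magazine"]))); // 49.99: shopper opted out manually
```

The honest version is one flag away: ship the add-on with `preChecked: false` and let people choose it.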
📰 Disguised Ads
These are advertisements camouflaged as other content or UI elements.
They trick users into clicking because they look like part of the page (for example, a download button, navigation link, or social media post).
A classic instance: fake “Download” buttons on free software sites.
If you search for, say, a freeware utility, you might land on a page with multiple big green “Download Now” buttons, only one of which is the actual download; the rest are ads for other products.
Softpedia, a popular software repository, was notorious for this: one download page had 6 different “Download” buttons (ads for driver updaters, etc.) before the real link.
It’s essentially a visual shell game, exploiting users’ expectations and haste.
Other examples include sponsored content that mimics news articles (blending into a news feed) and ads styled as in-game or in-app content.
The danger is that users click these thinking they’re legitimate actions, only to be whisked away to promotions or worse (malware in some cases).
Disguised ads prey on our tendency to click what looks right.
The onus is then on the user to discern the one true button among decoys, which is exactly the opposite of good UX.
An example of disguised ads on a download page:
Here, multiple green “Download” buttons are actually adverts leading to other software.
The real download link is hidden in plain text (“External mirror”) without a flashy button.
Many users will click one of the fake buttons, thinking it’s the right one.
👁️ Privacy Zuckering
This dark pattern (named after Facebook’s Mark Zuckerberg) tricks users into sharing more personal data than they intend.
It often involves obscure privacy settings or misleading consent flows that push users to divulge information.
For example, early Facebook designs buried privacy controls and defaulted your data to public, effectively nudging you to “share by accident”.
A modern example: in mid-2024, Meta (Facebook) introduced a convoluted process for users to opt out of their data being used to train an AI model.
The opt-out was hidden behind multiple clicks and even required users to provide a reason for opting out (a completely unnecessary obstacle).
Regulators called this out as a dark pattern to discourage opt-outs.
Privacy zuckering thrives on user inattention and confusion: pre-ticked consent boxes, long terms of service that few read, or interfaces that make “Allow” far easier than “Deny”.
The result? Users end up handing over contact lists, location data, or profile info without fully realising it.
One study found 95% of popular free apps exhibited some form of deceptive pattern, especially around data and permissions, averaging 7 such patterns per app.
That’s privacy zuckering on a grand scale.
With laws like GDPR and CCPA, there’s now more legal protection against these tricks, but enforcement is still catching up.
🤯 Trick Questions (and Visual Trickery)
Here, the wording or design of a question intentionally confuses the user.
It’s a subtle but insidious pattern: the text might use double negatives, ambiguous phrasing, or odd placement, so users misinterpret what they’re agreeing to.
Example: “Uncheck this box if you do not want to receive updates” — a sentence that ties your brain in knots.
Many will just leave the box as-is, inadvertently “agreeing” to marketing emails.
Trick questions often appear in sign-up forms or check-out flows, especially as tiny footnotes.
The Green Man Gaming checkout, for instance, had a checkbox labelled “Please tick this box if you do not want to receive offers via email.”
Screenshot of a trick question during checkout.
The checkbox text is deliberately confusing: “tick if you do not want emails.”
It’s easy to misread that as “tick to get emails” due to the unusual wording, leading users to do the opposite of their intent.
Many users skim and assume checking it means subscribing (when it actually means opting out). This exploits our habit of scanning and the expectation that ticking a checkbox opts you in.
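To see how the double negative inverts intent, here’s a tiny TypeScript sketch; the function names and labels are hypothetical, not taken from any real checkout:

```typescript
// Two ways a form could wire a "marketing emails" checkbox to the consent flag.

// Honest wording: "Tick this box to receive offers via email."
function wantsEmailsHonest(boxTicked: boolean): boolean {
  return boxTicked; // ticked means opted in
}

// Trick-question wording: "Tick this box if you do NOT want offers via email."
function wantsEmailsTricky(boxTicked: boolean): boolean {
  return !boxTicked; // the same click now means the opposite
}

// A skimming user who wants no emails leaves the box unticked in both forms:
console.log(wantsEmailsHonest(false)); // false: no emails, as intended
console.log(wantsEmailsTricky(false)); // true: subscribed against their intent
```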
Visual tricks are similar: placing a prominent green button for “Yes” and a low-contrast text link for “No thanks”.
The user’s eye is drawn to the big colourful option, which is the one the company wants you to choose (e.g. “Yes, add recommended extras”).
We tend to click the big, obvious button without parsing every word.
Trick questions and misdirection prey on that impatience or assumption.
Of course, there are other dark patterns (friend spam, fake urgency or scarcity, etc.), but the ones above are among the most prevalent in today’s digital products.
A recent international review found that nearly 76% of sites and apps examined used at least one possible dark pattern, and 67% used multiple.
The sneaking category (like hidden info or preselected options) and interface interference (like misdirection and obstruction) were the most commonly observed.
In other words, deceptive design is everywhere — from retail sites and mobile games to social networks and SaaS tools.
Now, let’s delve into the psychology of why these tricks work.
Understanding that can help us recognise dark patterns and, crucially, design ethically instead.
🔍Why Do Dark Patterns Work? The Psychology Behind the Deception
Dark patterns are effective because they tap into the same psychological triggers and cognitive biases that drive our everyday decisions.
Here are a few key principles and biases at play:
🧠Loss Aversion
In behavioural economics, loss aversion means we strongly prefer avoiding losses over acquiring gains.
Dark patterns exploit this by framing choices in terms of avoiding a loss.
The user is made to feel they are about to lose something valuable (a discount, a bonus, premium access) if they decline.
Design practices like free trial conversion leverage loss aversion by giving the user something (access) and then taking it away unless they keep paying — a kind of endowment effect in action.
We’re neurologically wired to avoid losses, and dark patterns hijack that wiring to spur clicks and commitments that benefit the business.
🎚️Default Bias (or Status Quo Bias)
Humans tend to stick with default options.
We assume the default is recommended or normal, and changing settings takes effort.
Studies have shown that setting a default can massively influence behaviour — for example, opt-out organ donor programs lead to over 90% participation, versus under 15% in opt-in programs, purely because most people don’t change the default.
Dark patterns leverage this by making the undesirable choice the default.
For instance, cookie consent banners might highlight “Accept All” (default) and hide “Manage Settings” — taking advantage of the fact that most people click the big default button.
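A minimal sketch (TypeScript, with invented field names) of why that works: whatever the designer pre-selects is exactly what an untouched click-through produces.

```typescript
// Hypothetical cookie-consent sketch: the designer's defaults decide the outcome
// for every user who just clicks the big primary button and moves on.
interface ConsentState {
  analytics: boolean;
  advertising: boolean;
  personalisation: boolean;
}

// Dark-pattern banner: "Accept All" is the highlighted default, so one click grants everything.
const darkDefaults: ConsentState = { analytics: true, advertising: true, personalisation: true };

// Honest banner: non-essential categories start off, and users opt in per category.
const honestDefaults: ConsentState = { analytics: false, advertising: false, personalisation: false };

function resolveConsent(defaults: ConsentState, changedByUser: Partial<ConsentState> = {}): ConsentState {
  // Default bias in one line: anything the user never touches keeps the default.
  return { ...defaults, ...changedByUser };
}

console.log(resolveConsent(darkDefaults));   // all tracking on, with zero effort from the user
console.log(resolveConsent(honestDefaults)); // nothing on unless actively chosen
```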
😵💫Ambiguity Aversion (Ambiguity Effect)
We generally avoid options with unknown outcomes.
If something is unclear or confusing, we shy away and prefer the choice that feels more certain.
Dark patterns sometimes create intentional ambiguity to herd us toward a particular option.
For example, if the process to decline something is vague (e.g. a muddy “Maybe later” vs a clear “No”), users often give up and accept the clear affirmative path.
When faced with a strangely worded checkbox (the trick question), many will leave it as is because they’re not 100% sure what checking it does, thereby doing whatever the company hoped.
The ambiguity effect thus pairs with default bias: confusion leads to inaction, which means the default wins.
👥Social Proof and Authority Bias
People are influenced by what others are doing — we look for cues from the crowd when uncertain.
Some dark patterns feed false social proof to sway us.
It’s those “Everyone is buying this!” messages or fake activity feeds (e.g. “John in London just purchased this item!”).
If we believe others have chosen something, we’re more likely to follow (the bandwagon effect).
Social proof and authority cues can be used ethically (e.g., real user testimonials), but in dark patterns, they’re often manipulative and misleading, playing on our trust in others’ behaviour.
🗣️Framing and Language
The way choices are framed makes a big difference.
Dark patterns often use emotionally charged language to frame the user’s decision. Confirmshaming is a prime example — the frame is “Decline = I’m a fool”.
Another framing trick is positive framing of costs (“Upgrade for only $5!” vs “Stay on basic — lose out on premium features”).
Many settings interfaces frame the data-sharing option as positive and useful (“Enable personalised ads for a better experience”) while framing the privacy-preserving option as negative or less helpful (“Limit ad relevancy”).
This nudges users through subtle language bias.
Psychologically, we respond to these frames: no one wants to click “Don’t protect me”, even if in context it might simply mean “don’t add extra data tracking”.
These are just a few cognitive levers; dark patterns can also leverage urgency (fear of missing out), commitment/consistency (getting users to say yes multiple times), and good old habit and muscle memory.
Understanding these principles is not about learning to use them unethically, but rather to spot when they’re being abused.
A user forewarned is forearmed: as Harry Brignull wisely noted, “the scams don’t work if the victim knows what the hustler is trying to do.”
If users recognise a manipulative pattern, they can resist the nudge (or better yet, call it out).
Education is power, for both users and designers, to avoid deceptive UX schemes.
And that’s a wrap for today.
I hope this gave you something to think about!
⏭️ Coming Next: Ethics, Regulations & Honest UX
In Part 2, we’ll look at what happens when deceptive design meets regulation, what ethical alternatives look like, and how transparent UX is becoming a competitive advantage.
We’ll also talk about your role in this, as a UX professional.
Because the best way to stand out today is to design with honesty.
What’s your take on this?
📚 Sources:
How to Use Dark Patterns — IONOS
The Real Impact of Dark UX Patterns — Alex Hill, UX Collective
Deceptive Patterns in UX: How to Recognize and Avoid Them — Nielsen Norman Group
Dark Patterns: Designs That Pull Evil Tricks on Our Brains — Infinum
Amazon Forced to Simplify Prime Cancellation After EU Pressure