"We value your privacy." For every website that says it, why do I get the feeling that the complete opposite is true? Having popped up on screens only a couple of years ago, this is one of the most blatant examples of slippery marketing copy in user interfaces, where positive spin is used to mask negative outcomes for users. Design researcher Caroline Sinders calls this use of language a "content strategy dark pattern":
A "content strategy" dark pattern is where language and design create misinterpretations.
Large steps towards making dark patterns illegal were made in 2019*, but throughout 2020 and even today, the crackdown appears to have led the relationship between language and design to become even more sneaky and strategic. Shady euphemisms are employed to trick users into handing over personal data, and manipulative descriptions continue to deceive people.
From Facebook to Medium, this article calls out increasingly deceitful web copy techniques used today by the masters of the dark arts of design writing.
The purpose of this article is to demonstrate dark patterns in UI copy through examples. Dark patterns often exist to support business goals, and so it may not be a designer or writer's choice to implement them.
Dark Patterns and "Asshole Designers"
There are different labels and categories for dark patterns depending on where you look. Harry Brignull's darkpatterns.org is the most well-known, with labels such as "Confirmshaming" and "Privacy Zuckering". Then there's a more recent study of "Asshole design", which highlights 6 strategies of deceitful design, including Nickel-and-Diming, Entrapping, and Misrepresenting. Here's a table from the study:
Whilst the above offer great definitions, to address dark patterns specific to copy on elements in user interfaces, I'll use the following terms in this article:
5 Terms for Dark Patterns in UI Copy
- Shady Euphemisms: words that can be perceived negatively (e.g. "paywall") are disguised with a more positive phrase (e.g. "partner program").
- Humbug Headers: friendly headings are used to deflect attention from negative outcomes.
- Self-serving Syntax: sentences are reordered to support a bias or motive.
- Manipulative Button Text: button text that tries to make you reconsider (similar to confirmshaming).
- Walls of Jargon: large paragraphs of small text that nobody will read.
1. Shady Euphemisms
This is where anything that can be perceived negatively is rephrased or rebranded to sound positive. Using a positive tone is widely practiced to make websites easier to understand, and is therefore common across many websites and apps. For instance, here's the "Writing Positively" guide from Mailchimp's content style guide:
As shown above, the practice of positive writing includes turning negative language into positive language, much like a euphemism, where a mild or indirect expression is used in place of a blunter truth. The goal is to make us feel things, and it happens fairly often on the web:
Medium
"Paywall" → "Partner Program"
Amazon
"Cancellation" → "End Benefits"
Facebook
"Tracking" → "Personalised Ads"
1.1 When "Writing Positively" Becomes Misleading (Medium.com)
"Paywall" → "Partner Program"
For example, blogging platform Medium often persuades writers to publish stories behind a paywall. However, due to the negative associations with the word "paywall", they often replace it with more positive terms and phrases such as "Partner Program" or "Meter my story", as highlighted in the following screenshot:
If we take a closer look at the word choice underlined in pink, a few shady issues arise:
- Masked Outcomes: Opting in with the checkbox performs a restrictive action (putting an article behind a paywall). However, the positive wording masks this as an incentivised action: earning money.
- Misleading Terminology: Choosing positive words can cause confusion if they're not easy to understand. Whilst this euphemism for paywall, "Meter my story", is more pleasant, it can also be confusing as it's not a widely used term.
- Opt-in by default: The option is checked by default, meaning you have to actively say no to earning money by unchecking it.
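Reduced to markup, the trick is a pre-checked box whose label sells the reward instead of the restriction. The element names and wording below are a hypothetical sketch, not Medium's actual code:

```html
<!-- Hypothetical sketch of the pattern, not Medium's real markup -->

<!-- What's shown: pre-checked, and the label leads with the reward -->
<label>
  <input type="checkbox" checked>
  Meter my story so it's eligible to earn money
</label>

<!-- A plainer equivalent would state the restriction up front,
     and default to unchecked so the writer opts in deliberately -->
<label>
  <input type="checkbox">
  Put this story behind the paywall (non-members won't be able to read it)
</label>
```

Same feature, same checkbox; the only difference is whether the label describes what the reader loses or what the writer gains.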
Reading between the lines, the option can be translated as follows:
Is it a dark pattern?
Because of the gap between the checkbox's description and the outcome of checking it, the example above could easily be classified as a dark pattern. Important information (the writer's story won't be available to non-paying readers) is obscured through word choices that persuade users down a predefined path that favours shareholder interests.
We define asshole designer properties as instances where designers explicitly assert control over the user's experience, implementing obnoxious, coercive, or deceitful behaviors that are almost solely in the shareholder's best interest.
Asshole Designers, by Colin M. Gray, Sai Shruthi Chivukula, and Ahreum Lee
1.2 Cancel? That's not a word (Amazon)
"Cancel" → "End Benefits"
Amazon's cancellation page is another example of positive wording that can mislead. Similar to Medium's "Partner Program" branding for its paywall, Amazon uses "Prime Benefits", or "Benefits", as a veil for cancellations. So instead of a negative "Cancel Membership" page, you get the more positive "End Benefits" page. In the following screenshot, every trace of the word "Cancel" is repackaged as "End Benefits":
Again, even though it's more positive, it becomes less clear, possibly by design. Founder of Creative Good, Mark Hurst, also conveys this in his post "Why I'm losing faith in UX":
Increasingly, I think UX doesn't live up to its original meaning of "user experience." Instead, much of the discipline today, as it's practiced in Big Tech firms, is better described by a new name.
UX is now "user exploitation."
Mark Hurst, founder of Creative Good
In his article, Hurst explains how Amazon has fallen from being a leader in user experience design to one of the biggest pushers of "user exploitation" design.
This has not gone unnoticed by others either, and Amazon face a few legal challenges:
1.3 You Call it Tracking, We Call it "Personalisation" (Facebook)
"Tracking" → "Personalised Service"
This third example of shady euphemisms is common across social media websites, like the friendly Facebook. As one of the most aggressive miners of personal data, Facebook intentionally package all of this intrusive behaviour as a feature that benefits users.
For example, cross-website tracking and hidden pixels become your "Ad preferences". In fact, there's no clear mention of tracking or mining; it's all euphemisms and positive spin that masks what's happening in order to make users feel in control:
The above screenshots are taken from Facebook's privacy checkup, and although nothing is factually untrue, it raises the question: is information being withheld?
Dark Patterns vs Positive Writing
Despite the good intentions of writing positively for users, as shown in the above examples, there's also a very dark side to it, and it's often intentional. It's acceptable for websites to be persuasive, or to have a bias towards their own goals, but when positive wording and euphemisms mask information or mislead users, as Arielle Pardes shows in Wired, it becomes unethical:
By definition, design encourages someone to use a product in a particular way, which isn't inherently bad. The difference, Yocco says, is "if you're designing to trick people, you're an asshole".
Arielle Pardes, quoting Victor Yocco
For instance, the upcoming privacy changes in Apple's iOS expose how Facebook avoids the word "Tracking" at all costs, despite it being the most accurate term for explaining its behaviour:
In contrast to its original requests for tracking consent, on Apple devices Facebook will be forced to use two simpler options:
- Ask App Not to Track
- Allow Tracking
Despite this being clearer, Facebook doesn't appear too happy with it, as it's likely to negatively affect their profit margins, so there's currently a battle going on over privacy in big tech. If you want to dive deeper into this saga, check out Sara Fischer's media trends newsletters at Axios.
In a similar vein to these shady euphemisms, let's move on to see how header text can be used to distract and deflect:
2. Humbug Headers
Not too dissimilar from shady euphemisms, in this case large header text is used to distract or mislead users when they're confronted with choices, such as agreeing to personalised ads (again) or upgrading an account. The header often says something reassuring to deflect from what's going on in the UI.
For example, this personalised ads request from Twitter starts by saying "You're in control", but the entire modal encourages you to accept tracking for personalised ads (the primary blue button):
It seems to cause mistrust more than anything:
Feigning Good Intentions
Here's another example where the titles aren't untrue, but they elaborately feign good intention to gain an end. Instagram and Reddit both want us to download their more addictive mobile apps, but disguise it as a benefit to users:
Since the mobile websites are already well-made, as Android Police highlight, these popups could indeed be a ploy to suck you into using their app every day. The popups themselves actually make the websites harder to use:
Reddit's mobile website is well-made and fast, but for ages, the platform has been pushing anyone who visited that site to the official app instead, complete with an obnoxious banner that shows up every time you open a Reddit link in your phone's browser.
It's probably because downloading the app massively benefits the company, as social media apps are often much more addictive than their web counterparts through the use of notifications that aim to grab attention (as opposed to functional notifications). Avery Hartmans at Business Insider explains this in her article on the sneaky ways apps like Instagram, Facebook, and Tinder lure you in:
App makers are using deliberate techniques to attract your attention. They aren't simply relying on you to come to them whenever you have downtime… Instagram sends dozens of push notifications each week and uses "Stories" to attract you.
Conversely, there do exist legitimate cases where a mobile app would be better than the web version, such as a writing app, or when there aren't the resources for a solid web experience. But for these giants, it's really not the case:
"Don't you want to view it in the official Reddit app for the best experience? No, no I don't. And the official Reddit app is not the best experience."
So clever it's confusing
Here's another header, this time from Medium.com, that also deflects from the real purpose:
The intention here may be good, but due to the tone used, it can also come across as riddling, or even arrogant. Famous web developer Wes Bos highlights that artful headings such as this often lead to more confusion than benefit (and that may be the intention):
Here, Wes Bos is concerned that users now have to log in to read any Medium article, when in fact they don't. Because the messaging is consistently indirect, nobody is ever too sure what it really means. To quote Tyson Fury, they're "going around the bushes and putting their arse in the hedge".
3. Self-serving Syntax
Here, a user is presented with one or more options, but the sentences explaining those options are structured to deflate the negative ones. Continuing with Medium, there's a lot of self-serving syntax in this simple dropdown:
Similar to the first Medium example in section 1, Medium is again off-handedly convincing users to put articles behind its paywall. Instead of asking directly, though, here's how they structure the request to disguise their intentions:
- Positive options first: At first glance, it looks like there's a single positive option: allowing Medium to recommend your story to a wider audience. This option actually has less significance, but is intentionally prioritised.
- Negative option last: The most important option is bundled in as secondary because it can be perceived negatively: "Recommended stories are part of Medium's metered paywall".
- Disguised outcomes: Checking the box actually agrees to multiple conditions at once, when it would make more sense if the options were mutually exclusive.
Peter's tweet here sums it up perfectly:
Double Negatives
Usually, a writer would prioritise the most important and impactful information first, so that users are well informed of all the implications of their choices. With this in mind, it becomes even more worrying that the checkbox above is opted in by default. It's clear that this is intentional when you look at Medium's help page:
Even the help page is confusing, through its use of a double negative statement. It explains that to remove your article from the paywall, you have to uncheck the box.
According to Plainlanguage.gov, double negative statements like this should be avoided in favour of clear, understandable language:
More disguised outcomes
Let's not forget that Facebook are experts at this one too. The following example originated in 2018, but things haven't changed much since, as you'll see below. Here, Facebook frames face recognition as a security feature, shifting "benefits" to the top.
In doing so, they hide what we all know is equally, or even more, true: they want to extract more data from you. Jennifer's commentary sums it up:
2018 Facebook:
2021 Facebook:
For more on how social media sites like Facebook continue to use tricks like this, Wired has a great article on it.
4. Manipulative Button Text
Similar to, but subtler than, Brignull's confirmshaming (where users are guilted into opting into something), here button text is crafted to persuade you down a preferred path. For example, Amazon tries to get people to reconsider cancelling subscriptions by adding "and End Benefits" to the cancellation button:
And they also make the opposing option positive: "Keep My Membership and My Benefits".
Confirmshaming is the act of guilting the user into opting in to something. The option to decline is worded in such a way as to shame the user into compliance.
This page could actually be a lot simpler:
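In markup terms, the honest version needs very little. The following is a hypothetical sketch of a neutral cancellation page, not Amazon's actual code, with both choices stated plainly and given equal weight:

```html
<!-- Hypothetical sketch: a one-page cancellation flow with no spin -->
<h1>Cancel your Prime membership?</h1>
<p>Your membership will end on your next billing date. You can rejoin at any time.</p>
<button type="submit">Cancel Membership</button>
<button type="button">Keep Membership</button>
```

No shame, no "benefits" framing, and no six-page detour; just the action the user asked for and a symmetrical way out.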
Going back to Hurst's article on the downfall of UX, he suggests something along the same lines:
What should be a single page with a "Cancel my subscription" link is now a six-page process filled with "dark patterns" (deceptive design tricks known to mislead users) and unnecessary distractions.
This Twitter thread from digital policy maker Finn delves a bit deeper into the deflective wording used in these buttons:
"Keep Less Relevant Ads"
The button text on the Twitter modal used in a previous example works in a similar way. Instead of an option to decline ads, you're given the option to "Keep less relevant ads":
As illustrated above, it looks as if simple options have been reworked to portray a friendlier, but more manipulative, message. However, at least the description itself is transparent and human. Instead of framing ads as a user benefit (like the Facebook example in section 1), they explain that ads are used to keep the service free.
5. Walls of Jargon
Banking on the research showing that nobody on the internet reads, large walls of text are a great way to get users to agree to whatever you need. Here, what might be more fitting on a terms and conditions page is squished into a modal, often with a single choice: to accept. Take WhatsApp's most recent confusing update for a quick example of this:
As well as the large amount of text, there are 5 links out to pages with even more information to read before agreeing. As tech journalist Jennifer Baker says, "Who other than a tech journalist has time for reading all that?"
According to The Verge, the WhatsApp example above actually led to mass confusion. And this isn't the first time Facebook's terms and conditions have caused such chaos, showing that their preference for money over user privacy mightn't have changed much since back in 2012:
In that instance, Facebook was attempting to "borrow" your photos and sell them to third-party companies. And like the recent WhatsApp example, they were forced to reconsider.
As surprising as it may sound, people DO pay attention to these "boring" legal agreements, and when they see something that is unclear or confusing, they speak up.
If you're interested in how privacy policies themselves are perfectly crafted "to tell you things without actually telling you things", Shoshana Wodinsky's article in Gizmodo is a must-read: What Facebook's Privacy Policies Don't Tell You. Check out her Twitter for more comprehensive research into privacy issues:
ive spent two years researching the minutiae of whatsapp's privacy policies / combing through every page its business-facing code / getting into shouting matches w random engineers over this shit
this is the most comprehensive explanation you'll read https://t.co/bq08JKTk1V
- shoshana wodinsky (@swodinsky) January 15, 2021
Do Ethical Tech Companies Exist?
It might be argued that some of the above examples aren't dark patterns, but just badly written copy or thoughtless errors. But when you see them coming from an actual writing platform, or from companies with smart people working for them like Amazon and Facebook, that becomes hard to believe.
This isn't an accident. Instead, and this is the point of Decade 3, there's a highly-trained, highly-paid UX organization at Amazon that is actively working to deceive, exploit, and harm their users.
Mark Hurst
Tech founder Paul Azorín furthers this when writing that such companies are known to prioritise money over what's right:
Large tech companies such as Facebook, Google, and Amazon are known for making unethical decisions. Tech companies should focus on what's right instead of simply what makes money.
From shareholder pressure to greed, there are many forces pushing companies towards the use of dark patterns. Of the examples above, the transition from ethical to deceiving is most noticeable with Medium, where the burden of $135 million in funding turned it from a pleasant-to-use writing platform into one riddled with confusing messages.
Corporate and crooked
Another example where ethics were disregarded for money was when the analytics app Baremetrics was sold by its founder, Josh Pigford, to a venture capital firm. Straight away, the corporate acquirers implemented bizarre messaging and popups to prevent customers from cancelling subscriptions:
Now that money seems to be the priority, you have to have a call with "non-salesperson" Brian before cancelling your subscription.
Cheap prices and fast delivery wins
Even though all of this can be frustrating for customers, at the end of the day, these sneaky tricks are business tactics to support the goal of generating profit. Author of How Design Makes the World, Scott Berkun, points this out when suggesting that customers of companies like Amazon are happier with cheaper prices and next-day delivery than with good UX. Similarly, a Medium writer who benefits from excellent distribution can put up with the downsides and dark patterns because the service is so good.
You can have a great user experience in one sense and be exploited, or exploit others, at the same time.
Scott Berkun, author of How Design Makes the World
Despite being annoying, dark patterns still exist because they're getting results. The question remains, though: at what point are these patterns illegal and unethical? When do they start to ruin a product?
Resources to Push Back and Further Reading
If you disagree with shady practices and dark patterns, here are a few different ways to push back against sites that use them:
Tips for Writers and Designers
As Andrea Drugay states, an ethical designer, or anyone producing UI copy, can try to write their way out of using dark patterns:
The point isn't that only someone with the title "UX Writer" can write their way out of dark patterns. The point is that anyone who is writing product copy should be able to write their way out of dark patterns.
In her article, "The role of UX writing in design ethics", Andrea suggests the following prompts you can use to push back:
I don't feel comfortable writing language that misleads users. I suggest we use ___ instead.
This flow seems misleading. How might we get the information we're looking for without implying the feature already exists?
UX writing best practices typically have the button call-to-action match the verb in the header. This modal would be more clear if the implied action in the title matched the CTA. How about we rephrase it like this: ___?
Read her post for more of these.
Ask Why
As Trine Falbe, writer of the book White Hat UX, states, the best way to understand why things are done unethically is to simply ask why:
Ask why something is being done unethically; ask why you are told to make a black hat feature; question the current state of things.
Find out more about ethical design in her post on Smashing, Ethical Design: The Practical Getting-Started Guide.
Find Ethical Work
Alternatively, you can always find an ethical company to work for instead. Ethical freelance collectives like Thea might be a good place to look too:
Use Privacy-Focused and Ethical Alternatives
As a web user, you can opt for ethical alternatives. Instead of Facebook, use SpaceHey. Instead of Google Analytics, try Simple Analytics. The following sites have curated alternatives for both of these:
For more privacy-focused alternatives, check out this article from mindful UX studio, This Too Shall Grow.
Call out shady behaviour
As shown in this article, dark patterns haven't gone away; they're now subtler and sneakier, and are likely to be just as effective (or they wouldn't exist). Writing an article like this can help call them out and get people talking about them. Alternatively, just add a comment or tweet to DarkPatterns.org to get dark practices listed in the hall of shame.
Thanks for reading. I'll be coming back to edit this (I missed a few image captions, alt text, and links).
Also, thanks to Ann and Sophie for the feedback on this article.
*Edited 17th Feb 2021: a bill was proposed to make dark patterns illegal, which may not have been actioned yet.