How to UX the F*** out of Fake News
A UX guide to designing product features that hinder the spread of fake content without censoring it.

uxdesign.cc – User Experience Design — Medium | Joe Salowitz
We’ve all heard the terms “Fake News” and “Alternative Facts” so many times it’s annoying. That being said, it’s a problem that needs to be addressed.
In this post I’m going to give you a few design techniques you can put in your UX toolbox to design experiences that squelch fake content.
Just yesterday (March 29, 2017), it was revealed that the Kremlin employed 1,000+ people to create fake stories targeting key swing states in the 2016 U.S. election.
There were upwards of 1000 internet trolls working out of a facility in Russia, in effect taking over a series of computers which are then called botnets, that can then generate news down to specific areas. ~ U.S. Senator Mark Warner
Fake news is often intentionally created with a predisposed bias, then is shared within target demographics (who create an echo chamber around it), and sometimes it is even bolstered by likes and comments from networks of bots (like in the 2016 election example above).
Most fake content exists to influence the opinions of you and me.
This Facebook trending story is 100% made up. Nothing in it is true. This post of it alone has 10k shares in the last six hours.
All of these stories were fake/prank videos that were reported widely on and shared/liked countless times
The Sandy Hook conspiracy, flat-earth theory, and anti-vaxxing are all movements that started and grew based on false, yet persuasive content
It’s a big year for us. As product creators, we’ve traditionally exempted ourselves from responsibility when it comes to content in our products.
How many times have you heard a designer or a developer say: “We’re just creating the platform. Users provide the content.”
Not anymore. When it comes to content that misleads the masses, inspires hate, and fails to inform correctly… We’re accountable.
Product designers are on the hook, not to censor or bias content, but to provide a mechanism that informs their users of the reliability of the content being disseminated in their products.
All credit goes to Onion.com for this masterful piece of Fake News
Every product that relies on content sourced from its users must build into its design a mechanism that actively hinders fake content.
Calling people out on their content is really uncomfortable. But it’s of paramount importance that we do so.
There are disputed facts. But there are also truths and lies. Amidst our culture of political correctness, we must acknowledge this.
News outlets are already doing this. Products with user-generated content must follow suit.
We must design our products to support firm truths, counter firm lies, and enable dialogue about disputed facts.
The product features we design to counter fake content must be stern, yet gracious — corrective, yet humble — persuasive, yet unbiased — informative and bullet-proof.
This fake content prevention framework must never censor content. It must never prevent the flow of information (even false information).
Knowledge is power, censorship is not.
The framework should do all of this with grace. It must honestly, without bias or agenda, expose lies and reveal the truth.
I’ve designed such a framework, a UX guide for firmly but respectfully fighting and hindering the dissemination of fake content.
There are four main components that make up the skeletal structure of a “Fake Content Prevention” mechanism: a Validation Engine, a Citation Experience, Reliability Ratings, and Dialogue.
The backbone of fake news prevention is the Validation Engine. It is the mechanism by which you catch blatantly false content and by which you measure the reliability of both a post and its poster.
Important: the Validation Engine does not censor content or remove it from your platform.
The Validation Engine has two critical duties: catching blatantly false content, and measuring the reliability of each post and its poster.
Facebook’s new disputed content feature flow:
The Validation Engine is built by combining these four things: editorial review, third-party fact-checking, automated cross-referencing against known fake-news sources, and community reporting.
Snapchat has a rigorous review and approval process that “Discover” publishers must adhere to
Avoid polarizing terms when your validation engine flares content.
For example, Facebook has chosen to use the term “disputed” with a red tag and warning icon in its content-monitoring flow.
What if you can’t build enough of a case to prove a story is Fake News?
Don’t touch it. I repeat: if your validation engine is unsure, don’t touch it. There is such a thing as degree of certainty: only flare content as disputed if your validation engine has a high degree of certainty that it is false.
The goal of your Validation Engine is to inform about common myths and slow down potentially-viral Fake News stories.
Note: it’s not possible, and not even necessary, to catch all fake content postings. It’s better to have some fake content slip through the cracks than to have true content censored and the reliability of your product put at risk.
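To make that decision rule concrete, here is a minimal TypeScript sketch. Everything in it (the ValidationSignal shape, the 0.9 threshold, the function names) is an assumption for illustration, not a real product API; the point is only that the engine combines several signals and flares content solely when its confidence clears a high bar, otherwise leaving the post alone.

```typescript
// Hypothetical sketch of a Validation Engine decision rule.
// Signals might come from fact-checkers, known-fake-source lists,
// or community reports; none of these names are a real API.

type Verdict = "disputed" | "leave-alone";

interface ValidationSignal {
  source: string;     // e.g. "third-party fact-checker", "community report"
  saysFalse: boolean; // does this signal claim the content is false?
  weight: number;     // how much we trust this signal, 0..1
}

// Combine signals into a single confidence that the content is false.
function confidenceContentIsFalse(signals: ValidationSignal[]): number {
  const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 0;
  const falseWeight = signals
    .filter((s) => s.saysFalse)
    .reduce((sum, s) => sum + s.weight, 0);
  return falseWeight / totalWeight;
}

// Only flare when the engine is highly certain; when unsure, don't touch it.
const FLARE_THRESHOLD = 0.9; // assumed value, tune for your product

function decide(signals: ValidationSignal[]): Verdict {
  return confidenceContentIsFalse(signals) >= FLARE_THRESHOLD
    ? "disputed"
    : "leave-alone";
}
```

The deliberate asymmetry matches the note above: the engine errs on the side of leaving content untouched rather than risk flaring something true.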
If a piece of content is flared as disputed, it’s important to let users know why it’s unreliable. The citation experience allows curious users to dig into the sources and reasoning backing up the flare. It’s a critical component to building reliability into your product’s validation system and informing your users.
Your citation experience should include, at a minimum, the sources disputing the content and the reasoning behind the flare.
Facebook’s “Citation Experience”. It’s pretty generic and could use some work in my opinion.
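As a sketch of what could back that experience, here is one possible shape for the data attached to a disputed flare. The interface and field names below are illustrative assumptions, not anything Facebook or another product actually exposes.

```typescript
// Hypothetical data shape backing a "disputed" flare's citation experience.

interface DisputeCitation {
  claim: string;        // the specific claim being disputed
  reasoning: string;    // plain-language explanation of why it's unreliable
  disputedBy: string[]; // names of the fact-checking organizations involved
  sourceUrls: string[]; // links a curious user can follow and verify
  lastReviewed: string; // ISO date of the most recent review
}

// Example of what a flared post might carry alongside its content.
const exampleCitation: DisputeCitation = {
  claim: "Example claim text from the flared post",
  reasoning: "Independent fact-checkers found no evidence supporting this claim.",
  disputedBy: ["Fact-checker A", "Fact-checker B"],
  sourceUrls: ["https://example.org/fact-check"],
  lastReviewed: "2017-03-29",
};
```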
Integrate reliability visually into your product by tying it to user profiles and published content. These are publicly visible cues that users can come to trust to inform them of the reliability of the content they are viewing.
Give all users a “Reliability Rating”
A rating based entirely on the factual accuracy of their past posts. This reliability rating is public, and thus creates an environment where people fight for the value of their reputation and start to feel a sense of ownership over the content they disseminate.
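One rough way to picture the math: treat the rating as the share of a user’s fact-checked posts that held up. The scoring rule in this sketch is an assumption for illustration, not how any real product computes it.

```typescript
// Hypothetical reliability rating: the share of a user's fact-checked
// posts that were verified rather than disputed.

type PostVerdict = "verified" | "disputed" | "unchecked";

function reliabilityRating(pastVerdicts: PostVerdict[]): number | null {
  const checked = pastVerdicts.filter((v) => v !== "unchecked");
  if (checked.length === 0) return null; // not enough history to rate
  const verified = checked.filter((v) => v === "verified").length;
  return verified / checked.length; // 0..1, shown publicly on the profile
}

// e.g. reliabilityRating(["verified", "disputed", "verified", "unchecked"]) === 2 / 3
```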
Online bullying and soap-boxing are often side effects of people being able to hide behind their online profiles.
Online profiles often have no mechanism of accountability.
A Reliability Rating begins to help people understand the consequence of their actions online.
Our culture really hates labeling people (often for good reason), but we’ve reached a point where we need to be okay with calling a liar a liar when it comes to objective facts. Labeling someone as a “reliable” or “unreliable” source may be a scary thought at first, but if it’s purely based on how factual their words are, should it be?
“Visual design and User Experience can be used as a powerful force to give people quick indications of the quality of what they’re reading and sharing.” ~ Jeremy Johnson
Give posts a “Reliability Rating”
For false content, this is the “disputed” flare we talked about previously. You could go so far as providing a reliability rating (or flare) for proven-to-be-true content, or unproven-but-not-disproven content. The goal here is to accurately inform users about each piece of content they absorb and to gain trust.
Credit goes to Jeremy Johnson for these mockups
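A tiered flare like that could be modeled as simply as the union type below; the tier names mirror the three cases described above and are assumptions, not an existing product’s vocabulary.

```typescript
// Hypothetical three-tier flare for posts: proven true, unproven, or disputed.

type PostFlare = "verified" | "unverified" | "disputed";

interface FactCheckResult {
  provenTrue: boolean;
  provenFalse: boolean;
}

function flareFor(result: FactCheckResult): PostFlare {
  if (result.provenFalse) return "disputed"; // the flare discussed earlier
  if (result.provenTrue) return "verified";  // proven-to-be-true content
  return "unverified";                       // unproven but not disproven
}
```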
Flock.co has this feature in their team messaging app. They call their validation engine the “Fake News Detector,” and it works by “cross-referencing the URLs of links shared on Flock against a database of more than 600 verified fake news sources. Any fake news is immediately flagged with a highly visible icon and red bar alongside the preview of the URL. Using this tool, Flock users can easily identify fake news and refrain from sharing such content.”
The Fake News Detector (FND) flags unreliable content when shared on Flock (flock.co)
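Flock’s description translates into a very small piece of logic. The sketch below is an assumed re-creation of the idea (checking a shared link’s domain against a list of known fake-news sources), not Flock’s actual code or API.

```typescript
// Hypothetical sketch of a Flock-style "Fake News Detector":
// check a shared URL's host against a list of known fake-news domains.

const KNOWN_FAKE_SOURCES = new Set<string>([
  // in a real product this would be a maintained database of 600+ domains
  "example-fake-news-site.com",
  "another-hoax-outlet.net",
]);

function isKnownFakeSource(sharedUrl: string): boolean {
  try {
    const host = new URL(sharedUrl).hostname.replace(/^www\./, "");
    return KNOWN_FAKE_SOURCES.has(host);
  } catch {
    return false; // not a valid URL, nothing to flag
  }
}

// e.g. isKnownFakeSource("https://www.example-fake-news-site.com/story") === true
```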
To take this one feature step further, completely unreliable content can be collapsed by default, to hamper users from knee-jerk liking as they scroll down the page and to prevent the snowballing virality that is so common with provocative fake content. Collapsed content takes a conscious effort to expand and acts as a gateway that alerts the user that they’re viewing blatantly false content.
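As one last small sketch, the collapse-by-default behavior could hang off the same kind of flare; the rule and names below are assumptions for illustration.

```typescript
// Hypothetical rule: blatantly false content renders collapsed by default,
// so expanding it requires a conscious click.

type Flare = "disputed" | "unverified" | "verified" | "none";
type RenderState = "collapsed" | "expanded";

function defaultRenderState(flare: Flare): RenderState {
  // Only content the validation engine is highly certain about is collapsed;
  // everything else renders normally.
  return flare === "disputed" ? "collapsed" : "expanded";
}
```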
It’s important to enable private dialogue between individuals. Conversation and debate are healthy ways to build understanding. Privacy is important here to prevent brigading and bullying.
Instead of only providing an option for a community member to “report inaccurate content”, present users with the opportunity to directly engage the offending user in dialogue via private messaging.
Facebook allows users to direct message each other about Fake Content
Additionally, allow content-posting users to file complaints with your validation engine and product support team if they feel like their content has been unjustly flared or disputed. Give posters the opportunity to argue their case.
We’ve become a consumption-based culture that has a hard time identifying with “the other” (whatever that “other” might be), and encouraging engagement and meaningful dialogue is one step in the right direction toward building common ground and inspiring empathy in each other.
Your product can’t stop fake content outright. But it can dissuade it and inform busy users. As product owners, that’s really as far as we can take it at this point in time. You could even argue that’s the full extent of what we’re responsible for, and I may just agree with you.
Here’s the takeaway: your product doesn’t have to stop all fake content. You’ll die of anxiety before you can even get close to achieving this.
But you can’t not try.
Preventing a few of the worst posts from going viral is better than not catching anything at all. And from there you can only get better at it.
Designers today must take some level of accountability for the content being disseminated in their products.
“Even if it’s not your fault. It is your responsibility.”
~Terry Pratchett, Author
You’re responsible for walking the tightrope. Uphold freedom of speech. Avoid bias. Inform users. Don’t let lies fly unchecked.
There are already a number of tools actively working to fight fake content.
I am a UX guy who works at Universal Mind — A Digital Experience Firm. Follow me on Twitter at @joesalowitz or visit my website at joesalowitz.com
How to UX the F*** out of Fake News was originally published in uxdesign.cc on Medium.