
[Image: Fighting a small grass fire in Marin County.]
For the past 25 years, I’ve focused on the process of determining the form and behavior of technology products. A lot of other people do this, too, and we can’t seem to agree on our tools, our process, our objectives, our responsibilities, our titles, or even what to call what we do.
The term I prefer is “interaction design.” I wasn’t the first to use it, and I don’t particularly like the term, but it was the best I could find at the time. I’m not sure anybody really likes it, as evidenced by the many practitioners who have invented their own nomenclature, me included.
Welcome to the hellish world of terminology in the tech business.
But I’m not here to talk about terminology. I’m here to talk about the aforementioned “process of determining the form and behavior of technology products,” but because of the terminology jungle, I have to lay out some ground rules first.
In 1990, interacting with software was a novel experience for most people and, as we on the inside of the industry well knew, it wasn’t very pleasant. Our users complained. They yelled at us. They told us our products were too complicated, too geeky, too obscure, and too hard to learn. Eventually, we admitted that we had to make our software “user friendly.” We tried to imagine a process that could do that and called it “user-centered.”
But we didn’t actually know how to make software user friendly. We didn’t actually know how to be user-centered.
In academia at the time, there was an older discipline variously called “human-computer interaction” or “computer-human interaction,” abbreviated as HCI and CHI respectively. HCI was an analytical process that consisted primarily of testing existing products by observing and recording how well real users performed. Eventually, in collaboration with engineers, HCI practitioners found it less expensive to test an early version of a product, otherwise known as a prototype, rather than waiting for the final version.
The primary tool of the HCI world was usability testing, wherein a human test subject would be asked to use a product while being silently observed, typically from behind a one-way mirror. Because engineers think about systems so differently than users do, usability tests always surprised the observing HCI pros, and when the engineers watched, they were shocked as well.
Those surprises had kernels of valuable insight, and HCI pros always claimed — honestly — that they gained tremendous value from them, value that ultimately benefitted the users. I have no doubt that that is true, but I believed then, and still believe today, that prototyping-and-testing is one of the weakest and most expensive interaction design tools you can use. The reason why is simple: it comes after much programming has occurred.
There is nothing generative about prototype-and-test. That is, it isn’t…design. HCI professionals gave engineers advice on how well their code was performing, and offered suggestions for improvement, but little more. It’s inherently post-facto. It’s just a critique of someone else’s design, albeit a rigorous one.
Prototype-and-test didn’t even acknowledge the concept of observing the user before the product had been created, or of generating any design imperatives in advance of engineering. And when, 25 years ago, I proposed that such a thing was possible, HCI professionals unanimously reacted with imperious, incredulous rectitude, “How can you know what users want?” They denied outright that such a feat was even possible, let alone desirable. I had many lengthy discussions, debates, and heated arguments with educated men and women on this point, and I never converted a soul.
I had a singular advantage over most of the other players in this game. I had spent the previous 15 years writing and inventing software, including lots of prototypes, but I had never been drilled in academia’s prototype-and-test catechism, so I suspected otherwise and set out to find a generative, user-centered process that could make tech products easy to use.
I made so much progress inventing an interaction design methodology that less than five years elapsed between the moment I began and the publication of a 550-page (soon to be bestselling) book on the subject. Four years after that, I published another one. Both books have had multiple editions, are still in print, and many practitioners credit my writing with creating their careers.
Not only was prototyping-and-testing not a part of the discipline I created, I specifically called it out in my books as problematic and non-generative, and recommended against it.
The core tenet of all my work on interaction design is the simple notion that if you can identify the user, learn what they are trying to accomplish, and understand why they want to accomplish it, you have all of the information necessary to generate a good design. And you can do it in advance of any engineering or coding. That’s why I call my approach “goal-directed.”
Just because goal-directed design is conceptually simple, that doesn’t mean it’s easy.
It’s actually very hard to learn about the inner workings of someone else’s mind. Programming is easy by comparison, as is usability testing, and I suspect that is one reason why prototype-and-test is making such a powerful comeback.
Yes. Sad, but true. After many years licking its wounds on the sidelines, prototype-and-test has come roaring back and is gaining currency around the world. Designers now have more powerful tools to create their own prototypes, and a trendy new name has emerged: “design thinking.”
Because designers can create their own prototypes with this method, there is a frisson of generative effort, and testing prototypes in front of users seems a lot like being user-centered. Unfortunately, the whole notion of learning about, understanding, and analyzing the user has been reduced to a few platitudes and maybe a poster on the wall. It becomes a designer-centered process instead.
Prototype-and-test is very ego gratifying because it always keeps the designer in command. It puts the emphasis on the designer’s experimentation rather than on the user’s needs. User testing always yields some morsels for improvement, so everyone inside the building feels like a winner. Management likes it because it never rocks the boat. Developers like it because it rarely requires big changes. You can think of prototype-and-test as institutionalized, professional, sanctioned “faster-horsing.”
Prototyping-and-testing is a very useful pedagogical tool. A student of interaction design needs to learn how human users react to programmed behavior and there is no better hands-on method than writing something interactive and then watching users flail, misinterpret, and curse your work. But just because something is good for training doesn’t mean it’s effective in the real world. Unfortunately, few students today ever learn the lesson that, in the commercial world, the drawbacks of prototyping-and-testing make its use problematic.
When you create a prototype you expose yourself — and your users — to a myriad of cognitive illusions.
There’s the sunk-cost fallacy, confirmation bias, recency and validity illusions, and countless others. The data you gather from your prototype-and-test is deeply compromised by its very Heisenbergian existence. Test subjects want to please, they want to help. When you give them an artifact, they will riff on it, regardless of its appropriateness.
Hey, I’m not knocking prototyping! I lived off of my prototyping skills for a decade, and the products I made changed the world. For example, back in the 1980s, I wrote a prototype that amazed Bill Gates so much that he bought it, and eventually released it as Visual Basic, one of the most successful programming languages. Prototyping is good for invention. It’s good for communication. It’s good for learning. But it comes at a huge cost.
Without a doubt, the biggest cost is that prototypes are artifacts of code, not design. Modern tools blur the lines somewhat, but not enough to make a difference. Code has different imperatives than design does, and more powerful ones. Design is done for users. Code is done for computers. Doing them at the same time creates a very powerful conflict of interest.
If you can wait until you’ve done real user-centered design, then by all means go ahead and prototype your work. Put it in front of users and see how they react.
But every person or organization that I’ve seen do this uses it as an excuse to shorten, reduce, deemphasize, or eliminate the user-centered design phase.
I’m currently judging a major design contest, reviewing over 90 entries by top designers and companies. Only a couple of them used interaction design. Almost all of them used some variant of prototype-and-test. Their solutions are attractive and clever, but are they good for the user? How can the designers answer that question when they don’t even know who the user is? Few of the submissions bother to mention the user at all.
The rock-solid core of interaction design is the user, not the designer. Fans of prototype-and-test need to study human cognition as hard as they study composition, color theory, or information architecture. Read Kahneman’s Thinking, Fast and Slow, or any of Dan Ariely’s books. The smallest word or gesture can change the user’s response.
Interaction design is a lot harder than most practitioners imagine because it asks bigger questions, and it asks them before artifacts exist.
Our field’s endless quest for a title is partly due to widespread misunderstanding of what we do for a living. Do we execute the boss’s vision, or do we advocate for the user’s goals? UXers are the kind of people who want to make others happy, and we’ve been willing to change the name of what we do to please others, but a side effect of changing what we call ourselves is that we change how we behave. It’s easy to make your boss happy at the expense of the user.
I don’t care what name you call it, I mind how you do it. Today, most practitioners don’t call themselves interaction designers, preferring the term “user experience designer.” But waddaya know, when few name it, few do it. Putting a UX designer in an interaction designer’s role gets you…something else. Sure, I sometimes call myself an “experience designer” because, like, I want to be cool. But I’m an interaction designer. That’s because my work is primarily about the user, and only secondarily about me and my ideas.
Doing real interaction design means subordinating your clever inventiveness to the needs of your user.
Knowing your user isn’t hard, but it demands that you let go of your bright ideas first. It’s not about your solutionizing but about your attentiveness. That’s hard. And special. And rare.
I’m disappointed to see interaction design fading from the world. The magnitude of the loss won’t be fully understood for another decade. In tech, the zeitgeist swings like a pendulum between diversity and consolidation. We had diversity with interaction design, now we have consolidation with prototype-and-test. Safer. Business always grows and flourishes in times of consolidation. Users always benefit in times of diversity.
So, practitioners do “design thinking,” or “user experience,” or “product design,” or “service design,” or “product strategy,” but not user-centered interaction design. The word soup just reflects how we have let our mission and our effectiveness drain away. It creates hidey holes for weak performers. It delivers cleverness and coolness and flashy demos, but it probably won’t deliver satisfied users. When that happens, the bean counters will get wise, and they will call an end to the party. There will be resentment and anger. The money people will close the spigot. To paraphrase a popular TV show, backlash is coming.
Winter is here. It’s just unevenly distributed.
This screed originated in a lengthy Twitter rant I posted a few weeks ago that was favorably regarded. I wasn’t sure I should post it here, and I’m still not sure it’s a good idea. I suspect I’ll just come off as a grumpy old man telling kids to get off my lawn. But I think excessive money in the tech world has compelled people to abandon their youthful, altruistic notions of helping the world with better software design. Now they just want to create products that make the most money in the short term. We have to change that.
I have thoughts on design, code, coding designers, and designing coders. Just take a look at some of the other posts on this blog. Here’s a good one.