Hanselman had my attention with his seminal article on the painfully anachronistic icons we still use to this day in computing (long after their relevance to everyday life has passed):
Today I had the pleasure of a different take on some of the most noteworthy icons in computing:
What does all this mean to me?
- We need to remain aware of how we summarize expected actions/outcomes in our interfaces, and try very hard to connect to the target user. Making them learn our meaning just because we’re too lazy to learn theirs is a massive fail.
- Cultural context is key – just because those of us who grew up at a certain time in middle-class North America know what a certain visual used to mean doesn’t mean the other 6 billion people are just as “intuitively clueful”. I grew up in the shadow of the US and am keenly aware of how easy it is to assume that “everyone is like us, right?”
- Sometimes an outcome has no analogue or universal meaning in our experience, and then we should pick something with elegance or abstract individuality. I’m a big fan of doing it right, but when there is no “right”, do it artfully.
I’m disappointed at the continued “blame the victim” framing these kinds of articles take – as if it were a simple matter of changing the daily behaviour of hundreds of millions of consumers, as if it’s their own fault and no one else is culpable for nakedly exploiting this fact of human behaviour. It makes my blood boil.
Let’s take it as a given that when things get so complex that you need to create and force training on masses of end users, you have failed to design a system with which the end users can reasonably succeed.
In the future, as in the past, when people say “so we’re going to build training for that”, I will continue to slow down the conversation and ask, “is there a way for us to refactor the system so that it does not require separate and egregious training?”
Study: Many Consumers Still Untrained On Privacy Risks