Melinda Gates Asked For Ideas to Help Women in Tech: Here They Are
I am psyched that a powerhouse like Gates is taking up the cause, and I sincerely hope she reads this (and many other) articles to get a sense of the breadth of the problem (and how few working solutions there are): the overlap with race, the attempts to bring more women into classrooms, the tech industry’s bias towards the elite schools and companies (and away from the wealth of other experiences). It’s a target-rich environment.
Building a Psychologically Safe Workplace: Amy Edmondson at TEDxHGSE
I am super-pleased to see that the concept of Psychological Safety is gaining traction in the circles and organizations I’m hanging with these days. I spend an inordinate amount of time in my work making sure that my teammates and colleagues feel like it’s OK to make a mistake and to own up to dead ends and unknowns. It will sure make the work easier when I’m not the only one fighting the tide of mistrust/worry/fear that creates an environment where learning, risks and mistakes are discouraged.
Andrea and Scott are two people who have profoundly changed my outlook on what’s possible to bring to the workplace, and how to make a workplace that truly fits what you want (and sometimes need) it to be. They talk about empathy as a first-class citizen, bring actual balance to the day and the communications, and treat their co-workers better than we treat ourselves – and they do it in a fun line of business with real, deep impact for individual customers.
This is the kind of organization that I could see myself in. And which would draw in the kinds of people I enjoy working with each day.
So after meeting them earlier this year in Portland, I’ve followed their adventures via their blog and twitter accounts. This article is another nuanced look at what has shaped their workplace, and I sincerely hope I can do likewise someday.
Reducing Visual Noise for a Better User Experience
These days I find myself apprehensively clicking on Design articles on Medium. While there’s great design thinking being discussed out there, I seem to be a magnet for the ones that complain about why users/managers/businesses don’t “get it”.
As I’d hoped, this was an honest and detailed discussion of the inevitable design overload that creeps into most “living products”, and the factors that drove them to improve the impact for non-expert users.
(I am personally most interested in improving the non-expert users’ experience – experts and enthusiasts will always figure out a way to make shit work, even if they don’t like having to beat down a new door; the folks I care to feed are those who don’t have the energy/time/inclination/personality for figuring out something that should be obvious but isn’t. Give me affordances, not a learning experience: when you’ve got clickable/tappable controls on your page, give me lines/shadows/shading to signify “this isn’t just text”, not just subtle whitespace that cues the well-trained UI designer that there’s a button around that otherwise-identically-styled text.)
Here’s where the performance analytics and “business analytics” companies need to keep an eye or two over their shoulder. This sounds like a serious play for the high-margin customers – a big capital “T” on your SWOT analysis, if you’re one of the incumbents Google’s threatening.
The hardest part of a job search (at least for me) is trying to imagine how I would walk away from a job offer, even if it didn’t suit my needs or career aspirations. Beyond the obvious red flags (dark/frantic mood around the office, terrible personality fit with the team/boss), it feels ungrateful to say “no” based on a gut feel or “there’s something better”. Here are a few perspectives to bolster your self-worth algorithm.
I’m one of the many who fell for this little mental sleight-of-hand. Sounds great, right? A magic proportion that will make any design look “perfect” without being obvious, and will help elevate your designs to the ranks of all the other design geeks who must also be using the golden ratio.
Except it’s crap, as much a fiction and a force-fit as vaccines and autism or oat bran and heart disease (remember that old saw?). Read the well-researched discussion.
This well-meaning dude fundamentally misunderstands Agile, and yet is so expert that he knows how to improve on it. “Shuffling Trello cards” and “shipping often” doesn’t even begin…
Not even convinced *he* has read the Manifesto. Gradle is great, CD is great, but if you have no strategy for Release Management or you’re so deep in the bowels of a Microservices forest that you don’t have to worry about Forestry Management, then I’d prefer you step back and don’t confuse those chainsaw-wielders who I’m trying to keep from cutting off their limbs (heh, this has been brought to you by the Tortured Analogies Department).
Captures all my feelings about the complaint from Designers (and Security reviewers, and all others in the “product quality” disciplines) that they get left out of discussions they *should* be part of. My own rant on the subject doesn’t do this subject justice, but I’m convinced that we *earn* our right to a seat by helping steer, working through the messy quagmire that is real software delivery (not just throwing pixel-perfect portfolio fodder over the wall).
I’m constantly amazed and amused at this kind of “but *I* deserve to be invited too” thinking:
All too often folks don’t want to bring everyone in on Day 1.
And that’s the real problem.
They don’t want to relinquish the (illusion of) control. They want the freedom to make many of the decisions without participating in this crucial collaborative work. Well, guess what? That’s a very costly move: The later everyone is brought in the greater the overall project risk.
In my career, I’ve heard this from the Operations folks, the Support team, the Security high priests, and most recently from the UX zealots.
This usually takes the form of the “but if only they’d included us too in the conversation at the beginning, we wouldn’t be in this mess” fantasy. The longer I watch these folks argue from the sidelines [something I used to do myself], the less sympathy I feel.
Telling us on the development & delivery side of the organization that we need to include you too feels a little like telling a kid they have to watch all the good movies with their parents in the room. I’m sorry, what exactly about that sounds like an incentive?
“Oh, well if you found that security flaw in architecture instead of during test, it would’ve been orders of magnitude cheaper.” As if it’s a pure win-win scenario. The reality, from talking to the folks actually doing the real work, is that rather than *prevent* every statistical possibility, oftentimes we’d rather get the product out in front of people, find out which things *actually* bit them/us on the butt, and only spend time fixing *those* things. Get the product out there capturing revenue months earlier, plus reduce your investment in the long tail of an infinite number of possible issues that would cost schedule and profits to fix up front (and turn out to be non-issues)? Yeah, you don’t need an MBA to make that kind of call.
[Not to mention, “fixing something in architecture is cheaper” assumes (1) that the architecture is communicated, interpreted and implemented carefully and successfully, (2) that new bugs aren’t introduced at every translation layer because the architects abandon their responsibility to follow-through, and (3) that they anticipated and addressed every implementation issue.]
“But if you just invited the UX designers/researchers before starting to talk about product features and ideas, you’ll have a much wider palette of well-designed ideas to work from.” Yes, that’s potentially true – if your designers have a clear idea what the target users need – or if the researchers can turn around actionable findings in a short timeframe – or your UX bigots don’t throw cold water on every speculative idea and colour the conversation with “how crappy everyone but me is”. That dude is real fun at parties.
Are you one of these people I’m picking on? Are you sufficiently pissed off yet? OK, good – then we’re getting close to a defensive wound we’re all still harbouring. Which is the right time to clarify: I absolutely appreciate working with folks who are aligned to our business priorities, and work to get us actionable results in a timely manner that are relevant to the business problem we’re facing. I’ve spent decades now working with security and usability geeks, and some I’ve found to be extremely helpful. Some I’ve found less so. Guess which ones I’ve heard complain like this?
Here’s the pitch from a Product Manager to everyone who’s vying to get a seat at the table: I don’t have enough room at the table to entertain everyone’s ego. You ever try to drive an effective decision-making body when the room (or conference bridge) is stuffed so bad, it looks like a clown car?
Those who I invite to the table are effective collaborators. If you have a concern, make sure it’s the most important thing on your plate, make sure it’s something I can understand, and make damn sure it’s something that’s going to have an impact on our business results. Every time you spend your precious ante on “but what if…” and not “here’s a problem and here are all the possible/feasible/useful solutions, depending on your priorities”, your invitation to the next conversation fades like Marty McFly’s family in that photo.
I used to work for Microsoft (7 years), and for about 20 years I used only PCs. Macs actually intimidated me. Then I got an iPhone.
I was also the family tech support, and every time I saw someone go Mac, I never heard from them again. The Windows folks have me clean up their PCs every year, and call every few months with another infection, another driver problem, another hardware issue.
I’m now on my fourth iPhone, and got a Mac Mini a few years ago to hook up to the TV. Turns out I was intimidated for very little reason. I didn’t understand it all at once, but it was damned easy (in fact too easy – I kept expecting I’d have to do something fancy or painful to make it work the same way my PCs do).
I’ve grown to love my Apple stuff and grown to hate how much excess maintenance, worry and frustration I get from Windows PCs. Now I tell people that if they don’t want to worry for 5+ years and just have the sucker work and stay speedy, get an Apple product. If they want to mow the lawn, make the bed and change the oil every time they want to do something like browse the web or write a document, by all means get a Windows system and start saving for its replacement in a couple of years.
I still know more internals and tricks for working with my work PC, and I’ll never be a guru about my Apple products. And what I’ve learned is, I don’t need to be a guru to be happy and productive with them. It’s very freeing when you realise that the PC promise of ultimate flexibility really means constant maintenance and incompatibilities. You really do get what you pay for.
One week from now, I expect to see you smiling back at me from the audience of CHIFOO, hearing me regale you with great UX moments I’ve discovered in my favourite comic books.
Like this one:
If that’s not quite enough to entice you out to the warrens of NW Portland, consider this: you will be one of a privileged few who get to see me sporting the masterpiece that is the Superman kilt handcrafted by my lovely partner Sara.
The more I see big flat glass touch panels show up in the dashboards of new cars, the less convinced I am that in-dash experience designers are adapting in-your-lap tablet experience design principles and norms to the driving context, and the more it looks like they’re just copying-and-pasting their tablet interaction models straight into the car.
I have a recent Prius and I have long since lost interest in fighting with that hard-to-control-at-a-glance UI. This week I saw this article on the Tesla and felt like if even this pinnacle of beautiful design can’t seem to understand driver needs, it’s going to be a long time (and a lot of road scares and harm) before we ever tune the technology to actually assist and not detract from the driver’s main job.
I love Tesla! The Model S is a gorgeous car. Finally, somebody designed an electric car that doesn’t look terrible. As for the dashboard, I think the UI is awesome. But I’m not sure it doesn’t impact the driver’s ability to pay attention to the road. I like the tactile sensation of knobs and buttons when I’m driving. I feel like I will have to pay much more attention to a fully touch dashboard, and thus pay less attention to the road.
I love the intent to question aging, anachronistic design paradigms, and to experiment with “start from scratch” designs. I too am concerned that this “iPad on your dash” hasn’t yet reached the balance between “fully flexible and context-dependent” and “easy for users to learn and fumble to a correct interaction without massive shifts in attention off the many critical attention foci that surround a moving vehicle every second”. It’s concerning that we’re literally performing these experiments on life-and-limb-threatening and increasingly attention-distracted roadways while the industry teaches itself new interaction models.
Without any physical affordances (e.g. edges/boundaries, permanent/predictable/easily-learnable targets) the 17-inch piece of glass is a nightmare of “at a glance, with little attention” interactions in a car for the driver. An interesting middle ground (which I hope we reach in the future) is a balance of bright, big, non-distracting display and haptic/physically-bounded touch targets [for touch interactions] and/or less-intrusive voice/eye-tracking/gesture-based input models.
For me, trying to touch those never-in-the-same-place-from-UI-to-UI buttons on my Prius’ touchscreen is just dangerous, frustrating and error-prone. At minimum, I’d like to see these “buttons” about 2-3 times their current size, so I can just grossly mash at them rather than have to precisely target them.
These kinds of finger-sized touch targets work fine on a tablet where you have time to concentrate; they’re very counterproductive in a car UI [for the driver], where I’d expect sub-second glance-target-mash-resume interactions should be the interim goal (and “no loss of visual attention on the roadway” should be the final goal).
Colleague 2 said:
The future of aviation dashboards is touch-rich devices (Thales Avionics’ future cockpit won a design award).
But the industry is currently in a bit of a split. The modernization of the flight systems is helping with the more mundane tasks like cruising or climbing to altitude, but creating huge problems with takeoffs/descents/approaches/landings. Instead of knowing the 10 buttons you need to push/turn, you now need to remember which menus things are buried under. In an emergency, the manual systems produce better results: there aren’t any hidden features of the aircraft that you might have accidentally triggered.
There is also a school of thought that all this aircraft automation/simplification is creating pilots who don’t know how to fly well, so when an emergency hits, they are just as clueless as the passengers as to what to do.
I worry about this. I sincerely hope that we never find drivers in the position of having to perform emergency interactions with their car’s controls through a flat-glass, multi-level-menu touch interface. It’s bad enough this has begun to creep into the airline industry; hopefully the car manufacturers are being more cognizant of the vehicle occupants’ lives (though I worry that the buried-deep-in-the-bowels-of-the-corporation’s-design-studios’ interface designers aren’t always made to recognize this as the primary goal of every surface of the vehicle).
At least in the case of an airplane at 30,000 feet, there’s a little time to recover from a significant mistake [boy I sure hope that’s true]; in the case of a car, I can’t call up Robert Hays from the back seat to take over when I screw up – not least because most screw-ups that threaten life and limb afford very low latency.
We need to remain aware of the meaning of how we summarize expected actions/outcomes in our interfaces, and try very hard to connect to the target user. Making them learn our meaning just because we’re too lazy to learn theirs is a massive fail.
Cultural context is key – just because those of us with the experience of growing up at a certain time in middle-class North America are aware of what a certain visual used to mean, doesn’t mean the other 6 billion are just as “intuitively clueful”. I got to grow up in the shadow of the US and am keenly aware of how easy it is to assume that “everyone is like us, right?”
Sometimes an outcome has no analogue or universal meaning in our experience, and we should pick something with elegance or abstract individuality. I’m a big fan of doing it right, but when there is no “right”, do it artfully.
I have to admit, I did nothing last week as social “after-work design & tech fun”. Bad Mike – no gold star for you last week. Instead I found myself researching the Comic Book Storytelling UX talk – it’s six months away but I’m already obsessed with collecting the perfect pages – fun for me and eye-opening for you. Dog help me if this keeps up.
Of note: the October CHIFOO event is a tickets-only event, and tickets are going surprisingly fast. (57 left as of press time) If you’re thinking about it, get yours now!
I’m one of those crazy bastards who loves to track all sorts of data about myself – what I eat, how well I digest it, which scary movies I’ve seen more than five times, how often I’ve listened to that embarrassing electronic-psytrance-chill group.
So I was a goner when I spotted the Jawbone UP in my last buzz through the Apple Store. Sara suggested it as an early birthday present to myself, which turned a normally-agonizing months-long decision into a two-minute mental debate over colour, size and which credit card to use.
The unboxing-and-configuration process was a spiritual experience. Every design choice seemed to hit my endorphin button, from information design (what questions were asked, and what little information they needed to convey or discover) to interaction design (nice simple choices laid out in a pleasing and effective manner) to great visual design (colours, sizing, contrast).
However, within a few days I started to itch for more – this feels like a device with so much potential, and it feels close enough for me to touch but teasingly just out of reach.
The band doesn’t seem to register those times when I’ve fully awakened but didn’t “get up”. I’ve had that experience at least once a night since getting this band, but only once in the first four nights did it register that alert-but-still-lying-down state as “awake”. Since then I’ve seen more “orange slices” in my sleep graphs, so maybe it’s tuning itself to my behaviour – or maybe I’m just thrashing more.
Steps while horizontal
I had an experience one morning where I never left the bed and yet registered 286 steps. How can that be? On another occasion, I checked the band before and after getting in the car and driving for half an hour, and no steps were registered – so obviously some types of movement scenarios are being handled right.
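For what it’s worth, a naive peak-detection step counter – a sketch of the general technique, emphatically *not* Jawbone’s actual (proprietary) algorithm, with threshold and debounce values that are pure guesses on my part – shows why rhythmic arm movement in bed can register as “steps” while smooth driving vibration doesn’t:

```python
import math

def count_steps(magnitudes, threshold=1.3, min_gap=5):
    """Naive peak-detection step counter.

    magnitudes: accelerometer magnitude samples (in g; ~1.0 at rest).
    A 'step' is counted whenever the signal exceeds `threshold`, with at
    least `min_gap` samples between counted peaks to debounce. Both
    values are illustrative guesses, not a real device's tuning.
    """
    steps = 0
    last_peak = -min_gap
    for i, m in enumerate(magnitudes):
        if m > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# Rhythmic arm swings (e.g. tossing in bed) look a lot like walking:
swinging = [1.0 + 0.6 * math.sin(i / 3.0) for i in range(120)]
# Smooth car vibration never crosses the peak threshold:
driving = [1.0 + 0.1 * math.sin(i / 2.0) for i in range(120)]

print(count_steps(swinging))  # counts plenty of "steps"
print(count_steps(driving))   # counts none
```

Which is roughly consistent with what I observed: the band filters out low-amplitude vibration nicely, but any periodic, walking-sized arm motion fools it.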
Logging food is just as much of a PITA as in any other calorie-counter app, and I’m not sure that level of granularity will ever be integrated into cool/insightful reports. The food log is a very rich database, but for someone like me who eats a lot of home-cooked food (due to my celiac disease and lactose intolerance, I can’t eat as much prepared food as my laziness would like), I end up having to make some pretty undesirable choices: (a) forgo the nutrition information entirely on a “custom entry in my food library”, (b) choose something that might be close from the available (but hard to preview) restaurant/commercial foods, or (c) just take ridiculous guesses as to the nutritional values of meals prepared from the fridge.
Why does the wake alarm still go off after I’ve switched UP to “I’m awake” mode? Isn’t this meant to be a “smart alarm” that wakes me out of sleep? And haven’t I proven I’m awake by holding down the “mode switch” button for the required two continuous seconds? I find the silent alarm redundant and annoying for this reason, and after a couple of weeks I just gave up and turned it off – even though it would be brilliant to wake me at “the ideal time in my sleep cycle” on those rare occasions when I’m sleeping way past the time I should.
What is the “correct” (by-design) expectation for when I should trigger sleep mode: when I lie down? When I start to feel sleepy? When I intend to try to get to sleep? I don’t know what I’m supposed to be learning from “how long I was awake before sleep kicked in”, or whether it matters. Y’know what would be interesting? A graph of “time you crawled into bed vs. amount of time before you fell asleep”. That would tell me whether I’m really wasting time going to bed early or not (i.e. am I getting more sleep when lying down earlier, or does that only weakly correlate?).
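That correlation would be trivial to compute if the app gave us an honest data export. A quick sketch with made-up numbers (hypothetical values for illustration, not my actual UP data):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical week of data: bedtime as minutes past 21:00,
# vs. minutes spent lying awake before falling asleep.
bedtime_min = [30, 45, 90, 120, 150, 60, 180]
sleep_latency = [30, 35, 20, 10, 8, 40, 5]

r = pearson(bedtime_min, sleep_latency)
print(round(r, 2))  # strongly negative for this made-up week
```

A strongly negative r like this fake week’s would mean the earlier I crawl into bed, the longer I lie there awake – exactly the “am I wasting time going to bed early?” question I’d want the app to answer for me.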
Mood correlations: provide me trend correlations to show how my mood correlates with sleep or food intake (if any).
Qualitative markers: let me input other qualitative markers on the graphs, like “took a pill” (pain, anxiety, antacid).
Full data dumping: I took my band and a computer to a recent Quantified Self session, thinking we could hack on the rich data that the Lifeline seemingly keeps. However, what we got from the web site was a cryptic (terribly annotated) and heavily-edited data set – none of my meals or nutrient data, nothing I could see that captured my moods (especially not the custom labels that are so easy to add), no rich data showing my actual steps (not even in short-time-increment intervals like “how many steps in each 5-minute period”).
Community: I’d dearly love access to a community of UP wearers (e.g. online forum) to discuss, compare results, insights and guesses of what’s going on. Following a twitter hashtag isn’t nearly there.
Food library (1): how can I edit/delete an entry that I screwed up? I misread the Servings value on a jar of food, and now the Library entry there is forevermore going to be double what it should be (worse, because I combined it with a single-serving food – I created “peanut butter toast” as a one-click entry). Now I can either delete and re-create that entry and edit all previous meals, create another entry and try to remember which one to choose, or “dial the portion size” on the bad entry to an approximate value.
Food library (2): wouldn’t it be awesome to one-click copy a “meal” worth of nutritional-values-populated food from one meal (or one ‘team member’) to another? This one came from my partner – she’s furiously adding all the ingredients one by one for each meal so she gets accurate calorie, protein, etc. data. Now why couldn’t I “copy” her meal and add it to my own feed? I often eat the same thing she (or a friend who’s also part of my team) eats, but I don’t get any real “writeable/reusable” value out of that team membership.
Auto-interpolating sleep: UP has a manual toggle into “sleep” mode (telling my device when I’m sleeping), which forces me to either (a) toggle it as soon as I crawl into bed, so that I never forget, or (b) try to catch myself going to sleep just before I actually slip into unconsciousness [or risk losing the tracked sleep entirely]. I’d love for the UP to “sense” when I’m resting on my back and call it “sleeping” without me having to remember to toggle it back and forth. Heck, even if it recorded a session that wasn’t actually sleep, couldn’t I later “curate” my data to re-categorize the false positive?
Edge-case bug: Irritatingly, if I forget to switch it back to “awake” mode before plugging it in to dump its data to my phone, it turns out that the UP app doesn’t log that sleep as “sleep” but just as another form of activity. In other words, someone forgot to test and optimize for “what if the user plugs in their UP while the band still thinks the user is asleep?” I’m sure to the designers it sounds like a non-scenario, but it’s happened to me once already and I doubt it’ll be the last time.
A couple of months into bonding myself to this techno-upgrade, I killed it. Inadvertently, but still. I followed instructions, never immersing it fully under water, but instead took a nice long shower (officially endorsed by Jawbone) to refresh myself after a couple of weeks of bedrest. (That’s another story for another time.) Darned thing never responded to another button press again, and I can only believe that somehow its water resistance became less so.
So I dropped a note to the very polite and helpful folks at Jawbone support, who after supplying me with instructions for soft- and hard-reset (neither of which worked, but both of which were reassuring for future occasions when I might find myself painted into a firmware-not-hardware issue), shipped out a replacement band at their (warranty-amortized) expense.
They shipped it to me in an environmentally-sensitive envelope, but with not one iota of expectation-setting, preparation or documentation to let me know (a) if the replacement band needed to be charged [it did, it was dead] or (b) how to properly convince my UP app to forget about the old band and orient itself to a new one. I dug around like a madman to find out if there was anything I should NOT do with a replacement band, and finding nothing I finally just synced it with the app and was pleasantly surprised at how little I had to do.