Occupied Neurons, late September edition

Modern Agile (Agile 2016 keynote)


This call to advance Agile beyond 2001 – and beyond the fossilization of process and “scale” – is refreshing. It resonates with me in ways few other discussions of “is there Agile beyond SCRUM?” have, because it provides an answer upon which we can build actual debate, refinement and objective experiments.

While I’m sure there are those who would wish to quibble over perfecting these new principles before committing to their underlying momentum, I for one am happy to accept this as an evolutionary stage beyond the Agile Manifesto and use it to further my teams’ evolution and my own.

Forget Technical Debt – Here’s How to Build Technical Wealth


I had the pleasure of meeting and talking with (mostly listening and learning intently, on my part) Andrea Goulet at the .NET Fringe 2016 conference. Andrea is a refreshing leader in software development because she leads not only through craftsmanship but also through communication as a key tenet of success with her customers.

Andrea advances the term “software remodelling” to properly frame the work that deals with technical debt. Rather than approaching it as a failing, looking at it “as a natural outgrowth of occupying and using the software” draws heavily and well on the analogy of remodelling your home.

Frequent Password Changes Are The Enemy of Security


After a decade or more of participating in the constant ground battle of information security, it became clear to me that the threat models and the state of the art in information warfare have changed drastically; the defenses have been slow to catch up.

One of the vestigial tails of 20th-century information security is the dogmatically-prescribed “scheduled password change”.

The idea back then was that we had so few ways of knowing whether someone was exploiting an active, privileged user account, and only single-factor (password) authentication as a means of protecting that digital privilege, that it seemed reasonable to force everyone to change passwords on a frequent, scheduled basis. That way, if an attacker somehow found your password (say, on a sticky note by your keyboard), *eventually* they would lose access because they wouldn’t know your new password.

So many problems with this – for example:

  • Password increments – many of us with multiple frequently-rotating passwords just tack an incremented number onto the end of the last password when forced to change – not terribly secure, but the only tolerable defense when saddled with this unnecessary burden
  • APTs and password databases – most password theft these days doesn’t come from random guessing; it comes from hackers getting access to the entire database at the server, or from persistent malware on your computer/phone/tablet or on public devices like wifi hardware that MITMs your password as you send it to the server
  • Malware re-infections – changing your password only helps if it isn’t as easy to steal *after* the change as it was *before* – there’s not much point in changing passwords when you can be attacked just as easily afterward (and attackers are always coming up with new zero-days to get you)

I was one of the evil dudes who reflexively recommended this measure to every organization everywhere. I apologize for perpetuating this mythology.

Problems-to-solve: finding meetup-friendly spaces in Portland


Sometimes I encounter a problem in my day-to-day life that I find so frustrating – and so obvious (hasn’t this been thought of by some PM already? shouldn’t it have been caught in PO/PM acceptance validation, during usability testing, or in the User Story’s acceptance criteria?) – that I can’t help thinking of how I’d have pitched it to the engineering team myself.

Think of this as a Product Guy’s version of “fantasy football” – “fantasy product ownership/management”.


User Story: as the organizer of a Meetup in Portland, I want to be able to quickly find all the meetup-friendly spaces in Portland so that I can book my meetup in a suitable space.

BDD Scenario: Given that I have an existing meetup group AND the meetup does not have a booked meetup space, when I search for available meetup-friendly spaces in Portland, then I see a listing of such spaces including address, contact info and maximum number of attendees.


I’ve been an active participant in the meetup scene in Portland for a few years now. I’ve briefly co-led a meetup as well, and been solicited to help organize a number of other meetups.

One of the phenomena I’ve observed is how challenging it can be for some meetups to find a space for their meetings. Many meetups find one space, lock it in for a year and never roam. Some meetups have to change spaces from month to month, and regularly put out a call to attendees to help them find suitable locations. And once in a while, a meetup has to change venues for space or other logistical reasons (e.g. a very popular speaker is coming to town).

Whenever I talk to meetup organizers about this part of the job, it strikes me as odd that they’re all operating like this is a high-school gossip circle: no one has all the information, there is no central place to find out where to go/who to talk to, and most people are left to ask friends if they happen to know of any spaces.

In a tech-savvy city such as Portland, where we have dozens of meetups every day, and many tech conferences a month, it’s surprising to find that getting a meetup successfully housed relies so much on word of mouth (or just using your employer’s space, if you’re lucky to be in such a position).

I’ve been at meetups in some great spaces, nearly all of them in the public-friendly spaces of tech employers across Portland. Where is the central directory of these spaces? Is there an intentional *lack* of public listing, so that these spaces don’t get overrun? Is this a word-of-mouth resource so that only those event organizers with a personal referral are deemed ‘vetted’ for use?

From the point of view of the owners of these spaces, I can imagine there’s little incentive to make this a seven-nights-a-week resource. Most of these employers don’t employ staff to stick around at night to police these spaces; many of them seem to leave the responsibility up to an employee [often an existing member of the meetup group] to chaperone the meetup attendees and shoo them out when they’re too tired or have to go home/back to work.

My Fantasy Scenario

Any meetup organizer in Portland will be able to find suitable meetup spaces and begin negotiating for available dates/times. A “suitable” space would be qualified on such criteria as:

  • Location
  • Number of people the space can legally accommodate
  • Number of seats available
  • Days and hours the space is potentially available (e.g. M-F 5-8, weekends by arrangement)
  • A/V availability (projector, microphone)
  • Guest wifi availability
  • Amenities (beer, food, snacks, bike parking)
  • Special notes (e.g. door access arrangements, must arrange to have employee chaperone the space)
  • Contact info to inquire about space availability [email, phone, booking system]

Future features

I can also see a need for a service that similarly lists conference-friendly spaces around town – especially for low-budget conferences that can’t afford the corporate convention spaces. I’ve been at many community-oriented conferences here in Portland, and I’m betting the spaces I’ve visited [e.g. Eliot Center, Armory, Ambridge, Portland Art Museum, Center for the Arts] aren’t anywhere near the full set of secret treasures that await.

Additional criteria for conference-friendly spaces:

  • Number of separate/separable rooms and their seating
  • Additional limitations/requirements, e.g. if serving food/drinks, you must use the contracted catering

Workarounds Tried

Workaround: the http://workfrom.co service includes a filter for “Free Community Spaces”, labelled “Community spaces are free and open to all, no purchase required. Common community spaces include libraries, student unions and banks.” Unfortunately, as of now there are only five listings (three of them public library spaces).

Workaround: I was told by a friend that Cvent has a listing of event spaces in Portland. My search of their site led to this searchable interface. Unfortunately, this service appears to be more oriented to helping someone plan a conference or business meeting and keeping attendees entertained/occupied – where “venue type” = “corporate”.

Coding Again? Experimenting with the Marvel API

I’ve been hanging around developers *entirely* too much lately.

These days I find myself telling myself the story that unless I get back into coding, I’m not going to be relevant in the tech industry any longer.

Hanging out (aka volunteering) at developer-focused conferences will do that to you.

Volunteering on open source projects will do that to you (the jQuery Foundation’s infrastructure team).

Interviewing for engineering-focused Product Owner and Technical Product Manager roles will do that to you. (Note: when did “technical” become equivalent to “I actively code in my day job/spare time”?)

One of the hang-ups I have that keeps me from investing the immense amount of grinding time it takes to make working code is that I haven’t found an itch to scratch that bugs me enough that I’m willing to commit myself to the effort. Plenty of ideas float their way past my brain, but very few (like CacheMyWork) get me emotionally engaged enough to surmount the activation energy necessary to fight alone past all the barriers: lonely nights, painful problem articulation, lack of buddy to work on it, and general frustration that I don’t know all the tricks and vocabulary that most good coders do.

Well, it finally happened. I found something that should keep me engaged: creating a stripped-down search interface into the Marvel comics catalogue.  Marvel.com provides a search on their site but I:

  1. keep forgetting where they buried it,
  2. find it cumbersome and slow to use, and
  3. never know if the missing references (e.g. appearances of Captain Marvel as a guest in others’ comics that aren’t returned in the search results) are because the search doesn’t work, or because the data is actually missing

Marvel launched an API a couple of years ago – I heard about it at the time and felt excited that my favourite comics publisher had embraced the Age of APIs. But I didn’t feel like doing anything with it.

Fast forward two years: I’m a diehard user of Marvel Unlimited, my comics reading is about half-Marvel these days, and I’m spending a lot of time trying to weave together a picture of how the characters relate, when they’ve bumped into each other, what issue certain happenings occurred in, etc.

Possible questions I could answer if I write some code:

  • How socially-connected is Spidey compared with Wolverine?
  • When is the first appearance of any character?
  • What’s the chronological publication order of every comic crossover in any comics Event?

Possible language to use:

  • C# (know it)
  • F# (big hawtness at the .NET Fringe conf)
  • Python (feel like I should learn it)
  • TypeScript (ES6-like JavaScript with static types and other frustration-killers)
  • ScriptCS (a scriptable C#)

More important than choice of language though is availability of wrappers for the API – while I’m sure it would be very instructive to immediately start climbing the cliff of building “zero tech” code, I learn far faster when I have visible results, than when I’m still fiddling with getting the right types for my variables or trying to remember where and when to set the right kind of closing braces.

So for the sake of argument I’m going to try out the second package I found – Robert Kuykendall’s “marvelous” python wrapper: https://github.com/rkuykendall/marvelous
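For anyone poking at the API directly rather than through a wrapper, the authorization scheme Marvel documents is simple enough to sketch: every request carries a timestamp, your public key, and an md5 of timestamp + private key + public key. A minimal Python sketch (the keys here are obviously placeholders, and you’d fetch the URL with whatever HTTP client you like):

```python
import hashlib
import time
from urllib.parse import urlencode

MARVEL_BASE = "https://gateway.marvel.com/v1/public"

def marvel_auth_params(public_key, private_key, ts=None):
    """Build the auth query params the Marvel API requires:
    a timestamp, your public key, and md5(ts + privateKey + publicKey)."""
    ts = ts or str(int(time.time()))
    digest = hashlib.md5((ts + private_key + public_key).encode()).hexdigest()
    return {"ts": ts, "apikey": public_key, "hash": digest}

def character_search_url(name, public_key, private_key):
    """URL to look up a character by exact name, e.g. 'Captain Marvel'."""
    params = {"name": name, **marvel_auth_params(public_key, private_key)}
    return f"{MARVEL_BASE}/characters?{urlencode(params)}"
```

The wrapper hides all of this, of course – which is exactly why I’m starting with the wrapper.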

See you when I’ve got something to report.

Occupied Neurons, early July 2016: security edition

Who are you, really: Safer and more convenient sign-in on the web – Google I/O 2016

Google shared some helpful tips for web developers to make it as easy as possible for users to securely sign in to your web site, from the Google Chrome team:

  • simple-if-annoying-that-we-still-have-to-use-these attributes to add to your forms to assist Password Manager apps
  • A Credential Management API that (though cryptically explained) smoothes out some of the steps in retrieving creds from the Chrome Credential Manager
  • This API also addresses some of the security threats (plaintext networks, Javascript-in-the-middle, XSS)
  • Then they discuss the FIDO UAF and U2F specs – where the U2F “security key” signs the server’s secondary challenge with a private key whose public key is already enrolled with the online identity the server is authenticating

The U2F “security key” USB dongle idea is cute and useful – it requires the user’s interaction with the button (can’t be automatically scraped by silent malware), uses RSA signatures to provide strong proof of possession and can’t be duplicated. But as with any physical “token”, it can be lost and it requires that physical interface (e.g. USB) that not all devices have. Smart cards and RSA tokens (the one-time key generators) never entirely caught on either, despite their laudable security laurels.

The Credential Manager API discussion reminds me of the Internet Explorer echo chamber from 10-15 years ago – Microsoft browser developers adding in all these proprietary hooks because they couldn’t imagine anyone *not* fully embracing IE as the one and only browser they would use everywhere. Disturbing to see Google slip into that same lazy arrogance – assuming that web developers will assume that their users will (a) always use Chrome and (b) be using Chrome’s Credential Manager (not an external password manager app) to store passwords.

Disappointing navel-gazing for the most part.

Google’s password-free logins may arrive on Android apps by year-end

Project Abacus creates a “Trust Score API” – an interesting concept which intends to supplant the need for passwords or other explicit authentication demands by taking ambient readings from sensors and from user interaction patterns with the device, to determine how likely it is that the current holder/user matches the identity being asserted/authenticated.

This is certainly more interesting technology, if only because it allows for the possibility that any organization/entity that wishes to set their own tolerance/threshold per-usage can do so, using different “Trust Scores” depending on how valuable the data/API/interaction is that the user is attempting. A simple lookup of a bank balance could require a lower score than making a transfer of money out of an account, for example.
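As a purely hypothetical illustration (Google never published the Trust Score API itself, so every name and number below is invented), the per-action threshold idea is as simple as:

```python
# Hypothetical sketch: each action declares how much ambient trust it demands.
ACTION_THRESHOLDS = {
    "view_balance": 0.4,    # low-value read: tolerate more doubt
    "transfer_funds": 0.9,  # high-value action: demand near-certainty
}

def is_allowed(action, trust_score):
    """Allow the action only if the current trust score meets its threshold."""
    return trust_score >= ACTION_THRESHOLDS[action]
```

So a device reporting a middling trust score could still show you your balance while refusing to move money until you re-establish who you are.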

The only trick to this is that the user must allow Google to continuously measure All The Thingz from the device – listen on the microphone, watch all typing, observe all location data, see what’s in front of the camera lens. Etc. Etc. Etc.

If launched today, I suspect this would trip most users’ “freak-out” instinct and fail, so kudos to Google for taking it slow. They’re going to need to shore up the reputation of Android phones – and of Android’s inscrutably cryptic (if comprehensive) permissions model and its sandboxing – before users will ever grant Google widespread trust to watch everything they’re doing.


Looks like Microsoft is incorporating “widely-used hacked passwords” into the set of password rules that Active Directory can enforce against users trying to establish a weak password. Hopefully this’ll be less frustrating than the “complex passwords” rules that AD and some of Microsoft’s more zealous customers like to enforce, which make it nigh-impossible to know what the rules are, let alone give a sentient human a chance of choosing a password you might want to type 20-50 times/day. [Not that I have any PTSD from that…]

Unfortunately, they do a piss-poor job of explaining how “Smart Password Lockout” works. I’m going to take a guess at how it works, and hopefully someday it’ll be spelled out. It appears they’ve got some extra smarts in the server-side AD password authentication routine – it can effectively determine whether the bad password attempt came from an already-known device or not. This means AD is keeping a rolling cache of “familiar environments” – likely one that ages out older records (e.g. flushing anything older than 30 days). What’s unclear is whether they’re recording remote IP addresses, remote computer names/identities, remote IP subnets, or some new “cookie”-like data that wasn’t traditionally sent with the authentication stream.

If this is based on Kerberos/SAML exchanges, then it’s quite possible to capture the remote identity of the computer from which the exchange occurred (at least for machines that are part of the Active Directory domain). However, if this is meant as a more general-purpose mitigation for accounts used in more Internet (not Active Directory domain) setting, then unless Active Directory has added cookie-tracking capabilities it didn’t have a decade ago, I’d imagine they’re operating strictly on the remote IP address enveloped around any authentication request (Kerberos, NTLM, Basic, Digest).

Still seems a worthwhile effort – if it allows AD to lockout attackers trying to brute-force my account from locations where no successful authentication has taken place – AND continues to allow me to proceed past the “account lockout” at the same time – this is a big win for end users, especially where AD is used in Internet-facing settings like Azure.

Occupied Neurons, late May 2016

Understanding Your New Google Analytics Options – Business 2 Community

Here’s where the performance analytics and “business analytics” companies need to keep an eye or two over their shoulder. This sounds like a serious play for the high-margin customers – a big capital “T” on your SWOT analysis, if you’re one of the incumbents Google’s threatening.

10 Revealing Interview Questions from Product Management Executives

Prepping for a PM/PO job interview? Here are some thought-provoking questions you should think about ahead of time.

When To Decline A Job Offer

The hardest part of a job search (at least for me) is trying to imagine how I would walk away from a job offer, even if it didn’t suit my needs or career aspirations. Beyond the obvious red flags (a dark/frantic mood around the office, a terrible personality fit with the team/boss), it feels ungrateful to say “no” based on a gut feel or on “there’s something better”. Here are a few perspectives to bolster your self-worth algorithm.

The Golden Ratio: Design’s Biggest Myth

I’m one of the many who fell for this little mental sleight-of-hand. Sounds great, right? A magic proportion that will make any design look “perfect” without being obvious, and will help elevate your designs to the ranks of all the other design geeks who must also be using the golden ratio.

Except it’s crap, as much a fiction and a force-fit as vaccines and autism or oat bran and heart disease (remember that old saw?). Read the well-researched discussion.

Agile Is Dead

This well-meaning dude fundamentally misunderstands Agile, yet is so sure of his expertise that he knows how to improve on it. “Shuffling Trello cards” and “shipping often” doesn’t even begin…

Not even convinced *he* has read the Manifesto. Gradle is great, CD is great, but if you have no strategy for Release Management or you’re so deep in the bowels of a Microservices forest that you don’t have to worry about Forestry Management, then I’d prefer you step back and don’t confuse those chainsaw-wielders who I’m trying to keep from cutting off their limbs (heh, this has been brought to you by the Tortured Analogies Department).

Perspectives on Product Management (if you’re asking)

As part of a recent job application, they asked for my responses to a number of interesting questions regarding my approach to Product Management.  In the spirit of Scott Hanselman’s “don’t waste your keystrokes”, I’m sharing my thoughts to give more folks the benefit of my perspective.

As a “Product Manager”, what are the product management challenges in a Start-Up (Private) company environment?

Key is determining which of the possible ideas and market gaps you’ve identified are real winners with significant, long-term revenue opportunity – without the benefit that larger, older companies have of market/revenue history to guide your guesses.

Choosing among an infinite range of new product ideas is much harder and feels more arbitrary than choosing among the more focused features and enhancements that an established customer base can provide you.

How do the product management challenges of a Start-Up (Private) company differ/compare to an established Fortune 500 environment?

In the established Fortune 500 companies I worked for, the challenges included weighing the benefits/risks of cannibalizing existing products, making incremental market share improvements in mature/low-growth markets, and how to encourage existing customers to buy more of the products you’re offering when you’ve maxed out their capacity to buy the ones they already have.

Start-up companies have the opposite challenges: establishing *any* market share in pre-existing markets (trying to gain visibility and credibility with target customers), making the “first sale” and determining what are the actionable and actual barriers-to-purchase in the market when you have few customers to quiz for any leading/lagging indicators.

Please describe (2-3 sentences) your experience developing a software product or service in a product manager role.

I’ve managed a range of software opportunities, from those I’ve birthed from scratch myself (and managed through many major releases and business needs changes over the years), to managing a pair of employee-focused productivity solutions, to juggling a wide range of developer-focused software solutions that had competing and sometimes conflicting customer requirements.

I’ve always managed teams “too small for the job”, and always focused on making sure they are more confident and prepared to deliver the software their customers actually need (no matter how unclear the initial requirements may have been).

My “business value” focus is weighted towards ensuring that the primary use case is never difficult to follow, that we design for the user with the least experience with/attention to the system, and that we’re always focused on making incremental improvements based on actual customer feedback rather than infinite analysis paralysis that halts good experiment-driven development.

Please describe your product management experience where “need” has been identified, but not “demand”.

The product I managed the longest was a set of business applications that I led because I was tired of seeing my colleagues sending around spreadsheets, and knew that they would be much better prepared and more effective with a centralized, real-time solution. I further determined that the engineers who were the day-to-day users of the system needed not only to know what they were expected to do, but how they were expected to know when they had successfully completed the required tasks prescribed by my solution.

Neither of these focuses were requested by my stakeholders and customers – in fact the former was something I was actively encouraged *not* to pursue by my management, and the latter was something that my management believed was irrelevant to the purpose.

In the end, this solution went from a system no one asked for or cared about to the most critical piece of infrastructure that measured and enabled the Security Development Lifecycle across Intel.

How do you deal with frequent product goal changes?

I have two main strategies I pursue:

I reduce the amount of time I invest into “gold-plating” (grooming, refining, updating) the roadmap or the product Backlog artifacts – up front I’ll define their “why” and primary goals, but I spend as little time as possible (sometimes just a few minutes, based on my intuition and initial impressions) to refine these artifacts from e.g. “could be 10-40 points of work” to “I’m pretty confident this is 10-15 points of work”. I take this approach with the knowledge that, as the timetable approaches when we’ll actually deliver the items, (a) many of them will have been discarded [for which any refining effort would have been entirely wasted], (b) we’ll usually have to significantly revise what we were initially focused on as new market demands and insights become available, and (c) we’ll waste the fewest engineering cycles on final evaluation and estimation by delaying the detailed investigation until just before the items need to be delivered.

I stay tuned into all market/customer feedback channels – listening closely to Sales, Support and my direct customer interactions, effectively “leaning in” to the market/customer volatility to get a clear idea of how the fluctuations at our customers are turning into fluctuations in their requirements of us. For example, when a customer’s business is radically changing, or they’re subject to significant changes in what they’re expected to deliver (e.g. new business, losing their old business, changes in management or in *their* customer base), that has significant downstream impact – they’ll frequently change their mind, or forget the last thing they requested. In cases where we have that kind of apparent chaos in the signals we’re getting from significant customers, I’ve made the effort to get directly in touch with the customer and have a longer conversation to help us understand what’s going on behind the scenes – and to help them prioritize among the stream of conflicting requests. This effort to “lean in” and engage the customer directly also has the beneficial effect of helping me determine which channels and individuals (e.g. sales, support) are reliable sources of information, and which ones warrant fewer immediate “drop everything” reactions from us.

Product Management when the customer’s problem/pain is identified is easy. How do you manage a product when the customer has not identified the problem/pain?

The classic answer is “ask them ‘Why’ five times until you get to the root of their problem”. However it’s rarely that simple – some customers aren’t able to articulate, some get defensive, and sometimes it takes a few rounds of conversation (with thinking in between) for them to articulate/admit what’s really going on. Some customers can own up immediately if only you ask directly.

In my experience, I have learned to ask the following question, when they demand a specific change or enhancement to the software I’ve helped deliver: “What decisions will you be able to make with this new information, or actions will you be able to take with this change in the system, that you aren’t currently able to make without it?” That, or variations on this question, usually helps me and the customer sort between “ideas that sound good or that make me feel better” and “those that will have a material impact on our business”.

The former are worth considering too – sometimes the usability, the pleasure evoked by a smoother experience, makes for a much more ‘sticky’ product (i.e. one the customer is more likely to renew/purchase again). However, in my experience, if you can’t identify the latter, or you ignore it in favor of the former, you risk allowing frustration and dissatisfaction to fester and ultimately doom your relationship with the customer.

Occupied Neurons, April 2016

So many Product Managers are making it up as they go along – generating whatever kinds of artifacts will get them past the next checkpoint and keep all the spinning plates from veering off into ether. This is the first time in a long time I’ve seen someone propose some viable, useable and not totally generic tools for capturing their PM thinking. Well worth a look.

The “BUT” model for Product Management is a hot topic, and there’s a number of folks taking a kick at deciphering it in their context. I’ve got a spin on it that I’ll write about soon, but this is a great take on the model too.

Captures all my feelings about the complaint from Designers (and Security reviewers, and all others in the “product quality” disciplines) that they get left out of discussions they *should* be part of. My own rant on the subject doesn’t do this subject justice, but I’m convinced that we *earn* our right to a seat by helping steer, working through the messy quagmire that is real software delivery (not just throwing pixel-perfect portfolio fodder over the wall).

An unconference to expand awareness of a movement among leading thinkers on how to organize work in the 21st century. Looks fascinating – unconference format is dense and high-learning, the subject is still pretty fresh and new (despite the myriad of books building up to this over the last decade), and the energy in the Portland community is bursting.

Meetups where you’ll find Mike’s hat, Spring 2016 edition

Occasionally I’ll tell people I meet about all the meetups I have so much fun at.

Or rather, I’ll try to enumerate them all, and fail each and every time.

Primarily because there’s so many meetups I like to check in on.

So occasionally I’ll enumerate them like this, so that my friends have a valiant hope of crossing paths with me before the amazing event has passed.

Meetups I’m slavishly devoted to

Meetups I’ll attend anytime they’re alive

Meetups I sample like caviar – occasionally and cautiously

Recent additions that may soon pass the test of my time


Pruning features via intelligent, comprehensive instrumentation

Today’s adventures in the LinkedIn Product Management group gave us this article:

The critical statement (i.e. the most/only actionable information) in the article is this:

Decide a “minimum bar of usage/value” that every feature must pass in order for it to remain a feature. If a new feature doesn’t hit that bar in some set period of time, prune it.
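That rule is trivial to mechanize once the usage data actually exists (which, as I argue below, is the hard part). A hypothetical sketch, with every threshold purely illustrative:

```python
from datetime import datetime, timedelta

def features_to_prune(usage_counts, launch_dates, min_uses=100, grace_days=90, now=None):
    """Flag features that haven't cleared the minimum usage bar
    within their grace period. Thresholds are illustrative, not gospel."""
    now = now or datetime.utcnow()
    prune = []
    for feature, count in usage_counts.items():
        age = now - launch_dates[feature]
        # Only judge features old enough to have had a fair chance.
        if age >= timedelta(days=grace_days) and count < min_uses:
            prune.append(feature)
    return prune
```

The code is the easy ten percent; feeding it trustworthy per-feature usage counts is the other ninety.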

I’d love to hear from folks who are able to prove with data that a feature is not getting the level of usage that we need to justify its continued existence.  AFAIK, whether it be a desktop, mobile or web/cloud app, instrumenting all the things so that we have visibility into the usage of every potentially-killable feature is a non-trivial (and sometimes impractical) investment in itself.

I’m not even arguing for getting that work prioritized enough to put it in up front – that’s just something that if it’s technically feasible, we should *all* do to turn us from cavemen wondering what the stars mean to explorers actually measuring and testing hypotheses about our universe.

I’m specifically inquiring how it’s actually *done* in our typical settings.  I know from having worked at New Relic what’s feasible and what are the limits of doing this in web/cloud and mobile settings, and it’s definitely a non-trivial exercise to instrument *all* the things (especially when we’re talking about UI features like buttons, configuration settings and other directly-interactive controls).  It’s far harder in a desktop setting (does anyone still have a “desktop environment”?  Maybe it’s just my years of working at Microsoft talking…).

And I can see how hard it is to not only *instrument* the settings but gather the data and catalogue the resulting data in a way that characterizes both (a) the actual feature that was used and, even better, (b) the intended result the user was trying to achieve [i.e. not just the what or how but the *why*].

Developers think one way about naming the internals of their applications – MVC patterns, stackoverflow examples, vendor cultures – and end users (and likely/often, we product managers) think another way.  Intuitive alignment is great, but hardly likely and usually not there.  For example, something as simple as a “lookup” or “query” function (from the engineering PoV) is likely thought of as a “search” function by the users.  I’ve seen far more divergent, enough to assume I won’t follow if I’m just staring at the route/controller names.

If I’m staring at the auto-instrumented names of an APM vendor’s view into my application, I’m likely looking at the lightly-decorated functions/classes/methods as named by the engineers – in my experience, these are terribly cryptic to a non-engineer.  And for all of our custom code that wove the libraries together, I’m almost certainly going to have to have the engineers add in custom tracers to annotate all the really cool, non-out-of-the-box features we added to the application.  Those custom tracers, unless you’ve got an IA (information architecture) nut on the team to get involved in the naming, will almost certainly look like a foreign language.

Does that make it easy for me to find the traces of usage by the end users of a specific feature (e.g. an advanced filtering textbox in my search function)?  Nope, not often, but it’s sure a start.

So what do you do about this, to make it less messy down the road when you’re dying to know if anyone’s actually using those advanced filtering features?

  1. Start now with the instrumentation and the naming.  Add the instrumentation as a new set of acceptance criteria to your user stories/requirements/tickets.  If the app internals have been named in a way that you understand at a glance, awesome – encourage more of the same from the engineers, and codify those approaches into a naming guideline if possible.  Then, if you’re really lucky, you can derive the named instrumentation straight from the beautiful code.
  2. If not, start the work of adding the mapped names in your custom instrumentation now – i.e. if they called it “query”, make sure that the custom instrumentation names it “search”.
  3. Next up: adding this instrumentation to all your existing features.  Here, you have some interesting decisions:
    • Do you instrument the most popular and baseline features? (If so, why?  What will you do with that data?)
    • Do you instrument the features that are about to be canned? (If so, will this be there to help you understand which of your early adopter customers are still using the features – and do you believe that segment of your market is predictive of the usage by the other segments?)
    • Or do you just pick on the lesser-known features?  THESE ARE THE ONES I’D RECOMMEND for the most benefit for the invested energy – the work to add and continue to support this instrumentation is the most likely to be actionable at a later date – assuming you’ve got the energy to invest in that tension-filled EOL plan (as the above article beautifully illustrates).
  4. Finally, all of this labour should have convinced you to be a little more judicious in how many of these dubious features you’re going to add to your product.
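The name-mapping in step 2 above can be sketched as a thin tracing wrapper.  This is a toy stand-in for a real APM client (every name here is made up for illustration), but it shows the idea: the engineers keep calling it “query” internally, while every recorded trace carries the name the rest of us actually use:

```python
import functools

# Map the engineers' internal names to the names users (and PMs) think in.
FEATURE_NAMES = {
    "query": "search",
    "lookup": "search",
}

RECORDED = []  # stand-in for a real analytics/APM client

def traced(internal_name):
    """Decorator that records each call under the user-facing feature name."""
    feature = FEATURE_NAMES.get(internal_name, internal_name)
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            RECORDED.append(feature)  # trace under the mapped name
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@traced("query")
def run_query(term):
    return f"results for {term}"

run_query("widgets")
# RECORDED now holds ["search"], not ["query"]
```

The mapping table is the piece worth putting under the naming guideline from step 1 – it’s cheap to maintain, and it’s the difference between a PM-readable dashboard and a wall of engineer-speak.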

Enhancing your ability to correct for these mistakes later is great; factoring the extra cost in up front – and using it to help justify not shipping the dubious feature at all – is even better.

And all that said?  Don’t get too hung up on the word “mistakes”.  We’re learning, we’re moving forward, and some of us are learning that Failure Is An Option.  But mostly, we’re living life the only way it’s able to be lived.


Purity of Product, Sales or Code – they’re ALL wrong

I caught this echo-chamber discussion on the LinkedIn Product Management channel:


Somehow I thought I’d mention a thing or two and then bail, but the more I wrote the more strongly I felt how misguided the “keep your product vision pure” message was. Here’s what I wrote – make of it what you will:

Purity of the product is one ideal; never losing another “key” sale is a second. Making every user happy and productive with your product is a third, and never shipping imperfect code is yet another.

None of these ideals can be achieved in isolation for very long *and* maintain a successful business. Sometimes we have to flex on one of these ideals to help the rest of the organization improve their position on another.

If I put my foot down and refused to engage with any of my key customers’ feature requests, any responsible sales exec would have my head…examined.

If I gave into every customer want, no product leader (engineering, marketing or product management) would have any faith in my vision of the product.

If I constantly let the engineers gold-plate the code and drive out all tech debt, we’d be so far behind the market we’d never get around to acquiring new customers.

Rallying around the banner of “never let Sales disrupt the purity of my product’s vision” sounds just as naive as “never ship an insecure product” and “always involve your Designers up front and throughout the development process” (both of which I’ve observed firsthand in those respective domains).

Never forget that every signal from the market can be that incredible insight that *causes* you to change your product strategy, and Sales are often your eyes and ears gathering input from customers that you yourself weren’t there to acquire.

Best advice? If it’s a significant sales opportunity/downgrade, and you haven’t spoken with that customer before, it’s worth considering taking a half-hour out of your schedule to qualify the sales opportunity *and* the feature request yourself – if only to increase the strength of the signal and decrease the risk of making the wrong call.

I personally don’t do enough of this, but when I do, it gives me irreplaceable opportunities to learn from customers who aren’t there to feed me what they think I should hear (i.e. the standing “customer council”) and lets me pivot the conversation to find out what else they really need (i.e. other items they’re considering) that Sales didn’t take the time to ask about.