Pruning features via intelligent, comprehensive instrumentation

Today’s adventures in the LinkedIn Product Management group gave us this article:
http://product.hubspot.com/blog/the-5-whys-of-feature-bloat

The critical statement (i.e. the most/only actionable information) in the article is this:

Decide a “minimum bar of usage/value” that every feature must pass in order for it to remain a feature. If a new feature doesn’t hit that bar in some set period of time, prune it.

I’d love to hear from folks who are able to prove with data that a feature is not getting the level of usage that we need to justify its continued existence.  AFAIK, whether it be a desktop, mobile or web/cloud app, instrumenting all the things so that we have visibility into the usage of every potentially-killable feature is a non-trivial (and sometimes impractical) investment in itself.

I’m not even arguing for getting that work prioritized enough to put it in up front – it’s just something that, if it’s technically feasible, we should *all* do to turn us from cavemen wondering what the stars mean into explorers actually measuring and testing hypotheses about our universe.

I’m specifically inquiring how it’s actually *done* in our typical settings.  I know from having worked at New Relic what’s feasible and where the limits are when doing this in web/cloud and mobile settings, and it’s definitely a non-trivial exercise to instrument *all* the things (especially when we’re talking about UI features like buttons, configuration settings and other directly-interactive controls).  It’s far harder in a desktop setting (does anyone still have a “desktop environment”?  Maybe it’s just my years of working at Microsoft talking…).

And I can see how hard it is to not only *instrument* the features but to gather and catalogue the resulting data in a way that characterizes both (a) the actual feature that was used and, even better, (b) the intended result the user was trying to achieve [i.e. not just the what or how but the *why*].
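
To make that concrete, here’s a minimal sketch – in TypeScript, with entirely hypothetical names – of the kind of usage event I have in mind: one field for the feature as the product thinks of it, one for what the code calls it, and one for the intent the user was pursuing.

```ts
// A hypothetical shape for a feature-usage event; the field names are
// illustrative, not any particular vendor's schema.
interface FeatureUsageEvent {
  featureName: string;   // product-facing name, e.g. "search.advanced_filter"
  internalName?: string; // what the code calls it, e.g. "QueryController#filter"
  intent?: string;       // the "why", e.g. "narrow results by date range"
  userId: string;
  occurredAt: Date;
}

// Recording one use of the advanced filtering textbox.
const example: FeatureUsageEvent = {
  featureName: "search.advanced_filter",
  internalName: "QueryController#filter",
  intent: "narrow results by date range",
  userId: "user-1234",
  occurredAt: new Date(),
};

console.log(example.featureName);
```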

Developers think one way about naming the internals of their applications – MVC patterns, stackoverflow examples, vendor cultures – and end users (and likely/often, we product managers) think another way.  Intuitive alignment is great, but it’s hardly likely and usually not there.  For example, something as simple as a “lookup” or “query” function (from the engineering PoV) is likely thought of as a “search” function by the users.  I’ve seen far more divergent examples – enough to assume I won’t follow along if I’m just staring at the route/controller names.

If I’m staring at the auto-instrumented names of an APM vendor’s view into my application, I’m likely looking at the lightly-decorated functions/classes/methods as named by the engineers – in my experience, these are terribly cryptic to a non-engineer.  And for all of our custom code that wove the libraries together, I’ll almost certainly need the engineers to add custom tracers to annotate all the really cool, non-out-of-the-box features we added to the application.  Those custom tracers, unless you’ve got an IA (information architecture) nut on the team involved in the naming, will almost certainly look like a foreign language.
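
As a sketch of what a custom tracer with a product-friendly name could look like – this is a generic wrapper with made-up names, not any APM vendor’s actual API; you’d swap in your agent’s custom-instrumentation call where the reporter goes:

```ts
// A hypothetical tracing helper: wraps an engineering-named function and
// reports it under a product-facing name. The Reporter stands in for your
// APM agent's custom-instrumentation call; it is not a real vendor API.
type Reporter = (productName: string, durationMs: number) => void;

function traced<T>(productName: string, report: Reporter, fn: () => T): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    report(productName, Date.now() - start);
  }
}

// Engineers call it runQuery(); the trace shows up as "search".
function runQuery(): string[] {
  return ["result-1", "result-2"];
}

const results = traced("search", (name, ms) => console.log(`${name}: ${ms}ms`), () =>
  runQuery()
);
console.log(results.length);
```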

Does that make it easy for me to find the traces of usage by the end users of a specific feature (e.g. an advanced filtering textbox in my search function)?  Nope, not often, but it’s sure a start.
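
When the names do line up, though, the payoff is that “is anyone using the advanced filter?” becomes a simple aggregation.  Here’s an illustrative, in-memory version (assuming events shaped like the earlier sketch); in practice this would be a query against whatever analytics or APM store you use.

```ts
// Count usages per product-facing feature name. An in-memory stand-in for
// what would normally be a query against your analytics/APM store.
function usageCounts(events: { featureName: string }[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    counts.set(e.featureName, (counts.get(e.featureName) ?? 0) + 1);
  }
  return counts;
}

const counts = usageCounts([
  { featureName: "search" },
  { featureName: "search" },
  { featureName: "search.advanced_filter" },
]);
console.log(counts.get("search.advanced_filter") ?? 0); // 1 – so far, one brave soul
```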

So what do you do about this, to make it less messy down the road when you’re dying to know if anyone’s actually using those advanced filtering features?

  1. Start now with the instrumentation and the naming.  Add the instrumentation as a new set of acceptance criteria to your user stories/requirements/tickets.  If the app internals have been named in a way that you understand at a glance, awesome – encourage more of the same from the engineers, and codify those approaches into a naming guideline if possible.  Then, if you’re really lucky, you can just derive the instrumentation names straight from the beautiful code.
  2. If not, start the work of adding the mapped names in your custom instrumentation now – i.e. if the engineers called it “query”, make sure the custom instrumentation names it “search” (there’s a small sketch of this after the list).
  3. Next up: adding this instrumentation to all your existing features.  Here, you have some interesting decisions:
    • Do you instrument the most popular and baseline features? (If so, why?  What will you do with that data?)
    • Do you instrument the features that are about to be canned? (If so, will this be there to help you understand which of your early adopter customers are still using the features – and do you believe that segment of your market is predictive of the usage by the other segments?)
    • Or do you just pick on the lesser-known features?  THESE ARE THE ONES I’D RECOMMEND for the best return on the invested energy – the work to add and continue to support this instrumentation is the most likely to be actionable at a later date – assuming you’ve got the energy to invest in that tension-filled EOL plan (as the above article beautifully illustrates).
  4. Finally, all of this labour should have convinced you to be a little more judicious in how many of these dubious features you’re going to add to your product.
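
For step 2, here’s the kind of mapping I mean – a single, hypothetical lookup table from engineering names to product names, applied at the moment usage is recorded so the analytics side only ever sees the product vocabulary:

```ts
// A hypothetical engineering-name -> product-name map; every name here is
// made up for illustration.
const productNames: Record<string, string> = {
  query: "search",
  lookup: "search",
  "query.filter": "search.advanced_filter",
};

function recordUsage(internalName: string): void {
  const featureName = productNames[internalName] ?? internalName;
  // Hand off to whatever analytics/APM call you actually use.
  console.log(`feature used: ${featureName} (internal: ${internalName})`);
}

recordUsage("query.filter"); // feature used: search.advanced_filter (internal: query.filter)
```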

Enhancing your ability to correct for these mistakes later is great; factoring in the extra cost up front, and using it to help justify why you’re not doing it now, is even better.

And all that said?  Don’t get too hung up on the word “mistakes”.  We’re learning, we’re moving forward, and some of us are learning that Failure Is An Option.  But mostly, we’re living life the only way it’s able to be lived.
