Occupied Neurons, Santa edition: lessons for software engineering

How to Be An Insanely Successful Software Manager

https://hackernoon.com/how-to-be-an-insanely-successful-software-manager-13efe08fd890

Aside from a few dubious quotes and phrasings, I believe someone channeled my life when they wrote this.  (Is that why I smelled burning sulfur recently?)  When the goal of a software org is getting the most value into customers’ hands as quickly as possible, shaving down every point of friction between “User Story” and “running in production” becomes an obsessive mission.

No one does phrasing better than Sterling Archer

Benefit vs Cost: How to Prioritize Your Product Roadmap

https://www.productplan.com/how-to-prioritize-product-roadmap/

I’ve been a data-driven, quantitative prioritization junkie in my Product work for years. When you want a repeatable, defensible, consensus-able (?) way for everyone to see which items in your Backlog are the most valuable, you ought to invest in estimating Business Value just as much as you need the engineering team to estimate Effort.  It makes planning and communicating much easier in the long run.

Specific methodologies reflect the rigour of an organization and the particular ways it derives value for its customers and shareholders.  A heavily regulated industry might use “Regulatory Compliance” as a double-weighted factor in their ‘algorithm’; an internal IT team might focus instead on internal user productivity.  Many teams put emphasis on Estimated Revenue Impact and Reducing Customer Churn, and I’ve personally ensured that UX (“Expected Frustration Reduction”) has a place at the table.  Numeric scales, “high-medium-low”, “S-M-L-XL” or “Y/N” can all factor in, to whatever degree of rigour is necessary to sufficiently order and prioritize your backlog – don’t overengineer a system when half as much effort will get you a useful starting place for the final “sorting negotiation” among stakeholders.
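To make the idea concrete, here’s a minimal sketch of one such ‘algorithm’ – the factor names, weights, and backlog items below are all made up for illustration, not any team’s real methodology:

```javascript
// Toy weighted benefit-vs-cost prioritization. Each factor is rated 0-5;
// "regulatoryCompliance" is double-weighted, as in the regulated-industry
// example. Cost is the engineering Effort estimate.
const weights = {
  revenueImpact: 1,
  churnReduction: 1,
  frustrationReduction: 1, // the UX seat at the table
  regulatoryCompliance: 2, // the double-weighted factor
};

function priorityScore(item) {
  // Benefit: weighted sum of factor ratings. Score: benefit per unit effort.
  const benefit = Object.entries(weights).reduce(
    (sum, [factor, w]) => sum + w * (item.factors[factor] || 0),
    0
  );
  return benefit / item.effort; // higher is better
}

// A hypothetical backlog with rough ratings.
const backlog = [
  { name: "SSO support", effort: 5,
    factors: { revenueImpact: 4, churnReduction: 3, regulatoryCompliance: 2 } },
  { name: "Fix signup bug", effort: 1,
    factors: { churnReduction: 2, frustrationReduction: 5 } },
];

// Sort descending by score: a starting order for the sorting negotiation,
// not a final answer.
backlog.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(backlog.map(i => i.name)); // [ 'Fix signup bug', 'SSO support' ]
```

The point isn’t the arithmetic – it’s that once everyone can see the inputs, the stakeholder argument shifts from “my item first” to “is that rating right?”, which is a far more productive negotiation.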


Introduction to ES6 Promises

http://jamesknelson.com/grokking-es6-promises-the-four-functions-you-need-to-avoid-callback-hell/

Been hearing about Promises and async/await from my engineering colleagues for ages.  Conceptually they’re a great advancement – making code more efficient, reflecting the unpredictable nature of distributed software systems, breaking the serialization bias of every new programmer yet again.

However, the truth of implementing Promises, for those who have never wrangled such code, is far more complex than I expected.  Just reading the explanations by accomplished programmers, with all their multi-layered assumptions and skipping-ahead-without-clarification, makes me feel dense in a way that doesn’t have to be true.  I’m sure that if every element of the canonical use of a Promise object were explained (where it’s used, in what order, by what consumer), it would be much easier to get it to work.  I’ll keep hunting for that pedagogical example.
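In the spirit of that hunt, here’s an attempt at such an example – every name here (`fetchGreeting` etc.) is invented for illustration, and the comments call out who runs each piece and when:

```javascript
// 1. The PRODUCER creates the Promise, passing an "executor" function.
//    The executor runs immediately; the engine supplies resolve/reject,
//    which the producer calls later to settle the Promise.
function fetchGreeting(name) {
  return new Promise((resolve, reject) => {
    if (typeof name !== "string") {
      reject(new Error("name must be a string")); // failure path
    } else {
      // Simulate async work (e.g. a network call) with a timer.
      setTimeout(() => resolve(`Hello, ${name}!`), 10); // success path
    }
  });
}

// 2. The CONSUMER receives the Promise object and registers callbacks.
//    .then() runs on success, .catch() on failure. Each call returns a
//    NEW Promise, which is what lets you chain steps flatly instead of
//    nesting callbacks.
fetchGreeting("world")
  .then(greeting => {
    console.log(greeting);         // "Hello, world!"
    return greeting.toUpperCase(); // the return value feeds the next .then
  })
  .then(shouted => console.log(shouted)) // "HELLO, WORLD!"
  .catch(err => console.error("Something failed:", err.message));
```

So the division of labour is: the producer decides *when* the Promise settles and with what value; the consumer decides *what happens next* – and the two never need to know about each other’s internals.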

GraphQL vs REST Overview

https://philsturgeon.uk/api/2017/01/24/graphql-vs-rest-overview/

I’m hearing a lot of developers extol the virtues of GraphQL in their side projects (and professional work, where they have room to advocate up).  I haven’t managed a Product shipping GraphQL services yet, so I’ve been curious what the folks already implementing these are learning.

One problem this article highlights is deprecation: when is it time to stop supporting a field or endpoint in your API?  For an endpoint, it’s easy to see how many requests you’re currently receiving; for an individual field, it’s trickier in a REST environment, whereas GraphQL supports “sparse field sets” – each request names the fields it wants.

The question I don’t see addressed here is: does GraphQL require every request to specify every field it’s going to obtain?  Or is there also support for requesting all fields (cf. SELECT * FROM table), in which case that benefit quickly vanishes?  If only some of your requests specify which fields they’re using, and the rest just demand them all, then you still don’t know whether that field you want to deprecate is safe to remove, nor which users are still using it.  You can make some educated guesses based on the data you do have, but it’s still guesswork.

(Edit: I’ve concluded that fields must be explicit in GraphQL requests)
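That explicitness is exactly what makes the deprecation question tractable: since every query spells out its fields, a server can tally which fields clients actually request.  A toy sketch (this is a naive string-matching helper for flat queries, purely for illustration – real servers would walk the parsed query AST instead):

```javascript
// Extract the field names from a flat GraphQL query like
// "{ user { id name email } }". Illustration only: handles one level
// of nesting, no arguments, no fragments.
function extractFields(query) {
  const match = query.match(/\{\s*\w+\s*\{([^}]*)\}/);
  return match ? match[1].trim().split(/\s+/) : [];
}

// A simulated request log. Note there is no "SELECT *" equivalent:
// every client must name the fields it wants.
const queries = [
  "{ user { id name email } }",
  "{ user { id name } }",
  "{ user { id avatarUrl } }",
];

// Tally how often each field is requested.
const usage = {};
for (const q of queries) {
  for (const field of extractFields(q)) {
    usage[field] = (usage[field] || 0) + 1;
  }
}

console.log(usage); // e.g. { id: 3, name: 2, email: 1, avatarUrl: 1 }
```

With a tally like this, “can we deprecate `email`?” becomes a question you can answer from your logs rather than by guesswork – the advantage the REST side of the comparison lacks.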

REST is the New SOAP

https://medium.com/@pakaldebonchamp/rest-is-the-new-soap-97ff6c09896d

OK OK I get it – REST is challenging in many ways when trying to deal with the reality of API behaviours.  Thank you for writing an article that outlines specifically what your problems are.  (OTOH, wouldn’t it be nice to see an article that acknowledges the problems *and* extols the remaining virtues – see the author’s own words, “I don’t doubt that some smart people out there will provide cases where REST shines”?  Or even better, talks about when to use this solution and when to use one that works better for a specified scenario/architecture – and doesn’t just offhandedly mention something they “heard that one time”?)

And why do I have that itchy feeling in the back of my brain that newer alternatives like GraphQL will put us in a state of complexity that we ran from the last time we did this to ourselves in the name of “giving ourselves all the tools we might ever need” (aka SOAP)?

It’s smart to select a relevant architecture for the problem space – it just worries me every time I watch someone put in place all sorts of “just in case” features that they have no need for now (and can’t even articulate a specific problem for which this is a great solution), but are sure will find some use in the future.  I haven’t delved deeply enough into GraphQL (obviously), but my glancing analysis made it seem much more flexible – and the last time I encountered “eminently flexible” was when OAuth 2 was described to me as “puts all the grenades you’ll ever need in your hands and pulls the pins”.

Kitty Quinn captures my unease very well here, and quotes Allen Sherman to boot:

‘Cause they promise me miracles, magic, and hope,
But, somehow, it always turns out to be SOAP


Linus rants at the security community again – bravo

https://lkml.org/lkml/2017/11/17/767

Linus goes off on the security community, who keep trying to make sweeping, under-tested, destabilizing changes to the kernel, and while his delivery leaves something to be desired, the message is welcome and apparently remains necessary.  Making radical changes that do nothing to help system operators and users know what’s going on – or to control, or even just report, the issues – is, shall we say, frustrating.

[Image: “Keep Calm and Burn It Down”]

It’s this kind of flagrant power play by security mavens that irks the rest of us to a homicidal degree.  It punishes the user in the hope that that user will push the pain uphill to the originator of the buggy code.

Except that no typical user (i.e. 99% of the computing end user population) even *recognises* that the problem is with the calling code (app, driver) rather than the OS (“computer”, “CPU”, “crap phone”) that is merely trained to enforce these extreme behaviours.

After a couple of decades in infosec land, I find this is motivated by the disregard security folks have for the end users who are the victims of this whole tug-of-war – which so often breaks down to “I’m sick of chasing software developers to convince them to fix their bugs, so instead let’s make the bug ‘obvious’ to the end users, and then the users will chase down the software developers for me”.

Immediate kernel panic may have been an appropriate response decades ago when operators, programmers and users were closely tied in space and culture. It may even still be an appropriate posture for some mission-critical and highly-sensitive systems, if you favour “protection” over stability.

It is increasingly ridiculous to expect the user of most other systems to have any idea how to communicate what happened to the powers that be, and to have that turned into a fix in a viable timeframe – let alone to rely on instrumented, aggregated, anonymized crash reports being fed en masse to the few vendors who know how (let alone have the time) to request, retrieve and paw through millions of such reports looking for the few needles in the haystacks.

Punish the victim and offload the *real* work of security (i.e. getting bugs fixed) to people least interested and least expert at it? Yeah, good luck.

It is entirely appropriate in an increasing number of circumstances to soften the approach and try warning the user and trusting them with a little power to make some decisions themselves (rather than arbitrarily punish them for mistakes not their own).

I love many of my colleagues in the security community dearly, and wouldn’t tell them to quit their jobs, but goddamn do we quickly forget that the options are not just “PREVENT” but also “DETECT” and “CORRECT”.  I’m glad to see that Kees Cook’s followup clarifies that he’s already looking into this, having learned that such a violent change to the kernel can’t be swallowed whole.