ImportError: No module named ‘rest_framework_swagger’

Summary

Building our Django app locally (i.e. no Docker container wrapping it) works great. Building the same app in Docker fails. Hint: make sure you know which requirements.txt file you’re using to build the app.  (And get familiar with the -f parameter for Docker commands.)

Problem

When I first started building the Docker container, I kept getting this ImportError after the container successfully built:

ImportError: No module named 'rest_framework_swagger'

Research

The only half-useful hit on StackOverflow was this one, and it didn’t seem like it explicitly addressed my issue in Docker:

http://stackoverflow.com/questions/27369314/django-rest-framework-swagger-ui-importerror-no-module-named-rest-framework

…And The Lightning Bolt Struck

However, with enough time and desperation I finally understood that the article wasn’t wrong after all.  I wasn’t using the /requirements.txt that contained all the dependencies – I was using the incomplete/abandoned /budget_proj/requirements.txt, which lacked a key dependency.

Aside

I wasn’t watching the results of pip install closely enough – and when running docker-compose up --build multiple times, the layer of interest won’t rebuild if there are no changes to that layer’s inputs. (Plus this is a case where there’s no error message thrown, just one or two fewer pip installs – and who notices that until they’ve spent the better part of two days on the problem?)
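If you suspect a cached layer is hiding a dependency change, one way to rule it out is to force a clean rebuild and watch the full pip output scroll by. A minimal sketch, assuming your service is named web in docker-compose.yml (a placeholder – yours may differ):

# rebuild every layer from scratch, ignoring the layer cache
docker-compose build --no-cache web

# then bring the container up with the freshly-built image
docker-compose up

On a clean build, a requirements file that’s missing a package is much easier to spot in the pip install output.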

Detailed Diagnostics

If you look closely at our project from that time, you’ll notice there are actually two copies of requirements.txt – one at the repo root and one in the /budget_proj/ folder.
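Before digging through build logs, a quick sanity check run from the repo root would have shown how far the two files had diverged (paths as described above):

# compare the root requirements.txt against the one in the app subdirectory
diff requirements.txt budget_proj/requirements.txt

Any line prefixed with < exists only in the root file – in our case, that list would have included the django-rest-swagger dependency the container was missing.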

Developers who are just testing Django locally will simply launch pip install -r requirements.txt from the root directory of their clone of the repo.  This is fine and good.  Here’s the result of pip install -r requirements.txt when run against the expected file:

$ pip install -r requirements.txt 
Collecting appdirs==1.4.0 (from -r requirements.txt (line 1))
 Using cached appdirs-1.4.0-py2.py3-none-any.whl
Collecting Django==1.10.5 (from -r requirements.txt (line 2))
 Using cached Django-1.10.5-py2.py3-none-any.whl
Collecting django-filter==1.0.1 (from -r requirements.txt (line 3))
 Using cached django_filter-1.0.1-py2.py3-none-any.whl
Collecting django-rest-swagger==2.1.1 (from -r requirements.txt (line 4))
 Using cached django_rest_swagger-2.1.1-py2.py3-none-any.whl
Collecting djangorestframework==3.5.4 (from -r requirements.txt (line 5))
 Using cached djangorestframework-3.5.4-py2.py3-none-any.whl
Requirement already satisfied: packaging==16.8 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 6))
Collecting psycopg2==2.7 (from -r requirements.txt (line 7))
 Using cached psycopg2-2.7-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Collecting pyparsing==2.1.10 (from -r requirements.txt (line 8))
 Using cached pyparsing-2.1.10-py2.py3-none-any.whl
Collecting requests==2.13.0 (from -r requirements.txt (line 9))
 Using cached requests-2.13.0-py2.py3-none-any.whl
Requirement already satisfied: six==1.10.0 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 10))
Collecting gunicorn (from -r requirements.txt (line 12))
 Using cached gunicorn-19.7.0-py2.py3-none-any.whl
Collecting openapi-codec>=1.2.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting coreapi>=2.1.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting simplejson (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached simplejson-3.10.0-cp35-cp35m-macosx_10_11_x86_64.whl
Collecting uritemplate (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached uritemplate-3.0.0-py2.py3-none-any.whl
Collecting coreschema (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting itypes (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting jinja2 (from coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached Jinja2-2.9.5-py2.py3-none-any.whl
Collecting MarkupSafe>=0.23 (from jinja2->coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Installing collected packages: appdirs, Django, django-filter, uritemplate, requests, MarkupSafe, jinja2, coreschema, itypes, coreapi, openapi-codec, simplejson, djangorestframework, django-rest-swagger, psycopg2, pyparsing, gunicorn
 Found existing installation: appdirs 1.4.3
 Uninstalling appdirs-1.4.3:
 Successfully uninstalled appdirs-1.4.3
 Found existing installation: pyparsing 2.2.0
 Uninstalling pyparsing-2.2.0:
 Successfully uninstalled pyparsing-2.2.0
Successfully installed Django-1.10.5 MarkupSafe-1.0 appdirs-1.4.0 coreapi-2.3.0 coreschema-0.0.4 django-filter-1.0.1 django-rest-swagger-2.1.1 djangorestframework-3.5.4 gunicorn-19.7.0 itypes-1.1.0 jinja2-2.9.5 openapi-codec-1.3.1 psycopg2-2.7 pyparsing-2.1.10 requests-2.13.0 simplejson-3.10.0 uritemplate-3.0.0

However, our Django application (and the related Docker files) lives in a subdirectory off the repo root (i.e. in the /budget_proj/ folder).  Because I was an idiot at the time and didn’t know about the -f parameter for docker-compose, I was convinced I had to run docker-compose from the same directory as docker-compose.yml – and docker-compose doesn’t have access to files in the parent directory of wherever it’s launched.  Docker effectively “chroots” its build context, so it can’t reach a file like ../requirements.txt in the parent directory.
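For anyone who finds this later: the -f parameter is what lets you keep the compose file in a subdirectory but launch from wherever you like. A minimal sketch, assuming the compose file sits at budget_proj/docker-compose.yml:

# run from the repo root, pointing docker-compose at the compose file explicitly
docker-compose -f budget_proj/docker-compose.yml up --build

One caveat, as I understand the docs: relative paths inside the compose file (such as the build context) are resolved relative to the compose file’s own directory, so the Dockerfile still has to COPY a requirements.txt that lives inside that context.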

So, back then, when docker-compose launched pip install -r requirements.txt, it could only access the incomplete /budget_proj/requirements.txt, which gave us this result instead:

Step 12/12 : WORKDIR /code
 ---> 8626fa515a0a
Removing intermediate container 05badf699f66
Successfully built 8626fa515a0a
Recreating budgetproj_budget-service_1
Attaching to budgetproj_budget-service_1
web_1 | Running docker-entrypoint.sh...
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Starting gunicorn 19.7.0
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Listening at: http://0.0.0.0:8000 (5)
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Using worker: sync
web_1 | [2017-03-16 00:31:34 +0000] [8] [INFO] Booting worker with pid: 8
web_1 | [2017-03-16 00:31:35 +0000] [8] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/util.py", line 376, in import_app
web_1 | __import__(module)
web_1 | File "/code/budget_proj/wsgi.py", line 16, in <module>
web_1 | application = get_wsgi_application()
web_1 | File "/usr/local/lib/python3.5/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
web_1 | django.setup(set_prefix=False)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/__init__.py", line 27, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.5/importlib/__init__.py", line 126, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | ImportError: No module named 'rest_framework_swagger'
web_1 | [2017-03-16 00:31:35 +0000] [8] [INFO] Worker exiting (pid: 8)
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Shutting down: Master
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Reason: Worker failed to boot.
budgetproj_web_1 exited with code 3

Coda

It has been pointed out that not only is it redundant for the project to have two requirements.txt files (I agree, and when we find the poor soul who inadvertently added the second file, they’ll be sacked…from our volunteer project ;)…

…but also that if we’re encapsulating our project’s core application in a subdirectory (called budget_proj), then logically that is where the “legit” requirements.txt file belongs – not at the project’s root, just because that’s where you normally find requirements.txt in a repo.

Notes to self: merging my fork with upstream

It’s supposed to be as natural as breathing, right?  See a neat repository on GitHub, decide you want to use the code and make some minor changes to it, right?  So you fork the sucker, commit some changes, maybe push a PR back to the original repo?

Then, you want to keep your repo around – I dunno, maybe it’s for vanity, or maybe you’re continuing to make changes or use the project (and maybe, just maybe, you’ll find yourself wanting to push another PR in the future?).  Or maybe messages like this just bother your OCD:

[Screenshot: GitHub’s “This branch is xx commits behind” message]

Eventually, most developers will run into a situation in which they wish to re-sync their forked version of a project with the updates that have been made in “upstream”.

Should be dead easy, yes?  People are doing this all the time, yes?  Well, crap.  If that’s the case, then I’m an idiot, because I’d tried this a half-dozen times and never before arrived at the beautiful message “This branch is even with…”.  So I figured I’d write it out (talk to the duck) and, in so doing, stumble on the solution.

GitHub’s help is supposed to help, e.g. Syncing a fork, which depends on Configuring a remote for a fork and is followed by Pushing to a remote.

Which, for a foreign repo named e.g. “hackers/hackit”, means the following stream of commands (after I’ve forked the repo on GitHub.com and git clone‘d it to my local machine):

git remote add upstream git@github.com:hackers/hackit.git
git fetch upstream
git checkout master
git merge upstream/master

That last command will often result in a bunch of conflicts if you’ve made any changes, e.g.:

git merge upstream/master
Auto-merging package.json
CONFLICT (content): Merge conflict in package.json
Auto-merging README.md
Auto-merging .travis.yml
CONFLICT (content): Merge conflict in .travis.yml
Auto-merging .babelrc
Automatic merge failed; fix conflicts and then commit the result.

At this point I temporarily abandon the command line and dive into my favourite editor (Visual Studio Code with a handful of extensions) to resolve the conflicting files.

Once I’d merged changes from both sources (mine and upstream), it was a simple matter of the usual commands:

git add .
git commit -m "merged changes from upstream"
git push

And the result is…

[Screenshot: GitHub’s “This branch is xx commits ahead” message]

(No it wasn’t quite the “even” paradise, but I’ll take it.)

Aside

I somehow got myself into a state where I couldn’t get the normal commands to work.  For example, when I ran git push origin master, I got nowhere:

git push origin master
fatal: 'origin' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Or git push:

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Then when I added upstream…:

git remote add upstream git@github.com:hackers/hackit.git

…and ran git remote -v…:

git remote -v
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)

…it appeared I no longer had a reference to origin. (No idea how that happened, but hopefully these notes will help me not go astray again.)  Adding back the reference to origin seemed the most likely solution, but I didn’t get the kind of results I wanted:

git remote add origin git@github.com:mikethecanuck/hackit.git
git remote -v
origin git@github.com:mikethecanuck/hackit.git (fetch)
origin git@github.com:mikethecanuck/hackit.git (push)
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)
git push origin master
To github.com:mikethecanuck/hackit.git
 ! [rejected]        master -> master (fetch first)
error: failed to push some refs to 'git@github.com:mikethecanuck/hackit.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

And when I pushed with no params, I went right back to the starting place (presumably because my local master branch was still set to track the upstream remote, where I have no push rights):

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

(I finally rm -rf‘d my forked repo, cloned it again, and started over – that’s how I got to the first part of the article.)
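In hindsight, the hint in that rejected-push message was probably pointing at a less drastic fix: pull down whatever origin already has, merge it, and then push. A sketch of what I suspect would have worked (untested in that broken state):

# integrate the commits already on the fork before pushing
git fetch origin
git merge origin/master
git push origin master

# and pin master's tracking branch to origin, so a bare 'git push'
# stops defaulting to a remote I can't write to
git branch --set-upstream-to=origin/master master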

How Do I Know What Success Looks Like?

I was asked recently what I do to ensure my team knows what success looks like.  I generally start with a clear definition of done, then factor usage and satisfaction into my evaluation of success-via-customers.

Evaluation Schema

Having a clear idea of what “done” looks like means having crisp answers to questions like:

  • Who am I building for?
    • Building for “everyone” usually means it doesn’t work well for anyone
  • What problem is it fixing for them?
    • I normally evaluate problems-to-solve based on the new actions or decisions the user can take *with* the solution that they can’t take *without* it
  • Does this deliver more business value than other work we’re considering?
    • Delivering value we can believe in is great, and obviously we ought to have a sense that this has higher value than the competing items on our backlog

What About The Rest?

My backlog of “ideas” is a place where I often leave things to bake.  Until I have a clear picture in my mind of who will benefit from this (and just as importantly, who will not), and until I can articulate how this makes the user’s life measurably better, I won’t pull an idea into the near-term roadmap, let alone start breaking it down for iteration prioritization.

In my experience, people have lots of great ideas that they’ll bring to whoever they believe is the authority for “getting shit into the product”.  Engineers, sales, customers – all have ideas they think should get done.  One time my Principal Engineer spent an hour talking me through a hyper-normalized data-model enhancement for my product.  Another time, I heard loudly from many customers that they wanted us to support their use of MongoDB with a specific development platform.

I thanked them for their feedback, and I earnestly spent time thinking about the implications – how do I know there’s a clear value prop for this work?

  • Is there one specific user role/usage model that this obviously supports?
  • Would it make users’ lives demonstrably better in accomplishing their business goals & workflows with the product as they currently use it?
  • Would the engineering effort support/complement other changes that we were planning to make?
  • Was this a dealbreaker for any user/customer, and not merely an annoyance or a “that’s something we *should* do”?
  • Is this something that addresses a gap/need right now – not just “good engineering that should become useful in the future”?  (There’s lots of cool things that would be fun to work on – one time I sat through a day-long engineering wish list session – but we’re lucky if we can carve out a minor portion of the team’s capacity away from the things that will help right now.)

If I don’t get at least a flash of sweat and “heat” that this is worth pursuing (I didn’t with the examples mentioned), then these things go on the backlog and they wait.  Usually the important items will come back up, again and again.  (Sometimes the unimportant things too.)  When they resurface, I test them against product strategy, currently-prioritized (and sized) roadmap and our prioritization scoring model, and I look for evidence that shows me this new idea beats something we’re already planning on doing.

If I have a strong impression that I can say “yes” to some or all of these, that usually comes along with a number of assumptions I’m willing to test, and effort I’m willing to put in to articulate the results this needs to deliver [usually in a phased approach].

Delivery

At that point we switch into execution and refinement mode – while we’ve already had some roughing-out discussions with engineering and design, this is where backlog grooming hammers out the questions and unknowns, bringing us to a state where (a) the delivery team is confident about what they’re meant to create and (b) estimates fall within a narrow range of guesses [i.e. we’re not hearing “could take a day, could take a week” – that’s a code smell].

Along the way I’m always emphasizing what result the user wants to see – because shit happens, surprises arise, and priorities shift, so the delivery team needs a solid defender of the result we’re going to deliver for the customer.  That doesn’t mean don’t flex on the details, or don’t change priorities as market conditions change, but it does mean providing a consistent voice that shines through the clutter and confusion of all the details, questions and opinions that inevitably arise as the feature/enhancement/story gets closer to delivery.

It also means making sure that your “voice of the customer” is actually informed by the customer.  So as we develop the definition of Done, mockups, prototypes and alpha/beta versions, I’ve made a point of taking the opportunity where it exists to pull in a customer or three for a usability test, or a customer proxy (TSE, consultant, success advocate), to give me their feedback, reaction and thinking in response to whatever deliverables we have available.

The most important part of putting in this effort to listen, though, is learning and adapting to the feedback.  It doesn’t mean rip-sawing in response to any contrary input, but it does mean absorbing it and making sure you’re not being pig-headed about the up-front ideas you generated that are more than likely wrong in small or big ways.  One of my colleagues has articulated this as Presumptive Design, whereby your up-front presumptions are going to be wrong, and the best thing you can do is to put those ideas in front of customers, users, proxies as fast and frequently as possible to find out how wrong you are.

Evaluating Success

Up front and along the way, I develop a sense of what success will look like once it’s out there, and that usually takes the form of quantity and quality – usage of the feature, and satisfaction with the feature.  Getting instrumentation of the feature in place is a brilliant but low-fidelity way of understanding whether it was deemed useful – if numbers and ratios are high in the first week and then steadily drop off the longer folks use it, that’s a signal to investigate more deeply.  On the user-satisfaction side, post-hoc surveys and customer calls that get a sense of NPS-like confidence and “recommendability” are higher-fidelity means of validating how it’s actually impacting real humans.

This time, success: Flask-on-AWS tutorial (with advanced use of virtualenv)

Last time I tried this, I ended up semi-deliberately choosing to use Python 3 for a tutorial that (I didn’t realize quickly enough) was built around Python 2.

After cleaning up my experiment, I remembered that the default python on my MacBook was still python 2.7.10, which gave me the idea that I might be able to re-run that tutorial with all-Python 2 dependencies.  Or so it seemed.

Strangely, the first step both went better and no better than last time:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws
Using base prefix '/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3.5
Also creating executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.

Yes, it didn’t throw any errors, but no, it didn’t use the base Python 2 I’d hoped for.  Somehow the fact that I’d installed Python 3 on my system was still getting picked up by virtualenv, so I needed to dig further into how virtualenv can be used to truly insulate from Python 3.

Found a decent article here that gave me hope, and even though they punted to using the virtualenvwrapper scripts, it still clued me in to the virtualenv parameter “-p”, so this seemed to work like a charm:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws -p /usr/bin/python
Running virtualenv with interpreter /usr/bin/python
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.

This time?  The requirements install worked like a charm:

Successfully installed Flask-0.10.1 Flask-SQLAlchemy-2.0 Flask-WTF-0.10.3 Jinja2-2.7.3 MarkupSafe-0.23 PyMySQL-0.6.3 SQLAlchemy-0.9.8 WTForms-2.0.1 Werkzeug-0.9.6 argparse-1.2.1 boto-2.28.0 itsdangerous-0.24 newrelic-2.74.0.54

Then (since I still had all the config in place), I ran pip install awsebcli and skipped all the way to the bottom of the tutorial and tried eb deploy:

INFO: Deploying new version to instance(s).                         
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-01b45c4d01c070555] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
  File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1. 
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-01b45c4d01c070555'. Aborting the operation.
ERROR: Failed to deploy application.

This kept barfing over and over until I remembered that the target environment was still configured for Python 3.4.  Fortunately or not, you can’t change major versions of the platform – so back to eb init I go (with the -i parameter to re-initialize).
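If you’re in the same boat, the sequence is short – a sketch of what I mean (eb init -i walks you through the platform choices interactively, where you can pick the Python 2.7 platform this time):

# re-initialize the EB environment config, then redeploy
eb init -i
eb deploy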

This time around?  The command eb deploy worked like a charm.

Lesson: be *very* explicit about your Python versions when messing with someone else’s code.  [Duh.]

Highlights from latest Lean Coffee

A lively crowd around the table at last Sunday’s Lean Coffee session, and fresh faces to the discussion (thank you to Scott for inviting your colleagues, and to all for coming out).

There’s no way I can do justice to the breadth and depth of the discussion, so I’m just going to mention those things I wrote down on sticky notes to myself – the things that I thought, “Boy, I should get this tattooed on myself somewhere”:

  • Don’t Automate Waste – a killer principle from the Lean camp that Dan Walsh graced us with; it speaks to the tension of not optimizing early, and to my instinct not to assume you have the solution without experimentation
  • “Agile/Scrum is a Problem Discovery Framework, not a Project Management Methodology” – courtesy of Scott Henderson, every word here lends subtle meaning to the mental shift it encourages
  • Lean Coffee has been used successfully in at least two settings I haven’t tried – as the basis for both the Retrospective and Brainstorming sessions (which helps get ideas on the table that might be ‘swallowed’ by the time attention comes around to the less-confident individual)
  • Code 46 and Sully were the two movies that came up in conversation, so off to Netflix I go

I posed a question to the group that came back with some great thoughts: “how do you work around a situation [which I’ve observed at many software companies] where the testing infrastructure/coverage isn’t reliable, and there’s no quick route to addressing that?”

  1. Ensure that you at least have Unit Tests included in the Definition of Done
  2. Try an experiment where for a single sprint, the team only works on writing unit tests – when this was tried at one organization, it surprised everyone how much progress and coverage could truly be made
  3. Try a regular “Game Day” exercise – run tabletop simulation of a production bug that takes out one or more of your customer-facing services.  This identifies not only who must be involved, but also how long it can take to execute corrective action once identified, and ultimately can result in significant time savings by making upstream changes in product/devops.
  4. Run an occasional discussion at Retrospective to ask “what’s the worst thing we could do to the product?”  This can uncover issues and concerns that otherwise go unspoken by folks who are worried about retribution or downplaying.
  5. And the most obvious: start future sprints by planning tests up front (either via TDD or manually between QA and Dev)

Occupied Neurons, November edition (late)

Docker In Production: a History of Failure

A cautionary tale to counter some of the newbie hype around the new Infrastructure Jesus that is Docker. I’ve fallen prey to the hype as well, assuming that (a) Docker is ready for prime time, (b) Docker is universally beneficial for all workloads and (c) Docker is measurably superior to the infrastructure design patterns it intends to replace.

That said, the article is long on complaints, and doesn’t attempt to back its claims with data, third-party verification or unemotional analysis. I’m sure we’ll see many counter-articles claiming “it works for me”, “I never saw these kinds of problems” and “what’s this guy’s agenda?”  I’ll still pay attention to commentary like this, because it reads to me like the brain dump of a person exhausted from chasing their tail all year trying to find a tech combo that they can just put in production without devoting unwarranted levels of monitoring and maintenance to it. I think their expectations aren’t unreasonable. It sure sounds like the Docker team are more ambitious or cavalier than their position and staffing levels warrant.

Wat

This is one of the most hilarious and horrifying expeditions into the dark corners of (un?)intended consequences in coding languages.  Watching this made me feel like I’m more versed in the lessons of the absurd “stupid pet tricks” with many languages, even if I’d never use 99% of these in real life.  It also made me feel like “did someone deliberately allow these in the language design, or did some nearly-insane persons just end up naturally stumbling on these while trying to make the language do things it should never have done?”

Is Agile dying a slow death?  Or is it being reborn?

This guy captures all my attitudes about “Agile according to the rules” versus “getting an organization tuned to collaborate and learn as fast as possible”.  While extra/unnecessary process makes us feel like we have guard rails to keep people from making mistakes, in my experience what it *actually* does is drive DISengagement and risk aversion in most employees, who know that unless they have explicit permission to break the rules, their great new idea is likely to attract organizational antibodies.

Stanford’s password policy shuns one-size-fits-all security

This is better than a Bigfoot sighting! An actual organization that has thought about security risk vs. punishing anti-usability, and come up with an approach that should satisfy both camps! This UX-in-security bigot can finally die a happy man.

A famed hacker is grading thousands of programs – and may revolutionise software in the process

This may not get to the really grotty code-security issues that are biting us some days, and it’s probably giving a few CIOs a false sense of security.  Controversial?  Yes.

A necessary next step as software grows up as an engineering discipline? Absolutely.

Let’s see many more security geeks meeting software developers where they live, and stop expecting them to voluntarily become part-time security experts just because someone came up with another terrific Hollywood Security Theater plot.

A Rebuttal for Python 3

Why are some old-school Pythonistas so damned pissy about Python 3 – to the point of (in at least one egregiously dishonest case) writing long articles trying to dissuade others from using it? Are they still butthurt at Guido for making breaking changes that don’t allow them to run their old Python 2 code on the Python 3 runtime? Do they not like change? Are they aware that humans are imperfect and sometimes have to admit mistakes/try something different? I find it fascinating to watch these kinds of holy wars – it gives the best kinds of insights into what frailties and hot buttons really motivate people.

The best quote’s in the comments: “Wow, I haven’t seen this much bullshit in a “technical” article in a while. A Donald Trump transcript is more honest and informative than that. I seriously doubt Zed Shaw himself believes a single paragraph there; if he actually does, he should stop acting like a Python expert and admit he’s an idiot.”

How The Web Became Unreadable

It’s painful to see some designers slavishly devote their efforts to the third-hand fashion they hear about from other designers, rather than to the end users of the sites and services to which they deliver their designs. I love a lot of the design work that’s come out in the last few years – the jumbled mess that was web design ten years ago was painful – but the practical implications of how that design is consumed in the wild must be paramount.  And they are, wherever I am the final decision maker on shipping software.

Occupied Neurons, October edition

Melinda Gates Asked For Ideas to Help Women in Tech: Here They Are

https://backchannel.com/an-open-letter-to-melinda-gates-7c40d8696b63

I am psyched that a powerhouse like Gates is taking up the cause, and I sincerely hope she reads this (and many other) articles to get a sense of the breadth of the problem (and how few working solutions there are).  The overlap with race, the attempts to bring more women into classrooms, the tech industry’s bias towards the elite schools and companies (and away from the wealth of other experiences). It’s a target-rich environment of problems to solve.

Building a Psychologically Safe Workplace: Amy Edmondson at TEDxHGSE

https://m.youtube.com/watch?feature=youtu.be&v=LhoLuui9gX8

I am super-pleased to see that the concept of Psychological Safety is gaining traction in the circles and organizations I’m hanging with these days.  I spend an inordinate amount of time in my work making sure that my teammates and colleagues feel like it’s OK to make a mistake and to own up to dead ends and unknowns – and it will sure make the work easier when I’m not the only one fighting the tide of mistrust/worry/fear that creates an environment where learning/risks/mistakes are discouraged.

Three Books That Influenced CorgiBytes Culture

http://corgibytes.com/blog/2016/09/15/three-influential-books/

Andrea and Scott are two people who have profoundly changed my outlook on what’s possible to bring to the workplace, and how to make a workplace that truly fits what you want (and sometimes need) it to be. Talking about empathy as a first-class citizen, bringing actual balance to the day and the communications, and treating your co-workers better than we treat ourselves – and doing it in a fun line of business with real, deep impact for individual customers.

This is the kind of organization that I could see myself in. And which would draw in the kinds of people I enjoy working with each day.

So after meeting them earlier this year in Portland, I’ve followed their adventures via their blog and twitter accounts. This article is another nuanced look at what has shaped their workplace, and I sincerely hope I can do likewise someday.

Reducing Visual Noise for a Better User Experience

https://medium.com/@alitorbati/reducing-visual-noise-for-a-better-user-experience-ae3407ff9c99

These days I find myself apprehensively clicking on Design articles on Medium.  While there’s great design thinking being discussed out there, I seem to be a magnet for finding the ones that complain why users/managers/businesses don’t “get it”.

As I’d hoped, this was an honest and detailed discussion of the inevitable design overload that creeps into most “living products”, and the factors that drove them to improve the impact for non-expert users.

(I am personally most interested in improving the non-expert user’s experience – experts and enthusiasts will always figure out a way to make shit work, even if they don’t like having to beat down a new door; the folks I care to feed are those who don’t have the energy/time/inclination/personality for figuring out something that should be obvious but isn’t.  Give me affordances, not a learning experience – e.g. when you’ve got clickable/tappable controls on your page, give me lines/shadows/shading to signify “this isn’t just text”, not just subtle whitespace that cues the well-trained UI designer that there’s a button around that otherwise-identically-styled text.)