Lean Coffee September insights report

That’s our Sunday morning Lean Coffee practice. Here’s where we landed after a good 1.5-ish hours of structured-and-friendly conversation.

On the subject of landing a job as a Scrum Master

  • You must be very familiar with the Scrum Guide, and especially the “Why” behind each practice – so that you can address real questions about when you’ll recommend a practice and when you’ll recommend evolving past it
  • Should be very comfortable with trying new things AS EXPERIMENTS
  • Must avoid “always pitying the Scrum team” at the expense of the overall business goals, or else the business will hamstring your influence and bypass your role
  • Relies heavily on Situational Leadership abilities
  • Starts with CI, graduates to Continuous Learning

On the subject of what’s changed and what is changing

  • According to our discussion of Crossing the Chasm, once those beyond the chasm start adopting, rather than chasing the old curve down its slope you should chase a new curve that starts on the other side of the chasm
  • We’re seeing signs that other non-software disciplines are adopting Agile practices, e.g. Marketing functions, DevOps
  • Perhaps we’re merely waiting for the rest of the org to catch up to those of us who are post-Agile and delivering continuously
  • The VUCA model (Volatility, Uncertainty, Complexity, Ambiguity) made it into a Harvard Business Review article
  • Neurodiversity is gaining broader awareness

On the subject of creating success as a Scrum Master

  • The basic SM is a “boundary manager”
  • They’re there not only to help the team “learn to be a team” but even more to help the team “learn how to be Agile as a team”
  • They’re there to work with the team and enable them to determine what process solutions to try, rather than dictate or even “guide” the team to specific outcomes
  • Tip: should be very familiar with the Agile Fluency model
  • When interviewing for an SM role, an insightful question to ask is: what are the inputs and outputs of the engineering team?
  • Geoff Watts published an article asking what kind of support a Scrum Master would need

On the subject of no estimates

  • Analogy of a cook: asking for precise estimates is like asking them to cook a dinner from a menu they’ve never cooked before
  • Analogy of a car mechanic: they can only give predictable, tight estimates of when a repair will be completed for operations they’ve done often enough to have codified a standard timeframe, and only when the mechanics are highly experienced

Miscellaneous Insights

  • Meetup (as in the collection of fluid communities) is like a grand ongoing Un-Conference – people announce a topic they’d like to talk about, those who wish to attend show up, and people obey the law of two feet when a meetup’s theme no longer keeps their attention
  • Check out Rachel Davies’ Agile Coaching book
  • There’s growing insight that SAFe can find better flow mechanics across the portfolio if it uses Kanban rather than Scrum – but a prerequisite is that the teams must already have high-quality technical practices in place (e.g. low bug output, continuous integration, short distance from idea to value) and be functioning teams before Kanban at scale will create consistent results
  • Book: check out the free mini-book on Scrumban by Henrik Kniberg

See you there next time (if you’re lucky).


The Yahoo Hack: Protect Yourself, PLEASE

http://money.cnn.com/2016/09/22/technology/yahoo-data-breach/


If you have a Yahoo account (you probably do, by these numbers), first go change the identical password on other sites (you probably re-used the password between Yahoo and some other sites)…

AND be prepared to change the answers to (and maybe even the questions of, if you often use the same ones) your security questions [the ones used to help you – OR A HACKER – reset a forgotten password] on any sites with answers in common.  Please remember: these answers you’ve typed in – if accurate, and used on many sites – are a great way for someone who gets your password on one site to then dig into those answers and reset your password (even one you never used elsewhere) on another site.

Focus first on your primary email address (because that’s often the most valuable – since it’s where all password resets get sent, right?), and then on your financial accounts (even those with two-factor authentication – let’s not let them drain our savings just because we were a bit lazy).

Then consider whether any of your other online accounts have real value to you if you permanently or even temporarily lost control of them. e.g. Twitter/Instagram/Tumblr/Wordpress, if you have a public presence that has helped build your reputation.

Then go get yourself a password manager (see some reviews here and here). I adopted 1Password three years ago (mostly because I prefer good UX over infinite configurability), and now I don’t care how ridiculous my random passwords are, and I intentionally provide random/hilarious (at least to me) misinformation in my security questions (because I just write these misinfos down in my password manager in the Notes field for each site).

Then reset the rest of your passwords on sites where you used the same one as your Yahoo account(s).

Sorry this was so long. But a breach like this hits lots of people and opens them up to a LOT of malicious activity across much of their digital life.  You may not be that attractive a target, but I bet your financial accounts are.

Occupied Neurons, late September edition

Modern Agile (Agile 2016 keynote)

https://www.infoq.com/news/2016/08/agile2016-modern-agile

This call-out for advancement of Agile beyond 2001 and beyond the fossilization of process and “scale” is refreshing. It resonates with me in ways few other discussions of “is there Agile beyond Scrum?” have inspired – because it provides an answer upon which we can stand up actual debate, refinement and objective experiments.

While I’m sure there are those who would wish to quibble over perfecting these new principles before committing to their underlying momentum, I for one am happy to accept this as an evolutionary stage beyond the Agile Manifesto and use it to further my teams’ evolution and my own.

Forget Technical Debt – Here’s How to Build Technical Wealth

http://firstround.com/review/forget-technical-debt-heres-how-to-build-technical-wealth

I had the pleasure of meeting and talking with (mostly listening and learning intently, on my part) Andrea Goulet at the .NET Fringe 2016 conference. Andrea is a refreshing leader in software development because she leads not only through craftsmanship but also through communication as a key tenet of success with her customers.

Andrea advances the term “software remodelling” to properly focus the work that deals with Technical Debt. Rather than approaching TD as a failing, looking at it “as a natural outgrowth of occupying and using the software” draws heavily (and well) on the analogy of remodelling a home.

Frequent Password Changes Are The Enemy of Security

http://arstechnica.com/security/2016/08/frequent-password-changes-are-the-enemy-of-security-ftc-technologist-says/

After a decade or more of participating in the constant ground battle of information security, it became clear to me that the threat models and state of the art in information warfare have changed drastically; the defenses have been slow to catch up.

One of the vestigial tails of 20th-century information security is the dogmatically prescribed “scheduled password change”.

The idea back then was that we had so few ways of knowing whether someone was exploiting an active, privileged user account, and we only had single-factor (password) authentication as a means of protecting that digital privilege on a system, that it seemed reasonable to force everyone to change passwords on a frequent, scheduled basis. So that, if an attacker somehow found your password (such as on a sticky note by your keyboard), *eventually* they would lose such access because they wouldn’t know your new password.

So many problems with this – for example:

  • Password increments – so many of us with multiple frequently-rotating passwords just tack an incrementing number onto the end of the last password when forced to change – not terribly secure, but the only tolerable defense when forced to deal with this unnecessary burden
  • APTs and password databases – most password theft these days doesn’t come from random guessing; it comes from hackers either getting access to the entire password database on the server, or from persistent malware on your computer/phone/tablet (or on public devices like wifi hardware) that MITMs your password as you send it to the server
  • Malware re-infections – changing your password is only good if it isn’t as easy to steal it *after* the change as it was *before* the change – not a lot of point in changing passwords when you can get attacked just as easily (and attackers are always coming up with new zero-days to get you)

I was one of the evil dudes who reflexively recommended this measure to every organization everywhere. I apologize for perpetuating this mythology.

What I’ve learned: setting up Bash/Ubuntu/Win10 for Ansible + Vagrant + VirtualBox

My Goal: test the use of this Ansible Role from Windows 10, using a combination of Windows and Bash for Ubuntu on Windows 10 tools.  Favour the *nix tools wherever possible, for maximum compatibility with the all-Linux production environment.

Preconditions

Here is the software/shell arrangement that worked for me in my Win10 box:

  • Runs in Windows: Virtualbox, Vagrant
  • Runs in Bash/Ubuntu: Ansible (in part because of this)

In this setup, I’m using a single Virtualbox VM in default network configuration, whereby Vagrant ends up reporting the host listening on 127.0.0.1 and SSH listening on TCP port 2222.  Substitute your actual values as required.

Also note the versions of software I’m currently running:

  • Windows 10: Anniversary Update, build 14393.51
  • Ansible (*nix version in Bash/Ubuntu/Win10): 1.5.4
  • VirtualBox (Windows): 5.0.26
  • Vagrant (Windows): 1.8.1

Run the Windows tools from a Windows shell

  • C:\> vagrant up
  • (or launch a Bash shell with cbwin support:  C:\>outbash, then try running /mnt/c/…/Vagrant.exe up from the bash environment)

Start the Virtualbox VMs using Vagrant

  • Vagrant (Bash) can’t just do vagrant up where VirtualBox is installed in Windows – it depends on being able to call the VBoxManage binary
    • Q: can I trick Bash to call VBoxManage.exe from /mnt/c/Program Files/Oracle/VirtualBox?
    • If not, is it worth messing around with Vagrant (Bash)?  Or should I relent and try Vagrant (Windows), either using cbwin or just running from a different shell?
  • Vagrant (Windows) runs into the fscking rsync problem (as always)
    • Fortunately you can disable rsync if you don’t need the sync’d folders
    • Disabling the synced_folder requires editing the Vagrantfile to add this in the Vagrant.configure section (see the fuller fragment below):
      config.vm.synced_folder ".", "/vagrant", disabled: true
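
In context, that line sits inside the existing configure block. Here’s a minimal sketch, assuming the stock Vagrantfile that vagrant init generates (the box name below is just a placeholder for whatever you already use):

    # Vagrantfile (fragment): only the synced_folder line is the actual change
    Vagrant.configure(2) do |config|
      config.vm.box = "your-existing-box"                       # placeholder box name
      config.vm.synced_folder ".", "/vagrant", disabled: true   # skip the rsync'd folder
    end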

Setup the inventory for management

  • Find the IPs for all managed boxes
  • Organize them (in one group or several) in the /etc/ansible/hosts file
  • Remember to specify the SSH port if non-22:
    [test-web]
    127.0.0.1 ansible_ssh_port=2222
    # 127.0.0.1 ansible_port=2222 when Ansible version > 1.9
    • While “ansible_port” is said to be the supported parameter as of Ansible 2.0, my own experience with Ansible under Bash on Windows was that ansible wouldn’t connect properly to the server until I changed the inventory configuration to use “ansible_ssh_port”, even though ansible --version reported itself as 2.1.1.0
    • Side question: is there some way to predictably force the same SSH port every time for the same box?  That way I can setup an inventory in my Bash environment and keep it stable.

Getting SSH keys on the VMs

  • (Optional: generate keys if not already) Run ssh-keygen -t rsa
  • (Optional: if you’ve destroyed and re-generated the VM with vagrant destroy/up, wipe out the existing key for the host:port combination by running the following command that is recommended when ssh-copy-id fails): ssh-keygen -f “/home/mike/.ssh/known_hosts” -R [127.0.0.1]:2222
  • Run ssh-copy-id vagrant@127.0.0.1 -p 2222 to push the public key to the target VM’s vagrant account, then verify the login as shown below
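
A quick sanity check that the key actually landed, before involving Ansible at all (same host/port as above):

    ssh -p 2222 vagrant@127.0.0.1    # should log you in without a password prompt
    exit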

Connect to the VMs using Ansible to test connectivity

  • [from Windows] vagrant ssh-config will tell you the IP address and port of your current VM
  • [from Bash] ansible all -u vagrant -m ping will check basic Ansible connectivity
    • (ansible all -c local -m ping will go even more basic, testing Ansible itself)

Run the playbook

  • Run ansible-playbook [playbook_name.yml e.g. playbook.yml] -u vagrant
    • If you receive an error like “SSH encountered an unknown error” with details that include “No more authentication methods to try.  Permission denied (publickey,password).”, make sure to specify the correct remote user (i.e. one that trusts your SSH key)
    • If you receive an error like “stderr: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)”, make sure your remote user runs with root privilege – e.g. in the [playbook.yml], ensure sudo: true is included (see the playbook sketch after this list)
  • Issue: if you receive an error like “fatal: [127.0.0.1]: UNREACHABLE! => {“changed”: false, “msg”: “Failed to connect to the host via ssh.”, “unreachable”: true}”, check that your SSH keys are trusted by the remote user you’re using (e.g. “-u vagrant” may not have the SSH keys already trusted)
  • If you wish to target a subset of servers in your inventory (e.g. using one or more groups), add the “-l” parameter and name the inventory group, IP address or hostname you wish to target
    e.g. ansible-playbook playbook.yml -u vagrant -l test-web
    or ansible-playbook playbook.yml -u vagrant -l 127.0.0.1
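
For reference, the playbook being run here boils down to something like this minimal sketch (the role name matches the repo under test, and sudo: true is the older spelling these Ansible versions still accept; exactly how the role gets referenced depends on your roles path):

    # playbook.yml (minimal sketch)
    - hosts: all                            # or an inventory group such as test-web
      sudo: true                            # remote tasks like apt-get need root
      roles:
        - ansible-role-unattended-upgrades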

Protip: remote_user

If you want to stop having to add -u vagrant to all the fun ansible commands, then go to your /etc/ansible/ansible.cfg file and add remote_user = vagrant in the appropriate location.
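
Concretely, that’s a fragment like this (keep whatever else is already in the file):

    # /etc/ansible/ansible.cfg, or a per-project ./ansible.cfg
    [defaults]
    remote_user = vagrant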

Rabbit Hole Details for the Pedantically-Inclined


Great Related Lesson: know the difference between vagrant commands

  • Run vagrant ssh to connect to the VM [note: requires an SSH app installed in Windows, under this setup]
  • Run vagrant status to check what state the VM is in
  • Run vagrant reload to restart the VM
  • Run vagrant halt to stop the VM
  • Run vagrant destroy to wipe the VM

Ansible’s RSA issue when SSH’ing into a non-configured remote user

  • The following issue occurs when running ansible commands to a remote SSH target
    e.g. ansible all -m ping
  • This occurs even when the following commands succeed:
    • ansible -c local all -m ping
    • ssh vagrant@host.name [port #]
    • ssh-copy-id -p [port #] vagrant@host.name
  • Also note: prefixing with “sudo” doesn’t seem to help – just switches whose local keys you’re using
  • I spent the better part of a few hours (spaced over two days, due to rage quit) troubleshooting this situation
  • Troubleshooting this is challenging to say the least, as ansible doesn’t intelligently hint at the source of the problem, even though this must be a well-known issue
    • There’s nothing in the debug output of ssh/(openssl?) that indicates that there are no trusted SSH keys in the account of the currently-used remote user
    • Nor is it clear which remote user is being impersonated – sure, I’ll bet someone that fights with SSH & OpenSSL all day would have noticed the subtle hints, but for those of us just trying to get a job done, it’s like looking through foggy glass
  • Solution: remember to configure the remote user under which you’re connecting (i.e. a user with the correct permissions *and* who trusts the SSH keys in use)
    • Solution A: add the -u vagrant parameter
    • Solution B: specify remote_user = vagrant in the ansible.cfg file under [defaults]
mike@MIKE-WIN10-SSD:~/code/ansible-role-unattended-upgrades$ ansible-playbook role.yml -vvvv

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
<127.0.0.1> ESTABLISH CONNECTION FOR USER: mike
<127.0.0.1> REMOTE_MODULE setup
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/mike/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=2222', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '127.0.0.1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1471378875.79-237810336673832 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1471378875.79-237810336673832 && echo $HOME/.ansible/tmp/ansible-tmp-1471378875.79-237810336673832'"]
fatal: [127.0.0.1] => SSH encountered an unknown error. The output was:
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/mike/.ansible/cp/ansible-ssh-127.0.0.1-2222-mike" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.
debug2: fd 3 setting O_NONBLOCK
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug3: timeout: 10000 ms remain after connect
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/mike/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /home/mike/.ssh/id_rsa type 1
debug1: identity file /home/mike/.ssh/id_rsa-cert type -1
debug1: identity file /home/mike/.ssh/id_dsa type -1
debug1: identity file /home/mike/.ssh/id_dsa-cert type -1
debug1: identity file /home/mike/.ssh/id_ecdsa type -1
debug1: identity file /home/mike/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/mike/.ssh/id_ed25519 type -1
debug1: identity file /home/mike/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u1
debug1: match: OpenSSH_6.7p1 Debian-5+deb8u1 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug3: put_host_port: [127.0.0.1]:2222
debug3: load_hostkeys: loading entries for host "[127.0.0.1]:2222" from file "/home/mike/.ssh/known_hosts"
debug3: load_hostkeys: found key type ECDSA in file /home/mike/.ssh/known_hosts:2
debug3: load_hostkeys: loaded 1 keys
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-ed25519,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none
debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: setup hmac-sha1-etm@openssh.com
debug1: kex: server->client aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com
debug2: mac_setup: setup hmac-sha1-etm@openssh.com
debug1: kex: client->server aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 07:f3:2f:b0:86:b5:b6:2b:d9:f5:26:71:95:6e:d9:ce
debug3: put_host_port: [127.0.0.1]:2222
debug3: put_host_port: [127.0.0.1]:2222
debug3: load_hostkeys: loading entries for host "[127.0.0.1]:2222" from file "/home/mike/.ssh/known_hosts"
debug3: load_hostkeys: found key type ECDSA in file /home/mike/.ssh/known_hosts:2
debug3: load_hostkeys: loaded 1 keys
debug1: Host '[127.0.0.1]:2222' is known and matches the ECDSA host key.
debug1: Found key in /home/mike/.ssh/known_hosts:2
debug1: ssh_ecdsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/mike/.ssh/id_rsa (0x7fffbdbd5b80),
debug2: key: /home/mike/.ssh/id_dsa ((nil)),
debug2: key: /home/mike/.ssh/id_ecdsa ((nil)),
debug2: key: /home/mike/.ssh/id_ed25519 ((nil)),
debug1: Authentications that can continue: publickey,password
debug3: start over, passed a different list publickey,password
debug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey
debug3: authmethod_lookup publickey
debug3: remaining preferred: ,gssapi-keyex,hostbased,publickey
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/mike/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /home/mike/.ssh/id_dsa
debug3: no such identity: /home/mike/.ssh/id_dsa: No such file or directory
debug1: Trying private key: /home/mike/.ssh/id_ecdsa
debug3: no such identity: /home/mike/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: /home/mike/.ssh/id_ed25519
debug3: no such identity: /home/mike/.ssh/id_ed25519: No such file or directory
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey,password).

TASK: [ansible-role-unattended-upgrades | add distribution-specific variables] ***
FATAL: no hosts matched or all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/home/mike/role.retry

127.0.0.1                  : ok=0    changed=0    unreachable=1    failed=0

Ansible’s permissions issue when trying to run non-trivial commands without sudo

  • ansible -m ping will work fine without local root permissions, making you think that you might be able to do other ansible operations without sudo
  • Haha! You would be wrong, foolish apprentice
  • Thus, the SSH keys for enabling ansible to work will have to be (a) generated for the local root user and (b) copied to the remote vagrant user
mike@MIKE-WIN10-SSD:~/code/ansible-role-unattended-upgrades$ ansible-playbook -u vagrant role.yml

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [127.0.0.1]

TASK: [ansible-role-unattended-upgrades | add distribution-specific variables] ***
ok: [127.0.0.1]

TASK: [ansible-role-unattended-upgrades | install unattended-upgrades] ********
failed: [127.0.0.1] => {"failed": true, "item": ""}
stderr: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

msg: 'apt-get install 'unattended-upgrades' ' failed: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?


FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/home/mike/role.retry

127.0.0.1                  : ok=2    changed=0    unreachable=0    failed=1

 

Articles I reviewed while doing the work outlined here

https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2

http://blog.publysher.nl/2013/07/infra-as-repo-using-vagrant-and-salt.html

https://github.com/devopsgroup-io/vagrant-digitalocean

https://github.com/mitchellh/vagrant/issues/4073

http://stackoverflow.com/questions/23337312/how-do-i-use-rsync-shared-folders-in-vagrant-on-windows

https://github.com/mitchellh/vagrant/issues/3230

https://www.vagrantup.com/docs/synced-folders/basic_usage.html

http://docs.ansible.com/ansible/intro_inventory.html

http://stackoverflow.com/questions/36932952/ansible-unable-to-connect-to-aws-ec2-instance

http://serverfault.com/questions/649659/ansible-try-to-ping-connection-between-localhost-and-remote-server

http://stackoverflow.com/questions/22232509/vagrant-provision-works-but-i-cannot-send-an-ad-hoc-command-with-ansible

http://stackoverflow.com/questions/21670747/what-user-will-ansible-run-my-commands-as#21680256

Hiring in the Kafka universe of BigCorp: a play in infinite acts

Ever wondered why it takes so long for a company to get around to deciding whether to hire you?

The typical hiring “process” as I’ve observed it from inside the belly of two beasts (Microsoft and Intel – though I gather this is typical of most large, and many small, companies):

  • “yeah, we’ve got two heads requested, has to get through Mid-Year Budget Adjustment Review Fuckup”
  • “update? Yeah, MYBARF is taking a little longer than usual, but I’m hearing we’re likely to get the heads, so I’ve started drafting the job req”
  • “new emergency project announced – I’ll be heads-down for a few weeks with my key engineer – BTW we lost one of the heads to another project, last one isn’t approved yet”
  • “yeah, MYBARF got approved last month but the open head is still under negotiation”
  • “OK the head is approved – I lost the draft req, could someone volunteer to write one up for me?”
  • “HR had some feedback on the req language”
  • “we posted the req”
  • “I’ll have time to review resumes from HR in a week”
  • “HR has no idea how to screen for this job so I had to reject the initial batch of resumes”
  • “OK, I’ll have time to phone screen starting next week”
  • “I haven’t seen any mind-blowing candidates yet so I’m talking to HR *again* about my expectations”
  • “Can you do a tech screen tomorrow morning between 7:30 and 8:15? That’s the only time one candidate has for us to talk…”

Bash/Ubuntu on Win10: getting *nix vagrant working with virtualbox (not)

TL;DR Getting vagrant + virtualbox running natively in Bash for Ubuntu on Windows is a no-go.  Try a hybrid Windows/WSL solution instead.

At the end of our last episode, our hero was trapped under the following paradox:

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ vagrant up
VirtualBox is complaining that the installation is incomplete. Please
run `VBoxManage --version` to see the error message which should contain
instructions on how to fix this error.
mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ VBoxManage --version
WARNING: The character device /dev/vboxdrv does not exist.
         Please install the virtualbox-dkms package and the appropriate
         headers, most likely linux-headers-3.4.0+.

         You will not be able to start VMs until this problem is fixed.

However, the advice for installing virtualbox-dkms is merely a distraction:

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ sudo apt-get install virtualbox-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
virtualbox-dkms is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 44 not upgraded.

And installing linux-headers-3.4.0+ doesn’t seem to work:

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ sudo apt-get install linux-headers-3.4.0+
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-3.4.0
E: Couldn't find any package by regex 'linux-headers-3.4.0'

Where to go from here?

AskUbuntu turns up this tasty lead:

http://askubuntu.com/questions/465454/problem-with-the-installation-of-virtualbox

…where VBoxManage instead indicates “…most likely linux-headers-generic”.  This aligns with my previous investigation into the version of Linux that ships with Bash on Ubuntu for Windows (‘uname -r’ returns “3.4.0+”, which I suspect is what VBoxManage appends to its “most likely” hint).

Aside

On a lark, I decided to see if I could confirm this theory from the virtualbox source code.  Since it’s Oracle, of course they had to use an “enterprise-y” repo (Trac) which provides a browseable but not searchable front-end, so I pawed through each of the .cpp files by hand on the off-chance this message was being constructed directly in VBoxManage*.cpp source:

https://www.virtualbox.org/browser/vbox/trunk/src/VBox/Frontends/VBoxManage

It’s entirely possible the message is passed up from an imported library, or that it’s constructed from fragments that don’t explicitly include the string “most likely” in any one line of source, but I wasn’t able to find it from this branch of the virtualbox source repo.

Dead End, Take a Guess

OK, if there’s no specific indication which version of the headers must be used, and on the assumption no damage can be caused by downloading what should merely be text files, then let’s just try the linux-headers-generic and see what happens.

And the apt-get messages seem promising – especially that it selected linux-headers-3.13.* files magically without me tracking down which specific versions I needed:

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ sudo apt-get install linux-headers-generic
[sudo] password for mike:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  linux-headers-3.13.0-92 linux-headers-3.13.0-92-generic
The following NEW packages will be installed:
  linux-headers-3.13.0-92 linux-headers-3.13.0-92-generic
  linux-headers-generic
0 upgraded, 3 newly installed, 0 to remove and 44 not upgraded.
Need to get 9,571 kB of archives.
After this operation, 77.0 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Cool, except for these lines in the script output:

Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.13.0-92-generic /boot/vmlinuz-3.13.0-92-generic

(why does that make me think I just overwrote something important?)

…and for reasons initially unknown to me, the run-parts script seems to go zombie.

[Aside: I’m still too polluted by my Win32 experience, so I kept trying to interrupt with Ctrl-C.  No.  Bad dog, no treat.  Instead, try Ctrl-Z (pronounced “zed”, ’cause I’m Canadian like that).]

Finding Out if Anyone Else Has Seen This

Vagrant is a pretty popular way of managing virtual machines these days, right?  Yeah.  And while I might be in the first days of the public release of Bash on Windows, there’s been an Insiders Preview going for months, and lots of people banging on the corners.

So what are the odds someone else has tried this too?

Old school: search stackoverflow.com, social.technet.microsoft.com.  No bueno – plenty of folks reporting issues on SO with Bash on Windows, but no one there has reported this vagrant problem.

New school: somehow stumbled across the github repo for BashOnWindows, and dutifully filled out as detailed an issue report as I could muster.

=== NOW HERE’S THE PART THAT BLEW MY MIND ===

A Microsoft employee responded with an intelligent and helpful reply within hours on the same day!!!

(I remember a decade ago, Microsoft’s ‘engagement’ with customers reporting real issues with new software – even when Microsoft’s external bug trackers existed – was abysmal.  You’d be lucky to get an acknowledgement inside a month, and rarely if ever would they bother to update the issue when/if it ever got dispositioned, let alone addressed.  THIS KIND OF RESPONSIVENESS IS AMAZING FROM A CORPORATION.)

Root Issue

My bad, I’d misunderstood the implications of this: WSL (Windows Subsystem for Linux), which supports the user-mode Bash on Ubuntu layer, doesn’t implement any native Linux kernel support.  It’s all user-mode support, and it’s only for non-GUI apps (i.e. things that don’t require Display:0).

Our intrepid Microsoft employee reports here that DKMS isn’t currently supported.  The fact I took it even further to try installing the linux headers was moot; /dev/vboxdrv wouldn’t be available no matter what.

Cleanup in Aisle 4

Did you happen to go down the same road as me?  [What, are you similarly touched in the head?]  If so, here’s what I did to back out of my mess:

  • Performed the lock/install package cleanup specified here
  • Did as clean an uninstall of the linux-headers-generic package as I could (running sudo apt-get --purge remove linux-headers-generic), which outputs
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      linux-headers-generic*
    0 upgraded, 0 newly installed, 1 to remove and 47 not upgraded.
    2 not fully installed or removed.
    After this operation, 29.7 kB disk space will be freed.
    Do you want to continue? [Y/n]

    …which leads to the same run-parts script that fails.  Cleanup the locks/install packages again…then pray not enough damage was done by run-parts (in either direction) to matter. [Boy is that a landmine waiting to go off months from now…]

  • Clean uninstall of vagrant (sudo apt-get --purge remove vagrant)…which somehow leads again to these same lines:
    ...
    Do you want to continue? [Y/n]
    (Reading database ... 64370 files and directories currently installed.)
    Removing vagrant (1.4.3-1) ...
    Purging configuration files for vagrant (1.4.3-1) ...
    Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
    Setting up linux-headers-3.13.0-92-generic (3.13.0-92.139) ...
    Examining /etc/kernel/header_postinst.d.
    run-parts: executing /etc/kernel/header_postinst.d/dkms 3.13.0-92-generic /boot/vmlinuz-3.13.0-92-generic

    Ctrl-Z, rm locks and install bits.  [This is getting old.]

  • Clean uninstall of virtualbox (sudo apt-get --purge remove virtualbox)…and once again that unkillable linux-headers setup rears its head.
  • Let’s look closer.
  • Here’s the preamble when removing the vagrant package:
    mike@MIKE-WIN10-SSD:/var/lib/dpkg/updates$ sudo apt-get --purge remove vagrant
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      bsdtar libarchive13 liblzo2-2 libnettle4 libruby1.9.1 ruby ruby-childprocess
      ruby-erubis ruby-ffi ruby-i18n ruby-log4r ruby-net-scp ruby-net-ssh
      ruby1.9.1
    Use 'apt-get autoremove' to remove them.
    The following packages will be REMOVED:
      vagrant*
    0 upgraded, 0 newly installed, 1 to remove and 47 not upgraded.
    1 not fully installed or removed.
    After this operation, 1,612 kB disk space will be freed.
    Do you want to continue? [Y/n]

    [Emphasis mine – note the “1 not fully installed or removed” line]

  • Maybe – just MAYBE – the “not fully installed” package is linux-headers-generic, and if I could coax apt-get or dpkg to clean *that* up, we’d rid ourselves of this mess.  [*foreshadowing*  …or maybe I just need to find out how to wipe and reinstantiate Bash on Windows…]
  • First, do the suggested cleanup (sudo apt-get autoremove)
  • Then install debfoster and deborphan
  • Debfoster reports nothing interesting, but deborphan reports:
    deborphan: The status file is in an improper state.
    One or more packages are marked as half-installed, half-configured,
    unpacked, triggers-awaited or triggers-pending. Exiting.
  • This article provides a great grep for isolating the issue – here’s what it uncovered:
    Package: linux-headers-3.13.0-92-generic
    Status: install ok half-configured
    --
    Package: dialog
    Status: install ok unpacked
    --
    Package: debfoster
    Status: install ok unpacked
    --
    Package: deborphan
    Status: install ok unpacked
  • sudo dpkg --audit reports:
    The following packages are only half configured, probably due to problems
    configuring them the first time.  The configuration should be retried using
    dpkg --configure <package> or the configure menu option in dselect:
      linux-headers-3.13.0-92-generic Linux kernel headers for version 3.13.0 on 64
  • We already know “retry” isn’t the answer here…
  • sudo dpkg --configure --pending definitely kicks off the dead-end configuration of the headers…what can cause this to back out, or to remove the stuff that keeps getting triggered?
  • As I was about to uninstall Bash for Ubuntu, I (for no reason) ran exit from within the Bash shell, which showed me this new output:
    mike@MIKE-WIN10-SSD:/var/lib/dpkg$ exit
    exit
    run-parts: waitpid: Interrupted system call
    Failed to process /etc/kernel/header_postinst.d at /var/lib/dpkg/info/linux-headers-3.13.0-92-generic.postinst line 110.
    dpkg: error processing package linux-headers-3.13.0-92-generic (--configure):
    subprocess installed post-installation script returned error exit status 4
    Setting up dialog (1.2-20130928-1) …
  • After a few minutes I just killed the window
  • Restarted Bash, didn’t appear to have made any improvements on my situation.

Perhaps it’s time to finally throw in the towel.

Complete rebuild of Bash on Ubuntu

When all else fails, uninstall and reinstall.  Thankfully I hadn’t invested a ton of real work into this…

According to this comment, the following command cleans up the whole deal:

lxrun /uninstall /full

(Coda: In case you get an error re-installing afterwards, try running this command again.  I happened to end up with error code 0x80070091 for which I could find no help, but others have reported other error codes too.)
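
For completeness, the round trip from a CMD prompt looks something like this (a sketch; running bash again simply re-triggers the same first-run install of Ubuntu on Windows described in the next write-up):

    C:\> lxrun /uninstall /full
    C:\> bash
    (accept the prompt to download and reinstall Ubuntu on Windows)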

Let’s try this again from scratch.

Hope: I discovered the cbwin project is being actively developed, to enable users of Bash on Ubuntu for Win10 to launch Windows binaries from within the bash environment.  I’ll try this for the vagrant/virtualbox combo and report back.

Update

I quickly ran into limits with cbwin in this particular setup, but seemed to have found peace with a hybrid approach.

Running *nix apps on Win10 “Anniversary update”: initial findings

Ever since Microsoft announced the “Bash on Windows” inclusion in the Anniversary update of Win10, I’ve been positively *itching* to try it out.  I’ve spent *hours* in Git Bash, Cygwin and other workarounds trying to get tools like Vagrant to work natively in Windows.

Spoiler: it never quite worked. [Aside: if anyone has any idea how to get rsync to work in Cygwin or similarly *without* the Bash shell on Windows, let’s talk.  That was the killer flaw.]

Deciphering the (hidden) installation of Bash

I downloaded the update first thing this morning and got it installed, turned on Developer Mode, then…got stumped by Hanselman’s article (above) on how exactly to get the shell/subsystem itself installed.  [Seems like something got mangled in translation, since “…and adding the Feature, run you bash and are prompted to get Ubuntu on Windows from Canonical via the Windows Store…” doesn’t make any grammatical sense, and searching the Windows Store for “ubuntu”, “bash” or “canonical” didn’t turn up anything useful.]

The Windows10 subreddit’s megathread today left incomplete instructions, and a rumour that this was only available on Win10 Pro (not Home).

Instead, it turns out that you have to navigate to the legacy Control Panel to enable it, after you’ve turned on Developer Mode (thanks, MSDN Blogs):

Control Panel >> Programs >> Turn Windows features on or off, then check “Windows Subsystem for Linux (Beta)”.  Then reboot once again.

[Screenshot: the Windows Features dialog]
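
If you’d rather not click through the dialog, the same feature can be toggled from an elevated PowerShell prompt (an equivalence I’m assuming, since I went the Control Panel route myself):

    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux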

Then fire up CMD.EXE and type “bash” to initiate the installation of “Ubuntu on Windows”:

[Screenshot: typing “bash” in CMD.EXE kicks off the Ubuntu on Windows install]

Now to use it!

Once installed, it’s got plenty of helpful hints built in (or else Ubuntu has gotten even easier than I remember), such as:

[Screenshot: the package-install hints in Bash on Ubuntu on Windows]

Npm, rpm, vagrant, git, ansible, virtualbox are similarly ‘hinted’.

Getting up-to-date software installed

Weirdly, Ansible 1.5.4 was installed, not a 2.x version.  What gives?  OK, time to chase a rat through a rathole…

This article implies I could try to get a trusty-backport of ansible:
https://blogs.msdn.microsoft.com/commandline/2016/04/06/bash-on-ubuntu-on-windows-download-now-3/

Does that mean the Ubuntu on Windows is effectively an old version of Ubuntu?  How can I even figure that out?

Running ‘apt-get --version’ indicates we have apt 1.0.1ubuntu2 for amd64, compiled on Jan 12 2016.  That seems relatively recent…

Running ‘apt-cache policy ansible’ gives me the following output:

mike@MIKE-WIN10-SSD:/etc/apt$ apt-cache policy ansible 
ansible: 
  Installed: 1.5.4+dfsg-1 
  Candidate: 1.5.4+dfsg-1 
  Version table: 
 *** 1.5.4+dfsg-1 0 
        500 http://archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages 
        100 /var/lib/dpkg/status

Looking at /etc/apt/sources.list, there are only three repositories listed by default:

mike@MIKE-WIN10-SSD:/etc/apt$ cat sources.list 
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse 
deb http://archive.ubuntu.com/ubuntu trusty-updates main restricted universe multiverse 
deb http://security.ubuntu.com/ubuntu trusty-security main restricted universe multiverse

So is there some reason why Ubuntu-on-Windows’ package manager (apt) doesn’t even list > 1.5.4 as an available installation?  ‘Cause I was previously running v2.2.0 of Ansible on native Ubuntu (just last month).

I *could* run from source in a subdirectory from my home directory – but I’m shamefully (blissfully?) unaware of the implications – are there common configuration files that might stomp on each other?  Is there common code stuffed in some dark location that is better left alone?

Or should I add the source repo mentioned here?  That seems the safest option, because then apt should manage the dependencies and not leave me with two installs of ansible (1.5.4 and 2.x).

Turns out the “Latest Releases via Apt (Ubuntu)” route seems to have done well enough – now ‘ansible --version’ returns “ansible 2.1.1.0”, which appears to be the latest according to https://launchpad.net/~ansible/+archive/ubuntu/ansible.
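
For the record, that route boils down to roughly this (reconstructed from the Ansible install docs; apt-add-repository lives in software-properties-common if it’s missing):

    sudo apt-get install software-properties-common   # provides apt-add-repository
    sudo apt-add-repository ppa:ansible/ansible        # the PPA behind launchpad.net/~ansible
    sudo apt-get update
    sudo apt-get install ansible
    ansible --version                                  # should now report 2.x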

Deciphering hands-on install dependencies

Next I tried installing vagrant, which went OK, but then complained about an incomplete installation:

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ vagrant up

VirtualBox is complaining that the installation is incomplete. Please run
`VBoxManage --version` to see the error message which should contain
instructions on how to fix this error. 

mike@MIKE-WIN10-SSD:/mnt/c/Users/Mike/VirtualBox VMs/BaseDebianServer$ VBoxManage --version 
WARNING: The character device /dev/vboxdrv does not exist. 
         Please install the virtualbox-dkms package and the appropriate 
         headers, most likely linux-headers-3.4.0+. 

         You will not be able to start VMs until this problem is fixed.

So, tried ‘sudo apt-get install linux-headers-3.4.0’ and it couldn’t find a match.  Tried ‘apt-cache search linux-headers’ and it came back with a wide array of options – 3.13, 3.16, 3.19, 4.2, 4.4 (many subversions and variants available).

Stopped me dead in my tracks – which one would be appropriate to the Ubuntu that ships as “Ubuntu for Windows” in the Win10 Anniversary Update?  Not that header files *should* interact with the operations of the OS, but on the off-chance that there’s some unexpected interaction, I’d rather be a little methodical than have to figure out how to wipe and reinstall.

Figuring out what is the equivalent “version of Ubuntu” that ships with this subsystem isn’t trivial:

  • According to /etc/issue, it’s “Ubuntu 14.04.4 LTS”.
  • What version of the Linux kernel comes with 14.04.4?
  • According to ‘uname -r’, it’s “3.4.0+”, which seems suspiciously under-specific.
  • According to /proc/version, it’s “Linux version 3.4.0-Microsoft (Microsoft@Microsoft.com) (gcc version 4.7 (GCC) ) #1 SMP PREEMPT Wed Dec 31  14:42:53 PST 2014”.
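
For reference, those checks in one place:

    cat /etc/issue       # "Ubuntu 14.04.4 LTS"
    uname -r             # "3.4.0+"
    cat /proc/version    # "Linux version 3.4.0-Microsoft ..."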

That’s enough for one day – custom versions of the OS should make one ponder.  Tune in next time to see what kind of destruction I can wring out of my freshly-unix-ized Windows box.

P.S. Note to self: it’s cool to get an environment running; it’s even better for it to stay up to date.  This dude did a great job of documenting his process for keeping all the packages current.

Problems-to-solve: finding meetup-friendly spaces in Portland

Preamble

Sometimes I encounter a problem in my day-to-day life that I find so frustrating – and, to me, so obvious (why hasn’t it been thought of by some PM already? why wasn’t it caught by PO/PM acceptance validation, or during usability testing, or in the User Story’s acceptance criteria?) – that I can’t help thinking of how I’d have pitched it to the engineering team myself.

Think of this as a Product Guy’s version of “fantasy football” – “fantasy product ownership/management”.

Summary

User Story: as the organizer of a Meetup in Portland, I want to be able to quickly find all the meetup-friendly spaces in Portland so that I can book my meetup in a suitable space.

BDD Scenario: Given that I have an existing meetup group AND that the meetup does not have a booked meetup space, when I search for available meetup-friendly spaces in Portland, then I see a listing of such spaces in Portland including address, contact info and maximum number of attendees.

Background

I’ve been an active participant in the meetup scene in Portland for a few years now. I’ve briefly co-led a meetup as well, and been solicited to help organize a number of other meetups.

One of the phenomena I’ve observed is how challenging it can be for some meetups to find a space for their meetings. Many meetups find one space, lock it in for a year and never roam. Some meetups have to change spaces from month to month, and regularly put out a call to attendees to help them find suitable locations. And once in a while, a meetup has to change venues for space or other logistical reasons (e.g. a very popular speaker is coming to town).

Whenever I talk to meetup organizers about this part of the job, it strikes me as odd that they’re all operating like this is a high-school gossip circle: no one has all the information, there is no central place to find out where to go/who to talk to, and most people are left to ask friends if they happen to know of any spaces.

In a tech-savvy city such as Portland, where we have dozens of meetups every day, and many tech conferences a month, it’s surprising to find that getting a meetup successfully housed relies so much on word of mouth (or just using your employer’s space, if you’re lucky to be in such a position).

I’ve been at meetups in some great spaces, nearly all of them in a public-friendly space of tech employers across Portland. Where is the central directory of these spaces? Is there an intentional *lack* of public listing, so that these spaces don’t get overrun? Is this a word-of-mouth resource so that only those event organizers with a personal referral are deemed ‘vetted’ for use?

From the point of view of the owners of these spaces, I can imagine there’s little incentive to make this a seven-nights-a-week resource. Most of these employers don’t employ staff to stick around at night to police these spaces; many of them seem to leave the responsibility up to an employee [often an existing member of the meetup group] to chaperone the meetup attendees and shoo them out when they’re too tired or have to go home/back to work.

My Fantasy Scenario

Any meetup organizer in Portland will be able to find suitable meetup spaces and begin negotiating for available dates/times. A “suitable” space would be qualified on such criteria as:

  • Location
  • Number of people the space can legally accommodate
  • Number of seats available
  • Days and hours the space is potentially available (e.g. M-F 5-8, weekends by arrangement)
  • A/V availability (projector, microphone)
  • Guest wifi availability
  • Amenities (beer, food, snacks, bike parking)
  • Special notes (e.g. door access arrangements, must arrange to have employee chaperone the space)
  • Contact info to inquire about space availability [email, phone, booking system]

Future features

I can also see a need for a service that similarly lists conference-friendly spaces around town – especially for low-budget conferences that can’t afford the corporate convention spaces. I’ve been at many community-oriented conferences here in Portland, and I’m betting the spaces I’ve visited [e.g. Eliot Center, Armory, Ambridge, Portland Art Museum, Center for the Arts] still aren’t anywhere near all the secret treasures that await.

  • Number of separate/separable rooms and their seating
  • Additional limitations/requirements e.g. if food/drinks, must always use the contracted catering

Workarounds Tried

Workaround: the http://workfrom.co service includes a filter for “Free Community Spaces”, described as “Community spaces are free and open to all, no purchase required. Common community spaces include libraries, student unions and banks.” Unfortunately, as of now there are only five listings (three of them public library spaces).

Workaround: I was told by a friend that Cvent has a listing of event spaces in Portland. My search of their site led to this searchable interface. Unfortunately, this service appears to be more oriented to helping someone plan a conference or business meeting and keeping attendees entertained/occupied – where “venue type” = “corporate

Mashing the marvelous wrapper until it responds, part 1: prereq/setup

I haven’t used a dynamic language for coding nearly as much as strongly-typed, compiled languages so approaching Python was a little nervous-making for me.  It’s not every day you look into the abyss of your own technical inadequacies and find a way to keep going.

Here’s how embarrassing it got for me: I knew enough to clone the code to my computer and to copy the example code into a .py file, but beyond that it felt like I was doing the same thing I always do when learning a new language: trying to guess at the basics of using the language that everyone who’s writing about it already knows and has long since forgotten, they’re so obvious.  Obvious to everyone but the neophyte.

Second, I don’t respond well to the canonical means of learning a language (at least according to all the “Learn [language_x] from scratch” books I’ve picked up over the years), which goes something like this:

  • Chapter 1: History, Philosophy and Holy Wars of the Language
  • Chapter 2: Installing The Author’s Favourite IDE
  • Chapter 3: Everything You Don’t Have a Use For in Data Types
  • Chapter 4: Advanced Usage of Variables, Consts and Polymorphism
  • Chapter 5: Hello World
  • Chapter 6: Why Hello World Is a Terrible Lesson
  • Chapter 7: Author’s Favourite Language Tricks

… etc.

I tend to learn best by attacking a specific, relevant problem hands-on – having a real problem I felt motivated to attack is how these projects came to be (EFSCertUpdater, CacheMyWork).  So for now, despite a near-complete lack of context or mentors, I decided to dive into the code and start monkeying with it.

Riches of Embarrassment

I quickly found a number of “learning opportunities” – I didn’t know how to:

  1. Run the example script (hint: install the python package for your OS, make sure the python binary is in your current shell’s path, and don’t use the Windows Git Bash shell as there’s some weird bug currently at work)
  2. Install the dependencies (hint: run “pip install xxxx”, where “xxxx” is whatever shows up at the end of an error message like this:
    C:\Users\Mike\code\marvelous>python example.py 
    Traceback (most recent call last):     
        File "example.py", line 5, in <module>
            from config import public_key, private_key 
    ImportError: No module named config

    In this example, I ran “pip install config” to resolve this error.

  3. Set the public & private keys (hint: there was some mention of setting environment variables, but it turns out that for this example script I had to paste them into a file named “config.py” – yes, even though it’s plain text and not a script you’d run on its own – and make sure the config.py file is stored in the same folder as the script you’re running.  Its contents should look similar to these (no, these aren’t really my keys):
        public_key = 81c4290c6c8bcf234abd85970837c97 
        private_key = c11d3f61b57a60997234abdbaf65598e5b96

    Nope, don’t forget – when you declare a variable in most languages, and the variable is not a numeric value, you have to wrap the variable’s value in some type of quotation marks.  [Y’see, this is one of the things that bugs me about languages that don’t enforce strong typing – without it, it’s easy for casual users to forget how strings have to be handled]:

        public_key = '81c4290c6c8bcf234abd85970837c97' 
        private_key = 'c11d3f61b57a60997234abdbaf65598e5b96'
  4. Properly call into other Classes in your code – I started to notice in Robert’s Marvelous wrapper that his Python code would do things like this – the comic.py file defined
         class ComicSchema(Schema):

    …and the calling code would state

        import comic 
        … 
        schema = comic.ComicSchema()

    This was initially confusing to me, because I’m used to compiled languages like C# where you import the defined name of the Class, not the filename container in which the class is defined.  If this were C# code, the calling code would probably look more like this:

        using ComicSchema;
        … 
        _schema Schema = ComicSchema();

    (Yes, I’m sure I’ve borked the C# syntax somehow, but for sake of this sad explanation, I hope you get the idea where my brain started out.)

    I’m inferring that for a scripted/dynamic language like Python, the Python interpreter doesn’t have any preconceived notion of where to find the Classes – it has to be instructed to look at specific files first (import comic, which I’m guessing implies import comic.py), then further to inspect a specified file for the Class of interest (schema = comic.ComicSchema(), where comic. indicates the file to inspect for the ComicSchema() class).

Status: Learning

So far, I’m feeling (a) stupid that I have to admit these were not things with which I sprang from the womb, (b) grateful Python’s not *more* punishing, (c) smart-ish that fundamental debugging is something I’ve still retained and (d) good that I can pass along these lessons to other folks like me.

Coding Again? Experimenting with the Marvel API

I’ve been hanging around developers *entirely* too much lately.

These days I find myself telling myself the story that unless I get back into coding, I’m not going to be relevant in the tech industry any longer.

Hanging out (aka volunteering) at developer-focused conferences will do that to you:

Volunteering on open source projects will do that to you (jQuery Foundation‘s infrastructure team).

Interviewing for engineering-focused Product Owner and Technical Product Manager roles will do that to you. (Note: when did “technical” become equivalent to “I actively code in my day job/spare time”?)

One of the hang-ups I have that keeps me from investing the immense amount of grinding time it takes to make working code is that I haven’t found an itch to scratch that bugs me enough that I’m willing to commit myself to the effort. Plenty of ideas float their way past my brain, but very few (like CacheMyWork) get me emotionally engaged enough to surmount the activation energy necessary to fight alone past all the barriers: lonely nights, painful problem articulation, lack of buddy to work on it, and general frustration that I don’t know all the tricks and vocabulary that most good coders do.

Well, it finally happened. I found something that should keep me engaged: creating a stripped-down search interface into the Marvel comics catalogue.  Marvel.com provides a search on their site but I:

  1. keep forgetting where they buried it,
  2. find it cumbersome and slow to use, and
  3. never know if the missing references (e.g. appearances of Captain Marvel as a guest in others’ comics that aren’t returned in the search results) are because the search doesn’t work, or because the data is actually missing

Marvel launched an API a couple of years ago – I heard about it at the time and felt excited that my favourite comics publisher had embraced the Age of APIs.  But didn’t feel like doing anything with it.

Fast forward two years: I’m a diehard user of Marvel Unlimited, my comics reading is about half-Marvel these days, and I’m spending a lot of time trying to weave together a picture of how the characters relate, when they’ve bumped into each other, what issue certain happenings occurred in, etc

Possible questions I could answer if I write some code:

  • How socially-connected is Spidey compared with Wolverine?
  • When is the first appearance of any character?
  • What’s the chronological publication order of every comic crossover in any comics Event?

Possible language to use:

  • C# (know it)
  • F# (big hawtness at the .NET Fringe conf)
  • Python (feel like I should learn it)
  • Typescript (ES6 – like JavaScript with static types and other frustration-killers)
  • ScriptCS (a scriptable C#)

More important than choice of language though is availability of wrappers for the API – while I’m sure it would be very instructive to immediately start climbing the cliff of building “zero tech” code, I learn far faster when I have visible results, than when I’m still fiddling with getting the right types for my variables or trying to remember where and when to set the right kind of closing braces.

So for sake of argument I’m going to try out the second package I found – Robert Kuykendall’s “marvelous” python wrapper: https://github.com/rkuykendall/marvelous

See you when I’ve got something to report.