ImportError: No module named ‘rest_framework_swagger’

Summary

Building our Django app locally (i.e. no Docker container wrapping it) works great. Building the same app in Docker fails. Hint: make sure you know which requirements.txt file you’re using to build the app.  (And get familiar with the -f parameter for Docker commands.)

Problem

When I first started building the Docker container, I kept getting this ImportError after the container successfully built:

ImportError: No module named 'rest_framework_swagger'

Research

The only half-useful hit on StackOverflow was this one, and it didn’t seem like it explicitly addressed my issue in Docker:

http://stackoverflow.com/questions/27369314/django-rest-framework-swagger-ui-importerror-no-module-named-rest-framework

…And The Lightning Bolt Struck

However, with enough time and desperation I finally understood that the article wasn’t wrong either.  I wasn’t using the /requirements.txt that contained all the dependencies – I was using the incomplete, abandoned /budget_proj/requirements.txt file, which lacked a key dependency.

Aside

I wasn’t watching the results of pip install closely enough – and when running docker-compose up --build multiple times, the layer of interest won’t rebuild if there are no changes to that layer’s inputs. (Plus this is a case where no error message is thrown – just one or two fewer pip installs, and who notices that until they’ve spent the better part of two days on the problem?)
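A brute-force way to defeat that caching is to force every layer to rebuild, so pip’s output shows up fresh.  Both flags are standard docker-compose options; “web” here is just a stand-in for whatever the service is named in your compose file:

```shell
# Rebuild every layer from scratch so the pip install output is visible again
docker-compose build --no-cache web
docker-compose up
```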

Detailed Diagnostics

If you look closely at our project from that time, you’ll notice there are actually two copies of requirements.txt – one at the repo root and one in the /budget_proj/ folder.

Developers who are just testing Django locally will simply launch pip install -r requirements.txt from the root directory of their clone of the repo, and all is well.  Here’s the result of pip install -r requirements.txt when using the expected file:

$ pip install -r requirements.txt 
Collecting appdirs==1.4.0 (from -r requirements.txt (line 1))
 Using cached appdirs-1.4.0-py2.py3-none-any.whl
Collecting Django==1.10.5 (from -r requirements.txt (line 2))
 Using cached Django-1.10.5-py2.py3-none-any.whl
Collecting django-filter==1.0.1 (from -r requirements.txt (line 3))
 Using cached django_filter-1.0.1-py2.py3-none-any.whl
Collecting django-rest-swagger==2.1.1 (from -r requirements.txt (line 4))
 Using cached django_rest_swagger-2.1.1-py2.py3-none-any.whl
Collecting djangorestframework==3.5.4 (from -r requirements.txt (line 5))
 Using cached djangorestframework-3.5.4-py2.py3-none-any.whl
Requirement already satisfied: packaging==16.8 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 6))
Collecting psycopg2==2.7 (from -r requirements.txt (line 7))
 Using cached psycopg2-2.7-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Collecting pyparsing==2.1.10 (from -r requirements.txt (line 8))
 Using cached pyparsing-2.1.10-py2.py3-none-any.whl
Collecting requests==2.13.0 (from -r requirements.txt (line 9))
 Using cached requests-2.13.0-py2.py3-none-any.whl
Requirement already satisfied: six==1.10.0 in ./budget_venv/lib/python3.5/site-packages (from -r requirements.txt (line 10))
Collecting gunicorn (from -r requirements.txt (line 12))
 Using cached gunicorn-19.7.0-py2.py3-none-any.whl
Collecting openapi-codec>=1.2.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting coreapi>=2.1.1 (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting simplejson (from django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached simplejson-3.10.0-cp35-cp35m-macosx_10_11_x86_64.whl
Collecting uritemplate (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached uritemplate-3.0.0-py2.py3-none-any.whl
Collecting coreschema (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting itypes (from coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Collecting jinja2 (from coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
 Using cached Jinja2-2.9.5-py2.py3-none-any.whl
Collecting MarkupSafe>=0.23 (from jinja2->coreschema->coreapi>=2.1.1->django-rest-swagger==2.1.1->-r requirements.txt (line 4))
Installing collected packages: appdirs, Django, django-filter, uritemplate, requests, MarkupSafe, jinja2, coreschema, itypes, coreapi, openapi-codec, simplejson, djangorestframework, django-rest-swagger, psycopg2, pyparsing, gunicorn
 Found existing installation: appdirs 1.4.3
 Uninstalling appdirs-1.4.3:
 Successfully uninstalled appdirs-1.4.3
 Found existing installation: pyparsing 2.2.0
 Uninstalling pyparsing-2.2.0:
 Successfully uninstalled pyparsing-2.2.0
Successfully installed Django-1.10.5 MarkupSafe-1.0 appdirs-1.4.0 coreapi-2.3.0 coreschema-0.0.4 django-filter-1.0.1 django-rest-swagger-2.1.1 djangorestframework-3.5.4 gunicorn-19.7.0 itypes-1.1.0 jinja2-2.9.5 openapi-codec-1.3.1 psycopg2-2.7 pyparsing-2.1.10 requests-2.13.0 simplejson-3.10.0 uritemplate-3.0.0

However, our Django application (and the related Docker files) lives in a subdirectory off the repo root (i.e. in the /budget_proj/ folder) – and because I didn’t yet know about the -f parameter for docker-compose, I was convinced I had to run docker-compose from the same directory as docker-compose.yml.  From there, docker-compose didn’t have access to files in the parent directory of wherever it was launched – Docker effectively “chroots” its commands, so it has no access to ../bin/requirements.txt, for example.

So when docker-compose launched pip install -r requirements.txt, it could only access the incomplete file, which gave us this result instead:

Step 12/12 : WORKDIR /code
 ---> 8626fa515a0a
Removing intermediate container 05badf699f66
Successfully built 8626fa515a0a
Recreating budgetproj_budget-service_1
Attaching to budgetproj_budget-service_1
web_1 | Running docker-entrypoint.sh...
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Starting gunicorn 19.7.0
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Listening at: http://0.0.0.0:8000 (5)
web_1 | [2017-03-16 00:31:34 +0000] [5] [INFO] Using worker: sync
web_1 | [2017-03-16 00:31:34 +0000] [8] [INFO] Booting worker with pid: 8
web_1 | [2017-03-16 00:31:35 +0000] [8] [ERROR] Exception in worker process
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
web_1 | worker.init_process()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
web_1 | self.load_wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
web_1 | self.wsgi = self.app.wsgi()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
web_1 | self.callable = self.load()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
web_1 | return self.load_wsgiapp()
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
web_1 | return util.import_app(self.app_uri)
web_1 | File "/usr/local/lib/python3.5/site-packages/gunicorn/util.py", line 376, in import_app
web_1 | __import__(module)
web_1 | File "/code/budget_proj/wsgi.py", line 16, in <module>
web_1 | application = get_wsgi_application()
web_1 | File "/usr/local/lib/python3.5/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
web_1 | django.setup(set_prefix=False)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/__init__.py", line 27, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate
web_1 | app_config = AppConfig.create(entry)
web_1 | File "/usr/local/lib/python3.5/site-packages/django/apps/config.py", line 90, in create
web_1 | module = import_module(entry)
web_1 | File "/usr/local/lib/python3.5/importlib/__init__.py", line 126, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | ImportError: No module named 'rest_framework_swagger'
web_1 | [2017-03-16 00:31:35 +0000] [8] [INFO] Worker exiting (pid: 8)
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Shutting down: Master
web_1 | [2017-03-16 00:31:35 +0000] [5] [INFO] Reason: Worker failed to boot.
budgetproj_web_1 exited with code 3
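This is where docker-compose’s -f parameter comes in: it lets you name the compose file explicitly instead of cd-ing into its directory.  A hypothetical invocation from the repo root (the path is illustrative, matching our layout):

```shell
# Run from the repo root, where the complete requirements.txt lives,
# and point docker-compose at the compose file in the subdirectory
docker-compose -f budget_proj/docker-compose.yml up --build
```

(I believe relative paths inside the compose file are still resolved relative to the file itself, so the build context may need adjusting as well.)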

Coda

It has been pointed out that not only is it redundant for the project to have two requirements.txt files (I agree, and when we find the poor soul who inadvertently added the second file, they’ll be sacked…from our volunteer project ;)…

…but also that if we’re encapsulating our project’s core application in a subdirectory (called budget_proj), then logically that is where the “legit” requirements.txt file belongs – not at the project’s root, just because that’s where you normally find requirements.txt in a repo.

Status Report: Docker Toolbox failures on Windows 10

TL;DR

A growing body of experience building containers on Windows 10 systems indicates that Docker Toolbox for Windows (on Win10 at minimum) is not a supportable combination for our project at present.  It appears that the command in docker-compose.yml cannot be found – at least in cases where the called script is stored in a directory inside the container.

While there may be some adjustment that could be made to support both Windows and *NIX hosts in this scenario, it’s yet another incompatibility we’ve encountered (among many) that only crops up when using Windows as the host, and our team has to focus its cycles on supporting the majority of our project’s developers (who aren’t on Windows).

Problem

One of my colleagues on the project reported the following error when bringing up the Docker container from this branch in this repo.  He’s using Windows 10 with Docker Toolbox for Windows:

user@DESKTOP MINGW64 /c/develop/python/team-budget/budget_proj (dockerize)
$ docker-compose up
Starting budgetproj_web_1

ERROR: for web Cannot start service web: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/code/docker-entrypoint.sh\": stat /code/docker-entrypoint.sh: no such file or directory"
ERROR: Encountered errors while bringing up the project.

I explained that, as far as I understand the Docker engine and Docker Toolbox (which hosts the engine in a VirtualBox VM), what’s going on is that inside the container, Docker is trying to execute /code/docker-entrypoint.sh – so theoretically there should be no reason why this would behave any differently on Windows than on Mac or Linux, since the runtime environment inside the Docker container shouldn’t know anything about its underlying host’s environment.  I know for sure it’s working well on Mac and Linux – even on my personal Mac that’s running the Docker Toolbox.

Investigation

Budget repo on Windows: fails

I attempted this myself with the same branch/repo using Docker Toolbox for Windows (downloaded today, running v1.13.1 of Docker engine) on Windows 10 (Anniversary update), and received effectively the same result:

...

Step 12/12 : WORKDIR /code
 ---> 37cb2ce39964
Removing intermediate container af6440be2e49
Successfully built 37cb2ce39964
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating budgetproj_web_1
Attaching to budgetproj_web_1
web_1  | standard_init_linux.go:178: exec user process caused "no such file or directory"
budgetproj_web_1 exited with code 1

Budget repo on Mac: succeeds

The same commit from the same branch/repo on OS X 10.11 with Docker Toolbox for Mac:

...

Step 12/12 : WORKDIR /code
 ---> 0da697ffe35c
Removing intermediate container d4df9c99e8f9
Successfully built 0da697ffe35c
Recreating budgetproj_web_1
Attaching to budgetproj_web_1
web_1 | Operations to perform:
web_1 | Apply all migrations: admin, auth, budget_app, contenttypes, sessions
web_1 | Running migrations:
web_1 | Applying contenttypes.0001_initial... OK
web_1 | Applying auth.0001_initial... OK
web_1 | Applying admin.0001_initial... OK
web_1 | Applying admin.0002_logentry_remove_auto_add... OK
web_1 | Applying contenttypes.0002_remove_content_type_name... OK
web_1 | Applying auth.0002_alter_permission_name_max_length... OK
web_1 | Applying auth.0003_alter_user_email_max_length... OK
web_1 | Applying auth.0004_alter_user_username_opts... OK
web_1 | Applying auth.0005_alter_user_last_login_null... OK
web_1 | Applying auth.0006_require_contenttypes_0002... OK
web_1 | Applying auth.0007_alter_validators_add_error_messages... OK
web_1 | Applying auth.0008_alter_user_username_max_length... OK
web_1 | Applying budget_app.0001_initial... OK
web_1 | Applying budget_app.0002_auto_20170221_0359... OK
web_1 | Applying sessions.0001_initial... OK
web_1 | [2017-03-01 21:42:05 +0000] [10] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-03-01 21:42:05 +0000] [10] [INFO] Listening at: http://0.0.0.0:8000 (10)
web_1 | [2017-03-01 21:42:05 +0000] [10] [INFO] Using worker: sync

So I tried this on a couple of our organization’s other projects.

Housing repo on Windows: fails

Housing-17 under Docker Toolbox for Windows on Windows 10:

$ docker-compose up --build
Building web
Step 1/6 : FROM python:3.5
 ---> 4e5ed9f6613e
Step 2/6 : ENV PYTHONUNBUFFERED 1
 ---> Using cache
 ---> a62a6ae73cec
Step 3/6 : ADD ./requirements.txt /provision/
 ---> Using cache
 ---> 9f34a7d35294
Step 4/6 : WORKDIR /provision/
 ---> Using cache
 ---> 4e06b4c2249f
Step 5/6 : RUN pip install -r requirements.txt
 ---> Using cache
 ---> e96a581fc549
Step 6/6 : WORKDIR /code/
 ---> Using cache
 ---> fc61cc36c06f
Successfully built fc61cc36c06f
Starting housing17_db_1
Starting housing17_web_1
Attaching to housing17_db_1, housing17_web_1
db_1   | LOG:  database system was shut down at 2017-03-01 19:38:08 UTC
db_1   | LOG:  MultiXact member wraparound protections are now enabled
db_1   | LOG:  database system is ready to accept connections
web_1  | standard_init_linux.go:178: exec user process caused "no such file or directory"
housing17_web_1 exited with code 1

Housing repo on Mac: succeeds

Housing-17 under Docker Toolbox for Mac on OS X 10.11:

...

Step 6/6 : WORKDIR /code/
 ---> Using cache
 ---> f79cbc2964cb
Successfully built f79cbc2964cb
Starting housing17_db_1
Starting housing17_web_1
Attaching to housing17_db_1, housing17_web_1
db_1 | LOG: database system was shut down at 2017-03-01 03:26:27 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
web_1 | 
web_1 | 0 static files copied to '/code/static', 126 unmodified.
web_1 | [2017-03-01 21:50:00 +0000] [7] [INFO] Starting gunicorn 19.6.0
web_1 | [2017-03-01 21:50:00 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
web_1 | [2017-03-01 21:50:00 +0000] [7] [INFO] Using worker: sync

Emergency_response repo on Windows: fails

Emergency-response-backend under Docker Toolbox for Windows on Windows 10:

... 

---> d1dece959fab
Removing intermediate container 446d7dae0532
Step 14/14 : COPY . /code/
 ---> d12ccbed9557
Removing intermediate container 27809cb3988a
Successfully built d12ccbed9557
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating emergencyresponsebackend_web_1
Attaching to emergencyresponsebackend_web_1
web_1  | standard_init_linux.go:178: exec user process caused "no such file or directory"
emergencyresponsebackend_web_1 exited with code 1

Emergency_response repo on Mac: succeeds/fails (but for an application-specific reason)

Emergency-response-backend under Docker Toolbox for Mac on OS X 10.11:

Step 14/14 : COPY . /code/
 ---> bc4a1bd8e372
Removing intermediate container e9b079ea31da
Successfully built bc4a1bd8e372
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating emergencyresponsebackend_web_1
Attaching to emergencyresponsebackend_web_1
web_1 | Traceback (most recent call last):
web_1 | File "manage.py", line 10, in <module>
web_1 | execute_from_command_line(sys.argv)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
web_1 | utility.execute()
web_1 | File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 316, in execute
web_1 | settings.INSTALLED_APPS
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 53, in __getattr__
web_1 | self._setup(name)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 41, in _setup
web_1 | self._wrapped = Settings(settings_module)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 97, in __init__
web_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
web_1 | File "/usr/local/lib/python3.4/importlib/__init__.py", line 109, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
web_1 | File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
web_1 | File "<frozen importlib._bootstrap>", line 1129, in _exec
web_1 | File "<frozen importlib._bootstrap>", line 1471, in exec_module
web_1 | File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
web_1 | File "/code/emergency_response_api/settings.py", line 16, in <module>
web_1 | from . import project_config
web_1 | ImportError: cannot import name 'project_config'
web_1 | Traceback (most recent call last):
web_1 | File "manage.py", line 10, in <module>
web_1 | execute_from_command_line(sys.argv)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
web_1 | utility.execute()
web_1 | File "/usr/local/lib/python3.4/site-packages/django/core/management/__init__.py", line 316, in execute
web_1 | settings.INSTALLED_APPS
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 53, in __getattr__
web_1 | self._setup(name)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 41, in _setup
web_1 | self._wrapped = Settings(settings_module)
web_1 | File "/usr/local/lib/python3.4/site-packages/django/conf/__init__.py", line 97, in __init__
web_1 | mod = importlib.import_module(self.SETTINGS_MODULE)
web_1 | File "/usr/local/lib/python3.4/importlib/__init__.py", line 109, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package, level)
web_1 | File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
web_1 | File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
web_1 | File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
web_1 | File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
web_1 | File "<frozen importlib._bootstrap>", line 1129, in _exec
web_1 | File "<frozen importlib._bootstrap>", line 1471, in exec_module
web_1 | File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
web_1 | File "/code/emergency_response_api/settings.py", line 16, in <module>
web_1 | from . import project_config
web_1 | ImportError: cannot import name 'project_config'
...

(Note: the runtime error in “manage.py” indicates that the command in docker-compose.yml was executed correctly, and that the failure is an app-specific issue in one of the commands run by the docker-entrypoint.sh script that docker-compose.yml specifies.)

Potentially related issues re: Docker running on Windows

Absolute paths change with git bash on Windows

Docker Build Image fails with ‘NOT FOUND’ executing…

Bash script always prints command not found (perhaps a missing ‘execute’ bit, that only affects commands running in a Docker container when running on Docker Toolbox for Windows?)
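Beyond the execute bit, one more possibility worth checking – this is my own speculation, not a confirmed root cause: if git on Windows checked out docker-entrypoint.sh with CRLF line endings, the kernel sees the interpreter as “/bin/bash” followed by a carriage return, a path that doesn’t exist – which produces exactly the “no such file or directory” symptom.  It’s easy to simulate locally:

```shell
# Simulate a script checked out with Windows (CRLF) line endings
printf '#!/bin/bash\r\necho hello\r\n' > entrypoint-crlf.sh
chmod +x entrypoint-crlf.sh
./entrypoint-crlf.sh || echo "exec failed"   # shebang now names a nonexistent interpreter

# Strip the carriage returns (dos2unix does the same thing) and retry
tr -d '\r' < entrypoint-crlf.sh > entrypoint.sh
chmod +x entrypoint.sh
./entrypoint.sh                              # prints: hello
```

Setting core.autocrlf appropriately in the Windows clone (or adding a .gitattributes rule for *.sh) would be the durable fix, if this turns out to be the cause.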

 

Bug Reports: hoopla + comics

An occasional series of the bugs I attempt to report to vendors of software I enjoy using.

Bug #1: re-borrow, can’t read

I borrow a comics title on hoopla and it eventually expires.  I re-borrow it, but when I try to read it, the app reports “There was an error loading Ex Machina Book Two.”

I tried a half-dozen times to Read it.  I killed the app and restarted it, then tried to Read again – still the same error.  I am unable to find a delete feature in the app, so I cannot delete and re-download the content.

This same error has happened to me twice, with two different comics titles.  I only read comics via hoopla, so I cannot yet report whether this happens for non-comics content.

Repro steps

  • Open Hoopla app on my device, browse to the title Ex Machina Book Two
  • Tap the Borrow button, complete the Downloading phase
  • Tap the Read button – result: content loads fine
  • Wait 21+ days for DRM license to expire
  • Browse to the same title, tap Borrow
    (Note: it takes no time at all to switch to the Read button, which implies it just downloads a fresh DRM license file)
  • Tap the Read button

Expected Result

Book opens, content is readable.

Actual Result

The app reports the error “There was an error loading…”, and the content does not load:

hoopla error re-borrowing comic.png

User Environment

iPad 3, iOS 9.3.5, hoopla app version 4.10.2

Bug #2: cannot re-sort comics

I browse the “Just added to hoopla” section of Comics, and no matter which sorting option I choose, the list of comics appears in the exact same order. Either this is a coincidence, or the sorting feature doesn’t work (at least in this particular scenario).

Repro steps

  • Open the hoopla app on my device, tap the Books tab
  • Tap the Comics selector across the top of the app window, then tap the Genres link at the top-right corner
  • Select the option Just added to hoopla
  • Scroll the resulting comics titles in the default popular view, noting that [at time of writing] three Jughead titles appear before Superman, Betty & Veronica and The Black Hood
  • Tap the new arrivals and/or A-Z view selectors along the top

Expected Result

The sort order of the displayed comics would change under one or both views (especially under the A-Z view, where Jughead titles would be listed after Betty & Veronica). The included titles may or may not change (perhaps some added, some removed in the new arrivals view, if this is meant to show just the most recently-added titles).

Actual Result

The sort order of the displayed comics appears identical to the naked eye.  Note that in the A-Z view, the Jughead comics continue to appear at the top, ahead of the Betty & Veronica comic:

hoopla sort order in A-Z view.png

User Environment

iPad 3, iOS 9.3.5, hoopla app version 4.10.2

Docker container commands: which goes where?

So far in my DevOps class I’ve encountered three separate places where we use parameterized commands to perform some of the Docker setup and runtime execution.

I’ve been having a conceptual crisis of confidence because I don’t definitively understand which kinds of commands go where.  So this is my exercise in deciphering the lay of the land.

Dockerfile, docker-compose.yml, docker-compose run

Here’s what I think I have inferred from what we’ve done in class tutorials:

  • Dockerfile
    • can contain one or more commands prefaced with the RUN directive
    • these run at image build time, inside a temporary intermediate container, and their results are baked into the image
    • presumably you should do as little as necessary here
    • example command: RUN pip install -r requirements.txt
  • docker-compose.yml
    • configures the set of containers that will be built and run together (one or more)
    • command to be run once the container is up and running
    • ?? Only one command can be configured per container ??
    • example command: command: gunicorn fooapi.wsgi:application -b :8000
  • docker-compose run [container_name]
    • commands can be run one time, arbitrarily, outside the build process itself
    • example command: docker-compose run web django-admin.py startproject fooapi .
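To make that division of labor concrete, here’s a minimal sketch – the file contents are invented for illustration (reusing the fooapi names from the examples above), not our actual project files:

```dockerfile
# Dockerfile – build-time steps; each RUN executes during `docker build`
FROM python:3.5
COPY requirements.txt /code/
WORKDIR /code
RUN pip install -r requirements.txt
COPY . /code/
```

```yaml
# docker-compose.yml – run-time configuration for the set of containers
version: '2'
services:
  web:
    build: .
    command: gunicorn fooapi.wsgi:application -b :8000
    ports:
      - "8000:8000"
```

One-off tasks like django-admin.py startproject would then run inside this same image via docker-compose run web …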

My Confusions

  1. If my web application needs to have all the python packages installed as dependencies, why isn’t pip install -r requirements.txt being run once the appropriate container is up and running?
  2. If a RUN command in the Dockerfile executes in a build-time scratch container, why would I need to install python dependencies to be able to create a PostgreSQL container cf. Assignment 3? [Leaving aside the advice I’ve heard that putting databases in containers isn’t generally necessary or advisable]
  3. What is the net effect of running commands inside the “docker-compose run [container_name]” wrapper?  Why couldn’t/shouldn’t I run that command as a RUN command from the Dockerfile, and then copy the resulting files into the /code folder that we’re creating in Assignment 3?
  4. Does docker-compose run run commands inside an already-built container?

As I learn answers to these questions, with any luck I’ll return here to annotate what I’ve learned.

Notes to self: merging my fork with upstream

It’s supposed to be as natural as breathing, right?  See a neat repository on GitHub, decide you want to use the code and make some minor changes to it, right?  So you fork the sucker, commit some changes, maybe push a PR back to the original repo?

Then, you want to keep your repo around – I dunno, maybe it’s for vanity, or maybe you’re continuing to make changes or use the project (and maybe, just maybe, you’ll find yourself wanting to push another PR in the future?).  Or maybe messages like this just bother your OCD:

github-branch-is-xx-commits-behind

Eventually, most developers will run into a situation in which they wish to re-sync their forked version of a project with the updates that have been made in “upstream”.

Should be dead easy, yes?  People are doing this all the time, yes?  Well, crap.  If that’s the case, then I’m an idiot, because I’d tried this a half-dozen times and never before arrived at the beautiful message “This branch is even with…”.  So I figured I’d write it out (talk to the duck) and, in so doing, stumble on the solution.

GitHub help is supposed to help, e.g. Syncing a fork.  Which depends on Configuring a remote for a fork, and which is followed by Pushing to a remote.

Which, for a foreign repo named e.g. “hackers/hackit”, means the following stream of commands (after I’ve forked the repo on GitHub.com and git clone‘d the repo on my local machine):

git remote add upstream git@github.com:hackers/hackit.git
git fetch upstream
git checkout master
git merge upstream/master

That last command will often result in a bunch of conflicts, if you’ve made any changes, e.g.:

git merge upstream/master
Auto-merging package.json
CONFLICT (content): Merge conflict in package.json
Auto-merging README.md
Auto-merging .travis.yml
CONFLICT (content): Merge conflict in .travis.yml
Auto-merging .babelrc
Automatic merge failed; fix conflicts and then commit the result.

At this point I temporarily abandon the command line and dive into my favourite editor (Visual Studio Code with a handful of extensions) to resolve the conflicting files.

Once I’d merged changes from both sources (mine and upstream), it was a simple matter of the usual commands:

git add .
git commit -m "merged changes from upstream"
git push

And the result is…

github-branch-is-xx-commits-ahead

(No it wasn’t quite the “even” paradise, but I’ll take it.)
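To convince myself the core fetch/merge flow does what it promises, here’s a self-contained simulation using local directories in place of GitHub – all paths and names are invented for the demo:

```shell
set -e
rm -rf /tmp/forkdemo && mkdir /tmp/forkdemo && cd /tmp/forkdemo

# Stand up an "upstream" repo with one commit
git init -q upstream
cd upstream
git symbolic-ref HEAD refs/heads/master        # pin the branch name to master
git config user.email demo@example.com
git config user.name demo
echo one > file.txt && git add . && git commit -qm "initial"
cd ..

# Clone it as our "fork", then let upstream move ahead by one commit
git clone -q upstream fork
(cd upstream && echo two >> file.txt && git commit -qam "upstream change")

# The same shape as the GitHub commands above
cd fork
git remote add upstream ../upstream
git fetch -q upstream
git checkout -q master
git merge -q upstream/master                    # fast-forwards: fork is now "even"
```

After the merge, the fork’s master points at the same commit as upstream/master – the local equivalent of “This branch is even with…”.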

Aside

I somehow got myself into a state where I couldn’t get the normal commands to work.  For example, when I ran git push origin master, I got nowhere:

git push origin master
fatal: 'origin' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Or git push:

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Then when I added upstream…:

git remote add upstream git@github.com:hackers/hackit.git

…and ran git remote -v…:

git remote -v
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)

…it appeared I no longer had a reference to origin. (No idea how that happened, but hopefully these notes will help me not go astray again.)  Adding back the reference to origin seemed the most likely solution, but I didn’t get the kind of results I wanted:

git remote add origin git@github.com:mikethecanuck/hackit.git
git remote -v
origin git@github.com:mikethecanuck/hackit.git (fetch)
origin git@github.com:mikethecanuck/hackit.git (push)
upstream git@github.com:hackers/hackit.git (fetch)
upstream git@github.com:hackers/hackit.git (push)
git push origin master
To github.com:mikethecanuck/hackit.git
 ! [rejected]        master -> master (fetch first)
error: failed to push some refs to 'git@github.com:mikethecanuck/hackit.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

 

And when I pushed with no params, I went right back to the starting place:

git push
ERROR: Permission to hackers/hackit.git denied to MikeTheCanuck.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

(I finally rm -rf‘d my forked repo, cloned it again, and started over – that’s how I got to the first part of the article.)

AWS wrangling, round 4: automating the setup of a static website

This time around, let’s automate the steps I manually performed last time (as much as the S3 APIs allow CLI interaction, which is sadly not much).

Install the AWS CLI (command line interface)

Follow the instructions for your operating system from here:

https://aws.amazon.com/cli/

Run the following commands

Create a new bucket (CLI)

Note: “mikecanuckbucket” is the bucket I’m creating.

aws s3 mb s3://mikecanuckbucket

Which returns the following output:

make_bucket: mikecanuckbucket

Configure the static website (CLI)

aws s3 website s3://mikecanuckbucket --index-document index.html

Which returns nothing if it succeeds.

Set bucket-level permissions (Console)

You have to use the console for this one (or at least, I couldn’t find a CLI command to interact with bucket-level permissions).  This time around I tried the bucket policy approach, and used this policy example as my template:

  • Load the AWS S3 console
  • select your bucket
  • click the Properties button
  • expand the Permissions section
  • click the Add bucket policy button
  • Paste the bucket policy you’ve constructed – in my example, I simply substituted “mikecanuckbucket” for “examplebucket” from the example
  • Click Save

Note: the bucket policy is immediately applied.
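(An aside for future-me: the lower-level s3api command family does appear to expose bucket policies from the CLI, so something like this – with the policy JSON from the template saved as policy.json – may avoid the console step entirely.  I haven’t verified it end-to-end:)

```shell
# Apply the bucket policy from a local file – same JSON as pasted into the console
aws s3api put-bucket-policy --bucket mikecanuckbucket --policy file://policy.json
```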

Upload a web page (CLI)

Note: “helloworld.html” is the example file I’m uploading to my bucket.

aws s3 cp helloworld.html s3://mikecanuckbucket

Set file-level permissions (Console)

Hah!  If you used the bucket policy like me, this won’t actually be necessary.  One convenient advantage of this inconvenience.

If you didn’t, then you’ll have to configure file-level permissions like I did in the last post.
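There’s also a middle road if you skip the bucket policy: aws s3 cp accepts an --acl flag, so the object can be made publicly readable at upload time instead of in the Console. A sketch with my bucket and file names (substitute your own; the aws call is guarded and its errors ignored, since this is just an illustration):

```shell
# Make a trivial test page if one doesn't already exist
[ -f helloworld.html ] || \
  echo '<!DOCTYPE html><title>Hello, World!</title><p>You have successfully uploaded a static web page to AWS S3</p>' > helloworld.html

# Upload and mark the object public-read in one step
if command -v aws >/dev/null 2>&1; then
  aws s3 cp helloworld.html s3://mikecanuckbucket --acl public-read \
    || echo "upload failed – check credentials and bucket name"
else
  echo "aws CLI not installed; see https://aws.amazon.com/cli/"
fi
```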

Retrieve the page link (Console)

  • Select the new file in your new bucket
  • Click the Properties button
  • Click (or copy) the URL listed as Link
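This Console step can be sidestepped too: S3 website endpoints follow a predictable pattern, so the link can be derived rather than copied. A sketch (bucket, region and file names are mine – substitute yours – and note that some regions use s3-website.&lt;region&gt; with a dot rather than a dash):

```shell
BUCKET=mikecanuckbucket   # substitute your bucket name
REGION=us-west-2          # substitute your bucket's region
FILE=helloworld.html      # substitute your uploaded file

# Typical website-endpoint pattern: http://<bucket>.s3-website-<region>.amazonaws.com
URL="http://${BUCKET}.s3-website-${REGION}.amazonaws.com/${FILE}"
echo "$URL"
```

With the values above, this prints http://mikecanuckbucket.s3-website-us-west-2.amazonaws.com/helloworld.html.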

Comments

I’m disappointed that the S3 team hasn’t made it possible to eliminate the manual (Console) steps in an operation like this.  I must be missing something – or perhaps the aws-shell is where incomplete features will land in the future?

I happened to notice a GitHub issue asking whether aws-shell or SAWS was the future, and according to the most active developer on both, it appears to be aws-shell.  Presumably the project is being actively developed, then (though after a year and a half it’s still labelled “The aws-shell is currently in developer preview” – and the vast majority of commits are from a year ago).  However, it’s sad to see some pretty significant issues remain open for over a year – that sounds like a project on life support, not one that’s fully staffed.  Or maybe the AWS APIs are still just that incomplete?

It’s also discouraging to see it mention that “The aws-shell accepts the same commands as the AWS CLI, except you don’t need to provide the aws prefix”.  This implies that it’s simply relying on the AWS CLI to implement additional commands, rather than implementing them itself.  And certainly my cursory inspection of the autocomplete commands bears this out (no new options in the base s3 command palette).  [That’s understandable – it’s painful for a development organization to duplicate functionality that’s ostensibly already supported by another branch of the org.]

Still, it’s disappointing that at this point in AWS’s maturity lifecycle there are this many discontinuities creating such a disjoint experience.  My experience with the official tutorials is similar – some are great, some are frankly botched efforts, and I would’ve expected better from The Leader in Cloud.  [Hell, for that matter, some capabilities such as CodeDeploy are still a botched mess, at least judging by how difficult it was for me to succeed EVEN WITH a set of interdependent – admittedly poorly-written – tutorials.]


AWS wrangling, round 3: simplest possible, manually-configured static website

Our DevOps instructor Dan asked us to host a static HelloWorld page in S3.  After last week’s over-scoped tutorial, I started digging around in the properties of an S3 bucket, and discovered it was staring me in the face (if only I’d stared back).

Somehow I’d foolishly gotten into the groove of AWS tutorials and assumed that if I found official content, it must be the most well-constructed approach for the minimal needs of most of their audience – so I didn’t question the complexity and poorly-constructed lessons until long after I was committed to seeing it through.  [Thankfully I was able to figure out at least one successful path through those vagaries, or else I’d probably still be stubborn-through-seething, trying to debug the black box that is IAM policy attachments.]

Starting Small: S3 bucket and nothing else

Create Bucket

  • In the S3 console, click Create Bucket, enter a globally-unique bucket name, choose a region, and click Create

Modify the bucket-level Permissions

  • Select the new bucket and click Properties
  • Expand the Permissions section and click Add more permissions
  • In the Grantee selector, choose Everyone
  • Check the List checkbox
  • Click Save

Enable static website hosting

  • In the Bucket Properties, expand the Static Website Hosting section
  • Select Enable website hosting
  • In the Index Document textbox, enter your favoured homepage name (e.g. index.html)
  • Click Save

Upload your web content

  • From the Actions button (at the top-left of the page), select Upload
  • Click Add files, select a simple HTML file, and click Start Upload
    • If you don’t have a suitable HTML file, copy the following to a text editor on your computer and save it (e.g. as helloworld.html)
      <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
      <html>
      <head>
        <title>Hello, World!</title>
        <style>
        body {
          color: #ffffff;
          background-color: #0188cc;
          font-family: Arial, sans-serif;
          font-size: 14px;
        }
        </style>
      </head>
      <body>
        <h1>Hello, World!</h1>
        <p>You have successfully uploaded a static web page to AWS S3</p>
      </body>
      </html>

Modify the content-level Permissions

  • Select the newly-uploaded file, then click Properties
  • Expand the Permissions section and click Add more permissions
  • In the Grantee selector, choose Everyone
  • Check the Open/Download checkbox and click Save

Now, to confirm that your web page is available to web users, find the Link in the file’s Properties and click it – here’s my test file:

[Screenshot 2017-01-15 10.26.39.png]

If you’ve done everything correctly, you should see something like this:

[Screenshot 2017-01-15 10.31.01.png]

If one or more of the Permissions aren’t correct, you’ll see this (or at least, that’s what I’m getting in Chrome):

[Screenshot 2017-01-15 10.29.47.png]