AWS wrangling, round 4: automating the setup of a static website

This time around, let’s automate the steps I manually performed last time (as much as the S3 APIs allow CLI interaction, which is sadly not much).

Install the AWS CLI (command line interface)

Follow the instructions for your operating system from here:

https://aws.amazon.com/cli/

Run the following commands

Create a new bucket (CLI)

Note: “mikecanuckbucket” is the bucket I’m creating.

aws s3 mb s3://mikecanuckbucket

Which returns the following output:

make_bucket: mikecanuckbucket
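
To sanity-check the result, listing your buckets should now include the new one:

aws s3 ls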

Configure the static website (CLI)

aws s3 website s3://mikecanuckbucket --index-document index.html

Which returns nothing if it succeeds.
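
If you also want a custom error page, the same command accepts an --error-document flag – e.g. (assuming you’ve created an error.html):

aws s3 website s3://mikecanuckbucket --index-document index.html --error-document error.html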

Set bucket-level permissions (Console)

You have to use the console for this one (or at least, I couldn’t find a CLI command to interact with bucket-level permissions).  This time around I tried the bucket policy approach, and used this policy example as my template:

  • Load the AWS S3 console
  • select your bucket
  • click the Properties button
  • expand the Permissions section
  • click the Add bucket policy button
  • Paste the bucket policy you’ve constructed – in my example, I simply substituted “mikecanuckbucket” for “examplebucket” from the example
  • Click Save

Note: the bucket policy is immediately applied.
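
Postscript: I’ve since spotted the lower-level s3api command set, which does appear to expose bucket policies – here’s a sketch of the equivalent CLI step (untested by me, so treat it as a pointer rather than gospel):

aws s3api put-bucket-policy --bucket mikecanuckbucket --policy file://policy.json

…where policy.json contains the same public-read policy you’d paste into the console, e.g.:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mikecanuckbucket/*"
    }
  ]
}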

Upload a web page (CLI)

Note: “helloworld.html” is the example file I’m uploading to my bucket.

aws s3 cp helloworld.html s3://mikecanuckbucket

Set file-level permissions (Console)

Hah!  If you used the bucket policy like me, this won’t actually be necessary.  One convenient advantage of this inconvenience.

If you didn’t, then you’ll have to configure file-level permissions like I did in the last post.
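
Alternatively, the cp command accepts an --acl flag that should handle this at upload time – e.g.:

aws s3 cp helloworld.html s3://mikecanuckbucket --acl public-read

(I haven’t verified this against this exact setup, but the canned public-read ACL is essentially what the console’s Open/Download grant to Everyone amounts to.)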

Retrieve the page link (Console)

  • Select the new file in your new bucket
  • Click the Properties button
  • Click (or copy) the URL listed as Link
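
Note: that Link is the object’s REST URL.  Since website hosting is enabled, the bucket also gets a website endpoint of the form below (assuming the us-west-2 region here):

http://mikecanuckbucket.s3-website-us-west-2.amazonaws.com/helloworld.html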

Comments

I’m disappointed that the S3 team hasn’t made it possible to eliminate the manual (Console) steps in an operation like this.  I must be missing something, or perhaps the AWS shell is where all incomplete features will be added in the future?

I happened to notice a GitHub issue asking whether aws-shell or SAWS was the future, and according to the most active developer on both, it appears to be aws-shell.  Presumably this project is being actively developed then (though after a year and a half, it’s still labelled “The aws-shell is currently in developer preview” – and the vast majority of commits are from a year ago).  However, it’s sad to see some pretty significant issues remain open for over a year – that sounds like a project on life support, not one that’s fully staffed.  Or maybe the AWS APIs are still just that incomplete?

It’s also discouraging to see it mention that “The aws-shell accepts the same commands as the AWS CLI, except you don’t need to provide the aws prefix”.  This implies that it’s simply relying on the AWS CLI to implement additional commands, rather than implementing them itself.  And certainly my cursory inspection of the autocomplete commands bears this out (no new options beyond the base s3 command palette).  [That’s understandable – it’s painful as a development organization to duplicate functionality that’s ostensibly being supported already by one branch of the org.]

Still, it is disappointing to see that at this point in the maturity lifecycle of AWS, there are still this many discontinuities to create such a disjoint experience.  My experience with the official tutorials is similar – some are great, some are frankly terrible botched efforts, and I would’ve expected better from The Leader in Cloud.  [Hell, for that matter, some capabilities themselves such as CodeDeploy are still a botched mess, at least as far as how difficult it was for me to succeed EVEN WITH a set of interdependent – admittedly poorly-written – tutorials.]

 

AWS wrangling, round 3: simplest possible, manually-configured static website

Our DevOps instructor Dan asked us to host a static HelloWorld page in S3.  After last week’s over-scoped tutorial, I started digging around in the properties of an S3 bucket, and discovered it was staring me in the face (if only I’d stared back).

Somehow I’d foolishly gotten into the groove of AWS tutorials and assumed that if I found official content, it must be the most well-constructed approach based on the minimal needs of most of their audience – so I didn’t question the complexity and poorly-constructed lessons until long after I was committed to seeing it through.  [Thankfully I was able to figure out at least one successful path through those vagaries, or else I’d probably still be stubborn-through-seething and trying to debug the black box that is IAM policy attachments.]

Starting Small: S3 bucket and nothing else

Create Bucket

Modify the bucket-level Permissions

  • Select the new bucket and click Properties
  • Expand the Permissions section, click Add more permissions
  • In the Grantee selector, choose Everyone
  • check the List checkbox
  • click Save

Enable static website hosting

  • In the Bucket Properties, expand the Static Website Hosting section
  • Select Enable website hosting
  • In the Index Document textbox, enter your favoured homepage name (e.g. index.html)
  • click Save

Upload your web content

  • From the Actions button (on the top-left of the page), select Upload
  • Click Add files, select a simple HTML file, and click Start Upload
    • If you don’t have a suitable HTML file, copy the following to a text editor on your computer and save it (e.g. as helloworld.html)
      <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
      <html>
      <head>
       <title>Hello, World!</title>
       <style>
       body {
       color: #ffffff;
       background-color: #0188cc;
       font-family: Arial, sans-serif;
       font-size: 14px;
       }
       </style>
      </head>
      <body>
       <h1>Hello, World!</h1>
       <p>You have successfully uploaded a static web page to AWS S3</p>
      </body>
      </html>

Modify the content-level Permissions

  • Select the newly-uploaded file, then click Properties
  • expand the Permissions section and click Add more permissions
  • In the Grantee selector, choose Everyone
  • check the Open/Download checkbox and click Save

Now to confirm that your web page is available to web users, find the Link in the Properties for the file and click it – here’s my test file:

[screenshot: the file’s Link URL in the S3 console]

If you’ve done everything correctly, you should see something like this:

[screenshot: the rendered Hello, World! page]

If one or more of the Permissions aren’t correct, you’ll see this (or at least, that’s what I’m getting in Chrome):

[screenshot: the access-denied error page]

 

AWS Tutorial wrangling, effort 2: HelloWorld via CodeDeploy

I’m taking a crash course in DevOps this winter, and our instructor assigned us a trivial task: get Hello World running in S3.

I found this tutorial (tantalizingly named “Deploy a Hello World Application with AWS CodeDeploy (Windows Server)”), figured it looked close enough (and if not, it’d keep me limber and help me narrow in on what I *do* need) so I foolishly dove right in.

TL;DR I found the tutorial damnably short on explicit clarity – lots of references to other tutorials, and plenty of incomplete or vague instructions.  It seems this was designed by someone who’s already overly familiar with AWS and didn’t realize the kinds of ambiguities they’d left behind.

I got myself all the way to Step 3 and was faced with this error – little did I know this was just the first of many IAM mysteries to solve:

Mac4Mike:aws mike$ aws iam attach-role-policy --role-name CodeDeployServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
An error occurred (AccessDenied) when calling the AttachRolePolicy operation: User: arn:aws:iam::720781686731:user/Mike is not authorized to perform: iam:AttachRolePolicy on resource: role CodeDeployServiceRole

Back the truck up, Mike

Hold on, what steps preceded this stumble?

CodeDeploy – Getting Started

Well, first I pursued the CodeDeploy Getting Started path:

  • Provision an IAM user
    • It wasn’t clear from the referring tutorials, so I learned by trial and error to create a user with CLI permissions (Access/Secret key authN), not Console permissions
  • Assigning them the specified policies (to enable CodeDeploy and CloudFormation)
  • Creating the specified CodeDeployServiceRole service role
    • The actual problem arose here, where I ran the command as specified in the guide

I tried this with different combinations of user context (Mike, who has all access to EC2, and Mike-CodeDeploy-cli, who has all the policies assigned in Step 2 of Getting Started) AND the --policy-arn parameter (both the Getting Started string and the one dumped out by the aws iam create-role command (arn:aws:iam::720781686731:role/CodeDeployServiceRole)).

And literally, searching on this error and variants of it, there appear to be no other people who’ve ever written about encountering this.  THAT’s a new one on me.  I’m not usually a trailblazer (even of the “how did he fuck this up *that* badly?” kind of trailblazing…)

OK, so then forget it – if the CLI + tutorial can’t be conquered, let’s try the Console-based tutorial steps.   [Note: in both places, they state it’s important that you “Make sure you are signed in to the AWS Management Console with the same account information you used in Getting Started.”  Why?  And what “account information” do they mean – the user with which you’re logged into the web console, or the user credentials you provisioned?]

I was able to edit the just-created CodeDeployServiceRole and confirm all the configurations they specified *except* where they state (in step 4), “On the Select Role Type page, with AWS Service Roles selected, next to AWS CodeDeploy, choose Select.”  Not sure what that means (which should’ve pulled me in the direction of “delete and recreate this role”), but I tried it out as-is anyway.  [The only change I had to make so far was to attach AWSCodeDeployRole.]

Reading up on the AttachRolePolicy action, it appears that --policy-arn refers to the permissions you wish to attach to the targeted role, and --role-name refers to the role getting additional permissions.  That would mean I’m definitely meant to attach the “arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole” policy to CodeDeployServiceRole.  (Still doesn’t explain why I lack the AttachRolePolicy permission in either of the IAM Users I’ve defined, nor how to add that permission.)
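
Restated as the command that reading implies (identical to the guide’s, just with the direction of attachment now clear to me):

aws iam attach-role-policy --role-name CodeDeployServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole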

Instead, with no help from any online docs or discussions, I discovered that it’s possible to assign the individual permission by starting with this interface: https://console.aws.amazon.com/iam/home?#/policies:

  • Click Create Policy
  • Select Policy Generator
  • AWS Service: Select AWS Identity and Access Management
  • Actions: Attach Role Policy
  • ARN: I tried constructing two policies with (arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole) and (arn:aws:iam::720781686731:role/CodeDeployServiceRole)
    • The first returned “AccessDenied” and with the second, the command I’m fighting with returned “InvalidInput”

Then I went to the Users console:

  • select the user of interest (Mike-CodeDeploy-cli in my journey)
  • click Add Permissions
  • select “Attach existing policies directly”
  • select the new Policy I just created (where type = Customer managed, so it’s relatively easy to spot among all the other “AWS managed” policies)

As I mentioned, the second construction returned this error to the command:

An error occurred (InvalidInput) when calling the AttachRolePolicy operation: ARN arn:aws:iam::720781686731:role/CodeDeployServiceRole is not valid.

Nope, wait, dammit – the ARN is the *policy*, not the *object* to which the policy grants permission…
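
For the record, here’s a sketch of what that generated policy should have looked like with the right ARN in place – the Action is the permission being granted, and the Resource is the role being modified (account ID as in my examples):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:AttachRolePolicy",
      "Resource": "arn:aws:iam::720781686731:role/CodeDeployServiceRole"
    }
  ]
}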

Here’s where I started ranting to myself…

Tried it a couple more times, still getting AccessDenied, so screw it.  [At this point I conclude AWS IAM is an immature dog’s breakfast – even with an explicit map you still end up tied in knots.]

So I just went to the Role CodeDeployServiceRole and attached both policies (I’m pretty sure I only need to attach the policy AWSCodeDeployRole but I’m adding the custom AttachRolePolicy-CodeDeployRole because f it I just need to get through this trivial exercise).

[Would it kill the folks at AWS to draw a friggin picture of how all these capabilities with their overlapping terminology are related?  Cause I don’t know about you, but I am at the end of my rope trying to keep these friggin things straight.  Instead, they have a sprawling set of fragmented docs and tutorials, which it’s clear they’ve never usability-tested end-to-end, and for which they assume way too much existing knowledge & context.]

I completed the rest of the InstanceProfile creation steps (though I had to create a second one near the end, because the console complained I was trying to create one that already existed).

CodeDeploy – create a Windows instance the “easy” way

Then of course we’re on to the fun of creating a Windows Instance in AWS.  Brave as I am, I tried it with CloudFormation.

I grabbed the CLI command and substituted the following two Parameter values in the command:

  • --template-url:

    http://s3-us-west-2.amazonaws.com/aws-codedeploy-us-west-2/templates/latest/CodeDeploy_SampleCF_Template.json (for the us-west-2 region I am closest to)

  • ParameterKey=KeyPairName: MBP-2009 (for the .pem file I created a while back for use in SSH-managing all my AWS operations)
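
Assembled, the command looked something like this (a sketch – the guide specifies several more ParameterKey entries that I’m eliding here, and the --capabilities flag is what authorizes CloudFormation to create IAM resources):

aws cloudformation create-stack \
  --stack-name CodeDeployDemoStack \
  --template-url http://s3-us-west-2.amazonaws.com/aws-codedeploy-us-west-2/templates/latest/CodeDeploy_SampleCF_Template.json \
  --parameters ParameterKey=KeyPairName,ParameterValue=MBP-2009 \
  --capabilities CAPABILITY_IAM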

The first time I ran the command it complained:

You must specify a region. You can also configure your region by running "aws configure".

So I re-ran aws configure and filled in “us-west-2” when it prompted for “Default region name”.

Second time around, it spat out:

{
    "StackId": "arn:aws:cloudformation:us-west-2:720781686731:stack/CodeDeployDemoStack/d1c817d0-d93e-11e6-8ee1-503f20f2ade6"
}

They tell us not to proceed until this command reports “CREATE_COMPLETE”, but wow does it take a while to stop reporting “None”:

aws cloudformation describe-stacks --stack-name CodeDeployDemoStack --query "Stacks[0].StackStatus" --output text

(In hindsight, I’d misspelled the key as “StackStats” – querying a key that doesn’t exist silently returns None, which explains at least part of that wait.)
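
Worth noting: the CLI also ships a blocking wait subcommand that polls for exactly this state, which would spare the manual re-running:

aws cloudformation wait stack-create-complete --stack-name CodeDeployDemoStack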

When I went looking at the cloudformation console (blame it on lack of patience), it reported my instance(s)’ status was “ROLLBACK_COMPLETE”.  Now, I’m no AWS expert, but that doesn’t sound like a successful install to me.  I headed to the details, and of course something else went horribly wrong:

  • CREATE_FAILED – AWS::EC2::Instance – API: ec2:RunInstances Not authorized for images: [ami-7f634e4f]

CodeDeploy – create a Windows instance the “hard” way

So let’s forget the “easy” path of CloudFormation.  Try the old-fashioned way of creating a Windows instance, and see if I can make it through this:

  • Deciding among Windows Server AMIs is a real blast – over 600 of them!
  • I narrowed it down to “Windows_Server-2016-English-Full-Base-2016.12.24”
    • Nano is only available to Software Assurance customers
    • Enterprise is way more than I’d need to serve a web page
    • Full gives you the Windows GUI to manage the server, whereas Core only includes PowerShell (and may not even allow RDP access)
    • I wanted to see what the Manage Server GUI looks like these days, otherwise I probably would’ve tried Core
    • Note: there were three AMIs all prefixed “Windows_Server-2016-English-Full-Base”; I just chose the one with the latest date suffix (assuming it’s slightly more up-to-date with patches)
  • I used the EC2 console to get the Windows password, then installed the Microsoft Remote Desktop client for Mac to enable me to interactively log in to the instance

Next is configuring the S3 bucket:

  • There is some awfully confusing and incomplete documentation here
  • There are apparently two policies to be configured, with helpful sample policies, but it’s unclear where to go to attach them, or what steps to take to make sure this occurs
  • It’s like the author has already done this a hundred times and knows all the steps by heart, but has forgotten that as part of a tutorial, the intended audience are people like me who have little or no familiarity with the byzantine interfaces of AWS to figure out where to attach these policies [or any of the other hundred steps I’ve been through over the last few weeks]
  • I *think* I found where to attach the first policy (giving permission to the Amazon S3 Bucket) – I attached this policy template (substituting both the AWS account ID [111122223333] and bucket name [codedeploydemobucket] for the ones I’m using):
    { "Statement": [ { "Action": ["s3:PutObject"], "Effect": "Allow", "Resource": "arn:aws:s3:::codedeploydemobucket/*", "Principal": { "AWS": [ "111122223333" ] } } ] }
  • I also decided to attach the second recommended policy to the same bucket as another bucket policy:
    { "Statement": [ { "Action": ["s3:Get*", "s3:List*"], "Effect": "Allow", "Resource": "arn:aws:s3:::codedeploydemobucket/*", "Principal": { "AWS": [ "arn:aws:iam::80398EXAMPLE:role/CodeDeployDemo" ] } } ] }
  • Where did I finally attach them?  I went to the S3 console, clicked on the bucket I’m going to use (called “hacku-devops-testing”), selected the Properties button, expanded the Permissions section, and clicked the Add bucket policy button the first time.  The second time, since it would only allow me to edit the bucket policy, I tried Add more permissions – but that didn’t work, so I edited the damned bucket policy by hand, appending the second policy as another item in the Statement array.  After a couple of tries, I found a combination that the AWS bucket policy editor would accept, so I’m praying this is the intended combination that will allow this seductive tutorial to complete:
    {
      "Version": "2008-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::720781686731:root"
          },
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::hacku-devops-testing/*"
        },
        {
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::hacku-devops-testing/*",
          "Principal": {
            "AWS": [
              "arn:aws:iam::720781686731:role/CodeDeployDemo-EC2-Instance-Profile"
            ]
          }
        }
      ]
    }

CodeDeploy – actually deploying code

I followed the remaining commands (closely – gotta watch every parameter and fill in the correct details, ugh).  But thankfully this was the trivial part.  [I guess they got the name “CodeDeploy” right – it’s far more attractive than “CodeDeployOnceYouFoundTheLostArkOfTheCovenantToDecipherIAMIntricacies”.]

Result

Success!  Browsing to the public DNS of the EC2 instance showed me the Hello World page I’ve been trying to muster for the past three days!

Conclusion

This tutorial works as a demonstration of how to marshal a number of contributing parts of the AWS stack: CodeDeploy (whose “ease of deployment” benefits I can’t yet appreciate, considering how laborious and incomplete/error-prone this tutorial was), IAM (users, roles, groups, policies), S3, EC2 and AMI.

However, as a gentle introduction to a quick way to get some static HTML on an AWS endpoint, this is a terrible failure.  I attacked this over a span of three days.  Most of my challenges were in deciphering the mysteries of IAM across the various layers of AWS.

In a previous life I was a security infrastructure consultant, employed by Microsoft to help decipher and troubleshoot complex interoperable security infrastructures.  I prided myself on being able to take this down to the lowest levels and figure out *exactly* what’s going wrong.  And while I was able to find *a* working pathway through this maze, my experience here and my previous expertise tell me that AWS has a long way to go to make it easy for AWS customers to marshal all their resources for secure-by-default application deployments.  [Hell, I didn’t even try to tighten the default policies I encountered to limit which remote endpoints or roles would have access to the HTTP and SSH endpoints on my EC2 instance.  Maybe that’s a lesson for next time?]

 

This time, success: Flask-on-AWS tutorial (with advanced use of virtualenv)

Last time I tried this, I ended up semi-deliberately choosing to use Python 3 for a tutorial that (I didn’t realize quickly enough) was built around Python 2.

After cleaning up my experiment I remembered that the default python on my MacBook was still python 2.7.10, which gave me the idea I might be able to re-run that tutorial with all-Python 2 dependencies.  Or so it seemed.

Strangely, the first step both went better and no better than last time:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws
Using base prefix '/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3.5
Also creating executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.

Yes, it didn’t throw any errors; no, it didn’t use the base Python 2 I’d hoped for.  Somehow the fact that I’ve installed Python 3 on my system is still getting picked up by virtualenv, so I needed to dig further into how virtualenv can be used to truly insulate from Python 3.

Found a decent article here that gave me hope, and even though they punted to using the virtualenvwrapper scripts, it still clued me in to the virtualenv parameter “-p”, so this seemed to work like a charm:

Mac4Mike:flask-aws-tutorial mike$ virtualenv flask-aws -p /usr/bin/python
Running virtualenv with interpreter /usr/bin/python
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Installing setuptools, pip, wheel...done.

This time?  The requirements install worked like a charm:

Successfully installed Flask-0.10.1 Flask-SQLAlchemy-2.0 Flask-WTF-0.10.3 Jinja2-2.7.3 MarkupSafe-0.23 PyMySQL-0.6.3 SQLAlchemy-0.9.8 WTForms-2.0.1 Werkzeug-0.9.6 argparse-1.2.1 boto-2.28.0 itsdangerous-0.24 newrelic-2.74.0.54

Then (since I still had all the config in place), I ran pip install awsebcli and skipped all the way to the bottom of the tutorial and tried eb deploy:

INFO: Deploying new version to instance(s).                         
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-01b45c4d01c070555] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
  File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1. 
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-01b45c4d01c070555'. Aborting the operation.
ERROR: Failed to deploy application.

This kept barfing over and over until I remembered that the target environment was still configured for Python 3.4.  Fortunately or not, you can’t change major versions of the platform – so back to eb init I go (with the -i parameter to re-initialize).

This time around?  The command eb deploy worked like a charm.

Lesson: be *very* explicit about your Python versions when messing with someone else’s code.  [Duh.]

Getting to know AWS with Python (the free way)

I’ve run through all the easy AWS tutorials and was looking to roll my sleeves up and take it one level deeper, so (given I’ve recently completed a Python class, and given I’m trying to stay on a budget) I hunted around and found some ideas for putting a sample Flask app up on AWS Elastic Beanstalk for as little cost as possible.  Here’s how I wove them together (into an almost-functional cloud solution, first time around).

Set aside Anaconda, install Python3

First step in the tutorial I found was building the Python app locally (then eventually “freezing” it and uploading to AWS EB).  [Note there’s a similar tutorial from AWS here, which is good for comparison.]  [Also note: after I mostly finished my work, I confirmed that the tutorial I’m using was written for Python 2.7, and yet I’d blundered into it with Python 3.4.  Not for the faint of heart, nor for those who like non-Degraded health status.]

Which starts with using virtualenv to build up the app profile.

But for me, virtualenv threw an error after I installed it (with pip install virtualenv):

ERROR: virtualenv is not compatible with this system or executable 

Okay…so Ryan Wilcox gave us a clue that you might be using the wrong python.  Running which python told me I’m directed to the anaconda version, so I commented that PATH modification out of .bash_profile and installed python3 using homebrew (which I’d previously installed on my MacBook).

Setup the Flask environment: debug virtualenv first

Surprising no one but myself, by removing anaconda and installing python3, I lost access not only to virtualenv but also to pip, so I guess all that was wrapped in the anaconda environment.  Running brew install pip reports the following:

Updating Homebrew...
Error: No available formula with the name "pip" 
Homebrew provides pip via: `brew install python`. However you will then
have two Pythons installed on your Mac, so alternatively you can install
pip via the instructions at:
  https://pip.readthedocs.io/en/stable/installing/

The only option from that article seems to be running get-pip.py, so I ran python3 get-pip.py (in case the script installed a different pip based on which version of python was running the script).  Running pip --version returned this, so I felt safe enough to proceed:

pip 9.0.1 from /usr/local/lib/python3.5/site-packages (python 3.5)

Ran pip install virtualenv, now we’re back where I started (but without the anaconda crutch that I don’t entirely understand, and didn’t entirely trust to be compatible with these AWS EB tutorials).

Running virtualenv flask-aws from within the local clone of the tutorial repo throws this:

Using base prefix '/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5'
Overwriting /Users/mike/code/flask-aws-tutorial/flask-aws/lib/python3.5/orig-prefix.txt with new content
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3.5
Not overwriting existing python script /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python (you must use /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3.5)
Traceback (most recent call last):
  File "/usr/local/bin/virtualenv", line 11, in 
    sys.exit(main())
  File "/usr/local/lib/python3.5/site-packages/virtualenv.py", line 713, in main
    symlink=options.symlink)
  File "/usr/local/lib/python3.5/site-packages/virtualenv.py", line 925, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
  File "/usr/local/lib/python3.5/site-packages/virtualenv.py", line 1370, in install_python
    os.symlink(py_executable_base, full_pth)
FileExistsError: [Errno 17] File exists: 'python3.5' -> '/Users/mike/code/flask-aws-tutorial/flask-aws/bin/python3'

Hmm, is this what virtualenv does?  OK, if that’s true why is something intended to insulate from external dependencies seemingly getting borked by external dependencies?

Upon closer examination, it appears that what the virtualenv script tried to do the first time was generate a symlink to a python3.5 interpreter, and got tripped when it found a symlink already there.  However, the second time I ran the command (hoping that it was self-healing), I got yelled at about “too many symlinks”, and discovered that the targets now form a circular loop:

Mac4Mike:bin mike$ ls -la
total 24
drwxr-xr-x  5 mike  staff  170 Dec 11 12:26 .
drwxr-xr-x  6 mike  staff  204 Dec 11 12:26 ..
lrwxr-xr-x  1 mike  staff    9 Dec 11 12:26 python -> python3.5
lrwxr-xr-x  1 mike  staff    6 Dec 11 11:52 python3 -> python
lrwxr-xr-x  1 mike  staff    6 Dec 11 11:52 python3.5 -> python

If I’m reading the above stack trace correctly, they were trying to create a symlink from python3.5 to some other target.  So all it should require is deleting the python3.5 symlink, yes?  Easy.

Aside: then running virtualenv flask-aws from the correct location (but an old, not-refreshed shell) spills this:

Using base prefix '/Users/mike/anaconda3'
Overwriting /Users/mike/code/flask-aws-tutorial/flask-aws/lib/python3.5/orig-prefix.txt with new content
New python executable in /Users/mike/code/flask-aws-tutorial/flask-aws/bin/python
Traceback (most recent call last):
  File "/Users/mike/anaconda3/bin/virtualenv", line 11, in 
    sys.exit(main())
  File "/Users/mike/anaconda3/lib/python3.5/site-packages/virtualenv.py", line 713, in main
    symlink=options.symlink)
  File "/Users/mike/anaconda3/lib/python3.5/site-packages/virtualenv.py", line 925, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
  File "/Users/mike/anaconda3/lib/python3.5/site-packages/virtualenv.py", line 1387, in install_python
    raise e
  File "/Users/mike/anaconda3/lib/python3.5/site-packages/virtualenv.py", line 1379, in install_python
    stdout=subprocess.PIPE)
  File "/Users/mike/anaconda3/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/Users/mike/anaconda3/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
OSError: [Errno 62] Too many levels of symbolic links

Try again from a shell where anaconda’s been removed from $PATH?  Works fine:

Installing setuptools, pip, wheel...done.

Step 2: Setting up the Flask environment

Installing the requirements (using pip install -r requirements.txt) went *mostly* smooth, but it broke down on either the distribute or itsdangerous dependency:

Collecting distribute==0.6.24 (from -r requirements.txt (line 11))
  Downloading distribute-0.6.24.tar.gz (620kB)
    100% |████████████████████████████████| 624kB 1.1MB/s 
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "", line 1, in 
      File "/private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-6gettd5v/distribute/setuptools/__init__.py", line 2, in 
        from setuptools.extension import Extension, Library
      File "/private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-6gettd5v/distribute/setuptools/extension.py", line 2, in 
        from setuptools.dist import _get_unpatched
      File "/private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-6gettd5v/distribute/setuptools/dist.py", line 103
        except ValueError, e:
                         ^
    SyntaxError: invalid syntax
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-6gettd5v/distribute/

Unfortunately the “/pip-build-6gettd5v/” folder was already deleted by the time I got there to look at its contents.  So I re-ran the script and noticed that all dependencies through to distribute reported as “Using cached distribute-0.6.24.tar.gz”.

Figured I’d just pip install itsdangerous by hand, and if I got into too much trouble I could always uninstall it, right?  Well, that didn’t seem to arrest this error – presumably pip caches all the requirements first and then installs them – so I figured I’d divide the requirements.txt file into two (creating requirements2.txt and stuffing the last three lines there instead) and see if that helped bypass the problem in installing the first 11 requirements.

Nope, that didn’t work, so I also moved the line “distribute==0.6.24” out of requirements.txt into requirements2.txt:

Successfully installed Flask-0.10.1 Flask-SQLAlchemy-2.0 Flask-WTF-0.10.3 Jinja2-2.7.3 MarkupSafe-0.23 PyMySQL-0.6.3 SQLAlchemy-0.9.8 WTForms-2.0.1 Werkzeug-0.9.6 argparse-1.2.1

Yup, that did it, so off to pip install -r requirements2.txt:

Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-5y_r3gg0/distribute/

OK, so then I removed the “distribute==0.6.24” line entirely from requirements2.txt:

Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-5g9i5fpl/wsgiref/

Crap, wsgiref too?  OK, pip install wsgiref==0.1.2 by hand:

Collecting wsgiref==0.1.2
  Using cached wsgiref-0.1.2.zip
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "", line 1, in 
      File "/private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-suqskeut/wsgiref/setup.py", line 5, in 
        import ez_setup
      File "/private/var/folders/dw/yycg4bz1347cx5v_crcjl5580000gn/T/pip-build-suqskeut/wsgiref/ez_setup/__init__.py", line 170
        print "Setuptools version",version,"or greater has been installed."
                                 ^
    SyntaxError: Missing parentheses in call to 'print'

Shit, even worse – the package itself craps out.  Good news is, I know exactly what’s wrong here: print became a function in Python 3, so it requires parentheses.  (See, I *did* learn something from that anti-Zed rant.)

Double crap – wsgiref‘s latest version IS 0.1.2.

Triple-crap – that code’s docs haven’t been touched in forever, and don’t exist in a git repo against which issues can be filed.

Damn, this *really* sucks.  Seems I’ve been trapped in the hell that is “tutorials written for Python2”.  The only way I could get around this error is to edit the source of the cached wsgiref-0.1.2.zip file – do such files get deleted from $TMPDIR when pip exits?  Or are they stuffed in ~/Library/Caches/pip in some obfuscated filename?

Ultimately it didn’t matter – I found that pip install --download ~/ wsgiref==0.1.2 worked to dump the file exactly where I wanted.

And yes, wrapping lines 170 and 171 with () after the print verb (and re-zip’ing the folder contents) allowed me to fully install wsgiref-0.1.2 using pip install --no-index --find-links=~/ wsgiref-0.1.2.zip.
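
For the record, the edit in ez_setup/__init__.py amounted to this kind of change (paraphrasing the line quoted in the traceback above):

# Python 2 syntax, as shipped:
print "Setuptools version", version, "or greater has been installed."

# Python 3-compatible replacement:
print("Setuptools version", version, "or greater has been installed.")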

OK, finally ran pip install boto==2.28.0 and that’s all the dependencies installed.

Next!

Step 3: Create the AWS RDS

While the AWS UI has changed in subtle but UX-improving ways, this part wasn’t hard to follow.  I wasn’t happy about seeing the “all traffic, all sources” security group applied here, but not knowing what my ultimate target configuration might look like, I made the classic blunder of saying to self, “I’ll lock it down later.”

Step 4: Add tables

It wasn’t entirely clear, so here’s the construction of the SQLALCHEMY_DATABASE_URI parameter in config.py:

  • = Master Username you chose on the Specify DB Details page
    e.g. “awsflasktst”
  • = Master Password you chose on the Specify DB Details page
    e.g. “awsflasktstpwd”
  • = Endpoint string from DB Instance page
    e.g. “flask_tst.cxcmcajseabc.us-west-2.rds.amazonaws.com:3306”
  • = Database Name you chose on the Configure Advanced Settings page
    e.g. “awstflasktstdb”

So in my case that line read (well, it didn’t because I’m not telling you my *actual* configuration, but this lets you know how I parsed the tutorial guidance):

SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://awsflasktst:awsflasktstpwd@flask_tst.cxcmcajseabc.us-west-2.rds.amazonaws.com:3306/awstflasktstdb'

Step 4.6 Install NewRelic

Because I’m a former Relic and I’m still enthused about APM technology, I decided to install the newrelic package via pip install newrelic.  A few steps into the Quick Start guide and it was reporting all the…tiny amounts of traffic I was giving myself.

God knows if this’ll survive the pip freeze, but it’s sure worth a shot.  [Spoiler: cloud deployment was ultimately unsuccessful, so it’s moot.]

Step 5: Elastic Beanstalk setup

Installing the CLI was easy enough:

Successfully installed awsebcli-3.8.7 blessed-1.9.5 botocore-1.4.85 cement-2.8.2 colorama-0.3.7 docker-py-1.7.2 dockerpty-0.4.1 docopt-0.6.2 docutils-0.13.1 jmespath-0.9.0 pathspec-0.5.0 python-dateutil-2.6.0 pyyaml-3.12 requests-2.9.1 semantic-version-2.5.0 six-1.10.0 tabulate-0.7.5 wcwidth-0.1.7 websocket-client-0.40.0

Setup of the user account was fairly easy, although again it’s clear that AWS has done a lot of refactoring of their UI flows.  Just read ahead a little in the tutorial and you’ll be able to fill in the blanks.  (Although again, it’s a little disturbing to see a tutorial be this cavalier about the access permissions being granted to a simple web app.)

When we got to the eb init command I ran into a snag:

ERROR: ConfigParseError :: Unable to parse config file: /Users/mike/.aws/config

I checked and I *have* a config file there, and it’s already been populated with Packer credentials info (for a separate experiment I was working on previously).

So what did I do?  I renamed the config and credentials files to set them aside for now, of course!

Like me, you might also see an EB application listed, so I chose “2” not “1”.  No big deal.

There was also an unexpected option to use AWS CodeCommit, but since I’m not even sure I’m keeping this app around, I did not take that choice for now.

(And once again, this tutorial was super-cavalier about ignoring the SSH option that *everyone* always uses, so I just closed my eyes and prayed I never inherit these bad habits…)

“Time to deploy this bad boy.”  Bad boy indeed – Python 2.7, *no* security measures whatsoever.  This boy is VERY bad.

Step 6: Deployment

So there’s this apparent dependency between the EBCLI and your local git repo.  If your changes aren’t committed, then what gets deployed won’t reflect those changes.

However, the part about running git push on the repo seemed unnecessary.  If the local CLI tool checks my repo, it’s only checking the local one, not the remote, so once we run git commit, it shouldn’t matter whether those changes were pushed upstream.

However, knowing that I monkeyed with the requirements.txt file, I thought I’d also git add that one to my commit history:

git add requirements.txt
git commit -m "added newrelic requirement"

After initiating eb create, I was presented with a question never mentioned in the tutorial (another new AWS feature!), and not documented specifically in any of the ELB documentation I reviewed:

Select a load balancer type
1) classic
2) application
(default is 1): 

While my experience in this space tells me “application” sounds like the right choice, I chose the default for sake of convenience/sanity.

Proceeded with eb deploy, and it was looking good until we ran into the problem I was expecting to see:

ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-01b45c4d01c070555] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
  File "/usr/lib64/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1. 
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
WARN: Environment health has transitioned from Pending to Degraded. Command failed on all instances. Initialization completed 7 seconds ago and took 3 minutes.
ERROR: Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.

Lesson

Ultimately unsuccessful, but was it satisfying?  Heck yes, grappling with all the tools and troubleshooting various aspects of the procedures was a wonderful learning experience.  [And as it turned out, was a great primer to lead into a less hand-holding tutorial I found later that I’ll try next.]

Coda

Well, heck.  That was easier than I thought.  Succeeded after another run-through by forcing Python 2.7 in both the local app configuration and a new Elastic Beanstalk app.

HOWTO: run the Puppet Learning VM in AWS EC2

TL;DR Puppet’s Education team have all the scripts necessary to enable you to run the Learning VM in AWS EC2 – clone this repo and have fun!

Problem/Background

Puppet’s Learning VM includes all the software necessary for a new Puppet administrator to try out all its critical capabilities – a server-in-a-box.

In my case, Puppet server (in its default configuration) exceeded the performance envelope of the aging Windows box I had lying around at home where I attempted to run it.  [Spoiler: It’s possible to get it to work – I’ll publish those tips in the near future.]

I wanted to be able to quickly get the Learning VM running in a cloud-hosted environment, and after more research than I probably should’ve invested, I determined that AWS EC2 was the most viable and supported option.

Lucky me – the Puppet team responsible for supporting the Learning VM (as well as the Puppet Enterprise engineering team) have been working with AWS for long enough that they’ve built the code necessary to make this work.

The only (*ahem*) challenge for me was getting all the tools orchestrated to make this work, without the years of hands-on AWS experience the Puppet folks have amassed (I bet they’ve forgotten more than I knew a few weeks ago).

First Step: Build the AMI

AWS EC2 uses the AMI (Amazon Machine Images) format to generate a hosted VM.  Building these out isn’t hard once you have the correct configuration and AWS credentials, but it’s a little mysterious to a first-timer.

[Note: I started out with an already-provisioned AWS EC2 account, so I’d already taken care of some of the initial AWS account setup weeks or months ago.  You may encounter additional hurdles, but if I can conquer it with AWS docs and Stack Overflow, I’m confident you can too.]

Our contact at Puppet mentioned the following:

until we’ve got a supported solution, if you want to roll your own, you can try building an AMI from this repo:

https://github.com/puppetlabs/education-builds

I just tested it and it seemed to work. The command would be `packer build --var-file=templates/learning.json templates/awsbuild.json`

My aborted attempt on WSL

First time in, I simply cloned the repo to my Windows box (running Bash on Ubuntu on Windows 10, aka “WSL”) and ran the above command – which required a copy of packer installed on my system (grab it here).  These instructions were good enough to get packer installed on WSL, but it turns out there’s a bug in WSL that’s addressed only in “Windows Insider” builds of Win10, which I haven’t installed (and don’t want to mess with).  Due to this bug, running packer build in Bash returns this error:

Failed to initialize build ‘amazon-ebs’: error initializing builder ‘amazon-ebs’: Unrecognized remote plugin message: Error starting plugin server: listen unix /tmp/packer-plugin017900973: setsockopt: invalid argument

Progressive Troubleshooting from a Mac

So I abandoned WSL and went over to my Mac.  Installed packer there, ran the packer build and encountered this error:

==> amazon-ebs: Prevalidating AMI Name...
==> amazon-ebs: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
==> amazon-ebs: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Build 'amazon-ebs' errored: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I dug up Hashicorp’s docs for Packer for AWS AMI, which indicated I needed a credentials file under ~/.aws in my profile.

AWS docs outlined what to do there, which meant getting two KEYs from your AWS profile (doc’d here).  Which meant creating a user with appropriate group membership (permissions) – I just chose “Programmatic Access” and granted it the AmazonEC2FullAccess policy.

Generating those keys and inserting them into the credentials file got me past that error.
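
If you haven’t seen it before, ~/.aws/credentials is a tiny INI-style file – something like this (keys redacted, obviously):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx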

Leading to the next:

==> amazon-ebs: No AMI was found matching filters: {
==> amazon-ebs:   ImageIds: ["ami-c7d092f7"]
==> amazon-ebs: }
Build 'amazon-ebs' errored: No AMI was found matching filters: {
  ImageIds: ["ami-c7d092f7"]
}

 

Ah, this turns out to be easy – AWS’ base AMIs are regularly updated – so regularly that (I suspect) by the time the Education team publishes their repo, AWS has inevitably published a new AMI for that OS/apps combo.  No problem: a helpful chap on the Internet documented how to pick the source_ami for insertion into the awsbuild.json for Puppet.

To get a current AMI ID, follow the big button prompts (Continue, Manual Launch) for whichever base image you’re using (I chose CentOS 7 with Updates) until you reach the “Launch on EC2” page, where you’ll see listings (under Launch, AMI IDs) for each region.  Copy the string in the ID column next to the region you use – e.g. I’m working in “US West (Oregon)”, so I used the ID “ami-d2c924b2” shown on the page like so:

[screenshot: the AMI IDs listing on the Launch on EC2 page, showing ami-d2c924b2 for US West (Oregon)]
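
That then becomes a one-key edit in templates/awsbuild.json (assuming the repo’s template uses Packer’s standard key for this – check the file):

"source_ami": "ami-d2c924b2"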

Waiting

Give it time – the full build out took nearly an hour before it was truly up and running. $0.15 is a reasonable price to pay for a nearly-completely-hand-holding experience of building a complex system like this.

One of the near-final messages sent to the console said, “amazon-ebs: Waiting for AMI to become ready…“, which was helpful during a particularly long wait.  The actual final message read:

==> Builds finished. The artifacts of successful builds are:

–> amazon-ebs: AMIs were created:

us-west-2: ami-b73991d7

Launch the Instance

Go through the Launch Instance progression, and when you get to Step 6: Configure Security Group, don’t forget to Add Rules for HTTP and HTTPS (so you can access the PE console). Now personally, I like to set the Source for all Rules to “My IP” – the only trick there is your IP address will probably change (depending on what network you’re logging in from – home, coffee shop, work – and how often your upstream provider changes your IP – e.g. home broadband). I’m willing to deal with the pain of resetting these once in a while, but your mileage (and tolerance for risk that someone else might try to break into your LVM) may vary.

Once the EC2 console indicated that the Instance Status = “running”, I was able to browse to both the http (Quest Guide) and https (Puppet Console) interfaces on my AWS server.

Connecting to the instance with SSH

However, to complete the lessons that require command-line access on the box, I needed to be able to SSH into the server.

Once it was built, the only info I had out of the AWS EC2 Dashboard was this message:

“SSH to the instance and login as ‘centos’ using the key specified at launch. Additional information may be found at : http://wiki.centos.org/Cloud/AWS.”

 

I overlooked the big fat Connect button on the EC2 Dashboard, which hit me with the cluehammer:

Connect to your instance using its Public DNS:

ec2-35-165-74-169.us-west-2.compute.amazonaws.com

Example:

ssh -i “MBP-2009.pem” root@ec2-35-165-74-169.us-west-2.compute.amazonaws.com

 

When I specified the identity_file, all was well in the world.
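
Combining the console’s example with the CentOS note above – i.e. logging in as centos rather than root – the working command looks like this:

ssh -i "MBP-2009.pem" centos@ec2-35-165-74-169.us-west-2.compute.amazonaws.com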

I hope this helps you surmount some of those intimidating hurdles for the first-timer.