HOWTO: run the Puppet Learning VM in AWS EC2

TL;DR Puppet’s Education team has all the scripts necessary to run the Learning VM in AWS EC2 – clone this repo and have fun!


Puppet’s Learning VM includes all the software necessary for a new Puppet administrator to try out all its critical capabilities – a server-in-a-box.

In my case, Puppet server (in its default configuration) exceeded the performance envelope of the aging Windows box I had lying around at home where I attempted to run it.  [Spoiler: It’s possible to get it to work – I’ll publish those tips in the near future.]

I wanted to be able to quickly get the Learning VM running in a cloud-hosted environment, and after more research than I probably should’ve invested, I determined that AWS EC2 was the most viable and supported option.

Lucky me – the Puppet team responsible for supporting the Learning VM (as well as the Puppet Enterprise engineering team) have been working with AWS for long enough that they’ve built the code necessary to make this work.

The only (*ahem*) challenge for me was getting all the tools orchestrated to make this work, without the years of hands-on AWS experience the Puppet folks have amassed (I bet they’ve forgotten more than I knew a few weeks ago).

First Step: Build the AMI

AWS EC2 uses the AMI (Amazon Machine Images) format to generate a hosted VM.  Building these out isn’t hard once you have the correct configuration and AWS credentials, but it’s a little mysterious to a first-timer.

[Note: I started out with an already-provisioned AWS EC2 account, so I’d taken care of some of the initial AWS account setup weeks or months ago.  You may encounter additional hurdles, but if I can conquer it with AWS docs and Stack Overflow, I’m confident you can too.]

Our contact at Puppet mentioned the following:

until we’ve got a supported solution, if you want to roll your own, you can try building an AMI from this repo:

I just tested it and it seemed to work. The command would be `packer build -var-file=templates/learning.json templates/awsbuild.json`
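Putting those pieces together, the end-to-end flow looks roughly like this. The repo URL below is a placeholder (the original link didn’t survive) – substitute the Education team’s actual repo:

```shell
# Install packer first (https://www.packer.io/downloads), then:
git clone <education-team-repo-url> learning-vm-build
cd learning-vm-build

# Build the AMI; -var-file supplies the Learning VM's variables
packer build -var-file=templates/learning.json templates/awsbuild.json
```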

My aborted attempt on WSL

First time in, I simply cloned the repo to my Windows box (running Bash on Ubuntu for Windows 10, aka “WSL”) and ran the above command.  That required a copy of packer installed on my system (grab it here).  These instructions were good enough to get packer installed on WSL, but it turns out there’s a WSL bug that’s addressed only in “Windows Insider” builds of Win10, which I haven’t installed (and don’t want to mess with).  Due to this bug, running packer build in Bash returns this error:

```
Failed to initialize build 'amazon-ebs': error initializing builder 'amazon-ebs': Unrecognized remote plugin message: Error starting plugin server: listen unix /tmp/packer-plugin017900973: setsockopt: invalid argument
```

Progressive Troubleshooting from a Mac

So I abandoned WSL and went over to my Mac.  I installed packer there, ran the packer build, and encountered this error:

```
==> amazon-ebs: Prevalidating AMI Name...
==> amazon-ebs: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
==> amazon-ebs: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Build 'amazon-ebs' errored: Error querying AMI: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
```

I dug up HashiCorp’s docs for Packer’s AWS AMI builder, which indicated I needed a credentials file under ~/.aws in my profile.

AWS docs outlined what to do there, which meant getting two keys (an Access Key ID and a Secret Access Key) from your AWS profile (doc’d here).  That in turn meant creating a user with appropriate group membership (permissions) – I just chose “Programmatic access” and granted it the AmazonEC2FullAccess policy.

Generating those keys and inserting them into the credentials file got me past that error.
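For reference, the credentials file follows the standard AWS format – the key values below are obviously placeholders:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

(Running `aws configure` will generate this file for you if you have the AWS CLI installed.)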

Leading to the next:

```
==> amazon-ebs: No AMI was found matching filters: {
==> amazon-ebs:   ImageIds: ["ami-c7d092f7"]
==> amazon-ebs: }
Build 'amazon-ebs' errored: No AMI was found matching filters: {
  ImageIds: ["ami-c7d092f7"]
}
```

Ah, this turns out to be easy – AWS’s base AMIs are regularly updated – so regularly that (I suspect) by the time the Education team publishes their repo, AWS has inevitably published a new AMI for that OS/apps combo.  No problem: a helpful chap on the Internet documented how to pick the source_ami for insertion into the awsbuild.json for Puppet.

To get a current AMI ID, follow the big button prompts (Continue, Manual Launch) for whichever base image you’re using (I chose CentOS 7 with Updates) until you reach the “Launch on EC2” page.  There, under Launch, AMI IDs, you’ll see listings for each region – copy the string in the ID column next to the region you use.  For example, I’m working in “US West (Oregon)”, so I’m using the ID “ami-d2c924b2” that I see on the page:
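If you’d rather skip the web UI, the AWS CLI can find a current AMI ID too.  This query is a sketch – the owner and name filter are assumptions for the CentOS 7 marketplace image, so adjust them for whatever base image you’re actually using:

```shell
# Find the newest CentOS 7 marketplace AMI in us-west-2 (filters are illustrative)
aws ec2 describe-images \
  --region us-west-2 \
  --owners aws-marketplace \
  --filters "Name=name,Values=CentOS Linux 7*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
```

Paste the resulting ID into the source_ami field of templates/awsbuild.json and re-run the packer build.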



Give it time – the full build-out took nearly an hour before it was truly up and running. $0.15 is a reasonable price to pay for a nearly-completely-hand-holding experience of building a complex system like this.

One of the near-final messages sent to the console said, “amazon-ebs: Waiting for AMI to become ready…”, which was helpful during a particularly long wait.  The actual final message read:

```
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-west-2: ami-b73991d7
```

Launch the Instance

Go through the Launch Instance progression, and when you get to Step 6: Configure Security Group, don’t forget to Add Rules for HTTP and HTTPS (so you can access the PE console). Personally, I like to set the Source for all rules to “My IP” – the only catch is that your IP address will probably change, depending on what network you’re logging in from (home, coffee shop, work) and how often your upstream provider changes your IP (e.g. home broadband). I’m willing to deal with the pain of resetting these once in a while, but your mileage (and tolerance for the risk that someone else might try to break into your LVM) may vary.
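For the record, the same ingress rules can be added from the AWS CLI after launch – the security-group ID and IP address below are placeholders:

```shell
# Allow HTTP and HTTPS only from your current public IP (placeholders shown)
MY_IP=203.0.113.10/32        # substitute your own address
SG_ID=sg-0123456789abcdef0   # the instance's security group

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr "$MY_IP"
```

When your IP changes, revoke the old rules (`aws ec2 revoke-security-group-ingress` takes the same arguments) and re-run these with the new address.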

Once the EC2 console indicated that the Instance Status = “running”, I was able to browse to both the http (Quest Guide) and https (Puppet Console) interfaces on my AWS server.

Connecting to the instance with SSH

However, to complete the lessons that require command-line access on the box, I needed to be able to SSH into the server.

Once it was built, the only info I had out of the AWS EC2 Dashboard was this message:

“SSH to the instance and login as ‘centos’ using the key specified at launch. Additional information may be found at :”


I overlooked the big fat Connect button on the EC2 Dashboard which hit me with the cluehammer:

Connect to your instance using its Public DNS:


```
ssh -i "MBP-2009.pem"
```


When I specified the identity_file, all was well in the world.
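Filled out, the full command looks something like this – the key-file name is mine, and the Public DNS hostname is an example (copy your own from the Connect dialog):

```shell
# -i points at the key pair selected at launch; log in as the 'centos' user
ssh -i "MBP-2009.pem" centos@ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com
```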

I hope this helps you surmount some of those intimidating hurdles for the first-timer.

