Introduction to Salt-cloud (Part 2)

In part 1 of this series, we got a 10,000-foot view of salt-cloud: what it is, why you might want to use it, and the pieces that make it up. Now it’s time to get our hands dirty and boot some VMs.

The salt-cloud Command

Once you’ve installed the appropriate packages for your operating system, you should have the salt-cloud utility available. This CLI app is your interface to salt-cloud. For some examples of what it can do, check out the abridged version of the help output below (from salt-cloud 2014.7.1 on OS X):

jhenry:~ jhenry$ salt-cloud -h
Usage: salt-cloud

Options:
  -c CONFIG_DIR, --config-dir=CONFIG_DIR
                        Pass in an alternative configuration directory.
                        Default: /etc/salt

  Execution Options:
    -p PROFILE, --profile=PROFILE
                        Create an instance using the specified profile.
    -m MAP, --map=MAP   Specify a cloud map file to use for deployment. This
                        option may be used alone, or in conjunction with -Q,
                        -F, -S or -d.
    -d, --destroy       Destroy the specified instance(s).
    -P, --parallel      Build all of the specified instances in parallel.
    -u, --update-bootstrap
                        Update salt-bootstrap to the latest develop version on
                        GitHub.

  Query Options:
    -Q, --query         Execute a query and return some information about the
                        nodes running on configured cloud providers
    -F, --full-query    Execute a query and return all information about the
                        nodes running on configured cloud providers
    --list-providers    Display a list of configured providers.

  Cloud Providers Listings:
    --list-locations=LIST_LOCATIONS
                        Display a list of locations available in configured
                        cloud providers. Pass the cloud provider that
                        available locations are desired on, aka "linode", or
                        pass "all" to list locations for all configured cloud
                        providers
    --list-images=LIST_IMAGES
                        Display a list of images available in configured cloud
                        providers. Pass the cloud provider that available
                        images are desired on, aka "linode", or pass "all" to
                        list images for all configured cloud providers
    --list-sizes=LIST_SIZES
                        Display a list of sizes available in configured cloud
                        providers. Pass the cloud provider that available
                        sizes are desired on, aka "AWS", or pass "all" to list
                        sizes for all configured cloud providers

I’ve trimmed out some poorly documented options to focus on what we’ll use in this post (dumpster diving through the source code to determine what some of those options do may turn into a future article).

As you can see, most salt-cloud actions require either a profile or a map (remember those from part 1?) to execute. Given nothing but a profile (-p) or map (-m), salt-cloud will attempt to boot the named instance(s) in the associated provider’s cloud. Paired with destroy (-d), it will–wait for it–terminate the instance. With -Q or -F, it will query the provider for running instances that match the profile or map and return information about their state. The final set of --list options may be used to view the various regions, images and instance sizes available from a given provider. Handy if you regularly work with several different vendors and can’t keep them all straight.
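
To make that concrete, here are a few invocations you’ll reach for often once a provider is configured (ec2-dealwithit is the provider we’ll define in the next section; output omitted):

salt-cloud --list-providers
salt-cloud -Q
salt-cloud --list-sizes ec2-dealwithit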

Configuring a Provider

Time for some concrete examples. Let’s set up Amazon EC2 as a salt-cloud provider, using a config very much like the one that booted the instance where my blog lives.

ec2-dealwithit:
  id: 'Your IAM ID'
  key: 'Your IAM key'
  keyname: centos
  private_key: ~/.ssh/centos.pem
  securitygroup: www
  provider: ec2
  del_root_vol_on_destroy: True
  del_all_vols_on_destroy: True

I’ve stripped out a couple advanced options, but that’s the gist. It’s plain YAML syntax, like all Salt config. To break it down:

ec2-dealwithit: This is an arbitrary ID that serves as the name of your provider. You’ll reference this in other configs, such as profiles (see next section).

id and key: your AWS credentials, specifically an IAM id:key pair. Pretty self-explanatory.

keyname and private_key: The name of an SSH keypair you have previously configured at EC2, and the local path to the private key for that same keypair. This is what allows salt-cloud to log into your freshly booted instance and perform some bootstrapping.

securitygroup: controls which security group (sort of a simple edge firewall, if you are not familiar with EC2) your instances should automatically join.

provider: maps to one of salt-cloud’s supported cloud vendors, so it knows which API to speak.

del_root_vol_on_destroy and del_all_vols_on_destroy: determine what should happen to any EBS volumes created alongside your instances. In my case, I want them cleaned up when my instances die so I don’t end up paying for them forever. But YMMV: be sure you won’t be storing any critical data on these volumes before you configure them to self-destruct! Confusingly, you need to specify both if you want all EBS volumes to be destroyed. Some instance types, such as the newer t2.micro, automatically create an EBS root volume on boot; setting del_all_vols_on_destroy does not destroy that volume, only any others you later attach. So again, consider the behavior you want and set these appropriately. The default behavior depends on which AMI you’re using, so it’s best to set both explicitly.
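
One detail worth spelling out: where this config lives. salt-cloud reads provider definitions from /etc/salt/cloud.providers, or from any file ending in .conf under /etc/salt/cloud.providers.d/, so the block above could be saved as, say, /etc/salt/cloud.providers.d/ec2-dealwithit.conf (the filename itself is up to you). A quick way to confirm it’s being picked up:

salt-cloud --list-providers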

Configuring a Profile

Armed with your provider config, it’s time to create a profile. This builds on the provider and describes the details of an individual VM.

ec2-www:
  provider: ec2-dealwithit
  image: ami-96a818fe
  size: t2.micro
  ssh_username:
    - centos
  location: us-east-1
  availability_zone: us-east-1b
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 30
      Ebs.VolumeType: gp2

Once again, a fairly straightforward YAML file.

ec2-www: An arbitrary identifier used to reference your profile in other configs or from the CLI.

provider: The name of a provider you’ve previously defined in /etc/salt/cloud.providers.d/. In this case, it’s the one we set up above.

image: The AMI ID that will be the basis for your VM.

size: The size or “flavor” for your instance. You can print a list of available sizes for a given provider with a command like this: salt-cloud --list-sizes ec2-dealwithit

ssh_username: The user that the salt-bootstrap code should use to connect to your instance, using the SSH keypair you defined earlier in the provider config. This user is baked into your AMI. If you work with several images that use different default users, you can list them all and salt-cloud will try them one by one.

location and availability_zone: The region and AZ where your instance will live (if you care). You can print a list of locations for a provider with salt-cloud --list-locations ec2-dealwithit.

block_device_mappings: Create or modify an EBS volume to attach to your instance. In my case, I’m using a t2.micro instance which comes with a very small (~6GB) root volume. The AWS free tier allows up to 30GB of EBS storage for free, so I opted to resize the disk to take advantage of that. I also used the gp2 (general purpose SSD) volume type for better performance. You can map as many EBS volumes as you like, or leave this section out entirely if it’s not relevant to you.
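
As with providers, profiles have a default home: /etc/salt/cloud.profiles, or any .conf file under /etc/salt/cloud.profiles.d/ (again, the filename itself doesn’t matter). Once the block above is saved there, it becomes available to the -p option, which we’ll put to use shortly:

salt-cloud -p ec2-www <name-of-instance>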

Configuring a Map

The final config file I want to touch on is a map; unlike the others, it’s optional. Remember, a map lays out multiple instances belonging to one or more profiles, allowing you to boot a full application stack with one command. Here’s a quick example:

ec2-www:
  - web1
  - web2
  - staging:
      minion:
        master: staging-master.example.com

ec2-www: This is the name of a profile that you’ve previously defined. Here, I’m using the ec2-www profile that we created above.

web1, web2, ...: These are the names of individual instances that will be booted based on the parent profile.

staging: Here, I’m defining an instance and overriding some default settings. Because I can! Specifically, I changed the minion config that salt-bootstrap will drop onto the newly booted host in /etc/salt/minion. For example, you could set up a staging server where you test code before deploying it fully. This server might be pointed at a different salt-master to keep it segregated from production. Nearly any setting from the Core, Provider and Profile levels can be overridden to suit your needs.
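
Unlike providers and profiles, a map file has no fixed location; you point salt-cloud at it explicitly with -m. To follow along with the example commands below, you could save the map above as, for instance:

/etc/salt/cloud.maps.d/demo.map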

Making It Rain

Ok, I had to get one bad cloud joke in. Lighten up. Anyway, now that we’ve laid out our config files, we can go about the business of actually managing our cloud(s).

salt-cloud -p ec2-www web1

Boom! You just booted a VM named web1 based on the ec2-www profile we created earlier. If it seems like it’s taking a long time, that’s because the salt-bootstrap deploy script runs on first boot, loading salt onto the new minion for management. Depending on the log level you’ve configured in the core config (/etc/salt/cloud by default), salt-cloud will either sit silently and eventually report success, or spam your console with excruciating detail about its progress. But either way, when it’s done, you’ll get a nice YAML-formatted report about your new VM.
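
If you’re running salt-cloud from your salt-master (the usual setup), a quick sanity check is to ping the new minion with a regular salt command. This assumes the minion’s key was accepted on the master, which salt-cloud normally takes care of during deployment:

salt 'web1' test.ping
web1:
    True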

salt-cloud -a reboot web1
[INFO    ] salt-cloud starting
The following virtual machines are set to be actioned with "reboot":
  web1

Proceed? [N/y] y
... proceeding
[INFO    ] Complete
ec2-www:
    ----------
    ec2:
        ----------
        web1:
            ----------
            Reboot:
                Complete

In this example, we’re using the -a (action) option to reboot the instance we just created. Salt-cloud loops through all of your providers, querying them for an instance with the name you provide. Once found, it sends the proper API call to the cloud vendor to reboot the instance.
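
reboot isn’t the only action available; what you get depends on the driver. For EC2, the driver also supports actions such as start, stop and show_instance (the last dumps the provider’s metadata for the instance), invoked the same way:

salt-cloud -a stop web1
salt-cloud -a show_instance web1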

salt-cloud -p ec2-www -d web1
[INFO    ] salt-cloud starting
The following virtual machines are set to be destroyed:
  ec2-www:
    ec2:
      web1

Proceed? [N/y] y
... proceeding
[INFO    ] Destroying in non-parallel mode.
[INFO    ] [{'instanceId': 'i-e7800116', 'currentState': {'code': '48', 'name': 'terminated'}, 'previousState': {'code': '80', 'name': 'stopped'}}]
ec2-www:
    ----------
    ec2:
        ----------
        web1:
            ----------
            currentState:
                ----------
                code:
                    48
                name:
                    terminated
            instanceId:
                i-e7800116
            previousState:
                ----------
                code:
                    80
                name:
                    stopped

Now that we’re done playing, I’ve deleted the instance we just booted. Easy come, easy go.

salt-cloud -m /etc/salt/cloud.maps.d/demo.map -P

In this last example, we’re booting the map we created earlier. This should bring up three VMs: web1, web2, and staging. The -P option makes this happen in parallel rather than one at a time. The whole point of working in the cloud is speed, so why wait around?
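
And as the help output noted, -m can be combined with the query options, which makes it easy to check on everything a map defines in one shot:

salt-cloud -m /etc/salt/cloud.maps.d/demo.map -Q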

Wrapping Up

That pretty well covers the basics of salt-cloud: what it is, how to configure it, and how to turn those configs into real, live VMs at your cloud vendor(s) of choice. There’s certainly more to salt-cloud than what I’ve covered so far. The official docs could also stand some improvement, to put it mildly. So I definitely plan to revisit salt-cloud in future posts. I’m already planning one to talk about deploy scripts such as the default salt-bootstrap.

If you’re wondering “why go to all this trouble writing configs just to boot a dang VM?”, it’s a fair point. But there are reasons! One major benefit of salt-cloud is the way it abstracts away vendor details. You write your configs once, then use the same CLI syntax to manage your VMs wherever they may live. It also gives you the advantages of infrastructure as code. You can keep these configs in a version control system like git. You can see at a glance which VMs should exist, and how they should be configured. It gives you a level of consistency and repeatability you don’t get from ad-hoc work at the command line or a web GUI. These are all basic tenets of good, modern system administration.

I hope that this series was helpful! Please feel free to leave a comment with any questions, corrections or discussion.
