Why I Ditched Vagrant for LXD

Updated July 2022: This was getting a bit out of date in some places, so I’ve fixed a few things. More importantly, I’ve run into some issues with cgroups and LXC on Arch, and have added some notes below under the special note for Arch users

I’ve used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about setting up a custom Debian 9 Vagrant box to mirror the server running this site.

The problem with Vagrant is that I have to run a huge memory-hungry virtual machine when all I really want to do is run Django’s built-in dev server.

My laptop only has 8GB of RAM. My browser is usually taking around 2GB, which means if I start two Vagrant machines, I’m pretty much maxed out. Django’s dev server is also painfully slow to reload when anything changes.

Recently I was talking with one of Canonical’s MAAS developers and the topic of containers came up. When I mentioned I really didn’t like Docker, but hadn’t tried anything else, he told me I really needed to try LXD. Later that day I began reading through the LinuxContainers site and tinkering with LXD. Now, a few days later, there’s not a Vagrant machine left on my laptop.

Since it’s just me, I don’t care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical’s Michael Iatrou, LXC “liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation.”

Here’s how I’m using LXD to manage containers for Django development on Arch Linux. I’ve also included instructions and commands for Ubuntu since I set it up there as well.

What’s the difference between LXC, LXD and lxc?

I wrote this guide in part because I’ve been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It’s really not, though; in fact, I found it easier to understand than Vagrant or Docker.

So what is an LXC container, what’s LXD, and how is either different from, say, a VM or, for that matter, Docker?

  • LXC - low-level tools and a library to create and manage containers; powerful, but complicated.
  • LXD - a daemon that provides a REST API to drive LXC containers; much more user-friendly.
  • lxc - the command line client for LXD.

In LXC parlance a container is essentially a virtual machine (if you want to get pedantic, see Stéphane Graber’s post on the various components that make up LXD). For the most part, though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket, potato, potahto. Mostly.

An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it’s doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that’s what I allocated to the VM — that’s what it uses even if it doesn’t need it. LXC is much smarter than that.

Now what about LXD? LXC is the low-level tool; you don’t really need to go there. Instead you interact with your LXC containers via the LXD API. It uses YAML config files and a command line tool, lxc.

That’s the basic stack, let’s install it.

Install LXD

On Arch I used the version of LXD in the AUR. Ubuntu users should go with the Snap package. The other thing you’ll want is your distro’s Btrfs or ZFS tools.

Part of LXC’s magic relies on either Btrfs or ZFS to read a virtual disk not as a file, the way VirtualBox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is spent downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container.

In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if I could spot any difference (so far, that would be no; the only difference I’ve run across in my research is that Btrfs can run LXC containers inside LXC containers. LXC turtles all the way down).

Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands:

sudo apt install zfsutils-linux

And then install the snap version of lxd with:

sudo snap install lxd

Once that’s done we need to initialize LXD. I went with the defaults for everything. I’ve printed out the entire init command output so you can see what will happen:

sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=15GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]:    
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes

LXD will then spit out the contents of the profile you just created. It’s a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. To see all installed profiles use:

lxc profile list

To view the contents of a profile use:

lxc profile show <profilename>
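For reference, with the defaults above, lxc profile show default prints something roughly like this (a sketch of a stock profile, not verbatim output; the pool and bridge names will match whatever you chose during init):

```yaml
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
```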

To edit a profile use:

lxc profile edit <profilename>

So far I haven’t needed to edit a profile by hand. I’ve also been happy with all the defaults, although when I do this again I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I’m just trying to figure things out, so defaults it is.

The last step in our setup is to add our user to the lxd group. By default LXD runs as the lxd group, so to interact with containers we’ll need to make our user part of that group.

sudo usermod -a -G lxd yourusername
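Note that group changes only take effect on a new login session. Here’s a quick check (my addition, not part of the original setup steps) to see whether the group is active yet:

```shell
# Group membership is read at login; check whether the lxd group is
# active in the current shell
if id -nG | grep -qw lxd; then
    echo "lxd group active"
else
    echo "not active yet: log out and back in, or run 'newgrp lxd'"
fi
```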
Special note for Arch users

To run unprivileged containers as your own user, you’ll need to jump through a couple of extra hoops. As usual, the Arch Wiki has you covered. Read through and follow those instructions, then reboot, and everything below should work as you’d expect.

Or at least it did until about June of 2022, when something changed with cgroups and I stopped being able to run my LXC containers. I kept getting errors like:

Failed to create cgroup at_mnt 24() 
lxc debian-base 20220713145726.259 ERROR conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:851 - No such file or directory - Failed to mount "/sys/fs/cgroup"

I tried debugging and reading through all the bug reports I could find over the course of a couple of days, and got nowhere. No one else seemed to have this problem. I gave up and decided I’d skip virtualization and develop directly on Arch. I installed PostgreSQL… and it wouldn’t start, also throwing an error about cgroups. That’s when I dug deeper into cgroups and found a way to revert to the older behavior. I added this line to my boot params (in my case that’s in /boot/loader/entries/arch.conf):

systemd.unified_cgroup_hierarchy=0

That fixed all the issues for me. If anyone can explain why, I’d be interested to hear from you in the comments.
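If you want to verify which cgroup layout you’re actually running, the filesystem type mounted at /sys/fs/cgroup tells you (this diagnostic is my addition to the troubleshooting above):

```shell
# Report the filesystem type mounted at /sys/fs/cgroup:
# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" indicates
# the legacy/hybrid layout that the boot parameter above reverts to
stat -fc %T /sys/fs/cgroup
```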

Create Your First LXC Container

Let’s create our first container. This website runs on a Debian VM currently hosted on Vultr.com so I’m going to spin up a Debian container to mirror this environment for local development and testing.

To create a new LXC container we use the launch command of the lxc tool.

There are four sources from which you can get LXC containers: local (meaning a container base you’ve already downloaded), images (which come from https://images.linuxcontainers.org/), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily Ubuntu images). The images on linuxcontainers.org are unofficial, but the Debian image I used worked perfectly. There’s also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images. Pretty much every architecture you could imagine is in there too.

I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is lxc launch images:distroname/version/architecture containername. For example:

lxc launch images:debian/stretch/amd64 debian-base
Creating debian-base
Starting debian-base

That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. Now if we look at the list of installed containers we should see something like this:

lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                                                                         
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                                                                         
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                                                                         
| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT |           |                                                                                         
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+  

Now what? This is what I love about LXC: we can interact with our container pretty much the same way we’d interact with a VM. Let’s connect to the root shell:

lxc exec debian-base -- /bin/bash

Look at your prompt and you’ll notice it says root@nameofcontainer. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the GeoDjango requirements (PostGIS, GDAL, etc.), along with a few other odds and ends.

You don’t have to do it from inside the container though. Part of LXD’s charm is being able to run commands without logging into anything. Instead you can do this:

lxc exec debian-base -- apt update
lxc exec debian-base -- apt install postgresql postgis virtualenv

LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: alias luxdev='lxc exec debian-base -- ' so that all I need to type is luxdev <command>.
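If you use the wrapper in scripts, a shell function is a bit more robust than an alias, since aliases aren’t expanded in non-interactive shells and a function forwards its arguments cleanly. A sketch, keeping the same luxdev name:

```shell
# Function equivalent of the alias; works in scripts and passes all
# arguments through to the container unchanged
luxdev() {
    lxc exec debian-base -- "$@"
}
```

With that defined, luxdev apt update behaves the same as the alias version.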

What I haven’t figured out is how to chain commands; this does not work:

lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000

According to a bug report, it should work in quotes, but it doesn’t for me. Something must have changed since then, or I’m doing something wrong.
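One likely culprit (my reading, not confirmed against the author’s setup): unquoted, the host shell splits the line at the first &&, so only the su command ever reaches the container and the rest runs on the host. Quoting the whole chain makes it a single argument to a shell inside the container:

```shell
# Quoted, the whole chain is one argument interpreted by a shell
# inside the container, e.g. (paths as in the command above):
#
#   lxc exec debian-base -- su - lxf -c 'cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000'
#
# The quoting behavior itself, demonstrated without a container:
bash -c 'echo first && echo second'
```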

The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you’ll need to edit /etc/subuid and /etc/subgid to add your user id. Use the id command to get your user and group id (it’s probably 1000, but if not, adjust the commands below). Once you have your user id, add it to the files with this one-liner I got from the Ubuntu blog:

echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
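Afterward, the tail of each file should contain a line like the second one below (the wide range on the first line is a typical distro default for root’s unprivileged container mappings; the exact numbers may differ on your system):

```
root:100000:65536
root:1000:1
```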

Then you need to configure your LXC instance to use the same uid:

lxc config set debian-base raw.idmap 'both 1000 1000'

The last step is to add a device to your config file so LXC will mount it. You’ll need to stop and start the container for the changes to take effect.

lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc
lxc stop debian-base
lxc start debian-base

That replicates my setup in Vagrant, but we’ve really just scratched the surface of what you can do with LXD. For example, you’ll notice I named the initial container “debian-base”. That’s because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container:

lxc snapshot debian-base debian-base-configured
lxc copy debian-base/debian-base-configured mycontainer

Now you’ve got a new container named mycontainer. If you’d like to tweak anything, for example mount a different folder specific to this new project you’re starting, you can edit the config file like this:

lxc config edit mycontainer

I highly suggest reading through Stéphane Graber’s 12-part series on LXD to get a better idea of other things you can do: how to manage resources, manage local images, migrate containers, or connect LXD with Juju, OpenStack or, yes, even Docker.

Shoulders stood upon

  1. To be fair, I didn’t need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don’t know why you’d bother. LXD’s management tools and config system work great; why add yet another tool to the mix? Unless you’re working with developers who use Windows, in which case LXC, which is short for Linux Containers, is not for you.