Getting Started with Firecracker
Prerequisites
If you need an opinionated way of running Firecracker, create an i3.metal instance using Ubuntu 18.04 on EC2. Firecracker uses KVM and needs read/write access to /dev/kvm, which can be granted as shown below:
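(One possible approach, using a file system ACL; see Appendix A for alternatives.)

```bash
sudo setfacl -m u:${USER}:rw /dev/kvm
```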
The generic requirements are explained below:
Linux 4.14+
Firecracker currently supports physical Linux x86_64 and aarch64 hosts, running kernel version 4.14 or later. However, the aarch64 support is not feature complete (alpha stage).
KVM
Please make sure that:

- you have KVM enabled in your Linux kernel, and
- you have read/write access to /dev/kvm. If you need help setting up access to /dev/kvm, you should check out Appendix A.
To check if your system meets the requirements to run Firecracker, clone the repository and execute tools/devtool checkenv.
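(A sketch, assuming git and Docker are already set up; the repository URL is the official one.)

```bash
git clone https://github.com/firecracker-microvm/firecracker
cd firecracker
tools/devtool checkenv
```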
Getting the Firecracker Binary
Firecracker is statically linked against musl and has no library dependencies. You can simply download the latest binary from our release page and run it on your x86_64 or aarch64 Linux machine.
On the EC2 instance, this binary can be downloaded as:
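(A sketch; the asset naming below follows the release page convention and may change between versions.)

```bash
release_url="https://github.com/firecracker-microvm/firecracker/releases"
# Resolve the tag of the latest release by following the redirect.
latest=$(basename $(curl -fsSLI -o /dev/null -w %{url_effective} ${release_url}/latest))
# Download the binary matching the host architecture.
curl -LO ${release_url}/download/${latest}/firecracker-${latest}-$(uname -m)
```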
Rename the binary to "firecracker":
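```bash
# Uses the ${latest} release tag resolved in the download step above.
mv firecracker-${latest}-$(uname -m) firecracker
```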
If, instead, you'd like to build Firecracker yourself, you should check out the Building From Source section in this doc.
Running Firecracker
In production, Firecracker is designed to be run securely, inside an execution jail, carefully set up by the jailer binary. This is how our integration test suite does it. However, if you just want to see Firecracker booting up a guest Linux machine, you can do that as well.
First, make sure you have the firecracker binary available - either downloaded from our release page, or built from source.
Next, you will need an uncompressed Linux kernel binary, and an ext4 file system image (to use as rootfs).
To run an x86_64 guest, you can download such resources from: kernel and rootfs. To run an aarch64 guest, download them from: kernel and rootfs.
Now, let's open up two shell prompts: one to run Firecracker, and another one to control it (by writing to the API socket). For the purpose of this guide, make sure the two shells run in the same directory where you placed the firecracker binary.
In your first shell:
make sure Firecracker can create its API socket:
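```bash
# The socket path is a convention used throughout this guide.
rm -f /tmp/firecracker.socket
```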
then, start Firecracker:
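```bash
./firecracker --api-sock /tmp/firecracker.socket
```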
In your second shell prompt:
get the kernel and rootfs, if you don't have any available:
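(A sketch for an x86_64 guest; assumes the demo images are still hosted at these URLs.)

```bash
curl -fsSL -o hello-vmlinux.bin https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
curl -fsSL -o hello-rootfs.ext4 https://s3.amazonaws.com/spec.ccfc.min/img/hello/fsfiles/hello-rootfs.ext4
```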
set the guest kernel (assuming you are in the same directory as the above script was run):
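```bash
curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/boot-source' \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "kernel_image_path": "./hello-vmlinux.bin",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
    }'
```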
set the guest rootfs:
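```bash
curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/drives/rootfs' \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "drive_id": "rootfs",
        "path_on_host": "./hello-rootfs.ext4",
        "is_root_device": true,
        "is_read_only": false
    }'
```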
start the guest machine:
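```bash
curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/actions' \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{ "action_type": "InstanceStart" }'
```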
Going back to your first shell, you should now see a serial TTY prompting you to log into the guest machine. If you used our hello-rootfs.ext4 image, you can log in as root, using the password root.
When you're done, issuing a reboot command inside the guest will actually shut down Firecracker gracefully, since Firecracker doesn't implement guest power management.
Note: the default microVM will have 1 vCPU and 128 MiB RAM. If you wish to customize that (say, 2 vCPUs and 1024 MiB RAM), you can do so before issuing the InstanceStart call, via this API command:
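```bash
curl --unix-socket /tmp/firecracker.socket -i \
    -X PUT 'http://localhost/machine-config' \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "vcpu_count": 2,
        "mem_size_mib": 1024
    }'
```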
Configuring the microVM without sending API requests
If you'd like to boot up a guest machine without using the API socket, you can do that by passing the parameter --config-file to the Firecracker process. The command for starting Firecracker with this option will look like this:
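```bash
./firecracker --api-sock /tmp/firecracker.socket --config-file <path_to_the_configuration_file>
```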
path_to_the_configuration_file should be the path to a file containing a JSON document that stores the entire configuration for all of the microVM's resources. The JSON must contain the configuration for the guest kernel and rootfs, as these are mandatory; all of the other resources are optional, so it's your choice whether to configure them or not. Because this configuration method also starts the microVM, you need to specify all desired pre-boot configurable resources in that JSON. The names of the resources are the ones from the firecracker.yaml file, and the names of their fields are the same ones used in API requests. You can find an example configuration file at tests/framework/vm_config.json. After the machine is booted, you can still use the socket to send API requests for post-boot operations.
Building From Source
The quickest way to build and test Firecracker is by using our development tool (tools/devtool). It employs a per-architecture Docker container to store the software toolchain used throughout the development process. If you need help setting up Docker on your system, you can check out Appendix B: Setting Up Docker.
Getting the Firecracker Sources
Get a copy of the Firecracker sources by cloning our GitHub repo:
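```bash
git clone https://github.com/firecracker-microvm/firecracker
cd firecracker
```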
All development happens on the main branch and we use git tags to mark releases. If you are interested in a specific release (e.g. v0.10.1), you can check it out with:
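```bash
git checkout tags/v0.10.1
```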
Building Firecracker
Within the Firecracker repository root directory:

- with the default musl target: tools/devtool build
- (experimental only) using the gnu target: tools/devtool build -l gnu

This will build and place the two Firecracker binaries at build/cargo_target/${toolchain}/debug/firecracker and build/cargo_target/${toolchain}/debug/jailer.
If you would like to test a new feature and work with dependencies on libraries located in private git repos, you can use the --ssh-keys flag to specify the paths to your public and private SSH keys on the host. Both of them are required for git authentication when fetching the repositories.
Please note that only a single set of credentials is supported. devtool cannot fetch multiple private repos which rely on different credentials.
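(A sketch, with hypothetical key paths.)

```bash
tools/devtool build --ssh-keys ~/.ssh/id_rsa.pub ~/.ssh/id_rsa
```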
The default build profile is debug. If you want to build the release binaries (optimized and stripped of debug info), use the --release option:
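```bash
tools/devtool build --release
```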
Extensive usage information about devtool and its various functions and arguments is available via:
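```bash
tools/devtool --help
```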
Alternative: Building Firecracker using glibc
The toolchain that Firecracker is tested against, and that is recommended for building production releases, is the one automatically used when building with devtool. In this configuration, Firecracker is currently built as a static binary linked against the musl libc implementation.
Firecracker also builds using glibc toolchains, such as the default Rust toolchains provided in certain Linux distributions:
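(A sketch; the explicit gnu target triple is an assumption for an x86_64 host and must be available in your Rust toolchain.)

```bash
cargo build --target x86_64-unknown-linux-gnu
```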
That being said, Firecracker binaries linked with glibc or built without devtool are always considered experimental and should not be used in production.
Running the Integration Test Suite
You can also use our development tool to run the integration test suite:
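```bash
tools/devtool test
```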
Please note that the test suite is designed to ensure our SLA parameters as measured on EC2 .metal instances and, as such, some performance tests may fail when run on a regular desktop machine. Specifically, don't be alarmed if you see tests/integration_tests/performance/test_process_startup_time.py failing when not run on an EC2 .metal instance. You can skip performance tests with:
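(One possible invocation; assumes arguments after -- are forwarded to pytest, and that the path is relative to the tests directory.)

```bash
tools/devtool test -- --ignore integration_tests/performance
```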
Appendix A: Setting Up KVM Access
Some Linux distributions use the kvm group to manage access to /dev/kvm, while others rely on access control lists. If you have the ACL package for your distro installed, you can grant your user access via:
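```bash
sudo setfacl -m u:${USER}:rw /dev/kvm
```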
Otherwise, if access is managed via the kvm group:
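```bash
sudo usermod -aG kvm ${USER}
```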
If none of the above works, you will need to either install the file system ACL package for your distro and use the setfacl command as above, or run Firecracker as root (via sudo).
You can check if you have access to /dev/kvm with:
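(A quick sketch of such a check.)

```bash
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"
```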
Note: If you've just added your user to the kvm group via usermod, don't forget to log out and then back in, so this change takes effect.
Appendix B: Setting Up Docker
To get Docker, you can either use the official Docker install instructions, or the package manager available on your specific Linux distribution:
on Debian / Ubuntu
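```bash
sudo apt-get update && sudo apt-get install docker.io
```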
on Fedora / CentOS / RHEL / Amazon Linux
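```bash
sudo yum install docker
```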
Then, for any of the above, you will need to start the Docker daemon and add your user to the docker group.
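(A sketch, assuming a systemd-based distribution.)

```bash
sudo systemctl start docker
sudo usermod -aG docker ${USER}
```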
Don't forget to log out and then back in again, so that the user change takes effect.
If you wish to have Docker started automatically after boot, you can:
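```bash
sudo systemctl enable docker
```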
We recommend testing your Docker configuration by running a lightweight test container and checking for net connectivity:
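(A sketch; the alpine image and the ping target are assumptions.)

```bash
docker run --rm -it alpine ping -c 3 amazon.com
```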