
How to Implement Secure Containers Using Google’s gVisor

This contributed post reviews the landscape of container isolation approaches, including hypervisor-based secure containers, and offers a tutorial on how to set up Google's gVisor.
Dec 19th, 2018 9:00am by Karthikeyan Shanmugam

Karthikeyan Shanmugam
Karthikeyan Shanmugam (Karthik) is an experienced solutions architect with more than 17 years of experience in the design and development of applications across the banking, financial services and aviation domains. He is currently involved in technical consulting and providing solutions in the application transformation space, which includes modernizing legacy applications, managing transformation exercises and providing solution architecture for transformation initiatives.

Linux containers have been around since the early 2000s and were architected into Linux in 2007. Because of containers' small footprint and portability, the same hardware can support an exponentially larger number of containers than VMs, dramatically reducing infrastructure costs and enabling more apps to be deployed faster. But due to usability issues, containers didn't attract much interest until Docker arrived in 2013.

Unlike hypervisor virtualization (e.g. Xen, Hyper-V), where virtual machines run on physical hardware via an intermediation layer (the hypervisor), containers run userspace on top of an operating system's kernel. That makes them very lightweight and fast.

Containers have also sparked interest in microservices architecture, a design pattern in which complex applications are broken down into smaller, composable services that work together.

Now, with the increasing adoption of containers and microservices in the enterprise, there are also risks that come along with containers. For example, if any one container breaks out of its isolation, it can allow unauthorized access across containers, hosts or data centers, affecting all the containers hosted on the host OS.

To mitigate these risks, we are going to take a look at various approaches, and specifically Google's gVisor, a kind of sandbox that helps provide secure isolation for containers. It also integrates with the Docker and Kubernetes container platforms, making it simple and easy to run sandboxed containers in production environments.

With this context, let's now check out the various approaches to implementing sandboxed containers.

Roundup of Container Isolation Mechanisms

Machine-level virtualization exposes virtualized hardware to a guest kernel via a Virtual Machine Monitor (VMM). Running containers in distinct virtual machines can provide great isolation, compatibility and performance, but it often requires additional proxies and agents, and may come with a larger resource footprint and slower start-up times.

Machine Level Virtualization

Comparison between a conventional platform and a machine-level-virtualization-enabled platform

KVM is one of the best examples of machine-level virtualization. Recently, Amazon launched Firecracker, a new virtualization technology that makes use of KVM. AWS Lambda and AWS Fargate use Firecracker extensively to provision and run secure sandboxes that execute customer functions.

KVM Virtualization infrastructure

Another notable project based on KVM is Kata Containers, which leverages lightweight virtual machines that integrate seamlessly with the container ecosystem, including Docker and Kubernetes.

Rule-based execution, for example seccomp filters, allows the specification of a fine-grained security policy for an application or container. In practice, however, it can be extremely difficult to reliably define such a policy for an application, making this approach challenging to apply in all scenarios.

Rule-Based Execution

To configure this in Docker, Docker needs to be built with seccomp support and the kernel needs to be configured with CONFIG_SECCOMP enabled. First, check whether your kernel supports seccomp and is configured accordingly.


Check if seccomp is enabled
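On most distributions the kernel config is available under /boot, so a quick check (the exact config path may vary by distribution) looks like this:

grep CONFIG_SECCOMP= /boot/config-$(uname -r)

If the output shows CONFIG_SECCOMP=y, the kernel supports seccomp.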

Docker runs with the default seccomp profile unless you override it with the --security-opt option on the docker run command. For example, the following explicitly specifies a policy:
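As a minimal sketch, assuming a custom profile saved at ./profile.json (the path and the alpine image are illustrative):

docker run --rm -it --security-opt seccomp=./profile.json alpine sh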


The default seccomp profile provides a sane baseline for running containers with seccomp and disables around 44 system calls out of 300+. It is moderately protective while providing wide application compatibility. The default Docker profile can be found here.

The profile.json file whitelists specific system calls and denies access to all other system calls.
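For illustration only, a heavily trimmed whitelist-style profile might look like the snippet below; a real profile, such as Docker's default, allows a far larger set of system calls, and the field names follow Docker's seccomp profile format:

{
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "exit", "exit_group"],
            "action": "SCMP_ACT_ALLOW"
        }
    ]
}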

In the next section, we will look at Google's gVisor approach to container isolation.

Introducing gVisor

gVisor is a lightweight user-space kernel, written in Go, that implements a substantial portion of the Linux system surface. By implementing the Linux system surface, it provides isolation between the host and the application. It also includes an Open Container Initiative (OCI) runtime called runsc, which maintains the isolation boundary between the application and the host kernel.

It intercepts all application system calls and acts as the guest kernel, without the need for translation through virtualized hardware. Also, gVisor does not simply redirect application system calls through to the host kernel. Instead, gVisor implements most kernel primitives (like signals, file systems, futexes, pipes, mm, etc.) and has complete system call handlers built on top of these primitives.

gVisor Kernel

Unlike the above mechanisms, gVisor provides a strong isolation boundary by intercepting application system calls and acting as the guest kernel, all while running in user-space. Unlike a VM which requires a fixed set of resources on creation, gVisor can accommodate changing resources over time as normal Linux processes do.

Although gVisor implements a large portion of the Linux surface and is broadly compatible, there are unimplemented features and bugs. Please file a bug here if you run into issues.

How to Implement Sandboxed Containers Using gVisor (for Docker Applications)

The first step is to download the runsc container runtime from the latest nightly build. After downloading the binary, verify it against the SHA512 checksum file.


runsc gVisor Docker runtime
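Roughly, the download and install steps look like the following sketch; the nightly-build URL is the one the gVisor project documented at the time of writing, so check the gVisor documentation for the current location:

wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc
wget https://storage.googleapis.com/gvisor/releases/nightly/latest/runsc.sha512
sha512sum -c runsc.sha512
chmod a+x runsc
sudo mv runsc /usr/local/bin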

The next step is to configure Docker to use runsc by adding a runtime entry to the Docker configuration (/etc/docker/daemon.json):

Docker configuration for runsc
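Assuming runsc was installed to /usr/local/bin as in the sketch above, the runtime entry looks like this:

{
    "runtimes": {
        "runsc": {
            "path": "/usr/local/bin/runsc"
        }
    }
}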

Restart the Docker daemon after making the changes.
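On a systemd-based host, for example:

sudo systemctl restart docker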

Now that the gVisor configuration is complete, we can test it by running the hello-world container with the command docker run --runtime=runsc hello-world.

Run hello world container using runsc (gVisor)
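As an optional sanity check, comparing dmesg output from inside a sandboxed container with the host's output is a quick way to confirm the workload is running on the gVisor kernel rather than the host kernel (the alpine image here is just an example):

docker run --runtime=runsc hello-world
docker run --runtime=runsc -it alpine dmesg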

Next, let's try running an httpd server on gVisor; here, a container named test-apache-app uses the httpd image with the gVisor runtime.

Run httpd server on gVisor
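A sketch of that command; the container name test-apache-app comes from the walkthrough above, while the port mapping is an arbitrary choice for this example:

docker run -d --runtime=runsc --name test-apache-app -p 8080:80 httpd
curl http://localhost:8080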

The runsc runtime can also run sandboxed pods in a Kubernetes cluster through the use of either the cri-o or cri-containerd projects, which convert messages from the Kubelet into OCI runtime commands.
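As a rough, illustrative sketch of how this was wired up with cri-containerd at the time of writing (the configuration keys have evolved since, so consult the current gVisor and containerd documentation): the containerd CRI plugin is pointed at runsc for untrusted workloads, and a pod opts in via an annotation.

# /etc/containerd/config.toml (fragment)
[plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/local/bin/runsc"

# Pod spec (fragment): request the untrusted (gVisor) runtime
metadata:
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"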

Congrats! We have learned how to implement sandboxed containers using gVisor.


Feature image via Pixabay.
