
Tutorial: Configuring a Kubernetes DevTest Cluster in DigitalOcean

Apr 20th, 2017 1:00am

DigitalOcean is an affordable cloud computing platform for developers. With a presence across America, Asia, and Europe, it is one of the fastest growing public cloud services companies. Kubernetes is also gaining ground in the container orchestration ecosystem. Many businesses are considering it as the container management platform for production workloads.

This tutorial walks you through the steps involved in configuring a multi-node Kubernetes cluster in DigitalOcean from a Mac, for development and testing purposes. What’s unique about this guide is that it shows you tips and tricks for taking advantage of the features available in DigitalOcean. We will learn how to effectively use the cloud-config mechanism and the recently announced load balancer feature in DigitalOcean.

Some of these tricks can also be easily applied to other public cloud environments.

The final step will show you how to deploy a microservices application exposed through a load balancer.

You can download the scripts and the YAML file for the sample application from GitHub. There is also a video walk-through of the setup available on YouTube.

Here is an illustration of the deployment topology.

Kubernetes deployment topology on DigitalOcean

Setting up the Environment

Apart from an active account with DigitalOcean, we also need the CLI for both DigitalOcean and Kubernetes — doctl and kubectl.

Let’s start by downloading and installing the CLI for DigitalOcean.
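One simple route on a Mac is Homebrew, assuming you already have brew installed; a release tarball from the doctl GitHub releases page works just as well:

brew install doctl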


Refer to this tutorial to associate DigitalOcean CLI with your account.
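In essence, you generate a personal access token in the DigitalOcean control panel and hand it to the CLI; a minimal sketch:

doctl auth init
# paste the API token generated in the DigitalOcean control panel when prompted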

We will then download kubectl, the CLI for Kubernetes.
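A sketch for macOS, using the standard Kubernetes release bucket documented at the time:

# download the latest stable kubectl binary for macOS
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/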


The next step is to generate an SSH key pair and import it into our DigitalOcean account. To securely access the cluster, we will create a pair of SSH keys associated with the Kubernetes master and nodes.
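A sketch of both steps; the file name k8s-do and key name k8s-key are placeholders of my choosing:

# generate a dedicated key pair for the cluster
ssh-keygen -t rsa -b 4096 -f ~/.ssh/k8s-do -N ""

# import the public key into the DigitalOcean account
doctl compute ssh-key import k8s-key --public-key-file ~/.ssh/k8s-do.pub

# capture the key ID in a variable for later droplet creation
SSH_KEY=$(doctl compute ssh-key list --format ID,Name --no-header | grep k8s-key | awk '{print $1}')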


The above step copies the public key to the cloud, which can be used during the creation of droplets, the VMs in DigitalOcean.

We will also create a couple of tags in the DigitalOcean environment that we need later.
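The node tag k8s-node is referenced later by the load balancer; the master tag name is my assumption:

doctl compute tag create k8s-master
doctl compute tag create k8s-node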


We need a token that the Kubernetes nodes will use to discover the master. We will generate it through a simple one-line Python script and then replace the placeholder in the master.sh and node.sh scripts that we will use later.
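A sketch of both steps; the sed pattern assumes the scripts declare the token on a line starting with TOKEN=, which may differ in the actual repo:

# generate a kubeadm-style token (Python 2 syntax, the macOS default at the time)
TOKEN=$(python -c 'import random; print "%0x.%0x" % (random.SystemRandom().getrandbits(3*8), random.SystemRandom().getrandbits(8*8))')

# stamp the token into both cloud-config scripts
sed -i.bak "s/^TOKEN=.*/TOKEN=${TOKEN}/" master.sh node.sh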


Finally, let’s define a variable that holds the preferred region for the deployment.
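For example (nyc3 is just one choice):

REGION=nyc3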


To get a list of regions supported by DigitalOcean, you can use the command below.
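With doctl, that is simply:

doctl compute region list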


With the environment configuration in place, let’s go ahead and deploy the Kubernetes master.

Configuring the Kubernetes Master

We are using kubeadm, a tool that dramatically reduces the pain of installing Kubernetes. Kubeadm supports the CentOS 7 and Ubuntu 16.04 distributions. We are going with Ubuntu for our setup.

The following script configures the Kubernetes master.
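The full master.sh ships with the GitHub repo linked above; the sketch below captures its essential flow, with package names and kubeadm flags reflecting the Kubernetes 1.6 era (the pod network choice is my assumption):

#!/bin/bash
# master.sh (sketch) - runs as cloud-config user data on the master droplet
TOKEN=TOKEN_PLACEHOLDER

# install Docker and the Kubernetes packages
apt-get update && apt-get install -y docker.io apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl kubernetes-cni

# look up this droplet's public IP from the DigitalOcean metadata service
PUBLIC_IP=$(curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)

# bootstrap the control plane with the shared token
kubeadm init --token ${TOKEN} --apiserver-advertise-address ${PUBLIC_IP}

# install a pod network so the nodes can reach Ready state (Weave Net shown as one option)
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://git.io/weave-kube-1.6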



You are free to change the token used by the master and nodes to join the cluster. Since we need to tell the master the IP address on which the API will be exposed, we use DigitalOcean’s droplet metadata to retrieve the IP address dynamically. This script, which is available in master.sh, is passed to the droplet through the cloud-config configuration. This technique gives us a hands-free mechanism to set up the Kubernetes master. As the droplet gets provisioned, the script that we passed will execute. Within just a few minutes, we will have a fully configured master ready to accept nodes.

Run the following command to launch the Kubernetes master based on the 2GB droplet configuration running Ubuntu 16.04 64-bit.
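A sketch of the command; the droplet name kube-master is my choosing, while the image and size slugs are standard DigitalOcean identifiers:

doctl compute droplet create kube-master \
    --region ${REGION} \
    --image ubuntu-16-04-x64 \
    --size 2gb \
    --tag-name k8s-master \
    --ssh-keys ${SSH_KEY} \
    --user-data-file ./master.sh \
    --wait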


Notice that we are passing the variables REGION and SSH_KEY that we populated during the environment setup. The script master.sh is executed during the provisioning process. This single switch packs a lot of punch into the droplet creation by automating the installation of the Kubernetes master. Give it 7 to 10 minutes before moving to the next step. The beauty of this approach is that you never have to SSH into the droplet to confirm the installation. This is a fully automated, hands-free setup.

Since we need the public IP address of the master, run the commands below to populate a couple of environment variables.
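Along these lines, assuming the droplet was named kube-master:

MASTER_ID=$(doctl compute droplet list --format ID,Name --no-header | grep kube-master | awk '{print $1}')
MASTER_IP=$(doctl compute droplet get ${MASTER_ID} --format PublicIPv4 --no-header)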


After a few minutes, run the following command to grab the configuration file from the master. We can start using this file with the Kubernetes CLI, kubectl, to access the cluster.
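kubeadm writes the cluster credentials to /etc/kubernetes/admin.conf on the master, so an scp along these lines should work (the key path matches the pair generated earlier):

scp -i ~/.ssh/k8s-do root@${MASTER_IP}:/etc/kubernetes/admin.conf .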


If you see a “No such file or directory” error, the master is not ready yet.

It’s time to confirm that the master is successfully configured. Run the command below to see the available nodes.
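Pointing kubectl at the copied config file:

kubectl --kubeconfig ./admin.conf get nodes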


If the master shows up in the list, it is fully configured. Within a few minutes, its status changes to Ready.

With the Kubernetes master in place, let’s go ahead and configure the nodes.

Configuring the Kubernetes Nodes

As with the master, we will use a cloud-config script to configure the nodes.
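The full node.sh is in the repo; a sketch of its likely shape, under the same Kubernetes 1.6-era assumptions as master.sh:

#!/bin/bash
# node.sh (sketch) - runs as cloud-config user data on each node droplet
TOKEN=TOKEN_PLACEHOLDER
MASTER_IP=MASTER_IP_PLACEHOLDER

# install Docker and the Kubernetes packages
apt-get update && apt-get install -y docker.io apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubernetes-cni

# join the cluster using the shared token
kubeadm join --token ${TOKEN} ${MASTER_IP}:6443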



Ensure that the TOKEN environment variable is the same as the master’s. The MASTER_IP variable tells the nodes where to look for the API server; it should point to the master.

The command below will update the script with the current IP address of the master.
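A sketch, again assuming the script declares the value on a line starting with MASTER_IP=:

sed -i.bak "s/^MASTER_IP=.*/MASTER_IP=${MASTER_IP}/" node.sh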


With everything in place, let’s go ahead and launch two nodes.
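For example (node names are my choosing; note the k8s-node tag, which matters later for the load balancer):

for i in 1 2; do
  doctl compute droplet create kube-node-${i} \
      --region ${REGION} \
      --image ubuntu-16-04-x64 \
      --size 2gb \
      --tag-name k8s-node \
      --ssh-keys ${SSH_KEY} \
      --user-data-file ./node.sh
done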


The cloud-config script with the token and the IP address of the master will ensure that the nodes immediately register with the master.

After a few minutes, check the number of nodes again. Though it may show that the nodes are not ready, everything stabilizes within a few minutes.
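The same command as before:

kubectl --kubeconfig ./admin.conf get nodes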


Congratulations! You now have a full-blown Kubernetes cluster running in the cloud. It’s time for us to deploy a microservices application.

Deploying an Application

We will deploy a simple To-Do application based on the MEAN stack to our Kubernetes cluster. The following command uses a YAML file that contains the definition of pods, replication controllers, and services.
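Something along these lines; the file name todoapp.yaml is a placeholder for the YAML file in the GitHub repo:

kubectl --kubeconfig ./admin.conf create -f todoapp.yaml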


Check the available pods and services with kubectl.
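For example:

kubectl --kubeconfig ./admin.conf get pods
kubectl --kubeconfig ./admin.conf get svc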



The service named web shows that the application is available on each node at port 32360. At this point, you can grab the public IP address of any node to access the To-Do application at port 32360.

Instead of hitting a specific node, let’s create a DigitalOcean load balancer that maps port 80 to port 32360 on each node. This will make our application accessible through the load balancer’s public IP, with the requests routed to each node through a round-robin mechanism.

Configuring the Load Balancer

We can populate an environment variable with the NodePort value of the Kubernetes web service. This will come in handy when dynamically configuring the load balancer through a script. The command below shows how to get the NodePort through kubectl.
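A sketch, assuming the service is named web as shown in the output above:

NODE_PORT=$(kubectl --kubeconfig ./admin.conf get svc web -o jsonpath='{.spec.ports[0].nodePort}')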


We will now create a DigitalOcean load balancer with health checks and forwarding rules pointing to the microservices application. The forwarding rules will map the load balancer’s port 80 to the NodePort of the Kubernetes service, where the application’s frontend is available.
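A sketch using doctl’s load balancer support; the name k8s-lb and the health check parameters are my assumptions:

doctl compute load-balancer create \
    --name k8s-lb \
    --region ${REGION} \
    --tag-name k8s-node \
    --health-check protocol:http,port:${NODE_PORT},path:/,check_interval_seconds:10 \
    --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:${NODE_PORT}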


We now have a fully configured environment with a sample application and a load balancer. It’s time to access the application.

Accessing the App

Run the following commands to open the application in the default browser.
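Along these lines (open is the macOS command for launching the default browser):

LB_ID=$(doctl compute load-balancer list | grep k8s-lb | awk '{print $1}')
LB_IP=$(doctl compute load-balancer get ${LB_ID} --format IP --no-header)
open http://${LB_IP}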


The above commands get the public IP address of DigitalOcean’s load balancer and open it in the default browser on the Mac.

Accessing the microservices app from the browser via the load balancer

Adding Additional Nodes

How do you add new nodes to the cluster? It’s very simple — launch new droplets with the same configuration and parameters as the original nodes.
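For example, a third node, with the same image, size, tag, and user data as before:

doctl compute droplet create kube-node-3 \
    --region ${REGION} \
    --image ubuntu-16-04-x64 \
    --size 2gb \
    --tag-name k8s-node \
    --ssh-keys ${SSH_KEY} \
    --user-data-file ./node.sh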


Because the node.sh script has everything it takes to add new nodes to the cluster, this works like a charm.

Here is another unique capability that we leveraged from DigitalOcean — Droplet Tags. Any new node launched with the tag k8s-node will be automatically discovered by the load balancer. That means when you scale the replication controller and the pods get scheduled on the new nodes, they instantly become available to users. This is because the load balancer will route traffic to any node that responds positively to the health check, including recently added ones. This simple trick ensures that we are able to dynamically scale the nodes out and in.

Droplets with the tag k8s-node represent the Kubernetes nodes

The load balancer dynamically discovers any droplet with the tag k8s-node

Tear Down

Once you are done, you can run the following commands to tear down the environment without leaving any traces.
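A sketch that removes everything created above (the identifiers assume the names used throughout):

# delete the load balancer
doctl compute load-balancer delete ${LB_ID} --force

# delete the master and all the nodes
for id in $(doctl compute droplet list --format ID,Name --no-header | grep kube- | awk '{print $1}'); do
  doctl compute droplet delete ${id} --force
done

# remove the imported SSH key
doctl compute ssh-key delete ${SSH_KEY} --force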





The objective of this tutorial was to show you how easy it is to configure a Kubernetes dev/test environment in DigitalOcean. The scripts are available on GitHub for you to try. With a few additional configuration changes, it can be easily adapted for production deployment.

Disclaimer: The setup process described in the article is not suitable for production. It is only meant to explain the workflow involved in using the kubeadm tool. Please do not replicate the steps as is for configuring a production cluster.

We would like to thank Joe Beda for pointing out the security flaws in the setup and helping us fix them. The Python-based token generator used in the script is borrowed from his POC for deploying Kubernetes on GCE using Terraform.
