Kubernetes Zone

Hands-On Workshop


Kubernetes Zone - April 17, 2017

bit.ly/k8s-zone

presented by…

@ryanj, Open Source Activist at CoreOS

&

Elsie Phillips, Community Lead at CoreOS

![CoreOS Logo](http://i.imgur.com/DRm4KEq.png "") Helping *Secure the Internet* by keeping your Container Linux hosts secure, up-to-date, and ready for the challenges of a modern world

Introduction

Workshop Overview

  1. Introduction
  2. Kubernetes Basics
  3. Kubernetes Architecture
  4. Kubernetes Extensibility
  5. Wrap-up

Intro Survey / Who are you?

  1. doing anything with containers today?
  2. have you tried Container Linux?
  3. do you have any experience using Kubernetes?
  4. do you consider yourself to be proficient with the kubectl cli tool?
  5. can you name five basic primitives or resource types?
  6. can you name five pieces of k8s architecture?
  7. can you confidently define the term "K8s operator"?
  8. do you have any hands-on experience using operators?
## Workshop Setup

Bring a laptop with the following installed:

  1. [kubectl](#/kubectl)
  2. [minikube](#/minikube)
  3. [docker](#/docker)
  4. [Optional tooling for advanced users](#/go)

Or, [use GKE for a managed Kubernetes environment](http://cloud.google.com): [http://cloud.google.com](http://cloud.google.com)

install kubectl

linux amd64:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

osx amd64:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

To verify kubectl availability:

kubectl version

official kubectl setup notes

install minikube

linux/amd64:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

osx:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

to verify minikube availability:

minikube start

official minikube setup notes

minikube troubleshooting

If your minikube environment does not boot correctly:

  1. Minikube requires an OS virtualization back-end
  2. Most OSes include some support for virtualization
  3. You can use the --vm-driver flag to select a specific virt provider

minikube start --vm-driver=virtualbox

Check the project README for more information about supported virtualization options

ADVANCED CHALLENGE OPTION

rkt-powered minikube (optional)

To start minikube with rkt enabled, try:

minikube start --network-plugin=cni --container-runtime=rkt

to verify:

minikube ssh
docker ps # expect no containers here
rkt list  # list running containers

install the docker cli

Download and install binary from "the docker store"

Or, use a package manager to install:

brew install docker

To verify docker availability:

docker version

To reference minikube's docker daemon from your host, run:

eval $(minikube docker-env)

ADVANCED CHALLENGE OPTION

install go (optional)

Download and install binary from golang.org

Or, use a package manager to install:

brew install go
export GOPATH=$HOME/src/go
export GOROOT=/usr/local/opt/go/libexec
export PATH=$PATH:$GOPATH/bin
export PATH=$PATH:$GOROOT/bin

To verify go availability:

go version
# *Ready?*
# Kubernetes Basics

Why Kubernetes?

Kubernetes is...

  1. The best way to manage distributed solutions at scale, built on years of industry expertise (Google-scale experience)
  2. A shared, open source foundation for container-driven distributed solution delivery, featuring a modular, HA architecture
  3. An extensible modeling language with a huge community following
## An API*

API object primitives include the following attributes:

  1. kind
  2. apiVersion
  3. metadata
  4. spec
  5. status

*mostly true
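A minimal pod manifest sketches four of those attributes; `status` is reported by the system at runtime rather than written by the author:

```yaml
kind: Pod                          # what type of object this is
apiVersion: v1                     # which API schema version it uses
metadata:
  name: example                    # identifying data: name, labels, annotations
spec:                              # desired state, declared by you
  containers:
  - name: example
    image: quay.io/ryanj/metrics-k8s
# status: filled in by the system as the object changes state
```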
### Basic K8s Terminology

  1. [node](#/node)
  2. [pod](#/po)
  3. [service](#/svc)
  4. [deployment](#/deploy)
  5. [replicaSet](#/rs)
### Node

A node is a host machine (physical or virtual) where containerized processes run. Node activity is managed via one or more Master instances.

Try using kubectl to list resources by type:

kubectl get nodes

Request the same info, but output the results as structured yaml:

kubectl get nodes -o yaml

Fetch an individual resource by type/id, output as json:

kubectl get node/minikube -o json

View human-readable API output:

kubectl describe node/minikube
### Observations:

  * Designed to exist on multiple machines (distributed system)
  * High availability of nodes
  * Platform scale-out
  * The API supports both JSON and YAML output
### Pod

A group of one or more co-located containers. Pods represent your minimum increment of scale.

> "Pods Scale together, and they Fail together" @theSteve0

List resources by type:

kubectl get pods

Create a new resource based on a json object specification:

curl https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json

List resources by type:

kubectl get pods

Fetch a resource by type and id, output the results as yaml:

kubectl get pod metrics-k8s -o yaml

Notice any changes?

### Observations:

  * Pods are scheduled to run on nodes
  * Asynchronous fulfillment of requests
  * Declarative specifications
  * Automatic health checks and lifecycle management for containers (processes)
### Service

Services (svc) establish a single endpoint for a collection of replicated pods, distributing inbound traffic based on label selectors.

In our K8s modeling language, they represent a load balancer. Their implementation often varies per cloud provider.
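A sketch of what the service created by `kubectl expose` below might look like as YAML; the label key and value here are assumptions for illustration, not the actual labels on the metrics-k8s pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-k8s
spec:
  type: NodePort
  selector:
    app: metrics-k8s     # traffic is routed to pods whose labels match
  ports:
  - port: 2015
    targetPort: 2015
```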

Contacting your App

Expose the pod by creating a new service (or "loadbalancer"):

kubectl expose pod/metrics-k8s --port 2015 --type=NodePort

Contact your newly-exposed pod using the associated service id:

minikube service metrics-k8s

Schedule a pod to be deleted:

kubectl delete pod metrics-k8s

Contact the related service. What happens?:

minikube service metrics-k8s

Delete the service:

kubectl delete service metrics-k8s
### Observations:

  * *"service"* basically means *"loadbalancer"*
  * Pods and Services exist independently and have disjoint lifecycles
### Deployment

A `deployment` helps you specify container runtime requirements (in terms of pods)

Create a specification for your deployment:

kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
--expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }' \
--dry-run -o yaml > deployment.yaml

View the generated deployment spec file:

cat deployment.yaml

Bug!: Edit the file, adding "---" (on its own line) between resource 1 and resource 2 as a workaround.

Can you think of another way to fix this issue? Would JSON be compatible?

Create a new resource based on your yaml specification:

kubectl create -f deployment.yaml

List resources by type:

kubectl get po,svc

Connect to your new deployment via the associated service id:

minikube service metrics-k8s

Replication

Scale up the metrics-k8s deployment to 3 replicas:

kubectl scale deploy/metrics-k8s --replicas=3

List pods:

kubectl get po

Edit deploy/metrics-k8s, setting spec.replicas to 5:

kubectl edit deploy/metrics-k8s -o json

Save and quit. What happens?

kubectl get pods

AutoRecovery

Watch for changes to pod resources:

kubectl get pods --watch

In another terminal, delete several pods by id:

kubectl delete pod $(kubectl get pods | grep ^metrics-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')

What happened? How many pods remain?

kubectl get pods
### Observations:

  * Use the `--dry-run` flag to generate new resource specifications
  * A deployment spec contains a pod spec
### ReplicaSet

A `replicaset` provides replication and lifecycle management for a specific image release

Watch deployments (leave this running until the 'cleanup' section):

kubectl get deploy --watch

View the current state of your deployment:

minikube service metrics-k8s

Rollouts

Update your deployment's image spec to rollout a new release:

kubectl set image deploy/metrics-k8s metrics-k8s=quay.io/ryanj/metrics-k8s:v1

Reload your browser to view the state of your deployment

kubectl get rs,deploy

Rollbacks

View the list of previous rollouts:

kubectl rollout history deploy/metrics-k8s

Rollback to the previous state:

kubectl rollout undo deployment metrics-k8s

Reload your browser to view the state of your deployment

Cleanup

Cleanup old resources if you don't plan to use them:

kubectl delete service,deployment metrics-k8s

Close any remaining --watch listeners

### Observations:

  * The API allows for watch operations (in addition to get, set, list)
  * ReplicaSets provide lifecycle management for pod resources
  * Deployments create ReplicaSets to manage pod replication per rollout (per change in podspec: image:tag, environment vars)
# Kubernetes Architecture
## etcd

![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)

  * Distributed key-value store
  * Implements the Raft consensus protocol
### CAP theorem

  1. Consistency
  2. Availability
  3. Partition tolerance

[etcd favors consistency and partition tolerance ("CP")](https://coreos.com/etcd/docs/latest/learning/api_guarantees.html)
## Degraded Performance

Fault tolerance sizing chart:

![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
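The chart follows directly from Raft's majority rule: a cluster of N members needs a quorum of floor(N/2)+1, so it can tolerate floor((N-1)/2) failures. A quick sketch of the arithmetic:

```python
def quorum(n):
    """Majority of members needed for a Raft cluster of n to make progress."""
    return n // 2 + 1

def fault_tolerance(n):
    """Members that can fail while the cluster still has quorum."""
    return n - quorum(n)

# even cluster sizes add a member without adding fault tolerance,
# which is why odd sizes (3, 5, 7) are recommended
for n in (1, 3, 5, 7):
    print(f"{n} members: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```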
### play.etcd.io

[play.etcd.io/play](http://play.etcd.io/play)
## Kubernetes API

  * Gatekeeper for etcd (the only way to access the db)
  * Not required for pod uptime
### API outage simulation

Example borrowed from [Brandon Philips' "Fire Drills" from OSCON 2016](https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills)

Create a pod and a service (repeat our deployment drill). Verify that the service is responding.

ssh into minikube, then kill the control plane:

```
minikube ssh
ps aux | grep "localkube"
sudo killall localkube
logout
```

Use kubectl to list pods:

```
kubectl get pods
The connection to the server mycluster.example.com was refused - did you specify the right host or port?
```

The API server is down! Reload your service. Are your pods still available?
## Kubelet

Runs on each node, listens to the API for new items with a matching `NodeName`
## Kubernetes Scheduler

Assigns workloads to Node machines
## Bypass the Scheduler

Create two pods:

```
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
kubectl create -f https://gist.githubusercontent.com/ryanj/893e0ac5b3887674f883858299cb8b93/raw/0cf16fd5b1c4d2bb1fed115165807ce41a3b7e20/pod-scheduled.json
```

View events:

```
kubectl get events
```

Did both pods get scheduled? Did both run?
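Setting `spec.nodeName` in a pod spec pre-assigns the pod to a node, so the scheduler never sees it and the matching Kubelet picks it up directly. A minimal sketch (the pod name is made up, and the node name assumes a default minikube setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: manually-scheduled
spec:
  nodeName: minikube     # pre-assigned node; bypasses the scheduler
  containers:
  - name: metrics-k8s
    image: quay.io/ryanj/metrics-k8s
```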
## Kube DNS
## Kube Proxy
## CNI

  * flannel
  * canal
## CRI

  * containerd
  * rkt
  * oci

[https://coreos.com/blog/rkt-accepted-into-the-cncf.html](https://coreos.com/blog/rkt-accepted-into-the-cncf.html)
### K8s Controllers

Controllers work to regulate the declarative nature of the platform state, reconciling imbalances via a basic control loop

https://kubernetes.io/docs/admin/kube-controller-manager/

Kubernetes allows you to introduce your own custom controllers!
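The control-loop idea can be sketched in a few lines (a toy illustration, not the real controller-manager): observe current state, compare it to the declared desired state, and emit corrective actions until the two converge.

```python
def reconcile(desired_replicas, observed_pods):
    """One pass of a toy control loop: return the actions needed to
    move observed state toward the declared desired state."""
    diff = desired_replicas - len(observed_pods)
    if diff > 0:
        return [("create-pod", i) for i in range(diff)]
    if diff < 0:
        return [("delete-pod", pod) for pod in observed_pods[:-diff]]
    return []  # converged: nothing to do

# a controller would run this on every relevant state-change event
print(reconcile(3, ["pod-a"]))
```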
### Architecture Diagram

![arch diagram](https://cdn.thenewstack.io/media/2016/08/Kubernetes-Architecture-1024x637.png)
### Interaction Diagram

![interaction diagram](https://i1.wp.com/blog.docker.com/wp-content/uploads/swarm_kubernetes2.png?resize=1024)

[(copied from blog.docker.com)](https://blog.docker.com/2016/03/swarmweek-docker-swarm-exceeds-kubernetes-scale/)
# Kubernetes Extensibility

What is an SRE?

"how Google runs production systems"

  1. Google's SRE book - free to read online
  2. SRE blog post series on Medium

What are Operators?

Kube Operators establish a pattern for introducing higher-order interfaces that represent the logical domain expertise (and perhaps the ideal product output) of a Kubernetes SRE

blog post: "Introducing Operators"

### Third Party Resources (TPRs)

TPRs allow you to establish new k8s primitives, extending the capabilities of the platform by allowing you to add your own terminology to the modeling language

https://kubernetes.io/docs/user-guide/thirdpartyresources/
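For illustration, a TPR definition of this era looks roughly like the sketch below (the `cron-tab` resource and `stable.example.com` group are the hypothetical names used in the docs linked above; details may differ by Kubernetes version). The metadata name encodes both the new kind and its API group:

```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com   # new kind "CronTab" in group "stable.example.com"
description: "A specification of a Cron Job"
versions:
- name: v1
```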
### Best Practices for Writing Operators

https://coreos.com/blog/introducing-operators.html#how-can-you-create-an-operator
## Operator Examples
### etcd

blog post: https://coreos.com/blog/introducing-the-etcd-operator.html

sources: https://github.com/coreos/etcd-operator

demo video: https://www.youtube.com/watch?v=n4GYyo1V3wY
### Prometheus

blog post: https://coreos.com/blog/the-prometheus-operator.html

sources: https://github.com/coreos/prometheus-operator

demo video: https://www.youtube.com/watch?v=GYSKEd9FePk
### Kube Cert Manager

https://github.com/kelseyhightower/kube-cert-manager
### Rook (Storage)

https://rook.io/
### Elasticsearch

https://github.com/upmc-enterprises/elasticsearch-operator
### PostgreSQL

Postgres Operator from CrunchyData: https://github.com/CrunchyData/postgres-operator
### Tectonic

Tectonic uses operators to manage "self-hosted" Kubernetes

[k8s cluster upgrades made easy](https://twitter.com/ryanj/status/846866079792062464)
## Operator Challenges
### Basic Challenge

  1. Try the etcd operator
  2. Identify new primitives and interfaces
  3. Create a new etcd cluster
  4. Test autorecovery, leader election
  5. Clean up

Use an Operator

Try installing the etcd operator

kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml

Observations?

List TPRs to see if any new primitives have become available

kubectl get thirdpartyresources

Run etcd

Use the new TPR endpoint to create an etcd cluster

kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/example-etcd-cluster.yaml

Test Autorecovery, Leader Election

  1. use kubectl to delete etcd members (pods)
    kubectl get pods
    kubectl delete pod pod-id-1 pod-id-2
  2. list pods to see if the cluster was able to recover automatically
    kubectl get pods
  3. experiment with other SRE-focused features provided by this operator

Clean Up

Clean up your work, remove the DB cluster and the new API primitives (TPR endpoints)

kubectl delete -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml
kubectl delete endpoints etcd-operator
### Advanced Challenge

  1. Check out and run [Eric's custom rollback-controller code](https://github.com/coreos/rollback-controller#example)
  2. [Make a small change and test your work](https://github.com/coreos/rollback-controller#exercises)
  3. Consider how a TPR might be used to expose similar functionality, extending the basic collection of primitives
  4. Share your results with the CoreOS Community (email us at community at coreos.com)
## Wrap Up
### follow-up topics and links

  1. [Brandon Philips' TPR list](https://gist.github.com/philips/a97a143546c87b86b870a82a753db14c)
  2. [Eric's "custom go controllers" presentation](https://github.com/ericchiang/go-1.8-release-party)
  3. [Eric's rollback controller example](https://github.com/ericchiang/kube-rollback-controller)
  4. [Josh's Operator talk from FOSDEM](https://docs.google.com/presentation/d/1MV029sDifRV2c33JW_83k1tjWDczCfVkFpKvIWuxT6E/edit#slide=id.g1c65fcd8a9_0_54)
  5. [Video of Josh's talk from KubeCon EU](https://www.youtube.com/watch?v=cj5uk1uje_Y)
  6. [etcd autorecovery demo from Brandon](https://www.youtube.com/watch?v=9sD3mYCPSjc)
  7. [Brandon Philips' "Admin Fire Drills" from OSCON 2016](https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills)
  8. [helm support added to quay.io](https://coreos.com/blog/quay-application-registry-for-kubernetes.html)
  9. [Sign up to receive the CoreOS Community Newsletter](http://coreos.com/newsletter)

Exit Interview

  1. can you name five Kubernetes primitives?
  2. do you consider yourself to be proficient with kubernetes and the kubectl cli tool?
  3. did this workshop provide enough hands-on experience with Kubernetes?
  4. can you name five architectural components?
  5. are you confident in your explanation of what a Kubernetes operator is?
  6. do you feel like you know what it takes to build an operator, and where to look for follow-up info?
  7. are you ready to sign up to demo your new Kube operator at next month's meetup?
### CoreOS Training

Want to learn more? Check out the lineup of pro training courses from CoreOS!

[coreos.com/training](http://coreos.com/training)
### CoreOS Fest

Tickets are on sale now! [coreos.com/fest](http://coreos.com/fest)
### Tectonic Free Tier

Try CoreOS Tectonic today: [coreos.com/tectonic](http://coreos.com/tectonic)

Your first ten Enterprise-grade Kubernetes nodes are free!
### CoreOS is hiring!

Join us in our mission to *Secure the Internet!* [coreos.com/careers](https://coreos.com/careers)

Thank You!

for joining us at the

Kubernetes Zone Workshop

in Austin, TX


bit.ly/k8s-zone
Runs on Kubernetes

Presented by: @ryanj