Hands-on Training:

OpenShift WorkShop

at

bit.ly/workstack

Sunday, May 20, 9:00am-4:00pm
Vancouver Convention Centre West - Level Two - Room 213-214


brought to you by [![Red Hat logo](https://i.imgur.com/ArZFG3e.png "")](https://redhat.com)

Introduction

Intro Survey / Who are you?

  1. How many times have you attended OpenStack Summit?
  2. Do you have any experience using containers?
  3. Have you completed all of the laptop setup tasks?
  4. Do you have any experience using Kubernetes?
  5. Do you consider yourself to be proficient with the `oc` or `kubectl` CLI tools?
  6. Can you name five basic primitives or resource types?
  7. Can you name five pieces of K8s architecture?
  8. Do you have a plan for iterative web development using containers?

Workshop Agenda

## OpenStack ❤ Kubernetes ❤ OpenShift

Similar goals:

  • Distributed Systems management vs. Distributed Solutions management

Different Approaches:

  • OpenStack: Expose cluster resources (infrastructure) via API endpoints

  • Kubernetes: API endpoints are used to model operational requirements (declaratively). Cluster resources are obscured, presented in aggregate, and are managed indirectly via automation

Choose the right tool for the task

IaaS (OpenStack) → distributed hardware

CaaS (Kubernetes) → distributed OS kernel

PaaS (OpenShift) → distributed OS distro

Kubernetes Concepts and Theory

Kubernetes is designed ...

  1. for managing distributed solutions at scale, based on years of industry expertise (Google-scale experience)
  2. for high availability of the control plane and user workloads (when using pod replication), avoiding most single points of failure
  3. with a modular control plane architecture, allowing many pieces to be replaced without disrupting workload availability
  4. to persist all of its internal platform state within an etcd database
## etcd

![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)

  • distributed key-value store
  • implements the RAFT consensus protocol
### Play with etcd

[play.etcd.io/play](http://play.etcd.io/play)
## Degraded Performance

Fault tolerance sizing chart:

![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
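The chart's numbers follow from RAFT's majority rule: a cluster of `n` members stays writable only while a majority (`n/2 + 1`, rounded down) survives. A quick sketch of the arithmetic:

```shell
# Majority (quorum) size and failure tolerance for common odd cluster sizes.
for n in 1 3 5 7; do
  majority=$(( n / 2 + 1 ))
  tolerance=$(( n - majority ))
  echo "size=$n majority=$majority tolerance=$tolerance"
done
```

Note that even cluster sizes add no tolerance over the next-smaller odd size, which is why odd member counts are recommended.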
### CAP theorem

  1. Consistency
  2. Availability
  3. Partition tolerance

[etcd is "CA"](https://coreos.com/etcd/docs/latest/learning/api_guarantees.html)
## Kubernetes API

  • gatekeeper for etcd (the only way to access the db)
  • not required for pod uptime
### API outage simulations

Example borrowed from [Brandon Philips' "Fire Drills" from OSCON 2016](https://github.com/philips/2016-OSCON-containers-at-scale-with-Kubernetes#fire-drills)
Create a pod and a service. Verify that the service is responding:

```
kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
  --expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }'
```

```
minikube service metrics-k8s
```

ssh into minikube and kill the control plane:

```
minikube ssh
ps aux | grep "localkube"
sudo killall localkube
logout
```

Use kubectl to list pods:

```
kubectl get pods
The connection to the server mycluster.example.com was refused - did you specify the right host or port?
```

The API server is down! Reload your service. Are your pods still available?
## Kubelet

Runs on each node, listens to the API for new items with a matching `NodeName`
## Kubernetes Scheduler

Assigns workloads to Node machines
## Bypass the Scheduler

Create two pods:

```
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
kubectl create -f https://gist.githubusercontent.com/ryanj/893e0ac5b3887674f883858299cb8b93/raw/0cf16fd5b1c4d2bb1fed115165807ce41a3b7e20/pod-scheduled.json
```

View events:

```
kubectl get events
```

Did both pods get scheduled? Did both run?
## Kube DNS
## Kube Proxy
## CNI

  • flannel
  • canal
## CRI

  • containerd (docker)
  • cri-o
  • rkt

Each is compatible with the OCI image spec and runtime spec.
### K8s Controllers

Controllers work to regulate the declarative nature of the platform state, reconciling imbalances via a basic control loop.

https://kubernetes.io/docs/admin/kube-controller-manager/

Kubernetes allows you to introduce your own custom controllers!
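The essence of that control loop can be illustrated with a toy sketch (illustration only, not real controller-manager code): observe current state, compare it to desired state, and act until they converge.

```shell
# Toy reconcile loop: converge observed state toward desired state.
desired=3     # what the spec asks for
observed=0    # what the cluster currently has
while [ "$observed" -ne "$desired" ]; do
  observed=$(( observed + 1 ))   # stand-in for "create one pod"
done
echo "reconciled: $observed/$desired replicas"
```

Real controllers run this loop continuously, so drift in either direction (pods dying, or the spec changing) is corrected the same way.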
### Architecture Diagram

![arch diagram](https://cdn.thenewstack.io/media/2016/08/Kubernetes-Architecture-1024x637.png)
### Interaction Diagram

![interaction diagram](https://i1.wp.com/blog.docker.com/wp-content/uploads/swarm_kubernetes2.png?resize=1024) [(copied from blog.docker.com)](https://blog.docker.com/2016/03/swarmweek-docker-swarm-exceeds-kubernetes-scale/)
Kubernetes provides…

# An API

API object primitives include the following attributes:

```
kind
apiVersion
metadata
spec
status
```

*mostly true
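The skeleton below (values illustrative) shows how those attributes appear in a manifest. The asterisk's caveat: `status` is written by the cluster rather than the author, and a few types (ConfigMap, for example) omit `spec`/`status` entirely.

```yaml
kind: Pod            # what type of object this describes
apiVersion: v1       # which API version defines that type
metadata:            # identity: name, namespace, labels, annotations
  name: example
spec: {}             # desired state, written by you
status: {}           # observed state, written by the cluster
```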
### Basic K8s Terminology

  1. [node](#/node)
  2. [pod](#/po)
  3. [service](#/svc)
  4. [deployment](#/deploy)
  5. [replicaSet](#/rs)
### Node

A node is a host machine (physical or virtual) where containerized processes run. Node activity is managed via one or more Master instances.

Try using kubectl to list resources by type:

kubectl get nodes

Request the same info, but output the results as structured yaml:

kubectl get nodes -o yaml

Fetch an individual resource by type/id, output as json:

kubectl get node/minikube -o json

View human-readable API output:

kubectl describe node/minikube
### Observations:

  • Designed to exist on multiple machines (distributed system)
  • high availability of nodes
  • platform scale out
  • The API ambidextrously supports both JSON and YAML
### Pod

A group of one or more co-located containers. Pods represent your minimum increment of scale.

> "Pods Scale together, and they Fail together" @theSteve0
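A minimal single-container pod manifest looks roughly like this (a sketch; the actual pod.json fetched below may differ in its labels and fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-k8s
  labels:
    app: metrics-k8s    # labels let services and controllers find this pod
spec:
  containers:           # one or more co-located containers
  - name: metrics-k8s
    image: quay.io/ryanj/metrics-k8s
    ports:
    - containerPort: 2015
```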

List resources by type:

kubectl get pods

Create a new resource based on a json object specification:

curl https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json

List resources by type:

kubectl get pods

Fetch a resource by type and id, output the results as yaml:

kubectl get pod metrics-k8s -o yaml

Notice any changes?

### Observations:

  • pods are scheduled to be run on nodes
  • asynchronous fulfillment of requests
  • declarative specifications
  • automatic health checks, lifecycle management for containers (processes)
### Service

Services (svc) establish a single endpoint for a collection of replicated pods, distributing inbound traffic based on label selectors. In our K8s modeling language they represent a load balancer; their implementation often varies per cloud provider.
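A NodePort service manifest might look like this sketch (the label values are assumptions; the `selector` is what ties the service to its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-k8s
spec:
  type: NodePort        # also expose on a port of each node
  selector:
    app: metrics-k8s    # route traffic to pods whose labels match
  ports:
  - port: 2015
```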

Contacting your App

Expose the pod by creating a new service (or "loadbalancer"):

kubectl expose pod/metrics-k8s --port 2015 --type=NodePort

Contact your newly-exposed pod using the associated service id:

minikube service metrics-k8s

Schedule a pod to be deleted:

kubectl delete pod metrics-k8s

Contact the related service. What happens?:

minikube service metrics-k8s

Delete the service:

kubectl delete service metrics-k8s
### Observations:

  • *"service"* basically means *"loadbalancer"*
  • Pods and Services exist independently, have disjoint lifecycles
### Deployment

A `deployment` helps you specify container runtime requirements (in terms of pods)
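A deployment manifest nests a full pod spec under `template`, as in this sketch (the `apiVersion` and labels are assumptions; the deployment.yaml you generate below may differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-k8s
spec:
  replicas: 1               # how many pod copies to maintain
  selector:
    matchLabels:
      app: metrics-k8s
  template:                 # a pod spec, nested inside the deployment spec
    metadata:
      labels:
        app: metrics-k8s
    spec:
      containers:
      - name: metrics-k8s
        image: quay.io/ryanj/metrics-k8s
        ports:
        - containerPort: 2015
```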

Create a specification for your deployment:

kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
--expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }' \
--dry-run -o yaml > deployment.yaml

View the generated deployment spec file:

cat deployment.yaml

Create a new resource based on your yaml specification:

kubectl create -f deployment.yaml

List resources by type:

kubectl get po,svc

Connect to your new deployment via the associated service id:

minikube service metrics-k8s

Replication

Scale up the metrics-k8s deployment to 3 replicas:

kubectl scale deploy/metrics-k8s --replicas=3

List pods:

kubectl get po

Edit deploy/metrics-k8s, setting spec.replicas to 5:

kubectl edit deploy/metrics-k8s -o json

Save and quit. What happens?

kubectl get pods

AutoRecovery

Watch for changes to pod resources:

kubectl get pods --watch

In another terminal, delete several pods by id:

kubectl delete pod $(kubectl get pods | grep ^metrics-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')

What happened? How many pods remain?

kubectl get pods
### Observations:

  • Use the `--dry-run` flag to generate new resource specifications
  • A deployment spec contains a pod spec
### ReplicaSet

To learn about ReplicaSets through command-line examples, finish this scenario at: bit.ly/k8s-kubectl#/rs

Hands-On with OpenShift

Ready?

Let's Go!


Today's hands-on workshop is available at:

http://content-workshop.apps.openstack.openshiftworkshop.com

Lunch

OpenShift on OpenStack

OpenShift Commons Presentation:

OpenShift on OpenStack

by Ramon Rodriguez of Red Hat

youtu.be/1DV0gk0V9iI

OpenStack Conference Session

OpenShift on OpenStack and Bare Metal

by Ramon Rodriguez of Red Hat

www.openstack.org/summit/vancouver-2018/summit-schedule/events/21818/openshift-on-openstack-and-bare-metal

Open Service Broker API

bit.ly/k8s-catalog

Extending Kubernetes

### More Ways to Extend the Platform

  • [Custom Resource Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/)
  • [custom controllers](https://github.com/kubernetes/sample-controller)
  • CRDs+Controllers ↦ [Operators](https://coreos.com/blog/introducing-operator-framework)
  • https://github.com/spotahome/redis-operator
  • https://github.com/jw-s/redis-operator

Wrap Up

# Q&A
## Resources
### Kubernetes SIGs

[Kubernetes Special Interest Groups (SIGs)](https://github.com/kubernetes/community/blob/master/sig-list.md)


### More Ways to Try OpenShift

  • [OpenShift Learning Portal](http://learn.openshift.com)
  • [OpenShift Origin](https://github.com/openshift/origin) (and [minishift](https://github.com/minishift/minishift))
  • [OpenShift Online (Starter and Pro plans available)](https://www.openshift.com/products/online/)
  • [OpenShift Dedicated (operated on AWS, GCE, and Azure)](https://www.openshift.com/products/dedicated/)
  • [OpenShift Container Platform (supported on RHEL, CoreOS)](https://www.openshift.com/products/container-platform/)

Thank You!

@RyanJ

bit.ly/workstack

Runs on Kubernetes

Presented by: @ryanj