presented by…
@ryanj, Open Source Activist at CoreOS
Elsie Phillips, Community Lead at CoreOS
kubectl
Install the kubectl CLI tool.
linux amd64:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
osx amd64:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
To verify kubectl availability:
kubectl version
minikube
linux amd64:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
osx:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
To verify minikube availability:
minikube start
If your minikube environment does not boot correctly, use the --vm-driver flag to select a specific virtualization provider:
minikube start --vm-driver=virtualbox
Check the project README for more information about supported virtualization options.
To start minikube with rkt enabled, try:
minikube start --network-plugin=cni --container-runtime=rkt
To verify:
minikube ssh
docker ps # expect no containers here
rkt list # list running containers
docker
Download and install the binary from the Docker Store.
Or, use a package manager to install:
brew install docker
To verify docker availability:
docker version
To reference minikube's docker daemon from your host, run:
eval $(minikube docker-env)
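One common use for this: building images straight into minikube's Docker daemon, so pods can run them without a push to an external registry. A sketch, where the image tag `my-image:dev` is hypothetical:

```shell
# Point this shell's docker CLI at minikube's daemon
eval $(minikube docker-env)

# Build an image directly into the cluster's local image store
# ("my-image:dev" is a hypothetical tag for illustration)
docker build -t my-image:dev .

# Point the docker CLI back at the host daemon when finished
eval $(minikube docker-env --unset)
```

Note that `docker-env` only affects the current shell session; other terminals keep talking to the host daemon.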
go
Download and install the binary from golang.org.
Or, use a package manager to install:
brew install go
export GOPATH=$HOME/src/go
export GOROOT=/usr/local/opt/go/libexec
export PATH=$PATH:$GOPATH/bin
export PATH=$PATH:$GOROOT/bin
To verify go availability:
go version
Try using kubectl to list resources by type:
kubectl get nodes
Request the same info, but output the results as structured yaml:
kubectl get nodes -o yaml
Fetch an individual resource by type/id, output as json:
kubectl get node/minikube -o json
View human-readable API output:
kubectl describe node/minikube
List resources by type:
kubectl get pods
Create a new resource based on a json object specification. First, inspect the spec:
curl https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
Then, create it:
kubectl create -f https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json
List resources by type:
kubectl get pods
Fetch a resource by type and id, output the results as yaml:
kubectl get pod metrics-k8s -o yaml
Notice any changes?
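One way to spot the changes is to diff the original spec against the live object; the API server populates fields such as `status`, `uid`, `resourceVersion`, and the scheduled node. A sketch, assuming bash for process substitution:

```shell
# Server-populated fields (status, uid, resourceVersion, nodeName, ...)
# appear only on the right-hand side of the diff
diff <(curl -s https://raw.githubusercontent.com/ryanj/metrics-k8s/master/pod.json) \
     <(kubectl get pod metrics-k8s -o json)
```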
Expose the pod by creating a new service (or "loadbalancer"):
kubectl expose pod/metrics-k8s --port 2015 --type=NodePort
Contact your newly-exposed pod using the associated service id:
minikube service metrics-k8s
Schedule a pod to be deleted:
kubectl delete pod metrics-k8s
Contact the related service. What happens?
minikube service metrics-k8s
Delete the service:
kubectl delete service metrics-k8s
Create a specification for your deployment:
kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
--expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }' \
--dry-run -o yaml > deployment.yaml
View the generated deployment spec file:
cat deployment.yaml
Bug! Edit the file, adding "---" (on its own line) between resource 1 and resource 2 as a workaround.
Can you think of another way to fix this issue? Would JSON output be compatible?
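One possible answer: JSON has no multi-document stream, so when asked for JSON output kubectl wraps the generated resources in a single `v1` `List` object and no separator is needed. A sketch, reusing the same flags as the earlier `kubectl run` command:

```shell
# Generate the same Deployment + Service pair as one JSON List document
kubectl run metrics-k8s --image=quay.io/ryanj/metrics-k8s \
  --expose --port=2015 --service-overrides='{ "spec": { "type": "NodePort" } }' \
  --dry-run -o json > deployment.json

# A single valid JSON document, so no "---" workaround is required
kubectl create -f deployment.json
```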
Create a new resource based on your yaml specification:
kubectl create -f deployment.yaml
List resources by type:
kubectl get po,svc
Connect to your new deployment via the associated service id:
minikube service metrics-k8s
Scale up the metrics-k8s deployment to 3 replicas:
kubectl scale deploy/metrics-k8s --replicas=3
List pods:
kubectl get po
Edit deploy/metrics-k8s, setting spec.replicas to 5:
kubectl edit deploy/metrics-k8s -o json
Save and quit. What happens?
kubectl get pods
Watch for changes to pod resources:
kubectl get pods --watch
In another terminal, delete several pods by id:
kubectl delete pod $(kubectl get pods | grep ^metrics-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')
What happened? How many pods remain?
kubectl get pods
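The grep/cut/tr pipeline above is brittle, since it parses whitespace-aligned columns. With `-o name`, kubectl prints one `pod/<id>` token per line, ready to feed back into `kubectl delete`. A sketch, assuming the `run=metrics-k8s` label that `kubectl run` applies to its pods:

```shell
# "-o name" emits type/name tokens, so no field parsing is needed
kubectl delete $(kubectl get pods -o name -l run=metrics-k8s | head -n 3)
```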
Watch deployments (leave this running until the 'cleanup' section):
kubectl get deploy --watch
View the current state of your deployment:
minikube service metrics-k8s
Update your deployment's image spec to rollout a new release:
kubectl set image deploy/metrics-k8s metrics-k8s=quay.io/ryanj/metrics-k8s:v1
Reload your browser to view the state of your deployment, then list the replicasets and deployments:
kubectl get rs,deploy
View the list of previous rollouts:
kubectl rollout history deploy/metrics-k8s
Rollback to the previous state:
kubectl rollout undo deployment metrics-k8s
Reload your browser to view the state of your deployment
Cleanup old resources if you don't plan to use them:
kubectl delete service,deployment metrics-k8s
Close any remaining --watch listeners.
"how Google runs production systems"
Kubernetes Operators establish a pattern for introducing higher-order interfaces that represent the logical domain expertise (and perhaps the ideal product output) of a Kubernetes SRE.
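For example, the etcd operator lets you declare an entire etcd cluster as one higher-order resource. An illustrative sketch; the field names may not match the operator's actual schema:

```yaml
# Hypothetical custom resource: the operator watches objects of this kind
# and continuously drives the running cluster toward the declared size
apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 3
```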
Try installing the etcd operator
kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml
List TPRs to see if any new primitives have become available:
kubectl get thirdpartyresources
Use the new TPR endpoint to create an etcd cluster:
kubectl create -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/example-etcd-cluster.yaml
Test the operator's self-healing by deleting etcd pods and watching replacements appear:
kubectl get pods
kubectl delete pod pod-id-1 pod-id-2
kubectl get pods
Clean up your work: remove the DB cluster and the new API primitives (TPR endpoints):
kubectl delete -f https://raw.githubusercontent.com/coreos/etcd-operator/master/example/deployment.yaml
kubectl delete endpoints etcd-operator
Thanks for joining us in Austin, TX!