kops cluster management
Create cluster
$ export NAME=myfirstcluster.k8s.local
$ export KOPS_STATE_STORE=s3://kubernetesaws-example-com-state-store
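kops stores all cluster definitions and state in the S3 bucket referenced by KOPS_STATE_STORE, so that bucket must exist before the cluster is created. A minimal sketch with the AWS CLI, assuming the bucket name from above and us-east-1 as the region (the kops docs recommend enabling versioning so earlier cluster specs can be recovered):

$ # illustrative only: bucket name and region taken from the variables above
$ aws s3api create-bucket --bucket kubernetesaws-example-com-state-store --region us-east-1
$ aws s3api put-bucket-versioning --bucket kubernetesaws-example-com-state-store --versioning-configuration Status=Enabled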
$ ssh-keygen -t rsa -f /home/vagrant/.ssh/id_rsa -q -P ""
Create cluster config
Store the config in the S3 bucket:
$ kops create cluster --zones us-east-1a ${NAME}

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster myfirstcluster.k8s.local
 * edit your node instance group: kops edit ig --name=myfirstcluster.k8s.local nodes
 * edit your master instance group: kops edit ig --name=myfirstcluster.k8s.local master-us-east-1a

Finally configure your cluster with: kops update cluster myfirstcluster.k8s.local --yes
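At this point only the cluster spec has been written to the state store; no AWS resources exist yet. A quick way to review what was generated before changing or applying it (standard kops commands; output omitted here):

$ kops get cluster ${NAME} -o yaml
$ kops get ig --name ${NAME}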
Modify stored config
$ kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
Edit configuration
Modify the Kubernetes version to install:
$ kops edit cluster ${NAME}
...
kubernetesVersion: 1.11.3
...
$ kops edit ig --name ${NAME} master-us-east-1a
$ kops edit ig --name ${NAME} nodes
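kops edit ig opens the instance group spec in your editor. A sketch of roughly what the nodes group looks like, with the fields most commonly changed (machine type and group size); the values below are illustrative, not copied from a real cluster:

# illustrative InstanceGroup spec (kops/v1alpha2), not actual editor output
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: myfirstcluster.k8s.local
  name: nodes
spec:
  machineType: t2.medium
  minSize: 2
  maxSize: 2
  role: Node
  subnets:
  - us-east-1a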
Deploy cluster
$ kops update cluster ${NAME} --yes

Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.myfirstcluster.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.
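kops update cluster also writes a kubeconfig context for the new cluster on the machine where it runs. If that context is ever lost, or you work from another machine with access to the same state store, it can be regenerated with a standard kops command (newer kops releases may require additional flags for admin credentials):

$ kops export kubecfg ${NAME}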
Validate cluster
$ kops validate cluster

Using cluster from kubectl context: myfirstcluster.k8s.local

Validating cluster myfirstcluster.k8s.local

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  m3.medium    1    1    us-west-2a
nodes                Node    t2.medium    2    2    us-west-2a

NODE STATUS
NAME                                          ROLE    READY
ip-172-20-32-120.us-west-2.compute.internal   master  True
ip-172-20-51-89.us-west-2.compute.internal    node    True
ip-172-20-58-56.us-west-2.compute.internal    node    True

Your cluster myfirstcluster.k8s.local is ready
$ kubectl get nodes

NAME                                          STATUS  ROLES   AGE  VERSION
ip-172-20-32-120.us-west-2.compute.internal   Ready   master  23m  v1.10.3
ip-172-20-51-89.us-west-2.compute.internal    Ready   node    22m  v1.10.3
ip-172-20-58-56.us-west-2.compute.internal    Ready   node    21m  v1.10.3
$ kubectl cluster-info

Kubernetes master is running at https://api-myfirstcluster-k8s-lo-hqulii-1192082441.us-west-2.elb.amazonaws.com
KubeDNS is running at https://api-myfirstcluster-k8s-lo-hqulii-1192082441.us-west-2.elb.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n kube-system get po

NAME                                                   READY  STATUS   RESTARTS  AGE
dns-controller-6d6b7f78b-8jv7z                         1/1    Running  0         25m
etcd-server-events-ip-172-20-34-76.ec2.internal        1/1    Running  0         24m
etcd-server-ip-172-20-34-76.ec2.internal               1/1    Running  0         24m
kube-apiserver-ip-172-20-34-76.ec2.internal            1/1    Running  0         24m
kube-controller-manager-ip-172-20-34-76.ec2.internal   1/1    Running  0         24m
kube-dns-5fbcb4d67b-n2srl                              3/3    Running  0         22m
kube-dns-5fbcb4d67b-rg84t                              3/3    Running  0         25m
kube-dns-autoscaler-6874c546dd-k8f4m                   1/1    Running  0         25m
kube-proxy-ip-172-20-34-76.ec2.internal                1/1    Running  0         24m
kube-proxy-ip-172-20-51-76.ec2.internal                1/1    Running  0         23m
kube-proxy-ip-172-20-63-90.ec2.internal                1/1    Running  0         23m
kube-scheduler-ip-172-20-34-76.ec2.internal            1/1    Running  0         24m
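Beyond kops validate, a quick smoke test is to schedule a small workload and check that it lands on the worker nodes; nginx is used here purely as an example image:

$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide
$ kubectl delete deployment nginx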
Delete cluster
Completely deletes the cluster, its contents, and all configuration.
Preview:
$ kops delete cluster --name ${NAME}
Apply changes:
$ kops delete cluster --name ${NAME} --yes

Deleted kubectl config for myfirstcluster.k8s.local

Deleted cluster: "myfirstcluster.k8s.local"
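To confirm the deletion, list the clusters remaining in the state store; the deleted name should no longer appear:

$ kops get cluster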
Update cluster
$ kops edit cluster ${NAME}
...
kubernetesVersion: 1.10.5
...
Preview:
$ kops update cluster ${NAME}
Apply:
$ kops update cluster ${NAME} --yes
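Instead of editing kubernetesVersion by hand, kops can also propose the newest Kubernetes version supported by the installed kops release; after applying the proposal, the usual kops update and rolling update steps still follow:

$ kops upgrade cluster ${NAME}
$ kops upgrade cluster ${NAME} --yes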
Rolling update preview
$ kops rolling-update cluster ${NAME}
Rolling update apply
$ kops rolling-update cluster ${NAME} --yes
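A rolling update replaces instances so that the new configuration actually takes effect. To limit disruption, it can be restricted to a single instance group, for example only the worker nodes created above:

$ kops rolling-update cluster ${NAME} --instance-group nodes --yes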