kops cluster management

The .k8s.local suffix makes this a gossip-based cluster, so kops does not need a DNS zone for it.

$ export NAME=myfirstcluster.k8s.local
$ export KOPS_STATE_STORE=s3://kubernetesaws-example-com-state-store
$ ssh-keygen -t rsa -f /home/vagrant/.ssh/id_rsa -q -P ""
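Before kops can use the state store, the bucket behind KOPS_STATE_STORE has to exist. A minimal sketch for creating it with the aws CLI (the CLI and its credentials are assumptions, not part of the original notes):

```shell
# Create the kops state-store bucket (sketch; assumes a configured aws CLI).
export KOPS_STATE_STORE=s3://kubernetesaws-example-com-state-store
BUCKET=${KOPS_STATE_STORE#s3://}   # strip the s3:// scheme -> bucket name
echo "state store bucket: ${BUCKET}"
if command -v aws >/dev/null 2>&1; then
  # In us-east-1, create-bucket takes no LocationConstraint.
  aws s3api create-bucket --bucket "${BUCKET}" --region us-east-1 || true
  # Versioning lets you roll back a bad cluster-spec change.
  aws s3api put-bucket-versioning --bucket "${BUCKET}" \
    --versioning-configuration Status=Enabled || true
fi
```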

Store the cluster config in the S3 bucket:

$ kops create cluster --zones us-east-1a ${NAME}

Cluster configuration has been created.

 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster myfirstcluster.k8s.local
 * edit your node instance group: kops edit ig --name=myfirstcluster.k8s.local nodes
 * edit your master instance group: kops edit ig --name=myfirstcluster.k8s.local master-us-east-1a

Finally configure your cluster with: kops update cluster myfirstcluster.k8s.local --yes
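At this point only the spec exists in the state store. It can be inspected before anything is applied; a sketch (a no-op where kops is not installed):

```shell
# Inspect the stored, not-yet-applied cluster spec (sketch).
if command -v kops >/dev/null 2>&1; then
  kops get cluster                     # list clusters in the state store
  kops get ig --name "${NAME}"         # this cluster's instance groups
  kops get cluster "${NAME}" -o yaml   # full cluster spec as YAML
fi
SPEC_CHECKED=yes
```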

Modify the stored config. kops create cluster only writes configuration to the state store; nothing is created in AWS until kops update cluster --yes runs.

$ kops create secret --name ${NAME} sshpublickey admin -i ~/.ssh/
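The -i flag expects the *public* half of the keypair generated at the top of this page (most likely ~/.ssh/id_rsa.pub, though the path above is cut off). A quick local sketch of that keypair relationship, using a hypothetical ./demo_id_rsa path so nothing in ~/.ssh is touched:

```shell
# Sketch: ssh-keygen emits a private key plus a .pub file; the .pub file
# is what the sshpublickey secret wants. Hypothetical ./demo_id_rsa path.
rm -f ./demo_id_rsa ./demo_id_rsa.pub        # avoid an overwrite prompt
if command -v ssh-keygen >/dev/null 2>&1; then
  ssh-keygen -t rsa -f ./demo_id_rsa -q -N ""  # -N "" = empty passphrase
  PUBTYPE=$(head -c 7 ./demo_id_rsa.pub)       # key type prefix of the .pub
  echo "public key type: ${PUBTYPE}"
  rm -f ./demo_id_rsa ./demo_id_rsa.pub
fi
```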

Modify the Kubernetes version to install:

$ kops edit cluster ${NAME}

kubernetesVersion: 1.11.3
$ kops edit ig --name ${NAME} master-us-east-1a

$ kops edit ig --name ${NAME} nodes
$ kops update cluster ${NAME} --yes

Cluster is starting.  It should be ready in a few minutes.

 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.myfirstcluster.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at:
$ kops validate cluster
Using cluster from kubectl context: myfirstcluster.k8s.local

Validating cluster myfirstcluster.k8s.local

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-1a       Master  m3.medium    1    1    us-east-1a
nodes                   Node    t2.medium    2    2    us-east-1a

NODE STATUS
NAME                            ROLE    READY
ip-172-20-34-76.ec2.internal    master  True
ip-172-20-51-76.ec2.internal    node    True
ip-172-20-63-90.ec2.internal    node    True

Your cluster myfirstcluster.k8s.local is ready
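A new cluster can take several minutes to converge, so validation is usually retried. A small polling sketch (assumes kops on the PATH and the ${NAME} variable from the top of this page; a no-op otherwise):

```shell
# Poll validation until the cluster converges (sketch; no-op without kops).
if command -v kops >/dev/null 2>&1; then
  for attempt in 1 2 3 4 5 6 7 8 9 10; do   # cap the wait at ~5 minutes
    kops validate cluster "${NAME}" && break
    echo "not ready yet (attempt ${attempt}); retrying in 30s"
    sleep 30
  done
fi
VALIDATION_LOOP_DONE=yes
```

Newer kops releases also accept a --wait duration on kops validate cluster, which does this waiting internally.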
$ kubectl get nodes
NAME                           STATUS    ROLES     AGE       VERSION
ip-172-20-34-76.ec2.internal   Ready     master    23m       v1.10.3
ip-172-20-51-76.ec2.internal   Ready     node      22m       v1.10.3
ip-172-20-63-90.ec2.internal   Ready     node      21m       v1.10.3
$ kubectl cluster-info
Kubernetes master is running at
KubeDNS is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n kube-system get po
NAME                                                   READY     STATUS    RESTARTS   AGE
dns-controller-6d6b7f78b-8jv7z                         1/1       Running   0          25m
etcd-server-events-ip-172-20-34-76.ec2.internal        1/1       Running   0          24m
etcd-server-ip-172-20-34-76.ec2.internal               1/1       Running   0          24m
kube-apiserver-ip-172-20-34-76.ec2.internal            1/1       Running   0          24m
kube-controller-manager-ip-172-20-34-76.ec2.internal   1/1       Running   0          24m
kube-dns-5fbcb4d67b-n2srl                              3/3       Running   0          22m
kube-dns-5fbcb4d67b-rg84t                              3/3       Running   0          25m
kube-dns-autoscaler-6874c546dd-k8f4m                   1/1       Running   0          25m
kube-proxy-ip-172-20-34-76.ec2.internal                1/1       Running   0          24m
kube-proxy-ip-172-20-51-76.ec2.internal                1/1       Running   0          23m
kube-proxy-ip-172-20-63-90.ec2.internal                1/1       Running   0          23m
kube-scheduler-ip-172-20-34-76.ec2.internal            1/1       Running   0          24m

Completely delete the cluster, its contents, and all configuration. Preview first:


$ kops delete cluster --name ${NAME}

Apply changes:

$ kops delete cluster --name ${NAME} --yes

Deleted kubectl config for myfirstcluster.k8s.local

Deleted cluster: "myfirstcluster.k8s.local"
Upgrade the Kubernetes version by editing the stored cluster spec:

$ kops edit cluster ${NAME}

kubernetesVersion: 1.10.5


Preview changes:

$ kops update cluster ${NAME}


Apply changes:

$ kops update cluster ${NAME} --yes

Rolling update preview

$ kops rolling-update cluster ${NAME}

Rolling update apply

$ kops rolling-update cluster ${NAME} --yes
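The whole upgrade flow from this page, end to end, can be sketched as one sequence (assumes kops and the ${NAME} variable from the top of the page; a no-op where kops is absent):

```shell
# Sketch of the full edit -> update -> rolling-update upgrade flow.
if command -v kops >/dev/null 2>&1; then
  kops edit cluster "${NAME}"                  # interactive: set kubernetesVersion
  kops update cluster "${NAME}"                # preview changes against AWS
  kops update cluster "${NAME}" --yes          # apply changes to cloud resources
  kops rolling-update cluster "${NAME}"        # preview which nodes will restart
  kops rolling-update cluster "${NAME}" --yes  # drain and replace nodes one by one
fi
UPGRADE_FLOW_SKETCHED=yes
```

The update step reconfigures AWS resources; the rolling update is what actually restarts instances so running nodes pick up the new version.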
  • docs/virtualizacion/kubernetes/kops_cluster_mamagement.txt
  • Last modified: 2020/04/08 21:59
  • by rodolfo