Setting up a k8s cluster with kubeadm

The K8S course covers the kubeadm setup method, but its scripts pull the k8s images from Google, which requires getting around the firewall. This write-up follows the approach from Zhihu and uses mirrors inside China: https://zhuanlan.zhihu.com/p/46341911

Deploying the k8s components with kubeadm requires kubeadm, kubelet, and kubernetes-cni, so a package source has to be added first. Outside China you can add the Google source directly (details are easy to find online); inside China the USTC mirror is recommended. The commands are as follows:
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF

Install the container runtime (docker), kubelet, cni, kubeadm and the other base components; note that the command may fail with an error.
apt-get update && apt-get install -y docker.io kubelet kubernetes-cni=0.6.0-00 kubeadm

The command above fails because the repository's GPG key is missing. It can be added with the commands below, where E084DAB9 stands for the last 8 hex digits of the key shown in the error message; copy those 8 digits into the commands and run them once.

gpg --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
gpg --export --armor E084DAB9 | sudo apt-key add -
If the source setup above works, kubeadm, kubelet, kubernetes-cni and the rest are installed successfully, but this does not yet include the images of the various Kubernetes components.
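Optionally (my own addition, not part of the original write-up), the installed packages can be pinned so that a later apt-get upgrade does not pull in versions that no longer match the cluster:

# prevent apt from upgrading these packages behind kubeadm's back
apt-mark hold kubelet kubeadm kubernetes-cni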

The internal k8s component images are pulled from Google's registry by default, which is blocked, so it needs to be replaced with a domestic mirror.

Get the required image versions

First run kubeadm config images list to get the current versions of the k8s components, such as the API server, kube-proxy, and DNS. In the script further down, these images are used with the "k8s.gcr.io/" prefix stripped and the versions replaced with the ones obtained here.

root@k8s-m1:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.5
k8s.gcr.io/kube-controller-manager:v1.17.5
k8s.gcr.io/kube-scheduler:v1.17.5
k8s.gcr.io/kube-proxy:v1.17.5
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
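The stripping of the "k8s.gcr.io/" prefix can also be done automatically instead of by hand; a small sketch using sed (not in the original article):

# list the required images and drop the "k8s.gcr.io/" prefix,
# producing entries such as "kube-apiserver:v1.17.5"
kubeadm config images list 2>/dev/null | sed 's#^k8s.gcr.io/##'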

Set a China mirror for the required K8S component images

As of 2020-05-15, the versions are as listed above.
Create a script named initk8s.sh:
root@k8s-m1:~# vi initk8s.sh
and fill it with the following content:

#!/bin/bash
# Pull each k8s component image from the Aliyun mirror, re-tag it as
# k8s.gcr.io/<image> so that kubeadm finds it locally, then remove the
# mirror-prefixed tag.
images=(
kube-apiserver:v1.17.5
kube-controller-manager:v1.17.5
kube-scheduler:v1.17.5
kube-proxy:v1.17.5
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)

for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Save the file, make it executable with chmod a+x initk8s.sh, then run ./initk8s.sh.
When it finishes you will see a series of successful image-pull messages, which means the images have been downloaded.
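As a quick sanity check (my own addition, not from the article), the local image list can be filtered for the k8s.gcr.io prefix to confirm that the tags kubeadm expects are present:

# should list kube-apiserver, kube-controller-manager, etc. at v1.17.5
docker images | grep k8s.gcr.io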

In case the script does not succeed, the referenced article also mentions switching the image repository directly with a kubeadm command, which I tried as well:
kubeadm config images pull --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
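Another option, not used in this article, is to put the mirror into a kubeadm configuration file and pass it to kubeadm init; a minimal sketch assuming the v1beta2 config API that ships with kubeadm 1.17:

# write a minimal ClusterConfiguration that points kubeadm at the Aliyun mirror
cat <<EOF > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.5
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
EOF
kubeadm init --config kubeadm-init.yaml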

Run kubeadm init

It took two runs before it finally succeeded. At the end of the output, "initialized successfully" is shown, together with a few commands to run and the token for joining nodes to the cluster later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.201:6443 --token 1u3ktz.278v2qvh25jymvyf \
    --discovery-token-ca-cert-hash sha256:c529b43da2886b171740167ae404d4845935697b658edc0e32c870b1d61a483c
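The join token printed here expires after 24 hours by default; if worker nodes are added later than that, a fresh join command can be generated on the master:

kubeadm token create --print-join-command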

Check master and node status

List all the nodes, including the master and the worker nodes.

root@k8s-m1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-m1.vmtest.com NotReady master 9m25s v1.17.3

The status shows NotReady, which means the cluster is not up yet. The master node's name is k8s-m1.vmtest.com.
Run kubectl describe at this point to see why it is not ready.

root@k8s-m1:~# kubectl describe nodes
Name: k8s-m1.vmtest.com
Roles: master
Look at the information under Conditions:
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 16 May 2020 11:55:43 +0800   Sat, 16 May 2020 11:45:22 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 16 May 2020 11:55:43 +0800   Sat, 16 May 2020 11:45:22 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 16 May 2020 11:55:43 +0800   Sat, 16 May 2020 11:45:22 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sat, 16 May 2020 11:55:43 +0800   Sat, 16 May 2020 11:45:22 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The Events section also shows that the cluster has not fully started:
Events:
  Type    Reason                   Age  From                           Message
  ----    ------                   ---- ----                           -------
  Normal  Starting                 11m  kubelet, k8s-m1.vmtest.com     Starting kubelet.
  Normal  NodeHasSufficientMemory  11m  kubelet, k8s-m1.vmtest.com     Node k8s-m1.vmtest.com status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    11m  kubelet, k8s-m1.vmtest.com     Node k8s-m1.vmtest.com status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     11m  kubelet, k8s-m1.vmtest.com     Node k8s-m1.vmtest.com status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  11m  kubelet, k8s-m1.vmtest.com     Updated Node Allocatable limit across pods
  Normal  Starting                 11m  kube-proxy, k8s-m1.vmtest.com  Starting kube-proxy.
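Besides kubectl describe, the kubelet log on the node is worth checking when a node stays NotReady; with the apt-installed kubelet running as a systemd unit, the same "cni config uninitialized" message typically shows up there:

journalctl -u kubelet -f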

root@k8s-m1:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-5krq4 0/1 Pending 0 15m
coredns-6955765f44-cxrm8 0/1 Pending 0 15m
etcd-k8s-m1.vmtest.com 1/1 Running 0 15m
kube-apiserver-k8s-m1.vmtest.com 1/1 Running 0 15m
kube-controller-manager-k8s-m1.vmtest.com 1/1 Running 0 15m
kube-proxy-bw8d5 1/1 Running 0 15m
kube-scheduler-k8s-m1.vmtest.com 1/1 Running 0 15m

The k8s management components are deployed in the default kube-system namespace. As shown above, this namespace contains the pods of the core components, but the ones that depend on the pod network are still Pending.
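To confirm why the coredns pods stay Pending, one of them can be described; until a CNI plugin is installed the node keeps its not-ready taint, which the coredns pods do not tolerate, so the scheduler leaves them unscheduled:

# the Events section of the pod should show a scheduling failure caused by the node taint
kubectl -n kube-system describe pod coredns-6955765f44-5krq4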


Deploy the network add-on

Because the network-dependent pods (coredns) are still Pending, a network add-on needs to be deployed; taking Weave as an example:
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
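The sysctl setting above is lost on reboot; to make it persistent it can be written to a sysctl drop-in file (the file name k8s.conf below is arbitrary, my own addition):

# persist bridge-nf-call-iptables across reboots
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
sysctl --system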
Wait a few minutes for the installation to finish, then check the status again:

root@k8s-m1:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-5krq4 1/1 Running 0 13h
coredns-6955765f44-cxrm8 1/1 Running 0 13h
etcd-k8s-m1.vmtest.com 1/1 Running 0 13h
kube-apiserver-k8s-m1.vmtest.com 1/1 Running 0 13h
kube-controller-manager-k8s-m1.vmtest.com 1/1 Running 0 13h
kube-proxy-bw8d5 1/1 Running 0 13h
kube-scheduler-k8s-m1.vmtest.com 1/1 Running 0 13h
weave-net-xdrwr 2/2 Running 0 8m34s

Install the dashboard

https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
The deployment takes a little while. As the output indicates, a kubernetes-dashboard namespace is created; check the pods in that namespace:

root@k8s-m1:~# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-c79c65bb7-wd5xt 1/1 Running 0 2m56s
kubernetes-dashboard-56484d4c5-xdfgv 1/1 Running 0 2m56s
This shows the dashboard has been deployed.
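By default the dashboard is not exposed outside the cluster and asks for a token at login. One way to reach it, sketched here with an illustrative service-account name (dashboard-admin), is to create an admin account, read its token, and go through kubectl proxy:

# create an admin service account for dashboard login (name is illustrative)
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# print the token stored in the service account's secret (k8s 1.17 creates the secret automatically)
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
# proxy the API server locally, then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy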
