Deploying the K8s Control-Plane Node

Installing a Kubernetes Cluster with kubeadm

With the preparation work done, this section shows how to install a Kubernetes cluster with kubeadm: we first set up the master node, then join the worker nodes to the cluster one by one.

Enable the kubelet service to start on boot on every node:

systemctl enable kubelet.service
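
You can confirm the unit is enabled. Note that until kubeadm init supplies a configuration, kubelet restarts in a crash loop every few seconds; that is expected at this stage:

systemctl is-enabled kubelet.service
systemctl status kubelet.service --no-pager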

Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

As the defaults show, imageRepository controls the registry from which the images needed for cluster initialization are pulled. Based on this default configuration, we now create the configuration file kubeadm-init.yaml used to initialize the cluster with kubeadm:
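
One way to bootstrap this file is to dump the defaults into it and then edit (the target path matches the one used below):

kubeadm config print init-defaults --component-configs KubeletConfiguration > /etc/kubernetes/kubeadm-init.yaml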

Note: change advertiseAddress (192.168.88.101 below) to your own master IP.

/etc/kubernetes/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.88.101
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Alibaba Cloud registry mirror, avoiding failed image pulls where gcr.io is blocked. criSocket selects containerd as the container runtime, the kubelet's cgroupDriver is set to systemd, and kube-proxy's proxy mode is set to ipvs.
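
Note that ipvs mode requires the IPVS kernel modules; if they are missing, kube-proxy falls back to iptables mode. A quick check-and-load sketch (this module list is the commonly required set, an assumption to verify against your kernel; older kernels use nf_conntrack_ipv4 instead of nf_conntrack):

# load the IPVS modules and confirm they are present
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack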

Before initializing the cluster, you can pre-pull the container images Kubernetes needs on each server node:

kubeadm config images pull --config /etc/kubernetes/kubeadm-init.yaml
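
You can also list the exact images that will be pulled, to confirm they resolve to the Alibaba Cloud mirror:

kubeadm config images list --config /etc/kubernetes/kubeadm-init.yaml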

Next, initialize the cluster with kubeadm:

kubeadm init --config /etc/kubernetes/kubeadm-init.yaml

[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.88.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.88.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.88.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.003837 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 0qinrd.77drhpnlvc2fvllo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.101:6443 --token 0qinrd.77drhpnlvc2fvllo \
        --discovery-token-ca-cert-hash sha256:01df34acd184e43e5ed37fc292e30394588a7476a4b907149adfece88b2361d8

The complete initialization output is recorded above; from it you can see the key steps involved in manually installing a Kubernetes cluster (verification commands follow the list):

  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [kubelet-start] writes the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the manifests in /etc/kubernetes/manifests
  • [bootstrap-token] generates the token; record it, as it is needed later when adding nodes with kubeadm join
  • [addons] installs the essential add-ons: CoreDNS and kube-proxy
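
You can verify these artifacts on the master after initialization, for example:

# static pod manifests for etcd, kube-apiserver, kube-controller-manager, kube-scheduler
ls /etc/kubernetes/manifests
# generated certificates and the kubeconfig files
ls /etc/kubernetes/pki /etc/kubernetes/*.conf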

The following commands from the output configure kubectl access to the cluster for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, the output gives the command for joining the other two nodes to the cluster:

kubeadm join 192.168.88.101:6443 --token 0qinrd.77drhpnlvc2fvllo \
        --discovery-token-ca-cert-hash sha256:01df34acd184e43e5ed37fc292e30394588a7476a4b907149adfece88b2361d8
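
The bootstrap token is valid for 24 hours by default (ttl: 24h0m0s in the defaults shown earlier). If it has expired by the time you add a node, generate a fresh join command on the master:

kubeadm token create --print-join-command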

Once the kubectl-configuration commands above have been run on the master, the master node is set up. Note the following output from kubeadm init; you will need it when adding the worker nodes:


Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.101:6443 --token 0mynwm.10bk3hdn34e9lgo \
--discovery-token-ca-cert-hash sha256:51a66dd07069658f42d33923ecee7059a3b35820128a3d5acfaa7d374a9b8494

Persist the kubeconfig path in the master host's environment

Run on the master node:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

Now check the cluster status to confirm that all components are healthy. The result shows a problem:

root@master:/mnt/nfs/share/kubernetes# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m38s   v1.28.2

This is expected, because the network add-on has not been installed yet.
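
For example, to deploy flannel (which matches the podSubnet 10.244.0.0/16 configured earlier), apply the manifest published by the flannel project; the URL below is its documented latest-release manifest:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Once the CNI pods are running, the node should transition to Ready.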

Managing Docker registry credentials

kubectl create secret docker-registry ccr.ccs.tencentyun.com \
  --docker-server=ccr.ccs.tencentyun.com \
  --docker-username=xxxxxx \
  --docker-password=xxx \
  --docker-email="xxx@gmail.com"
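
To pull private images with this secret, reference it from the pod spec via imagePullSecrets; a minimal sketch (the pod name and image path are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: private-app                    # hypothetical pod name
spec:
  imagePullSecrets:
  - name: ccr.ccs.tencentyun.com       # must match the secret name created above
  containers:
  - name: app
    image: ccr.ccs.tencentyun.com/your-namespace/your-image:latest   # hypothetical image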

Appendix: FAQ

Q: kube-flannel pod in CrashLoopBackOff status?

root@l2:~# kubectl get pods -n kube-system
NAME                         READY   STATUS             RESTARTS   AGE
coredns-6c76c8bb89-d5kvk     1/1     Running            0          2m27s
coredns-6c76c8bb89-zfws2     1/1     Running            0          2m27s
etcd-l2                      1/1     Running            0          2m28s
kube-apiserver-l2            1/1     Running            0          2m28s
kube-controller-manager-l2   1/1     Running            0          2m28s
kube-flannel-ds-hftwx        0/1     CrashLoopBackOff   4          2m27s
kube-proxy-xj5c4             1/1     Running            0          2m27s
kube-scheduler-l2            1/1     Running            0          2m28s

A: For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

https://stackoverflow.com/questions/52098214/kube-flannel-in-crashloopbackoff-status
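
Setting podSubnet: 10.244.0.0/16 in kubeadm-init.yaml, as done earlier in this guide, is equivalent to passing --pod-network-cidr to kubeadm init. You can confirm the node was actually assigned a pod CIDR:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'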

Q: coredns stuck in ContainerCreating?

root@l2:~# kubectl get pods -n kube-system
NAME                         READY   STATUS              RESTARTS   AGE
coredns-6c76c8bb89-hwklk     0/1     ContainerCreating   0          43s
coredns-6c76c8bb89-xxc6l     0/1     ContainerCreating   0          43s
etcd-l2                      0/1     Running             0          42s
kube-apiserver-l2            1/1     Running             0          42s
kube-controller-manager-l2   0/1     Running             0          42s
kube-flannel-ds-7nscs        1/1     Running             2          21s
kube-proxy-6zhks             1/1     Running             0          43s
kube-scheduler-l2            0/1     Running             0          42s

A: Fix. Step 1: on all nodes (master and workers), delete cni0 and stop Kubernetes and docker:

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1

Step 2: restart kubelet and docker on all nodes:

systemctl start kubelet
systemctl start docker

Step 3: re-run kubeadm init.

Q: Error: docker: Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown?

A: Run this on all nodes (Master / Worker 1 / Worker 2).

Edit /etc/profile and append:

mkdir -p /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Then source it so the configuration takes effect:

source /etc/profile
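
As a quick diagnostic, you can first check which cgroup filesystem is mounted; cgroup2fs indicates cgroup v2, where this legacy named hierarchy does not apply (this check is an addition, not part of the original fix):

stat -fc %T /sys/fs/cgroup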

Q: The installation failed; how do I reset Kubernetes?

sudo kubeadm reset
rm -rf $HOME/.kube/
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/etcd
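
Note that kubeadm reset does not clean up iptables or IPVS rules; its own output tells you to flush them manually if needed, for example:

# flush iptables rules left over from kube-proxy/CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# clear IPVS tables (only relevant when kube-proxy ran in ipvs mode)
sudo ipvsadm --clear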