Configure K8S Network Nodes

Install the Calico network plugin (one of two options)

Version: v3.29.1

Manually download the images

Unfortunately, Docker Hub is blocked in mainland China; fortunately, quay.io is not, so the Calico images can be pulled from quay.io instead. See the official documentation: https://docs.tigera.io/calico/latest/operations/image-options/alternate-registry#alternate-registry

⚠️ Both the Master and Worker nodes need to pull these images.

Pull the images

ctr -n k8s.io images pull quay.io/tigera/operator:v1.36.2
ctr -n k8s.io images pull quay.io/calico/typha:v3.29.1
ctr -n k8s.io images pull quay.io/calico/ctl:v3.29.1
ctr -n k8s.io images pull quay.io/calico/node:v3.29.1
ctr -n k8s.io images pull quay.io/calico/cni:v3.29.1
ctr -n k8s.io images pull quay.io/calico/apiserver:v3.29.1
ctr -n k8s.io images pull quay.io/calico/kube-controllers:v3.29.1
ctr -n k8s.io images pull quay.io/calico/dikastes:v3.29.1
ctr -n k8s.io images pull quay.io/calico/pod2daemon-flexvol:v3.29.1
ctr -n k8s.io images pull quay.io/calico/csi:v3.29.1
ctr -n k8s.io images pull quay.io/calico/node-driver-registrar:v3.29.1
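
To double-check that the images landed in containerd's k8s.io namespace, an optional listing can be run on each node:

# List the Calico and Tigera images now present in containerd
ctr -n k8s.io images ls | grep -E 'calico|tigera'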

Tag the images

 
ctr -n k8s.io images tag quay.io/calico/typha:v3.29.1                  docker.io/calico/typha:v3.29.1
ctr -n k8s.io images tag quay.io/calico/ctl:v3.29.1                    docker.io/calico/ctl:v3.29.1
ctr -n k8s.io images tag quay.io/calico/node:v3.29.1                   docker.io/calico/node:v3.29.1
ctr -n k8s.io images tag quay.io/calico/cni:v3.29.1                    docker.io/calico/cni:v3.29.1
ctr -n k8s.io images tag quay.io/calico/apiserver:v3.29.1              docker.io/calico/apiserver:v3.29.1
ctr -n k8s.io images tag quay.io/calico/kube-controllers:v3.29.1       docker.io/calico/kube-controllers:v3.29.1
ctr -n k8s.io images tag quay.io/calico/dikastes:v3.29.1               docker.io/calico/dikastes:v3.29.1
ctr -n k8s.io images tag quay.io/calico/pod2daemon-flexvol:v3.29.1     docker.io/calico/pod2daemon-flexvol:v3.29.1
ctr -n k8s.io images tag quay.io/calico/csi:v3.29.1                    docker.io/calico/csi:v3.29.1
ctr -n k8s.io images tag quay.io/calico/node-driver-registrar:v3.29.1  docker.io/calico/node-driver-registrar:v3.29.1
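
The pull-and-tag steps above can equally be scripted as one loop; a minimal sketch assuming the same v3.29.1 tag and the image list shown earlier:

VERSION=v3.29.1
for img in typha ctl node cni apiserver kube-controllers dikastes pod2daemon-flexvol csi node-driver-registrar; do
  # Pull from quay.io, then re-tag under the docker.io name the default manifests expect
  ctr -n k8s.io images pull "quay.io/calico/${img}:${VERSION}"
  ctr -n k8s.io images tag  "quay.io/calico/${img}:${VERSION}" "docker.io/calico/${img}:${VERSION}"
done
# The operator image is referenced from quay.io directly, so it only needs a pull
ctr -n k8s.io images pull quay.io/tigera/operator:v1.36.2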
 
 

Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -L -o kubectl-calico "https://cdn.jansora.com/files/k8s/calico/v3.29.1/calicoctl-linux-amd64"
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h
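
The plugin can also report its own version; the check below assumes kubectl calico forwards to calicoctl version (the cluster version may not be shown until Calico is deployed):

kubectl calico version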

Deploy Calico as the Pod network component

Calico is used as the Pod network component of the cluster; the steps below install it with Helm.

Download the tigera-operator Helm chart:

wget https://cdn.jansora.com/files/k8s/calico/v3.29.1/tigera-operator-v3.29.1.tgz

Install Calico with Helm:

helm install calico tigera-operator-v3.29.1.tgz -n kube-system  --create-namespace
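
As an alternative to re-tagging images on every node, the Installation resource created by the tigera-operator chart supports a registry override (see the alternate-registry documentation linked above). A hedged sketch of installing with such an override instead of the plain command above, assuming the chart exposes the Installation spec under an installation value:

# Assumption: the chart merges these values into the Installation CR spec
cat > calico-values.yaml <<'EOF'
installation:
  registry: quay.io/
EOF
helm install calico tigera-operator-v3.29.1.tgz -n kube-system --create-namespace -f calico-values.yaml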

Wait for all pods to reach the Running state and confirm:

Check Calico and cluster status

kubectl get nodes
kubectl get pod -n kube-system | grep tigera-operator
kubectl get pods -A
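
To avoid polling by hand, the rollout can also be awaited and the default IP pool inspected once Calico is up; the calico-system namespace is created by the tigera-operator, and the second command uses the calicoctl plugin installed earlier:

# Block until every pod in calico-system reports Ready (adjust the timeout as needed)
kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s
# Show the IP pool Calico created for the cluster
kubectl calico get ippools -o wide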

Verify that cluster DNS is working

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Inside the container, run nslookup kubernetes.default to confirm name resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
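
The curl test pod keeps running after the shell exits; once DNS has been verified, it can be removed:

kubectl delete pod curl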