# Configure K8S Network Nodes

## Install the Calico CNI Plugin

Version: v3.29.1
### Manually Download the Images

Unfortunately, Docker Hub is blocked; fortunately quay.io is not, so the Calico images can be pulled from quay.io instead. See the official docs: https://docs.tigera.io/calico/latest/operations/image-options/alternate-registry

> ⚠️ The images must be downloaded on both the Master and Worker nodes.
Pull the images:

```shell
ctr -n k8s.io images pull quay.io/tigera/operator:v1.36.2
ctr -n k8s.io images pull quay.io/calico/typha:v3.29.1
ctr -n k8s.io images pull quay.io/calico/ctl:v3.29.1
ctr -n k8s.io images pull quay.io/calico/node:v3.29.1
ctr -n k8s.io images pull quay.io/calico/cni:v3.29.1
ctr -n k8s.io images pull quay.io/calico/apiserver:v3.29.1
ctr -n k8s.io images pull quay.io/calico/kube-controllers:v3.29.1
ctr -n k8s.io images pull quay.io/calico/dikastes:v3.29.1
ctr -n k8s.io images pull quay.io/calico/pod2daemon-flexvol:v3.29.1
ctr -n k8s.io images pull quay.io/calico/csi:v3.29.1
ctr -n k8s.io images pull quay.io/calico/node-driver-registrar:v3.29.1
```
Retag them under the `docker.io` names the manifests expect:

```shell
ctr -n k8s.io images tag quay.io/calico/typha:v3.29.1 docker.io/calico/typha:v3.29.1
ctr -n k8s.io images tag quay.io/calico/ctl:v3.29.1 docker.io/calico/ctl:v3.29.1
ctr -n k8s.io images tag quay.io/calico/node:v3.29.1 docker.io/calico/node:v3.29.1
ctr -n k8s.io images tag quay.io/calico/cni:v3.29.1 docker.io/calico/cni:v3.29.1
ctr -n k8s.io images tag quay.io/calico/apiserver:v3.29.1 docker.io/calico/apiserver:v3.29.1
ctr -n k8s.io images tag quay.io/calico/kube-controllers:v3.29.1 docker.io/calico/kube-controllers:v3.29.1
ctr -n k8s.io images tag quay.io/calico/dikastes:v3.29.1 docker.io/calico/dikastes:v3.29.1
ctr -n k8s.io images tag quay.io/calico/pod2daemon-flexvol:v3.29.1 docker.io/calico/pod2daemon-flexvol:v3.29.1
ctr -n k8s.io images tag quay.io/calico/csi:v3.29.1 docker.io/calico/csi:v3.29.1
ctr -n k8s.io images tag quay.io/calico/node-driver-registrar:v3.29.1 docker.io/calico/node-driver-registrar:v3.29.1
```
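Since every image follows the same pull-then-tag pattern, the two command lists above can be generated in a loop. A minimal sketch (assuming containerd's `ctr` with the `k8s.io` namespace, as above); it only prints the commands so you can review them before piping the output to `sh`:

```shell
#!/bin/sh
# Generate the pull + tag commands for every Calico image in a loop
# instead of typing each one by hand.
VERSION="v3.29.1"
IMAGES="typha ctl node cni apiserver kube-controllers dikastes pod2daemon-flexvol csi node-driver-registrar"

calico_image_cmds() {
  # the operator image lives under quay.io/tigera and has its own version
  echo "ctr -n k8s.io images pull quay.io/tigera/operator:v1.36.2"
  for img in $IMAGES; do
    echo "ctr -n k8s.io images pull quay.io/calico/${img}:${VERSION}"
    echo "ctr -n k8s.io images tag quay.io/calico/${img}:${VERSION} docker.io/calico/${img}:${VERSION}"
  done
}

calico_image_cmds          # review the generated commands
# calico_image_cmds | sh   # then execute them on each node
```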
Install `calicoctl` as a kubectl plugin. Use the calicoctl release matching your Calico version (the v3.21.5 download in some older guides does not match v3.29.1; recent binaries are published from the `projectcalico/calico` releases):

```shell
cd /usr/local/bin
curl -o kubectl-calico -L "https://github.com/projectcalico/calico/releases/download/v3.29.1/calicoctl-linux-amd64"
chmod +x kubectl-calico
```
Verify that the plugin works:

```shell
kubectl calico -h
```
### Deploy the Pod Network Component Calico

Calico is used as the Pod network component of the cluster; below it is installed into the cluster with helm.

Download the tigera-operator helm chart:

```shell
wget https://github.com/projectcalico/calico/releases/download/v3.29.1/tigera-operator-v3.29.1.tgz
```

Install Calico with helm:

```shell
helm install calico tigera-operator-v3.29.1.tgz -n kube-system --create-namespace
```
Wait until all pods are in the `Running` state.

### Check Calico and Cluster Status

```shell
kubectl get nodes
kubectl get pod -n kube-system | grep tigera-operator
kubectl get pods -A
```
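Rather than re-running `kubectl get pods -A` by hand, the wait can be scripted. A sketch of a generic polling helper; the pod-listing command is passed in as a string, and the `grep` filter in the commented example is an assumption about your `kubectl` output format (`kubectl wait` is a built-in alternative for single resources):

```shell
#!/bin/sh
# Poll until the given command produces no output, up to `tries` attempts.
wait_until_empty() {
  cmd="$1"; tries="${2:-60}"; interval="${3:-5}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # done once the filtered listing prints nothing
    [ -z "$(eval "$cmd")" ] && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# On a real cluster: succeed once no pod is outside Running/Completed.
# wait_until_empty "kubectl get pods -A --no-headers | grep -vE 'Running|Completed'"
```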
### Verify that K8S DNS Works

Start an interactive pod that has `curl` and `nslookup` available:

```shell
kubectl run curl --image=radial/busyboxplus:curl -it
```

```
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
```

Inside the pod, run `nslookup kubernetes.default` and confirm that name resolution works:

```shell
nslookup kubernetes.default
```

```
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
## Configure the K8S Network Node

The Network node serves as the entry point for all traffic into the cluster. There can be one or more such nodes; together with an Ingress controller they load-balance and route traffic to services inside the cluster.

Here worker2 (192.168.88.103) is used as the Network node; label it:

```shell
kubectl label node worker2 node-role.kubernetes.io/network=edge
```
### Configure the nginx Ingress Controller
Download the helm chart.
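The install step below uses the chart archive `ingress-nginx-4.7.0.tgz`. A sketch of fetching it from the GitHub release assets; the `helm-chart-<version>` release URL pattern is an assumption, so verify it against the `kubernetes/ingress-nginx` releases page for your chart version:

```shell
#!/bin/sh
# Build the release-asset URL for the ingress-nginx helm chart.
CHART_VERSION="4.7.0"
CHART_URL="https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-${CHART_VERSION}/ingress-nginx-${CHART_VERSION}.tgz"
echo "$CHART_URL"        # review before downloading
# wget "$CHART_URL"      # run on the host where helm install will be executed
```

Alternatively, `helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx` followed by `helm pull ingress-nginx/ingress-nginx --version 4.7.0` fetches the same chart from the official repo.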
Install the nginx ingress controller on the Network node, using the values file below.
values.yaml:

```yaml
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: registry.k8s.io
    # image: ingress-nginx/controller
    # tag: "v1.8.0"
    registry: ccr.ccs.tencentyun.com
    image: jansora/registry.k8s.io_ingress-nginx_controller
    tag: "v1.8.0"
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/network: 'edge'  # schedule the pods onto the network node
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - nginx-ingress
              - key: component
                operator: In
                values:
                  - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
```
Install:

```shell
helm install ingress-nginx -n ingress-nginx ./ingress-nginx-4.7.0.tgz --values ./values.yaml --create-namespace
```
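Because the controller runs with `hostNetwork: true` on worker2, it listens on ports 80/443 of 192.168.88.103 directly; point DNS (or `/etc/hosts`) for your domain at that address and declare routing with Ingress resources. A minimal sketch, where `demo.example.com` and `demo-service` are placeholders for your own host and Service:

```yaml
# Hypothetical example: route HTTP traffic for demo.example.com through
# the network node to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: nginx   # matches ingressClassResource.name in values.yaml
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
```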