Kubernetes Setup and Initialization

Last updated: January 27, 2024.

Introduction

Installing the Virtual Machines

  • For testing purposes

  • Prerequisites

    1. ISO image: Ubuntu Server 20.04 LTS
    2. VMware Workstation Pro 17
  • Recommended installation settings

    1. CPU: 1 socket, 2 cores
    2. Memory: 2 GB
    3. Disk: 20 GB, no less than 18 GB
  • Create three VMs with the same configuration

    • Alternatively, finish installing k8s on one VM (without initializing the master node), clone it, then change the IP and hostname on each clone
  • Enable SSH on the VMs so you can connect with Xshell later

Kubernetes Configuration

Basic Setup for 1.19.4

Configuring the apt Sources

# Back up the existing sources.list
cd /etc/apt
sudo cp sources.list sources.list.ubuntu.bak
# Remove the old list
sudo rm sources.list
# Add the Aliyun sources
sudo vim sources.list
# Aliyun sources
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse

# Refresh the package index and upgrade
sudo apt-get update
sudo apt-get upgrade

Disabling the Firewall

sudo ufw disable

Disabling Swap

# Turn off swap for the current session
sudo swapoff -a
# Permanently comment out the swap entry in /etc/fstab
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
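The sed expression above comments out every fstab line that mentions swap. A quick sketch of its effect, run against a throwaway sample file rather than the real /etc/fstab (the UUID and paths here are made up):

```shell
# Demonstrate the swap-disabling sed on a sample fstab (not the real one).
printf '%s\n' \
  'UUID=1234-abcd / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > /tmp/fstab.sample
sed -i 's/.*swap.*/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
# The swap line is now prefixed with '#'; the root entry is untouched.
```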

Disabling SELinux

# Install the SELinux utilities
sudo apt install -y selinux-utils
# Disable SELinux enforcement
sudo setenforce 0
# Reboot the system
sudo shutdown -r now

# After the reboot, check the SELinux status
sudo getenforce
# "Disabled" means it is off

Network Configuration

Initialization

# Create k8s.conf
sudo vim /etc/sysctl.d/k8s.conf
# Add the following lines
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0

# Apply the changes
sudo sysctl --system

Configuring a Static IP

# Check the gateway and addresses first:
# nmcli dev show ens33

sudo vim /etc/netplan/00-installer-config.yaml
#
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses: [192.168.231.128/24]
      dhcp4: false
      gateway4: 192.168.231.2
      nameservers:
        addresses: [192.168.231.2]
      optional: true
  version: 2

sudo netplan apply

Editing /etc/hosts

# Edit the hosts file: sudo vim /etc/hosts
# Add entries for your three VMs
192.168.231.xxx master
# Once all three machines have their static IPs set,
# add every machine's static IP to the hosts file
# on every machine:
#192.168.231.xxx node01
#192.168.231.xxx node02

# Reboot
sudo shutdown -r now
# Apply the network configuration
sudo netplan apply
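Once all three machines are set up, the appended portion of every machine's /etc/hosts would look something like the following. The master IP matches the static address used earlier in this guide; the node01/node02 addresses are illustrative placeholders for your own static IPs:

```text
192.168.231.128 master
192.168.231.129 node01
192.168.231.130 node02
```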

Installing Docker

Removing Old Versions

# Uninstall old Docker versions
sudo apt-get remove docker docker-engine docker-ce docker.io
# Remove the old Docker's leftover dependencies
sudo apt-get remove --auto-remove docker

# Refresh the package index
sudo apt-get update

Installing Prerequisites

# Prerequisite packages
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

Adding the Aliyun GPG Key

curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

Adding the Aliyun Repository

sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Refresh the package index
sudo apt-get update

Installing a Specific Version

# 5:20.10.5~3-0~ubuntu-focal is the version deployed in our lab
sudo apt-get install -y docker-ce=5:20.10.5~3-0~ubuntu-focal
# sudo apt-get install -y docker-ce   # installs the latest version instead

Restarting and Checking the Installed Version

# Restart Docker
sudo service docker restart
# or
sudo systemctl restart docker
# Check the Docker version
sudo docker version

Setting Up the Aliyun Registry Mirror

Log in to the Aliyun Docker registry-mirror console.

Run the command shown in the console.

You can also add the following to daemon.json:

# Recommended by the official docs for stability
"exec-opts": ["native.cgroupdriver=systemd"]
# The next two options configure Docker's logging:
# max-size=300m caps each container log file at 300 MB
# max-file=3 keeps three rotated log files per container
"log-driver": "json-file",
"log-opts": {"max-size": "300m", "max-file": "3"}
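Putting it together, a complete /etc/docker/daemon.json might look like the sketch below. The registry-mirrors URL is a placeholder for the personal accelerator address the Aliyun console gives you:

```json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "300m", "max-file": "3"}
}
```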

Restarting Docker

sudo systemctl daemon-reload
sudo systemctl restart docker

Adding the User to the docker Group

# Add your user to the docker group so docker commands no longer need sudo
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the change to take effect

Appendix: Handy Docker Commands

# List stopped containers
docker ps -a | grep Exited | awk '{print $1}'
# Remove all stopped containers
docker rm `docker ps -a | grep Exited | awk '{print $1}'`

# Remove an image
docker rmi tilemap_pkg_services:latest
# Build an image
docker build -t tilemap_pkg_services .
# Run the image as a container
docker run -i -t -p 5001:5001 --name tilemap_pkg_services tilemap_pkg_services

# Enter a running container with bash to browse its filesystem:
docker exec -it [CONTAINER ID] bash

Installing K8S

Editing sources.list

sudo vim /etc/apt/sources.list
# Append to the end of /etc/apt/sources.list:
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

Updating the Sources

sudo apt update
# If this fails with a missing-key (NO_PUBKEY) error, run:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https

Installing the K8S Packages

# Default versions used in our lab
sudo apt update && sudo apt-get install -y kubelet=1.19.4-00 kubeadm=1.19.4-00 kubectl=1.19.4-00 kubernetes-cni=0.8.7-00

Verifying the Installation

kubectl --help

Enabling kubelet at Boot

sudo systemctl enable kubelet && sudo systemctl start kubelet





All Three Machines Must Be Configured

Here is a fast way to configure all three once one machine is fully set up:

  1. Configure one VM, clone it three times, and change each clone's IP and hostname

    sudo hostnamectl set-hostname server.local
    • Note: the login username and password stay the same; only the server name changes.
  2. Or set up the other two VMs and configure them by hand (not recommended)

Things to pay attention to when running the steps above on all three machines:

  1. Each machine's static IP
  2. Each machine's hosts file

Once all three machines have K8S installed, proceed with the following steps.





Master Node Configuration

Done on the Master VM.

Initializing the Master Node

# apiserver-advertise-address=192.168.xxx.xxx is your master's IP (set earlier in the hosts file and static-IP config)
sudo kubeadm init --apiserver-advertise-address=192.168.xxx.xxx --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.4 --pod-network-cidr=10.244.0.0/16

Save the Join Command (Important)

kubeadm join 192.168.231.128:6443 --token acqsqj.q62qu2owl78m7nem \
--discovery-token-ca-cert-hash sha256:d95f02022fb42a1ca6b0edeb0903408df241bd23ffbfd567221e281bb1dbcf75

Copying the kubeconfig on the Master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploying the Network Plugin

# Check node status
kubectl get nodes
# master shows NotReady

# Install the network plugin
wget https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml

sudo vim calico.yaml

# Edit the file:
# set typha_service_name to "calico-typha"
# set CALICO_IPV4POOL_IPIP to Never
# uncomment CALICO_IPV4POOL_CIDR and set its value to "10.244.0.0/16"

# Apply the changes
kubectl apply -f calico.yaml

# Wait for every pod to reach Running
kubectl get pod --all-namespaces

# Check node status again
kubectl get nodes
# master shows Ready

Calico Versions vs. Kubernetes Versions

  • Kubernetes 1.18 / 1.19 / 1.20: Calico 3.18
    https://projectcalico.docs.tigera.io/archive/v3.18/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.18/manifests/calico.yaml
  • Kubernetes 1.19 / 1.20 / 1.21: Calico 3.19
    https://projectcalico.docs.tigera.io/archive/v3.19/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.19/manifests/calico.yaml
  • Kubernetes 1.19 / 1.20 / 1.21: Calico 3.20
    https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml
  • Kubernetes 1.20 / 1.21 / 1.22: Calico 3.21
    https://projectcalico.docs.tigera.io/archive/v3.21/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml
  • Kubernetes 1.21 / 1.22 / 1.23: Calico 3.22
    https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml
  • Kubernetes 1.21 / 1.22 / 1.23: Calico 3.23
    https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml
  • Kubernetes 1.22 / 1.23 / 1.24: Calico 3.24
    https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements
    https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml

Worker Node Configuration

Done on the Node01 and Node02 VMs.

Joining the Master

Use the join command that kubeadm init generated on the master earlier:

sudo kubeadm join 192.168.231.128:6443 --token acqsqj.q62qu2owl78m7nem \
--discovery-token-ca-cert-hash sha256:d95f02022fb42a1ca6b0edeb0903408df241bd23ffbfd567221e281bb1dbcf75

Copying admin.conf

Use Xshell and XFTP to copy files between the machines.

Copy the master's admin.conf to each worker node, then set up the kubeconfig:

# In the master's Xshell session:
# open up permissions on /etc/kubernetes
sudo chmod 777 -R /etc/kubernetes
# in XFTP, copy the master's /etc/kubernetes/admin.conf to your desktop

# In each worker's Xshell session:
# open up permissions on /etc/kubernetes
sudo chmod 777 -R /etc/kubernetes
# in XFTP, copy admin.conf from your desktop into the worker's /etc/kubernetes

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Installing Kuboard

Reference: 安装 Kuboard v3 - kubernetes | Kuboard

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

NAME                               READY   STATUS    RESTARTS   AGE
kuboard-agent-2-657745c889-r6sjm   1/1     Running   1          10m
kuboard-agent-9cd4fb78f-hzfnr      1/1     Running   1          10m
kuboard-etcd-v4mns                 1/1     Running   0          11m
kuboard-questdb-7847f97ff-kcg9q    1/1     Running   0          10m
kuboard-v3-59ccddb94c-tvljh        1/1     Running   0          11m

Accessing the Kuboard Page

  • Username: admin
  • Password: Kuboard123

Kubernetes Operations

Deleting Evicted Pods

# the -n flag is optional; point it at whichever namespace you need
kubectl get pods -n kuboard | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
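To see what the pipeline does without touching a cluster, here is the same grep/awk selection run against a mocked-up kubectl listing (the pod names are invented):

```shell
# Mock "kubectl get pods" output; grep keeps Evicted rows, awk keeps column 1.
printf '%s\n' \
  'NAME    READY  STATUS   RESTARTS  AGE' \
  'web-a   1/1    Running  0         5m' \
  'web-b   0/1    Evicted  0         5m' > /tmp/pods.txt
grep Evicted /tmp/pods.txt | awk '{print $1}'
# → web-b   (in the real command, xargs feeds these names to kubectl delete pod)
```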

Creating Your First Image

Reference: 在K8S上跑一个helloworld - 简书 (jianshu.com)

kubectl get namespaces # list all namespaces

kubectl get pods # list pods

kubectl delete -n default pod hello-node # delete the pod named hello-node in the default namespace

kubectl get svc # svc is short for service
kubectl get deployment

Using a NodePort Service

# Pod workload (Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: k8s-test01 # mind the namespace
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: hello-node:v1
        ports:
        - containerPort: 8080
# web.yaml
# NodePort Service
# reachable locally via nodeIP:nodePort
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: k8s-test01 # mind the namespace
  labels:
    app: web
spec:
  type: NodePort # service type
  ports:
  - port: 8099 # Service port; reachable via ClusterIP:port
    protocol: TCP # protocol
    targetPort: 8080 # container port, where the app inside the container listens
    nodePort: 30009 # externally exposed port (can be set explicitly); reachable via nodeIP:nodePort
  selector:
    app: hello-node # label of the Pods this Service targets


Creating a Private Image Registry

Purpose

Avoid having to docker build the same image over and over on every node.

Reference: k8s集群部署harbor镜像仓库_k8s配置镜像仓库_机灵的小小子的博客-CSDN博客

After deploying Harbor, run docker login <harbor IP> on each node.

Building, Pushing, and Pulling Images

# Build the image
docker build -t hello-node:v6 .

docker tag SOURCE_IMAGE[:TAG] 192.168.235.131/k8s/REPOSITORY[:TAG]

docker push 192.168.235.131/k8s/REPOSITORY[:TAG]

docker pull 192.168.235.131/k8s/hello-node:v6
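Filling in the placeholders for this section's hello-node example, the full image reference follows the registry/project/name:tag pattern (values below are the ones used in these examples):

```shell
# Compose the Harbor image reference used in this section's examples.
registry=192.168.235.131
project=k8s
name=hello-node
tag=v6
echo "${registry}/${project}/${name}:${tag}"
# → 192.168.235.131/k8s/hello-node:v6
```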

Updating the Image

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: k8s-test01
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: 192.168.235.131/k8s/hello-node:v6 # switched to the Harbor image
        ports:
        - containerPort: 38080
      imagePullSecrets:
      - name: regcred

Pod Pull Failures

[k8s] kubernetes从harbor拉取镜像没有权限解决方法 unauthorized_error response from daemon: unauthorized: unauthor-CSDN博客

# Give Pods the equivalent of docker login
# regcred is the secret name and can be anything you like;
# the server is the Harbor address; admin/Harbor12345 are Harbor's default credentials
kubectl create secret docker-registry regcred \
--docker-server=192.168.235.131 \
--docker-username=admin \
--docker-password=Harbor12345

Updating the YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: k8s-test01
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: 192.168.235.131/k8s/hello-node:v6 # switched to the Harbor image
        ports:
        - containerPort: 38080
      imagePullSecrets: # add the regcred secret
      - name: regcred

Applying It

kubectl apply -f deployment.yaml

Node Eviction

[kubelet 压力驱逐 - The node had condition:DiskPressure]_the node had condition: [memorypressure].-CSDN博客

Cluster Upgrades

Notes

1. Upgrade one minor version at a time:

1.19.x > 1.20.x > 1.21.x > 1.22.x ... 1.29.x

2. From 1.24.x on, Kubernetes uses containerd as the container runtime instead of Docker, so the runtime service must be switched.

3. To list the available versions:

apt-cache madison kubeadm
apt-cache madison kubectl
apt-cache madison kubelet
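The one-minor-version rule can be sanity-checked with a little shell before each upgrade. A sketch, assuming MAJOR.MINOR.PATCH version strings (the two versions below are example values):

```shell
# Check that the target is at most one minor version ahead of the current one.
current=1.19.4
target=1.20.15
cur_minor=$(echo "$current" | cut -d. -f2)
tgt_minor=$(echo "$target" | cut -d. -f2)
if [ $((tgt_minor - cur_minor)) -le 1 ]; then
  echo "ok: $current -> $target is a single-minor jump"
else
  echo "refuse: $current -> $target skips minor versions"
fi
```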

Applies to 1.19.x Through 1.23.x

Upgrading the Master Node

# Example: upgrading 1.21.17 > 1.22.17

# Drain the master node ("master" is the master's node name)
kubectl cordon master
kubectl drain master --ignore-daemonsets

# Install the new kubeadm
sudo apt install -y kubeadm=1.22.17-00

# Check the available upgrade plan
sudo kubeadm upgrade plan

# Run the upgrade
# (if the console's "stable" version differs from your kubeadm version, apply the kubeadm version)
sudo kubeadm upgrade apply v1.22.17

# Upgrade kubelet and kubectl
sudo apt install kubelet=1.22.17-00 kubectl=1.22.17-00

# Restart the service
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Uncordon the master
kubectl uncordon master

# Check node status and versions
kubectl get nodes -o wide

Upgrading the Worker Nodes

After the master upgrade is complete:

# Install the new kubeadm
sudo apt install -y kubeadm=1.22.17-00

# Drain the worker node
kubectl cordon node01
kubectl drain node01 --ignore-daemonsets

# Upgrade the node
sudo kubeadm upgrade node

# Upgrade kubelet and kubectl
sudo apt install kubelet=1.22.17-00 kubectl=1.22.17-00

# Restart the service
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Uncordon the node
kubectl uncordon node01

Checking the Upgrade

# Check node status and versions
kubectl get nodes

Applies to 1.23.x Through 1.29.x

Migrating the Container Runtime

From 1.24.x on, the container runtime is containerd.

When upgrading from 1.23.x to 1.24.x, the container runtime must be migrated from Docker to containerd!

The migration is only needed across that one boundary; once the cluster is on containerd, later 1.24.x+ upgrades need no further migration.

Skip this step for upgrades beyond 1.24.x.

# Check the container runtime
# If the last column shows containerd, do not migrate; if it shows docker, migrate
kubectl get nodes -o wide
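As an illustration of reading that column, here is the same last-field extraction run on mocked-up `kubectl get nodes -o wide` rows (the node names match this guide's examples; the other fields are invented):

```shell
# Mock node rows; the last field stands in for the CONTAINER-RUNTIME column.
printf '%s\n' \
  'master Ready control-plane 100d v1.23.17 docker://20.10.5' \
  'node01 Ready worker 100d v1.23.17 containerd://1.5.5' > /tmp/nodes.txt
awk '{print $1, $NF}' /tmp/nodes.txt
# master still reports docker:// and needs migrating; node01 is already on containerd.
```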
# The steps are the same on every node: master, node01, node02, ...
# Runtime migration: docker > containerd

# Drain the node ("master" is the master's node name)
kubectl cordon master
kubectl drain master --ignore-daemonsets

# Stop the services
sudo systemctl stop kubelet
sudo systemctl stop docker
sudo systemctl stop docker.socket
sudo systemctl stop containerd
# Docker already uses containerd as its backend runtime, so no extra install is needed
# Generate the default containerd configuration

sudo chmod 777 /etc/containerd/config.toml
sudo containerd config default > /etc/containerd/config.toml

# Check the kubelet runtime flags
sudo cat /var/lib/kubelet/kubeadm-flags.env
# Note down the pause image, something like registry.aliyuncs.com/google_containers/pause:3.2
# Edit the default configuration
sudo vim /etc/containerd/config.toml

# Comment out the line disabled_plugins = []
# Set sandbox_image to "registry.aliyuncs.com/google_containers/pause:3.2"
# Under [plugins."io.containerd.grpc.v1.cri".registry.mirrors], add:
# [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
#   endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
# [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
#   endpoint = ["https://registry.aliyuncs.com/k8sxio"]
# Switch the kubelet runtime flags from docker to containerd
sudo vim /var/lib/kubelet/kubeadm-flags.env

# Add these flags:
# --container-runtime=remote
# --container-runtime-endpoint=unix:///run/containerd/containerd.sock
# Remove this flag:
# --network-plugin=cni
# Edit the node object
kubectl edit nodes master

# Change
# kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
# to
# kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock
# Restart the services
sudo systemctl daemon-reload
sudo systemctl restart containerd
sudo systemctl restart docker
sudo systemctl restart kubelet

Example config.toml
#disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
path = ""

[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0

[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0

[metrics]
address = ""
grpc_histogram = false

[plugins]

[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"

[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""

[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1

[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""

[plugins."io.containerd.grpc.v1.cri".registry.auths]

[plugins."io.containerd.grpc.v1.cri".registry.configs]

[plugins."io.containerd.grpc.v1.cri".registry.headers]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
endpoint = ["https://registry.aliyuncs.com/k8sxio"]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""

[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"

[plugins."io.containerd.internal.v1.restart"]
interval = "10s"

[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"

[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"

[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false

[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false

[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false

[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]

[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""

[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""

[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.overlayfs"]
mount_options = []
root_path = ""
sync_remove = false
upperdir_label = false

[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""

[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""

[proxy_plugins]

[stream_processors]

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"

[ttrpc]
address = ""
gid = 0
uid = 0

Migration Result

(screenshots: the nodes now report containerd as their container runtime)

Upgrading the Master Node

# Example: upgrading 1.23.17 > 1.24.17

# Drain the master node ("master" is the master's node name)
kubectl cordon master
kubectl drain master --ignore-daemonsets

# Install the new kubeadm
sudo apt install -y kubeadm=1.24.17-00

# Check the available upgrade plan
sudo kubeadm upgrade plan

# Run the upgrade
# (if the console's "stable" version differs from your kubeadm version, apply the kubeadm version)
sudo kubeadm upgrade apply v1.24.17

# Upgrade kubelet and kubectl
sudo apt install kubelet=1.24.17-00 kubectl=1.24.17-00

# Restart the service
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Uncordon the master
kubectl uncordon master

# Check node status and versions
kubectl get nodes -o wide

Upgrading the Worker Nodes

After the master upgrade is complete:

# Install the new kubeadm
sudo apt install -y kubeadm=1.24.17-00

# Drain the worker node
kubectl cordon node01
kubectl drain node01 --ignore-daemonsets

# Upgrade the node
sudo kubeadm upgrade node

# Upgrade kubelet and kubectl (same target version as the master)
sudo apt install kubelet=1.24.17-00 kubectl=1.24.17-00

# Restart the service
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Uncordon the node
kubectl uncordon node01

Checking the Upgrade

# Check node status and versions
kubectl get nodes

Upgrade Result

(screenshot: the upgraded nodes running on containerd)

Configuration for Newer Versions

Reference: 使用kubeadm部署Kubernetes 1.29 | 青蛙小白 (frognew.com)


Kubernetes 配置与初始化
https://anonymouslosty.ink/2023/07/04/Kubernetes-配置与初始化/
Author: Ling yi
Published: July 4, 2023
Updated: January 27, 2024