Honestly, I wasn't planning to churn out yet another Kubernetes install guide. I had already finished one that installs from binaries and works the same on any distro (though I never published it), and I meant to move on to other topics. But this release is special: since 1.20, using Docker only triggered a deprecation warning, while the 1.24 update drops Docker support for good.
That leaves containerd and CRI-O as the obvious candidates, right?
After going back and forth, I settled on containerd this time; I skipped CRI-O because I don't know how to install it 😓
Basic setup
Set the hostname
hostnamectl --static set-hostname [hostname]
Disable the firewall
service firewalld stop
systemctl disable firewalld
In one of my other posts I didn't disable the firewall outright; instead I opened only the ports that are actually used.
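As a sketch of that port-by-port alternative: the control-plane ports listed in the official Kubernetes docs can be opened individually (6443 for the API server, 2379-2380 for etcd, 10250 for kubelet, 10257 for controller-manager, 10259 for scheduler; worker nodes instead need 10250 plus the NodePort range 30000-32767). The commands are collected and printed rather than executed here, so the sketch is safe to run anywhere; pipe the output to sh on a real host.

```shell
# Build the list of firewall-cmd commands for the documented control-plane ports.
# Printed instead of executed so this demonstration has no side effects.
cmds=""
for port in 6443 2379-2380 10250 10257 10259; do
    cmds="${cmds}firewall-cmd --permanent --add-port=${port}/tcp
"
done
cmds="${cmds}firewall-cmd --reload"
echo "$cmds"
```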
Disable SELinux
# check the status
sestatus
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
# check the status again
sestatus
The official docs still say to disable SELinux until it is properly supported.
Disable swap
# Disable temporarily (reverts after reboot; enabled swap makes k8s fail to start)
swapoff -a
# Disable permanently:
# edit /etc/fstab and comment out the "/dev/mapper/centos-swap swap" line
vi /etc/fstab
# delete or comment out the swap line, e.g.: #/dev/mapper/centos-swap swap swap defaults 0 0
Newer versions have experimental support for running with swap enabled, but it requires an extra flag.
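For the curious, a hedged sketch of that experimental swap support: since Kubernetes 1.22 there is an alpha NodeSwap feature gate, and kubelet must also be told not to fail its preflight check when swap is detected. The field names follow the KubeletConfiguration API, but verify them against the docs for your exact version; the snippet is written to a temp path here, while on a real node it belongs in the kubelet config (e.g. /var/lib/kubelet/config.yaml).

```shell
# Sketch (alpha feature, Kubernetes >= 1.22): let kubelet tolerate enabled swap.
# /tmp is used here only for demonstration.
cat <<'EOF' > /tmp/kubelet-swap.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
EOF
cat /tmp/kubelet-swap.yaml
```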
Configure kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Configure package repositories
docker-ce
cat <<EOF | tee /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
EOF
Kubernetes
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
In my earlier tutorials, repo files written with a cat heredoc had problems with the dynamic variables (such as $basearch), which produced a broken repo that yum could not use. This time I followed the official Kubernetes docs and fixed it.
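The pitfall in question is easy to demonstrate with hypothetical URLs: in an unquoted heredoc (or a double-quoted string) the shell expands $basearch while writing the file, and since it is usually unset, the architecture component simply vanishes; escaping it as \$basearch lets the literal placeholder reach the repo file, where yum substitutes the real value later.

```shell
# Demonstration of heredoc/quote expansion eating repo variables.
unset basearch
unescaped="baseurl=https://example.com/$basearch/stable"
escaped="baseurl=https://example.com/\$basearch/stable"
echo "$unescaped"   # the architecture component is gone
echo "$escaped"     # the $basearch placeholder survives for yum
```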
Install the packages
We are installing Kubernetes 1.24.0, which no longer supports Docker; the runtime chain now goes through containerd.
Install kubectl, kubeadm and kubelet
No version is pinned here; this installs the latest available from the repo:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Set it to start at boot and start it now:
systemctl enable --now kubelet
Configure the container runtime
This section covers the necessary steps to use containerd as the CRI runtime.
Install containerd on your system with the following commands.
Prerequisites for installation and configuration
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without rebooting
sysctl --system
Install containerd
1. Install the containerd.io package from the official Docker repositories. Instructions for setting up the Docker repository and installing the containerd.io package on your Linux distribution can be found in "Install Docker Engine". (This was my initial installation method.)
yum install containerd.io
Or install from the binary release (the method I switched to later; it gets you a newer version):
wget https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-amd64.tar.gz
tar zxf containerd-1.6.4-linux-amd64.tar.gz -C /usr/local/
Configuration file
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
containerd.service
touch /lib/systemd/system/containerd.service
vi /lib/systemd/system/containerd.service
Contents of containerd.service:
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
The manually downloaded binary tarball lacks runc, which prevented kube-proxy from being brought up. You need to put a runc binary on the PATH yourself (I copied it from /usr/bin on another machine to /usr/local/bin, which keeps things easy to manage) and make it executable with chmod 755 /usr/local/bin/runc. Installing it with yum install runc would probably work as well.
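A small preflight sketch along those lines, so a bootstrap script fails fast instead of containerd failing later when it tries to create containers (the path argument is whatever location you chose above):

```shell
# Helper: report whether an executable runc exists at the given path.
check_runc() {
    if [ -x "$1" ]; then
        echo "OK"
    else
        echo "MISSING"
    fi
}
check_runc /bin/sh        # any existing executable -> OK
check_runc /no/such/runc  # -> MISSING
```

On a real node you would call it as `check_runc /usr/local/bin/runc` and abort the setup on MISSING.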
2. However containerd was installed, set it to start at boot (and start it now). If you created the unit file by hand, reload systemd first:
systemctl daemon-reload
systemctl enable --now containerd
3. Configure containerd:
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
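If you would rather not open an editor, a sed one-liner can flip the flag in the generated default config, since it ships with SystemdCgroup = false. The sketch below runs against a throwaway copy in a temp directory (an assumption for safe demonstration); point it at /etc/containerd/config.toml on a real node.

```shell
# Flip SystemdCgroup in the generated config without opening an editor.
tmp=$(mktemp -d)
cat <<'EOF' > "$tmp/config.toml"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$tmp/config.toml"
grep 'SystemdCgroup' "$tmp/config.toml"   # -> SystemdCgroup = true
```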
Move the data directory onto a data disk:
# Change the paths near the top of the file
root = "/var/lib/containerd"
state = "/run/containerd"
# Point them at a larger disk mount as needed
4. Restart containerd:
systemctl restart containerd
Configure the default crictl endpoint
vi /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
Images that need to be pulled
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.24.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.24.0
[config/images] Pulled k8s.gcr.io/pause:3.7
[config/images] Pulled k8s.gcr.io/etcd:3.5.3-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
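k8s.gcr.io is often unreachable from mainland China, so a common workaround (an assumption consistent with the Aliyun mirrors used earlier in this post) is to pull each image from registry.aliyuncs.com/google_containers and retag it under its upstream name. That mirror flattens sub-paths, so coredns/coredns needs its path stripped. The ctr commands are left commented so the name-mapping sketch runs anywhere:

```shell
# Map an upstream image reference to its assumed Aliyun mirror path.
mirror_name() {
    ref=${1#k8s.gcr.io/}   # strip the upstream registry prefix
    echo "registry.aliyuncs.com/google_containers/${ref##*/}"
}

for img in kube-apiserver:v1.24.0 pause:3.7 coredns/coredns:v1.8.6; do
    src=$(mirror_name "k8s.gcr.io/$img")
    # On a real node, uncomment to pull and retag under the k8s.io namespace:
    # ctr -n k8s.io images pull "$src"
    # ctr -n k8s.io images tag "$src" "k8s.gcr.io/$img"
    echo "$src -> k8s.gcr.io/$img"
done
```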
Initialize the cluster
kubeadm init \
  --apiserver-advertise-address=192.168.1.80 \
  --kubernetes-version v1.24.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=172.16.0.0/16
The cluster
[root@k8s-master ~]# kubectl get pod -o wide -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE   IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-77484fbbb5-l7s9k   1/1     Running   0              57m   172.16.85.193    k8s-node01   <none>           <none>
kube-system   calico-node-5ln8p                          1/1     Running   0              37m   192.168.1.81     k8s-node01   <none>           <none>
kube-system   calico-node-lmqfr                          1/1     Running   2 (4m7s ago)   31m   192.168.1.83     k8s-node03   <none>           <none>
kube-system   calico-node-r8nct                          1/1     Running   0              57m   192.168.1.80     k8s-master   <none>           <none>
kube-system   calico-node-vrm4l                          1/1     Running   0              34m   192.168.1.82     k8s-node02   <none>           <none>
kube-system   coredns-6d4b75cb6d-l9465                   1/1     Running   0              60m   172.16.235.194   k8s-master   <none>           <none>
kube-system   coredns-6d4b75cb6d-zhdmt                   1/1     Running   0              60m   172.16.235.193   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0              60m   192.168.1.80     k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0              60m   192.168.1.80     k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0              60m   192.168.1.80     k8s-master   <none>           <none>
kube-system   kube-proxy-8gm2m                           1/1     Running   0              60m   192.168.1.80     k8s-master   <none>           <none>
kube-system   kube-proxy-cxrtq                           1/1     Running   0              34m   192.168.1.82     k8s-node02   <none>           <none>
kube-system   kube-proxy-kfsg7                           1/1     Running   0              37m   192.168.1.81     k8s-node01   <none>           <none>
kube-system   kube-proxy-krls9                           1/1     Running   0              31m   192.168.1.83     k8s-node03   <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0              60m   192.168.1.80     k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    control-plane   60m   v1.24.0   192.168.1.80   <none>        CentOS Linux 7 (Core)       3.10.0-1160.el7.x86_64      containerd://1.4.9
k8s-node01   Ready    <none>          37m   v1.24.0   192.168.1.81   <none>        CentOS Linux 7 (Core)       3.10.0-1160.el7.x86_64      containerd://1.6.4
k8s-node02   Ready    <none>          34m   v1.24.0   192.168.1.82   <none>        AlmaLinux 8.6 (Sky Tiger)   4.18.0-372.9.1.el8.x86_64   containerd://1.6.4
k8s-node03   Ready    <none>          31m   v1.24.0   192.168.1.83   <none>        AlmaLinux 8.6 (Sky Tiger)   4.18.0-372.9.1.el8.x86_64   containerd://1.6.4
[root@k8s-master ~]#
ChiuYut
May 15, 2022