Although I have already churned out a tutorial for this OS and this Kubernetes version, that one ran etcd in a container; this time etcd runs externally, and unlike the earlier tutorials it has SSL enabled.
Since only one machine was available while writing this, Kubernetes is installed on a single host and the control-plane node is made schedulable.
Machine IP: 172.30.88.191
Hostname (after the change below): node01
Basic setup
Set the hostname
# hostnamectl --static set-hostname [hostname]
hostnamectl --static set-hostname node01
Disable the firewall
service firewalld stop
systemctl disable firewalld
Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0
Disable swap
# Temporary (lost on reboot; if swap stays on, kubelet will fail to start)
swapoff -a
# Permanent:
# edit /etc/fstab and comment out the /dev/mapper/centos-swap swap line
vi /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
# Or do it with sed (feels a little risky)
sed -i 's@/dev/mapper/centos-swap@#/dev/mapper/centos-swap@g' /etc/fstab
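Before touching the real /etc/fstab, the sed expression can be tried on a throwaway sample first; the path /tmp/fstab.sample and its contents below are invented for illustration:

```shell
# A throwaway sample fstab (contents invented for illustration)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.sample
# Same sed expression as above, run against the sample
sed -i 's@/dev/mapper/centos-swap@#/dev/mapper/centos-swap@g' /tmp/fstab.sample
# Only the swap line should now be commented out
cat /tmp/fstab.sample
```

Once the output looks right, run the same expression against /etc/fstab.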
Configure kernel parameters
Full set of kernel parameters
echo '
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 10
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 65535
net.ipv4.tcp_retries2 = 80
net.core.wmem_max = 21299200
net.core.rmem_max = 21299200
net.core.wmem_default = 21299200
net.core.rmem_default = 21299200
kernel.sem = 250 6400000 1000 25600
net.ipv4.tcp_rmem = 8192 250000 16777216
net.ipv4.tcp_wmem = 8192 250000 16777216
vm.overcommit_memory = 1
vm.swappiness=0
kernel.threads-max = 999999
vm.max_map_count = 999999
' > /etc/sysctl.conf
sysctl -p
Minimal kernel parameters that are enough to get going
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
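Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded. If sysctl --system reports "No such file or directory" for them, load the module first; a sketch (the modules-load.d file name is my own choice):

```shell
# Load the module now; the bridge sysctls only appear once it is loaded
modprobe br_netfilter
# Load it automatically on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-apply the sysctl settings
sysctl --system
```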
Configure package repositories
docker-ce
cat <<EOF | tee /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/\$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo \$basearch
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/debug-\$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://repo.huaweicloud.com/docker-ce/linux/centos/\$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/docker-ce/linux/centos/gpg
EOF
Kubernetes
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Install packages
Per the compatibility matrix, install Kubernetes 1.19.16 and the matching Docker 19.03.9.
Install docker-ce
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-19.03.9
Install kubelet, kubeadm, and kubectl
# The repo above excludes kubelet/kubeadm/kubectl, so lift the exclude for this install
yum install kubelet-1.19.16 kubeadm-1.19.16 kubectl-1.19.16 --disableexcludes=kubernetes
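A quick check that the pinned versions actually landed; each of these should report 1.19.16:

```shell
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet
```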
Install, configure, and start etcd
Per the compatibility matrix, Kubernetes 1.19.16 pairs with etcd 3.4.13.
This is a single-node etcd; a multi-node setup differs only slightly.
Download and install
# Download etcd (amd64)
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
# Unpack
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz
cd etcd-v3.4.13-linux-amd64/
cp etcd* /usr/local/bin/
Generate SSL certificates
Download and install the required tool, cfssl
# Download the binaries
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Make them executable
chmod +x cfssl*
# Strip the _linux-amd64 suffix
for x in cfssl*; do mv $x ${x%*_linux-amd64}; done
# Move them into /usr/bin
mv cfssl* /usr/bin
Configure the CA and create TLS certificates
We use CloudFlare's PKI toolkit, cfssl, to set up the PKI infrastructure: first create a Certificate Authority (CA), then use it to issue TLS certificates for etcd.
Create the directories
mkdir /opt/etcd/{bin,cfg,ssl} -p
cd /opt/etcd/ssl/
etcd CA config
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "876600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
etcd CA certificate request
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "GuangZhou",
      "ST": "GuangDong"
    }
  ]
}
EOF
Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
This produces the following two files:
ca-key.pem
ca.pem
etcd server certificate request
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "172.30.88.191",
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "GuangZhou",
      "ST": "GuangDong"
    }
  ]
}
EOF
Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
This produces the following two files:
server-key.pem
server.pem
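It is worth sanity-checking the generated certificate, e.g. that the SANs cover both 172.30.88.191 and 127.0.0.1 and that the validity matches the 876600h profile. A sketch using openssl (cfssl-certinfo -cert server.pem shows the same data):

```shell
# List the subject alternative names baked into the server cert
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
# Show notBefore/notAfter
openssl x509 -in server.pem -noout -dates
```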
Configure the systemd service
cat <<'EOF' > /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/local/bin/etcd \
  --name=node01 \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://172.30.88.191:2380 \
  --listen-peer-urls https://172.30.88.191:2380 \
  --listen-client-urls https://172.30.88.191:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.30.88.191:2379 \
  --initial-cluster-token=my-etcd-token \
  --initial-cluster=node01=https://172.30.88.191:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start etcd
systemctl daemon-reload
mkdir -p /var/lib/etcd
systemctl restart etcd
Check etcd cluster status
etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.30.88.191:2379" endpoint status --write-out=table
etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.30.88.191:2379" endpoint health
etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.30.88.191:2379" member list
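Beyond the status commands, a small write/read/delete round-trip confirms the TLS client path end to end; the key name /sample below is arbitrary:

```shell
# Reuse the TLS flags from the status commands above
ETCD_FLAGS="--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints=https://172.30.88.191:2379"
etcdctl $ETCD_FLAGS put /sample hello
etcdctl $ETCD_FLAGS get /sample
etcdctl $ETCD_FLAGS del /sample
```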
Notes (not applied in this setup)
In etcd 3.4, ETCDCTL_API=3 for etcdctl and --enable-v2=false for etcd became the defaults, so to use the v2 API you must set the ETCDCTL_API environment variable when running etcdctl, e.g. ETCDCTL_API=2 etcdctl.
etcd 3.4 also reads its options from environment variables, so any parameter already present in the EnvironmentFile must not be repeated as an ExecStart flag; pick one. Configuring both triggers an error like: etcd: conflicting environment variable "ETCD_NAME" is shadowed by corresponding command-line flag (either unset environment variable or disable flag).
Note: flannel talks to etcd through the v2 API, while Kubernetes uses the v3 API. To stay compatible with flannel, v2 is enabled by default, which means setting ETCD_ENABLE_V2="true" in the config file /opt/soft/etcd/etcd.conf.
Since I am using Calico I could presumably turn v2 off, though leaving it on for compatibility does no harm either.
Configure Docker
mkdir /etc/docker/
touch /etc/docker/daemon.json
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "data-root": "/home/docker",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  },
  "storage-driver": "overlay2"
}
EOF
Restart Docker
service docker restart
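Since the cluster expects the systemd cgroup driver configured above, it is worth confirming Docker actually picked up daemon.json after the restart:

```shell
# Should print: systemd
docker info --format '{{.CgroupDriver}}'
# Should print: /home/docker
docker info --format '{{.DockerRootDir}}'
```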
Initialize Kubernetes
Save the following ClusterConfiguration as kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.16
apiServer:
  certSANs:
  - 172.30.88.191
  - node01
  - 127.0.0.1
  extraArgs:
    insecure-bind-address: "0.0.0.0"
    insecure-port: "8080"
    service-node-port-range: "79-60000"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: 172.20.0.0/16
etcd:
  external:
    endpoints:
    - https://172.30.88.191:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
kubeadm init --config=kubeadm-config.yaml
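On success, kubeadm init prints the follow-up steps for pointing kubectl at the new cluster; without them, the kubectl commands below will not work:

```shell
# Copy the admin kubeconfig into place (as printed by kubeadm init)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```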
Make the control-plane node schedulable
Since there is only one node, the control-plane node must be made schedulable.
kubectl taint nodes node01 node-role.kubernetes.io/master-
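To confirm the taint is gone and pods can now schedule on the node:

```shell
# The Taints line should no longer list node-role.kubernetes.io/master:NoSchedule
kubectl describe node node01 | grep -i taints
kubectl get nodes
```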
Install Calico
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
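The node stays NotReady until the CNI comes up; watching the kube-system pods shows when Calico is ready:

```shell
# calico-node and calico-kube-controllers should reach Running
kubectl get pods -n kube-system -o wide
# The node should flip to Ready once the CNI initializes
kubectl get nodes
```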
Extra operations
Reset Kubernetes
kubeadm reset
rm -rf /etc/cni/net.d/*
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm $HOME/.kube/config
service etcd stop
rm -rf /var/lib/etcd/*
reboot
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Check certificate expiration
kubeadm alpha certs check-expiration
Enable services at boot
systemctl enable docker
systemctl enable etcd
systemctl enable kubelet
As of today, CentOS 8 no longer receives security updates (except perhaps emergency fixes for critical vulnerabilities?), so CentOS 7 is what people generally run now. I may move to Ubuntu later, even though I originally switched to CentOS from Debian.
ChiuYut
December 31, 2021