Preface
OS: CentOS-7-x86_64-Minimal-2009.iso
Kubernetes version: 1.23.4
https://www.52tect.com/container/2022/03/18/456.html
https://www.52tect.com/java/2022/03/18/453.html
https://www.52tect.com/container/2022/03/18/435.html
https://www.52tect.com/container/2022/03/17/421.html
1 Basic configuration
1 Disable the firewall on all nodes
systemctl disable --now firewalld
2 Disable SELinux on all nodes
# disable selinux
sed -ri 's/(^SELINUX=).*/\1disabled/' /etc/selinux/config
setenforce 0
3 Disable swap on all nodes
# disable swap
sed -ri 's@(^.*swap *swap.*0 0$)@#\1@' /etc/fstab
swapoff -a
4 Disable the postfix mail service on all nodes
systemctl disable --now postfix
5 Disable NetworkManager on all nodes
Background on network vs. NetworkManager in CentOS:
https://blog.csdn.net/weixin_41831919/article/details/101318928
systemctl disable --now NetworkManager
systemctl start network && systemctl enable network
6 Disable dnsmasq on all nodes
# disable dnsmasq
systemctl disable --now dnsmasq
7 Install base packages on all nodes
yum install curl conntrack ipvsadm ipset iptables jq sysstat libseccomp rsync wget psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 -y
8 Adjust kernel parameters on all nodes
#cmd
modprobe br_netfilter
#cmd
lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                200704  1 br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
#cmd
sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
If the br_netfilter module has not been loaded with modprobe br_netfilter, sysctl -p /etc/sysctl.d/k8s.conf fails with:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
If net.bridge.bridge-nf-call-iptables is not enabled, docker info shows the following warnings after installing Docker on CentOS:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
If net.ipv4.ip_forward = 1 is not set, kubeadm init fails with: [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: .....
net.ipv4.ip_forward controls packet forwarding:
Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and one of them receives a packet destined for another network, the packet is sent out through another interface according to the routing table. This is normally a router's job; to give a Linux system this routing/forwarding capability, the kernel parameter net.ipv4.ip_forward must be enabled. It reflects the system's current support for packet forwarding.
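A minimal sketch of this step: stage the sysctl fragment in a temporary directory first, so its contents can be reviewed before copying it to /etc/sysctl.d/ and applying it (the $tmpdir staging path is illustrative only; the parameter values are the ones this guide sets):

```shell
# Stage the fragment where it can be inspected before installation.
tmpdir=$(mktemp -d)
cat > "$tmpdir/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# All three parameters must be set to 1; count them as a sanity check.
grep -c '= 1' "$tmpdir/k8s.conf"   # prints 3

# To install for real (requires root):
#   modprobe br_netfilter
#   cp "$tmpdir/k8s.conf" /etc/sysctl.d/k8s.conf && sysctl -p /etc/sysctl.d/k8s.conf
```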
9 Raise resource limits on all nodes
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
10 Upgrade the kernel on all nodes
https://www.linuxprobe.com/update-kernel-centos7.html
The stock 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable.
# uname -sr
Linux 3.10.0-1160.el7.x86_64
# grubby --default-kernel
/boot/vmlinuz-3.10.0-1160.el7.x86_64
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installing, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default 'CentOS Linux (5.4.186-1.el7.elrepo.x86_64) 7 (Core)'
11 Time synchronization
11.1 Time server (master node)
Master node IP: 192.168.56.101
# install chrony on all machines
yum install chrony -y
# configure one host as the time server
vim /etc/chrony.conf
cat /etc/chrony.conf
# upstream source to sync time from
server time2.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
# NTP client subnet allowed to sync from this host
allow 192.168.56.1/24
local stratum 10
logdir /var/log/chrony
# restart the service
systemctl restart chronyd
11.2 Time clients
# point the other nodes at the time server
cat /etc/chrony.conf
server 192.168.56.101 iburst
# restart and verify
systemctl restart chronyd
chronyc sources -v
^* master01    3   6   17    5   -10us[ -109us] +/-   28ms
# a "^*" entry means synchronization is working
12 Install iptables
Install the service, flush the rules, and leave it disabled; Kubernetes initializes its own rules.
#cmd
yum install iptables-services -y
#cmd
service iptables stop && systemctl disable iptables
13 Enable IPVS
Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so enabling IPVS is recommended.
#cmd
vim /etc/sysconfig/modules/ipvs.modules
#cmd
cat /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_lc
modprobe -- ip_vs_wlc
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_lblc
modprobe -- ip_vs_lblcr
modprobe -- ip_vs_dh
modprobe -- ip_vs_sh
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- ip_vs_ftp
modprobe -- nf_conntrack
#cmd
chmod 755 /etc/sysconfig/modules/ipvs.modules
#cmd
bash /etc/sysconfig/modules/ipvs.modules
#cmd
lsmod | grep ip_vs
14 Configure yum repos on all nodes
#cmd
mkdir /root/repo.bak
mv /etc/yum.repos.d/* /root/repo.bak/
# use the Aliyun mirror
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# install the EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/k8s.repo
cat /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
# clear the yum cache and rebuild it
yum clean all
yum makecache -y
# list enabled yum repos
yum repolist enabled
15 Install Docker
containerd could be used instead; this guide uses Docker. After installing Docker, configure registry mirrors.
#cmd
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#cmd
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
#cmd
yum -y install docker-ce
# Installs docker-ce 3:20.10.13-3.el7 plus 15 dependencies, among them
# containerd.io 1.5.10-3.1.el7, docker-ce-cli 1:20.10.13-3.el7,
# docker-ce-rootless-extras 20.10.13-3.el7 and container-selinux 2:2.119.2-1.911c772.el7_8
# (total download 96 M, installed size 387 M)
#cmd
mkdir /etc/docker
#cmd
vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://rsbud4vc.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com",
    "http://qtid6917.mirror.aliyuncs.com",
    "https://rncxm540.mirror.aliyuncs.com"
  ],
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ]
}
#cmd
systemctl daemon-reload
systemctl restart docker
docker version
Engine:
  Version: 20.10.14
3 Install cfssl
3.1 Installation
https://blog.csdn.net/weixin_50908696/article/details/123031783
cfssl is a certificate signing toolkit that greatly simplifies the signing process and makes it easy to issue self-signed certificates.
cfssl does not publish official arm64 binaries, so on arm64 you have to build it yourself; on amd64, simply download the official release binaries.
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
[root@k8s-master1 ~]# ll cfssl*
-rw-r--r-- 1 root root 16659824 Feb 21 09:12 cfssl_1.6.1_linux_amd64
-rw-r--r-- 1 root root 13502544 Feb 21 09:12 cfssl-certinfo_1.6.1_linux_amd64
-rw-r--r-- 1 root root 11029744 Feb 21 09:12 cfssljson_1.6.1_linux_amd64
chmod +x cfssl*
cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
cp cfssl-certinfo_1.6.1_linux_amd64 /usr/local/bin/cfssl-certinfo
cp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
3.2 Create the CA
cfssl print-defaults config > ca-config.json
cfssl print-defaults csr > ca-csr.json
vim ca-config.json
cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and so on; a specific profile is referenced later when signing certificates;
signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE;
server auth: clients can use this CA to verify certificates presented by servers;
client auth: servers can use this CA to verify certificates presented by clients.
vim ca-csr.json
cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "Hangzhou",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
- CN: Common Name. kube-apiserver extracts this field from a certificate as the request's user name (User Name); browsers use it to verify whether a site is legitimate;
- O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to; the extracted User and Group then serve as the identity for RBAC authorization;
Note:
- The CN/C/ST/L/O/OU combination in each certificate's CSR file must be unique, otherwise the error PEER'S CERTIFICATE HAS AN INVALID SIGNATURE may occur;
- For the CSR files created later, keep C, ST, L, O, and OU the same and vary only CN to tell the certificates apart.
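To make the CN-as-user / O-as-group mapping concrete, here is a hypothetical throwaway demonstration, not part of the cluster setup, assuming openssl is installed: generate a self-signed certificate carrying the same subject fields, then read the subject back the way kube-apiserver would interpret it.

```shell
# Throwaway demo cert in a temp dir; nothing here touches the real CA files.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$d/demo-key.pem" -out "$d/demo.pem" \
  -subj "/C=CN/ST=Zhejiang/L=Hangzhou/O=k8s/OU=system/CN=kubernetes" 2>/dev/null

# The subject line carries CN=kubernetes (the "user") and O=k8s (the "group").
openssl x509 -in "$d/demo.pem" -noout -subject
```

The same fields can also be inspected on a real certificate with cfssl-certinfo -cert ca.pem once the CA has been generated.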
3.3 Generate the CA certificate and private key
#cmd
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
#cmd
ll ca*
-rw-r--r-- 1 root root  387 Feb 21 09:20 ca-config.json
-rw-r--r-- 1 root root 1045 Feb 21 09:47 ca.csr
-rw-r--r-- 1 root root  257 Feb 21 09:45 ca-csr.json
-rw------- 1 root root 1679 Feb 21 09:47 ca-key.pem
-rw-r--r-- 1 root root 1310 Feb 21 09:47 ca.pem
3.4 Distribute the certificates
Copy the generated CA certificate, key, and config file to /etc/kubernetes/ssl on every machine:
#cmd
mkdir -p /etc/kubernetes/ssl
cp ca* /etc/kubernetes/ssl
# not present yet: ssh k8s-master2 "mkdir -p /etc/kubernetes/ssl"
# not present yet: ssh k8s-master3 "mkdir -p /etc/kubernetes/ssl"
#cmd
ssh k8s-node1 "mkdir -p /etc/kubernetes/ssl"
ssh k8s-node2 "mkdir -p /etc/kubernetes/ssl"
# not present yet: scp ca* k8s-master2:/etc/kubernetes/ssl/
# not present yet: scp ca* k8s-master3:/etc/kubernetes/ssl/
#cmd
scp ca* k8s-node1:/etc/kubernetes/ssl/
#cmd
scp ca* k8s-node2:/etc/kubernetes/ssl/
4 Deploy a highly available etcd cluster
Kubernetes stores all of its data in etcd. Here we plan a 3-node etcd cluster that reuses the 3 Kubernetes master nodes, named etcd01, etcd02, and etcd03:
etcd00:192.168.56.101
etcd01:192.168.56.104
etcd02:192.168.56.105
etcd03:192.168.56.106
4.1 Download etcd
#cmd
wget https://github.com/coreos/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
tar -xvf etcd-v3.5.2-linux-amd64.tar.gz
cp -p etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/
# not present yet: scp -r etcd-v3.5.2-linux-amd64/etcd* k8s-master2:/usr/local/bin/
# not present yet: scp -r etcd-v3.5.2-linux-amd64/etcd* k8s-master3:/usr/local/bin/
4.2 Create TLS keys and certificates
To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members themselves, is encrypted with TLS.
Create the etcd certificate signing request:
#cmd
vim etcd-csr.json
# In the hosts field, list the IPs of every etcd cluster node. Plan ahead and
# reserve a few extra IPs for future expansion; six IPs are listed here to
# cover the eventual cluster plus spares.
cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.56.101",
    "192.168.56.104",
    "192.168.56.105",
    "192.168.56.106",
    "192.168.56.107"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "Hangzhou",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
#cmd
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
#cmd
mkdir -p /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl/
cp ca*.pem /etc/etcd/ssl/
# not present yet: ssh k8s-master2 mkdir -p /etc/etcd/ssl
# not present yet: ssh k8s-master3 mkdir -p /etc/etcd/ssl
# not present yet: scp /etc/etcd/ssl/* k8s-master2:/etc/etcd/ssl/
# not present yet: scp /etc/etcd/ssl/* k8s-master3:/etc/etcd/ssl/
4.3 Create the configuration file
#cmd
cat /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.3.61:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.3.61:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.3.61:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.3.61:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.3.61:2380,etcd2=https://192.168.3.62:2380,etcd3=https://192.168.3.63:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# not run here: scp /etc/etcd/etcd.conf k8s-master2:/etc/etcd/
# etcd.conf    100%  520   457.0KB/s   00:00
# not run here: scp /etc/etcd/etcd.conf k8s-master3:/etc/etcd/
# etcd.conf    100%  520   445.7KB/s   00:00
#cmd
vim /etc/etcd/etcd.conf
# actual single-machine setup
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.56.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.101:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.101:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.56.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
When copying the config file to the other two nodes, be sure to change the node name and the IPs.
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one
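Since only ETCD_NAME and the IPs differ between nodes, the per-node files can be stamped out from one loop instead of edited by hand. A sketch using this guide's example node names and IPs; the output directory is a temp dir, and each generated file would then be copied to /etc/etcd/etcd.conf on its node:

```shell
# Generate one etcd.conf per node; the node list is name:ip pairs.
outdir=$(mktemp -d)
nodes="etcd1:192.168.56.101 etcd2:192.168.56.104 etcd3:192.168.56.105"
cluster="etcd1=https://192.168.56.101:2380,etcd2=https://192.168.56.104:2380,etcd3=https://192.168.56.105:2380"

for n in $nodes; do
  name=${n%%:*}; ip=${n##*:}
  cat > "$outdir/etcd.conf.$name" <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="$cluster"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done

grep ETCD_NAME "$outdir/etcd.conf.etcd2"   # ETCD_NAME="etcd2"
```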
After creating the config file, also create the data directory:
#cmd
mkdir -p /var/lib/etcd/default.etcd
# not present yet: ssh k8s-master2 mkdir -p /var/lib/etcd/default.etcd
# not present yet: ssh k8s-master3 mkdir -p /var/lib/etcd/default.etcd
4.4 Create the systemd unit file
vim etcd.service
cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
#cmd
cp etcd.service /usr/lib/systemd/system/
# not present yet: scp etcd.service k8s-master2:/usr/lib/systemd/system/
# etcd.service    100%  635   587.5KB/s   00:00
# not present yet: scp etcd.service k8s-master3:/usr/lib/systemd/system/
# etcd.service    100%  635   574.7KB/s   00:00
4.5 Start etcd
Run on all nodes:
#cmd
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd
systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-23 17:38:44 CST; 4s ago
 Main PID: 2394 (etcd)
   CGroup: /system.slice/etcd.service
           └─2394 /usr/local/bin/etcd --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-cert-file=/etc/etcd/ssl/etcd.pem --peer-key-...
Mar 23 17:38:44 k8s-master systemd[1]: Started Etcd Server.
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.105+0800","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.106+0800","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discou...7.0.0.1:2379"}
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.108+0800","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.56.104:2379"}
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.108+0800","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.115+0800","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"11c2f93a...ersion":"3.5"}
Mar 23 17:38:44 k8s-master etcd[2394]: {"level":"info","ts":"2022-03-23T17:38:44.115+0800","caller":"etcdserver/server.go:2504","msg":"cluster version is updated","cluster-version":"3.5"}
Hint: Some lines were ellipsized, use -l to show in full.
4.6 Verify the cluster
After the etcd cluster is deployed, run the following on any etcd node; "is healthy" indicates a healthy member.
# sample output from a 3-node cluster:
[root@k8s-master1 ~]# for ip in 192.168.3.61 192.168.3.62 192.168.3.63; do
>   ETCDCTL_API=3 /usr/local/bin/etcdctl \
>   --endpoints=https://${ip}:2379 \
>   --cacert=/etc/etcd/ssl/ca.pem \
>   --cert=/etc/etcd/ssl/etcd.pem \
>   --key=/etc/etcd/ssl/etcd-key.pem \
>   endpoint health; done
https://192.168.3.61:2379 is healthy: successfully committed proposal: took = 7.41116ms
https://192.168.3.62:2379 is healthy: successfully committed proposal: took = 9.36961ms
https://192.168.3.63:2379 is healthy: successfully committed proposal: took = 8.957572ms

# single-node check for this guide's actual setup:
for ip in 192.168.56.101; do
  ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://${ip}:2379 \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health; done