CentOS 7: Installing Kubernetes 1.23.4 from Binaries (Part 2)


5 Installing the K8s Components

5.1 Downloading the packages

#download
wget https://storage.googleapis.com/kubernetes-release/release/v1.23.4/kubernetes-server-linux-amd64.tar.gz -O kubernetes-server-linux-amd64.tar.gz

tar zxvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin/

cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

#not present yet (no master2 in this setup)
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin

#not present yet (no master3 in this setup)
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin

scp kubelet kube-proxy k8s-node1:/usr/local/bin/

scp kubelet kube-proxy k8s-node2:/usr/local/bin/
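
A quick sanity check that the binaries are in place and the expected version (a sketch):

#verify the copied binaries
kube-apiserver --version
Kubernetes v1.23.4
kubectl version --client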

5.2 Deploying the apiserver

5.2.1 The TLS Bootstrapping mechanism

Once TLS authentication is enabled on the master apiserver, the kubelet on every node must present a valid certificate signed by the apiserver's CA to communicate with it. With many nodes, issuing these client certificates by hand is a lot of work and also complicates scaling the cluster.

To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet authenticates as a low-privilege user and requests a certificate from the apiserver, which signs the kubelet's certificate dynamically.

5.2.2 How TLS bootstrapping works

1. The role of TLS

TLS encrypts the communication and prevents eavesdropping by a man in the middle. If the client's certificate is not trusted, a connection to the apiserver cannot even be established, let alone be authorized to request anything from it.

2. The role of RBAC

Once TLS has solved the communication problem, authorization is handled by RBAC (other authorization models, such as ABAC, are possible). RBAC defines which APIs a user or group (subject) is allowed to call. Combined with TLS, the apiserver reads the client certificate's CN field as the user name and the O field as the group.

This tells us two things: first, to communicate with the apiserver at all, a client must use a certificate signed by the apiserver's CA, so that a trusted TLS connection can be established; second, the certificate's CN and O fields supply the user and group that RBAC needs.
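
To check which RBAC user and group a certificate will map to, inspect its Subject with openssl (a quick sketch; admin.pem is the kubectl client certificate generated later in section 6):

#CN becomes the RBAC user name, O becomes the group
openssl x509 -noout -subject -in /etc/kubernetes/ssl/admin.pem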

5.2.3 kubelet first-start flow

TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect. But how does the kubelet connect the very first time, when it has no certificate yet?

The apiserver configuration points at a token.csv file that contains a preset user. That user's token, together with the CA certificate trusted by the apiserver, is written into the bootstrap.kubeconfig file used by the kubelet. On its first request, the kubelet uses bootstrap.kubeconfig to establish a TLS connection with the apiserver (trusting the apiserver CA) and presents the preset user's token to declare its identity for RBAC. The token.csv format (token,user,uid,"groups"):

3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

On first start, the kubelet may report a 401 Unauthorized error against the apiserver. By default the kubelet declares its identity with the preset user token from bootstrap.kubeconfig and then creates a CSR request; but do not forget that, until we grant it something, this preset user has no permissions at all, not even permission to create CSRs. So a ClusterRoleBinding must be created that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests, as shown below.
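
For reference, the binding is a single command (it is actually created later, in section 9.1):

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap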

5.2.4 Creating the token.csv file

#cmd
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat token.csv
3609f8980544889a3224dc8c57954976,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

5.2.5 Creating the CSR request file

vim kube-apiserver-csr.json
cat kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.56.105",
    "192.168.56.106",
    "192.168.56.107",
    "192.168.56.104",
    "192.168.56.101",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "names": [{
        "C": "CN",
        "ST": "Zhejiang",
        "L": "Hangzhou",
        "O":"k8s",
        "OU":"system"
    }]
}

# 10.255.0.1 is the kubernetes service IP (pre-allocated; normally the first IP of the service CIDR)
# svc.cluster.local can be changed if you want a different cluster domain

The hosts field lists the IPs and DNS names authorized to use this certificate; here that is the master node IPs plus the kubernetes service IP and DNS names.

5.2.6 Generating the certificate

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
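
To confirm the certificate was issued with the expected SANs, a quick openssl check:

openssl x509 -noout -text -in kube-apiserver.pem | grep -A 1 'Subject Alternative Name'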

5.2.7 Creating the kube-apiserver systemd unit file

When copying to the other masters, change the IPs in master2's and master3's apiserver configuration:

--bind-address=192.168.56.101 \

--advertise-address=192.168.56.101 \

#cmd
vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
 
[Service]
#EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.56.101 \
  --secure-port=6443 \
  --advertise-address=192.168.56.101 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth=true \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.56.101:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4
Restart=on-failure

[Install]
WantedBy=multi-user.target

--logtostderr: log to standard error (disabled here so logs go to files)

--v: log verbosity level

--log-dir: log directory

--etcd-servers: etcd cluster endpoints

--bind-address: listen address

--secure-port: HTTPS secure port

--advertise-address: the IP the apiserver advertises to the cluster (the backend IP of the kubernetes service)

--allow-privileged: allow privileged containers

--service-cluster-ip-range: virtual IP range for Services

--enable-admission-plugins: admission control plugins

--authorization-mode: authorization modes; enables RBAC authorization and Node self-management

--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism

--token-auth-file: bootstrap token file

--service-node-port-range: default port range for NodePort Services

--kubelet-client-xxx: client certificate the apiserver uses to connect to kubelets

--tls-xxx-file: apiserver HTTPS certificates

--etcd-xxxfile: certificates for connecting to the etcd cluster

--audit-log-xxx: audit log settings

5.2.8 Distributing the apiserver certificates, configuration, and systemd unit

Remember to change the IPs in master2's and master3's apiserver configuration after copying.

cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
#not present locally

cp kube-apiserver.service /usr/lib/systemd/system/

#if you have additional master machines, copy the files below to them
scp kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-apiserver.service k8s-master2:/usr/lib/systemd/system/
scp kube-apiserver.service k8s-master3:/usr/lib/systemd/system/

5.2.9 Starting kube-apiserver

Run this on all three master nodes:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

#view logs
journalctl -u kube-apiserver
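
Two quick health checks (a sketch): confirm the secure port is listening, and, once the admin client certificate from section 6 exists, call /healthz over TLS:

#the secure port should be listening
ss -lntp | grep 6443

#should print "ok"; uses the admin cert generated in section 6
curl --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://192.168.56.101:6443/healthz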

6 Installing the kubectl Tool

kubectl is the client tool for operating on k8s resources: create, delete, update, query, and so on.

When kubectl operates on resources, it first looks for the KUBECONFIG environment variable; if KUBECONFIG is not set, it falls back to ~/.kube/config.

You can set the KUBECONFIG environment variable:

export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl will then load the file named by KUBECONFIG to decide which cluster's resources to manage.

Alternatively:

cp /etc/kubernetes/admin.conf ~/.kube/config

kubectl will then load ~/.kube/config when operating on k8s resources.

6.1 Editing the configuration file

#cmd
vim admin-csr.json
{
    "CN":"admin",
    "hosts":[

    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "ST":"Zhejiang",
            "L":"Hangzhou",
            "O":"system:masters",
            "OU":"system"
        }
    ]
}
  • O: system:masters — when kube-apiserver receives a request from a client using this certificate, it adds the group identity system:masters to the request;
  • the predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants the highest level of cluster permissions;
  • the certificate is only used by kubectl as a client certificate, so the hosts field is empty;
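
Once kubectl is configured (section 6.5), you can inspect this predefined binding yourself; its subjects list contains Group system:masters:

kubectl get clusterrolebinding cluster-admin -o yaml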

Note: I originally misconfigured the O field! The broken version:

vim admin-csr.json
{
    "CN":"admin",
    "hosts":[

    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "ST":"Zhejiang",
            "L":"Hangzhou",
            "O":"k8s",
            "OU":"system"
        }
    ]
}

That misconfiguration later caused this failure:
[root@k8s-master ~]# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Error from server (Forbidden): services is forbidden: User "admin" cannot list resource "services" in API group "" in the namespace "kube-system"

6.2 Generating the certificate

#cmd
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#cmd
cp admin*.pem /etc/kubernetes/ssl/

6.3 Creating the kubeconfig file

#cmd
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.56.101:6443 --kubeconfig=kube.config
#cmd
cat kube.config

6.4 Setting client authentication parameters

#create the admin user that kubectl will use, referencing the certificate just generated and the kube.config file

kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=kube.config

6.5 Configuring the security context

#add a context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

#cmd  #make the newly added context the current one, i.e. switch contexts
kubectl config use-context kubernetes --kubeconfig=kube.config

#cmd
mkdir ~/.kube -p
cp kube.config ~/.kube/config
kubectl cluster-info
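
A quick check that the kubeconfig grants the expected rights (a system:masters client should be able to do anything):

#should print "yes"
kubectl auth can-i '*' '*'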

7 Deploying kube-controller-manager

vim kube-controller-manager-csr.json
cat kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.56.101",
      "192.168.56.104",
      "192.168.56.105",
      "192.168.56.106",
      "192.168.56.107"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Zhejiang",
        "L": "Hangzhou",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

7.1 Generating the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

7.2 Creating the kube-controller-manager kubeconfig

#cmd
#add the cluster entry
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.56.101:6443 --kubeconfig=kube-controller-manager.kubeconfig
#create the user
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
#add the context
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
#switch to the context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

7.3 Creating the systemd unit file

#cmd
vim kube-controller-manager.service
cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --secure-port=10257 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

7.4 Distributing the configuration

#cmd
cp kube-controller-manager*.pem /etc/kubernetes/ssl/

cp kube-controller-manager.kubeconfig /etc/kubernetes/

cp kube-controller-manager.service /usr/lib/systemd/system/

#not present yet (no additional masters)
scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/
#not present yet (no additional masters)
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master3:/etc/kubernetes/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master2:/etc/kubernetes/
scp kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/

7.5 Starting the service

Run this on all three master nodes:

#cmd
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
#view logs
journalctl -u kube-controller-manager
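
A quick liveness check (a sketch, assuming the default behavior where /healthz on the secure port 10257 is exempt from delegated authorization):

#-k because the serving certificate is not in the local trust store
curl -k https://127.0.0.1:10257/healthz
ok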

8 Deploying the kube-scheduler Component

vim kube-scheduler-csr.json
cat kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.56.101",
      "192.168.56.104",
      "192.168.56.105",
      "192.168.56.106",
      "192.168.56.107"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Zhejiang",
        "L": "Hangzhou",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

8.1 Generating the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

8.2 Creating the kube-scheduler kubeconfig

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.56.101:6443 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

8.3 Creating the systemd unit file

vim kube-scheduler.service
cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

8.4 Distributing the configuration

cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/

#not present yet (no additional masters)
scp kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig k8s-master2:/etc/kubernetes/
scp kube-scheduler.kubeconfig k8s-master3:/etc/kubernetes/
scp kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
scp kube-scheduler.service k8s-master3:/usr/lib/systemd/system/

8.5 Starting the service

Run this on all three master nodes:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

journalctl -u kube-scheduler
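
With the controller manager and scheduler both running, the classic sanity check for binary installs is componentstatuses (deprecated since 1.19 but still served in 1.23); healthy output looks roughly like this:

kubectl get componentstatuses
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}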

9 Installing kubelet

9.1 Creating kubelet-bootstrap.kubeconfig

#cmd
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
echo $BOOTSTRAP_TOKEN
3609f8980544889a3224dc8c57954976

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.56.101:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

9.2 Creating the kubelet.json configuration file

k8s-node1

vim kubelet-node1.json
cat kubelet-node1.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.56.102",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

k8s-node2

vim kubelet-node2.json
cat kubelet-node2.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.56.103",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
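
Note that cgroupDriver: systemd above must match the container runtime's cgroup driver, otherwise the kubelet fails to start. A quick check on each node (assuming Docker, as in this series):

docker info 2>/dev/null | grep -i "cgroup driver"
 Cgroup Driver: systemd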

9.3 Creating the systemd unit file

vim kubelet.service

cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=kubernetes/pause \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

9.4 Distributing the kubelet configuration

Copy to the worker nodes:

ssh k8s-node1 mkdir -p /etc/kubernetes/ssl
scp kubelet-bootstrap.kubeconfig  k8s-node1:/etc/kubernetes/
scp kubelet-node1.json k8s-node1:/etc/kubernetes/kubelet.json


scp ca.pem k8s-node1:/etc/kubernetes/ssl/
scp kubelet.service k8s-node1:/usr/lib/systemd/system/

ssh k8s-node2 mkdir -p /etc/kubernetes/ssl
scp kubelet-bootstrap.kubeconfig k8s-node2:/etc/kubernetes/
scp kubelet-node2.json k8s-node2:/etc/kubernetes/kubelet.json

scp ca.pem k8s-node2:/etc/kubernetes/ssl/
scp kubelet.service k8s-node2:/usr/lib/systemd/system/

Remember that the address field in kubelet.json must be set to each node's own IP; double-check this.

9.5 Starting kubelet

ssh k8s-node1 mkdir -p  /var/lib/kubelet /var/log/kubernetes
ssh k8s-node1 systemctl daemon-reload
ssh k8s-node1 systemctl enable kubelet
ssh k8s-node1 systemctl start kubelet
ssh k8s-node1 systemctl status kubelet

ssh k8s-node2 mkdir -p  /var/lib/kubelet /var/log/kubernetes
ssh k8s-node2 systemctl daemon-reload
ssh k8s-node2 systemctl enable kubelet
ssh k8s-node2 systemctl start kubelet
ssh k8s-node2 systemctl status kubelet

#view logs
journalctl -u kubelet

9.6 Approving the CSR requests

After the kubelets start, the nodes submit CSR requests to the master.

Before approval the status is Pending.

After approval the status is Approved,Issued.

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-81Tdpbc3stJ4-evIgr7ghyKki2UZo6O2a5iJI-IH2ks   115s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-M3EwdsbTV99HzkU58ue0OVJkyDt1D0KjUGUNvWq2mKc   99s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
#cmd
kubectl certificate approve node-csr-81Tdpbc3stJ4-evIgr7ghyKki2UZo6O2a5iJI-IH2ks
certificatesigningrequest.certificates.k8s.io/node-csr-8wd3ortWqgOVERaFCVIeAaoaTvprMjJvWlZYE7VT4iI approved
#cmd
kubectl certificate approve node-csr-M3EwdsbTV99HzkU58ue0OVJkyDt1D0KjUGUNvWq2mKc
certificatesigningrequest.certificates.k8s.io/node-csr-dDXlMEvYaUQx7mgDXzRCm_JGvukVYYrE0m-v3Fg5Ogo approved

kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-8wd3ortWqgOVERaFCVIeAaoaTvprMjJvWlZYE7VT4iI   3m5s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved,Issued
node-csr-dDXlMEvYaUQx7mgDXzRCm_JGvukVYYrE0m-v3Fg5Ogo   3m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved,Issued
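
Once approved, the kubelets receive their certificates and register with the cluster. Until the CNI plugin is installed (section 11), the nodes typically show NotReady:

kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   2m    v1.23.4
k8s-node2   NotReady   <none>   2m    v1.23.4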

10 Deploying kube-proxy

10.1 Creating the CSR

vim kube-proxy-csr.json
cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "Hangzhou",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

10.2 Generating the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/03/27 19:53:40 [INFO] generate received request
2022/03/27 19:53:40 [INFO] received CSR
2022/03/27 19:53:40 [INFO] generating key: rsa-2048
2022/03/27 19:53:40 [INFO] encoded CSR
2022/03/27 19:53:40 [INFO] signed certificate with serial number 67193882915301541115521746892337486394054330852
2022/03/27 19:53:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

10.3 Creating the kubeconfig file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.56.101:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

10.4 Creating the kube-proxy configuration file

On each node, change bindAddress, healthzBindAddress, and metricsBindAddress to that node's own IP.

bindAddress is the node's own address. (Note: clusterCIDR below is set to the host network 192.168.56.1/24; kube-proxy uses clusterCIDR to tell in-cluster traffic apart, and it is more commonly set to the pod CIDR, here 10.0.0.0/16 to match --cluster-cidr in kube-controller-manager.)

k8s-node1

vim kube-proxy-node1.yaml
cat kube-proxy-node1.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.56.102
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.56.1/24
healthzBindAddress: 192.168.56.102:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.56.102:10249
mode: "ipvs"

k8s-node2

vim kube-proxy-node2.yaml
cat kube-proxy-node2.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.56.103
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.56.1/24
healthzBindAddress: 192.168.56.103:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.56.103:10249
mode: "ipvs"

10.5 Creating the systemd unit file

vim kube-proxy.service
cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

10.6 Distributing the configuration

#node1
scp kube-proxy.kubeconfig k8s-node1:/etc/kubernetes/
scp kube-proxy-node1.yaml k8s-node1:/etc/kubernetes/kube-proxy.yaml
scp kube-proxy.service k8s-node1:/usr/lib/systemd/system/

#node2
scp kube-proxy.kubeconfig k8s-node2:/etc/kubernetes/
scp kube-proxy-node2.yaml k8s-node2:/etc/kubernetes/kube-proxy.yaml
scp kube-proxy.service k8s-node2:/usr/lib/systemd/system/

10.7 Starting kube-proxy

ssh k8s-node1 mkdir -p /var/lib/kube-proxy
ssh k8s-node1 systemctl daemon-reload
ssh k8s-node1 systemctl enable kube-proxy
ssh k8s-node1 systemctl start kube-proxy
ssh k8s-node1 systemctl status kube-proxy

ssh k8s-node2 mkdir -p /var/lib/kube-proxy
ssh k8s-node2 systemctl daemon-reload
ssh k8s-node2 systemctl enable kube-proxy
ssh k8s-node2 systemctl start kube-proxy
ssh k8s-node2 systemctl status kube-proxy

#view logs
journalctl -u kube-proxy
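
Since mode: "ipvs" is configured, you can confirm on a node that IPVS is really in use (a sketch; requires the ipvsadm tool, e.g. yum install -y ipvsadm, and the ip_vs kernel modules):

#kernel modules used by ipvs mode
lsmod | grep ip_vs

#list the virtual servers kube-proxy has programmed
ipvsadm -Ln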

11 Installing the Calico Network Plugin

Download calico.yaml and modify it as described below, otherwise startup will fail:

wget https://docs.projectcalico.org/manifests/calico.yaml
2022-02-21 18:46:04 (691 KB/s) - "calico.yaml" saved [217523/217523]

Calico's interface auto-detection can pick the wrong NIC, so set the physical interface Calico should use (substitute your own machine's NIC name; regular expressions are supported). Also change CALICO_IPV4POOL_CIDR further down: it must not keep the initial value "192.168.0.0/16", and it must match --cluster-cidr=10.0.0.0/16 in kube-controller-manager.

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33"

Otherwise you will see this error:

calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory

Note: set IP_AUTODETECTION_METHOD to your own physical NIC, and disable IPv6.
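
A quick way to locate both settings before editing (CALICO_IPV4POOL_CIDR ships commented out in the stock manifest; IP_AUTODETECTION_METHOD usually has to be added alongside the other calico-node environment variables):

grep -n 'CALICO_IPV4POOL_CIDR\|IP_AUTODETECTION_METHOD' calico.yaml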


11.1 Verifying the Calico plugin

[root@k8s-master1 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master1 ~]#
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS      AGE
kube-system   calico-kube-controllers-566dc76669-jclvt   0/1     ContainerCreating   0             4s
kube-system   calico-node-m4d2j                          0/1     PodInitializing     0             5s
kube-system   calico-node-qxrm9                          0/1     Running             0             5s
[root@k8s-master1 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS      AGE
kube-system   calico-kube-controllers-566dc76669-jclvt   1/1     Running            0             11s
kube-system   calico-node-m4d2j                          1/1     Running            0             12s
kube-system   calico-node-qxrm9                          1/1     Running            0             12s
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   148m   v1.23.4
k8s-node2   Ready    <none>   147m   v1.23.4

12 Installing DNS (CoreDNS)

Copy the YAML content from:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.sed

# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns/coredns:v1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: $DNS_MEMORY_LIMIT
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP


Copy the content from the URL above into a yaml file and change the following places:

clusterIP: $DNS_SERVER_IP — this IP must match "clusterDNS": ["10.255.0.2"] in kubelet.json

$DNS_DOMAIN — must match the cluster domain used in the apiserver certificate hosts: cluster.local

memory: $DNS_MEMORY_LIMIT — anything larger than the 70Mi request below works; I added 100 and used 170Mi

image: k8s.gcr.io/coredns/coredns:v1.8.6 becomes image: coredns/coredns:1.8.6 — the original registry is unreachable from here, so use the registry mirror that was configured earlier when Docker was installed

[root@k8s-master1 addons]# vim coredns.yaml
... ...
 
 Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        #kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa {
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
 
... ...
 
 - name: coredns
        image: coredns/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            #memory: $DNS_MEMORY_LIMIT
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
 
... ...
 
spec:
  selector:
    k8s-app: kube-dns
  #clusterIP: $DNS_SERVER_IP
  clusterIP: 10.255.0.2
  ports:
  - name: dns
 
[root@k8s-master1 addons]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

12.1 Verifying CoreDNS

kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-566dc76669-jclvt   1/1     Running   0          50m
calico-node-m4d2j                          1/1     Running   0          50m
calico-node-qxrm9                          1/1     Running   0          50m
coredns-648769df8c-jzpj9                   1/1     Running   0          4s

kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.255.0.2   <none>        53/UDP,53/TCP,9153/TCP   14s
kubectl run busybox --image busybox --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.14): 56 data bytes
64 bytes from 39.156.66.14: seq=0 ttl=52 time=10.728 ms
64 bytes from 39.156.66.14: seq=1 ttl=52 time=11.678 ms
64 bytes from 39.156.66.14: seq=2 ttl=52 time=11.675 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 10.728/11.360/11.678 ms
/ # exit
pod "busybox" deleted

13 Installing the Dashboard

https://github.com/kubernetes/kubernetes/tree/release-1.23/cluster/addons/dashboard

Download dashboard.yaml and add type: NodePort to the Service; the default, ClusterIP, is not reachable from outside the cluster. The edit is sketched below.
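
A minimal sketch of the relevant part of the kubernetes-dashboard Service (port numbers as in the upstream manifest):

kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort     #added; the default ClusterIP is unreachable from outside
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard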

13.1 Deploying the dashboard

kubectl apply -f dashboard.yaml

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-56fcbf9d6b-nvtpm   1/1     Running   2 (134m ago)   17h
calico-node-wrt76                          1/1     Running   1 (134m ago)   172m
calico-node-xdhc2                          1/1     Running   1 (134m ago)   172m
coredns-648769df8c-4k645                   1/1     Running   3 (134m ago)   16h
[root@k8s-master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-546d6779cb-69vmg   1/1     Running   0          24m
pod/kubernetes-dashboard-6fdf56b6fd-rhpxd        1/1     Running   0          24m

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.255.171.45   <none>        8000/TCP        24m
service/kubernetes-dashboard        NodePort    10.255.48.153   <none>        443:39644/TCP   24m

13.2 Access

Access the dashboard via a node IP plus the mapped NodePort. The mapping above shows port 39644, so the URL is https://192.168.56.102:39644.

The login page offers two sign-in modes; we use the token mode.
The token can be retrieved with kubectl:

[root@k8s-master ~]# kubectl get secrets -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-9gkcz                kubernetes.io/service-account-token   3      28m
kubernetes-dashboard-certs         Opaque                                0      28m
kubernetes-dashboard-csrf          Opaque                                1      28m
kubernetes-dashboard-key-holder    Opaque                                2      28m
kubernetes-dashboard-token-68ggq   kubernetes.io/service-account-token   3      28m
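
The token of the last secret, kubernetes-dashboard-token-68ggq, is what the login page wants; it can be pulled out directly (a sketch using the secret name from the listing above):

kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-token-68ggq | grep ^token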

After logging in with that token, however, almost nothing is visible, and the top-right corner shows a message:

namespaces is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cann

This is because the service account shipped in dashboard.yaml is granted very limited permissions. For ease of learning, we simply create a user with administrator rights:

[root@k8s-master1 ~]# cat dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
 
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# kubectl apply -f dashboard-rbac.yaml
serviceaccount/dashboard-admin unchanged
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

[root@k8s-master ~]# kubectl -n kube-system describe secrets|grep dashboard-admin
Name:         dashboard-admin-token-6cxf2
Annotations:  kubernetes.io/service-account.name: dashboard-admin

[root@k8s-master ~]# kubectl -n kube-system describe secrets dashboard-admin-token-6cxf2
Name:         dashboard-admin-token-6cxf2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 2a44440b-6b6d-48a9-bd52-adb17d23b8f8

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ijh2LXRaVTFJVG9YYzVvd2d2MVlzVXI4ejFYWm0tSVRTRHNlNGtTSXpZbXcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNmN4ZjIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmE0NDQ0MGItNmI2ZC00OGE5LWJkNTItYWRiMTdkMjNiOGY4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UTlAkP1IZt4OHDOK8niTV7paaXZCn51zoXI7vwCUYEXMRunBP8XXeir0CSau93hM-qtPLLEE4aJRtRlPpoiLGel5MdMqlAoHYGHvBhv2UG6859El0_JbLlHF-wU37ziqC3RiBYSSbw6uEbmQYAcpLOI27Co3i599Aqudkh2nl_sxwB1dpg9hD3mYkrQynKc5ECeH2e5iDtPvFS9Gz44YzflNRCpPZSwfKgYiVWxoW_26XMiiHsRgfXHKQYxdGJ4wrfDXDqPM7evdT9u7kI3CD2XO4unUjRvuJsLWWFl1i2DJAFPMJTescHgXgSMITbIl_bm5hUbVTB9cJVV3WxGKgA
ca.crt:     1314 bytes

Log in again with the token above and the content is now visible. Fine-grained RBAC permission control is not covered here.

14 Verification

14.1 Deploying nginx

vim nginx.yaml 
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
[root@master1 ~]# kubectl apply -f nginx.yaml
[root@k8s-master ~]# kubectl get svc
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes               ClusterIP   10.255.0.1     <none>        443/TCP        22h
nginx-service-nodeport   NodePort    10.255.9.146   <none>        80:30001/TCP   74m


[root@k8s-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
nginx-controller-2bfxc   1/1     Running   0          3m49s   10.0.36.70     k8s-node1   <none>           <none>
nginx-controller-drk9n   1/1     Running   0          3m49s   10.0.169.139   k8s-node2   <none>           <none>

14.2 Verification

Browser access: open http://<node-ip>:30001 and the nginx welcome page should appear.
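
Equivalently, from any machine that can reach the nodes (a sketch):

curl -s http://192.168.56.102:30001 | grep '<title>'
<title>Welcome to nginx!</title>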
Ping-test the nginx service:

kubectl exec -it nginx-controller-drk9n -- /bin/bash
error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)

Fixing the "unable to upgrade connection ... Forbidden" error from kubectl exec:

When running kubectl exec, run, logs, and similar commands, the apiserver forwards the request to the kubelet's HTTPS port, and the cluster I am using has RBAC enabled.

kube-apiserver-csr.json
{
  "CN": "kubernetes",
  ....
}

The fix is to define an RBAC rule that authorizes the user name in the apiserver's client certificate (CN: kubernetes) to access the kubelet API:

Failed attempts, recorded for reference:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubernetes --user admin

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubernetes --user admin
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" already exists
#delete the wrong clusterrolebinding
kubectl delete clusterrolebinding kube-apiserver:kubelet-apis
clusterrolebinding.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" deleted
[root@k8s-master ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
[root@k8s-master ~]# kubectl exec -it nginx-controller-drk9n -- /bin/bash
root@nginx-controller-drk9n:~# ping -c 5 nginx-service-nodeport
PING nginx-service-nodeport.default.svc.cluster.local (10.255.9.146) 56(84) bytes of data.
64 bytes from nginx-service-nodeport.default.svc.cluster.local (10.255.9.146): icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from nginx-service-nodeport.default.svc.cluster.local (10.255.9.146): icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from nginx-service-nodeport.default.svc.cluster.local (10.255.9.146): icmp_seq=3 ttl=64 time=0.084 ms
64 bytes from nginx-service-nodeport.default.svc.cluster.local (10.255.9.146): icmp_seq=4 ttl=64 time=0.054 ms
64 bytes from nginx-service-nodeport.default.svc.cluster.local (10.255.9.146): icmp_seq=5 ttl=64 time=0.066 ms

--- nginx-service-nodeport.default.svc.cluster.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 107ms
rtt min/avg/max/mdev = 0.035/0.062/0.084/0.017 ms
root@nginx-controller-drk9n:~#

15 References

Manual HA k8s cluster install (binary), v1.23.4
https://blog.csdn.net/weixin_50908696/article/details/123031783

Binary install of Kubernetes (k8s) v1.23.3
https://www.oiox.cn/index.php/archives/90/

Binary deployment of a K8s 1.23.1 cluster
https://www.haxi.cc/archives/setup-k8s-1-23-1-cluster-using-binary.html

Binary install of Kubernetes 1.23.x
https://zhangzhuo.ltd/articles/2022/01/09/1641717241819.html

K8s cluster deployment
https://hellogitlab.com/CI/k8s/deploy.html#_3-4-%E5%9C%A8master%E4%B8%8A%E8%BF%9B%E8%A1%8C%E9%9B%86%E7%BE%A4%E5%88%9D%E5%A7%8B%E5%8C%96

Complete binary Kubernetes cluster installation
https://blog.51cto.com/heian99/3220596

CentOS 7 HA Kubernetes cluster install (binary, v1.20.2)
https://navww.com/index.php/2021/09/24/centos7-%E4%B8%8Bkubernetes%E9%AB%98%E5%8F%AF%E7%94%A8%E9%9B%86%E7%BE%A4%E5%AE%89%E8%A3%85%EF%BC%88%E4%BA%8C%E8%BF%9B%E5%88%B6%E5%AE%89%E8%A3%85%E3%80%81v1-20-2%E7%89%88%EF%BC%89/
