12-Manual K8s Deployment - Multiple Masters¶
Choosing between kubeadm and a binary installation of k8s: kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster and is currently the convenient, recommended option. The kubeadm init and kubeadm join commands can create a cluster quickly. When kubeadm initializes k8s, all components run as pods and recover from failures automatically. kubeadm is essentially an automated, scripted deployment: it simplifies the install and generates the certificates and component manifests for you. Because it hides many details, you get little visibility into the individual modules, so if you do not understand the k8s architecture well, problems are harder to troubleshoot. kubeadm suits teams that deploy k8s frequently or want a high degree of automation.
Binary installation: download each component's binaries from the official site and install them by hand, which also gives you a more complete understanding of Kubernetes. Both kubeadm and binary installs are suitable for production and run stably there; evaluate your actual project to decide which to use.
Environment preparation¶
Set the machine hostnames (run on master1, master2, and node1)
On master1:
hostnamectl set-hostname master1 && bash
On master2:
hostnamectl set-hostname master2 && bash
On node1:
hostnamectl set-hostname node1 && bash
Configure /etc/hosts so the hostnames resolve (run on master1, master2, and node1)
cat >>/etc/hosts<<EOF
192.168.1.26 master1
192.168.1.27 master2
192.168.1.28 node1
EOF
Set up passwordless SSH from the deployment node to all other nodes (run on master1)
yum -y install sshpass
cat >/root/.ssh/config<<EOF
Host *
Port 22
User root
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
EOF
cd /root/
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
sshpass -pP@sswd ssh-copy-id master1
sshpass -pP@sswd ssh-copy-id master2
sshpass -pP@sswd ssh-copy-id node1
#Verify that master1 can SSH to every machine without a password
ssh master1 "hostname -I"
ssh master2 "hostname -I"
ssh node1 "hostname -I"
Disable the swap partition to improve performance
# Run on all nodes
swapoff -a
# Swap is the swap partition: when the machine runs low on memory it spills to swap, which performs poorly. For performance reasons k8s
disallows swap by default. kubeadm checks whether swap is off during initialization and fails if it is not. If you do not want to disable swap, pass --ignore-preflight-errors=Swap when installing k8s.
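To keep swap off across reboots (a habit of mine, not part of the original steps), also comment out the swap entry in /etc/fstab, since swapoff -a alone does not persist:
sed -ri 's/.*swap.*/#&/' /etc/fstab
grep swap /etc/fstab   # the swap line should now be commented out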
Adjust kernel parameters
# Run on all nodes
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
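Optional check: confirm the module is loaded and the values took effect.
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward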
Configure the Alibaba Cloud yum repositories
rm -f /etc/yum.repos.d/*.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i "/mirrors.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
sed -i "/mirrors.cloud.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
yum clean all
Configure the Alibaba Cloud repo needed for the Docker packages
cat >/etc/yum.repos.d/docker-ce.repo<<\EOF
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
yum repolist
Configure the Alibaba Cloud repo needed for the Kubernetes packages
cat >/etc/yum.repos.d/kubernetes.repo<<\EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum repolist
Enable IPVS
# Run on all nodes
cat >/etc/sysconfig/modules/ipvs.modules<<\EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
IPVS (IP Virtual Server) implements transport-layer load balancing, commonly called layer-4 LAN switching, as part of the Linux kernel. IPVS runs on a host and acts as a load balancer in front of a cluster of real servers; it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a virtual service on a single IP address.
Install basic tool packages
# Run on all nodes
yum -y install ipvsadm conntrack ntpdate telnet vim
Install nginx¶
Run on both master1 and master2
Install nginx on the primary and the backup:
yum install nginx nginx-mod-stream -y
Edit the nginx configuration file (identical on primary and backup)
[root@master1 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.1.26:6443; # Master1 APISERVER IP:PORT
server 192.168.1.27:6443; # Master2 APISERVER IP:PORT
}
server {
listen 16443;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
Start nginx and enable it at boot
systemctl start nginx
systemctl enable nginx
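Optional check (my addition): validate the config and confirm nginx is listening on the 16443 stream port before moving on.
nginx -t
ss -lntp | grep 16443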
Install keepalived¶
Run on both master1 and master2
Install keepalived
yum -y install keepalived
keepalived configuration on master1
[root@master1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface eth0 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; unique per instance
priority 100 # priority; set the backup server to 90
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
192.168.1.31/24
}
track_script {
check_nginx
}
}
keepalived configuration on master2
[root@master2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface eth0 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; unique per instance
priority 90 # priority; the backup server is set to 90
advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
192.168.1.31/24
}
track_script {
check_nginx
}
}
vrrp_script: specifies the script that checks whether nginx is working (its result decides whether to fail over)
virtual_ipaddress: the virtual IP (VIP)
cat >/etc/keepalived/check_nginx.sh<<\EOF
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
systemctl stop keepalived
fi
EOF
Make the check script executable
chmod +x /etc/keepalived/check_nginx.sh
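A quick manual sanity check (my addition, not in the original steps): the process count the script relies on should be non-zero while nginx is running, so keepalived will not be stopped by mistake once it starts.
ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$"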
Start the services
systemctl start keepalived
systemctl enable nginx keepalived
systemctl status keepalived
Verify that the VIP is bound
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:83:db:25 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.26/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.31/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe83:db25/64 scope link
valid_lft forever preferred_lft forever
Now verify the failover behaviour
First stop the nginx service on master1
[root@master1 ~]# systemctl stop nginx
Check whether the VIP has moved to master2
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:83:8e:ac brd ff:ff:ff:ff:ff:ff
inet 192.168.1.27/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.31/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe83:8eac/64 scope link
valid_lft forever preferred_lft forever
Now restore the nginx service on master1
systemctl start nginx
systemctl restart keepalived
Check whether the VIP is back on master1
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:83:db:25 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.26/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.31/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe83:db25/64 scope link
valid_lft forever preferred_lft forever
Install docker¶
Install docker-ce
# Run on all nodes
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
Start docker-ce
# Run on all nodes
systemctl start docker && systemctl enable docker.service ; systemctl status docker.service
Configure the Docker registry mirrors and the cgroup driver
# Run on all nodes
cat >/etc/docker/daemon.json<<\EOF
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl status docker
#This changes the Docker cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default and the two must match.
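Optional check: confirm the driver actually switched to systemd.
docker info 2>/dev/null | grep -i cgroup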
Install master¶
Install the packages needed to initialize k8s
# Run on all nodes
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
# Note: what each package does
kubeadm: the tool used to initialize the k8s cluster
kubelet: installed on every node in the cluster; responsible for starting Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
Start the kubelet service
# Run on all nodes
systemctl enable kubelet ; systemctl start kubelet
sleep 5
systemctl status kubelet
#kubelet will not be in the running state yet; that is expected and can be ignored. It becomes healthy once the k8s components come up.
Initialize the k8s cluster with kubeadm
[root@master1]# vim /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.1.31:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.1.26
  - 192.168.1.27
  - 192.168.1.28
  - 192.168.1.31
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
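Optionally pre-pull the control-plane images on master1 before running init so the initialization itself is faster (an extra step, not required by the original flow; it reuses the same config file):
kubeadm config images pull --config /root/kubeadm-config.yaml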
Start the cluster initialization:
kubeadm init --config /root/kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
The following output appears after the master node initializes successfully
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.26:6443 --token 95k0et.9c4uitzqasdx5axa \
--discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
Configure the kubectl config file; this effectively authorizes kubectl, so the kubectl command can use this certificate to manage the k8s cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 2m6s v1.20.6
The node is still NotReady at this point because no network plugin has been installed.
Add a master¶
Copy the certificates from master1 to master2
ssh master2 "cd /root && mkdir -p /etc/kubernetes/pki/etcd &&mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
After the certificates are copied, run the following on master2 (copy the join command generated by your own cluster) to add master2 to the cluster as a control-plane node.
Generate the join command on master1:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.31:16443 --token naiq8m.kke0tfgfgzft2j07 --discovery-token-ca-cert-hash sha256:8e73c48d62b9971f2158903176808a8fcb3a65df211d478f02000645377d819c
On master2, join the cluster as an additional control-plane node
kubeadm join 192.168.1.31:16443 --token naiq8m.kke0tfgfgzft2j07 --discovery-token-ca-cert-hash sha256:8e73c48d62b9971f2158903176808a8fcb3a65df211d478f02000645377d819c --control-plane
The following output indicates the control plane was scaled out successfully
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
Configure the kubectl config file; this effectively authorizes kubectl, so the kubectl command can use this certificate to manage the k8s cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 6m53s v1.20.6
master2 NotReady control-plane,master 63s v1.20.6
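Optional check (my addition, not in the original flow): each control-plane node should now be running its own etcd and kube-apiserver static pods.
kubectl get pods -n kube-system -o wide | egrep 'etcd|kube-apiserver'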
Install node¶
Install the k8s cluster - add the first worker node
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.26:6443 --token nhw968.t906x9wfvgjxbgp1 --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
Run the command above on node1
kubeadm join 192.168.1.26:6443 --token nhw968.t906x9wfvgjxbgp1 --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
The output below shows that node1 has joined the cluster as a worker node
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the cluster nodes on master1:
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 3m20s v1.20.6
node1 NotReady <none> 46s v1.20.6
Install calico¶
Prepare the YAML file for the calico network plugin
[root@master1 ~]# yum -y install lrzsz
[root@master1 ~]# rz calico.yaml
Note: the file can be downloaded online from: https://docs.projectcalico.org/manifests/calico.yaml
Install the calico network plugin from the YAML file
kubectl apply -f calico.yaml
Check the startup status (deployment is complete once all the containers are running)
[root@master1 ~]# kubectl get pod -n kube-system|grep calico
calico-kube-controllers-6949477b58-4f66x 1/1 Running 0 98s
calico-node-gxh7k 1/1 Running 0 98s
calico-node-qlwsj 1/1 Running 0 98s
When all cluster nodes show Ready, the deployment succeeded
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 9m31s v1.20.6
master2 Ready control-plane,master 3m41s v1.20.6
node1 Ready <none> 116s v1.20.6
Verify the cluster¶
Verify the cluster network
Create a test pod
kubectl run net-test --image=alpine -- sleep 360000
Check the assigned IP
[root@master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test 1/1 Running 0 40s 10.244.104.1 node2 <none> <none>
Test whether the cluster pod network is reachable
[root@master1 ~]# ping -c 4 10.244.104.1
PING 10.244.104.1 (10.244.104.1) 56(84) bytes of data.
64 bytes from 10.244.104.1: icmp_seq=1 ttl=63 time=0.384 ms
64 bytes from 10.244.104.1: icmp_seq=2 ttl=63 time=0.353 ms
64 bytes from 10.244.104.1: icmp_seq=3 ttl=63 time=0.415 ms
64 bytes from 10.244.104.1: icmp_seq=4 ttl=63 time=0.314 ms
Verify cluster services
Start an nginx service
yum -y install git
git clone https://gitee.com/chriscentos/salt-kubebin.git
cd /root/salt-kubebin/example/
kubectl apply -f nginx-pod.yaml
Check that the pod started
[root@master1 example]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test 1/1 Running 0 3m44s 10.244.104.1 node2 <none> <none>
nginx-pod 1/1 Running 0 28s 10.244.104.2 node2 <none> <none>
Verify the nginx service responds
[root@master1 ~]# curl 10.244.104.2
Verify CoreDNS
CoreDNS is installed by default
[root@master1 ~]# kubectl get pod -A | grep dns
kube-system coredns-7f89b7bc75-bxwt5 1/1 Running 0 30m
kube-system coredns-7f89b7bc75-wl9pc 1/1 Running 0 30m
Just exec into a container to verify DNS resolution
[root@master1 ~]# kubectl exec -it net-test -- sh
/ # ping -c 4 www.baidu.com
PING www.baidu.com (180.101.49.12): 56 data bytes
64 bytes from 180.101.49.12: seq=0 ttl=48 time=24.178 ms
64 bytes from 180.101.49.12: seq=1 ttl=48 time=24.377 ms
64 bytes from 180.101.49.12: seq=2 ttl=48 time=23.878 ms
64 bytes from 180.101.49.12: seq=3 ttl=48 time=24.395 ms
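While still inside the pod, you can also check in-cluster service DNS (a quick sketch using BusyBox's nslookup; it assumes the default cluster.local domain, and the answer should fall inside the 10.10.0.0/16 serviceSubnet configured earlier):
/ # nslookup kubernetes.default.svc.cluster.local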
Scale out nodes¶
Set the machine hostname (run on node2)
On node2:
hostnamectl set-hostname node2 && bash
Configure /etc/hosts so the hostnames resolve (run on node2)
On node2:
cat >>/etc/hosts<<EOF
192.168.1.26 master1
192.168.1.27 master2
192.168.1.28 node1
192.168.1.29 node2
EOF
Set up passwordless SSH from the deployment node to the new node (run on master1)
cat >>/etc/hosts<<EOF
192.168.1.29 node2
EOF
sshpass -pP@sswd ssh-copy-id node2
#Verify that master1 can SSH to node2 without a password
ssh node2 "hostname -I"
Disable the swap partition to improve performance
# Run on all nodes
swapoff -a
# Swap is the swap partition: when the machine runs low on memory it spills to swap, which performs poorly. For performance reasons k8s
disallows swap by default. kubeadm checks whether swap is off during initialization and fails if it is not. If you do not want to disable swap, pass --ignore-preflight-errors=Swap when installing k8s.
Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
Configure the Alibaba Cloud yum repositories
rm -f /etc/yum.repos.d/*.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i "/mirrors.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
sed -i "/mirrors.cloud.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
yum clean all
Configure the Alibaba Cloud repo needed for the Docker packages
cat >/etc/yum.repos.d/docker-ce.repo<<\EOF
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
Configure the Alibaba Cloud repo needed for the Kubernetes packages
cat >/etc/yum.repos.d/kubernetes.repo<<\EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum repolist
Enable IPVS
# Run on all nodes
cat >/etc/sysconfig/modules/ipvs.modules<<\EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
IPVS (IP Virtual Server) implements transport-layer load balancing, commonly called layer-4 LAN switching, as part of the Linux kernel. IPVS runs on a host and acts as a load balancer in front of a cluster of real servers; it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a virtual service on a single IP address.
Install basic tool packages
yum -y install ipvsadm conntrack ntpdate telnet vim
Install docker-ce
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
Start docker-ce
systemctl start docker && systemctl enable docker.service ; systemctl status docker.service
Configure the Docker registry mirrors and the cgroup driver
cat >/etc/docker/daemon.json<<\EOF
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl status docker
#This changes the Docker cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default and the two must match.
Install the packages needed to initialize k8s
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
# Note: what each package does
kubeadm: the tool used to initialize the k8s cluster
kubelet: installed on every node in the cluster; responsible for starting Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
Start the kubelet service
systemctl enable kubelet ; systemctl start kubelet
sleep 5
systemctl status kubelet
#kubelet will not be in the running state yet; that is expected and can be ignored. It becomes healthy once the k8s components come up.
Install the k8s cluster - add the second worker node
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.26:6443 --token vrx60x.1sq6s9g752fe1ufr --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
Run the command above on node2
The output below shows that node2 has joined the cluster as a worker node
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the cluster nodes on master2:
[root@master2 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 19m v1.20.6
master2 Ready control-plane,master 13m v1.20.6
node1 Ready <none> 11m v1.20.6
node2 Ready <none> 54s v1.20.6
Scale in nodes¶
First look at the cluster's current node list
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 20h v1.20.6
master2 Ready control-plane,master 20h v1.20.6
master3 Ready control-plane,master 19m v1.20.6
node1 Ready <none> 20h v1.20.6
node2 Ready <none> 19h v1.20.6
Drain the pods from the node
[root@master1 ~]# kubectl drain node2 --delete-local-data --force --ignore-daemonsets
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/node2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-xv5cp, kube-system/kube-proxy-b9mt5
evicting pod kube-system/coredns-7f89b7bc75-4tlt6
evicting pod default/nginx-deployment-5d47ff8589-fd68t
evicting pod default/nginx-deployment-5d47ff8589-kkjtv
evicting pod default/nginx-deployment-5d47ff8589-klqfc
evicting pod default/nginx-deployment-5d47ff8589-lwmn9
pod/coredns-7f89b7bc75-4tlt6 evicted
pod/nginx-deployment-5d47ff8589-fd68t evicted
pod/nginx-deployment-5d47ff8589-kkjtv evicted
pod/nginx-deployment-5d47ff8589-klqfc evicted
pod/nginx-deployment-5d47ff8589-lwmn9 evicted
node/node2 evicted
Delete the node
[root@master1 ~]# kubectl delete nodes node2
node "node2" deleted
Then run the following on node2:
[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0419 15:59:16.691576 33193 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
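Following the hints in the reset output above, I usually also clean up the leftovers by hand (a sketch; adjust to what actually exists on the node):
rm -rf /etc/cni/net.d
ipvsadm --clear
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf $HOME/.kube/config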
Restore a node¶
Re-add the worker node to the k8s cluster
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.26:6443 --token vrx60x.1sq6s9g752fe1ufr --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
Run the command above on node2
The output below shows that node2 has joined the cluster as a worker node
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the cluster nodes on master1:
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 20h v1.20.6
master2 Ready control-plane,master 20h v1.20.6
node1 Ready <none> 20h v1.20.6
node2 Ready <none> 3s v1.20.6
Scale out masters¶
Set the machine hostname (run on master3)
On master3:
hostnamectl set-hostname master3 && bash
Configure /etc/hosts so the hostnames resolve (run on master3)
On master3:
cat >>/etc/hosts<<EOF
192.168.1.26 master1
192.168.1.27 master2
192.168.1.28 node1
192.168.1.29 node2
192.168.1.30 master3
EOF
Set up passwordless SSH from the deployment node to the new node (run on master1)
cat >>/etc/hosts<<EOF
192.168.1.30 master3
EOF
sshpass -pP@sswd ssh-copy-id master3
#Verify that master1 can SSH to master3 without a password
ssh master3 "hostname -I"
Disable the swap partition to improve performance
# Run on all nodes
swapoff -a
# Swap is the swap partition: when the machine runs low on memory it spills to swap, which performs poorly. For performance reasons k8s
disallows swap by default. kubeadm checks whether swap is off during initialization and fails if it is not. If you do not want to disable swap, pass --ignore-preflight-errors=Swap when installing k8s.
Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
Configure the Alibaba Cloud yum repositories
rm -f /etc/yum.repos.d/*.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i "/mirrors.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
sed -i "/mirrors.cloud.aliyuncs.com/d" /etc/yum.repos.d/CentOS-Base.repo
yum clean all
Configure the Alibaba Cloud repo needed for the Docker packages
cat >/etc/yum.repos.d/docker-ce.repo<<\EOF
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
Configure the Alibaba Cloud repo needed for the Kubernetes packages
cat >/etc/yum.repos.d/kubernetes.repo<<\EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum repolist
Enable IPVS
# Run on all nodes
cat >/etc/sysconfig/modules/ipvs.modules<<\EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
IPVS (IP Virtual Server) implements transport-layer load balancing, commonly called layer-4 LAN switching, as part of the Linux kernel. IPVS runs on a host and acts as a load balancer in front of a cluster of real servers; it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a virtual service on a single IP address.
Install basic tool packages
yum -y install ipvsadm conntrack ntpdate telnet vim
Install docker-ce
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
Start docker-ce
systemctl start docker && systemctl enable docker.service ; systemctl status docker.service
Configure the Docker registry mirrors and the cgroup driver
cat >/etc/docker/daemon.json<<\EOF
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl status docker
#This changes the Docker cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default and the two must match.
Install the packages needed to initialize k8s
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
# Note: what each package does
kubeadm: the tool used to initialize the k8s cluster
kubelet: installed on every node in the cluster; responsible for starting Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
Start the kubelet service
systemctl enable kubelet ; systemctl start kubelet
sleep 5
systemctl status kubelet
#kubelet will not be in the running state yet; that is expected and can be ignored. It becomes healthy once the k8s components come up.
Copy the certificates from master1 to master3
ssh master3 "cd /root && mkdir -p /etc/kubernetes/pki/etcd &&mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
After the certificates are copied, run the following on master3 (copy the join command generated by your own cluster) to add master3 to the cluster as a control-plane node:
Generate the join command on master1:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.26:6443 --token vrx60x.1sq6s9g752fe1ufr --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d
Run the following command on master3
kubeadm join 192.168.1.26:6443 --token vrx60x.1sq6s9g752fe1ufr --discovery-token-ca-cert-hash sha256:9495462d474420d5e4ee3b39bb8a258997f7dfb9d76926baa4aaeaba167b436d --control-plane
The output below shows that master3 has joined the cluster as a control-plane node
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Configure the kubectl config file; this effectively authorizes kubectl, so the kubectl command can use this certificate to manage the k8s cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster nodes on master3:
[root@master3 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 29m v1.20.6
master2 Ready control-plane,master 23m v1.20.6
master3 Ready control-plane,master 22s v1.20.6
node1 Ready <none> 21m v1.20.6
node2 Ready <none> 10m v1.20.6
First update the nginx load balancer configuration on master2
[root@master2 ~]# vim /etc/nginx/nginx.conf
upstream k8s-apiserver {
server 192.168.1.26:6443; # Master1 APISERVER IP:PORT
server 192.168.1.27:6443; # Master2 APISERVER IP:PORT
server 192.168.1.30:6443; # Master3 APISERVER IP:PORT
}
Restart nginx and keepalived on master2
systemctl daemon-reload
systemctl restart nginx
systemctl restart keepalived
Then update the nginx load balancer configuration on master1
[root@master1 ~]# vim /etc/nginx/nginx.conf
upstream k8s-apiserver {
server 192.168.1.26:6443; # Master1 APISERVER IP:PORT
server 192.168.1.27:6443; # Master2 APISERVER IP:PORT
server 192.168.1.30:6443; # Master3 APISERVER IP:PORT
}
Restart nginx and keepalived on master1
systemctl daemon-reload
systemctl restart nginx
systemctl restart keepalived
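Optional check (my addition): the apiserver should be reachable through the VIP on the 16443 stream port; /version is normally served to anonymous clients, so a simple curl is enough.
curl -k https://192.168.1.31:16443/version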
Finally, verify that the VIP is still on master1
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:83:db:25 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.26/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.31/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe83:db25/64 scope link
Scale in masters¶
Remove master3's member information from etcd:
tar zxvf etcd-v3.3.4-linux-amd64.tar.gz
cd etcd-v3.3.4-linux-amd64
cp etcdctl /usr/local/sbin/
[root@master1 ~]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
The output looks like this:
24261b9554d2b04e, started, master3, https://192.168.1.30:2380, https://192.168.1.30:2379
d59c76c6a4473d61, started, master1, https://192.168.1.26:2380, https://192.168.1.26:2379
e8886f5021d5126e, started, master2, https://192.168.1.27:2380, https://192.168.1.27:2379
The member ID corresponding to master3 is:
24261b9554d2b04e
Next, remove the etcd member by that ID with the following command
[root@master1 ~]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 24261b9554d2b04e
The output is:
Member 24261b9554d2b04e removed from cluster 5043389292468b33
Delete master3 from the k8s cluster:
[root@master1 ~]# kubectl delete nodes master3
node "master3" deleted
Reset the master3 node
[root@master3 ~]# kubeadm reset
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
Finally, check the cluster state
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 18h v1.20.6
master2 Ready control-plane,master 18h v1.20.6
node1 Ready <none> 18h v1.20.6
node2 Ready <none> 18h v1.20.6
Restore a master¶
Copy all the certificates from master1 to master3 again, as described earlier
ssh master3 "mkdir /etc/kubernetes/pki/etcd/"
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
Generate the join command:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.199:16443 --token fvby4y.o2m8zhb7j9unpdxt --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728
Run the following on master3 to join the node to the k8s cluster as a control-plane node:
[root@master3 ~]# kubeadm join 192.168.40.199:16443 --token fvby4y.o2m8zhb7j9unpdxt --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 --control-plane
Check whether it joined the cluster successfully:
[root@master3 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 19h v1.20.6
master2 Ready control-plane,master 19h v1.20.6
master3 Ready control-plane,master 117s v1.20.6
node1 Ready <none> 19h v1.20.6
node2 Ready <none> 19h v1.20.6
Nginx health checks¶
Configure nginx health checks on master2
[root@master2 ~]# vim /etc/nginx/nginx.conf
upstream k8s-apiserver {
server 192.168.1.26:6443 max_fails=1 fail_timeout=10s; # Master1 APISERVER IP:PORT
server 192.168.1.27:6443 max_fails=1 fail_timeout=10s; # Master2 APISERVER IP:PORT
server 192.168.1.30:6443 max_fails=1 fail_timeout=10s; # Master3 APISERVER IP:PORT
}
systemctl daemon-reload
systemctl restart nginx
systemctl restart keepalived
Configure nginx health checks on master1
[root@master1 ~]# vim /etc/nginx/nginx.conf
upstream k8s-apiserver {
server 192.168.1.26:6443 max_fails=1 fail_timeout=10s; # Master1 APISERVER IP:PORT
server 192.168.1.27:6443 max_fails=1 fail_timeout=10s; # Master2 APISERVER IP:PORT
server 192.168.1.30:6443 max_fails=1 fail_timeout=10s; # Master3 APISERVER IP:PORT
}
systemctl daemon-reload
systemctl restart nginx
systemctl restart keepalived
max_fails=1 and fail_timeout=10s mean that if 1 connection failure occurs within a 10-second window, the backend is marked unavailable and is not tried again until the next period (also fail_timeout long), when nginx retries it to see whether the connection succeeds.
Here fail_timeout is 10s and max_fails is 1.
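To watch the checks in action, the stream access log defined earlier records which backend each connection went to; a failed backend should stop appearing for the fail_timeout window.
tail -f /var/log/nginx/k8s-access.log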
Test high availability¶
Now bring down the network interface on master1
[root@master2 ~]# ping -c 4 192.168.1.26
PING 192.168.1.26 (192.168.1.26) 56(84) bytes of data.
From 192.168.1.27 icmp_seq=27 Destination Host Unreachable
From 192.168.1.27 icmp_seq=28 Destination Host Unreachable
From 192.168.1.27 icmp_seq=29 Destination Host Unreachable
After a short wait, check the node status
[root@master2 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane,master 20h v1.20.6
master2 Ready control-plane,master 19h v1.20.6
master3 Ready control-plane,master 13m v1.20.6
node1 Ready <none> 19h v1.20.6
node2 Ready <none> 19h v1.20.6
Create a test pod
kubectl run net-test-master2 --image=alpine -- sleep 360000
Check the created pod
[root@master2 ~]# kubectl get pod -o wide |grep net-test-master2
net-test-master2 1/1 Running 0 38s 10.244.166.132 node1 <none> <none>
Test whether the new pod's network works
[root@master2 ~]# ping -c 4 10.244.166.132
PING 10.244.166.132 (10.244.166.132) 56(84) bytes of data.
64 bytes from 10.244.166.132: icmp_seq=1 ttl=63 time=0.343 ms
64 bytes from 10.244.166.132: icmp_seq=2 ttl=63 time=0.332 ms
64 bytes from 10.244.166.132: icmp_seq=3 ttl=63 time=0.261 ms
Finally, clean up the pod we just created
[root@master2 ~]# kubectl delete pod net-test-master2
pod "net-test-master2" deleted
Install dashboard¶
1. Deploy the dashboard
Deploying Dashboard 2.0.3 with kubeadm: https://blog.csdn.net/weixin_38849917/article/details/107539193
Add the NodePort access type and a nodePort to the Service; my recommended.yaml file is as follows:
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort # added: access type
ports:
- port: 443
nodePort: 30001 # added: port
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.3
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
Apply the manifest:
[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check which node it runs on and its port
kubectl -n kubernetes-dashboard get pod,svc -o wide
Access the dashboard page via any node IP and the service's port 30001
Create the create-admin.yaml file:
[root@master ~]# cat create-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Apply it
kubectl apply -f create-admin.yaml
Get the user's token to use for logging in
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Copy the token above and log in at https://192.168.1.26:30001/
To remove the dashboard:
kubectl delete -f create-admin.yaml
kubectl delete -f recommended.yaml
Use the dashboard¶
Create a pod from the dashboard
Click the plus sign to start creating a pod
Wait for the pod creation result (created successfully)
You can also check from the command line, of course
[root@master1 pki]# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test 1/1 Running 0 3h4m
nginx-pod 1/1 Running 0 3h1m
nginx01-55c8d4f7cd-ddkl5 1/1 Running 0 65s
Install metrics¶
Install the metrics-server component. metrics-server is a cluster-wide aggregator of resource usage data; it only exposes the data and does not store it. It focuses on implementing the resource metrics API (CPU, file descriptors, memory, request latency, and so on) and provides the data for in-cluster consumers such as kubectl, HPA, and the scheduler.
Modify the apiserver configuration under /etc/kubernetes/manifests
Note: this is required from k8s 1.17 onward; on 1.16 it can be skipped.
The flag enables API Aggregation, which allows extending the Kubernetes API without modifying the Kubernetes core code.
[root@master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
Under the following section
spec:
containers:
- command:
- kube-apiserver
add the following line:
- --enable-aggregator-routing=true
[root@master2 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
Under the following section
spec:
containers:
- command:
- kube-apiserver
add the following line:
- --enable-aggregator-routing=true
[root@master3 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
Under the following section
spec:
containers:
- command:
- kube-apiserver
add the following line:
- --enable-aggregator-routing=true
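Instead of editing the three manifests by hand, a GNU sed one-liner like the following can append the flag (a sketch; run it on each master and double-check the manifest afterwards, since kubelet restarts the apiserver as soon as the file changes):
sed -i '/- kube-apiserver$/a\    - --enable-aggregator-routing=true' /etc/kubernetes/manifests/kube-apiserver.yaml
grep enable-aggregator-routing /etc/kubernetes/manifests/kube-apiserver.yaml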
Re-apply the apiserver configuration:
[root@master1 ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created
[root@master1 ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted
[root@master2 ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created
[root@master2 ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted
[root@master3 ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created
[root@master3 ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted
Check the update status
[root@master1 ~]# kubectl get pods -n kube-system|grep apiserver
kube-apiserver 0/1 CrashLoopBackOff 1 22s
kube-apiserver-master1 1/1 Running 0 45s
Once the kube-apiserver-master1 pod is running, delete the kube-apiserver pod that is stuck in CrashLoopBackOff
[root@master1 ~]# kubectl delete pods kube-apiserver -n kube-system
pod "kube-apiserver" deleted
Upload the metrics-related image archives; they must be imported on every node
docker load -i addon.tar.gz
docker load -i metrics-server-amd64-0-3-6.tar.gz
scp addon.tar.gz metrics-server-amd64-0-3-6.tar.gz root@master2:/root/
ssh master2 "docker load -i addon.tar.gz ; docker load -i metrics-server-amd64-0-3-6.tar.gz"
scp addon.tar.gz metrics-server-amd64-0-3-6.tar.gz root@master3:/root/
ssh master3 "docker load -i addon.tar.gz ; docker load -i metrics-server-amd64-0-3-6.tar.gz"
scp addon.tar.gz metrics-server-amd64-0-3-6.tar.gz root@node1:/root/
ssh node1 "docker load -i addon.tar.gz ; docker load -i metrics-server-amd64-0-3-6.tar.gz"
scp addon.tar.gz metrics-server-amd64-0-3-6.tar.gz root@node2:/root/
ssh node2 "docker load -i addon.tar.gz ; docker load -i metrics-server-amd64-0-3-6.tar.gz"
Apply the metrics YAML file
[root@master1 ~]# rz metrics.yaml
[root@master1 ~]# kubectl apply -f metrics.yaml
Check that the metrics pod started
[root@master1 ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-6595f875d6-clxp6 2/2 Running 0 6s
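Optionally confirm that the metrics API is registered and available before relying on kubectl top:
kubectl get apiservices | grep metrics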
Test kubectl top¶
Check pod resource usage
[root@master1 ~]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
calico-kube-controllers-6949477b58-4f66x 1m 19Mi
calico-node-gxh7k 22m 93Mi
calico-node-jk7wv 21m 95Mi
calico-node-qlwsj 20m 96Mi
coredns-7f89b7bc75-bxwt5 2m 15Mi
coredns-7f89b7bc75-wl9pc 2m 17Mi
etcd-master1 9m 65Mi
kube-apiserver-master1 37m 429Mi
kube-controller-manager-master1 7m 53Mi
kube-proxy-4j24n 1m 17Mi
kube-proxy-7m8j7 1m 19Mi
kube-proxy-v9wsc 1m 16Mi
kube-scheduler-master1 3m 24Mi
metrics-server-6595f875d6-clxp6 75m 15Mi
Check resource usage of the cluster nodes
[root@master1 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master1 124m 1% 1795Mi 5%
node1 75m 0% 1165Mi 3%
node2 75m 0% 1212Mi 3%
Change the scheduler bind ports¶
Make the scheduler and controller-manager ports listen on the physical host
[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
From 1.19 onward, ports 10251 and 10252 are bound to 127.0.0.1 by default.
If you monitor with Prometheus, no data can be scraped from them, so bind the ports to the physical host instead.
This can be done as follows:
[root@master1 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
Make the following changes:
Change --bind-address=127.0.0.1 to --bind-address=192.168.1.26
Change the host under the httpGet: fields from 127.0.0.1 to 192.168.1.26
Remove --port=0
sed -i "s#127.0.0.1#192.168.1.26#g" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-scheduler.yaml
[root@master1 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
Change --bind-address=127.0.0.1 to --bind-address=192.168.1.26
Change the host under the httpGet: fields from 127.0.0.1 to 192.168.1.26
Remove --port=0
sed -i "s#127.0.0.1#192.168.1.26#g" /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-controller-manager.yaml
#Note: 192.168.1.26 is the IP of the k8s control-plane node master1
# On master1 (192.168.1.26):
sed -i "s#127.0.0.1#192.168.1.26#g" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "s#127.0.0.1#192.168.1.26#g" /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
systemctl status kubelet
sed -i "s#127.0.0.1#192.168.1.27#g" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "s#127.0.0.1#192.168.1.27#g" /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
systemctl status kubelet
sed -i "s#127.0.0.1#192.168.1.30#g" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "s#127.0.0.1#192.168.1.30#g" /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i "/--port/d" /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
systemctl status kubelet
After the changes, restart kubelet on every k8s node
# Run on all nodes
systemctl restart kubelet
systemctl status kubelet
You can see the corresponding ports are now listened on by the physical host
[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@master1 ~]# ss -ltnp|grep 10251 ; ss -ltnp|grep 10252
LISTEN 0 128 :::10251 :::* users:(("kube-scheduler",pid=79687,fd=7))
LISTEN 0 128 :::10252 :::* users:(("kube-controller",pid=79787,fd=7))
[root@master2 ~]# ss -ltnp|grep 10251 ; ss -ltnp|grep 10252
LISTEN 0 128 :::10251 :::* users:(("kube-scheduler",pid=127649,fd=7))
LISTEN 0 128 :::10252 :::* users:(("kube-controller",pid=127751,fd=7))
[root@master3 ~]# ss -ltnp|grep 10251 ; ss -ltnp|grep 10252
LISTEN 0 128 :::10251 :::* users:(("kube-scheduler",pid=127649,fd=7))
LISTEN 0 128 :::10252 :::* users:(("kube-controller",pid=127751,fd=7))
Extend certificate lifetimes¶
Check certificate validity:
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
Not Before: Apr 18 11:42:04 2022 GMT
Not After : Apr 15 11:42:04 2032 GMT
The output above shows the CA certificate is valid for 10 years, from 2022 to 2032
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
Not Before: Apr 18 11:42:04 2022 GMT
Not After : Apr 19 06:35:35 2023 GMT
The output above shows the apiserver certificate is valid for only 1 year, from 2022 to 2023:
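kubeadm can also summarize all certificate expirations in one go (in 1.20 the subcommand is kubeadm certs check-expiration; on older releases it lived under kubeadm alpha certs):
kubeadm certs check-expiration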
Extend the certificate expiration. 1. Upload the update-kubeadm-cert.sh script to master1, master2, and master3
[root@master1 ~]# ll /root/update-kubeadm-cert.sh
-rw------- 1 root root 10756 Apr 10 15:09 /root/update-kubeadm-cert.sh
scp update-kubeadm-cert.sh master2:/root/
scp update-kubeadm-cert.sh master3:/root/
2. Run the following on every master. 1) Make update-kubeadm-cert.sh executable
[root@master1 ~]# chmod +x /root/update-kubeadm-cert.sh
ssh master2 "chmod +x /root/update-kubeadm-cert.sh"
ssh master3 "chmod +x /root/update-kubeadm-cert.sh"
2) Run the script to push the certificate expiration out to 10 years
[root@master1 ~]# ./update-kubeadm-cert.sh all
[root@master2 ~]# ./update-kubeadm-cert.sh all
[root@master3 ~]# ./update-kubeadm-cert.sh all
3) On master1, check that Pods can still be listed; if data comes back, the certificates were re-issued successfully
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
net-test 1/1 Running 0 19h
nginx-deployment-5d47ff8589-fd68t 1/1 Running 0 19h
Being able to see pod information means the certificates were issued correctly
Verify that the certificate lifetimes have been extended to 10 years
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
Not Before: Apr 19 07:42:41 2022 GMT
Not After : Apr 16 07:42:41 2032 GMT
The output above shows the apiserver certificate is now valid for 10 years, from 2022 to 2032:
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep Not
Not Before: Apr 19 07:42:40 2022 GMT
Not After : Apr 16 07:42:40 2032 GMT
The output above shows the etcd client certificate is now valid for 10 years, from 2022 to 2032:
[root@master1 ~]# openssl x509 -in /etc/kubernetes/pki/front-proxy-ca.crt -noout -text |grep Not
Not Before: Apr 18 11:42:04 2022 GMT
Not After : Apr 15 11:42:04 2032 GMT
The output above shows the front-proxy certificate is valid for 10 years, from 2022 to 2032