03 - Advanced Kubernetes Distributed Cluster¶
In-Depth Look at K8s Components¶
Overall Architecture¶
Master Architecture¶
API Server: exposes the Kubernetes API. It handles REST operations and updates objects in etcd, and is the single entry point for all create, read, update, and delete operations on cluster resources.
Scheduler: resource scheduling; responsible for scheduling Pods onto Nodes.
Controller Manager: all other cluster-level functions are performed by the controller manager; it is the automated control center for cluster resource objects.
etcd: all persistent cluster state is stored in etcd.
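If the cluster is already up, a quick way to confirm that these control-plane components are healthy is the following (a suggested check, not part of the original lab steps):
[root@linux-node1 ~]# kubectl get componentstatuses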
Node Architecture¶
Borg Architecture¶
The K8s architecture was independently redesigned on the basis of Google's Borg architecture.
Pod Definition¶
The most common model runs a single container per Pod (one-container-per-Pod).
Pod definition template:
apiVersion: v1             # API version
kind: Pod                  # Resource type: Pod
metadata:                  # Metadata
  name: nginx-pod          # metadata.name: the Pod's name
  labels:                  # metadata.labels: list of custom labels
    app: nginx             # Application name; the key label K8s uses to identify this Pod
spec:                      # Detailed definition of the Pod's containers
  containers:              # spec.containers: container list
  - name: nginx            # spec.containers.name: container name
    image: nginx:1.13.12   # spec.containers.image: container image
    ports:                 # List of ports the container exposes
    - containerPort: 80    # Port the container listens on
Notes on the Pod definition template format
*Rule 1: indentation*
YAML uses consistent indentation to express the hierarchical structure of the data.
Each indentation level consists of two spaces.
Never use the Tab key.
*Rule 2: colons*
Keys and values are separated by a colon, and the colon must be followed by a space.
apiVersion: v1 # API version
*Rule 3: dashes*
A list item is written as a dash followed by a space; multiple items at the same indentation level belong to the same list.
containers:       # spec.containers: container list
- name: nginx     # spec.containers.name: container name
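To catch indentation or syntax mistakes before actually creating a resource, the manifest can be validated with a dry run (a suggested check, shown here against the nginx-pod.yaml written later in this chapter):
[root@linux-node1 ~]# kubectl create -f nginx-pod.yaml --dry-run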
Replication Controller (RC)¶
The RC is the earliest API object in a K8s cluster for keeping Pods highly available. It monitors the running Pods and ensures that the cluster always runs the specified number of Pod replicas.
The specified number can be one or many: if fewer Pods than specified are running, the RC starts new replicas; if more are running, the RC kills the surplus ones.
Even when the specified number is 1, running a Pod through an RC is wiser than running the Pod directly, because the RC still provides its high-availability capability and guarantees that one Pod is always running.
Replica Set (RS)¶
The RS is the next-generation RC. It provides the same high-availability capability; the main difference is that the RS supports more kinds of label-matching modes. A ReplicaSet is generally not used on its own, but rather as the desired-state parameter of a Deployment.
The concept appeared in K8s 1.2 as an upgrade of RC and is normally used together with a Deployment.
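As an illustration of the richer matching modes, an RS selector may use set-based matchExpressions in addition to exact matchLabels (a hedged sketch; the manifests later in this chapter only use matchLabels):
selector:
  matchExpressions:
  - {key: app, operator: In, values: [nginx]}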
Containerizing an Application in Practice¶
Create a Private Local Registry¶
Log in to the Docker registry's web UI and create a private project.
Create the private project devopsedu.
After it is created, check it.
Push the Images to the Local Registry¶
Pull the nginx images, simulating one application with two versions.
[root@linux-node1 ~]# docker pull nginx:1.13.12
[root@linux-node1 ~]# docker pull nginx:1.14.0
Check that the images have finished downloading.
[root@linux-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx 1.14.0 ecc98fc2f376 22 months ago 109MB
nginx 1.13.12 ae513a47849c 2 years ago 109MB
Configure the Docker registry mirror and the insecure registry address; apply this on all node servers.
cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["https://dx5z2hy7.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.56.10"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
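To confirm that the new settings have taken effect, docker info can be checked on each node (a suggested verification step):
[root@linux-node1 ~]# docker info | grep -A1 "Insecure Registries"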
Log in to the local Docker image registry from the remote node.
[root@linux-node200 ~]# ssh 192.168.56.11
Last login: Thu Aug 20 09:44:25 2020 from 192.168.56.200
[root@linux-node1 ~]# docker login 192.168.56.10
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Tag the images.
[root@linux-node1 ~]# docker tag nginx:1.13.12 192.168.56.10/devopsedu/nginx:1.13.12
[root@linux-node1 ~]# docker tag nginx:1.14.0 192.168.56.10/devopsedu/nginx:1.14.0
Push the images to the Docker registry.
[root@linux-node1 ~]# docker push 192.168.56.10/devopsedu/nginx:1.13.12
The push refers to repository [192.168.56.10/devopsedu/nginx]
7ab428981537: Pushed
82b81d779f83: Pushed
d626a8ad97a1: Pushed
1.13.12: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
[root@linux-node1 ~]# docker push 192.168.56.10/devopsedu/nginx:1.14.0
The push refers to repository [192.168.56.10/devopsedu/nginx]
19c605f267f4: Pushed
f4a5f8f59caa: Pushed
237472299760: Pushed
1.14.0: digest: sha256:d43aa3719937f9df0502f8258f3034a21b720b5b9bbf01bbfdbd09871aac8930 size: 948
Check the images in the local registry's web UI.
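Optionally, the push can also be verified from any logged-in node by pulling one of the images back:
[root@linux-node1 ~]# docker pull 192.168.56.10/devopsedu/nginx:1.13.12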
Running the Container in a Pod¶
Write an nginx Pod manifest.
[root@linux-node1 example]# cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13.12
    ports:
    - containerPort: 80
Create and run the nginx Pod.
[root@linux-node1 ~]# cd /root/salt-kubebin/example
[root@linux-node1 example]# kubectl create -f nginx-pod.yaml
pod "nginx-pod" created
View the nginx Pod.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 44s
View more details of the nginx Pod, including its IP and the node it is running on.
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-pod 1/1 Running 0 53s 10.2.36.17 192.168.56.12
View the detailed events and progress of the nginx Pod.
[root@linux-node1 ~]# kubectl describe pod nginx-pod
Verify that the created nginx Pod is reachable.
[root@linux-node1 ~]# ping 10.2.36.17
PING 10.2.36.17 (10.2.36.17) 56(84) bytes of data.
64 bytes from 10.2.36.17: icmp_seq=1 ttl=61 time=0.541 ms
64 bytes from 10.2.36.17: icmp_seq=2 ttl=61 time=0.610 ms
64 bytes from 10.2.36.17: icmp_seq=3 ttl=61 time=0.456 ms
64 bytes from 10.2.36.17: icmp_seq=4 ttl=61 time=0.486 ms
^C
--- 10.2.36.17 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.456/0.523/0.610/0.060 ms
[root@linux-node1 ~]# curl --head 10.2.36.17
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 02:08:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
View the nginx Pod's logs.
[root@linux-node1 ~]# kubectl logs pod/nginx-pod
10.2.63.0 - - [20/Aug/2020:02:08:47 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
10.2.63.0 - - [20/Aug/2020:02:09:22 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
10.2.63.0 - - [20/Aug/2020:02:09:23 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
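Besides the logs, a shell can be opened inside the running container for inspection (a suggested extra step; the official nginx image ships with bash):
[root@linux-node1 ~]# kubectl exec -it nginx-pod -- /bin/bash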
Delete the nginx Pod so that it does not interfere with the later experiments.
[root@linux-node1 ~]# kubectl delete pod nginx-pod
pod "nginx-pod" deleted
Configuring a Secret for the Pod¶
Log in to the image registry.
[root@linux-node1 ~]# docker login 192.168.56.10
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
View the stored registry credentials.
[root@linux-node1 ~]# cat /root/.docker/config.json
{
  "auths": {
    "192.168.56.10": {
      "auth": "YWRtaW46SGFyYm9yMTIzNDU="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.12 (linux)"
  }
}
Encode the config file with Base64.
[root@linux-node1 ~]# cat /root/.docker/config.json | base64
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjU2LjEwIjogewoJCQkiYXV0aCI6ICJZV1J0YVc0NlNH
RnlZbTl5TVRJek5EVT0iCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6
ICJEb2NrZXItQ2xpZW50LzE5LjAzLjEyIChsaW51eCkiCgl9Cn0=
Create the Secret YAML file.
[root@linux-node1 example]# cat harbor-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-secret
  namespace: default
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjU2LjEwIjogewoJCQkiYXV0aCI6ICJZV1J0YVc0NlNHRnlZbTl5TVRJek5EVT0iCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE5LjAzLjEyIChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
Create the registry Secret.
[root@linux-node1 example]# kubectl create -f harbor-secret.yaml
secret "harbor-secret" created
View the registry Secret.
[root@linux-node1 ~]# kubectl get secret
NAME TYPE DATA AGE
default-token-dgs2w kubernetes.io/service-account-token 3 2d
harbor-secret kubernetes.io/dockerconfigjson 1 28s
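As an alternative to hand-writing the YAML above, kubectl can generate an equivalent Secret directly from the registry credentials (an equivalent approach, not used in this lab; depending on the kubectl version an additional --docker-email flag may be required):
[root@linux-node1 ~]# kubectl create secret docker-registry harbor-secret --docker-server=192.168.56.10 --docker-username=admin --docker-password=Harbor12345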
Pulling Images from the Local Harbor Registry¶
Configure the nginx Pod to pull from the local image registry.
[root@linux-node1 ]# vim nginx-pod-local.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: 192.168.56.10/devopsedu/nginx:1.13.12
    ports:
    - containerPort: 80
  imagePullSecrets:
  - name: harbor-secret
Start the nginx Pod.
[root@linux-node1 ~]# kubectl create -f nginx-pod-local.yaml
pod "nginx-pod" created
View the nginx Pod.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 22s
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-pod 1/1 Running 0 31s 10.2.15.18 192.168.56.13
View the nginx Pod details.
[root@linux-node1 ~]# kubectl describe pod nginx-pod
Check that the nginx Pod is reachable.
[root@linux-node1 ~]# ping 10.2.15.18
PING 10.2.15.18 (10.2.15.18) 56(84) bytes of data.
64 bytes from 10.2.15.18: icmp_seq=1 ttl=61 time=0.468 ms
64 bytes from 10.2.15.18: icmp_seq=2 ttl=61 time=0.892 ms
64 bytes from 10.2.15.18: icmp_seq=3 ttl=61 time=0.544 ms
^C
--- 10.2.15.18 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.468/0.634/0.892/0.186 ms
[root@linux-node1 ~]# curl --head 10.2.15.18
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 03:31:27 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
Delete the nginx Pod so that it does not interfere with the later tests.
[root@linux-node1 ~]# kubectl delete pod nginx-pod
pod "nginx-pod" deleted
The Role of the Pause Container¶
The pause container is the foundation of Linux namespace sharing inside a Pod: the other containers in the Pod join the namespaces it holds.
When PID namespace sharing is enabled, it also serves as the Pod's init process. The pause image the kubelet uses is configured with --pod-infra-container-image, as the unit file below shows.
[root@linux-node2 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
[root@linux-node2 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
--address=192.168.56.12 \
--hostname-override=192.168.56.12 \
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/kubernetes/bin/cni \
--cluster-dns=10.1.0.2 \
--cluster-domain=cluster.local. \
--hairpin-mode hairpin-veth \
--allow-privileged=true \
--anonymous-auth=false \
--fail-swap-on=false \
--logtostderr=true \
--v=2 \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
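On a node that is running Pods, one pause (pod-infra) container per Pod can be seen next to the application containers (a suggested check):
[root@linux-node2 ~]# docker ps | grep pause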
Managing Pods with Controllers¶
Managing Pods with a Replication Controller¶
Write the nginx RC manifest.
[root@linux-node1 ~]# vim nginx-rc-local.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.56.10/devopsedu/nginx:1.13.12
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-secret
Create the nginx RC.
[root@linux-node1 ~]# kubectl create -f nginx-rc-local.yaml
replicationcontroller "nginx-rc" created
View the nginx RC.
[root@linux-node1 ~]# kubectl get rc
NAME DESIRED CURRENT READY AGE
nginx-rc 3 3 3 26s
[root@linux-node1 ~]# kubectl get rc -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
nginx-rc 3 3 3 32s nginx 192.168.56.10/devopsedu/nginx:1.13.12 app=nginx
View the nginx RC details.
[root@linux-node1 ~]# kubectl describe rc nginx-rc
View the nginx RC Pods.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 33s
nginx-rc-4nztb 1/1 Running 0 5s
nginx-rc-865q9 1/1 Running 0 5s
We clearly specified 3 replicas, so why were only 2 new Pods created? Because the standalone nginx-pod also carries the label app=nginx,
the RC counts it toward the total of 3 replicas. What happens if we delete nginx-pod?
[root@linux-node1 ~]# kubectl delete pod nginx-pod
pod "nginx-pod" deleted
The RC then immediately creates a third nginx-rc Pod to restore the replica count.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-rc-4nztb 1/1 Running 0 52s
nginx-rc-865q9 1/1 Running 0 52s
nginx-rc-r9d6w 1/1 Running 0 14s
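Which Pods the RC's selector currently matches can be confirmed by filtering on the label (a suggested check):
[root@linux-node1 ~]# kubectl get pod -l app=nginx --show-labels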
Scale the nginx RC up to 4 replicas.
[root@linux-node1 ~]# kubectl scale rc nginx-rc --replicas=4
replicationcontroller "nginx-rc" scaled
Check that it has scaled up to 4.
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-rc-4nztb 1/1 Running 0 1m 10.2.36.20 192.168.56.12
nginx-rc-865q9 1/1 Running 0 1m 10.2.15.21 192.168.56.13
nginx-rc-r44xt 1/1 Running 0 15s 10.2.15.22 192.168.56.13
nginx-rc-r9d6w 1/1 Running 0 1m 10.2.36.21 192.168.56.12
Scale the nginx RC down to 2 replicas.
[root@linux-node1 ~]# kubectl scale rc nginx-rc --replicas=2
replicationcontroller "nginx-rc" scaled
Check that it has scaled down to 2.
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-rc-4nztb 1/1 Running 0 2m 10.2.36.20 192.168.56.12
nginx-rc-865q9 1/1 Running 0 2m 10.2.15.21 192.168.56.13
Upgrade the nginx RC image version (rolling update).
[root@linux-node1 ~]# kubectl rolling-update nginx-rc --image=192.168.56.10/devopsedu/nginx:1.14.0
Created nginx-rc-ac89089c72420b62f4bd78e82ae85bca
Scaling up nginx-rc-ac89089c72420b62f4bd78e82ae85bca from 0 to 2, scaling down nginx-rc from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling nginx-rc-ac89089c72420b62f4bd78e82ae85bca up to 1
Scaling nginx-rc down to 1
Scaling nginx-rc-ac89089c72420b62f4bd78e82ae85bca up to 2
Scaling nginx-rc down to 0
Update succeeded. Deleting old controller: nginx-rc
Renaming nginx-rc-ac89089c72420b62f4bd78e82ae85bca to nginx-rc
replicationcontroller "nginx-rc" rolling updated
View the upgraded nginx RC Pods.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-rc-ac89089c72420b62f4bd78e82ae85bca-4b7b4 1/1 Running 0 2m
nginx-rc-ac89089c72420b62f4bd78e82ae85bca-r9v58 1/1 Running 0 1m
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-rc-ac89089c72420b62f4bd78e82ae85bca-4b7b4 1/1 Running 0 3m 10.2.36.22 192.168.56.12
nginx-rc-ac89089c72420b62f4bd78e82ae85bca-r9v58 1/1 Running 0 2m 10.2.15.23 192.168.56.13
Check the upgraded nginx RC Pods.
[root@linux-node1 ~]# curl --head 10.2.36.22
HTTP/1.1 200 OK
Server: nginx/1.14.0 # upgrade complete
Date: Thu, 20 Aug 2020 03:49:48 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 17 Apr 2018 13:46:53 GMT
Connection: keep-alive
ETag: "5ad5facd-264"
Accept-Ranges: bytes
Delete the nginx RC so that it does not interfere with the later tests.
[root@linux-node1 ~]# kubectl delete rc nginx-rc
replicationcontroller "nginx-rc" deleted
Managing Pods with a Replica Set¶
Write the nginx RS manifest.
[root@linux-node1 ~]# vim nginx-rs-local.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.56.10/devopsedu/nginx:1.13.12
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-secret
Create the nginx RS.
[root@linux-node1 ~]# kubectl create -f nginx-rs-local.yaml
replicaset.apps "nginx-rs" created
View the nginx RS.
[root@linux-node1 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-rs 3 3 3 24s
[root@linux-node1 ~]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
nginx-rs 3 3 3 28s nginx 192.168.56.10/devopsedu/nginx:1.13.12 app=nginx
View the nginx RS Pods.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-rs-m4bzz 1/1 Running 0 13s
nginx-rs-rmdl4 1/1 Running 0 13s
nginx-rs-sbxm5 1/1 Running 0 13s
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-rs-m4bzz 1/1 Running 0 18s 10.2.15.25 192.168.56.13
nginx-rs-rmdl4 1/1 Running 0 18s 10.2.36.25 192.168.56.12
nginx-rs-sbxm5 1/1 Running 0 18s 10.2.15.26 192.168.56.13
Verify the nginx RS Pods.
[root@linux-node1 ~]# curl --head 10.2.15.25
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 03:56:49 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
Delete the nginx RS so that it does not interfere with the Deployment experiments that follow.
[root@linux-node1 ~]# kubectl delete rs nginx-rs
replicaset.extensions "nginx-rs" deleted
Managing Pods with a Deployment¶
Write the nginx Deployment manifest.
[root@linux-node1 ~]# vim nginx-deployment-local.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.56.10/devopsedu/nginx:1.13.12
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-secret
Create the nginx Deployment (without recording the change cause).
[root@linux-node1 ~]# kubectl create -f nginx-deployment-local.yaml
deployment.apps "nginx-deployment" created
View the nginx Deployment.
[root@linux-node1 ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 22s
View the nginx Deployment Pods.
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6d7f844769-68vsh 1/1 Running 0 40s
nginx-deployment-6d7f844769-8k9s6 1/1 Running 0 40s
nginx-deployment-6d7f844769-gd2b9 1/1 Running 0 40s
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-6d7f844769-68vsh 1/1 Running 0 44s 10.2.36.26 192.168.56.12
nginx-deployment-6d7f844769-8k9s6 1/1 Running 0 44s 10.2.15.27 192.168.56.13
nginx-deployment-6d7f844769-gd2b9 1/1 Running 0 44s 10.2.36.27 192.168.56.12
The creation method above is not recommended for production because the change cause is not recorded, so delete it first.
[root@linux-node1 ~]# kubectl delete deployment nginx-deployment
deployment.extensions "nginx-deployment" deleted
Create the nginx Deployment with --record (recommended for production; the change cause is recorded in the rollout history).
[root@linux-node1 ~]# kubectl create -f nginx-deployment-local.yaml --record
deployment.apps "nginx-deployment" created
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-6d7f844769-hbbtf 1/1 Running 0 20s 10.2.15.29 192.168.56.13
nginx-deployment-6d7f844769-k7v5f 1/1 Running 0 20s 10.2.36.28 192.168.56.12
nginx-deployment-6d7f844769-skr59 1/1 Running 0 20s 10.2.15.28 192.168.56.13
Update the nginx Deployment image version.
[root@linux-node1 ~]# kubectl set image deployment/nginx-deployment nginx=192.168.56.10/devopsedu/nginx:1.14.0
deployment.apps "nginx-deployment" image updated
Check the nginx Deployment rollout status.
[root@linux-node1 ~]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@linux-node1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-55ff77b5bc-h2qh7 1/1 Running 0 50s 10.2.36.29 192.168.56.12
nginx-deployment-55ff77b5bc-k7jb8 1/1 Running 0 48s 10.2.15.30 192.168.56.13
nginx-deployment-55ff77b5bc-pwlff 1/1 Running 0 46s 10.2.15.31 192.168.56.13
[root@linux-node1 ~]# kubectl get deployment -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 3 3 3 3 2m nginx 192.168.56.10/devopsedu/nginx:1.14.0 app=nginx
Verify the nginx Deployment image version.
[root@linux-node1 ~]# curl --head 10.2.36.29
HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Thu, 20 Aug 2020 05:57:49 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 17 Apr 2018 13:46:53 GMT
Connection: keep-alive
ETag: "5ad5facd-264"
Accept-Ranges: bytes
View the nginx Deployment rollout history.
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create --filename=nginx-deployment-local.yaml --record=true
2 kubectl set image deployment/nginx-deployment nginx=192.168.56.10/devopsedu/nginx:1.14.0
To see what the first revision did:
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment --revision=1
deployments "nginx-deployment" with revision #1
Pod Template:
Labels: app=nginx
pod-template-hash=2839400325
Annotations: kubernetes.io/change-cause=kubectl create --filename=nginx-deployment-local.yaml --record=true
Containers:
nginx:
Image: 192.168.56.10/devopsedu/nginx:1.13.12
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
To see what the second revision did:
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" with revision #2
Pod Template:
Labels: app=nginx
pod-template-hash=1199336167
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=192.168.56.10/devopsedu/nginx:1.14.0
Containers:
nginx:
Image: 192.168.56.10/devopsedu/nginx:1.14.0
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Quickly roll the nginx Deployment back to the previous revision.
[root@linux-node1 ~]# kubectl rollout undo deployment/nginx-deployment
deployment.apps "nginx-deployment"
Roll the nginx Deployment back to a specific revision.
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
2 kubectl set image deployment/nginx-deployment nginx=192.168.56.10/devopsedu/nginx:1.14.0
3 kubectl create --filename=nginx-deployment-local.yaml --record=true
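The history above shows the available revision numbers. To roll back to a specific one, pass --to-revision to rollout undo, for example revision 2 (the command output is not captured here):
[root@linux-node1 ~]# kubectl rollout undo deployment/nginx-deployment --to-revision=2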
Scale the nginx Deployment up to 5 replicas.
[root@linux-node1 ~]# kubectl scale deployment nginx-deployment --replicas 5
deployment.extensions "nginx-deployment" scaled
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6d7f844769-8qq8n 1/1 Running 0 2m
nginx-deployment-6d7f844769-dpqbl 1/1 Running 0 2m
nginx-deployment-6d7f844769-kprdz 1/1 Running 0 12s
nginx-deployment-6d7f844769-lcbr9 1/1 Running 0 2m
nginx-deployment-6d7f844769-q8rdr 1/1 Running 0 12s
[root@linux-node1 ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 5 5 5 5 10m
Scale the nginx Deployment down to 2 replicas.
[root@linux-node1 ~]# kubectl scale deployment nginx-deployment --replicas 2
deployment.extensions "nginx-deployment" scaled
[root@linux-node1 ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 2 2 2 2 11m
[root@linux-node1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6d7f844769-8qq8n 1/1 Running 0 2m
nginx-deployment-6d7f844769-lcbr9 1/1 Running 0 2m
Managing Pod Access with a Service - Layer 4 Proxy¶
Load-Balancing Pods with a Service¶
Write the nginx Service (default namespace).
[root@linux-node1 ~]# vim nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Create and run the nginx Service.
[root@linux-node1 ~]# kubectl create -f nginx-service.yaml
View the nginx Service.
[root@linux-node1 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 2d
nginx-service ClusterIP 10.1.159.207 <none> 80/TCP 2d
[root@linux-node1 ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 2d <none>
nginx-service ClusterIP 10.1.159.207 <none> 80/TCP 2d app=nginx
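The Pod endpoints behind the Service can also be listed to see exactly which Pod IPs it load-balances to (a suggested check):
[root@linux-node1 ~]# kubectl get endpoints nginx-service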
Verify the nginx Service (run this on a server where the IPVS load balancer is installed).
[root@linux-node2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.56.12:30001 rr
-> 10.2.36.14:8443 Masq 1 0 0
TCP 10.1.0.1:443 rr persistent 10800
-> 192.168.56.11:6443 Masq 1 2 0
TCP 10.1.0.2:53 rr
-> 10.2.15.12:53 Masq 1 0 0
-> 10.2.36.11:53 Masq 1 0 0
TCP 10.1.16.9:8086 rr
-> 10.2.36.13:8086 Masq 1 0 0
TCP 10.1.25.133:80 rr
-> 10.2.15.14:3000 Masq 1 0 0
TCP 10.1.59.46:443 rr
-> 10.2.36.14:8443 Masq 1 0 0
TCP 10.1.102.213:80 rr
-> 10.2.15.15:8082 Masq 1 0 0
TCP 10.1.159.207:80 rr
-> 10.2.15.32:80 Masq 1 0 0
-> 10.2.36.30:80 Masq 1 0 0
TCP 10.2.36.0:30001 rr
-> 10.2.36.14:8443 Masq 1 0 0
TCP 10.2.36.1:30001 rr
-> 10.2.36.14:8443 Masq 1 0 0
TCP 127.0.0.1:30001 rr
-> 10.2.36.14:8443 Masq 1 0 0
UDP 10.1.0.2:53 rr
-> 10.2.15.12:53 Masq 1 0 0
-> 10.2.36.11:53 Masq 1 0 0
Verify the nginx Service.
[root@linux-node2 ~]# curl --head 10.1.159.207
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 06:11:22 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
To expose multiple ports, define several port entries in the Service:
[root@linux-node1 ~]# cat nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 80
Re-apply the nginx Service.
[root@linux-node1 ~]# kubectl apply -f nginx-service.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service "nginx-service" configured
Verify the re-applied nginx Service.
[root@linux-node1 ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 2d <none>
nginx-service ClusterIP 10.1.159.207 <none> 80/TCP,443/TCP 2d app=nginx
Verify the multi-port nginx Service.
[root@linux-node2 ~]# ipvsadm -Ln|grep 10.1.159.207
TCP 10.1.159.207:80 rr
TCP 10.1.159.207:443 rr
[root@linux-node2 ~]# curl --head http://10.1.159.207
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 06:14:42 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@linux-node2 ~]# curl --head 10.1.159.207:443
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 20 Aug 2020 06:14:58 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
The nginx Deployment and the nginx Service can be combined into one manifest: simply concatenate the two files (separated by a --- line) into a single file.
Accessing from Outside the Cluster with NodePort¶
Write the nginx NodePort Service.
[root@linux-node1 ~]# cat nginx-service-nodeport.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
Create and run the nginx NodePort Service.
[root@linux-node1 ~]# kubectl apply -f nginx-service-nodeport.yaml
service "nginx-service" configured
View the nginx NodePort Service.
[root@linux-node1 ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 2d <none>
nginx-service NodePort 10.1.159.207 <none> 80:28347/TCP 2d app=nginx
Verify the nginx NodePort Service.
[root@linux-node2 ~]# ipvsadm -Ln|grep 28347
TCP 192.168.56.12:28347 rr
TCP 10.2.36.0:28347 rr
TCP 10.2.36.1:28347 rr
TCP 127.0.0.1:28347 rr
[root@linux-node3 ~]# ipvsadm -Ln|grep 28347
TCP 192.168.56.13:28347 rr
TCP 10.2.15.0:28347 rr
TCP 10.2.15.1:28347 rr
TCP 127.0.0.1:28347 rr
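With the NodePort in place, the Service is reachable from outside the cluster through any node IP on the allocated port, for example (a suggested verification using the nodePort shown above):
[root@linux-node2 ~]# curl --head 192.168.56.12:28347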
Providing External Access with Ingress - Layer 7 Proxy¶
Ingress provides Layer 7 load balancing; nginx can of course be used for this as well.
Reference: http://k8s.unixhot.com/kubernetes/ingress-controller.html
Commonly used Ingress controllers (this lab uses Traefik).
Locate the Ingress deployment files.
[root@linux-node1 ~]# ll salt-kubebin/addons/ingress/
total 12
-rw-r--r-- 1 root root 924 Aug 17 17:07 daemonset.yml
-rw-r--r-- 1 root root 359 Aug 17 17:07 ingress-rbac.yml
-rw-r--r-- 1 root root 464 Aug 17 17:07 traefik-ui.yml
Label the node so that the Ingress controller runs only on node 192.168.56.12.
[root@linux-node1 ingress]# kubectl label nodes 192.168.56.12 edgenode=true
node "192.168.56.12" labeled
[root@linux-node1 ingress]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
192.168.56.12 Ready <none> 2d v1.10.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=192.168.56.12
192.168.56.13 Ready <none> 2d v1.10.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.56.13
Create and run the Ingress controller.
[root@linux-node1 ingress]# kubectl create -f /root/salt-kubebin/addons/ingress/
daemonset.extensions "traefik-ingress-lb" created
serviceaccount "ingress" created
clusterrolebinding.rbac.authorization.k8s.io "ingress" created
service "traefik-web-ui" created
ingress.extensions "traefik-web-ui" created
View the Ingress controller Pod.
[root@linux-node1 ~]# kubectl get pod -n kube-system|grep trae
traefik-ingress-lb-kljvm 1/1 Running 0 47s
Verify the Ingress controller Pod.
[root@linux-node2 ~]# netstat -ltnp|grep 80
tcp 0 0 192.168.56.12:2380 0.0.0.0:* LISTEN 2650/etcd
tcp6 0 0 :::8580 :::* LISTEN 7297/traefik
tcp6 0 0 :::80 :::* LISTEN 7297/traefik
Access the Ingress controller's (Traefik) web dashboard.
http://10.0.190.137:8580/dashboard/
Simulate accessing nginx through a domain name.
[root@linux-node1 ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 2d <none>
nginx-service NodePort 10.1.159.207 <none> 80:28347/TCP 2d app=nginx
Write the nginx Ingress.
[root@linux-node1 ~]# vim nginx-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
Run the nginx Ingress.
[root@linux-node1 ~]# kubectl create -f salt-kubebin/example/nginx-ingress.yaml
ingress.extensions "nginx-ingress" created
View the Ingress for the simulated domain.
[root@linux-node1 ~]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress www.example.com 80 17s
Modify the hosts file on your local computer.
10.0.190.137 www.example.com
Now nginx can be reached from outside the cluster via www.example.com.
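If you prefer not to edit the hosts file, the same thing can be tested by sending the Host header directly to the Ingress controller's address (a suggested alternative):
[root@linux-node1 ~]# curl --head -H "Host: www.example.com" http://10.0.190.137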
Inspect the nginx Ingress.
[root@linux-node1 ~]# kubectl describe ingress
Name: nginx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
www.example.com
/ nginx-service:80 (<none>)
Annotations:
Events: <none>
In production this can be combined with Keepalived.