K8s Advanced (Part 1)

Table of Contents

    • Chapter 7: Common Controllers in Depth
      • 7.1 The relationship between Pods and controllers
      • 7.2 Deployment: the stateless-application controller
      • 7.3 DaemonSet: the daemon controller
      • 7.4 Batch processing: Job & CronJob
    • Chapter 8: Service in Depth
      • 8.1 Why Service exists
      • 8.2 The relationship between Pods and Services
      • 8.3 The three common Service types
      • 8.4 Service proxy modes
      • Service DNS names
      • Summary
    • Chapter 9: Ingress
      • 9.1 Ingress makes up for NodePort's shortcomings
      • 9.2 The relationship between Pods and Ingress
      • 9.3 Ingress Controller
      • 9.4 Ingress
      • 9.5 Customizing Ingress with Annotations
      • 9.6 Ingress Controller high-availability options
    • Chapter 10: Managing Application Configuration
      • 10.1 Secret
      • 10.2 ConfigMap
      • 10.3 How does an application reload configuration dynamically?

Chapter 7: Common Controllers in Depth

7.1 The relationship between Pods and controllers

  • Controllers: objects that manage and run containers on the cluster; sometimes also called workloads.

  • A controller is associated with its Pods through a label selector, as shown in the figure below.

  • Controllers handle operational tasks for Pods, such as scaling and rolling upgrades.

7.2 Deployment: the stateless-application controller

What a Deployment does:

  • Deploys stateless applications (roughly speaking, a stateless application's Pods can drift to any node, with no concern for data or IP changes)

  • Manages Pods and ReplicaSets (the replica-count controller)

  • Supports rollout, replica configuration, rolling upgrades, and rollbacks

  • Provides declarative updates, e.g. updating only to a new image

Use cases: web services, microservices

The figure below shows a standard Deployment YAML; it is associated with its Pods through labels.

Deploy a Java application with YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # run 3 replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: lizhenliang/java-demo
        name: java

Expose this Java application outside the cluster:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80             # port for access inside the cluster
    protocol: TCP
    targetPort: 8080     # container port
    nodePort: 30008      # port exposed externally on each node
  selector:
    app: web
  type: NodePort

Check the resources:

kubectl get pods,svc
NAME                       READY   STATUS    RESTARTS   AGE
pod/web-7f9c858899-dcqwb   1/1     Running   0          18s
pod/web-7f9c858899-q26bj   1/1     Running   0          18s
pod/web-7f9c858899-wg287   1/1     Running   0          48s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        5m55s
service/web          NodePort    10.1.157.27   <none>        80:30008/TCP   48s

Open http://NodeIP:30008 in a browser to reach the application.

Upgrade the application, i.e. roll out a new image version (here switching to an nginx image as an example):
kubectl set image deployment/web nginx=nginx:1.15
kubectl rollout status deployment/web             # watch the rollout status
If the release fails, roll back to the previous version:
kubectl rollout undo deployment/web               # roll back to the previous revision
You can also roll back to a specific revision:
kubectl rollout history deployment/web            # list the rollout history
kubectl rollout undo deployment/web --revision=2  # roll back to a specific revision
Scale out/in:
kubectl scale deployment nginx-deployment --replicas=5
If --replicas is larger than the current value the Deployment scales out; if smaller, it scales in.

kubectl set image triggers a rolling update, i.e. Pods are upgraded in batches.

The rolling-update mechanism is simple: it uses two ReplicaSets, old and new. With 3 replicas, for example, the new RS is first scaled up to 1 replica; once that Pod is ready, the old RS is scaled down to 2 replicas, and so on, gradually replacing Pods until the old RS reaches 0 and the new RS reaches 3, which completes the update. You can watch this process with kubectl describe deployment web.
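The pace of a rolling update can be tuned in the Deployment spec. The text does not show this, so here is a minimal sketch; the field values are illustrative assumptions, not from the original:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 1   # at most 1 Pod may be unavailable during the update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: lizhenliang/java-demo
        name: java
```

With maxSurge: 1 and maxUnavailable: 1, the controller may add one new-RS Pod and remove one old-RS Pod per step, matching the scale-up/scale-down sequence described above.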

7.3 DaemonSet: the daemon controller

What a DaemonSet does:

  • Runs one Pod on every Node

  • A newly joined Node automatically gets a Pod as well

Use cases: agents, e.g. monitoring collectors and log collectors
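The chapter gives no DaemonSet manifest, so here is a minimal hypothetical sketch of a log-collecting agent; the image name and mount path are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: elastic/filebeat:7.3.2   # illustrative log-collector image
        volumeMounts:
        - name: varlog
          mountPath: /var/log           # read host logs from inside the Pod
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Note there is no replicas field: the DaemonSet schedules exactly one Pod per Node.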

7.4 Batch processing: Job & CronJob

Job: runs to completion once

Use cases: offline data processing, video transcoding, and similar workloads

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never   # do not restart the container after a failure
  backoffLimit: 4            # .spec.backoffLimit caps the number of retries; the default is 6

The example above computes π to 2000 digits and prints it; it takes roughly 10 seconds to complete.

Check the job:

kubectl get pods,job 

CronJob: scheduled tasks, like crontab on Linux.

Use cases: notifications, backups

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure  # if the job exits non-zero, a new Pod is created to retry

The example above prints Hello once a minute.

Check the task:

kubectl get pods,cronjob

Chapter 8: Service in Depth

8.1 Why Service exists

  • Prevents Pods from becoming unreachable (service discovery): talking to Pod IPs directly is unreliable, so a Service ties a group of Pods together

  • Defines an access policy for a group of Pods (load balancing): provides load balancing in front of the Pods

8.2 The relationship between Pods and Services

  • Associated through a label selector

  • The Service load-balances across the Pods (layer-4 TCP/UDP)

8.3 The three common Service types

  • ClusterIP (the default): for access inside the cluster. Allocates a stable virtual IP (VIP) that is reachable only from inside the cluster.

  • NodePort: exposes the application externally. Opens a port on every node through which the Service can be reached from outside the cluster; a stable internal cluster IP is allocated as well. Access address: [NodeIP]:[NodePort]

  • LoadBalancer: exposes the application externally, intended for public clouds. Like NodePort, it opens a port on every node; in addition, Kubernetes asks the underlying cloud platform for a load balancer and adds every node ([NodeIP]:[NodePort]) as a backend. In short, an enhanced NodePort that integrates with the cloud provider's load balancer.
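No LoadBalancer manifest appears in the text, so here is a minimal hypothetical sketch; the Service name and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer     # the cloud platform provisions an external load balancer
  ports:
  - port: 80             # port exposed by the LB and inside the cluster
    targetPort: 8080     # container port
  selector:
    app: web
```

On a public cloud, the provisioned LB's address appears in the Service's EXTERNAL-IP column; on bare metal without an LB integration it stays <pending>.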

[root@k8s-master ~]# kubectl delete $(kubectl get pods -o name)     "bulk-delete pod resources"
[root@k8s-master ~]# kubectl delete $(kubectl get job -o name)      "bulk-delete jobs"
[root@k8s-master ~]# kubectl delete $(kubectl get cronjob -o name)
[root@k8s-master ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: nginxweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
[root@k8s-master ~]# kubectl apply -f deployment.yaml 
deployment.apps/nginxweb created
[root@k8s-master ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginxweb-5dcb957ccc-54b92   1/1     Running   0          61s
nginxweb-5dcb957ccc-dkcvq   1/1     Running   0          61s
nginxweb-5dcb957ccc-qlst5   1/1     Running   0          61s
[root@k8s-master ~]# kubectl get ep
NAME         ENDPOINTS                                        AGE
kubernetes   192.168.100.110:6443                             45h
tomcat       <none>                                           18h
nginxweb     10.244.0.172:80,10.244.0.173:80,10.244.1.20:80   7h37m
[root@k8s-master ~]# curl  10.244.0.173
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.....
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          45h
tomcat       NodePort    10.0.0.26    <none>        8080:32120/TCP   18h
web          NodePort    10.0.0.46    <none>        80:30000/TCP     7h38m
[root@k8s-master ~]# curl 10.0.0.46
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[root@k8s-node1 ~]# curl 10.0.0.46
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
[root@k8s-node2 data]# curl 10.0.0.46
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
// Inside the cluster, communication goes through the Service's ClusterIP, 10.0.0.46
kubectl expose deployment web --port=80 --target-port=8080 --type=NodePort --name=web --dry-run -o yaml > service.yaml
--port=80           the port the Service listens on
--target-port=8080  the port the service inside the container listens on
kubectl get ep      shows the Pods the Service is associated with
[root@k8s-master ~]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: apacheweb
  name: apacheweb
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30008      "the port exposed externally"
  selector:
    app: apacheweb
  type: NodePort
[root@k8s-master ~]# kubectl apply -f service.yaml 
service/apacheweb created
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
apacheweb    NodePort    10.0.0.144   <none>        80:30008/TCP     8s
[root@k8s-node2 ~]# netstat -ntap | grep 30008
tcp        0      0 0.0.0.0:30008           0.0.0.0:*               LISTEN      24827/kube-proxy 
[root@k8s-node1 ~]# netstat -ntap | grep 30008
tcp        0      0 0.0.0.0:30008           0.0.0.0:*               LISTEN      24845/kube-proxy    
[root@k8s-master ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: apacheweb       "must match the Service's selector label"
  name: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apacheweb     "must match the Service's selector label"
  strategy: {}
  template:
    metadata:
      labels:
        app: apacheweb   "must match the Service's selector label"
    spec:
      containers:
      - image: httpd
        name: httpd
        resources: {}
[root@k8s-master ~]# kubectl apply -f deployment.yaml 
deployment.apps/apache created
[root@k8s-master ~]# vim deployment.yaml 
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
apache-774c5dd967-7ftpz   1/1     Running   0          14s
apache-774c5dd967-dqmvv   1/1     Running   0          14s
apache-774c5dd967-zqh55   1/1     Running   0          14s
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
apacheweb    NodePort    10.0.0.144   <none>        80:30008/TCP     5m32s
[root@k8s-master ~]# kubectl get ep	"check the mapping between the Service and its Pods"
NAME         ENDPOINTS                                        AGE
apacheweb    10.244.0.188:80,10.244.0.189:80,10.244.1.28:80   5m43s
[root@k8s-master ~]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
apache-774c5dd967-7ftpz   1/1     Running   0          50s   10.244.1.28    k8s-node1   <none>           <none>
apache-774c5dd967-dqmvv   1/1     Running   0          50s   10.244.0.188   k8s-node2   <none>           <none>
apache-774c5dd967-zqh55   1/1     Running   0          50s   10.244.0.189   k8s-node2   <none>           <none>

8.4 Service proxy modes

iptables:

  • Flexible and powerful
  • The default forwarding mode
  • Rules are matched and updated by linear traversal, so latency grows with the number of rules

IPVS:

  • Works in kernel space, with better performance

  • Rich scheduling algorithms: rr, wrr, lc, wlc, ip hash, ...

  • Check whether it is loaded: lsmod | grep ip_vs

    Load it: modprobe ip_vs

// iptables example
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
apacheweb    NodePort    10.0.0.144   <none>        80:30008/TCP     3h8m
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          2d2h
tomweb       NodePort    10.0.0.171   <none>        8080:30191/TCP   3h33m
web          NodePort    10.0.0.59    <none>        80:32689/TCP     3h41m
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
apache-774c5dd967-7ftpz   1/1     Running   0          3h3m
apache-774c5dd967-dqmvv   1/1     Running   0          3h3m
apache-774c5dd967-zqh55   1/1     Running   0          3h3m
tomcat-779675bcd7-mn64n   1/1     Running   0          22s
[root@k8s-master ~]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
apache-774c5dd967-7ftpz   1/1     Running   0          3h3m   10.244.1.28    k8s-node1   <none>           <none>
apache-774c5dd967-dqmvv   1/1     Running   0          3h3m   10.244.0.188   k8s-node2   <none>           <none>
apache-774c5dd967-zqh55   1/1     Running   0          3h3m   10.244.0.189   k8s-node2   <none>           <none>
tomcat-779675bcd7-mn64n   1/1     Running   0          48s    10.244.0.190   k8s-node2   <none>           <none>
[root@k8s-master ~]# iptables-save > a
[root@k8s-master ~]# vim a
"search for 30191"
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/tomweb:" -m tcp --dport 30191 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/tomweb:" -m tcp --dport 30191 -j KUBE-SVC-BBBKQKVZL6ERVXVD
"search for KUBE-SVC-BBBKQKVZL6ERVXVD"
-A KUBE-SVC-BBBKQKVZL6ERVXVD -m comment --comment "default/tomweb:" -j KUBE-SEP-F24DQXHWVSUKF2VX
"search for KUBE-SEP-F24DQXHWVSUKF2VX"
-A KUBE-SEP-F24DQXHWVSUKF2VX -s 10.244.0.190/32 -m comment --comment "default/tomweb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-F24DQXHWVSUKF2VX -p tcp -m comment --comment "default/tomweb:" -m tcp -j DNAT --to-destination 10.244.0.190:8080
"the last rule forwards traffic to 10.244.0.190, the Pod on node2"
"search for 30008"
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/apacheweb:" -m tcp --dport 30008 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/apacheweb:" -m tcp --dport 30008 -j KUBE-SVC-2FMJILYLKGPDN7PX
"-j means jump to the target chain"
"search for KUBE-SVC-2FMJILYLKGPDN7PX"
-A KUBE-SVC-2FMJILYLKGPDN7PX -m comment --comment "default/apacheweb:" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-PPXFATWPNY2IRB6D
-A KUBE-SVC-2FMJILYLKGPDN7PX -m comment --comment "default/apacheweb:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XEI3JOMLAULW7VFD
-A KUBE-SVC-2FMJILYLKGPDN7PX -m comment --comment "default/apacheweb:" -j KUBE-SEP-R6W6J2H44KT6O44B	"no probability set: catches everything that remains"
-A KUBE-SEP-PPXFATWPNY2IRB6D -s 10.244.0.188/32 -m comment --comment "default/apacheweb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-PPXFATWPNY2IRB6D -p tcp -m comment --comment "default/apacheweb:" -m tcp -j DNAT --to-destination 10.244.0.188:80   "forward to node2, probability 1/3"
-A KUBE-SEP-XEI3JOMLAULW7VFD -s 10.244.0.189/32 -m comment --comment "default/apacheweb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XEI3JOMLAULW7VFD -p tcp -m comment --comment "default/apacheweb:" -m tcp -j DNAT --to-destination 10.244.0.189:80   "forward to node2, probability 1/2 of the remainder"
-A KUBE-SEP-R6W6J2H44KT6O44B -s 10.244.1.28/32 -m comment --comment "default/apacheweb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-R6W6J2H44KT6O44B -p tcp -m comment --comment "default/apacheweb:" -m tcp -j DNAT --to-destination 10.244.1.28:80    "forward to node1, no probability needed"
// --probability implements the scheduling: 0.33333333349 picks 1/3 of all traffic, 0.50000000000 picks 1/2 of what remains, and the last rule takes the rest, so each backend gets 1/3
// iptables rules are matched top to bottom
// IPVS
[root@k8s-master ~]# lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 	"the IPVS round-robin module"
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
# modprobe -- ip_vs_rr    "round robin"
# modprobe -- ip_vs_wrr   "weighted round robin"
# modprobe -- ip_vs_sh    "source hashing (session affinity)"
[root@k8s-master ~]# vim /etc/rc.local
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
"add these on every node so the modules load at boot; CentOS 7.6 ships them already, so the step can be skipped there"
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
"search for mode"
...
mode: "ipvs"		"save to enable IPVS"
[root@k8s-master ~]# kubectl delete pod kube-proxy-c67dg -n kube-system
"delete a kube-proxy Pod (e.g. the one on node1) so it is recreated and picks up the new config"
[root@k8s-node1 ~]# yum install -y ipvsadm
[root@k8s-node1 ~]# ipvsadm -L -n    "list the scheduling rules"
"columns include Weight, ActiveConn and InActConn (active/inactive connections)"
[root@k8s-node1 ~]# ip a
"you can see the kube-ipvs0 virtual NIC, which is bound to the Service IPs"

Service DNS names

The DNS service watches the Kubernetes API and creates a DNS record for every Service, so Services can be resolved by name.

ClusterIP A record format: <service-name>.<namespace-name>.svc.cluster.local

Example: my-svc.my-namespace.svc.cluster.local

CoreDNS Pod -> fetches Services (from the apiserver) -> updates its local records

kubelet runs a Pod -> the Pod resolves names through CoreDNS by default

user -> domain name -> node IP:80/443 -> ingress controller -> route by domain -> Pod

[root@k8s-master ~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      2d3h
// CoreDNS provides name resolution
[root@k8s-master ~]# vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bs
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@k8s-master ~]# kubectl apply -f busybox.yaml 
pod/bs created
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
apache-774c5dd967-7ftpz   1/1     Running   0          4h23m
apache-774c5dd967-dqmvv   1/1     Running   0          4h23m
apache-774c5dd967-zqh55   1/1     Running   0          4h23m
bs                        1/1     Running   0          11s
tomcat-779675bcd7-mn64n   1/1     Running   0          80m
[root@k8s-master ~]# kubectl exec -it bs sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup apacheweb
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      apacheweb
Address 1: 10.0.0.144 apacheweb.default.svc.cluster.local
/ # 

kube-dns was the early DNS implementation; CoreDNS is what is commonly used today.

Summary

  1. Expose applications externally with NodePort, and put a load balancer in front as the unified entry point

  2. Prefer the IPVS proxy mode

  3. Applications inside the cluster should address each other by DNS name
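To illustrate the last point, a Pod can reach the web Service from the earlier example by DNS name instead of by ClusterIP; this manifest is an illustrative sketch, not from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: app
    image: busybox:1.28.4
    # inside the cluster, the Service name resolves via CoreDNS;
    # the short name "web" also works within the same namespace
    command: ["/bin/sh", "-c", "wget -qO- http://web.default.svc.cluster.local; sleep 3600"]
```

Using the DNS name keeps the application config stable even if the Service is ever recreated with a different ClusterIP.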

Chapter 9: Ingress

9.1 Ingress makes up for NodePort's shortcomings

NodePort's shortcomings:

  • One port can only be used by one service, so ports must be planned in advance

  • Only layer-4 load balancing

9.2 The relationship between Pods and Ingress

  • Related through a Service, which supplies the Pods' IP addresses and ports

  • Load balancing for the Pods is implemented by the Ingress Controller

    • Supports layer-4 TCP/UDP and layer-7 HTTP

9.3 Ingress Controller

For Ingress resources to work, the cluster must run an Ingress Controller (the load-balancing implementation).

So exposing an application through Ingress takes roughly two steps:

  1. Deploy an Ingress Controller

  2. Create Ingress rules

The overall flow is as follows:

There are many Ingress Controller implementations; here we use the officially maintained Nginx controller.

Deployment docs: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Notes:

  • Change the image to a domestic mirror: lizhenliang/nginx-ingress-controller:0.20.0

  • Use the host network: hostNetwork: true

// Upload ingress-controller.yaml
# kubectl apply -f ingress-controller.yaml
# cat ingress-controller.yaml
.....
      hostNetwork: true		"use the host network"
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: lizhenliang/nginx-ingress-controller:0.20.0   "the image in use"
......
# kubectl get pods -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5r4wg   1/1     Running   0          13s
nginx-ingress-controller-x7xdf   1/1     Running   0          13s

At this point you can see the controller listening on ports 80 and 443 on any Node:

# netstat -natp |egrep ":80|:443"
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      104750/nginx: maste 
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      104750/nginx: maste 
"the listener is nginx; the ingress controller is essentially nginx with its configuration managed for you"

Ports 80 and 443 receive traffic from outside the cluster for the applications and forward it to the corresponding Pods.

Other mainstream controllers:

Traefik: an HTTP reverse proxy and load balancer

Istio: service governance; controls ingress traffic

9.4 Ingress

Now we can create Ingress rules.

An Ingress rule has three essential fields:

  • host: the domain name used to access the application (what DNS resolves)
  • serviceName: the name of the application's Service
  • servicePort: the Service port

1. HTTP access

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ctnrs
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80

Production: resolve the example.ctnrs domain at your DNS provider, with an A record pointing to the public IP of a K8s Node (that Node must be running the Ingress controller).

Testing: simulate DNS resolution with a hosts entry ("C:\Windows\System32\drivers\etc\hosts") pointing at a K8s Node's internal IP, e.g.: 192.168.100.120 example.ctnrs

示例:

[root@k8s-master ~]# kubectl apply -f ingress-controller.yaml 
"deploy the ingress controller"
A Service must exist first:
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          2d7h
tomweb       NodePort    10.0.0.171   <none>        8080:30191/TCP   8h
web          NodePort    10.0.0.116   <none>        80:30008/TCP     3s
[root@k8s-master ~]# vim ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ctnrs	        "the domain to resolve"
    http:
      paths:
      - path: /
        backend:
          serviceName: web	"must match the name of the Service above"
          servicePort: 80
// Edit the workstation's hosts file and add:
192.168.100.120 example.ctnrs
192.168.100.130 example.ctnrs

The application can now be reached directly at the example.ctnrs domain, as well as via nodeIP:port.

2. HTTPS access

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.ctnrs
    secretName: example-ctnrs-com
  rules:
  - host: sslexample.ctnrs
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80

This references a Secret named example-ctnrs-com, which stores the HTTPS certificate.

Here we self-sign a certificate with cfssl for testing. Download the cfssl tools first:

curl -s -L -o /usr/local/bin/cfssl .2/cfssl_linux-amd64
curl -s -L -o /usr/local/bin/cfssljson .2/cfssljson_linux-amd64
curl -s -L -o /usr/local/bin/cfssl-certinfo .2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/bin/cfssl*

Run the certs.sh script from the course material to generate the certificates:

ls *pem
ca.pem ca-key.pem example.ctnrs.pem example.ctrnrs-key.pem

Store the certificate in a Secret:

kubectl create secret tls example-ctnrs-com --cert=example.ctnrs.pem --key=example.ctrnrs-key.pem

This lets the Ingress fetch the certificate through the Secret name.

Then bind the domain in the local hosts file and access over HTTPS: https://sslexample.ctnrs

示例:

// Download the tools
[root@k8s-master cfssl]# cat cfssl.sh 
wget .2/cfssl_linux-amd64
wget .2/cfssljson_linux-amd64
wget .2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/bin/cfssl
mv cfssljson_linux-amd64 /usr/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master cfssl]# bash cfssl.sh	"download the certificate tooling"
// Generate the certificates
[root@k8s-master ~]# cat certs.sh 
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "Beijing", "ST": "Beijing"}]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > blog.ctnrs-csr.json <<EOF
{
  "CN": "blog.ctnrs",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing"}]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes blog.ctnrs-csr.json | cfssljson -bare blog.ctnrs

kubectl create secret tls blog-ctnrs-com --cert=blog.ctnrs.pem --key=blog.ctnrs-key.pem
"creates the blog-ctnrs-com Secret, which the Ingress rule below references"
[root@k8s-master cfssl]# bash certs.sh   "run the script to create the CA and the blog-ctnrs-com self-signed certificate"
// Create an Ingress rule that references the certificate
[root@k8s-master ~]# vim ingress-https.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - blog.ctnrs
    secretName: blog-ctnrs-com	"the Secret created above"
  rules:
  - host: blog.ctnrs
    http:
      paths:
      - path: /
        backend:
          serviceName: web	"must match the Service name"
          servicePort: 80
[root@k8s-master ~]# kubectl apply -f ingress-https.yaml 
ingress.networking.k8s.io/tls-example-ingress created
[root@k8s-master ~]# kubectl get ingress
NAME                  CLASS    HOSTS               ADDRESS   PORTS     AGE
example-ingress       <none>   example.ctnrs                 80        59m
tls-example-ingress   <none>   blog.ctnrs                    80, 443   4m13s
"now also listening on port 443"
// Add hosts entries on the workstation:
192.168.100.120 blog.ctnrs 
192.168.100.130 blog.ctnrs
// Then access it from the workstation's browser

3. Routing to multiple services by URL

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: url-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foobar.ctnrs
    http:
      paths:
      - path: /foo        "like an nginx location block"
        backend:
          serviceName: service1
          servicePort: 80
      - path: /bar        "a second location"
        backend:
          serviceName: service2
          servicePort: 80

How it flows:

foobar.ctnrs -> 178.91.123.132 -> /foo -> service1:80
                                  /bar -> service2:80

示例:

[root@k8s-master ~]# kubectl create deploy web1 --image=tomcat
deployment.apps/web1 created
[root@k8s-master ~]# kubectl create deploy web2 --image=lizhenliang/java-demo
deployment.apps/web2 created
[root@k8s-master ~]# kubectl expose deploy web1 --port=80
service/web1 exposed
[root@k8s-master ~]# kubectl expose deploy web2 --port=80
service/web2 exposed
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-5dcb957ccc-jzngk   1/1     Running   0          83m
pod/nginx-5dcb957ccc-s9jfl   1/1     Running   0          83m
pod/nginx-5dcb957ccc-tnwjv   1/1     Running   0          83m
pod/web1-647ccf6958-j22wk    1/1     Running   0          2m8s
pod/web2-788579f9cf-fcsdc    1/1     Running   0          97s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          2d8h
service/tomweb       NodePort    10.0.0.171   <none>        8080:30191/TCP   9h
service/web          NodePort    10.0.0.116   <none>        80:30008/TCP     66m
service/web1         ClusterIP   10.0.0.241   <none>        80/TCP           55s
service/web2         ClusterIP   10.0.0.59    <none>        80/TCP           45s
[root@k8s-master ~]# kubectl edit svc web1
service/web1 edited
[root@k8s-master ~]# kubectl edit svc web2
....
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080  "the target port was wrong; tomcat listens on 8080"
  selector:
    app: web2
....
[root@k8s-master ~]# curl 10.0.0.241
!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2,.....
"reachable; tomcat just has no index page yet, so create one"
[root@k8s-master ~]# kubectl exec -it web1-647ccf6958-j22wk bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@web1-647ccf6958-j22wk:/usr/local/tomcat# cd webapps
root@web1-647ccf6958-j22wk:/usr/local/tomcat/webapps# mkdir ROOT/a
root@web1-647ccf6958-j22wk:/usr/local/tomcat/webapps# cd ROOT/a
root@web1-647ccf6958-j22wk:/usr/local/tomcat/webapps/ROOT/a# echo "hello tomcat" > index.html
root@web1-647ccf6958-j22wk:/usr/local/tomcat/webapps/ROOT/a# ls
index.html
root@web1-647ccf6958-j22wk:/usr/local/tomcat/webapps/ROOT# exit
exit
[root@k8s-master ~]# curl 10.0.0.241
hello tomcat
[root@k8s-master ~]# curl 10.0.0.59
<!DOCTYPE html>
<html>
<head lang="en"><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><title>Java demo application</title>
// Both web apps are now reachable inside the cluster; next, expose them with an Ingress rule
[root@k8s-master ~]# kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-766fb9f77-45drq   0/1     Pending   0          110m
nginx-ingress-controller-fg8nt             1/1     Running   0          4h34m
nginx-ingress-controller-k6s5p             1/1     Running   0          4h34m
[root@k8s-master ~]# kubectl exec -it nginx-ingress-controller-fg8nt bash -n ingress-nginx
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
www-data@k8s-node1:/etc/nginx$ ls
fastcgi.conf		koi-win		    nginx.conf		   template
fastcgi.conf.default	lua		    nginx.conf.default	   uwsgi_params
fastcgi_params		mime.types	    opentracing.json	   uwsgi_params.default
fastcgi_params.default	mime.types.default  owasp-modsecurity-crs  win-utf
geoip			modsecurity	    scgi_params
koi-utf			modules		    scgi_params.default
www-data@k8s-node1:/etc/nginx$ cat nginx.conf
...
	set $proxy_upstream_name "-";
	location /b {
		set $namespace      "default";
		set $ingress_name   "example-ingress";
		set $service_name   "web2";
		set $service_port   "80";
		set $location_path  "/b";
		rewrite_by_lua_block {
			balancer.rewrite()
......
	location /a {
		set $namespace      "default";
		set $ingress_name   "example-ingress";
		set $service_name   "web1";
		set $service_port   "80";
		set $location_path  "/a";
		rewrite_by_lua_block {
......
// for web2, create webapps/ROOT/b/index.html in the same way

4. Name-based virtual hosts

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.ctnrs
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: bar.ctnrs
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80

How it flows:

foo.ctnrs --|                |-> service1:80
            | 178.91.123.132 |
bar.ctnrs --|                |-> service2:80

9.5 Customizing Ingress with Annotations

Reference documentation: .md

HTTP: configuring common nginx parameters

vim ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"  "when multiple controllers exist, pick the nginx ingress controller"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  rules:
  - host: example.ctnrs
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
kubectl apply -f ingress.yaml    "re-apply for the changes to take effect"

HTTPS: forcing HTTP requests to redirect to HTTPS (on by default)

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'  "'true' (the default) redirects HTTP to HTTPS; 'false' disables the redirect"
spec:
  tls:
  - hosts:
    - sslexample.ctnrs
    secretName: secret-tls
  rules:
  - host: sslexample.ctnrs
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80

Summary:

How Ingress works:

ingress controller pod -> fetches Services (from the apiserver) -> applies the config to its local nginx

Ingress does two things:

1. The controller discovers the Pods behind each Service and renders them into the nginx config
2. nginx provides layer-7 load balancing

9.6 Ingress Controller high-availability options

If the domain resolves to only one Ingress controller, that is a single point of failure: if it goes down, the service is unreachable. There are two common high-availability options:

Left: active/standby. Pick two Nodes to run the Ingress controller exclusively and manage them with keepalived; users access the VIP.

Right (recommended): a highly available cluster. Put a load balancer in front that forwards requests to multiple Ingress controllers.

1. Pin the ingress controller to two nodes (DaemonSet + nodeSelector)
user -> domain -> VIP (keepalived, active/standby) -> pod

2. Pin the ingress controller to two nodes (DaemonSet + nodeSelector)
user -> domain -> LB (nginx, LVS, HAProxy) -> ingress controller -> pod

Qihoo's architecture uses LVS.

// Building on the successful access in the previous experiment, set up a high-availability LB
"bring up a new test machine, 192.168.100.200"
[root@localhost ~]# yum install -y epel-release
[root@localhost ~]# yum install -y nginx
[root@localhost ~]# vim /etc/nginx/nginx.conf
......
    upstream ingress-controller {
        server 192.168.100.120:80;	"backend node addresses"
        server 192.168.100.130:80;
    }
    server {
......
        location / {
            proxy_pass http://ingress-controller;  "forward all LB requests to the ingress pods on the nodes"
            proxy_set_header Host $host;  "pass the domain along so the ingress can route by host"
        }
.......
[root@localhost ~]# systemctl restart nginx
[root@localhost ~]# systemctl status nginx
// Point the workstation's hosts entries at the LB:
#192.168.100.130  example.ctnrs blog.ctnrs
#192.168.100.120  example.ctnrs blog.ctnrs
192.168.100.200  example.ctnrs blog.ctnrs
// Verify:
[root@localhost ~]# tail -f /var/log/nginx/access.log
192.168.100.1 - - [06/Oct/2020:20:19:09 +0800] "GET /a/ HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0" "-"
192.168.100.1 - - [06/Oct/2020:20:19:14 +0800] "GET /a/ HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0" "-"
192.168.100.1 - - [06/Oct/2020:20:20:43 +0800] "GET /a/ HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0" "-"

Chapter 10: Managing Application Configuration

10.1 Secret

A Secret stores sensitive data (base64-encoded) in etcd and makes it available to a Pod's containers, for example by mounting it as a volume.

Use cases:
1. HTTPS certificates
2. docker registry credentials
3. File contents or strings, such as usernames and passwords

A Pod can consume a Secret in two ways:

  • Environment-variable injection

  • Volume mount

Example: create a Secret to hold a username and password used by an application

echo -n 'admin' | base64
YWRtaW4=
echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
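To verify the encoding round-trips (note base64 is encoding, not encryption; anyone can decode it), the values can be decoded back:

```shell
# encode the credentials (-n avoids a trailing newline in the encoded value)
echo -n 'admin' | base64            # YWRtaW4=
echo -n '1f2d1e2e67df' | base64     # MWYyZDFlMmU2N2Rm

# decode them back
echo -n 'YWRtaW4=' | base64 -d              # admin
echo -n 'MWYyZDFlMmU2N2Rm' | base64 -d      # 1f2d1e2e67df
```

Forgetting -n is a common mistake: the embedded newline becomes part of the stored password.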

Create the Secret:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm

Consuming the Secret in a Pod via environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:          "reference a Secret key"
          name: mysecret       "the Secret created above"
          key: username        "take the value of the username key"
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password

Exec into the Pod to check that the variables were injected:

# kubectl exec -it mypod bash
# echo $SECRET_USERNAME
admin
# echo $SECRET_PASSWORD
1f2d1e2e67df

Consuming the Secret in a Pod via a volume mount:

# secret-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

[root@k8s-master ~]# kubectl delete pod mypod
pod "mypod" deleted
[root@k8s-master ~]# kubectl apply -f secret-volume-pod.yaml 
pod/mypod created
[root@k8s-master ~]# kubectl exec -it mypod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mypod:/# cat /etc/foo/username
admin
root@mypod:/# cat /etc/foo/password
1f2d1e2e67df
root@mypod:/#

Exec into the Pod to check that the files were written:

# kubectl exec -it mypod bash
#cat /etc/foo/username
admin
# cat /etc/foo/password
1f2d1e2e67df

If your application uses a Secret, it should read the data in whichever of these ways the Pod provides it.

10.2 ConfigMap

Similar to a Secret, except that a ConfigMap holds configuration that does not need to be kept secret.

Use cases: application config files, nothing sensitive

Example: create a ConfigMap holding values an application needs

apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info    "value assigned to the special.level key"
  special.type: hello    "value assigned to the special.type key"

Consuming the ConfigMap in a Pod via environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox            "busybox test image; the command prints the variable values"
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL            "define a variable"
      valueFrom:             "where its value comes from"
        configMapKeyRef:     "reference a ConfigMap key"
          name: myconfig     "the ConfigMap name, matching the manifest above"
          key: special.level "the key whose value is injected"
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never       "do not restart; exit when the command finishes"

The Pod's log shows the key values the container printed:

# kubectl logs mypod 
info hello

A common pattern is to keep an application's config file in a ConfigMap; here with redis as the example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.properties: |            "the redis.properties file holds the lines below"
    redis.host=127.0.0.1
    redis.port=6379
    redis.password=123456
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox               "test image"
    command: [ "/bin/sh", "-c", "cat /etc/config/redis.properties" ]
    volumeMounts:                "mount the volume into the container"
    - name: config-volume
      mountPath: /etc/config     "mount point"
  volumes:                       "volume source"
  - name: config-volume
    configMap:                   "comes from a ConfigMap"
      name: redis-config         "referencing the ConfigMap above"
  restartPolicy: Never

The Pod's log shows the file the container printed:

# kubectl logs mypod 
redis.host=127.0.0.1
redis.port=6379
redis.password=123456

Example:

[root@k8s-master ~]# kubectl delete pod mypod   "delete mypod first so the name is free for re-creation"
pod "mypod" deleted
[root@k8s-master ~]# vim configMap-var-pod.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.level
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f configMap-var-pod.yaml 
configmap/myconfig created
pod/mypod created
[root@k8s-master ~]# kubectl logs mypod
info hello
[root@k8s-master ~]# kubectl get cm   "cm is short for configmap"
NAME       DATA   AGE
myconfig   2      8m27s
// Create a ConfigMap holding a config file, then reference it from a Pod
[root@k8s-master ~]# vim configMap-volume-pod.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.properties: |
    redis.host=127.0.0.1
    redis.port=6379
    redis.password=123456
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "cat /etc/config/redis.properties" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: redis-config
  restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f configMap-volume-pod.yaml 
configmap/redis-config created
pod/mypod created
[root@k8s-master ~]# kubectl logs mypod
Error from server (BadRequest): container "busybox" in pod "mypod" is waiting to start: ContainerCreating
[root@k8s-master ~]# kubectl logs mypod
redis.host=127.0.0.1
redis.port=6379
redis.password=123456

10.3 How does an application reload configuration dynamically?

Options for having the workload pick up ConfigMap changes:

  • The application watches for the change and reloads its configuration dynamically
  • Trigger a rolling update, i.e. restart the service

Example:

// Update the ConfigMap data
[root@k8s-master ~]# vim configMap-volume-pod.yaml
...
data:
  redis.properties: |
    redis.host=192.168.100.200   "the address changed"
    redis.port=6379
    redis.password=123456
...
// Re-apply the ConfigMap and Pod
[root@k8s-master ~]# kubectl apply -f configMap-volume-pod.yaml 
configmap/redis-config configured
pod/mypod configured
// Check the data the application sees
[root@k8s-master ~]# kubectl logs mypod
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
// The data has not been updated; three ways to deal with this:
1. Recreate the Pod
2. Have the application itself watch its local config file and hot-reload when the ConfigMap change lands
3. Run a sidecar that watches the ConfigMap resource and tells the application to reload when it changes
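One common way to trigger the rolling update automatically (a sketch, not from the original text; the annotation key and hash value are illustrative) is to embed a hash of the config in a Pod-template annotation. Changing the ConfigMap changes the hash, the Pod template changes, and the Deployment rolls all Pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # illustrative: regenerate this (e.g. sha256sum of the rendered
        # ConfigMap) whenever the config changes; the changed Pod template
        # makes the Deployment restart every Pod with the new config
        config/checksum: "3e25960a79dbc69b674cd4ec67a72c62"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["/bin/sh", "-c", "cat /etc/config/redis.properties; sleep 3600"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-volume
        configMap:
          name: redis-config
```

This is essentially a scripted form of option 1: the Pods are recreated, but through the Deployment's rolling update rather than by deleting them manually.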
