📊Kubernetes Pod Scheduling

⚙️Pod Scheduling - nodeName

🔧What is nodeName

  1. nodeName pins the Pod to the named worker node; that node must already be registered in etcd.
  2. It is generally used to pin scheduling manually; when this field is set, the default k8s scheduler ('default-scheduler') is bypassed.

🧩Example

[root@master231 scheduler]# vim 01-deploy-scheduler-nodeName.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nodename
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      nodeName: worker233
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        ports:
        - containerPort: 80
[root@master231 scheduler]# kubectl apply -f 01-deploy-scheduler-nodeName.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl describe pod deploy-nodename-699559557c-p2twh 
Name:         deploy-nodename-699559557c-p2twh
Namespace:    default
Priority:     0
Node:         worker233/10.0.0.233
……
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulled   29s   kubelet  Container image "registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2" already present on machine
  Normal  Created  29s   kubelet  Created container c1
  Normal  Started  29s   kubelet  Started container c1
[root@master231 scheduler]# kubectl delete -f 01-deploy-scheduler-nodeName.yaml 

⚙️Pod Scheduling - hostPort

🔧What is hostPort

  1. hostPort makes the worker node add a forwarding rule that sends traffic arriving on the host port to the container's port (a verification sketch follows this list).
  2. If the host port is already occupied on a node, the Pod cannot be scheduled onto it.
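
A quick way to verify the forwarding once a Pod does land on a node, as a sketch only: the node address 10.0.0.233 and port 90 reuse values from this lab, and the iptables chain naming assumes the CNI portmap plugin is the component programming the rules.

# curl the worker node's IP on the hostPort; traffic is DNAT-ed to the container's port 80
curl http://10.0.0.233:90
# on that worker node, the hostPort DNAT rules can usually be spotted with:
iptables-save -t nat | grep -i hostport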

🧩Example

[root@master231 scheduler]# vim 02-deploy-scheduler-hostport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-hostport
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        ports:
        - containerPort: 80
          hostPort: 90
[root@master231 scheduler]# kubectl apply -f 02-deploy-scheduler-hostport.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl describe po deploy-hostport-557b7f449b-t4mbd 
Name:           deploy-hostport-557b7f449b-t4mbd
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=xiu
                pod-template-hash=557b7f449b
                version=v1
Annotations:    <none>
Status:         Pending
……
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  30s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't have free ports for the requested pod ports.
[root@master231 scheduler]# kubectl delete -f 02-deploy-scheduler-hostport.yaml 

⚙️Pod Scheduling - hostNetwork

🔧What is hostNetwork

  1. hostNetwork makes the Pod use the host's network (net) namespace (a quick check is sketched below).
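
A quick way to confirm the namespace sharing, assuming one of the Pods is actually running (the exec form below picks any Pod of the Deployment and needs an ip binary inside the image):

# with hostNetwork: true the Pod IP column shows the node's own IP
kubectl get po -o wide
# the container sees the host's network interfaces
kubectl exec deploy/deploy-hostnetwork -- ip addr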

🧩Example

[root@master231 scheduler]# vim 03-deploy-scheduler-hostNetwork.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-hostnetwork
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      hostNetwork: true
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        ports:
        - containerPort: 80

[root@master231 scheduler]# kubectl apply -f 03-deploy-scheduler-hostNetwork.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl describe pod deploy-hostnetwork-698ddcf86d-4kjht 
Name:           deploy-hostnetwork-698ddcf86d-4kjht
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=xiu
                pod-template-hash=698ddcf86d
                version=v1
Annotations:    <none>
Status:         Pending
……
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  31s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't have free ports for the requested pod ports.
[root@master231 scheduler]# kubectl delete -f 03-deploy-scheduler-hostNetwork.yaml 

⚙️resources

🔧What is resources

  1. requests declares the resources a container expects; the scheduler only places the Pod onto a node that can satisfy the request, otherwise scheduling fails.
  2. limits caps the resources a container may use; if requests is not defined, it defaults to the same values as limits (a sketch with both fields follows this list).
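
A minimal sketch of a container that sets both fields explicitly (the values are illustrative):

    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        resources:
          requests:            # reserved on the node by the scheduler
            cpu: 200m
            memory: 200Mi
          limits:              # hard cap enforced at runtime
            cpu: 500m
            memory: 500Mi
        ports:
        - containerPort: 80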

🧩Example

[root@master231 scheduler]# vim 04-deploy-scheduler-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-resources
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        resources:
         # requests:
         #   cpu: 200m
         #   memory: 200Mi
         # limits:
         #   cpu: 0.5
         #   memory: 500Mi
         limits:
           cpu: 1
           memory: 500Mi
        ports:
        - containerPort: 80

[root@master231 scheduler]# kubectl apply -f 04-deploy-scheduler-resources.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl describe po deploy-resources-584fc548d5-29s4w 
Name:           deploy-resources-584fc548d5-29s4w
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=xiu
                pod-template-hash=584fc548d5
                version=v1
Annotations:    <none>
Status:         Pending
……
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  26s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.
[root@master231 scheduler]# kubectl delete -f 04-deploy-scheduler-resources.yaml

⚙️Pod Scheduling - nodeSelector

🔧nodeSelector

  1. Schedules Pods onto nodes based on node labels.

🧩Example

🔧Label the nodes

[root@master231 scheduler]# kubectl get nodes --show-labels
[root@master231 scheduler]# kubectl label nodes worker232 K8S=lili
[root@master231 scheduler]# kubectl label nodes worker233 K8S=linux
[root@master231 scheduler]# kubectl get nodes --show-labels -l K8S

🧩Example

[root@master231 scheduler]# vim 05-deploy-scheduler-nodeSelector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nodeselector
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      nodeSelector:
        K8S: linux
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        ports:
        - containerPort: 80
[root@master231 scheduler]# kubectl apply -f 05-deploy-scheduler-nodeSelector.yaml 
[root@master231 scheduler]# kubectl get pods -o wide
[root@master231 scheduler]# kubectl get nodes -l K8S=linux

⚙️Pod Scheduling Basics - Taints

🔧What are Taints

  1. A Taint marks a worker node and repels Pods that do not tolerate it.
  2. There are three taint effects:
    1. NoSchedule
      1. No new Pods are scheduled onto the node; Pods already running there are not evicted.
    2. PreferNoSchedule
      1. The scheduler prefers other nodes and only falls back to this node when no other node fits.
    3. NoExecute
      1. No new Pods are scheduled onto the node, and Pods already running there are evicted immediately.
  3. Taint format (the CLI forms are sketched after this list):
    1. key[=value]:effect
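
The corresponding CLI forms, as a sketch (the key and value names are illustrative):

# key=value:effect
kubectl taint node worker233 gpu=true:NoSchedule
# key:effect (no value)
kubectl taint node worker233 dedicated:PreferNoSchedule
# a trailing "-" removes every taint whose key is "gpu", regardless of effect
kubectl taint node worker233 gpu-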

⚙️Basic taint management

🔧View taints

Tip: a Taints value of <none> means the node has no taints.

[root@master231 scheduler]# kubectl describe nodes |grep Taints

🔧Add a taint to nodes

[root@master231 scheduler]# kubectl taint node --all K8S=lin:PreferNoSchedule
node/master231 tainted
node/worker232 tainted
node/worker233 tainted
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    K8S=lin:PreferNoSchedule
Unschedulable:      false
--
Taints:             K8S=lin:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             K8S=lin:PreferNoSchedule
Unschedulable:      false
Lease:

🔧Modify a taint

Only the value can be modified in place (with --overwrite); specifying a different effect creates an additional taint, because a taint is identified by its key and effect. Both cases are sketched below.
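
A sketch of both cases (the value is illustrative):

# same key + same effect: --overwrite changes the value in place
kubectl taint node worker233 K8S=newvalue:PreferNoSchedule --overwrite
# same key + different effect: this adds a second taint instead of modifying the first
kubectl taint node worker233 K8S=newvalue:NoSchedule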

[root@master231 scheduler]# kubectl taint node worker233 K8S=laonanhai:PreferNoSchedule --overwrite 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    K8S=lin:PreferNoSchedule
Unschedulable:      false
--
Taints:             K8S=lin:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             K8S=laonanhai:PreferNoSchedule
Unschedulable:      false
Lease:

🔧Delete taints

[root@master231 scheduler]# kubectl taint node --all K8S-
node/master231 untainted
node/worker232 untainted
node/worker233 untainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:

⚙️Testing taints

🔧Add a taint

[root@master231 scheduler]# kubectl taint node worker233 K8S:NoSchedule
node/worker233 tainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             K8S:NoSchedule
Unschedulable:      false
Lease:

🧩Test case

[root@master231 scheduler]# cat 06-deploy-scheduler-Taints.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taints
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        resources:
          limits:
            cpu: 1
            memory: 500Mi
        ports:
        - containerPort: 80 
[root@master231 scheduler]# kubectl apply -f  06-deploy-scheduler-Taints.yaml 
deployment.apps/deploy-taints created
[root@master231 scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-taints-584fc548d5-2bdl6   0/1     Pending   0          3s    <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-frd64   0/1     Pending   0          3s    <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-g9vnq   1/1     Running   0          3s    10.100.1.44   worker232   <none>           <none>
deploy-taints-584fc548d5-m9pjc   1/1     Running   0          3s    10.100.1.43   worker232   <none>           <none>
deploy-taints-584fc548d5-sjn6c   0/1     Pending   0          3s    <none>        <none>      <none>           <none>
[root@master231 scheduler]# kubectl describe pod deploy-taints-584fc548d5-2bdl6 
Name:           deploy-taints-584fc548d5-2bdl6
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=xiu
                pod-template-hash=584fc548d5
                version=v1
Annotations:    <none>
Status:         Pending
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  78s   default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {K8S: }, that the pod didn't tolerate.

🔧Change the taint effect

[root@master231 scheduler]# kubectl taint node worker233 K8S:PreferNoSchedule
node/worker233 tainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             K8S:NoSchedule
                    K8S:PreferNoSchedule
Unschedulable:      false
[root@master231 scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
deploy-taints-584fc548d5-2bdl6   0/1     Pending   0          3m18s   <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-frd64   0/1     Pending   0          3m18s   <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-g9vnq   1/1     Running   0          3m18s   10.100.1.44   worker232   <none>           <none>
deploy-taints-584fc548d5-m9pjc   1/1     Running   0          3m18s   10.100.1.43   worker232   <none>           <none>
deploy-taints-584fc548d5-sjn6c   0/1     Pending   0          3m18s   <none>        <none>      <none>           <none>
[root@master231 scheduler]# kubectl taint node worker233 K8S:NoSchedule-
node/worker233 untainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             K8S:PreferNoSchedule
Unschedulable:      false
Lease:
[root@master231 scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-584fc548d5-2bdl6   1/1     Running   0          4m14s   10.100.2.208   worker233   <none>           <none>
deploy-taints-584fc548d5-frd64   1/1     Running   0          4m14s   10.100.2.209   worker233   <none>           <none>
deploy-taints-584fc548d5-g9vnq   1/1     Running   0          4m14s   10.100.1.44    worker232   <none>           <none>
deploy-taints-584fc548d5-m9pjc   1/1     Running   0          4m14s   10.100.1.43    worker232   <none>           <none>
deploy-taints-584fc548d5-sjn6c   1/1     Running   0          4m14s   10.100.2.207   worker233   <none>           <none>

🧩Change the taint effect again

[root@master231 scheduler]# kubectl taint node worker233 K8S=lin:NoExecute
node/worker233 tainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             K8S=lin:NoExecute
                    K8S:PreferNoSchedule
Unschedulable:      false
[root@master231 scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS        RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-584fc548d5-fl4hz   0/1     Pending       0          8s      <none>         <none>      <none>           <none>
deploy-taints-584fc548d5-frd64   1/1     Terminating   0          7m45s   10.100.2.209   worker233   <none>           <none>
deploy-taints-584fc548d5-g9vnq   1/1     Running       0          7m45s   10.100.1.44    worker232   <none>           <none>
deploy-taints-584fc548d5-m9pjc   1/1     Running       0          7m45s   10.100.1.43    worker232   <none>           <none>
deploy-taints-584fc548d5-qm5hl   0/1     Pending       0          8s      <none>         <none>      <none>           <none>
deploy-taints-584fc548d5-sjn6c   1/1     Terminating   0          7m45s   10.100.2.207   worker233   <none>           <none>
deploy-taints-584fc548d5-v678w   0/1     Pending       0          8s      <none>         <none>      <none>           <none>
[root@master231 scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
deploy-taints-584fc548d5-fl4hz   0/1     Pending   0          14s     <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-g9vnq   1/1     Running   0          7m51s   10.100.1.44   worker232   <none>           <none>
deploy-taints-584fc548d5-m9pjc   1/1     Running   0          7m51s   10.100.1.43   worker232   <none>           <none>
deploy-taints-584fc548d5-qm5hl   0/1     Pending   0          14s     <none>        <none>      <none>           <none>
deploy-taints-584fc548d5-v678w   0/1     Pending   0          14s     <none>        <none>      <none>           <none>

🔧Deletion test

[root@master231 scheduler]# kubectl taint node worker233 K8S-
node/worker233 untainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 scheduler]# kubectl delete -f 06-deploy-scheduler-Taints.yaml 
deployment.apps "deploy-taints" deleted

⚙️Pod Scheduling Basics - tolerations

🔧What are tolerations

  1. tolerations express taint toleration; they allow a Pod to be scheduled onto a node that carries taints.
  2. For a Pod to be scheduled onto a given worker node, it must tolerate all of that worker's taints (strictly, all of its NoSchedule and NoExecute taints; an untolerated PreferNoSchedule taint only lowers the node's priority). Two common toleration shapes are sketched after this list.
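
Two commonly used toleration shapes, as a sketch (the second entry uses the built-in node.kubernetes.io/not-ready taint key):

      tolerations:
      # blanket toleration: tolerate every taint on every node
      - operator: Exists
      # tolerate a NoExecute taint for 300 seconds after it appears, then get evicted
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300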

🧩Example

⚙️Environment preparation

[root@master231 scheduler]# kubectl taint node --all K8S=lin:NoSchedule
node/master231 tainted
node/worker232 tainted
node/worker233 tainted
[root@master231 scheduler]# kubectl taint node worker233 class=linux99:NoExecute
node/worker233 tainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    K8S=lin:NoSchedule
Unschedulable:      false
--
Taints:             K8S=lin:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             class=linux99:NoExecute
                    K8S=lin:NoSchedule
Unschedulable:      false

🧩Test

[root@master231 scheduler]# cat 07-deploy-scheduler-tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations
spec:
  replicas: 10
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      # Configure taint tolerations
      tolerations:
        # Taint key to match; if omitted, every key is matched.
      - key: K8S
        # Taint value to match; if omitted, every value is matched.
        value: lin
        # Taint effect to match; if omitted, every effect is matched.
        effect: NoSchedule
      - key: class
        # Note: operator describes the relationship between key and value; valid values are Exists and Equal, the default is Equal.
        # With only a key and no value, every value is matched.
        operator: Exists
        effect: NoExecute
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Setting operator to Exists with no key, value, or effect tolerates every taint.
      #- operator: Exists
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        resources:
          limits:
            cpu: 1
            memory: 500Mi
        ports:
        - containerPort: 80
[root@master231 scheduler]# kubectl apply -f 07-deploy-scheduler-tolerations.yaml
deployment.apps/deploy-tolerations created
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                  READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-74b865776b-55pqw   0/1     Pending             0          3s    <none>         <none>      <none>           <none>
deploy-tolerations-74b865776b-5gv7x   1/1     Running             0          3s    10.100.2.225   worker233   <none>           <none>
deploy-tolerations-74b865776b-7kmx2   0/1     Pending             0          3s    <none>         <none>      <none>           <none>
deploy-tolerations-74b865776b-c5f8g   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
deploy-tolerations-74b865776b-c6j7h   1/1     Running             0          3s    10.100.2.224   worker233   <none>           <none>
deploy-tolerations-74b865776b-f8j4k   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
deploy-tolerations-74b865776b-hkvll   1/1     Running             0          3s    10.100.1.60    worker232   <none>           <none>
deploy-tolerations-74b865776b-j2smm   1/1     Running             0          3s    10.100.2.223   worker233   <none>           <none>
deploy-tolerations-74b865776b-lbgwc   1/1     Running             0          3s    10.100.1.61    worker232   <none>           <none>
deploy-tolerations-74b865776b-xchp7   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-74b865776b-55pqw   0/1     Pending   0          6s    <none>         <none>      <none>           <none>
deploy-tolerations-74b865776b-5gv7x   1/1     Running   0          6s    10.100.2.225   worker233   <none>           <none>
deploy-tolerations-74b865776b-7kmx2   0/1     Pending   0          6s    <none>         <none>      <none>           <none>
deploy-tolerations-74b865776b-c5f8g   1/1     Running   0          6s    10.100.0.11    master231   <none>           <none>
deploy-tolerations-74b865776b-c6j7h   1/1     Running   0          6s    10.100.2.224   worker233   <none>           <none>
deploy-tolerations-74b865776b-f8j4k   1/1     Running   0          6s    10.100.0.10    master231   <none>           <none>
deploy-tolerations-74b865776b-hkvll   1/1     Running   0          6s    10.100.1.60    worker232   <none>           <none>
deploy-tolerations-74b865776b-j2smm   1/1     Running   0          6s    10.100.2.223   worker233   <none>           <none>
deploy-tolerations-74b865776b-lbgwc   1/1     Running   0          6s    10.100.1.61    worker232   <none>           <none>
deploy-tolerations-74b865776b-xchp7   1/1     Running   0          6s    10.100.0.12    master231   <none>           <none>
[root@master231 scheduler]# kubectl describe pod deploy-tolerations-74b865776b-55pqw 
Name:           deploy-tolerations-74b865776b-55pqw
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=xiu
                pod-template-hash=74b865776b
                version=v1
Annotations:    <none>
Status:         Pending
...
Containers:
  c1:
    Image:      registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     1
      memory:  500Mi
    Requests:
      cpu:        1
      memory:     500Mi
	...
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  16s (x2 over 18s)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu.

🔧Remove the taints

[root@master231 scheduler]# kubectl delete -f 07-deploy-scheduler-tolerations.yaml 
deployment.apps "deploy-tolerations" deleted
[root@master231 scheduler]# kubectl taint node --all K8S-
node/master231 untainted
node/worker232 untainted
node/worker233 untainted
[root@master231 scheduler]# kubectl taint node worker233 class-
node/worker233 untainted
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:

⚙️Pod Scheduling - nodeAffinity

🔧What is nodeAffinity

  1. nodeAffinity serves the same purpose as nodeSelector but is more expressive.
  2. nodeSelector schedules by node labels, but the match requires an exact key and value.
  3. nodeAffinity can match a single key against several different values (a soft-preference variant is sketched after this list).
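
Besides the hard requirement used in the example below, nodeAffinity also has a soft form; a minimal sketch (the weight and labels are illustrative):

      affinity:
        nodeAffinity:
          # soft preference: the scheduler favours matching nodes but may fall back to others
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference:
              matchExpressions:
              - key: K8S
                operator: In
                values:
                - linux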

🧩Example

🔧Adjust the node labels

[root@master231 scheduler]# kubectl label nodes master231 K8S=yitiantian
[root@master231 scheduler]# kubectl get nodes --show-labels -l K8S |grep K8S

🧩Example

[root@master231 scheduler]# vim 08-deploy-scheduler-nodeAffinity.yaml  
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nodeaffinity
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      # Configure the Pod's affinity
      affinity:
        # Node affinity: which nodes the Pod prefers to be scheduled onto, matched by node labels.
        nodeAffinity:
          # Hard requirement; must be satisfied
          requiredDuringSchedulingIgnoredDuringExecution:
            # Match by node labels
            nodeSelectorTerms:
              # Match node labels with expressions
            - matchExpressions:
                # Node label key
              - key: K8S
                # Node label values
                values:
                - linux
                - yitiantian
                # Relationship between the key and the values
                operator: In
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
       # resources:
       #   limits:
       #     cpu: 1
       #     memory: 500Mi
        ports:
        - containerPort: 80

[root@master231 scheduler]# kubectl apply -f 08-deploy-scheduler-nodeAffinity.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl delete -f 08-deploy-scheduler-nodeAffinity.yaml 

⚙️Pod Scheduling - podAffinity

🔧What is podAffinity

  1. podAffinity means that once a Pod has been scheduled into a particular topology domain (think of it as a 'data center' for now), subsequent matching Pods are scheduled into that same topology domain.

🧩Example

🔧Label the nodes

[root@master231 scheduler]# kubectl label nodes master231 dc=jiuxianqiao
[root@master231 scheduler]# kubectl label nodes worker232 dc=lugu
[root@master231 scheduler]# kubectl label nodes worker233 dc=zhaowei
[root@master231 scheduler]# kubectl get nodes --show-labels |grep dc

🧩Example

[root@master231 scheduler]# cat 09-deploy-scheduler-podAffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-podaffinity
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      affinity:
        # Configure the Pod's affinity
        podAffinity:
          # Hard requirement; must be satisfied
          requiredDuringSchedulingIgnoredDuringExecution:
            # Node label key that defines the topology domain
          - topologyKey: dc
            # Label selector that matches the peer Pods
            labelSelector:
              matchExpressions:
              - key: app
                values:
                - xiu
                operator: In
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
[root@master231 scheduler]# kubectl apply -f 09-deploy-scheduler-podAffinity.yaml 
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl delete pods --all
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl label nodes master231 dc=lugu --overwrite
[root@master231 scheduler]# kubectl get nodes --show-labels |grep dc
[root@master231 scheduler]# kubectl delete pods --all
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl delete -f 09-deploy-scheduler-podAffinity.yaml 

⚙️Pod Scheduling - PodAntiAffinity

🔧What is PodAntiAffinity

  1. PodAntiAffinity is the opposite of PodAffinity: once a matching Pod has been scheduled into a topology domain, subsequent Pods are not scheduled into that domain.

🧩Example

🔧View the labels

[root@master231 scheduler]# kubectl get no --show-labels |grep dc

🧩Example

[root@master231 scheduler]# vim 10-deploy-scheduler-podAntAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-podaffinity
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      affinity:
        # Configure the Pod's anti-affinity
        podAntiAffinity:
          # Hard requirement; must be satisfied
          requiredDuringSchedulingIgnoredDuringExecution:
            # Node label key that defines the topology domain
          - topologyKey: dc
            # Label selector that matches the peer Pods
            labelSelector:
              matchExpressions:
              - key: app
                values:
                - xiu
                operator: In
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
[root@master231 scheduler]# kubectl apply -f 10-deploy-scheduler-podAntAffinity.yaml 
[root@master231 scheduler]# kubectl get po -o wide

🔧Relabel a node and verify Pod scheduling

[root@master231 scheduler]# kubectl label nodes master231 dc=jiuxianqiao --overwrite
[root@master231 scheduler]# kubectl get nodes --show-labels |grep dc
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl delete -f 10-deploy-scheduler-podAntAffinity.yaml 

⚙️Pod Scheduling Basics - cordon

🔧What is cordon

  1. cordon marks a node as unschedulable and is typically used during cluster maintenance.
  2. Under the hood, cordon sets the node's .spec.unschedulable field, which also gets the node tainted (node.kubernetes.io/unschedulable:NoSchedule); see the sketch after this list.
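
A quick way to see both effects, as a sketch (the jsonpath prints true on a cordoned node):

kubectl cordon worker233
# cordon sets .spec.unschedulable on the node object
kubectl get node worker233 -o jsonpath='{.spec.unschedulable}'
# and the node gets the node.kubernetes.io/unschedulable:NoSchedule taint
kubectl describe node worker233 | grep Taints -A 2
kubectl uncordon worker233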

🧩Example

[root@master231 ~]# kubectl get nodes
[root@master231 ~]# kubectl cordon worker233
[root@master231 ~]# kubectl get nodes
[root@master231 ~]# kubectl describe nodes |grep Taints -A 2
[root@master231 ~]# cd /lin/manifests/scheduler/
[root@master231 scheduler]# vim 06-deploy-scheduler-Taints.yaml
[root@master231 scheduler]# cat 06-deploy-scheduler-Taints.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taints
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        resources:
          limits:
            cpu: 1
            memory: 500Mi
        ports:
        - containerPort: 80
[root@master231 scheduler]# kubectl apply -f 06-deploy-scheduler-Taints.yaml 
[root@master231 scheduler]# kubectl get po -o wide

⚙️Pod Scheduling Basics - uncordon

🔧What is uncordon

  1. uncordon is the reverse of cordon: it marks the node as schedulable again.

🧩Example

[root@master231 scheduler]# kubectl uncordon worker233
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   15d   v1.23.17
worker232   Ready    <none>                 15d   v1.23.17
worker233   Ready    <none>                 15d   v1.23.17
[root@master231 scheduler]# kubectl describe nodes |grep Taints -A 2
[root@master231 scheduler]# kubectl get po -o wide

⚙️Multiple nodes can be operated on at once

[root@master231 scheduler]# kubectl get no
[root@master231 scheduler]# kubectl cordon worker232 worker233 
[root@master231 scheduler]# kubectl get no
[root@master231 scheduler]# kubectl uncordon worker232 worker233
[root@master231 scheduler]# kubectl get no

⚙️Pod Scheduling Basics - drain

🔧What is drain

  1. drain evicts the Pods running on a node, i.e. it moves them onto other nodes.
  2. When evicting, Pods created by DaemonSet (ds) controllers have to be ignored (--ignore-daemonsets).
  3. The main use case for eviction is scaling the cluster down (or taking a node out for maintenance); a typical flow is sketched after this list.
  4. drain calls cordon under the hood.
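
A typical maintenance / scale-down flow, as a sketch (the drain flags match the example later in this section):

# cordon the node and evict its Pods; DaemonSet Pods are skipped
kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data
# maintenance case: bring the node back afterwards
kubectl uncordon worker233
# scale-down case: remove the node from the cluster instead
kubectl delete node worker233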

🧩Example

⚙️Deploy a test service

[root@master231 scheduler]# vim 06-deploy-scheduler-Taints.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taints
spec:
  replicas: 5
  selector:
    matchLabels:
      app: xiu
  template:
    metadata:
      labels:
        app: xiu
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/lili-k8s/apps:v2
        #resources:
        #  limits:
        #    cpu: 1
        #    memory: 500Mi
        ports:
        - containerPort: 80
[root@master231 scheduler]# kubectl apply -f 06-deploy-scheduler-Taints.yaml 
[root@master231 scheduler]# kubectl get po -o wide

🔧Evict the Pods on node worker233

[root@master231 scheduler]# kubectl get no
[root@master231 scheduler]# kubectl get po -o wide
[root@master231 scheduler]# kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data
[root@master231 scheduler]# kubectl get nodes
[root@master231 scheduler]# kubectl describe nodes |grep Taints -A 2
[root@master231 scheduler]# kubectl get po -o wide

⚙️limits

  1. Constrains Pod resource usage: if a Pod's containers do not configure resources, the predefined default limits are applied automatically (a LimitRange sketch follows this list).
  2. If neither such defaults nor resources are defined, a Pod may by default consume up to all of the worker node's resources.
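
Assuming the defaults described above are provided by a LimitRange object, a minimal sketch (the name and values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults        # hypothetical name
spec:
  limits:
  - type: Container
    default:                      # injected as limits when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:               # injected as requests when a container sets none
      cpu: 250m
      memory: 256Mi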

⚙️quota

  1. Quotas cap the number of objects such as pods, deploy, and svc in a namespace, preventing users from creating too many resources (a ResourceQuota sketch follows).
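
A minimal ResourceQuota sketch that caps object counts in a namespace (the name and numbers are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts             # hypothetical name
spec:
  hard:
    pods: "10"
    services: "5"
    count/deployments.apps: "3"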