AutoScaler

    HPA (Horizontal Pod Autoscaler) scales Pods horizontally and automatically. It relies on the measurement data served by metrics-server through the resource metrics API to dynamically scale the ReplicaSet replicas it manages, so a prerequisite for using HPA is a running metrics-server Pod to act as the metrics provider;
    For controllers such as Deployment and StatefulSet, the number of Pods they control is whatever the user specified in the YAML manifest. Once an application has too many or too few Pods, we must judge for ourselves from the current resource usage whether the count is high or low, compute a reasonable value, and adjust the replica count by hand;
    But plenty now does not mean plenty later, and scarce now does not mean scarce later, so this manual approach lags badly and places a heavy burden on administrators. Kubernetes therefore provides a dedicated resource type, the automatic elastic scaler, or AutoScaler. AutoScaler is an umbrella term for a family of elastic scalers, of which the most widely used is the HPA (Horizontal Pod Autoscaler); HPA itself comes in v1 and v2;
    v1 supports scaling only on the utilization of the core CPU and memory metrics, while v2 can also scale on custom metrics alongside the core metrics;
    Besides HPA, Kubernetes also supports the CA (Cluster Autoscaler), which scales the cluster itself by automatically adding nodes on cloud platforms;
    VPA (Vertical Pod Autoscaler) scales Pods vertically. Where horizontal scaling adds or removes Pods, vertical scaling grows the resource capacity of a single Pod. Since a Pod's resources are defined by its internal requests, VPA only has to adjust CPU and memory dynamically; it is, however, still an experimental solution;
    There is also the AR (Addon Resizer), in a sense a simplified vertical scaling tool for a single Pod application, which can be placed in a Pod as a sidecar. In production we should, and must, define resource upper and lower bounds for every Pod, and once defined they are fixed. At some moment traffic may surge and CPU utilization become very high; at another the Pod may have little to do and utilization be wastefully low. Whenever a Pod's resources are either insufficient or wasted, there is a mismatch, and AR can elastically raise or lower the Pod's request bounds based on actual usage;
    Kubernetes currently offers these four kinds of AutoScaler, and the one that is both easy to use and practical today is the HPA;
HPA Controller
    The HPA is itself a controller and therefore a control loop: it periodically checks whether the specified metric utilization of the Pods managed by an underlying Deployment or StatefulSet controller has reached the defined threshold, and if so, it dynamically modifies the replica count of that Deployment or StatefulSet based on the computed result;
    This in turn drives the underlying controller to add or remove Pod replicas. Clearly, knowing each Pod's current resource usage requires being able to query Pod utilization, which is why HPA, in both v1 and v2, depends heavily on the core resource metrics capability; v2 additionally depends on the custom metrics capability;
    Note, however, that when HPA manages the Pod replica count, the metrics it evaluates fluctuate by nature. If a workload happens to swing between peaks and troughs, the replica count will thrash, repeatedly growing and shrinking, which is itself a problem. So a scale change should not be applied instantly; it is better to delay a while, and if a metric has exceeded the defined ceiling but not severely, wait before acting;
    But waiting brings its own problem: if we wait three to five minutes, the workload may run saturated for that whole window, so not reacting is also bad, while reacting too frequently causes the replica count to thrash. There is no perfect solution here.
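The scale-out arithmetic the HPA loop runs can be sketched in a few lines; the formula is the one documented for the Kubernetes HPA (desired = ceil(current * currentMetric / targetMetric), with a default 10% tolerance band), while the function name and clamping comment are illustrative:

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.1):
    """Replica count recommended by one HPA evaluation cycle."""
    ratio = current_metric / target_metric
    # Within the tolerance band around 1.0 the HPA leaves the scale untouched,
    # which is part of what damps the thrashing described above.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return ceil(current_replicas * ratio)

print(desired_replicas(2, 87, 30))  # 6 (the controller then clamps to maxReplicas)
print(desired_replicas(3, 31, 30))  # 3 (within tolerance, no change)
```

In the v1 demo further below, 2 replicas averaging 87% CPU against a 30% target recommend 6 replicas, which the controller clamps to maxReplicas=5.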
HPA Versions
HPAv1 (kubectl explain hpa): supports adjusting the Pod replica count based on CPU utilization only; metrics were provided by Heapster (since replaced by metrics-server);
HPAv2 (kubectl explain hpa --api-version=autoscaling/v2beta1): supports not only CPU utilization but also custom metrics; it can request core metrics from metrics-server or custom metrics from a custom-API adapter such as k8s-prometheus-adapter; when several metrics are evaluated, the largest computed replica count wins;
HPAv2 (kubectl explain hpa --api-version=autoscaling/v2beta2): same as above, with minor changes to the spec format;
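The visible difference between the two beta versions is mostly spec syntax: v2beta1 puts the target directly on the metric source, while v2beta2 nests it under an explicit target object. A minimal side-by-side sketch (field names taken from the respective API schemas):

```yaml
# autoscaling/v2beta1: flat target fields on the metric source
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 30
---
# autoscaling/v2beta2: the target is a nested object with its own type
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 30
```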
Metric Types
Metrics     API                        INFO
Resource    metrics.k8s.io             Pod resource metrics; the total is divided by the number of Pods before comparing against the target
Custom      custom.metrics.k8s.io      Object: metrics of CRDs and other single objects, compared directly against the target; Pods: per-Pod custom metrics, divided by the number of Pods
External    external.metrics.k8s.io    External: metrics from outside the cluster, usually implemented by cloud vendors
Spec Fields
metrics: the list of metrics used to compute the desired replica count; each metric is computed separately and the largest result becomes the final replica count;
    external: refers to a global metric not attached to any object, possibly from a component outside the cluster, such as the length of a message queue;
    object: refers to a specific metric describing a single object in the cluster, such as hits-per-second on an Ingress object;
    pods: refers to a specific metric of the Pod objects currently being autoscaled;
    resource: refers to a resource metric, i.e. a metric defined in the requests and limits of the containers of the Pods being autoscaled;
    type: the type of the metric source; one of Object, Pods, Resource, or External;
target comes in 3 types: Utilization (average utilization, as a percentage of requests), Value (a raw value), and AverageValue (a raw value averaged over the number of Pods);
Resource: the cpu and memory metrics of the Pods under the scaling target; only the Utilization and AverageValue target types are supported;
Object: a metric of a specified Kubernetes object; the data must be provided by a third-party adapter, and only the Value and AverageValue target types are supported;
Pods: metrics of the Pods under the scaling target (StatefulSet, ReplicationController, ReplicaSet); the data must be provided by a third-party adapter, and only the AverageValue target type is allowed;
External: metrics from outside Kubernetes; the data likewise comes from a third-party adapter, and only the Value and AverageValue target types are supported;
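Taken together, the four source types look like this in an autoscaling/v2beta2 spec; the metric names, the Ingress object, and the target values here are purely illustrative:

```yaml
metrics:
- type: Resource                      # core cpu/memory of the scaled Pods
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
- type: Pods                          # per-Pod custom metric, averaged
  pods:
    metric:
      name: http_requests
    target:
      type: AverageValue
      averageValue: "500"
- type: Object                        # a metric of one named object
  object:
    metric:
      name: requests-per-second
    describedObject:
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      name: main-route                # illustrative object name
    target:
      type: Value
      value: "2000"
- type: External                      # a metric from outside the cluster
  external:
    metric:
      name: queue_length              # e.g. a cloud message-queue length
    target:
      type: AverageValue
      averageValue: "30"
```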
Viewing the relevant metrics
# nodes
[root@node1 ~]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .items[].metadata.selfLink
"/apis/metrics.k8s.io/v1beta1/nodes/node3.cce.com"
"/apis/metrics.k8s.io/v1beta1/nodes/node1.cce.com"
"/apis/metrics.k8s.io/v1beta1/nodes/node2.cce.com"
# pods
[root@node1 ~]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq .items[].metadata.selfLink
"/apis/metrics.k8s.io/v1beta1/namespaces/custom-metrics/pods/custom-metrics-apiserver-747b58c66-4m45z"
"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-5prf6"
"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-m46k6"
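The jq filter above only pulls metadata.selfLink from every item; the same extraction against a trimmed sample payload (the payload here is made up, but its shape matches the metrics.k8s.io responses shown above):

```python
import json

# Trimmed sample of a /apis/metrics.k8s.io/v1beta1/nodes response
payload = json.loads("""
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {"metadata": {"name": "node1.cce.com",
                  "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1.cce.com"}},
    {"metadata": {"name": "node2.cce.com",
                  "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2.cce.com"}}
  ]
}
""")

# Equivalent of: jq .items[].metadata.selfLink
links = [item["metadata"]["selfLink"] for item in payload["items"]]
for link in links:
    print(link)
```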
HPA(v1)
# Create a deployment
[root@node1 ~]# cat myapp.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 50m
            memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp
  namespace: default
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: NodePort
# View the deployment's Service
[root@node1 ~]# kubectl get svc -l app=myapp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-svc NodePort 10.109.235.54 <none> 80:31431/TCP 4m30s
# Test a request
[root@node1 ~]# curl 172.16.1.2:31431
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# Check the resource usage of the Pods under the new deployment
[root@node1 ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)   
myapp-d48f86cd4-jt9jv 0m 2Mi # current CPU is 0m
myapp-d48f86cd4-v46fb 0m 2Mi
# Run a stress test
[root@node2 ~]# while true; do curl 172.16.1.2:31431;done
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# Check the Pods' resource usage now
[root@node1 ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)   
myapp-d48f86cd4-jt9jv 36m 2Mi             
myapp-d48f86cd4-v46fb 46m 2Mi
Scaling on CPU Utilization
    Run the HPA test against the deployment created above;
# Stop the stress test and recreate the deployment for a clean HPA test
[root@node1 ~]# kubectl apply -f myapp.yaml 
deployment.apps/myapp created
service/myapp-svc unchanged
# There are two Pods now
[root@node1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-d48f86cd4-7gllf 1/1 Running 0 65s
myapp-d48f86cd4-vll2p 1/1 Running 0 65s
# Check resource usage
[root@node1 ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)   
myapp-d48f86cd4-7gllf 1m 2Mi # CPU usage is essentially 0 at this point
myapp-d48f86cd4-vll2p 1m 2Mi  
# Create an HPAv1 controller for this deployment
[root@node1 ~]# kubectl autoscale deployment myapp --max=5 --min=1 --cpu-percent=30
[root@node1 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/30% 1 5 2 2m49s
# Run the stress test
[root@node2 ~]# while true; do curl 172.16.1.2:31807;done
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# Watch the HPA controller
[root@node1 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/30% 1 5 2 64s
[root@node1 ~]# kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/30% 1 5 2 67s

myapp Deployment/myapp 32%/30% 1 5 2 76s # already above the defined target
myapp Deployment/myapp 87%/30% 1 5 2 2m16s # not scaled up yet
myapp Deployment/myapp 87%/30% 1 5 4 2m32s # two replicas have been added
myapp Deployment/myapp 87%/30% 1 5 5 2m47s # replica count is now 5
# Check the replicas (five in total)
[root@node1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-d48f86cd4-7gllf 1/1 Running 0 5m15s
myapp-d48f86cd4-d9bq5 1/1 Running 0 96s
myapp-d48f86cd4-r7x55 1/1 Running 0 80s
myapp-d48f86cd4-tn9h9 1/1 Running 0 96s
myapp-d48f86cd4-vll2p 1/1 Running 0 5m15s

# Stop the stress test and watch the Pods
[root@node1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-d48f86cd4-7gllf 1/1 Running 0 7m28s
myapp-d48f86cd4-d9bq5 1/1 Running 0 3m49s
myapp-d48f86cd4-r7x55 1/1 Running 0 3m33s
myapp-d48f86cd4-tn9h9 1/1 Running 0 3m49s
myapp-d48f86cd4-vll2p 1/1 Running 0 7m28s

myapp-d48f86cd4-d9bq5 1/1 Terminating 0 7m51s # Pods are being removed gradually
myapp-d48f86cd4-r7x55 1/1 Terminating 0 7m35s
myapp-d48f86cd4-tn9h9 1/1 Terminating 0 7m51s
myapp-d48f86cd4-vll2p 1/1 Terminating 0 11m
myapp-d48f86cd4-d9bq5 0/1 Terminating 0 7m52s
myapp-d48f86cd4-tn9h9 0/1 Terminating 0 7m53s
myapp-d48f86cd4-r7x55 0/1 Terminating 0 7m37s
myapp-d48f86cd4-vll2p 0/1 Terminating 0 11m
myapp-d48f86cd4-d9bq5 0/1 Terminating 0 7m53s
myapp-d48f86cd4-d9bq5 0/1 Terminating 0 7m53s
myapp-d48f86cd4-d9bq5 0/1 Terminating 0 7m53s
myapp-d48f86cd4-r7x55 0/1 Terminating 0 7m38s
myapp-d48f86cd4-r7x55 0/1 Terminating 0 7m38s
myapp-d48f86cd4-tn9h9 0/1 Terminating 0 7m54s
myapp-d48f86cd4-tn9h9 0/1 Terminating 0 7m54s
myapp-d48f86cd4-vll2p 0/1 Terminating 0 11m
myapp-d48f86cd4-vll2p 0/1 Terminating 0 11m
# Check the HPA
[root@node1 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/30% 1 5 2 64s
[root@node1 ~]# kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/30% 1 5 2 67s

myapp Deployment/myapp 32%/30% 1 5 2 76s
myapp Deployment/myapp 87%/30% 1 5 2 2m16s
myapp Deployment/myapp 87%/30% 1 5 4 2m32s
myapp Deployment/myapp 87%/30% 1 5 5 2m47s

myapp Deployment/myapp 47%/30% 1 5 5 3m17s
myapp Deployment/myapp 46%/30% 1 5 5 4m18s
myapp Deployment/myapp 0%/30% 1 5 5 5m19s # with CPU back at 0, the surplus Pods are not removed immediately
myapp Deployment/myapp 0%/30% 1 5 5 10m
myapp Deployment/myapp 0%/30% 1 5 1 10m # down to one replica
[root@node1 ~]# kubectl describe hpa myapp 
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 26 Feb 2020 12:11:21 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
  resource cpu on pods (as a percentage of request): 0% (0) / 30%
Min replicas: 1
Max replicas: 5
Deployment pods: 1 current / 1 desired
Conditions:
  Type Status Reason Message
  ---- ------ ------ -------
  AbleToScale True ReadyForNewScale recommended size matches current size
  ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal SuccessfulRescale 17m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal SuccessfulRescale 17m horizontal-pod-autoscaler New size: 5; reason: cpu resource utilization (percentage of request) above target
  Normal SuccessfulRescale 9m59s horizontal-pod-autoscaler New size: 1; reason: All metrics below target

# The same HPA controller in YAML form
[root@node1 ~]# cat myapp-hpa.yaml 
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1 # required, otherwise the backend Pods cannot be detected
    kind: Deployment
    name: myapp
  targetCPUUtilizationPercentage: 30
HPA(v2beta1)
    HPAv2 is served by both autoscaling/v2beta1 and autoscaling/v2beta2: autoscaling/v2beta1 added support for custom metrics, and autoscaling/v2beta2 extended it further with a reworked spec. Either lets us use custom metrics as the HPA evaluation criterion; such a custom metric can come from a Pod's own /metrics endpoint, while core metrics come from metrics-server. The demo below uses autoscaling/v2beta1;
Scaling on Core Resource Metrics
# Create the manifest
[root@node1 ~]# cat metrics.yaml
# Define the Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metrics-app
  name: metrics-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metrics-app
  template:
    metadata:
      labels:
        app: metrics-app
      annotations:
        prometheus.io/scrape: "true" # allow scrape auto-discovery
        prometheus.io/port: "80" # discovered port is 80
        prometheus.io/path: "/metrics" # metrics are exposed at the /metrics URL
    spec:
      containers:
      - image: ikubernetes/metrics-app
        name: metrics-app
        ports:
        - name: web
          containerPort: 80
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
---
# Define the Service
apiVersion: v1
kind: Service
metadata:
  name: metrics-app
  labels:
    app: metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: metrics-app
  type: NodePort
---
# Define the HPA
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource # use a core resource metric
    resource: # metric details
      name: cpu # metric name
      targetAverageUtilization: 10 # average utilization, as a percentage
  - type: Resource # use a core resource metric
    resource: # metric details
      name: memory # metric name
      targetAverageValue: 30Mi # average value, an absolute quantity
# The Pods created
[root@node1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
metrics-app-7c575b688c-7fnz6 1/1 Running 0 5m22s
metrics-app-7c575b688c-xlc82 1/1 Running 0 5m22s
# The Service created
[root@node1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 114d
metrics-app NodePort 10.99.27.26 <none> 80:30000/TCP 5m40s
# Check the Pods' resource usage at this point
[root@node1 ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)   
metrics-app-7c575b688c-7fnz6 1m 9Mi             
metrics-app-7c575b688c-xlc82 1m 9Mi   
# Run a stress test
[root@node2 ~]# while true;do curl 172.16.1.2:30000;done
# Check Pod status
[root@node1 ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)   
metrics-app-7c575b688c-7fnz6 1m 9Mi             
metrics-app-7c575b688c-xlc82 1m 9Mi             
# A few seconds later you can see the replicas gradually scaling out
[root@node1 ~]# kubectl get pods 
NAME READY STATUS RESTARTS AGE
metrics-app-7c575b688c-2hrcm 0/1 Running 0 20s
metrics-app-7c575b688c-7fnz6 1/1 Running 0 9m18s
metrics-app-7c575b688c-bhzfm 0/1 ContainerCreating 0 5s
metrics-app-7c575b688c-dhssf 0/1 ContainerCreating 0 5s
metrics-app-7c575b688c-mnq47 0/1 ContainerCreating 0 5s
metrics-app-7c575b688c-ws5xj 0/1 Running 0 20s
metrics-app-7c575b688c-xlc82 1/1 Running 0 9m18s
# Check the HPA details
[root@node1 ~]# kubectl describe hpa metrics-app-hpa 
...
  Normal SuccessfulRescale 82s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal SuccessfulRescale 67s horizontal-pod-autoscaler New size: 7; reason: cpu resource utilization (percentage of request) above target
Custom Metrics
    This depends on the configuration file used by the k8s-prometheus-adapter-amd64 image deployed by that adapter Deployment, i.e. the rules defined in custom-metrics-config-map.yaml. Running the file's seriesQuery entries against Prometheus yields no http_requests_total series anywhere, so the metric can never be found; append the rule below to the ConfigMap and it will work;
    - seriesQuery: '{__name__=~"^http_requests_.*",kubernetes_pod_name!="",kubernetes_namespace!=""}'
      seriesFilters: []
      resources:
        overrides:
          kubernetes_namespace:
            resource: namespace
          kubernetes_pod_name:
            resource: pod
      name:
        matches: ^(.*)_(total)$
        as: "${1}"
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
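The name block is what renames the series: ^(.*)_(total)$ captures the prefix and as: "${1}" exposes it, so http_requests_total becomes the http_requests metric queried below. The rename can be checked with an ordinary regex (the helper name is illustrative; ${1} corresponds to the first capture group):

```python
import re

# The adapter rule: series matching ^(.*)_(total)$ are exposed as "${1}"
RENAME = re.compile(r"^(.*)_(total)$")

def exposed_name(series_name: str) -> str:
    """Name under which the adapter would expose a Prometheus series."""
    m = RENAME.match(series_name)
    return m.group(1) if m else series_name

print(exposed_name("http_requests_total"))  # http_requests
```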
Fetching Custom Metrics
[root@node1 ~]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": []
}
