Resource metrics and custom metrics
Resource metrics: metrics-server
Custom metrics: Prometheus, k8s-prometheus-adapter
New-generation architecture:
Core metrics pipeline: made up of the kubelet, metrics-server, and the APIs they expose through the API server; it provides real-time CPU and memory usage, Pod resource utilization, and container disk usage.
Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA; it carries the core metrics plus many non-core metrics, and non-core metrics cannot be interpreted by Kubernetes itself.
metrics-server: an extension API server, aggregated into the main API server at
/apis/metrics.k8s.io/v1beta1
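Once metrics-server is running, that aggregated endpoint answers ordinary API requests; a quick sanity check (this is the same data kubectl top consumes):
[root@master ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes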


master:
[root@master metrics]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@master metrics]# ls
grafana.yaml heapster.yaml influxdb.yaml pod-demo.yaml
[root@master metrics]# kubectl delete -f ./
[root@master metrics]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-997tb 3/3 Running 0 28d
canal-j6t4j 3/3 Running 0 28d
canal-jxq25 3/3 Running 0 28d
coredns-78fcdf6894-bt5g6 1/1 Running 1 67d
coredns-78fcdf6894-zzbll 1/1 Running 1 67d
etcd-master.smoke.com 1/1 Running 1 67d
kube-apiserver-master.smoke.com 1/1 Running 1 67d
kube-controller-manager-master.smoke.com 1/1 Running 1 67d
kube-flannel-ds-g69pn 1/1 Running 0 4d
kube-flannel-ds-rkd4c 1/1 Running 0 4d
kube-flannel-ds-stnlp 1/1 Running 0 4d
kube-proxy-5jppm 1/1 Running 1 66d
kube-proxy-7lg96 1/1 Running 1 67d
kube-proxy-qmrq7 1/1 Running 1 66d
kube-scheduler-master.smoke.com 1/1 Running 1 67d
kubernetes-dashboard-6948bdb78-fdpt2 1/1 Running 0 12d
Deploy metrics-server
https://github.com/kubernetes-sigs/metrics-server/tree/v0.3.0
master:
[root@master metrics]# git clone https://github.com/kubernetes-sigs/metrics-server.git
[root@master metrics]# ll
total 20
-rw-r--r--. 1 root root 2357 Aug 19 22:15 grafana.yaml
-rw-r--r--. 1 root root 1182 Aug 18 22:01 heapster.yaml
-rw-r--r--. 1 root root 1025 Aug 17 22:06 influxdb.yaml
drwxr-xr-x. 8 root root 4096 Aug 30 2018 metrics-server-0.3.0
-rw-r--r--. 1 root root 318 Aug 11 22:04 pod-demo.yaml
[root@master metrics]# cd metrics-server-0.3.0/
[root@master metrics-server-0.3.0]# cd deploy/
[root@master deploy]# cd 1.8+/
[root@master 1.8+]# ll
total 24
-rw-r--r--. 1 root root 308 Aug 30 2018 auth-delegator.yaml
-rw-r--r--. 1 root root 329 Aug 30 2018 auth-reader.yaml
-rw-r--r--. 1 root root 298 Aug 30 2018 metrics-apiservice.yaml
-rw-r--r--. 1 root root 829 Aug 30 2018 metrics-server-deployment.yaml
-rw-r--r--. 1 root root 249 Aug 30 2018 metrics-server-service.yaml
-rw-r--r--. 1 root root 612 Aug 30 2018 resource-reader.yaml
[root@master 1.8+]# less metrics-apiservice.yaml
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
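This APIService object is what tells the aggregation layer to proxy /apis/metrics.k8s.io/v1beta1 to the metrics-server Service in kube-system; after it is applied, the registration status can be checked with:
[root@master 1.8+]# kubectl get apiservice v1beta1.metrics.k8s.io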
[root@master 1.8+]# cat metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: gcr.io/google_containers/metrics-server-amd64:v0.2.1
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
[root@master 1.8+]# cat resource-reader.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
[root@master 1.8+]# kubectl apply -f .
[root@master 1.8+]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 68d
kubernetes-dashboard NodePort 10.96.159.129 <none> 443:31762/TCP 38d
metrics-server ClusterIP 10.97.161.186 <none> 443/TCP 28s
[root@master 1.8+]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-997tb 3/3 Running 0 29d
canal-j6t4j 3/3 Running 0 29d
canal-jxq25 3/3 Running 0 29d
coredns-78fcdf6894-bt5g6 1/1 Running 1 68d
coredns-78fcdf6894-zzbll 1/1 Running 1 68d
etcd-master.smoke.com 1/1 Running 1 68d
kube-apiserver-master.smoke.com 1/1 Running 1 68d
kube-controller-manager-master.smoke.com 1/1 Running 1 68d
kube-flannel-ds-g69pn 1/1 Running 0 5d
kube-flannel-ds-rkd4c 1/1 Running 0 5d
kube-flannel-ds-stnlp 1/1 Running 0 5d
kube-proxy-5jppm 1/1 Running 1 67d
kube-proxy-7lg96 1/1 Running 1 68d
kube-proxy-qmrq7 1/1 Running 1 67d
kube-scheduler-master.smoke.com 1/1 Running 1 68d
kubernetes-dashboard-6948bdb78-fdpt2 1/1 Running 0 13d
metrics-server-f5bc46bd7-dx2dz 0/1 ContainerCreating 0 46s
[root@master 1.8+]# kubectl describe pods metrics-server-f5bc46bd7-dx2dz -n kube-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 28m (x275 over 23h) kubelet, node02.smoke.com pulling image "gcr.io/google_containers/metrics-server-amd64:v0.2.1"
Warning BackOff 3m (x5652 over 22h) kubelet, node02.smoke.com Back-off restarting failed container
[root@master 1.8+]# kubectl logs metrics-server-f5bc46bd7-dx2dz -n kube-system #deploying straight from kubernetes-sigs/metrics-server ran into a problem
I0824 13:37:21.919087 1 heapster.go:71] /metrics-server
I0824 13:37:21.919176 1 heapster.go:72] Metrics Server version v0.2.1
F0824 13:37:21.919182 1 heapster.go:79] Failed to get kubernetes address: No kubernetes source found.
Switch to a different metrics-server version
https://github.com/kubernetes/kubernetes/tree/v1.11.1/cluster/addons/metrics-server
master:
[root@master 1.8+]# kubectl delete -f .
[root@master 1.8+]# cd ../../../
[root@master metrics]# rm -rf metrics-server-0.3.0/
[root@master metrics]# mkdir metrics-server
[root@master metrics]# cd metrics-server/
[root@master metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml; do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.11.1/cluster/addons/metrics-server/$file; done
[root@master metrics-server]# ll
total 24
-rw-r--r--. 1 root root 398 Aug 24 22:02 auth-delegator.yaml
-rw-r--r--. 1 root root 419 Aug 24 22:02 auth-reader.yaml
-rw-r--r--. 1 root root 393 Aug 24 22:02 metrics-apiservice.yaml
-rw-r--r--. 1 root root 2647 Aug 24 22:02 metrics-server-deployment.yaml
-rw-r--r--. 1 root root 336 Aug 24 22:02 metrics-server-service.yaml
-rw-r--r--. 1 root root 801 Aug 24 22:02 resource-reader.yaml
[root@master metrics-server]# kubectl apply -f .
[root@master metrics-server]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-997tb 3/3 Running 0 30d
canal-j6t4j 3/3 Running 0 30d
canal-jxq25 3/3 Running 0 30d
coredns-78fcdf6894-bt5g6 1/1 Running 1 69d
coredns-78fcdf6894-zzbll 1/1 Running 1 69d
etcd-master.smoke.com 1/1 Running 1 69d
kube-apiserver-master.smoke.com 1/1 Running 1 69d
kube-controller-manager-master.smoke.com 1/1 Running 1 69d
kube-flannel-ds-g69pn 1/1 Running 0 6d
kube-flannel-ds-rkd4c 1/1 Running 0 6d
kube-flannel-ds-stnlp 1/1 Running 0 6d
kube-proxy-5jppm 1/1 Running 1 68d
kube-proxy-7lg96 1/1 Running 1 69d
kube-proxy-qmrq7 1/1 Running 1 68d
kube-scheduler-master.smoke.com 1/1 Running 1 69d
kubernetes-dashboard-6948bdb78-fdpt2 1/1 Running 0 14d
metrics-server-v0.2.1-597c89dc98-v95ms 0/2 ContainerCreating 0 24s
[root@master metrics-server]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-997tb 3/3 Running 0 30d
canal-j6t4j 3/3 Running 0 30d
canal-jxq25 3/3 Running 0 30d
coredns-78fcdf6894-bt5g6 1/1 Running 1 69d
coredns-78fcdf6894-zzbll 1/1 Running 1 69d
etcd-master.smoke.com 1/1 Running 1 69d
kube-apiserver-master.smoke.com 1/1 Running 1 69d
kube-controller-manager-master.smoke.com 1/1 Running 1 69d
kube-flannel-ds-g69pn 1/1 Running 0 6d
kube-flannel-ds-rkd4c 1/1 Running 0 6d
kube-flannel-ds-stnlp 1/1 Running 0 6d
kube-proxy-5jppm 1/1 Running 1 68d
kube-proxy-7lg96 1/1 Running 1 69d
kube-proxy-qmrq7 1/1 Running 1 68d
kube-scheduler-master.smoke.com 1/1 Running 1 69d
kubernetes-dashboard-6948bdb78-fdpt2 1/1 Running 0 14d
metrics-server-v0.2.1-fd596d746-d2fc4 2/2 Running 0 2m
[root@master metrics-server]# cat metrics-server-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server-v0.2.1
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.2.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.2.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.2.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.2.1
        command:
        - /metrics-server
        - --source=kubernetes.summary_api:''
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=40m
        - --extra-cpu=0.5m
        - --memory=40Mi
        - --extra-memory=4Mi
        - --threshold=5
        - --deployment=metrics-server-v0.2.1
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
[root@master metrics-server]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1 #the newly added API group
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@master metrics-server]# kubectl proxy --port=8080 #start a local proxy
[root@master metrics-server]# curl http://localhost:8080/metrics.k8s.io/v1beta1 #note: the /apis prefix is missing, so this and the next two requests just return the API server's generic path listing
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
[root@master metrics-server]# curl http://localhost:8080/metrics.k8s.io/v1beta1/nodes #meant to fetch node metrics, but with the same missing /apis prefix (the correct URL, /apis/metrics.k8s.io/v1beta1/nodes, is used further below)
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
[root@master metrics-server]# curl http://localhost:8080/metrics.k8s.io/v1beta1/pods
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
[root@master metrics-server]# kubectl top nodes #fetch node usage data
error: metrics not available yet
[root@master metrics-server]# kubectl top pods
W0824 22:15:45.731761 78167 top_pod.go:263] Metrics not available for pod default/myapp-deploy-5d9c6985f5-7sbdk, age: 336h29m36.731666374s
error: Metrics not available for pod default/myapp-deploy-5d9c6985f5-7sbdk, age: 336h29m36.731666374s
[root@master metrics-server]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-997tb 3/3 Running 0 30d
canal-j6t4j 3/3 Running 0 30d
canal-jxq25 3/3 Running 0 30d
coredns-78fcdf6894-bt5g6 1/1 Running 1 69d
coredns-78fcdf6894-zzbll 1/1 Running 1 69d
etcd-master.smoke.com 1/1 Running 1 69d
kube-apiserver-master.smoke.com 1/1 Running 1 69d
kube-controller-manager-master.smoke.com 1/1 Running 1 69d
kube-flannel-ds-g69pn 1/1 Running 0 6d
kube-flannel-ds-rkd4c 1/1 Running 0 6d
kube-flannel-ds-stnlp 1/1 Running 0 6d
kube-proxy-5jppm 1/1 Running 1 68d
kube-proxy-7lg96 1/1 Running 1 69d
kube-proxy-qmrq7 1/1 Running 1 68d
kube-scheduler-master.smoke.com 1/1 Running 1 69d
kubernetes-dashboard-6948bdb78-fdpt2 1/1 Running 0 14d
metrics-server-v0.2.1-fd596d746-d2fc4 2/2 Running 0 12m
[root@master metrics-server]# kubectl logs metrics-server-v0.2.1-fd596d746-d2fc4 -c metrics-server -n kube-system #still erroring
I0824 14:04:37.410109 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0824 14:04:37.410309 1 heapster.go:72] Metrics Server version v0.2.1
I0824 14:04:37.411173 1 configs.go:61] Using Kubernetes client with master "https://10.96.0.1:443" and version
I0824 14:04:37.411217 1 configs.go:62] Using kubelet port 10255
I0824 14:04:37.412743 1 heapster.go:128] Starting with Metric Sink
I0824 14:04:44.012095 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0824 14:04:56.910986 1 heapster.go:101] Starting Heapster API server...
[restful] 2020/08/24 14:04:56 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2020/08/24 14:04:56 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0824 14:04:56.913015 1 serve.go:85] Serving securely on 0.0.0.0:443
E0824 14:05:05.004959 1 summary.go:97] error while getting metrics summary from Kubelet master.smoke.com(172.20.0.70:10255): Get http://172.20.0.70:10255/stats/summary/: dial tcp 172.20.0.70:10255: getsockopt: connection refused
E0824 14:05:05.008233 1 summary.go:97] error while getting metrics summary from Kubelet node02.smoke.com(172.20.0.67:10255): Get http://172.20.0.67:10255/stats/summary/: dial tcp 172.20.0.67:10255: getsockopt: connection refused
[root@master metrics-server]# vim metrics-server-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server-v0.2.1
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.2.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.2.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.2.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.2.1
        command:
        - /metrics-server
        # by default metrics-server pulls from the kubelet summary API on port 10255; 10255 is a
        # plain-HTTP port while 10250 is HTTPS, and because the HTTP port is considered unsafe
        # (sensitive data could leak), kubeadm-deployed kubelets disable 10255 outright, so the
        # default source gets nothing:
        # - --source=kubernetes.summary_api:''
        # still the summary API, but reached over HTTPS (kubeletHttps=true) on port 10250 via the
        # kubernetes Service in the default namespace; insecure=true keeps skipping certificate
        # verification:
        - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=40m
        - --extra-cpu=0.5m
        - --memory=40Mi
        - --extra-memory=4Mi
        - --threshold=5
        - --deployment=metrics-server-v0.2.1
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
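The port situation described in the comments can be verified straight from a node; a quick check (the IP is one of this lab's nodes, substitute your own):
[root@master ~]# ss -tnlp | grep kubelet    #on a kubeadm cluster, expect 10250 (HTTPS) listening and no 10255
[root@master ~]# curl -sk https://172.20.0.70:10250/stats/summary/    #an Unauthorized reply still proves the HTTPS port answers, while 10255 refuses the connection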
[root@master metrics-server]# cp resource-reader.yaml resource-reader.yaml.bak #after the change above, the permissions granted by the stock resource-reader are no longer sufficient
[root@master metrics-server]# ll
total 28
-rw-r--r--. 1 root root 398 Aug 24 22:02 auth-delegator.yaml
-rw-r--r--. 1 root root 419 Aug 24 22:02 auth-reader.yaml
-rw-r--r--. 1 root root 393 Aug 24 22:02 metrics-apiservice.yaml
-rw-r--r--. 1 root root 2767 Aug 24 22:39 metrics-server-deployment.yaml
-rw-r--r--. 1 root root 336 Aug 24 22:02 metrics-server-service.yaml
-rw-r--r--. 1 root root 801 Aug 24 22:02 resource-reader.yaml
-rw-r--r--. 1 root root 801 Aug 24 22:40 resource-reader.yaml.bak
[root@master metrics-server]# vim resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats #the dedicated subresource for pulling node stats; it has to be granted explicitly
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
[root@master metrics-server]# kubectl apply -f .
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
"kind": "NodeMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
},
"items": [
{
"metadata": {
"name": "master.smoke.com",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master.smoke.com",
"creationTimestamp": "2020-08-24T14:47:31Z"
},
"timestamp": "2020-08-24T14:47:00Z",
"window": "1m0s",
"usage": {
"cpu": "213m",
"memory": "1253892Ki"
}
},
{
"metadata": {
"name": "node01.smoke.com",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node01.smoke.com",
"creationTimestamp": "2020-08-24T14:47:31Z"
},
"timestamp": "2020-08-24T14:47:00Z",
"window": "1m0s",
"usage": {
"cpu": "89m",
"memory": "862592Ki"
}
},
{
"metadata": {
"name": "node02.smoke.com",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node02.smoke.com",
"creationTimestamp": "2020-08-24T14:47:31Z"
},
"timestamp": "2020-08-24T14:47:00Z",
"window": "1m0s",
"usage": {
"cpu": "89m",
"memory": "851804Ki"
}
}
]
}
[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
{
"kind": "PodMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
},
"items": [
{
"metadata": {
"name": "default-http-backend-846b65fb5f-4489p",
"namespace": "ingress-nginx",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/ingress-nginx/pods/default-http-backend-846b65fb5f-4489p",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "default-http-backend",
"usage": {
"cpu": "0",
"memory": "5176Ki"
}
}
]
},
{
"metadata": {
"name": "myapp-deploy-5d9c6985f5-ssdf6",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/myapp-deploy-5d9c6985f5-ssdf6",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "myapp",
"usage": {
"cpu": "0",
"memory": "2884Ki"
}
}
]
},
{
"metadata": {
"name": "canal-jxq25",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/canal-jxq25",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "calico-node",
"usage": {
"cpu": "18m",
"memory": "53220Ki"
}
},
{
"name": "install-cni",
"usage": {
"cpu": "0",
"memory": "2484Ki"
}
},
{
"name": "kube-flannel",
"usage": {
"cpu": "3m",
"memory": "13392Ki"
}
}
]
},
{
"metadata": {
"name": "kube-flannel-ds-stnlp",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-stnlp",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-flannel",
"usage": {
"cpu": "4m",
"memory": "16404Ki"
}
}
]
},
{
"metadata": {
"name": "nginx-ingress-controller-d658896cd-krhh5",
"namespace": "ingress-nginx",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/ingress-nginx/pods/nginx-ingress-controller-d658896cd-krhh5",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "nginx-ingress-controller",
"usage": {
"cpu": "3m",
"memory": "107820Ki"
}
}
]
},
{
"metadata": {
"name": "etcd-master.smoke.com",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/etcd-master.smoke.com",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "etcd",
"usage": {
"cpu": "19m",
"memory": "113336Ki"
}
}
]
},
{
"metadata": {
"name": "kube-flannel-ds-g69pn",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-g69pn",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-flannel",
"usage": {
"cpu": "4m",
"memory": "16108Ki"
}
}
]
},
{
"metadata": {
"name": "metrics-server-v0.2.1-84678c956-bd8dr",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/metrics-server-v0.2.1-84678c956-bd8dr",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "metrics-server",
"usage": {
"cpu": "0",
"memory": "13400Ki"
}
},
{
"name": "metrics-server-nanny",
"usage": {
"cpu": "0",
"memory": "9676Ki"
}
}
]
},
{
"metadata": {
"name": "canal-j6t4j",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/canal-j6t4j",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "calico-node",
"usage": {
"cpu": "22m",
"memory": "17604Ki"
}
},
{
"name": "install-cni",
"usage": {
"cpu": "0",
"memory": "16736Ki"
}
},
{
"name": "kube-flannel",
"usage": {
"cpu": "4m",
"memory": "15776Ki"
}
}
]
},
{
"metadata": {
"name": "kube-proxy-7lg96",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-7lg96",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-proxy",
"usage": {
"cpu": "3m",
"memory": "24308Ki"
}
}
]
},
{
"metadata": {
"name": "coredns-78fcdf6894-bt5g6",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-78fcdf6894-bt5g6",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "coredns",
"usage": {
"cpu": "1m",
"memory": "19988Ki"
}
}
]
},
{
"metadata": {
"name": "kube-proxy-qmrq7",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-qmrq7",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-proxy",
"usage": {
"cpu": "2m",
"memory": "24628Ki"
}
}
]
},
{
"metadata": {
"name": "canal-997tb",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/canal-997tb",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "calico-node",
"usage": {
"cpu": "19m",
"memory": "19604Ki"
}
},
{
"name": "install-cni",
"usage": {
"cpu": "0",
"memory": "13972Ki"
}
},
{
"name": "kube-flannel",
"usage": {
"cpu": "4m",
"memory": "15948Ki"
}
}
]
},
{
"metadata": {
"name": "kube-scheduler-master.smoke.com",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-master.smoke.com",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-scheduler",
"usage": {
"cpu": "9m",
"memory": "26680Ki"
}
}
]
},
{
"metadata": {
"name": "kubernetes-dashboard-6948bdb78-fdpt2",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kubernetes-dashboard-6948bdb78-fdpt2",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kubernetes-dashboard",
"usage": {
"cpu": "0",
"memory": "18264Ki"
}
}
]
},
{
"metadata": {
"name": "kube-apiserver-master.smoke.com",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-master.smoke.com",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-apiserver",
"usage": {
"cpu": "42m",
"memory": "582116Ki"
}
}
]
},
{
"metadata": {
"name": "myapp-deploy-5d9c6985f5-7sbdk",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/myapp-deploy-5d9c6985f5-7sbdk",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "myapp",
"usage": {
"cpu": "0",
"memory": "2936Ki"
}
}
]
},
{
"metadata": {
"name": "myapp-deploy-5d9c6985f5-rcxvj",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/myapp-deploy-5d9c6985f5-rcxvj",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "myapp",
"usage": {
"cpu": "0",
"memory": "3036Ki"
}
}
]
},
{
"metadata": {
"name": "coredns-78fcdf6894-zzbll",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-78fcdf6894-zzbll",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "coredns",
"usage": {
"cpu": "1m",
"memory": "13080Ki"
}
}
]
},
{
"metadata": {
"name": "kube-proxy-5jppm",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-5jppm",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-proxy",
"usage": {
"cpu": "2m",
"memory": "25148Ki"
}
}
]
},
{
"metadata": {
"name": "kube-flannel-ds-rkd4c",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-rkd4c",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-flannel",
"usage": {
"cpu": "3m",
"memory": "12968Ki"
}
}
]
},
{
"metadata": {
"name": "pod1",
"namespace": "prod",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/prod/pods/pod1",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "myapp",
"usage": {
"cpu": "0",
"memory": "3480Ki"
}
}
]
},
{
"metadata": {
"name": "kube-controller-manager-master.smoke.com",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-controller-manager-master.smoke.com",
"creationTimestamp": "2020-08-24T14:48:22Z"
},
"timestamp": "2020-08-24T14:48:00Z",
"window": "1m0s",
"containers": [
{
"name": "kube-controller-manager",
"usage": {
"cpu": "39m",
"memory": "88308Ki"
}
}
]
}
]
}
[root@master metrics-server]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master.smoke.com 218m 10% 1225Mi 71%
node01.smoke.com 88m 4% 842Mi 48%
node02.smoke.com 85m 4% 834Mi 48%
[root@master metrics-server]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)
myapp-deploy-5d9c6985f5-7sbdk 0m 2Mi
myapp-deploy-5d9c6985f5-rcxvj 0m 2Mi
myapp-deploy-5d9c6985f5-ssdf6 0m 2Mi
[root@master metrics-server]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
canal-997tb 23m 48Mi
canal-j6t4j 26m 49Mi
canal-jxq25 21m 67Mi
coredns-78fcdf6894-bt5g6 1m 19Mi
coredns-78fcdf6894-zzbll 1m 12Mi
etcd-master.smoke.com 19m 93Mi
kube-apiserver-master.smoke.com 43m 568Mi
kube-controller-manager-master.smoke.com 38m 86Mi
kube-flannel-ds-g69pn 4m 15Mi
kube-flannel-ds-rkd4c 4m 12Mi
kube-flannel-ds-stnlp 5m 15Mi
kube-proxy-5jppm 3m 24Mi
kube-proxy-7lg96 3m 23Mi
kube-proxy-qmrq7 2m 24Mi
kube-scheduler-master.smoke.com 9m 26Mi
kubernetes-dashboard-6948bdb78-fdpt2 0m 17Mi
metrics-server-v0.2.1-84678c956-bd8dr 0m 24Mi
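kubectl top can also break a Pod's usage down per container with the standard --containers flag, e.g.:
[root@master metrics-server]# kubectl top pods -n kube-system --containers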
node01:
[root@node01 ~]# cd /var/log/containers/
[root@node01 containers]# ls
canal-997tb_kube-system_calico-node-b03c01e40e55f8eb6605d9f9b7f5fbd15931b7c6945db5f901361997ad59c9eb.log
canal-997tb_kube-system_install-cni-0f87f5f1cb6dc08ecad013d0a2ca30a6ffb32be9a2724ff9bdcff7a4a9ae683e.log
canal-997tb_kube-system_kube-flannel-6dad9a0a4f45437f153e998b92e0712aabecd65007c805d5339d79414ee6cec3.log
default-http-backend-846b65fb5f-4489p_ingress-nginx_default-http-backend-0e79b771c6c4d70ea88fb15af61720dc249ea7196bcae88d17588bb4fe830b38.log
kube-flannel-ds-stnlp_kube-system_install-cni-88fd65a289c845de8760270adf201a85e15c60097aeed92b7d27d0a37d3cfb94.log
kube-flannel-ds-stnlp_kube-system_kube-flannel-f80b0764bd9a08d21be5cb61d74a460d33426e662215ab46c9d3d52fe321db4a.log
kube-proxy-5jppm_kube-system_kube-proxy-7109c344391fe15224a9f4abd1cd7791bc0be71a8ae044ea90973c37457eebd8.log
kube-proxy-5jppm_kube-system_kube-proxy-89ced1b545d2846ada6149a624d6b66608868cda5348ef2fc9b74e06216b6288.log
myapp-deploy-5d9c6985f5-ssdf6_default_myapp-0cb0c3ddca341a272b97083f36e2fc125b5619af9539340779f00a52c0e134dc.log
nginx-ingress-controller-d658896cd-krhh5_ingress-nginx_nginx-ingress-controller-419067b721d757a0fd3715f553c9ca3238d709ea88e45a490d90245ed9dd9628.log
nginx-ingress-controller-d658896cd-krhh5_ingress-nginx_nginx-ingress-controller-d3bdbce67eccaa64cbfd74266b1c108d95c0f1935b2102d64c44d06bb1abdd1c.log
pod1_prod_myapp-57ac45ff76d0976c345482abc2aaa8c018ce64bd1d314c6d21aaf5cf2eeb9b54.log
Deploy Prometheus
https://github.com/kubernetes/kubernetes/tree/v1.11.1/cluster/addons/prometheus
https://github.com/iKubernetes/k8s-prom
master:
[root@master metrics]# git clone https://github.com/iKubernetes/k8s-prom.git
[root@master metrics]# ll
total 16
-rw-r--r--. 1 root root 2357 Aug 19 22:15 grafana.yaml
-rw-r--r--. 1 root root 1182 Aug 18 22:01 heapster.yaml
-rw-r--r--. 1 root root 1025 Aug 17 22:06 influxdb.yaml
drwxr-xr-x. 8 root root 167 Aug 30 22:00 k8s-prom
drwxr-xr-x. 2 root root 221 Aug 24 22:44 metrics-server
-rw-r--r--. 1 root root 318 Aug 11 22:04 pod-demo.yaml
[root@master metrics]# cd k8s-prom/
[root@master k8s-prom]# ls
k8s-prometheus-adapter kube-state-metrics namespace.yaml node_exporter podinfo prometheus README.md
[root@master k8s-prom]# kubectl apply -f namespace.yaml
[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# ls
node-exporter-ds.yaml node-exporter-svc.yaml
[root@master node_exporter]# kubectl apply -f .
[root@master node_exporter]# kubectl get pods -n prom
NAME READY STATUS RESTARTS AGE
prometheus-node-exporter-m5mkq 1/1 Running 0 1m
prometheus-node-exporter-r9m9l 1/1 Running 0 1m
prometheus-node-exporter-tbd2r 1/1 Running 0 1m
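Each node-exporter Pod serves Prometheus-format metrics on the node's port 9100 (this repo's DaemonSet runs on the host network), so the endpoint can be spot-checked from any machine; the IP below is just one of this lab's nodes:
[root@master node_exporter]# curl -s http://172.20.0.66:9100/metrics | head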
[root@master node_exporter]# cd ../prometheus/
[root@master prometheus]# ls
prometheus-cfg.yaml prometheus-deploy.yaml prometheus-rbac.yaml prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
[root@master prometheus]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/prometheus-node-exporter-m5mkq 1/1 Running 0 3m
pod/prometheus-node-exporter-r9m9l 1/1 Running 0 3m
pod/prometheus-node-exporter-tbd2r 1/1 Running 0 3m
pod/prometheus-server-65f5d59585-6f4wx 0/1 Pending 0 16s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus NodePort 10.101.97.199 <none> 9090:30090/TCP 16s
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 3m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 3m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-server 1 1 1 0 16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-server-65f5d59585 1 1 0 16s
[root@master prometheus]# kubectl describe pod/prometheus-server-65f5d59585-6f4wx -n prom #insufficient memory
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s (x25 over 2m) default-scheduler 0/3 nodes are available: 3 Insufficient memory.
[root@master prometheus]# vim prometheus-deploy.yaml #the Pod could not schedule for lack of memory, so the resources requests are dropped in the edit below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: prom
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: prom/prometheus:v2.2.1
        imagePullPolicy: Always
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=720h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          items:
          - key: prometheus.yml
            path: prometheus.yml
            mode: 0644
      - name: prometheus-storage-volume
        emptyDir: {}
[root@master prometheus]# kubectl apply -f prometheus-deploy.yaml
[root@master prometheus]# kubectl get pods -n prom
NAME READY STATUS RESTARTS AGE
prometheus-node-exporter-m5mkq 1/1 Running 0 9m
prometheus-node-exporter-r9m9l 1/1 Running 0 9m
prometheus-node-exporter-tbd2r 1/1 Running 0 9m
prometheus-server-7c8554cf-ddl8n 1/1 Running 0 1m
[root@master prometheus]# kubectl logs prometheus-server-7c8554cf-ddl8n -n prom
Access 172.20.0.66:30090 from a browser on the host machine.

master:
[root@master prometheus]# cd ../kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml kube-state-metrics-rbac.yaml kube-state-metrics-svc.yaml
[root@master kube-state-metrics]# kubectl apply -f .
[root@master kube-state-metrics]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-58dffdf67d-t2c4d 0/1 ErrImagePull 0 1m
pod/prometheus-node-exporter-m5mkq 1/1 Running 0 35m
pod/prometheus-node-exporter-r9m9l 1/1 Running 0 35m
pod/prometheus-node-exporter-tbd2r 1/1 Running 0 35m
pod/prometheus-server-7c8554cf-ddl8n 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-state-metrics ClusterIP 10.108.99.47 <none> 8080/TCP 1m
service/prometheus NodePort 10.101.97.199 <none> 9090:30090/TCP 32m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 35m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 35m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1 1 1 0 1m
deployment.apps/prometheus-server 1 1 1 1 32m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 0 1m
replicaset.apps/prometheus-server-65f5d59585 0 0 0 32m
replicaset.apps/prometheus-server-7c8554cf 1 1 1 27m
[root@master kube-state-metrics]# cd ../k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml
custom-metrics-apiserver-deployment.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
custom-metrics-apiserver-service-account.yaml
custom-metrics-apiserver-service.yaml
custom-metrics-apiservice.yaml
custom-metrics-cluster-role.yaml
custom-metrics-resource-reader-cluster-role.yaml
hpa-custom-metrics-cluster-role-binding.yaml
[root@master k8s-prometheus-adapter]# cat custom-metrics-apiserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
      name: custom-metrics-apiserver
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-apiserver
        image: directxman12/k8s-prometheus-adapter-amd64
        args:
        - /adapter
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/serving.crt
        - --tls-private-key-file=/var/run/serving-cert/serving.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus.prom.svc:9090/
        - --metrics-relist-interval=30s
        - --rate-interval=5m
        - --v=10
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs
[root@master metrics-server]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
[root@master pki]# kubectl get secrets -n prom
NAME TYPE DATA AGE
cm-adapter-serving-certs Opaque 2 27s
default-token-9wnmr kubernetes.io/service-account-token 3 52m
kube-state-metrics-token-njvm8 kubernetes.io/service-account-token 3 16m
prometheus-token-c568w kubernetes.io/service-account-token 3 47m
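Before applying the adapter manifests it is worth confirming the certificate was really signed by the cluster CA; a quick openssl check:
[root@master pki]# openssl x509 -in serving.crt -noout -subject -issuer    #subject should be CN=serving, issuer the cluster CA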
[root@master k8s-prometheus-adapter]# kubectl apply -f .
[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/custom-metrics-apiserver-5f6b4d857d-w2sdg 0/1 CrashLoopBackOff 257 22h
pod/kube-state-metrics-58dffdf67d-t2c4d 1/1 Running 0 22h
pod/prometheus-node-exporter-m5mkq 1/1 Running 0 23h
pod/prometheus-node-exporter-r9m9l 1/1 Running 0 23h
pod/prometheus-node-exporter-tbd2r 1/1 Running 0 23h
pod/prometheus-server-7c8554cf-ddl8n 1/1 Running 0 23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/custom-metrics-apiserver ClusterIP 10.106.178.196 <none> 443/TCP 22h
service/kube-state-metrics ClusterIP 10.108.99.47 <none> 8080/TCP 22h
service/prometheus NodePort 10.101.97.199 <none> 9090:30090/TCP 23h
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 23h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 23h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/custom-metrics-apiserver 1 1 1 0 22h
deployment.apps/kube-state-metrics 1 1 1 1 22h
deployment.apps/prometheus-server 1 1 1 1 23h
NAME DESIRED CURRENT READY AGE
replicaset.apps/custom-metrics-apiserver-5f6b4d857d 1 1 0 22h
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 1 22h
replicaset.apps/prometheus-server-65f5d59585 0 0 0 23h
replicaset.apps/prometheus-server-7c8554cf 1 1 1 23h
[root@master k8s-prometheus-adapter]# kubectl describe pod/custom-metrics-apiserver-5f6b4d857d-w2sdg -n prom
[root@master k8s-prometheus-adapter]# kubectl logs custom-metrics-apiserver-5f6b4d857d-w2sdg -n prom
[root@master k8s-prometheus-adapter]# vim custom-metrics-apiserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
      name: custom-metrics-apiserver
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-apiserver
        image: directxman12/k8s-prometheus-adapter-amd64:v0.2.1 #change the image to a pinned version
        args:
        - /adapter
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/serving.crt
        - --tls-private-key-file=/var/run/serving-cert/serving.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus.prom.svc:9090/
        - --metrics-relist-interval=30s
        - --rate-interval=5m
        - --v=10
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]# kubectl get pod -n prom #still failing
NAME READY STATUS RESTARTS AGE
custom-metrics-apiserver-5f6b4d857d-w2sdg 0/1 CrashLoopBackOff 265 23h
custom-metrics-apiserver-86ccf774d5-5fc74 0/1 CrashLoopBackOff 7 16m
kube-state-metrics-58dffdf67d-t2c4d 1/1 Running 0 23h
prometheus-node-exporter-m5mkq 1/1 Running 0 1d
prometheus-node-exporter-r9m9l 1/1 Running 0 1d
prometheus-node-exporter-tbd2r 1/1 Running 0 1d
prometheus-server-7c8554cf-ddl8n 1/1 Running 0 1d
Replace custom-metrics-apiserver-deployment.yaml with the v0.2.2 manifest:
https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/v0.2.2/deploy/manifests/custom-metrics-apiserver-deployment.yaml
Add custom-metrics-config-map.yaml:
https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/v0.2.2/deploy/manifests/custom-metrics-config-map.yaml
master:
[root@master k8s-prometheus-adapter]# mv custom-metrics-apiserver-deployment.yaml{,.bak}
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/v0.2.2/deploy/manifests/custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]# vim custom-metrics-apiserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
      name: custom-metrics-apiserver
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-apiserver
        image: directxman12/k8s-prometheus-adapter-amd64
        args:
        - /adapter
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/serving.crt
        - --tls-private-key-file=/var/run/serving-cert/serving.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus.prom.svc:9090/
        - --metrics-relist-interval=1m
        - --v=10
        - --config=/etc/adapter/config.yaml
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
        - mountPath: /etc/adapter/
          name: config
          readOnly: true
        - mountPath: /tmp
          name: tmp-vol
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs
      - name: config
        configMap:
          name: adapter-config
      - name: tmp-vol
        emptyDir: {}
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/v0.2.2/deploy/manifests/custom-metrics-config-map.yaml
[root@master k8s-prometheus-adapter]# vim custom-metrics-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: prom
data:
  config.yaml: |
    rules:
    - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
      seriesFilters: []
      resources:
        overrides:
          namespace:
            resource: namespace
          pod_name:
            resource: pod
      name:
        matches: ^container_(.*)_seconds_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[5m])) by (<<.GroupBy>>)
    - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
      seriesFilters:
      - isNot: ^container_.*_seconds_total$
      resources:
        overrides:
          namespace:
            resource: namespace
          pod_name:
            resource: pod
      name:
        matches: ^container_(.*)$
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters:
      - isNot: .*_total$
      resources:
        template: <<.Resource>>
      name:
        matches: ""
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters:
      - isNot: .*_seconds_total
      resources:
        template: <<.Resource>>
      name:
        matches: ^(.*)_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters: []
      resources:
        template: <<.Resource>>
      name:
        matches: ^(.*)_seconds_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)
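Each rule tells the adapter which Prometheus series to expose (seriesQuery), how to map series labels onto Kubernetes resources (resources), how to rename the series (name), and which PromQL to run for the value (metricsQuery). As a worked reading of the second rule (an illustration, not output from this cluster): a series like container_memory_usage_bytes{namespace="default",pod_name="myapp-..."} matches ^container_(.*)$, so it is exposed as the custom metric memory_usage_bytes on the Pod resource, with its value computed as sum(container_memory_usage_bytes{...,container_name!="POD"}) by (pod_name).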
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-config-map.yaml
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]# kubectl get cm -n prom
NAME DATA AGE
adapter-config 1 11m
prometheus-config 1 1d
[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/custom-metrics-apiserver-84b6dbb4b4-qf697 1/1 Running 0 4m
pod/kube-state-metrics-58dffdf67d-t2c4d 1/1 Running 0 1d
pod/prometheus-node-exporter-m5mkq 1/1 Running 0 2d
pod/prometheus-node-exporter-r9m9l 1/1 Running 0 2d
pod/prometheus-node-exporter-tbd2r 1/1 Running 0 2d
pod/prometheus-server-7c8554cf-ddl8n 1/1 Running 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/custom-metrics-apiserver ClusterIP 10.106.178.196 <none> 443/TCP 1d
service/kube-state-metrics ClusterIP 10.108.99.47 <none> 8080/TCP 1d
service/prometheus NodePort 10.101.97.199 <none> 9090:30090/TCP 2d
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 2d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 2d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/custom-metrics-apiserver 1 1 1 1 4m
deployment.apps/kube-state-metrics 1 1 1 1 1d
deployment.apps/prometheus-server 1 1 1 1 2d
NAME DESIRED CURRENT READY AGE
replicaset.apps/custom-metrics-apiserver-84b6dbb4b4 1 1 1 4m
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 1 1d
replicaset.apps/prometheus-server-65f5d59585 0 0 0 2d
replicaset.apps/prometheus-server-7c8554cf 1 1 1 2d
[root@master k8s-prometheus-adapter]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1 #the newly appeared API group
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@master ~]# kubectl proxy --port=8080
[root@master k8s-prometheus-adapter]# curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1
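The root listing enumerates every metric the adapter currently exposes; an individual metric can then be pulled with the namespaces/<ns>/pods/*/<metric> pattern, e.g. (assuming http_requests shows up in the listing):
[root@master k8s-prometheus-adapter]# curl 'http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests'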
[root@master k8s-prometheus-adapter]# cp ../../grafana.yaml .
[root@master k8s-prometheus-adapter]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        #- name: INFLUXDB_HOST
        #  value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master k8s-prometheus-adapter]# kubectl apply -f grafana.yaml
[root@master k8s-prometheus-adapter]# kubectl get pods -n prom
NAME READY STATUS RESTARTS AGE
custom-metrics-apiserver-84b6dbb4b4-qf697 1/1 Running 0 19m
kube-state-metrics-58dffdf67d-t2c4d 1/1 Running 0 2d
monitoring-grafana-ffb4d59bd-qfszk 1/1 Running 0 10s
prometheus-node-exporter-m5mkq 1/1 Running 0 2d
prometheus-node-exporter-r9m9l 1/1 Running 0 2d
prometheus-node-exporter-tbd2r 1/1 Running 0 2d
prometheus-server-7c8554cf-ddl8n 1/1 Running 0 2d
[root@master k8s-prometheus-adapter]# kubectl get svc -n prom
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
custom-metrics-apiserver ClusterIP 10.106.178.196 <none> 443/TCP 1d
kube-state-metrics ClusterIP 10.108.99.47 <none> 8080/TCP 2d
monitoring-grafana NodePort 10.108.72.239 <none> 80:32169/TCP 43s
prometheus NodePort 10.101.97.199 <none> 9090:30090/TCP 2d
prometheus-node-exporter ClusterIP None <none> 9100/TCP 2d
Access 172.20.0.66:32169 from a browser on the host machine.

Click Configuration -- Data Sources -- Add data source; choose Prometheus as the Type and fill the URL with the prometheus Service's in-cluster name plus port (http://prometheus.prom.svc:9090), so Grafana talks to Prometheus Pod-to-Pod;

Click Dashboards and import the bundled Prometheus Stats and Prometheus 2.0 Stats templates;

Click Dashboards -- Home -- Prometheus 2.0 Stats

Ready-made Kubernetes dashboards can be searched for at https://grafana.com/grafana/dashboards?dataSource=prometheus&search=kubernetes;
The one I downloaded is https://grafana.com/grafana/dashboards/6417; click to download its JSON;
Click Dashboard -- Home -- Import dashboard -- Upload .json File, pick the downloaded template JSON, and select the Prometheus data source;

Click Import

master:
[root@master k8s-prometheus-adapter]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deploy-5d9c6985f5-7sbdk 1/1 Running 0 22d
myapp-deploy-5d9c6985f5-rcxvj 1/1 Running 0 22d
myapp-deploy-5d9c6985f5-ssdf6 1/1 Running 0 22d
[root@master k8s-prometheus-adapter]# kubectl explain hpa
[root@master k8s-prometheus-adapter]# kubectl explain hpa.spec
[root@master k8s-prometheus-adapter]# kubectl explain hpa.spec.scaleTargetRef
[root@master k8s-prometheus-adapter]# kubectl explain hpa.spec.scaleTargetRef.name
[root@master k8s-prometheus-adapter]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@master k8s-prometheus-adapter]# cd /root/manifests/
[root@master manifests]# kubectl delete -f deploy-demo.yaml
[root@master manifests]# kubectl get pods
[root@master manifests]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
[root@master manifests]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-5lprz 1/1 Running 0 15s
[root@master manifests]# kubectl describe pods myapp-6985749785-5lprz
QoS Class: Guaranteed #requests equal limits for every resource, so the Pod gets the Guaranteed QoS class
[root@master manifests]# kubectl autoscale --help
[root@master manifests]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
[root@master manifests]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp 0%/60% 1 8 1 20m
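kubectl autoscale simply creates an autoscaling/v1 HorizontalPodAutoscaler object; a declarative equivalent of the command above would look like this (a sketch, not applied in this walkthrough):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 60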
[root@master manifests]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 78d
myapp ClusterIP 10.104.159.45 <none> 80/TCP 30m
[root@master manifests]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
[root@master manifests]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 78d
myapp NodePort 10.104.159.45 <none> 80:30807/TCP 32m
[root@master manifests]# yum -y install httpd-tools
[root@master manifests]# ab -c 1000 -n 500000 http://172.20.0.66:30807/index.html
[root@master manifests]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 02 Sep 2020 21:54:34 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 40% (10m) / 60%
Min replicas: 1
Max replicas: 8
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
[root@master manifests]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)
myapp-6985749785-5lprz 47m 4Mi
[root@master manifests]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 02 Sep 2020 21:54:34 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 16% (8m) / 60%
Min replicas: 1
Max replicas: 8
Deployment pods: 2 current / 2 desired #has scaled up to 2
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 34s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@master manifests]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-5lprz 1/1 Running 0 42m
myapp-6985749785-nwh5x 1/1 Running 0 1m
[root@master manifests]# cd metrics/metrics-server/
[root@master metrics-server]# vim hpa-v2-demo.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 55
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi
[root@master metrics-server]# kubectl delete hpa myapp
[root@master metrics-server]# kubectl apply -f hpa-v2-demo.yaml
[root@master metrics-server]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp-hpa-v2 Deployment/myapp 4759552/50Mi, 0%/55% 1 10 1 9m
[root@master metrics-server]# ab -c 1000 -n 500000 http://172.20.0.66:30807/index.html
[root@master manifests]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Wed, 02 Sep 2020 23:04:56 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource memory on pods: 4063232 / 50Mi
resource cpu on pods (as a percentage of request): 89% (44m) / 55%
Min replicas: 1
Max replicas: 10
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 2m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@master manifests]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-9zzkl 1/1 Running 0 9m
myapp-6985749785-j6k6q 1/1 Running 0 3m
[root@master manifests]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp-hpa-v2 Deployment/myapp 4118528/50Mi, 91%/55% 1 10 4 9m
[root@master metrics-server]# cp hpa-v2-demo.yaml hpa-v2-custom.yaml
[root@master metrics-server]# vim hpa-v2-custom.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 800m
[root@master metrics-server]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1 #custom metrics API
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1 #core metrics API, served by the third-party metrics-server extension API server
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1