Deploying Prometheus + Grafana on a single-node k8s cluster

Prometheus is an open-source monitoring system that collects and tracks all kinds of metrics over time, while Grafana turns the data Prometheus gathers into visual dashboards, making it easier for us to analyze how the data changes and to warn about and handle potential anomalies in the system.

For an operations engineer, monitoring is a skill we must master no matter what, so learning how to deploy and use Prometheus + Grafana is very important.

Today we will learn how to deploy Prometheus + Grafana on a single-node k8s cluster.
For deploying k8s itself, see the earlier posts:
《ubuntu22.04部署单机k8s》: https://www.cnblogs.com/OM-dyc/articles/18957330
《Ubuntu 20.04 下单机k8s的网络部署》: https://www.cnblogs.com/OM-dyc/articles/18959567

Part 1: Prometheus

1. First, create a dedicated namespace, monitoring, to deploy Prometheus in:
kubectl create namespace monitoring
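A quick sanity check that the namespace exists (this assumes kubectl is already configured for your cluster):

```shell
# Should list the namespace with STATUS "Active"
kubectl get namespace monitoring
```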

2. Create the Prometheus config file:
vim prometheus-config.yaml

apiVersion: v1
kind: ConfigMap  # the resource type is ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']

      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics

      - job_name: 'kubernetes-node-exporter'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: node-exporter;metrics
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - source_labels: [__meta_kubernetes_node_name]
            target_label: instance

      - job_name: 'process-exporter'
        static_configs:
          - targets: ['process-exporter.monitoring.svc.cluster.local:9256']

      - job_name: 'kube-state-metrics'
        static_configs:
          - targets: ['kube-state-metrics.monitoring.svc.cluster.local:8080']
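
Worth noting how the kubernetes-nodes job above works: it does not scrape each kubelet directly. The relabel rules point every node target at kubernetes.default.svc:443 and rewrite its metrics path so the request is proxied through the API server. A minimal sketch of the path rewrite (the node name node1 is just a stand-in):

```shell
# The relabel replacement /api/v1/nodes/${1}/proxy/metrics captures the node
# name from __meta_kubernetes_node_name; for a node called "node1" it yields:
node_name="node1"
metrics_path="/api/v1/nodes/${node_name}/proxy/metrics"
echo "$metrics_path"   # /api/v1/nodes/node1/proxy/metrics
```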

3. Create node-exporter.yaml:
vim node-exporter.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
        - name: node-exporter
          image: quay.io/prometheus/node-exporter:v1.6.1
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
            - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)$
          ports:
            - name: metrics
              containerPort: 9100
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly: true
            - name: sys
              mountPath: /host/sys
              readOnly: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
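
Because the DaemonSet runs with hostNetwork: true, node-exporter listens on port 9100 of the node itself; once the pod is up you can spot-check it straight from the node (assumes curl is installed there):

```shell
# Fetch the first few metric lines directly from node-exporter
curl -s http://localhost:9100/metrics | head -n 5
```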

4. Create process-exporter.yaml:
vim process-exporter.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: process-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    process_names:
      - name: "{{.Matches}}"
        cmdline:
          - '.+'

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: process-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: process-exporter
  template:
    metadata:
      labels:
        app: process-exporter
    spec:
      containers:
        - name: process-exporter
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/ncabatoff/process-exporter:latest
          args:
            - --config.path=/etc/process-exporter/config.yaml
          ports:
            - containerPort: 9256
          volumeMounts:
            - name: config-volume
              mountPath: /etc/process-exporter
      volumes:
        - name: config-volume
          configMap:
            name: process-exporter-config
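
Note that the config above groups every process on the host (the cmdline regex .+ matches everything) and names each group after the regex match, which can generate a lot of series; for a single-node lab that is acceptable. After the matching Service has been created in a later step, you can spot-check the endpoint from inside the cluster with a throwaway pod (a sketch; the curlimages/curl image's entrypoint is curl, so only curl's flags follow the --):

```shell
# One-off curl pod; prints the metrics page, then the pod is deleted
kubectl run curl-test --rm -it --restart=Never -n monitoring \
  --image=curlimages/curl -- \
  -s http://process-exporter.monitoring.svc.cluster.local:9256/metrics
```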

5. Create kube-state-metrics.yaml:
vim kube-state-metrics.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - secrets
      - nodes
      - pods
      - services
      - resourcequotas
      - replicationcontrollers
      - limitranges
      - persistentvolumeclaims
      - persistentvolumes
      - namespaces
      - endpoints
    verbs: ["list", "watch"]
  - apiGroups: ["extensions"]
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs: ["list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - statefulsets
      - daemonsets
      - deployments
      - replicasets
    verbs: ["list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - cronjobs
      - jobs
    verbs: ["list", "watch"]
  - apiGroups: ["autoscaling"]
    resources:
      - horizontalpodautoscalers
    verbs: ["list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
  - kind: ServiceAccount
    name: kube-state-metrics
    namespace: monitoring

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
        - name: kube-state-metrics
          image: registry.cn-wulanchabu.aliyuncs.com/moge1/kube-state-metrics:v2.3.0
          ports:
            - containerPort: 8080

6. Apply the manifests

kubectl apply -f prometheus-config.yaml
kubectl apply -f node-exporter.yaml
kubectl apply -f process-exporter.yaml
kubectl apply -f kube-state-metrics.yaml
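
Before moving on to the Services, make sure every pod reaches the Running state:

```shell
# node-exporter, process-exporter and kube-state-metrics should all be Running
kubectl get pods -n monitoring -o wide
```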

7. Create the Service manifests

(1) Create the kube-state-metrics Service
vim kube-state-metrics-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  selector:
    app: kube-state-metrics  # must match the Deployment's labels
  ports:
    - port: 8080
      targetPort: 8080

(2) Create the process-exporter Service
vim process-exporter-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: process-exporter
  namespace: monitoring
spec:
  selector:
    app: process-exporter  # must match the Deployment's labels
  ports:
    - port: 9256
      targetPort: 9256

(3) Create the node-exporter Service
vim node-exporter-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  ports:
    - name: metrics
      port: 9100
      targetPort: 9100
  selector:
    app: node-exporter

(4) Create the kubernetes Service (note: every cluster already has a kubernetes Service in the default namespace; only apply this manifest if yours is missing, and make sure clusterIP matches your cluster's actual value)
vim kubernetes-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  clusterIP: 10.244.0.1  # must match your cluster's existing kubernetes Service IP (check with: kubectl get svc kubernetes); on kubeadm setups this is often 10.96.0.1
  ports:
    - name: https
      port: 443
      targetPort: 6443
  selector:
    component: kube-apiserver

(5) Create the Prometheus RBAC objects
vim prometheus-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-api-access
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-api-access
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: prometheus-api-access
  apiGroup: rbac.authorization.k8s.io
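
With the RBAC objects applied, you can verify that the prometheus service account actually has the access the scrape configs need (kubectl auth can-i supports impersonating a service account):

```shell
# Both commands should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus
kubectl auth can-i get /metrics --as=system:serviceaccount:monitoring:prometheus
```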

(6) Create the Prometheus Deployment (this manifest also includes the NodePort Service)
vim prometheus-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port:   '9090'
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30005

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: quay.io/prometheus/prometheus:v2.47.0
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--web.console.libraries=/etc/prometheus/console_libraries"
            - "--web.console.templates=/etc/prometheus/consoles"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-config
        - name: prometheus-storage-volume
          emptyDir: {}

8. Apply the Services and deploy Prometheus

kubectl apply -f kube-state-metrics-service.yaml
kubectl apply -f process-exporter-service.yaml
kubectl apply -f node-exporter-service.yaml
kubectl apply -f kubernetes-service.yaml
kubectl apply -f prometheus-rbac.yaml
kubectl apply -f prometheus-deployment.yaml
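
A quick check that the Services exist and the NodePort was assigned:

```shell
# prometheus-service should show TYPE NodePort and PORT(S) 9090:30005/TCP
kubectl get svc -n monitoring
```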

9. Access Prometheus
http://<node-IP>:30005
Under Status → Targets, check each scrape job; if every target shows UP, the deployment succeeded
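The same check can be scripted against the Prometheus HTTP API (replace <node-IP> with your node's address; assumes jq is installed):

```shell
# Prints the number of unhealthy targets; 0 means everything is up
curl -s http://<node-IP>:30005/api/v1/targets \
  | jq '[.data.activeTargets[] | select(.health != "up")] | length'
```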

Congratulations! That completes the Prometheus deployment. Take a short break, and then let's deploy Grafana.

Part 2: Grafana

1. Create grafana.yaml

vim grafana.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-service.monitoring.svc.cluster.local:9090
        access: proxy
        isDefault: true

---

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30006

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/grafana/grafana:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources

2. Apply the manifest:
kubectl apply -f grafana.yaml
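
Wait for the rollout to complete before logging in:

```shell
# Blocks until the grafana Deployment reports all replicas available
kubectl rollout status deployment/grafana -n monitoring
```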

3. Log in to Grafana
http://<node-IP>:30006
On first login the username/password is admin/admin

Part 3: Connecting Prometheus and Grafana
Add a new data source in Grafana.
In the URL field, enter: http://prometheus-service.monitoring.svc.cluster.local:9090
This is the address provisioned in the Grafana ConfigMap above.
Then click the Save & test button at the bottom; if the test passes, the data source is configured.
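The connection can also be verified from the command line (replace <node-IP>; the default admin/admin credentials are assumed, so change them after first login):

```shell
# Grafana health endpoint: the response should contain "database": "ok"
curl -s http://<node-IP>:30006/api/health
# List data sources; the provisioned "Prometheus" entry should be present
curl -s -u admin:admin http://<node-IP>:30006/api/datasources
```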

With that, the setup is complete, and you are free to build whatever dashboards you want to see.

That's the end of today's walkthrough.

If you have any questions or suggestions, feel free to share them in the comments.

byebye~~

posted @ 2025-07-04 17:42  努力成为OM大师