Deploying Prometheus (with Alibaba Cloud Prometheus remote storage), Grafana, and kafka-exporter using Helm
Background
Because my server resources are limited, I purchased the Alibaba Cloud "Managed Service for Prometheus" (可观测监控Prometheus版) to store the Prometheus data remotely.
Purchasing the Alibaba Cloud Managed Service for Prometheus
1. Purchase the "Prometheus实例 for 通用" (Prometheus instance for general purpose) edition.
2. Create a RAM user and grant it the required permissions.
3. Create a remote write configuration and obtain the read/write URLs.
Steps 2 and 3 above are covered in the Alibaba Cloud documentation: https://help.aliyun.com/zh/prometheus/user-guide/create-a-prometheus-instance-for-remote-storage
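The remote write/read endpoints authenticate with plain HTTP Basic Auth, using credentials tied to the RAM user created above (in my setup, its AccessKey pair; check the console output from step 3 for the exact values). A minimal sketch of the `Authorization` header Prometheus will end up sending; the credentials below are placeholders:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Auth header value that a remote_write
    basic_auth block translates into: base64("username:password")."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials; substitute the values obtained for your RAM user.
print(basic_auth_header("my-access-key-id", "my-access-key-secret"))
```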
Deploying Prometheus
mkdir -p /data/yaml/basic-monitor/prometheus && cd /data/yaml/basic-monitor/prometheus
kubectl create ns basic-monitor
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm pull prometheus-community/prometheus
tar zxvf prometheus-25.4.0.tgz
vi my-values.yaml
server:
  resources:
    limits:
      cpu: 4
      memory: 16Gi
    requests:
      cpu: 1
      memory: 2Gi
  service:
    type: NodePort
    nodePort: "30009"
  persistentVolume:
    storageClass: nfs-client
    size: 80Gi
  retention: "1d"
  remoteWrite:
    - url: "http://cn-shenzhen-intranet.arms.aliyuncs.com/prometheus/xxxxxxxx/xxxxxx/cn-shenzhen/api/v3/write" # replace with the remote write URL of your own Alibaba Cloud Prometheus instance
      basic_auth:
        username: xxxxxxxxxxxxxxxxxxxxx # replace with your own credentials
        password: xxxxxxxxxxxxxxxxxxxxx # replace with your own credentials
  remoteRead:
    - url: "http://cn-shenzhen-intranet.arms.aliyuncs.com:9090/api/v1/prometheus/xxxxxxxxxxxxx/xxxxxxxxxxxxxxx/xxxxxxxxxxxxxxx/cn-shenzhen/api/v1/read" # replace with the remote read URL of your own Alibaba Cloud Prometheus instance
      read_recent: true
alertmanager:
  enabled: false
kube-state-metrics:
  enabled: false
prometheus-node-exporter:
  enabled: false
prometheus-pushgateway:
  service:
    type: NodePort
    nodePort: 30011
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 1
      memory: 2Gi
  persistentVolume:
    enabled: true
    size: 30Gi
    storageClass: nfs-client
serverFiles:
  prometheus.yml: # adjust the scrape configs below to your own environment
    scrape_configs:
      - job_name: prod-kafka
        static_configs:
          - targets:
              - prometheus-kafka-exporter.basic-monitor:9308
      - job_name: "doris_job"
        static_configs:
          - targets: ["172.18.255.118:8030","172.18.255.120:8030"]
            labels:
              group: fe
          - targets: ["172.18.255.119:8040","172.18.255.120:8040","172.18.255.121:8040"]
            labels:
              group: be
      - job_name: "pushgateway"
        honor_labels: true
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
            action: keep
            regex: basic-monitor;prometheus-prometheus-pushgateway
helm -n basic-monitor install prometheus -f my-values.yaml ./prometheus --dry-run
helm -n basic-monitor install prometheus -f my-values.yaml ./prometheus
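The `keep` rule in the pushgateway job works by joining the values of the `source_labels` with `;` (the default separator) and keeping only targets whose joined string fully matches `regex`, so discovery is restricted to the pushgateway Service in the basic-monitor namespace. A minimal Python sketch of that behaviour; `keep_target` is a hypothetical helper and the label values are illustrative:

```python
import re

def keep_target(labels: dict, source_labels: list, regex: str, separator: str = ";") -> bool:
    """Mimic a Prometheus `keep` relabel rule: join the source label values
    with the separator and keep the target only on a full regex match."""
    joined = separator.join(labels.get(name, "") for name in source_labels)
    return re.fullmatch(regex, joined) is not None

rule_sources = ["__meta_kubernetes_namespace", "__meta_kubernetes_service_name"]
rule_regex = "basic-monitor;prometheus-prometheus-pushgateway"

# The pushgateway endpoints in basic-monitor are kept...
print(keep_target({"__meta_kubernetes_namespace": "basic-monitor",
                   "__meta_kubernetes_service_name": "prometheus-prometheus-pushgateway"},
                  rule_sources, rule_regex))  # True
# ...while every other discovered endpoint is dropped.
print(keep_target({"__meta_kubernetes_namespace": "default",
                   "__meta_kubernetes_service_name": "kubernetes"},
                  rule_sources, rule_regex))  # False
```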
Deploying Grafana
mkdir -p /data/yaml/basic-monitor/grafana
cd /data/yaml/basic-monitor/grafana
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/grafana
tar zxvf grafana-9.5.2.tgz
vi my-values.yaml
global:
  storageClass: "nfs-client"
persistence:
  size: 50Gi
service:
  type: NodePort
  nodePorts:
    grafana: 30010
admin:
  password: "admin12345"
grafana:
  resources:
    limits:
      cpu: 4
      memory: 8Gi
    requests:
      cpu: 500m
      memory: 4Gi
helm -n basic-monitor install grafana -f my-values.yaml ./grafana --dry-run
helm -n basic-monitor install grafana -f my-values.yaml ./grafana
Note: Grafana can also read the data stored in the Alibaba Cloud Managed Service for Prometheus directly through the remote address; the corresponding address is shown in the Alibaba Cloud console.
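To wire that remote address up without clicking through the UI, a data source can also be created via Grafana's HTTP API (`POST /api/datasources`). A sketch of the request payload only; the data source name, URL, and credentials below are placeholders to be replaced with the values from the console:

```python
import json

# Payload for Grafana's POST /api/datasources endpoint, pointing Grafana
# straight at the Alibaba Cloud Prometheus instance. All values below are
# placeholders, not real endpoints or credentials.
payload = {
    "name": "aliyun-prometheus",   # hypothetical data source name
    "type": "prometheus",
    "access": "proxy",
    "url": "http://cn-shenzhen.arms.aliyuncs.com/prometheus/xxxxxxxx/xxxxxx/cn-shenzhen",
    "basicAuth": True,
    "basicAuthUser": "xxxxxxxxxxxxxxxxxxxxx",
    "secureJsonData": {"basicAuthPassword": "xxxxxxxxxxxxxxxxxxxxx"},
}
print(json.dumps(payload, indent=2))
```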
Deploying kafka-exporter
mkdir -p /data/yaml/basic-monitor/prometheus-kafka-exporter
cd /data/yaml/basic-monitor/prometheus-kafka-exporter
helm pull prometheus-community/prometheus-kafka-exporter
tar zxvf prometheus-kafka-exporter-2.7.0.tgz
vi my-values.yaml
kafkaServer:
  - kafka.basic-service:9092 # Kafka in my environment currently lives in the basic-service namespace
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: ""
  prometheus.io/port: "9308"
helm -n basic-monitor install prometheus-kafka-exporter -f my-values.yaml ./prometheus-kafka-exporter --dry-run
helm -n basic-monitor install prometheus-kafka-exporter -f my-values.yaml ./prometheus-kafka-exporter
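Once the exporter is running, the `prod-kafka` job configured earlier scrapes plain-text metrics in the Prometheus exposition format from port 9308. A small Python sketch of parsing one such line; the metric name `kafka_consumergroup_lag` matches what this exporter typically exposes, but the label values and sample value here are invented for illustration:

```python
import re

# One sample line in Prometheus exposition format, roughly as kafka-exporter
# serves it on :9308 (labels and value invented for illustration).
sample = 'kafka_consumergroup_lag{consumergroup="prod-app",partition="0",topic="orders"} 42'

match = re.fullmatch(r'(\w+)\{([^}]*)\}\s+(\S+)', sample)
name, raw_labels, value = match.groups()
labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))

print(name)             # kafka_consumergroup_lag
print(labels["topic"])  # orders
print(float(value))     # 42.0
```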
References
https://artifacthub.io/packages/helm/prometheus-community/prometheus
https://help.aliyun.com/zh/prometheus/user-guide/create-a-prometheus-instance-for-remote-storage
https://artifacthub.io/packages/helm/prometheus-community/prometheus-kafka-exporter#configuration