Reconfiguring the ceph dashboard

Environment:
OS: CentOS 7
Ceph: 14.2.2 (Nautilus)

1. Disable the dashboard module
ceph mgr module disable dashboard


2. Remove the old configuration
ceph config rm mgr mgr/dashboard/server_addr
ceph config rm mgr mgr/dashboard/server_port
ceph config rm mgr mgr/dashboard/ssl
ceph config rm mgr mgr/dashboard/ssl_certificate
ceph config rm mgr mgr/dashboard/ssl_server_key
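To double-check that nothing dashboard-related is left behind, the remaining options can be listed; this is a quick verification sketch, not part of the original procedure (any SSL certificate from the earlier setup may also live in the config-key store, hence the second command):

ceph config dump | grep -i dashboard
ceph config-key ls | grep -i dashboard

An empty result from the first command means all of the options above are gone.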


3. Check the mgr services
[root@master ~]# ceph mgr services
{}


4. Check the currently enabled modules

[root@master ~]# ceph mgr module ls |head -n 30
{
    "always_on_modules": [
        "balancer",
        "crash",
        "devicehealth",
        "orchestrator_cli",
        "progress",
        "rbd_support",
        "status",
        "volumes"
    ],
    "enabled_modules": [
        "iostat",
        "restful"
    ]

The dashboard is no longer listed among the enabled modules.
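Instead of truncating the JSON with head, the enabled list can also be pulled out directly; a small sketch, assuming the stock Python 2 interpreter that ships with CentOS 7:

ceph mgr module ls | python -c 'import sys, json; print(json.load(sys.stdin)["enabled_modules"])'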


############################# Redeploy ########################

1. Enable the dashboard module
[root@master ceph]# cd /opt/ceph
[root@master ceph]# ceph mgr module enable dashboard
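In Nautilus the dashboard serves HTTPS, so it needs a certificate. Here it started straight away, but if the module instead logs an error about a missing SSL certificate, a self-signed one can be generated and the module bounced; these are standard dashboard commands, shown only as a fallback:

ceph dashboard create-self-signed-cert
ceph mgr module disable dashboard
ceph mgr module enable dashboard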


2. Check the mgr services

[root@master ceph]# ceph mgr services
{
    "dashboard": "https://node2:8443/"
}
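Dashboard user accounts are stored in the mgr's own key store, so they should survive the disable/enable cycle and the old login should still work. If a fresh admin account is needed, it can be created from the CLI; the username and password below are placeholders, and note that 14.2.2 still accepts the password inline, while later Nautilus point releases require reading it from a file with -i:

ceph dashboard ac-user-create admin 'Secret123' administrator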


Simply running ceph mgr module enable dashboard opens port 8443 on each mgr node by default, so it seems none of the following configuration is needed here:
[root@master mgr-dashboard]# ceph config set mgr mgr/dashboard/server_addr 192.168.1.108
[root@master mgr-dashboard]# ceph config set mgr mgr/dashboard/server_port 8443
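For reference, the same options can also be scoped to a single mgr daemon by putting its name into the key, which is useful when only one node should listen on a specific address; a sketch reusing the example values above:

ceph config set mgr mgr/dashboard/node1/server_addr 192.168.1.108
ceph config set mgr mgr/dashboard/node1/server_port 8443

With SSL enabled, the HTTPS port is normally governed by mgr/dashboard/ssl_server_port, which already defaults to 8443.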


3. Check the cluster status

[root@master ~]# ceph -s
  cluster:
    id:     1508a2da-5991-487a-836c-d6e6527b1dc7
    health: HEALTH_WARN
            mons master,node1,node2 are low on available space
 
  services:
    mon: 3 daemons, quorum node1,node2,master (age 48s)
    mgr: node1(active, since 5m), standbys: master, node2
    osd: 3 osds: 3 up (since 45s), 3 in (since 2d)
 
  data:
    pools:   2 pools, 16 pgs
    objects: 19 objects, 37 MiB
    usage:   3.1 GiB used, 2.9 GiB / 6.0 GiB avail
    pgs:     16 active+clean


There are three mgr daemons here, with the active one currently on node1; if node1 goes down, the active mgr (and the dashboard with it) automatically fails over to another available node.
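The failover can be exercised by hand: failing the active mgr promotes a standby, and the dashboard URL reported by ceph mgr services moves with it (node names as in the output above):

ceph mgr fail node1
ceph mgr services

After a few seconds the URL should point at the new active mgr; the standby mgrs keep the port open and simply redirect requests to the active one.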

