Ceph Cluster Deploy
A minimal deployment for experimentation and training
We will deploy a Ceph cluster consisting of three nodes: one for the Ceph monitor and manager, and two for OSD daemons.
Ceph is an open-source, distributed storage system. With proper configuration, it can provide
- HA (Highly Available) storage
- Storage on commodity hardware that scales beyond what a single storage server can provide
- Multiple interfaces: block, file, and object storage
- Multi-tenancy
- Tightly-managed, extendable large Persistent Volumes (PVs) with Rook-Ceph
The following diagram shows the nodes for the Ceph deployment.

Preparation for the deployment:
We set up the following four things on each node (a minimal sketch of these steps is shown after the list):
- A Linux user account cluster-admin with sudo permissions
- /etc/hosts entries
- docker.io installation
- Password-less SSH from the ceph-mon node to the cluster-admin account (not strictly necessary)
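A minimal sketch of these preparation steps on Ubuntu follows; the cluster-admin user name comes from the list above, while the host names, IP addresses, and exact package commands are assumptions to adapt to your environment.

```
# Run on every node: create the admin user with passwordless sudo.
sudo useradd -m -s /bin/bash cluster-admin
echo "cluster-admin ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cluster-admin

# Run on every node: /etc/hosts entries (IP addresses are examples).
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.11 experiment-ceph-mon
10.0.0.12 experiment-ceph-osd1
10.0.0.13 experiment-ceph-osd2
EOF

# Run on every node: container runtime used by cephadm.
sudo apt-get update && sudo apt-get install -y docker.io

# Run on the ceph-mon node only: password-less SSH to cluster-admin on the other nodes.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id cluster-admin@experiment-ceph-osd1
ssh-copy-id cluster-admin@experiment-ceph-osd2
```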
Ceph Deployment
First, we need to decide which version of Ceph to install. The releases are listed at https://docs.ceph.com/en/latest/releases/index.html. In this deployment, we will use the Pacific 16.2.10 release.
Then we will use cephadm to install it, as sketched below.
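The following is a sketch of this step based on the standard cephadm workflow; the monitor IP address is an example, and the image tag pins the 16.2.10 release.

```
# On the ceph-mon node: fetch the cephadm script for the Pacific release.
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod +x cephadm

# Install cephadm and the ceph CLI packages for Pacific.
sudo ./cephadm add-repo --release pacific
sudo ./cephadm install
sudo ./cephadm install ceph-common

# Bootstrap the first monitor; replace 10.0.0.11 with the ceph-mon node's IP.
sudo cephadm --image quay.io/ceph/ceph:v16.2.10 bootstrap --mon-ip 10.0.0.11 | tee bootstrap_output.txt
```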
The last command in the steps above bootstraps the first monitor node. The bootstrap output (saved here as bootstrap_output.txt) includes information about the Ceph Dashboard.
Ceph Dashboard is now available at:
URL: https://experiment-ceph-mon:8443/
User: admin
Password: m3wm9y0bta
Now, we can access it through a web browser. After login, we will see this portal.

Cluster status is ‘HEALTH_WARN’ because the number of OSDs is less than the default pool replication size. Since we will have only two OSDs, we want to change the default replication size to two rather than three.
The following code snippet shows how to do that.
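A minimal sketch of the change, run on the ceph-mon node:

```
# Lower the default replication size for new pools from three to two.
sudo ceph config set global osd_pool_default_size 2

# Verify the new default.
sudo ceph config get mon osd_pool_default_size
```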
We also change the placement of the monitor (mon) and manager (mgr). We want to have just one monitor and one manager. Without this change, additional mon/mgr daemons will be deployed when we add hosts.
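A sketch of checking the default placement and then pinning one mon and one mgr to the ceph-mon node (the host name is the one used during bootstrap):

```
# Show the current service placement; by default mon is scheduled with count 5 and mgr with count 2.
sudo ceph orch ls

# Place a single mon and a single mgr on the ceph-mon node.
sudo ceph orch apply mon --placement="1 experiment-ceph-mon"
sudo ceph orch apply mgr --placement="1 experiment-ceph-mon"
```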

As seen above, by default, 5 monitors and 2 managers will be deployed. The last two commands change these numbers to one, and both the mon and the mgr will be placed on the ceph-mon node.
As you can see, Grafana is deployed along with node-exporter and Alertmanager. We can access the Grafana dashboards at https://experiment-ceph-mon:3000/. No login is needed for the Grafana site.

Now, we are ready to add the OSD nodes and OSD disks. Before we add nodes to the Ceph cluster, we need to set up password-less SSH for the root account: the public key at /etc/ceph/ceph.pub needs to be appended to /root/.ssh/authorized_keys on each new host. This was already done for the ceph-mon node when we ran the bootstrap command.
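A sketch of distributing the cluster SSH key, run from the ceph-mon node (the OSD host names are the example names used earlier):

```
# Append the cluster public key to root's authorized_keys on each OSD node.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@experiment-ceph-osd1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@experiment-ceph-osd2
```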
Now, we run the following commands to add the OSD nodes and OSD drives.
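A sketch of those commands; the host names, IP addresses, and the device path /dev/sdb are assumptions.

```
# Add the two OSD hosts to the cluster.
sudo ceph orch host add experiment-ceph-osd1 10.0.0.12
sudo ceph orch host add experiment-ceph-osd2 10.0.0.13

# List the devices cephadm sees as available for OSDs.
sudo ceph orch device ls

# Create one OSD on each data disk.
sudo ceph orch daemon add osd experiment-ceph-osd1:/dev/sdb
sudo ceph orch daemon add osd experiment-ceph-osd2:/dev/sdb
```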
Here are screenshots from the first set of commands.

Now, we have a healthy Ceph cluster, ready for configuring storage services.
Final status of the Ceph cluster
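The final status can also be checked from the command line, for example:

```
# Overall health, number of OSDs, and pool/PG summary.
sudo ceph -s

# Daemon placement per host and the OSD tree.
sudo ceph orch ps
sudo ceph osd tree
```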