How to Install and Configure a Ceph Storage Cluster?


Ceph is an open-source software-defined distributed storage system. Ceph allows you to create fault-tolerant distributed data storage available over TCP/IP. It offers several storage access interfaces: object, block, and file. In this article, we’ll show how to install and configure a Ceph cluster using the Quincy release (17).

For the minimum configuration, it is recommended to use three hosts (nodes), each with 2 CPUs and 4 GB of RAM. In this example, I’m using Rocky Linux 8.

The ceph cluster storage system consists of several daemons:

  • MON (Ceph Monitor) – cluster monitor that tracks the cluster state. All nodes in the cluster report information about their state to the monitors;
  • OSD (Object Storage Device) – cluster element that stores data and processes client requests. Data in the OSD is stored in blocks;
  • MDS (Metadata Server Daemon) – metadata server. Required for the CephFS file system (not used in this example);
  • MGR (Manager Daemon) – manager service that provides additional monitoring and hosts modules such as the dashboard and the orchestrator.


You must first prepare all the cluster nodes:

  • Update operating system and packages to the latest versions
  • Configure time synchronization using Chrony
  • Create DNS records (or /etc/hosts entries) for the hosts. Set each host’s hostname to the short name, not the FQDN (see the preparation sketch after this list).
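
Here is a minimal preparation sketch for one node (the host names and IP addresses are examples and must be adjusted to your environment):

# dnf -y update
# dnf -y install chrony
# systemctl enable --now chronyd
# hostnamectl set-hostname ceph-01
# cat >> /etc/hosts <<EOF
192.168.10.23 ceph-01
192.168.10.24 ceph-02
192.168.10.25 ceph-03
EOF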

Then install Python 3 and Podman on all servers (the Ceph services run in containers). You can use Docker instead of Podman:

# dnf install python39
# python3 --version

Python 3.9.7

# dnf install podman
# podman -v

podman version 4.0.2

Install the cephadm tool, which we will use to create and configure the cluster:

# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
# chmod +x cephadm
# ./cephadm add-repo --release quincy

Writing repo to /etc/yum.repos.d/ceph.repo...
Enabling EPEL...
Completed adding repo.

# ./cephadm install

Installing packages ['cephadm']...

To create a Ceph cluster, the cephadm bootstrap command is used. It starts the Ceph services and the first monitor on the specified node, creates the cluster, and generates the keys and the configuration file.

In a production environment, it is recommended to use a separate network for replication traffic between OSDs. Therefore, you must first configure the network interfaces on the hosts.
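For example, a dedicated interface for the cluster (replication) network could be configured with NetworkManager like this (a sketch; the interface name ens224 and the address are assumptions for this example):

# nmcli con mod ens224 ipv4.method manual ipv4.addresses 192.168.1.23/24
# nmcli con up ens224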

# cephadm bootstrap --mon-ip 192.168.10.23 --cluster-network 192.168.1.0/24

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.0.2 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: GUID
Adding host ceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Ceph Dashboard will also be launched:

Ceph Dashboard is now available at:
URL: https://ceph-01:8443/
User: admin
Password: password
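
The generated password should be changed. Once the ceph CLI is available on the node (ceph-common is installed below), you can set a new dashboard password for the admin user from a file, for example:

# echo 'NewStrongPassword' > /tmp/dashboard_pass
# ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
# rm -f /tmp/dashboard_pass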

List running containers on the host:

# podman ps

quay.io/ceph/ceph:v17
quay.io/ceph/ceph@sha256:
quay.io/prometheus/node-exporter:v1.3.1
quay.io/ceph/ceph-grafana:8.3.5
quay.io/prometheus/alertmanager:v0.23.0
quay.io/prometheus/prometheus:v2.33.4
quay.io/ceph/ceph:v17

Check cluster status:

# cephadm install ceph-common
# ceph -s
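
A few other commands that may be useful for checking the cluster at this point (they only read state and are safe to run):

# ceph health detail
# ceph orch ls
# ceph orch ps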

In this example, we use the first node for management tasks, so you need to copy the cluster’s public SSH key (/etc/ceph/ceph.pub) to the other nodes:

# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-02
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-03

Add hosts to the ceph cluster:

# ceph orch host add ceph-02
# ceph orch host add ceph-03

List hosts:

# ceph orch host ls
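
Optionally, you can label additional hosts as admin hosts so that cephadm copies the ceph.conf and the admin keyring to them, for example:

# ceph orch host label add ceph-02 _admin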

After some time, Ceph will start the Monitor and Mgr services on some of the hosts according to its placement policy.

You can explicitly specify the hosts that should run the Ceph monitors:

# ceph orch apply mon --placement="2 ceph-01 ceph-02"
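
The mgr daemons can be pinned in the same way (the host names are the ones used in this example):

# ceph orch apply mgr --placement="2 ceph-01 ceph-02"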

Check cluster status and active roles:

# ceph -s

Now you can start the OSD services and specify the disks that can be used for ceph storage:

# ceph orch daemon add osd ceph-01:/dev/sdb,/dev/sdc,/dev/sdd
# ceph orch daemon add osd ceph-02:/dev/sdb,/dev/sdc,/dev/sdd
# ceph orch daemon add osd ceph-03:/dev/sdb,/dev/sdc,/dev/sdd
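
To check which disks the orchestrator sees on each host, and, as an alternative to adding disks one by one, to let cephadm automatically create OSDs on every available unused device, you can use:

# ceph orch device ls
# ceph orch apply osd --all-available-devices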

In a production environment, it is recommended to move the DB and WAL (write-ahead log) to an SSD. Use the following command:

# ceph orch daemon add osd host:data_devices=/dev/sdb,/dev/sdc,db_devices=/dev/sdd,osds_per_device=2

In this example, data_devices are regular HDDs and db_devices is an SSD device.
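
The same layout can also be described declaratively with an OSD service specification and applied through the orchestrator. A sketch (the service_id, the host pattern, and the rotational filters are example values):

# cat > /root/osd_spec.yml <<EOF
service_type: osd
service_id: osd_hdd_ssd_db
placement:
  host_pattern: 'ceph-*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
# ceph orch apply -i /root/osd_spec.yml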

Check OSD status:

# ceph osd tree

A separate OSD daemon is created for each disk.

If the cluster status shows a warning like this:

health: HEALTH_WARN
1 stray daemon(s) not managed by cephadm

Restart the ceph-mgr service on this node:

# ceph orch daemon restart mgr.ceph-01.dseows
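
After all OSDs are up, you can check the raw and per-pool capacity of the cluster:

# ceph df
# ceph osd df tree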

This completes the Ceph cluster setup, and you can connect your clients using the following protocols (a brief RBD example follows the list):

  • Block access (RBD, Rados Block Device);
  • File (CephFS);
  • Object – S3 (RadosGW).
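
For example, block access (RBD) could be prepared on the cluster side like this (the pool and image names are arbitrary):

# ceph osd pool create rbd01
# rbd pool init rbd01
# rbd create rbd01/disk1 --size 10G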