MicroCeph: Setting up a multi-node Ceph cluster

MicroCeph is a utility maintained by Canonical that lets us deploy a Ceph cluster without dealing with all the configuration and commands usually required.

In this tutorial we're going to create a Ceph cluster across three servers, each containing one OSD, since Ceph requires at least three OSDs to work.

Prepare nodes

On all nodes we should install the MicroCeph snap package and hold it, so it doesn't get refreshed automatically.

sudo snap install microceph
sudo snap refresh --hold microceph
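
To verify the installation, we can check that the snap is present on each node (recent snapd versions also show the hold in the Notes column):

snap list microceph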

Prepare the cluster

On one of the nodes (node-1) we should bootstrap the Ceph cluster.

sudo microceph cluster bootstrap --public-network 0.0.0.0/0
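
After the bootstrap we can run microceph status on node-1; it should report a single cluster member with its services and, for now, no disks:

sudo microceph status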

Join the cluster

Executing the following command on the main node, we'll obtain a token that we'll use to join node-2 to the cluster.

sudo microceph cluster add node-2

Now on the node we want to join to the cluster (node-2), we should execute this command with the token we just obtained:

sudo microceph cluster join <token>

Repeat these steps on all nodes that should join the cluster.
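
Once every node has joined, we can list the cluster members from any of them:

sudo microceph cluster list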

Add storage

Using the following command we can see the disks that are attached to our node:

lsblk | grep -v loop
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda       8:0    0   400G  0 disk
├─sda1    8:1    0   399G  0 part /
├─sda14   8:14   0     4M  0 part
├─sda15   8:15   0   106M  0 part /boot/efi
└─sda16 259:0    0   913M  0 part /boot
sdb       8:16   0    40G  0 disk

If we have a disk available, like sdb, we can allocate the whole disk to the cluster:

sudo microceph disk add /dev/sdb --wipe

If you have multiple disks available per node, you can add them all with a single command:

sudo microceph disk add /dev/sdb /dev/sdc /dev/sdd --wipe
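
After adding the disks, we can check that MicroCeph has registered them:

sudo microceph disk list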

Setting up a loop device

If you only have one disk available and it contains the operating system, you can create file-backed storage using loop devices.

MicroCeph has a command to directly create a loop device and add it to the cluster:

sudo microceph disk add loop,20G,1

This command will create a 20 GiB loop device and add it as an OSD to the Ceph cluster.

The OSD and the loop file are stored at /var/snap/microceph/common/data/osd.
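
If we're curious, we can take a look at the backing files (the exact file names depend on the MicroCeph version and the OSD IDs):

sudo ls -lh /var/snap/microceph/common/data/osd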

Check cluster status

Once we've added the storage on all nodes, we can check the status of the cluster:

sudo ceph status
cluster:
  id:     755f2ac4-cd4e-44c9-8bd6-70cc8c95af5b
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum node-1,node-2,node-3 (age 14m)
  mgr: node-1(active, since 43m), standbys: node-2, node-3
  osd: 3 osds: 3 up (since 4s), 3 in (since 6s)

data:
  pools:   1 pools, 1 pgs
  objects: 2 objects, 577 KiB
  usage:   82 MiB used, 60 GiB / 60 GiB avail
  pgs:     1 active+clean
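
To see how the OSDs are distributed across the hosts, we can also run:

sudo ceph osd tree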

Now we have a working Ceph cluster with three nodes and three OSDs.