Testing the Linux Kernel CephFS Client with xfstests

I do a lot of testing with the kernel cephfs client these days, and have had a number of people ask about how I test it. For now, I’ll gloss over the cluster setup since there are other tutorials for that.

Test Environment

For the cluster, I have a separate machine dedicated to running a set of 3 KVM guests (8G of RAM each, running CentOS Stream 8). I use cephadm to build a cluster that uses those VMs as cluster hosts. Each KVM has a dedicated SSD, so I get OK-ish (but not stellar) performance.

Occasionally, I’ll also need to test against a vstart cluster, usually when I need to work with some bleeding-edge userland changes, but for the most part I rely on my 3-node KVM setup.

The machines are connected via 1Gb Ethernet.

Cluster Setup

The cephadm cluster has 3 KVM hosts that act as cluster nodes. I run a mon on each, and each gets an OSD daemon.
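
If you're building a similar layout from scratch, the daemon placement looks roughly like this. This is only a sketch: the hostnames, IP address, and device paths here are placeholders for my environment, and cephadm host setup (SSH keys and so on) is glossed over:

# bootstrap on the first host, then bring in the other two
cephadm bootstrap --mon-ip 192.168.1.81
ceph orch host add cephnode2
ceph orch host add cephnode3

# a mon on each host, and an OSD on each host's dedicated SSD
ceph orch apply mon 3
ceph orch daemon add osd cephnode1:/dev/sdb
ceph orch daemon add osd cephnode2:/dev/sdb
ceph orch daemon add osd cephnode3:/dev/sdb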

From there I usually create two separate CephFS filesystems: one named “test” and one named “scratch”. I then bump up the MDS count in the orchestrator and max_mds on each filesystem to give each fs a set of 3 active MDSs and one standby.

I also enable random ephemeral pinning on both filesystems with a 0.1% frequency, just to thrash things a bit more.
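
Roughly, that filesystem setup looks like this (exact placement syntax can vary a bit between cephadm releases):

# create the two filesystems (data/metadata pools are created for you)
ceph fs volume create test
ceph fs volume create scratch

# enough MDS daemons for 3 actives plus a standby on each fs
ceph orch apply mds test --placement=4
ceph orch apply mds scratch --placement=4

# bump the active MDS count on each filesystem
ceph fs set test max_mds 3
ceph fs set scratch max_mds 3

The random pinning itself is set via the ceph.dir.pin.random xattr on a directory (a value of 0.001 corresponds to the 0.1% frequency); I’ll show that in the client section below.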

Client Configuration

I run a KVM on my main workstation that acts as a client (with 16G of memory). The client VM is Fedora 34 (but I’ll probably upgrade it soon). Make sure the ceph-common package is installed (so you have the mount.ceph binary).
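
On Fedora, that’s just:

$ sudo dnf install ceph-common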

Next, you’ll need to set up the configuration. Here’s the script I use:

#!/bin/bash
#
# Fetch a minimal ceph.conf and admin keyring from a cluster node.
# Usage: run with the hostname of a cluster node as the first argument.

# final locations
CONF=/etc/ceph/ceph.conf
KEYRING=/etc/ceph/ceph.keyring
CEPHADM=./cephadm

# generate a minimal config on the cluster node and install it locally
CONFTMP=$(mktemp)
ssh "$1" "sudo $CEPHADM shell ceph config generate-minimal-conf" > "$CONFTMP"
sudo chown root:ceph "$CONFTMP"
sudo chmod 0644 "$CONFTMP"
sudo cp -p "$CONFTMP" "$CONF"

# fetch the admin credentials and install them as the local keyring
KEYTMP=$(mktemp)
ssh "$1" "sudo $CEPHADM shell ceph auth get-or-create client.admin" > "$KEYTMP"
sudo chown root:ceph "$KEYTMP"
sudo chmod 0640 "$KEYTMP"
sudo cp -p "$KEYTMP" "$KEYRING"

Run it with the hostname of a cluster node where you have an account as the first argument. Be sure to set $CEPHADM to the right location for the cephadm script on that machine.

Once you run that, you’ll have a minimal config on the client. You may want to test it by running “ceph -s” or something similar.
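
For example, if the script above is saved as fetch-ceph-config.sh and one of the cluster nodes is named cephnode1 (both names are placeholders):

$ ./fetch-ceph-config.sh cephnode1
$ sudo ceph -s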

At this point, I usually mount both the test and scratch filesystems, create a directory in each named after the client’s hostname (client1 in this example), and set those directories up for random pinning. That way I can run multiple clients and let them test in their own areas of each fs.
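
Concretely, that looks something like the following sketch. The monitor addresses and mount options match the xfstests config shown later, and ceph.dir.pin.random takes a probability, so 0.001 gives the 0.1% frequency mentioned earlier:

sudo mkdir -p /mnt/test /mnt/scratch
sudo mount -t ceph 192.168.1.81:3300:/ /mnt/test -o name=admin,ms_mode=crc,mds_namespace=test
sudo mount -t ceph 192.168.1.82:3300:/ /mnt/scratch -o name=admin,ms_mode=crc,mds_namespace=scratch

# per-client test directories, with random ephemeral pinning enabled
sudo mkdir -p /mnt/test/client1 /mnt/scratch/client1
sudo setfattr -n ceph.dir.pin.random -v 0.001 /mnt/test/client1
sudo setfattr -n ceph.dir.pin.random -v 0.001 /mnt/scratch/client1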

xfstests

So you now have a cluster and a client, and the client’s userland code can talk to the cluster.

On the client, you’ll need to pull down the xfstests tree, and build it (e.g. run “make”). You may need to install some prerequisite packages (see the README file in the xfstests sources).
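
If you don’t already have a copy, the canonical tree lives on kernel.org:

git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
cd xfstests-dev
make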

To run xfstests, you’ll need an appropriate config file. Here’s the main one I use (hopefully with some helpful comments). I usually replace local.config in the xfstests tree with this, just to make test running easy. You’ll need to adjust this for your own environment, of course:

#
# This config file is for running xfstests on kcephfs. You'll require
# an existing cluster to test against (a vstart cluster is fine).
# To understand the mount syntax and the available options, see
# mount.ceph(8).
#
export TEST_DIR=/mnt/test
export SCRATCH_MNT=/mnt/scratch

#
# "check" can't automatically detect ceph device strings, so we must
# explicitly declare that we want to use "-t ceph".
#
export FSTYP=ceph

#
# In this example, we've created two different named cephfs filesystems:
# "test" and "scratch". They must be pre-created on the ceph cluster before
# the test is run.
#
# The check script gets very confused when two different mounts use
# the same device string. There is a project to change how the mount device
# syntax works, but it's not yet merged.
#
# For now, we must declare the location of the mons explicitly. Note that we're
# using two different monaddrs here on different hosts, though these are using
# the same cluster.  The monaddrs must also match the type of ms_mode option
# that is in effect (i.e.  ms_mode=legacy requires v1 monaddrs).
#
export TEST_DEV=192.168.1.81:3300:/client1
export SCRATCH_DEV=192.168.1.82:3300:/client1

#
# TEST_FS_MOUNT_OPTS is for /mnt/test, and MOUNT_OPTIONS is for /mnt/scratch
#
# Here, we're using the admin account for both mounts. The credentials
# should be in a standard keyring location somewhere. See:
#
# https://docs.ceph.com/en/latest/rados/operations/user-management/#keyring-management
#
COMMON_OPTIONS="name=admin"

# if you want to use an explicit secret instead of finding it in a ceph keyring
# COMMON_OPTIONS+=",secret=AQAkaM5g7+GuIRAAM3xLNwSQc8953uo3/1QkLw=="

# use msgr2 in crc mode
COMMON_OPTIONS+=",ms_mode=crc"

# asynchronous directory ops
COMMON_OPTIONS+=",nowsync"

# enable copy offload
COMMON_OPTIONS+=",copyfrom"

# now for the per-mount options
TEST_FS_MOUNT_OPTS="-o ${COMMON_OPTIONS}"
MOUNT_OPTIONS="-o ${COMMON_OPTIONS}"

# select the correct cephfs
TEST_FS_MOUNT_OPTS+=",mds_namespace=test"
MOUNT_OPTIONS+=",mds_namespace=scratch"

# fscache -- each fs needs its own fsc= tag
TEST_FS_MOUNT_OPTS+=",fsc=test"
MOUNT_OPTIONS+=",fsc=scratch"

export TEST_FS_MOUNT_OPTS
export MOUNT_OPTIONS

Finally, you just need to run the tests. Some xfstests take a very long time to run on cephfs, so I usually stick to the “quick” test group. Even that takes a couple of hours on ceph, but it covers a good variety of functionality.

Some tests always fail on cephfs. generic/003, for example, complains about atime handling, and ceph really can’t (easily) offer the semantics it wants. I keep a file called ceph.exclude in the root of the xfstests tree with a single line in it so I can skip that one:

generic/003

Now we can run the tests!

$ sudo ./check -g quick -E ./ceph.exclude

If there are failures, please report them to the ceph-devel mailing list and we’ll try to help troubleshoot what happened.
