
Testing out rook/EdgeFS + NFS (RTRD) on Minikube

I’ve been checking out rook lately and it seems like a great project. They’re trying to, somewhat, standardize the approach to storage from k8s’ perspective. I was particularly impressed with EdgeFS’ capabilities: object storage, iSCSI block storage, scale-out NFS (as they call it), a geo-transparent storage solution. Man, that, in itself, is a mouthful and an impressive set of features! In any case, I was set to test it, and I will write a few articles in that regard. The first one, this one, is a guide on how to set up rook/EdgeFS on minikube, the RTRD (Replica Transport over Raw Disk) way, on CloudSigma’s platform! ;D

Minikube

version: v1.9.2

For this to work, minikube must be run using kvm2 and libvirt. Here’s my sample configuration (~/.minikube/config/config.json):
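Roughly, it looks like this (the key names follow minikube's config set properties for v1.9.x, so adjust them to your minikube version; the two feature gates below are the ones I recall the EdgeFS CSI prerequisites asking for, so double-check them against the current docs):

    {
        "driver": "kvm2",
        "container-runtime": "cri-o",
        "bootstrapper": "kubeadm",
        "cpus": 4,
        "memory": 16384,
        "feature-gates": "BlockVolume=true,CSIBlockVolume=true"
    }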

A few notes on my minikube config:

* If it works with cri-o, it works with docker. So don’t mind that one.
* Yeah, I know 16 GB of RAM might be overkill. You can use 8 or 4 or whatever. Just be mindful of the resource usage. In my
experience, minikube requires ~2 GiB for running; probably twice that much when installing.
* The feature gates are very important. Make sure you add them.

After that, one starts minikube:
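Since everything lives in the config file, a plain start is enough:

    minikube start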

Now, wait for it to fully start. In addition, I like enabling some addons:
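For instance, the dashboard and metrics-server addons; these are just examples, so enable whichever ones you like:

    # Examples; pick your own favorites
    minikube addons enable dashboard
    minikube addons enable metrics-server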

This is completely optional, though.

I’d wait until everything is deployed. Monitor stuff with:
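Something like this does the job:

    # Watch everything come up across all namespaces
    kubectl get pods --all-namespaces -w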

When that is done, stop minikube and create the drives that rook/edgefs will use.
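Here is one way to do it with qemu-img and virsh; the number of disks, their size and the image paths are up to you, and the libvirt domain the kvm2 driver creates is simply called minikube:

    minikube stop

    # Create three raw images and attach them to the (stopped) minikube VM.
    # The serial number is what generates the /dev/disk/by-id entry.
    for i in b c d; do
        sudo qemu-img create -f raw /var/lib/libvirt/images/minikube-edgefs-vd${i}.img 10G
        sudo virsh attach-disk minikube \
            /var/lib/libvirt/images/minikube-edgefs-vd${i}.img vd${i} \
            --driver qemu --subdriver raw \
            --serial minikube-edgefs-vd${i} \
            --config
    done

    minikube start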

There are a few important things here. First of all, the subdriver and image format must be raw because minikube sets its storage up that way. Also, you need to set serial numbers for the drives; otherwise, rook/edgefs will not be able to recognize them, since they will not generate entries in /dev/disk/by-id, which is crucial for rook/EdgeFS to work.

NFS service

OK, now, we get the code.
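That is, clone the rook repository; the EdgeFS examples live under cluster/examples/kubernetes/edgefs:

    git clone https://github.com/rook/rook.git
    cd rook/cluster/examples/kubernetes/edgefs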

Next, we will create a few files that will help us simplify the setup:
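For instance, a minikube-flavored cluster CR based on the repo's cluster.yaml (call it minikube-cluster.yaml) looks roughly like this; treat it as a starting point only and keep the image tag and any extra settings from the shipped file:

    apiVersion: edgefs.rook.io/v1
    kind: Cluster
    metadata:
      name: rook-edgefs
      namespace: rook-edgefs
    spec:
      edgefsImageName: edgefs/edgefs:latest   # keep the tag from the shipped cluster.yaml
      serviceAccount: rook-edgefs-cluster
      dataDirHostPath: /var/lib/edgefs
      storage:
        useAllNodes: true
        useAllDevices: true                   # picks up the raw vdb/vdc/vdd drives we attached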

Just change the relevant part on this one.

Next, copy nfs.yaml as minikube-nfs-object.yaml:
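    cp nfs.yaml minikube-nfs-object.yaml

Trimmed down, the resulting CR looks roughly like this; the service name (nfs01 in my examples) is the one we will reference from the toolbox and from the CSI configuration later:

    apiVersion: edgefs.rook.io/v1
    kind: NFS
    metadata:
      name: nfs01
      namespace: rook-edgefs
    spec:
      instances: 1   # one instance is plenty on minikube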

At the time of writing, the operator had an issue with drive detection and we had to fall back to v1.2.7; so, please, edit operator.yaml and change image: rook/edgefs:v1.3.1 to image: rook/edgefs:v1.2.7. The CTO of Nexenta recently told me, in Slack, that this isn’t necessary in the latest version.

Finally, when this is done, you can proceed with the setup:
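Roughly, the sequence follows the EdgeFS quickstart; substitute the actual mgr pod name, and note that the cluster/tenant/bucket names below are just the illustrative ones I use throughout:

    kubectl create -f operator.yaml
    kubectl create -f minikube-cluster.yaml

    # Wait for the rook-edgefs pods to come up
    kubectl -n rook-edgefs get pods -w

    # Exec into the mgr pod's toolbox (substitute the actual pod name)
    kubectl -n rook-edgefs exec -it <rook-edgefs-mgr-pod> -- toolbox

    # Inside the toolbox: initialize EdgeFS and serve a bucket over NFS.
    # The service name must match the one in minikube-nfs-object.yaml (nfs01).
    efscli system init
    efscli cluster create cl1
    efscli tenant create cl1/t1
    efscli bucket create cl1/t1/bk1
    efscli service create nfs nfs01
    efscli service serve nfs01 cl1/t1/bk1
    efscli service show nfs01

    # Back outside the toolbox: create the NFS deployment and check it
    kubectl create -f minikube-nfs-object.yaml
    kubectl -n rook-edgefs get pods,svc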

Now, we have the NFS service up and running. We checked it is exporting stuff and, now, we want to configure CSI in order to be able
to use those exports as storage for our pods.

CSI configuration

OK, so, we’re going to add a few Custom Resource Definitions (CRDs) so that we may use the Container Storage Interface (CSI) to
create persistent volumes.

So, first, we want --allow-privileged set on our kubelet. This is already done if you deploy using kubeadm, which the minikube configuration I proposed does. ;D

Next, the feature gates. We want them both. Again, these come with the proposed minikube configuration.

Then, we want to add a pair of CRDs:
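From what I recall, these are the CSIDriver and CSINodeInfo CRDs; something along these lines, but take the exact manifest URLs and versions from the docs, as mentioned below:

    # Replace the URLs/branch with the ones the docs recommend for your k8s version
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csidriver.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csinodeinfo.yaml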

Please verify the recommended version to install in the EdgeFS Data Fabric section of the documentation. The documentation is versioned, which is why I don’t include a specific link; just check the latest version of the docs for the right CRD versions.

OK, all set. Now, we’re lucky since minikube supports NFS out of the box. Not quite so with iSCSI, though. I’ve asked them for support, in case you want to join in. ;D

After that, we need to move into the next relevant directory, csi/nfs, and create a generic secret so that our storageclass can find our NFS service.

Please, edit edgefs-nfs-csi-driver-config.yaml and change:
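The fields to touch are, roughly, these; the field names come from the sample file shipped in csi/nfs, and the values are the illustrative ones from the toolbox step above:

    k8sEdgefsNamespaces: ["rook-edgefs"]   # namespace of the EdgeFS cluster
    k8sEdgefsMgmtPrefix: rook-edgefs-mgr   # the mgr (mgmt) service
    cluster: cl1                           # as created in the toolbox
    tenant: t1
    serviceFilter: "nfs01"                 # our NFS service
    username: admin                        # credentials as in the sample file
    password: admin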

Once that is ready, just create the secret as follows:
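The secret name is the one the driver expects; assuming you kept it, it is just:

    kubectl create secret generic edgefs-nfs-csi-driver-config \
        --from-file=./edgefs-nfs-csi-driver-config.yaml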

Next, apply the driver configuration:
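That is, apply the driver manifest that ships next to the config file and check that the plugin pods come up:

    kubectl apply -f edgefs-nfs-csi-driver.yaml
    kubectl get pods | grep edgefs-nfs-csi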

And we’re almost ready! Only the storageclass is missing, but that will be created when we try the provided dynamic nginx example.

Then, we go to the example section:
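The driver ships its examples in an examples subdirectory; the one we want is the dynamic nginx one:

    cd examples
    ls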

Then, we need to edit it a bit. Change:
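The change I will call out explicitly is marking the example's StorageClass as the default class, which is just an annotation on its metadata; keep the provisioner and parameters from the shipped example, pointing them at your own cluster/tenant where applicable:

    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"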

Also, if you like, you can remove the storageClassName: entry from that file’s PVC entry (since we just declared this one to be the default).

Next, so we don’t create any conflicts, we need to remove the standard storageclass as default. In order to do that, we can just:
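minikube's bundled storageclass is called standard, so:

    kubectl patch storageclass standard \
        -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'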

After that, it’s safe to create our storageclass as default; along with the example, of course!
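Assuming the example file kept its name (dynamic-nginx.yaml in the driver's examples), that's just:

    kubectl apply -f dynamic-nginx.yaml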

Now, since we created all this in the default namespace, we just need to do this to monitor it:
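For instance:

    # The PVC, the PV it bound to, and the nginx test pod
    kubectl get pvc,pv,pods

    # And, to see how the volume is mounted into the pod:
    kubectl describe pod <the-nginx-pod>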

This will show us the pv that the pvc required and how it is mounted to the nginx test pod. Pretty cool!

Now, if we want to check stuff on the EdgeFS side, we can get back to the mgr pod and take a look:
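Something like this, reusing the illustrative names from before:

    # Substitute the actual mgr pod name
    kubectl -n rook-edgefs exec -it <rook-edgefs-mgr-pod> -- toolbox

    # Inside the toolbox:
    efscli bucket list cl1/t1
    efscli service show nfs01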

This will show us our empty bucket and the pv we just created through the pvc. Really cool, huh?

Support

If you ever run into trouble, the people at #edgefs, on the rook Slack instance, are more than just helpful. They’re patient and interested in helping the community get the hang of EdgeFS over k8s. You can always look at the rook website, https://rook.io/, for the support channels, or just hop into the Slack instance here: https://slack.rook.io/

I hope you can set things up and learn to love rook/EdgeFS as I have. The next article will be an iSCSI deployment over a 6-node k8s cluster for you guys. Finally, I might just throw in some promo-code gifts as well, so stay tuned!


About Renich

DevOps @ CloudSigma during the day, Creative Commons artist and producer in my free time... Yeah, that means going to play my guitar or piano on the streets sometimes. You can listen to my music in my personal project, Renich, or my rock project, introbella. And I'm sure I have a cover or two @ YouTube. I am also a Fedora and Funtoo maintainer and contributor. In fact, you can just google "Renich" and you'll find my website and other stuff. I have a blog somewhere, where I write technical stuff as well. I am sure you can't imagine the blog's title 😉 On other matters, I've met Richard Stallman, started the local PHP and Ruby groups and contribute continuously to LinuxCabal.