I’ve been checking out rook lately and it seems like a great project. It tries, to some degree, to standardize the approach to storage from k8s’ perspective. I was particularly impressed with EdgeFS’ capabilities: object storage, iSCSI block storage, Scale-out NFS (as they call it), a geo-transparent storage solution. That, in itself, is a mouthful and an impressive set of features! In any case, I was set on testing it; in fact, I will write a few articles in that regard. This first one is a guide on how to set up rook/EdgeFS on minikube, the RTRD (Replicast Transport over Raw Disk) way, on CloudSigma’s platform! ;D
Minikube
version: v1.9.2
For this to work, minikube must be run using kvm2 and libvirt. Here’s my sample configuration (~/.minikube/config/config.json):

```json
{
  "bootstrapper": "kubeadm",
  "container-runtime": "cri-o",
  "cpus": 8,
  "disk-size": "50G",
  "feature-gates": "VolumeSnapshotDataSource=true,CSIDriverRegistry=true",
  "memory": 16000,
  "native-ssh": true,
  "vm-driver": "kvm2"
}
```
A few notes on my minikube config:
* If it works with cri-o, it works with docker. So don’t mind that one.
* Yeah, I know 16 GB of RAM might be overkill. You can use 8 or 4 or whatever. Just be mindful of the resource usage. In my
experience, minikube requires ~2 GiB for running; probably twice that much when installing.
* The feature gates are very important. Make sure you add them.
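By the way, if you’d rather not edit the JSON by hand, the same settings can be passed as flags on the first minikube start. This is just a rough equivalent of the config above, so double-check the exact flag names against minikube start --help on your version:

```bash
# roughly equivalent to the config.json above; the values persist in the minikube profile
minikube start \
  --bootstrapper='kubeadm' \
  --container-runtime='cri-o' \
  --cpus=8 \
  --disk-size='50G' \
  --feature-gates='VolumeSnapshotDataSource=true,CSIDriverRegistry=true' \
  --memory=16000 \
  --native-ssh=true \
  --vm-driver='kvm2'
```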
After that, one starts minikube:
```bash
minikube start
```
Now, wait for it to fully start. In addition, I like enabling some addons:
```bash
for addon in dashboard ingress metrics-server; do
  minikube addons enable $addon
done
```
This is completely optional, though.
I’d wait until everything is deployed. Monitor stuff with:
```bash
watch kubectl -n kube-system get pods
```
When that is done, stop minikube and create the drives that rook/edgefs will use.
```bash
# stop minikube
minikube stop

# settings
drive_path='/var/lib/libvirt/images'
drive_size='10G'
i=1

for d in vd{b..d}; do
  # create drive
  sudo qemu-img create -f raw $drive_path/minikube-${d}.img $drive_size

  # attach it
  sudo virsh attach-disk \
    --config \
    --targetbus='virtio' \
    --subdriver='raw' \
    --io='threads' \
    --serial="virtio-$i" \
    minikube \
    $drive_path/minikube-${d}.img \
    $d

  # add one to counter
  i=$(( i + 1 ))
done

# start minikube once again
minikube start
```
There are a few important things here. First of all, the subdriver and image format must be raw, because minikube sets up its storage that way. Also, you need to set serial numbers for the drives; otherwise, rook/EdgeFS will not be able to recognize them, since they will not generate entries in /dev/disk/by-id, which is crucial for rook/EdgeFS to work.
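Once minikube is back up, a quick optional sanity check is to look for the new disks inside the VM; the exact link names may vary, but with the serials set you should see virtio-* entries under /dev/disk/by-id:

```bash
# the attached disks should show up as virtio-<serial> symlinks inside the minikube VM
minikube ssh "ls -l /dev/disk/by-id/"
```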
NFS service
OK, now, we get the code.
```bash
# create, if you don't have one, a src directory (for convenience)
mkdir -p ~/src
cd ~/src

# clone the repo
git clone https://github.com/rook/rook.git

# go to the relevant section
cd rook/cluster/examples/kubernetes/edgefs/

# checkout the right release, at the time: 1.3
git checkout release-1.3
```
Next, we will create a few files that will help us simplify the setup:
```yaml
# a copy of cluster.yaml as minikube-cluster-rtrd.yaml
apiVersion: edgefs.rook.io/v1
kind: Cluster
metadata:
  name: rook-edgefs
  namespace: rook-edgefs
spec:
  edgefsImageName: edgefs/edgefs:latest
  serviceAccount: rook-edgefs-cluster
  dataDirHostPath: /data/edgefs
  sysRepCount: 1
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
      useMetadataOffload: "false"
      useAllSSD: "false"
    nodes:
    - name: minikube
      devices:
      - name: vdb
      - name: vdc
      - name: vdd
```
Just change the relevant parts on this one (the node name and the device names, if yours differ).
Next, copy nfs.yaml as minikube-nfs-object.yaml:
```yaml
# minikube-nfs-object.yaml
apiVersion: edgefs.rook.io/v1
kind: NFS
metadata:
  name: nfs-minikube # CHANGEME: this has to match the service name we will create further on
  namespace: rook-edgefs
spec:
  instances: 1
  annotations:
```
At the time of writing, the operator had an issue with drive detection and we had to fall back to v1.2.7 (the CTO of Nexenta recently told me on Slack that this isn’t necessary in the latest version). So, please, edit operator.yaml and change image: rook/edgefs:v1.3.1 to image: rook/edgefs:v1.2.7.
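If you prefer a one-liner for that edit, something along these lines should do it (assuming GNU sed and that the image line appears exactly as shown above):

```bash
# pin the operator image to v1.2.7 in place
sed -i 's|image: rook/edgefs:v1.3.1|image: rook/edgefs:v1.2.7|' operator.yaml
```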
Finally, when this is done, you can proceed with the setup:
```bash
# deploy the rook/edgefs operator
kubectl create -f operator.yaml
sleep 10
kubectl -n rook-edgefs-system wait --for='condition=Ready' pods --timeout=3m --all

# deploy the rook/edgefs cluster
kubectl create -f minikube-cluster-rtrd.yaml
sleep 10
kubectl -n rook-edgefs wait --for='condition=Ready' pods --timeout=3m --all

## get mgr pod name and run the toolbox script
mgr_pod=$( kubectl -n rook-edgefs get pods -l app=rook-edgefs-mgr -o jsonpath='{.items..metadata.name}' )
kubectl -n rook-edgefs exec -it $mgr_pod -- env COLUMNS=$COLUMNS LINES=$LINES TERM=linux toolbox

# create the NFS share
# Please, run the following commands within the mgr pod:
# efscli system init -f
# efscli cluster create mypc
# efscli tenant create mypc/minikube
# efscli bucket create mypc/minikube/bk1
# efscli service create nfs nfs-minikube   # this is the service name that has to match
# efscli service serve nfs-minikube mypc/minikube/bk1

# create nfs objects
kubectl create -f minikube-nfs-object.yaml

# verify
nfs_svc=$( kubectl -n rook-edgefs get -l app=rook-edgefs-nfs -o jsonpath='{.items..spec.clusterIP}' svc )
minikube ssh "showmount -e $nfs_svc"
```
Now, we have the NFS service up and running. We checked it is exporting stuff and, now, we want to configure CSI in order to be able
to use those exports as storage for our pods.
CSI configuration
OK, so, we’re going to add a few Custom Resource Definitions (CRDs) so that we may use the Container Storage Interface (CSI) to
create persistent volumes.
So, first, we want to have --allow-privileged on our kubelet. This is already done if you deploy using kubeadm, which the minikube configuration I proposed does. ;D
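If you want to double-check anyway (only meaningful on Kubernetes versions where the flag still exists), you can peek at the kubelet command line inside the VM:

```bash
# look for --allow-privileged on the kubelet command line inside the minikube VM
minikube ssh "ps aux | grep -o 'allow-privileged=[^ ]*' | head -n 1"
```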
Next, the feature gates. We want both of them; again, they are covered by the proposed minikube configuration.
Then, we want to add a pair of CRDs:
```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csidriver.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csinodeinfo.yaml
```
Please verify the recommended version to install in the EdgeFS Data Fabric section of the documentation. The documentation is versioned, which is why I don’t include a specific link; just check the latest version of the docs for the right CRD versions.
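A quick way to confirm the CRDs landed is to just grep for them; I’m matching by name here since the exact CRD names depend on the version you installed:

```bash
# list the CSI-related CRDs we just created
kubectl get crd | grep -i csi
```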
OK, all set. Now, we’re lucky, since minikube supports NFS out of the box. Not quite so with iSCSI, though; I’ve asked them for support, in case you want to join in. ;D
After that, we need to move into the next relevant section (cd csi/nfs) and create a generic secret so that our storageclass will be able to find our NFS service.
Please, edit edgefs-nfs-csi-driver-config.yaml and change:
```yaml
# EdgeFS csi operatins options
cluster: mypc       # substitution edgefs cluster name for csi operations
tenant: minikube    # substitution edgefs tenant name for csi operations
```
Once that is ready, just create the secret as follows:
```bash
kubectl create secret generic edgefs-nfs-csi-driver-config --from-file=./edgefs-nfs-csi-driver-config.yaml
```
Next, apply the driver configuration:
```bash
kubectl apply -f edgefs-nfs-csi-driver.yaml
```
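You can sanity-check that the driver pods came up; I’m grepping by name here because I don’t recall the exact labels or namespace off-hand:

```bash
# look for the EdgeFS NFS CSI driver pods, whichever namespace they landed in
kubectl get pods --all-namespaces | grep -i csi
```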
And we’re almost ready! Only the storageclass is missing; but that will be created when we try the provided dynamic nginx example.
Then, we go to the example section:
```bash
cd examples
```
Then, we need to edit dynamic-nginx.yaml a bit. Change:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: edgefs-nfs-csi-storageclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```
Also, if you like, you can remove the storageClassName entry from that file’s PVC section (since we just declared this one to be the default).
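For illustration only (the names and sizes here are hypothetical, not necessarily what dynamic-nginx.yaml uses), a PVC that relies on the default class looks roughly like this:

```yaml
# hypothetical PVC relying on the default StorageClass; adjust names/sizes to the real example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # no storageClassName here, so the default class (edgefs-nfs-csi-storageclass) is used
```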
Next, so we don’t create any conflicts, we need to un-mark the standard storageclass as the default. In order to do that, we can just:
```bash
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
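kubectl marks the default class with (default) in its output, so it’s easy to verify the switch:

```bash
# the default class is flagged with "(default)" next to its name
kubectl get storageclass
```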
After that, it’s safe to create our storageclass as default; along with the example, of course!
```bash
kubectl create -f dynamic-nginx.yaml
```
Now, since we created all this in the default namespace, we just need to do this to monitor it:
```bash
kubectl get pods,pv,pvc
```
This will show us the pv that the pvc required and how it is mounted to the nginx test pod. Pretty cool!
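If you want to poke at it a bit more, you can write a file through the test pod and read it back. The pod name and mount path below are placeholders; take the real pod name from kubectl get pods and the mount path from dynamic-nginx.yaml:

```bash
# write through the NFS-backed volume and read the file back (names are placeholders)
kubectl exec -it nginx-pod -- sh -c 'echo hello > /usr/share/nginx/html/hello.txt'
kubectl exec -it nginx-pod -- cat /usr/share/nginx/html/hello.txt
```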
Now, if we want to check stuff on the EdgeFS side, we can get back to the mgr
pod and take a look:
```bash
kubectl -n rook-edgefs exec -it $mgr_pod -- env COLUMNS=$COLUMNS LINES=$LINES TERM=linux toolbox
efscli bucket list mypc/minikube
```
This will show us our empty bucket and the pv we just created through the pvc. Really cool, huh?
Support
If you ever run into trouble, the people at #edgefs on the rook Slack instance are more than just helpful. They’re patient and interested in helping the community get the hang of EdgeFS over k8s. You can always look into the rook website for the support channels: https://rook.io/, or just jump into the Slack instance here: https://slack.rook.io/
I hope you can set things up and learn to love rook/EdgeFS as I have. The next article will be an iSCSI deployment over a 6-node k8s cluster for you guys. Finally, I might just throw in some promo-code gifts as well, so stay tuned!