Integration of Ceph and Kubernetes

What is Kubernetes?

Kubernetes is an open-source orchestration tool developed by Google for managing microservices or containerized applications across a distributed cluster of nodes. Kubernetes provides highly resilient infrastructure with zero-downtime deployments, automatic rollback, scaling, and self-healing of containers (auto-placement, auto-restart, auto-replication, and scaling of containers based on CPU usage).

What is Ceph?

Ceph is an open-source, software-defined storage platform. Ceph provides a reliable, clustered storage solution. It is highly scalable to the exabyte level, operates without a single point of failure, and it is free. It also provides a unified interface for block, object, and file-level storage.

Prerequisites

  • A running Ceph cluster with an admin node
  • A running Kubernetes cluster
  • Root SSH access from the Ceph admin node to the Kubernetes master

How to Do It?

On Ceph admin

Let's check that our Ceph cluster is up and running with the ceph -s command. Now we need to create a pool, and an image in it, for Kubernetes storage, which we will share with the Kubernetes client.

$ ceph -s 
$ ceph osd pool create kubernetes 100
$ ceph osd lspools
(Screenshots: Ceph health output and image info in the kubernetes pool)
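One step does not appear in the commands above: creating the RBD image itself. The rbd map command later expects an image named kube in the kubernetes pool; a minimal sketch, assuming a 1024 MB image with only the layering feature enabled (so the kernel client can map it):

$ rbd create kube -p kubernetes --size 1024 --image-feature layering
$ rbd info kube -p kubernetes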
Copy ceph.conf and the admin keyring over to the Kubernetes master:

$ scp /etc/ceph/ceph.conf root@master:~
$ scp /etc/ceph/ceph.client.admin.keyring root@master:~
On Kubernetes master

  • Install the ceph-common package
  • Move ceph.conf and the admin keyring into the /etc/ceph/ directory (see the sketch after this list)
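A minimal sketch of these two steps, assuming a CentOS/RHEL master (use apt-get on Debian/Ubuntu); the scp commands above put both files in root's home directory:

$ yum install -y ceph-common
$ mv ~/ceph.conf ~/ceph.client.admin.keyring /etc/ceph/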
Now we can run ceph commands on the Kubernetes master. Moving further, we need to map our RBD image here. First, enable the rbdmap service:

$ systemctl enable rbdmap
$ rbd map kube -p kubernetes
$ rbd showmapped

Connecting Ceph and kubernetes

For Kubernetes to use Ceph storage as a backend, it needs to talk to Ceph through an external storage plugin, since this provisioner is not included in the official kube-controller-manager image. So we need to either customize the kube-controller-manager image or use the external rbd-provisioner plugin. In this blog, we use the external plugin. Let's create the rbd-provisioner in the kube-system namespace, with RBAC. Before creating the deployment, check that the rbd-provisioner image carries the same Ceph version as your cluster by running the following commands:

$ docker pull quay.io/external_storage/rbd-provisioner:latest
$ docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
If the versions match, apply the following RBAC and Deployment manifests (saved here as rbd-provisioner.yml):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
$ kubectl create -f rbd-provisioner.yml
$ kubectl get pods -l app=rbd-provisioner -n kube-system
Next, fetch the Ceph admin key and create a Kubernetes secret from it:

$ ceph --cluster ceph auth get-key client.admin
$ kubectl create secret generic ceph-secret \
--type="kubernetes.io/rbd" \
--from-literal=key='COPY-YOUR-ADMIN-KEY-HERE' \
--namespace=kube-system
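Optionally, confirm the secret was created in the kube-system namespace:

$ kubectl get secret ceph-secret -n kube-system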
The storage class below needs the Ceph monitor addresses; list them with:

$ ceph mon dump
Create a storage class that points the provisioner at your pool and monitors (saved here as storage-class.yml):

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: <MONITOR_IP_1>:6789,<MONITOR_IP_2>:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kubernetes
  userId: admin
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
$ kubectl create -f storage-class.yml
$ kubectl get sc
(Screenshot: list of storage classes)
Now create a PersistentVolumeClaim against the new storage class:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: storage-rbd
$ kubectl create -f pvc.yml
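Verify that the claim was created and bound by the provisioner:

$ kubectl get pvc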
(Screenshot: list of PVCs)

Summary

That's it!! We created persistent storage for Kubernetes from Ceph. Use this PVC to create a PV and a pod however you like; the PV is provisioned dynamically by the rbd-provisioner, and a minimal pod sketch follows. Suggestions are most welcome.
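A minimal sketch of a pod consuming the claim; the pod name, nginx image, and mount path are assumptions, while claimName: myclaim comes from the PVC created above:

---
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx              # any image works; nginx is an assumption
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim      # the PVC created above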
