Configuring a Ceph RBD StorageClass in Kubernetes


 

1. Create a storage pool for Kubernetes on Ceph

# ceph osd pool create k8s 128
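On Luminous or newer releases (the repositories below use Nautilus), a new pool is typically initialized for RBD use before images are created in it. A quick sketch using the pool just created:

# ceph osd pool ls
# rbd pool init k8s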

 

2. Create a k8s user

# ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
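You can read the new user back to confirm its key and capabilities before handing them to Kubernetes:

# ceph auth get client.k8s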

 

3. Base64-encode the k8s user's key

  This is the key Kubernetes will use to access Ceph; it will be stored in a Kubernetes Secret.

# grep key ceph.client.k8s.keyring | awk '{printf "%s", $NF}' | base64
VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
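If you prefer not to parse the keyring file, the same value can be obtained directly from Ceph; this should produce an identical base64 string:

# ceph auth get-key client.k8s | base64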

 

4. Create the Secrets in Kubernetes for accessing Ceph

# echo 'apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8s-secret
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==' | kubectl create -f -
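A quick check that both Secrets landed in the expected namespaces (ceph-k8s-secret in the default namespace, ceph-admin-secret in kube-system):

# kubectl get secret ceph-k8s-secret
# kubectl -n kube-system get secret ceph-admin-secret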

 

5. Every Kubernetes node must have the rbd command (install the ceph-common package)

  Add a Ceph yum repository and install the ceph-common package on each node:

 


# cat > /etc/yum.repos.d/ceph.repo << eof
[Ceph-Nautilus]
name=Ceph Nautilus
baseurl=https://repo.huaweicloud.com/ceph/rpm-nautilus/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://repo.huaweicloud.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch
baseurl=https://repo.huaweicloud.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://repo.huaweicloud.com/ceph/keys/release.asc

[Ceph-source]
name=Ceph source
baseurl=https://repo.huaweicloud.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://repo.huaweicloud.com/ceph/keys/release.asc

eof

# yum install ceph-common -y

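Kubernetes maps RBD images by calling the rbd command on the node, so confirm the binary is now available on every node:

# which rbd
# rbd --version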

 

6. Create the ceph-rbd StorageClass in Kubernetes

# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.0.86:6789,192.168.0.87:6789,192.168.0.88:6789
  adminId: k8s
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: k8s
  userId: k8s
  userSecretName: ceph-k8s-secret' | kubectl create -f -
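To double-check the parameters that were just registered:

# kubectl describe storageclass ceph-rbd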

 

7. Set ceph-rbd as the default StorageClass

# kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

  Note that a cluster should have only one default StorageClass; marking several StorageClasses as default at the same time is effectively the same as having no default at all. List the StorageClasses; the default one carries the (default) marker:

# kubectl get storageclass
NAME                 TYPE
ceph-rbd (default)   kubernetes.io/rbd
ceph-sas             kubernetes.io/rbd
ceph-ssd             kubernetes.io/rbd
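If another class (ceph-sas in this example) had previously been marked as default, it can be un-marked the same way so that only one default remains:

# kubectl patch storageclass ceph-sas -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'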

 

8. Create a PersistentVolumeClaim

# echo 'apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test-vol1-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi' | kubectl create -f -

Since ceph-rbd is already the default StorageClass, storageClassName could actually be omitted here.
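The claim should be bound by the provisioner within a few seconds, and a corresponding image appears in the k8s pool (the image name is generated, so yours will differ):

# kubectl get pvc nginx-test-vol1-claim
# rbd ls -p k8s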

 

9. Create a Pod that uses the PVC

# echo 'apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
      - name: nginx-test-vol1
        mountPath: /data/
        readOnly: false
  volumes:
  - name: nginx-test-vol1
    persistentVolumeClaim:
      claimName: nginx-test-vol1-claim' | kubectl create -f -
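If the Pod hangs in ContainerCreating, the events usually explain why the RBD volume could not be attached or mounted (missing rbd binary on the node, wrong Secret, and so on):

# kubectl describe pod nginx-test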

 

10. Check the container status

  Exec into the container; the RBD volume is mounted at /data:

# kubectl exec nginx-test -it -- /bin/bash
[root@nginx-test ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        50G   52M   47G   1% /data
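On the node where the Pod was scheduled (not inside the container), the kernel RBD mapping can also be inspected directly:

# rbd showmapped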