Using Alibaba Cloud NAS for Dynamic Persistent Storage in Kubernetes

I. Introduction to Dynamic Provisioning

The core of the Dynamic Provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (provisioner), which is used to create PVs automatically.
The storage plugins that support dynamic provisioning are listed at https://kubernetes.io/docs/concepts/storage/storage-classes/

Flow diagram: (figure not reproduced here)

How it works:
The Volume Controller is the controller dedicated to persistent storage; one of its control loops, PersistentVolumeController, implements the binding of PVs to PVCs. It watches PVC objects through kube-apiserver: when a PVC is created, it looks through the available PVs and binds a matching one if it exists; otherwise it creates a new PV from the StorageClass configuration and the PVC's spec, then binds the two.

Features:
Dynamic volume provisioning allows storage volumes to be created on demand. Without it, a cluster administrator has to create volumes in advance with the storage or cloud provider, outside the cluster, and then create PersistentVolume objects for them before they can be used in Kubernetes. Dynamic provisioning removes this pre-creation step: volumes are created as users request them.
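For contrast, the pre-provisioning workflow described above means an administrator hand-writes a PV before anyone can claim it. A sketch of such a statically created PV is below; the server and path are placeholders, not values from this article:

```yaml
# A statically pre-created PV; with dynamic provisioning, an object like
# this is generated by the provisioner instead of written by hand.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual-demo          # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com     # placeholder NFS server
    path: /exports/demo         # placeholder export path
```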

II. Deployment Steps

1. Create the NFS provisioner
# vim nfs-client-provisioner-deploy.yaml
kind: Deployment
apiVersion: apps/v1              # extensions/v1beta1 was removed in Kubernetes 1.16
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:                      # apps/v1 requires an explicit selector
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              # redacted NAS mount address; quoted so the leading * is not parsed as a YAML alias
              value: "*-*.cn-beijing.nas.aliyuncs.com"
            - name: NFS_PATH
              value: /pods-volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: "*-*.cn-beijing.nas.aliyuncs.com"   # must match NFS_SERVER above (redacted)
            path: /pods-volumes

# kubectl apply -f  nfs-client-provisioner-deploy.yaml
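Before moving on, it is worth confirming the provisioner came up. These are generic kubectl checks, not commands from the original article, and they assume the RBAC objects from step 2 already exist:

```shell
# The provisioner pod must be Running before the StorageClass can provision PVs
kubectl get deployment nfs-client-provisioner
kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner --tail=20
```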

 

2. Create the ServiceAccount and RBAC rules

# vim nfs-client-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# kubectl apply -f nfs-client-rbac.yaml
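A quick way to sanity-check the grants (generic kubectl, assuming the default namespace used above):

```shell
kubectl get sa nfs-client-provisioner
# Should print "yes": the ClusterRole lets the provisioner create PVs
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner
```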

 

3. Create the StorageClass

storageClassName: when a PVC's requested size and access modes match an existing PV, the storageClassName decides which PV it binds to. It is commonly used when a PVC must bind to a specific PV.
For example, when several PVs exist with the same size and access modes and neither the PVs nor the PVC set a storageClassName, the PVC matches one of them arbitrarily based on size and access modes alone; with storageClassName set, all three criteria are used for matching.
A PVC can also be pinned to a specific PV by other means, for example with labels and a selector.
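The label-based approach mentioned above could look like the sketch below; all names and the NFS server are hypothetical:

```yaml
# A PV carrying a label, and a PVC that selects it explicitly
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-app-data             # hypothetical
  labels:
    app: my-app
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com     # placeholder
    path: /exports/app-data     # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-claim            # hypothetical
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      app: my-app               # bind only to PVs carrying this label
  resources:
    requests:
      storage: 2Gi
```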

# vim nfs-storage-class.yaml

apiVersion: storage.k8s.io/v1
#allowVolumeExpansion: true   # would enable volume expansion, but the NFS type does not support it
kind: StorageClass
metadata:
  name: yiruike-nfs-storage
mountOptions:
- vers=4
- minorversion=0
- noresvport
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
# kubectl apply -f nfs-storage-class.yaml

Set the yiruike-nfs-storage StorageClass as the cluster's default:

# kubectl patch storageclass yiruike-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# kubectl get sc
NAME                           PROVISIONER      AGE
yiruike-nfs-storage(default)   fuseim.pri/ifs   48s

III. Verifying the Deployment

1. Create a test PVC

# vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    # legacy annotation; spec.storageClassName is the current field for this
    volume.beta.kubernetes.io/storage-class: "yiruike-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  # note: the reclaim policy is set on the PV/StorageClass, not on the PVC
  resources:
    requests:
      storage: 2Gi

# kubectl apply -f test-claim.yaml

# kubectl get pv,pvc

NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/pvc-*   2Gi        RWX            Delete           Bound    default/test-claim   yiruike-nfs-storage            1s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    pvc-2fc935df-62f2-11ea-9e5a-00163e0a8e3e   2Gi        RWX            yiruike-nfs-storage   5s

 

2. Create a test Pod

Start a pod that touches a SUCCESS file inside the PV bound to test-claim:

# vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

# kubectl apply -f test-pod.yaml

# df -Th | grep aliyun

*-*.cn-beijing.nas.aliyuncs.com:/pods-volumes nfs4  10P  0  10P  0%  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root

# ls  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root

default-test-claim-pvc-0b1ce53d-62f4-11ea-9e5a-00163e0a8e3e

# ls  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root/default-test-claim-pvc-0b1ce53d-62f4-11ea-9e5a-00163e0a8e3e

SUCCESS

This shows the deployment works: NFS shared volumes are being provisioned dynamically.

 

3. Verify data persistence

Now delete the test-pod pod to check whether the file in the data volume survives:

# kubectl delete pod/test-pod

Checking afterwards shows the data is still there: deleting the pod does not delete the file, so we have achieved dynamic data persistence.
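Note the distinction: deleting the pod keeps the data, but deleting the PVC does not. A dynamically provisioned PV gets reclaimPolicy Delete, and with archiveOnDelete: "false" in the StorageClass the provisioner removes the backing directory as well. A quick check (generic kubectl, not from the original article):

```shell
kubectl delete pvc test-claim
# The bound PV disappears shortly after, and the directory on the NAS is removed
kubectl get pv
```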

 

IV. volumeClaimTemplates

volumeClaimTemplates: a template for volume claims. It specifies the PVC's name and size; PVCs are then created from it automatically, and they must be provisioned by a StorageClass.

Why is volumeClaimTemplates needed?

Stateful replica sets all need persistent storage, and the defining property of a distributed system is that its nodes hold different data, so the replicas cannot share a single volume: each node needs its own dedicated storage. A volume defined in a Deployment's pod template is shared by every replica, with identical data, because all the pods are stamped from one template. In a StatefulSet each pod needs its own dedicated volume, so the volumes cannot come from the pod template; instead the StatefulSet uses volumeClaimTemplates, a volume-claim template that generates a separate PVC for each pod and binds it to a PV, giving each pod its own storage.

Example (this fragment is the relevant part of a StatefulSet spec; the original snippet used a spec.storage.volumeClaimTemplate shape that is not a StatefulSet field):

spec:
  volumeClaimTemplates:
  - metadata:
      name: data                # PVCs are named <this>-<pod>, e.g. data-web-0
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: yiruike-nfs-storage
      resources:
        requests:
          storage: 10Gi
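Putting it together, a minimal StatefulSet using volumeClaimTemplates might look like the sketch below. The web/nginx names are hypothetical; only the storageClassName comes from this article. Each replica gets its own PVC, named data-web-0, data-web-1, and so on:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                       # hypothetical
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:           # one PVC is generated per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: yiruike-nfs-storage
      resources:
        requests:
          storage: 10Gi
```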

 

 

 

posted on 2020-03-11 00:42 by 星辰大海ゞ