Kubernetes NFS Dynamic Storage Volumes

This article shows how to use nfs-client-provisioner together with an NFS server as a persistent-storage backend for Kubernetes, with PVs provisioned dynamically.

Prerequisites:
A Kubernetes cluster
Network connectivity between the NFS server and the Kubernetes worker nodes

Deployment steps:

I. NFS server deployment: install the NFS packages with yum

# yum install nfs-utils -y

Server configuration
Create the directory to be shared:
# mkdir /nfs/data -p
# chmod 755 /nfs/data
/nfs/data is the directory that will be exported.

Write the exports file:
# vim /etc/exports
/nfs/data/  192.168.10.0/24(rw,sync,no_root_squash,no_all_squash,insecure)

/nfs/data: path of the shared directory.
192.168.10.0/24: allowed client IP range; * means any client, i.e. no restriction.
rw: read-write access.
sync: write changes to the shared directory synchronously.
no_root_squash: root on the client keeps root privileges on the share.
no_all_squash: ordinary client users keep their own identities.
insecure: accept client connections from TCP/IP ports above 1024.

Start the NFS service:

# systemctl start nfs-server.service

Check that the export is visible:

# showmount -e localhost
Export list for localhost:
/nfs/data 192.168.10.0/24

The NFS server side is now configured.

II. Deploying nfs-client-provisioner

1. Download the external-storage repository (note: kubernetes-incubator/external-storage has since been archived; its successor is kubernetes-sigs/nfs-subdir-external-provisioner)

# git clone https://github.com/kubernetes-incubator/external-storage.git
# cd external-storage/nfs-client
# ls
CHANGELOG.md  cmd  deploy  docker  Makefile  OWNERS  README.md

cmd: the Go source code of nfs-client-provisioner; you can customize your own build (the author maintains a customized version, if you want to experiment)
deploy: the Kubernetes deployment manifests; the default YAML files below are enough to complete the deployment
docker: the Dockerfile, used together with the source in cmd to build your own image

The deployment manifests:
# cd deploy/
[root@localhost deploy]# ll
total 24
-rw-r--r-- 1 root root  221 Apr 12 14:59 class.yaml
-rw-r--r-- 1 root root 1030 Apr  9 17:02 deployment-arm.yaml
-rw-r--r-- 1 root root 1022 Apr 12 14:54 deployment.yaml
drwxr-xr-x 2 root root  214 Apr  9 17:02 objects
-rw-r--r-- 1 root root 1819 Apr  9 17:02 rbac.yaml
-rw-r--r-- 1 root root  232 Apr 12 15:03 test-claim.yaml
-rw-r--r-- 1 root root  401 Apr 12 15:09 test-pod.yaml
[root@localhost deploy]# 

2. Authorization: if RBAC is enabled, apply rbac.yaml to grant the provisioner the permissions it needs.

 # Nothing needs to change by default except the namespace; set it to match your environment.

[root@k8s-master deploy]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master deploy]# 

3. Create the NFS provisioner

Edit deployment.yaml. The only values that need changing are the NFS server address (192.168.10.30) and the exported path (/nfs/data); each appears twice, and both occurrences must be replaced with your actual NFS server and share.
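As a convenience, both values can be patched in one pass with sed. A minimal sketch, assuming the upstream manifest still contains the repository's placeholder values (10.10.10.60 and /ifs/kubernetes at the time of writing; verify them in your copy of deploy/deployment.yaml). The same two expressions are demonstrated here on a throwaway file:

```shell
# Assumed placeholder values from the upstream manifest -- check your copy first.
NFS_SERVER=192.168.10.30
NFS_PATH=/nfs/data
# Demonstration input standing in for deploy/deployment.yaml:
printf '  value: 10.10.10.60\n  path: /ifs/kubernetes\n' > /tmp/placeholder-demo.yaml
# Replace the placeholder server address and export path in place.
sed -i \
  -e "s|10.10.10.60|${NFS_SERVER}|g" \
  -e "s|/ifs/kubernetes|${NFS_PATH}|g" \
  /tmp/placeholder-demo.yaml
cat /tmp/placeholder-demo.yaml
```

In practice, point the sed command at deploy/deployment.yaml instead of the demo file.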

# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: my-nfs-provisioner
            - name: NFS_SERVER
              value: 192.168.10.30
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.30
            path: /nfs/data

Deploy it:

[root@k8s-master deploy]# kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created
[root@k8s-master deploy]# 
[root@k8s-master deploy]# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           10s
[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
default                nfs-client-provisioner-77fbd4585f-tvgtm      1/1     Running            1          16s

4. Create a StorageClass backed by NFS

# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage
provisioner: my-nfs-provisioner # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
[root@k8s-master deploy]# 

Deploy the StorageClass:

[root@k8s-master deploy]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/my-storage created
[root@k8s-master deploy]# kubectl get storageclass
NAME         PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
my-storage   my-nfs-provisioner   Delete          Immediate           false                  12s
[root@k8s-master deploy]# 
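Note the RECLAIMPOLICY of Delete above: with archiveOnDelete set to "false", deleting a PVC removes the backing directory on the share. If you would rather keep the data, the class can be created with archiveOnDelete set to "true", in which case the provisioner renames the directory with an archived- prefix instead of deleting it. A sketch (the class name here is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-archive        # hypothetical name for the archiving variant
provisioner: my-nfs-provisioner   # must match PROVISIONER_NAME in the deployment
parameters:
  archiveOnDelete: "true"         # keep data as archived-<dir> instead of deleting
```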

The NFS storage class is now in place; next, use it to provision storage for pod workloads.

Using the NFS-backed StorageClass

(1) Create a PVC that calls the StorageClass to dynamically provision a PV

[root@k8s-master deploy]# cat test-claim.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  #annotations:
  #  volume.beta.kubernetes.io/storage-class: "my-storage"
spec:
  storageClassName: my-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
[root@k8s-master deploy]# 

The StorageClass can be selected either with an annotation or with the storageClassName field.
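For reference, the commented-out annotation form of the same claim would look like this (the annotation is the legacy mechanism; storageClassName is the current field):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "my-storage"  # legacy selector
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```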

[root@k8s-master deploy]# kubectl apply -f test-claim.yaml 
persistentvolumeclaim/test-claim created
[root@k8s-master deploy]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-57c6f1c3-c8d7-494c-beee-b164e6c15361   1Mi        RWX            my-storage     8s
[root@k8s-master deploy]# 
[root@k8s-master deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-57c6f1c3-c8d7-494c-beee-b164e6c15361   1Mi        RWX            Delete           Bound    default/test-claim   my-storage              61s
[root@k8s-master deploy]# 

At this point a directory named default-test-claim-pvc-e1abfc3c-871d-4975-9519-b8e8e30fe96f appears under the NFS share /nfs/data, following the ${namespace}-${pvcName}-${pvName} naming convention.
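The directory name can be reconstructed from the claim's metadata; a quick sketch of the ${namespace}-${pvcName}-${pvName} convention, using the values from this example:

```shell
# Reconstruct the backing-directory name the provisioner creates on the share.
namespace=default
pvcName=test-claim
pvName=pvc-e1abfc3c-871d-4975-9519-b8e8e30fe96f   # from `kubectl get pv`
dir="${namespace}-${pvcName}-${pvName}"
echo "$dir"   # default-test-claim-pvc-e1abfc3c-871d-4975-9519-b8e8e30fe96f
```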

(2) Use the StorageClass from a pod

[root@k8s-master deploy]# cat test-pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8s-master deploy]# 

When the pod completes, a SUCCESS file is created in the directory backing the PV.

[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
default                nfs-client-provisioner-77fbd4585f-dbqj5      1/1     Running     0          121m
default                test-pod                                     0/1     Completed   0          7s

Check the file created in the shared directory:

# tree  /nfs/data/
/nfs/data/
└── default-test-claim-pvc-e1abfc3c-871d-4975-9519-b8e8e30fe96f
    └── SUCCESS

1 directory, 1 file

This confirms the deployment works and NFS-backed shared volumes are provisioned dynamically.

In practice, a StorageClass is most often consumed by pods managed by a StatefulSet, whose volumeClaimTemplates field can reference the StorageClass directly.
Each entry under volumeClaimTemplates is simply a PVC template, analogous to the pod template under template; the controller uses it to create a PVC for each replica automatically.

In a StatefulSet, dynamic volumes are configured through volumeClaimTemplates:

[root@k8s-master deploy]# cat nginx-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
  volumeClaimTemplates:
  - metadata:
      name: nginx-temp
      annotations:
        volume.beta.kubernetes.io/storage-class: "my-storage"
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32500
[root@k8s-master deploy]# 
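One caveat: the manifest above creates a PVC per replica via volumeClaimTemplates, but the nginx container never mounts them, so nothing is actually written to NFS. To use the volumes, add a volumeMounts entry in the container referencing the claim template's name. A sketch (the mountPath is an assumption; pick whatever your application needs):

```yaml
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-temp                    # must match the volumeClaimTemplates name
          mountPath: /usr/share/nginx/html    # assumed path; serves content from NFS
```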

Create the resources:

# Create
[root@k8s-master deploy]# kubectl apply -f nginx-statefulset.yaml 
statefulset.apps/nginx-statefulset created
service/ngx-service created
# Check the PVCs that were created
[root@k8s-master deploy]# kubectl get pvc
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-temp-nginx-statefulset-0   Bound    pvc-5627713c-2f5f-4536-ad70-f20323d26898   2Gi        RWX            my-storage     89s
nginx-temp-nginx-statefulset-1   Bound    pvc-cdfcb0f0-082d-481f-a80c-c91834e9f81c   2Gi        RWX            my-storage     83s
nginx-temp-nginx-statefulset-2   Bound    pvc-cfd505ce-0cab-40ca-a914-0f7ba1540898   2Gi        RWX            my-storage     79s

[root@k8s-master deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                    STORAGECLASS   REASON   AGE
pvc-5627713c-2f5f-4536-ad70-f20323d26898   2Gi        RWX            Delete           Bound    default/nginx-temp-nginx-statefulset-0   my-storage              3m12s
pvc-cdfcb0f0-082d-481f-a80c-c91834e9f81c   2Gi        RWX            Delete           Bound    default/nginx-temp-nginx-statefulset-1   my-storage              3m6s
pvc-cfd505ce-0cab-40ca-a914-0f7ba1540898   2Gi        RWX            Delete           Bound    default/nginx-temp-nginx-statefulset-2   my-storage              3m2s

[root@k8s-master deploy]# 

# After the PVCs are created, the corresponding directories appear on the NFS share
[root@k8s-node01 ~]# cd /nfs/data/
[root@k8s-node01 data]# ll
total 0
drwxrwxrwx 2 root root  6 Apr 13 17:04 default-nginx-temp-nginx-statefulset-0-pvc-5627713c-2f5f-4536-ad70-f20323d26898
drwxrwxrwx 2 root root  6 Apr 13 17:04 default-nginx-temp-nginx-statefulset-1-pvc-cdfcb0f0-082d-481f-a80c-c91834e9f81c
drwxrwxrwx 2 root root  6 Apr 13 17:04 default-nginx-temp-nginx-statefulset-2-pvc-cfd505ce-0cab-40ca-a914-0f7ba1540898

The StatefulSet is now deployed. A NodePort Service was also configured, so while the service is healthy the application is reachable from outside the cluster.

[root@k8s-master deploy]# kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE

ngx-service   NodePort    10.1.206.123   <none>        80:32500/TCP   11m

# curl -I http://192.168.10.30:32500/
HTTP/1.1 200 OK
Server: nginx/1.19.9
Date: Tue, 13 Apr 2021 09:17:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 30 Mar 2021 14:47:11 GMT
Connection: keep-alive
ETag: "606339ef-264"
Accept-Ranges: bytes

[root@k8s-master deploy]#

 


 

posted @ 2021-04-13 09:57  梦徒