Kubernetes: Automatically Provisioning PVCs and PVs

1. Overview

  • This article uses nfs-subdir-external-provisioner to automatically create PVs in response to PVC requests in a Kubernetes cluster, backing them with NFS shared storage; a StorageClass gives all claims a single, consistent storage configuration.
  • This approach does not support dynamic expansion of PVC/PV capacity; if you need dynamic storage expansion, refer to the NFS CSI driver deployment instead.

2. Prerequisites

  • NFS server: already set up with an exported directory (e.g. /nfs_share/k8s).
  • Kubernetes cluster: version ≥ 1.20, with a network plugin (e.g. Calico) running normally.
  • Image registry: make sure the nfs-subdir-external-provisioner image can be pulled.

3. Expansion Workflow

3.1 Workflow description

  • User action: change the PVC's storage request via kubectl edit pvc or by updating its YAML.
  • System response:
    Kubernetes updates the PVC/PV status.
    The provisioner confirms that the StorageClass allows expansion.
  • Storage layer:
    The NFS server checks the free space on the disk backing the export.
    The PV capacity field is updated (a logical change only).
  • Filesystem adjustment:
    If the filesystem supports online growth (e.g. ext4/xfs), the kubelet expands it automatically.
  • Application:
    The mount point reflects the new capacity and the application can use the extra space (some scenarios require a Pod restart).
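As a concrete illustration of the user-triggered step: only `spec.resources.requests.storage` changes on the PVC. The claim name below is borrowed from the test in section 6, so treat the manifest as a sketch rather than a required file:

```yaml
# kubectl edit pvc data-busybox-test-0   (or re-apply this manifest)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-busybox-test-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 2Gi   # was 1Gi; raising this field is what requests the expansion
```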

3.2 Flow diagram

[User action]  
  │  
  ▽  
Edit the PVC to request more storage (e.g. 1Gi → 2Gi)  
  │  
  ▽  
Kubernetes detects the PVC change  
  │  
  ▽  
The provisioner checks that the StorageClass allows expansion  
  │  
  ▽  
The NFS server verifies that disk space is sufficient  
  │  
  ▽  
The PV capacity field is updated (1Gi → 2Gi)  
  │  
  ▽  
The filesystem grows automatically (e.g. ext4/xfs online resize)  
  │  
  ▽  
The Pod mount point picks up the new size (no restart needed*)  
  │  
  ▽  
The application verifies the new capacity (df -h /data)  

4. Download the Provisioner Deployment Files

  • Visit the nfs-subdir-external-provisioner GitHub repository and download the latest deployment templates:
wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml
wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml
wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/deployment.yaml

5. Deployment Steps

5.1 Create the namespace

kubectl create ns nfs-provisioner

5.2 Modify the provisioner deployment files and apply them

5.2.1 rbac.yaml (no changes needed beyond the namespace, already set to nfs-provisioner below)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Parameter notes
  1. **ServiceAccount**:

    • Purpose: gives the provisioner Pod an identity; must match serviceAccountName in deployment.yaml.
    • Key fields: name and namespace.
  2. **ClusterRole**:

    • Purpose: defines the cluster-level permissions the provisioner needs (managing PVs, PVCs, and StorageClasses).
    • Key permissions:
      • persistentvolumes: create/delete PVs.
      • persistentvolumeclaims: update PVC status.
      • storageclasses: read storage class configuration.
  3. **ClusterRoleBinding**:

    • Purpose: binds the ServiceAccount to the ClusterRole, granting it those permissions.
Notes on modification
  • Namespace consistency: all resources must be deployed into the same namespace (e.g. nfs-provisioner).
  • Least privilege: do not broaden the verbs lists (e.g. avoid "*").

5.2.2 class.yaml (needs modification)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"
allowVolumeExpansion: true
reclaimPolicy: Retain
Parameter notes
  1. **provisioner**

    • Purpose: names the storage provisioner; must exactly match PROVISIONER_NAME in deployment.yaml.
    • Example: k8s-sigs.io/nfs-subdir-external-provisioner
  2. **allowVolumeExpansion**

    • Purpose: whether PVCs using this class may be expanded dynamically.
    • Values: true (allow) or false (deny).
  3. **reclaimPolicy**

    • Purpose: defines the PV reclaim policy.
    • Values:
      • Retain: keep the PV and its data after the PVC is deleted (manual cleanup required).
      • Delete: deleting the PVC also deletes the PV and the NFS subdirectory.
    • Recommendation: Delete for production; Retain is fine for testing.
  4. **parameters.archiveOnDelete**

    • Purpose: what to do with the NFS subdirectory when the PVC is deleted.
    • Values:
      • "true": rename the subdirectory to archived-<original name> (data kept).
      • "false": delete the subdirectory outright (full cleanup).
    • Recommendation: "false" for test environments; choose carefully in production.
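Following the recommendations above, a production-leaning variant of class.yaml might look like this (a sketch; the class name nfs-client-delete is invented for illustration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-delete              # hypothetical second class
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"             # remove the NFS subdirectory with the PVC
allowVolumeExpansion: true
reclaimPolicy: Delete                  # PV deleted automatically with the PVC
```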

 

5.2.3 deployment.yaml (needs modification)

  • I pulled the registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 image through a server outside mainland China; contact me if you need it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 172.16.4.177:8090/ltzx/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.16.4.60
            - name: NFS_PATH
              value: /nfs_share/k8s/nfs-provisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.4.60
            path: /nfs_share/k8s/nfs-provisioner
Parameter notes
  1. **image**

    • Purpose: the NFS provisioner container image.
    • Recommended version: the latest stable release (e.g. v4.0.2).
    • Private registries: configure imagePullSecrets if authentication is required.
  2. **env environment variables**:

    • **PROVISIONER_NAME**: must exactly match provisioner in class.yaml.
    • **NFS_SERVER**: IP address or hostname of the NFS server.
    • **NFS_PATH**: absolute path of the NFS export.
  3. **volumes**:

    • **nfs.server and nfs.path**: must match NFS_SERVER and NFS_PATH.
    • **readOnly**: defaults to false; keep the volume writable.
  4. **replicas and strategy**:

    • **replicas: 1**: run a single replica (avoids contention between multiple instances).
    • **strategy.type: Recreate**: on update, delete the old Pod before creating the new one.
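If the private registry (172.16.4.177:8090 here) requires login, reference a pull secret from the Pod template. A sketch, assuming a docker-registry secret named regcred (a made-up name) has already been created in the nfs-provisioner namespace:

```yaml
# kubectl create secret docker-registry regcred -n nfs-provisioner \
#   --docker-server=172.16.4.177:8090 --docker-username=<user> --docker-password=<pass>
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred                # hypothetical secret name
      serviceAccountName: nfs-client-provisioner
```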

 

5.3 Apply the files

kubectl apply -f rbac.yaml
kubectl apply -f class.yaml
kubectl apply -f deployment.yaml
[root@master1 nfs]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created


[root@master1 nfs]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/nfs-client created


[root@master1 nfs]# kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created

5.4 Verify the deployment

  • Check the status of the nfs-provisioner Pod
[root@master1 nfs]# kubectl get pods -n nfs-provisioner
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5bd66c9447-hj8qz   1/1     Running   0          10h
  • Inspect the StorageClass and confirm "allowVolumeExpansion": true, i.e. dynamic expansion is allowed
[root@master1 nfs]# kubectl get storageclass nfs-client -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"true"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner","reclaimPolicy":"Retain"}
  creationTimestamp: "2025-04-02T09:06:32Z"
  name: nfs-client
  resourceVersion: "10513520"
  uid: e1496b73-1d18-46b7-9fe6-598a9229f125
parameters:
  archiveOnDelete: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
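If you only need that one flag rather than the whole object, filter for it. The snippet below runs against a saved copy of the output so it works offline; in a live cluster you would pipe `kubectl get storageclass nfs-client -o yaml` into the same grep:

```shell
# Offline stand-in for: kubectl get storageclass nfs-client -o yaml | grep '^allowVolumeExpansion'
sc_yaml='allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass'
expansion=$(printf '%s\n' "$sc_yaml" | grep '^allowVolumeExpansion:')
echo "$expansion"   # allowVolumeExpansion: true
```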

6. Single-Replica Test

6.1 Deploy busybox for testing

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-test
spec:
  serviceName: "busybox-service"
  replicas: 1  # single-replica test
  selector:
    matchLabels:
      app: busybox-test
  template:
    metadata:
      labels:
        app: busybox-test
    spec:
      containers:
      - name: busybox
        image: 172.16.4.177:8090/ltzx/busybox:latest  # replace with your private registry address
        command: ["/bin/sh", "-c", "sleep infinity"]  # keep the container running
        volumeMounts:
        - name: data
          mountPath: /mnt/storage  # test mount point
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-client  # points at your NFS StorageClass
      resources:
        requests:
          storage: 1Gi
kubectl apply -f busybox-test.yaml

6.2 Test steps

# check that the busybox-test pod is running
[root@master1 test]# kubectl get pods 
NAME             READY   STATUS    RESTARTS      AGE
busybox-test-0   1/1     Running   0             6s

# exec into the pod to verify the storage
[root@master1 test]# kubectl exec -it busybox-test-0 -- /bin/sh
# inspect the mount point inside the container
/ # df -h /mnt/storage
Filesystem                Size      Used Available Use% Mounted on
172.16.4.60:/nfs_share/k8s/nfs-provisioner/default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18
                        299.4G     90.7G    208.7G  30% /mnt/storage
                        
# create a test file to verify write permission on the PV
/ # echo "NFS Storage Test" > /mnt/storage/testfile.txt
/ # cat /mnt/storage/testfile.txt
NFS Storage Test
# check the PVC status
[root@master1 test]# kubectl get pvc 
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-busybox-test-0   Bound    pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18   1Gi        RWO            nfs-client     2m21s
# check the PV status
[root@master1 test]# kubectl get pv
pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18   1Gi        RWO            Retain           Bound      default/data-busybox-test-0              nfs-client                   92m
  • Verify on the NFS server; "/nfs_share/k8s/nfs-provisioner" is the NFS export configured above
# cd into the NFS export
[root@localhost ~]# cd  /nfs_share/k8s/nfs-provisioner
# list the automatically generated PV directory
[root@localhost nfs-provisioner]# ls
default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18
# read back the file created above
[root@localhost nfs-provisioner]# cd default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18/
[root@localhost default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18]# cat testfile.txt 
NFS Storage Test
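The directory name seen above is not random: nfs-subdir-external-provisioner names each subdirectory `${namespace}-${pvcName}-${pvName}`. Reconstructing it from the values in this test:

```shell
# The provisioner creates one subdirectory per PV, named ${namespace}-${pvcName}-${pvName}
namespace=default
pvc_name=data-busybox-test-0
pv_name=pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18
subdir="${namespace}-${pvc_name}-${pv_name}"
echo "$subdir"   # default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18
```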

7. Multi-Replica Test

7.1 Change the replica count to 3 (or more)

[root@master1 test]# cat busybox-test.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-test
spec:
  serviceName: "busybox-service"
  replicas: 3  # multi-replica test
  selector:
    matchLabels:
      app: busybox-test
  template:
    metadata:
      labels:
        app: busybox-test
    spec:
      containers:
      - name: busybox
        image: 172.16.4.177:8090/ltzx/busybox:latest  # replace with your private registry address
        command: ["/bin/sh", "-c", "sleep infinity"]  # keep the container running
        volumeMounts:
        - name: data
          mountPath: /mnt/storage  # test mount point
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-client  # points at your NFS StorageClass
      resources:
        requests:
          storage: 1Gi
kubectl apply -f busybox-test.yaml 

7.2 Verification

# check pod status
[root@master1 test]# kubectl get pods 
NAME             READY   STATUS    RESTARTS      AGE
busybox-test-0   1/1     Running   0             3m24s
busybox-test-1   1/1     Running   0             9s
busybox-test-2   1/1     Running   0             5s
# check PV and PVC status
[root@master1 test]# kubectl get pv | grep "^pvc"
pvc-32f0aa55-b4f5-4ad5-a11c-04c63a2ac29c   1Gi        RWO            Retain           Bound    default/data-busybox-test-1              nfs-client                   94m
pvc-f02fd5d2-b4d5-431b-8243-971184ed2182   1Gi        RWO            Retain           Bound    default/data-busybox-test-2              nfs-client                   94m
pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18   1Gi        RWO            Retain           Bound    default/data-busybox-test-0              nfs-client                   97m
[root@master1 test]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-busybox-test-0   Bound    pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18   1Gi        RWO            nfs-client     97m
data-busybox-test-1   Bound    pvc-32f0aa55-b4f5-4ad5-a11c-04c63a2ac29c   1Gi        RWO            nfs-client     94m
data-busybox-test-2   Bound    pvc-f02fd5d2-b4d5-431b-8243-971184ed2182   1Gi        RWO            nfs-client     94m
  • Check on the NFS server
# list the NFS export; there are now three directories
[root@localhost ~]# cd  /nfs_share/k8s/nfs-provisioner
[root@localhost nfs-provisioner]# ls
default-data-busybox-test-0-pvc-f5396e40-f0b3-4019-9d38-e738e1a37d18  default-data-busybox-test-2-pvc-f02fd5d2-b4d5-431b-8243-971184ed2182
default-data-busybox-test-1-pvc-32f0aa55-b4f5-4ad5-a11c-04c63a2ac29c

8. References

https://www.cnblogs.com/lori/p/17944619

 

posted @ 2025-04-03 21:42  Leonardo-li