K8s: Dynamic PVC Provisioning with NFS

Reference (official repo): https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

First, set up your own NFS server, then create the shared export directory:

# vim /etc/exports

/nfs/data/k8s *(insecure,rw,no_root_squash,sync)

Restart the NFS service (or run exportfs -avr) to make the export take effect.

Download class.yaml, deployment.yaml, and rbac.yaml from the repo above.

# vim class.yaml 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
# Nothing in this file needs to be changed.
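One parameter worth knowing about: archiveOnDelete controls what happens to the backing directory when a PVC is deleted. With "false" (above) the data is removed along with the PV; a variant of the same class, if you would rather keep the data around:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  # "true" makes the provisioner rename the directory with an
  # "archived-" prefix on PVC deletion instead of removing the data
  archiveOnDelete: "true"
```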
# vim rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
                                                      
# Nothing in this file needs to be changed either.
# vim deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 172.16.250.241
            - name: NFS_PATH
              value: /nfs/data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.250.241
            path: /nfs/data/k8s

# Replace the NFS_SERVER and NFS_PATH env values, and the matching server/path under volumes, with your own NFS server address and export path.

Run kubectl apply:

# kubectl  apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

Check the new StorageClass:

# kubectl  get storageclass
NAME                  PROVISIONER                  AGE
managed-nfs-storage   fuseim.pri/ifs               94s
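If you want claims that don't name a class to land on this one, Kubernetes supports marking a StorageClass as the cluster default via an annotation. A sketch of the same class with the annotation added (only do this if no other class is already the default on your cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # marks this class as the cluster-wide default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```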

Testing

Download the test examples from the repo.

# vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
# No changes needed.
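The volume.beta.kubernetes.io/storage-class annotation used above is the legacy form; on current clusters the same claim can be written with the spec.storageClassName field instead:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage  # replaces the beta annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```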

Apply it:

# kubectl  apply -f test-claim.yaml 
persistentvolumeclaim/test-claim created

Check the PVC:

# kubectl  get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim          Bound    pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219   1Mi        RWX            managed-nfs-storage   37s

Then check the PV:

# kubectl  get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                STORAGECLASS          REASON   AGE
pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219   1Mi        RWX            Delete           Bound      default/test-claim                   managed-nfs-storage            2m14s
The PV and PVC match up as expected, but two questions remain:

1. Can a PVC in a different namespace bind through this StorageClass?

2. How can I tell which pod or other resource is using a given PVC?

Experiment 1: check whether the StorageClass can be used from another namespace
# vim test_test-claim.yaml 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-test
  namespace: test
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Create the namespace, apply, and check:

# kubectl  create ns test
namespace/test created
# kubectl  apply -f test_test-claim.yaml 
persistentvolumeclaim/test-claim-test created
# kubectl  get pvc -n test
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim-test   Bound    pvc-38446809-a96d-404d-b6cb-712b78000c39   1Mi        RWX            managed-nfs-storage   10s
# Conclusion: a PVC in any namespace can bind through this StorageClass.

Experiment 2:

1. Inspect the full state of the bound PVC:

# kubectl edit  pvc/test-claim 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"managed-nfs-storage"},"name":"test-claim","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"1Mi"}}}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage
    volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
  creationTimestamp: "2020-07-04T12:40:03Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: test-claim
  namespace: default
  resourceVersion: "1578587"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test-claim
  uid: 5799e15b-431e-4f98-8e8e-b69fd75ef219
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  volumeMode: Filesystem
  volumeName: pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Mi
  phase: Bound

Run a pod to test the PVC:

# vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc  # must match the name under volumeMounts
      persistentVolumeClaim:
        claimName: test-claim  # the PVC to mount

Apply it and check the pod:

# kubectl  apply -f test-pod.yaml
# kubectl  get pod
NAME                                      READY   STATUS      RESTARTS   AGE
test-pod                                  0/1     Completed   0          67s

Check the NFS shared directory:

# The SUCCESS file is already there
# pwd
/nfs/data/k8s/default-test-claim-pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219
# ls
SUCCESS
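The directory name seen above follows the provisioner's naming scheme: namespace, PVC name, and PV name joined with dashes. A tiny Python sketch, just to make the convention explicit:

```python
def backing_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    """Build the directory name the nfs-client provisioner creates
    under the NFS export: <namespace>-<pvcName>-<pvName>."""
    return f"{namespace}-{pvc_name}-{pv_name}"

print(backing_dir("default", "test-claim",
                  "pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219"))
# default-test-claim-pvc-5799e15b-431e-4f98-8e8e-b69fd75ef219
```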

Next, let's try to find out who is using the PVC.

# After a lot of digging, the built-in views only show which claim a PV is bound to; there is no direct field on a PVC showing which pod uses it.
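Newer kubectl versions do show a "Mounted By" field in kubectl describe pvc, but you can also answer the question yourself by scanning pod specs for persistentVolumeClaim volumes, e.g. from kubectl get pods -o json. A sketch (the sample payload below is made up for illustration):

```python
def pods_using_pvc(pods_json: dict, claim_name: str) -> list:
    """Return names of pods whose volumes reference the given PVC,
    given a parsed `kubectl get pods -o json` payload."""
    hits = []
    for pod in pods_json.get("items", []):
        for vol in pod["spec"].get("volumes") or []:
            pvc = vol.get("persistentVolumeClaim")
            if pvc and pvc.get("claimName") == claim_name:
                hits.append(pod["metadata"]["name"])
    return hits

# Trimmed-down sample of what kubectl returns:
sample = {"items": [
    {"metadata": {"name": "test-pod"},
     "spec": {"volumes": [
         {"name": "nfs-pvc",
          "persistentVolumeClaim": {"claimName": "test-claim"}}]}},
    {"metadata": {"name": "other-pod"},
     "spec": {"volumes": [{"name": "tmp", "emptyDir": {}}]}},
]}
print(pods_using_pvc(sample, "test-claim"))  # ['test-pod']
```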

posted @ 2020-07-04 21:25  91King