K8s: NFS Storage + PV + PVC + Web (Hands-On Mini Case)
PV_PVC
1. Create the PV
Note: a PV does not need a namespace (setting one has no effect, since PVs are cluster-scoped), and the PV's capacity must not exceed the backend storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
  namespace: myserver    # optional; PVs are cluster-scoped, so the namespace field is ignored
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany       # can be mounted read-only by many nodes
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 10.0.0.206
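The PV above assumes the NFS export already exists on 10.0.0.206. A minimal /etc/exports entry might look like the following sketch; the 10.0.0.0/24 subnet and the export options are assumptions, so adjust them for your environment:

```
# /etc/exports on the NFS server (10.0.0.206)
/data/k8sdata/myserver/myappdata 10.0.0.0/24(rw,no_root_squash)
```

After editing /etc/exports, re-export with `exportfs -r` so the change takes effect.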
Check the PV:
kubectl get pv
2. Create the PVC
Note:
1. The requested storage must not exceed what the PV defines.
2. A namespace must be set, and it must be the same namespace the Pod runs in.
3. The accessModes must match the PV's.
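Both the PVC and the Pod live in the myserver namespace; the manifests here assume that namespace already exists. If it does not, create it first with a minimal manifest like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myserver
```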
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver    # Pods that use this PVC must run in this namespace
spec:
  volumeName: myserver-myapp-static-pv
  accessModes:
    - ReadOnlyMany       # must match the PV's accessModes
  resources:
    requests:
      storage: 10Gi
Check the PVC:
[root@k8s-31:/data/pv_pvc]# kubectl get pvc -n myserver
[root@k8s-31:/data/pv_pvc]# kubectl describe pvc -n myserver myserver-myapp-static-pvc
Error case
The PVC's accessModes must match the PV's. Otherwise binding fails: with the PV above using ReadOnlyMany, a PVC requesting ReadWriteOnce produces the error below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning VolumeMismatch 8s (x9 over 2m1s) persistentvolume-controller Cannot bind to requested volume "myserver-myapp-static-pv": incompatible accessMode
3. Deploy the Web App
Note: the web Pod declares a volume of type persistentVolumeClaim, rather than an nfs or hostPath volume.
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1            # adjust the replica count as needed
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-container
        image: nginx:1.20.0
        #imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/usr/share/nginx/html/statics"
          name: statics-datadir                    # must match the volume name below
      volumes:
      - name: statics-datadir                      # referenced by volumeMounts above
        persistentVolumeClaim:
          claimName: myserver-myapp-static-pvc     # the PVC created earlier
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend
Modify haproxy
listen k8s-pv-pvs
    bind 10.0.0.205:80
    mode tcp
    balance roundrobin
    server 10.0.0.32 10.0.0.32:30080 check inter 3s fall 3 rise 5
    server 10.0.0.33 10.0.0.33:30080 check inter 3s fall 3 rise 5
[root@ubuntu2204 ~]#systemctl restart haproxy.service    ### restart haproxy
Check:
[root@k8s-31:/data/pv_pvc]# kubectl get pod -n myserver -o wide    ### see which node the pod runs on
Upload some data
Backend NFS or Ceph:
For this test the backend storage is still NFS, on 10.0.0.206:
[root@206 myappdata]#pwd
/data/k8sdata/myserver/myappdata
Place a file or image in this directory, e.g. a.txt.
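As a sketch, the test file can be created on the NFS server like this (the file name a.txt and its contents are placeholders):

```shell
# Run on the NFS server (10.0.0.206); the path comes from the PV definition
mkdir -p /data/k8sdata/myserver/myappdata
echo "hello from NFS" > /data/k8sdata/myserver/myappdata/a.txt
cat /data/k8sdata/myserver/myappdata/a.txt
```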
Access:
http://10.0.0.205/statics/a.txt
