First, set up an NFS server. Suppose the server exports the directory /nfs. Under /nfs, create a subdirectory dir1, and inside it create a file hello.txt:
$ pwd
/nfs/dir1
$ ls
hello.txt
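For reference, the server-side export assumed in this walkthrough could be a single line in /etc/exports on 192.168.2.101, applied with `exportfs -ra`. This is only a sketch: the client subnet and the export options below are assumptions, so match them to your own cluster network and security requirements.

```shell
# /etc/exports on the NFS server (192.168.2.101)
# The client subnet 192.168.2.0/24 is an assumption -- use your nodes' subnet.
/nfs 192.168.2.0/24(rw,sync,no_subtree_check)
```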
Next, create a PV and a PVC in the Kubernetes cluster with the following YAML. Note that the PV's reclaim policy is set to Recycle:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.2.101
    path: /nfs/dir1
And the matching PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
Once both are created, check the result: Kubernetes has matched the PVC to the PV (same storageClassName, compatible access modes, sufficient capacity), and they are bound:
$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
nfs-pv1   1Gi        RWX            Recycle          Bound    default/nfs-pvc1   nfs                     6s
However, at this point the NFS share is not yet mounted on any Kubernetes node (running df -hT | grep nfs on the nodes returns nothing); kubelet mounts a volume on a node only when a Pod using the claim is scheduled there.
Next, create the following Pod, which mounts the PVC at /container:
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
  - image: tomcat:8
    name: tomcat
    volumeMounts:
    - name: volume1
      mountPath: /container
  volumes:
  - name: volume1
    persistentVolumeClaim:
      claimName: nfs-pvc1
After the Pod is created, we can see that the node running it now has the NFS directory mounted (under kubelet's per-Pod layout, /var/lib/kubelet/pods/&lt;pod-UID&gt;/volumes/kubernetes.io~nfs/&lt;PV-name&gt;):
$ kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
tomcat   1/1     Running   0          4s    172.26.220.109   peng04   <none>           <none>
$ df -hT | grep nfs
192.168.2.101:/nfs/dir1 nfs4 50G 7.1G 43G 15% /var/lib/kubelet/pods/96df027e-42f6-429e-b1c2-a81089caaf4a/volumes/kubernetes.io~nfs/nfs-pv1
Inside the Pod, the /container directory now shows the hello.txt file from the NFS share:
$ kubectl exec tomcat -- ls /container
hello.txt
Next, delete the Pod; the NFS mount on that node disappears.
Then delete the PVC. The PV's status first changes to Released, then almost immediately becomes Available; at the same time, the contents of the /nfs/dir1/ directory on the NFS server are wiped:
$ kubectl delete pvc nfs-pvc1
$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
nfs-pv1   1Gi        RWX            Recycle          Released   default/nfs-pvc1   nfs                     5m6s
$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
nfs-pv1   1Gi        RWX            Recycle          Available                     nfs                     5m9s
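The "directory was wiped" step boils down to a recursive delete of everything on the volume, hidden entries included. A local sketch of that cleanup pattern, run against a throwaway temp directory rather than the real NFS export:

```shell
# Demonstrate the scrub pattern on a disposable directory.
scrub="$(mktemp -d)"
touch "$scrub/hello.txt" "$scrub/.hidden"
mkdir "$scrub/sub"
# Glob set that removes regular and hidden entries, but not . and .. :
rm -rf "$scrub"/..?* "$scrub"/.[!.]* "$scrub"/*
# The directory itself survives, but it is now empty:
ls -A "$scrub"
```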
This is because the PV's reclaim policy was set to Recycle: when the claim is released, Kubernetes scrubs the volume (effectively rm -rf on its contents) and then marks the PV Available for a new claim. Note that the Recycle policy is deprecated; dynamic provisioning with a StorageClass is the recommended alternative.
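Under the hood, the scrub is carried out by a recycler Pod launched against the volume. Its default template is roughly the following (adapted from the Kubernetes documentation's custom-recycler example; the image and paths may differ across versions, so treat this as a sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "registry.k8s.io/busybox"
    # Delete everything under /scrub (hidden files included), then fail the
    # Pod if anything is left behind:
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
```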