10-4 Shared Storage --- PV, PVC, and StorageClass (Part 1)
A PV (PersistentVolume) describes a piece of persistent storage. Below is a PV backed by an NFS export:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 10Gi
  # ReadWriteOnce: the volume can be mounted read-write by a single node.
  # ReadWriteMany would allow it to be mounted read-write by many nodes.
  accessModes:
  - ReadWriteOnce
  nfs:
    path: "/tmp"
    server: 172.22.1.2
A PVC (PersistentVolumeClaim) describes the storage properties a Pod wants, without caring how that storage is implemented:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
A PV can only be used after it has been bound to a PVC. Binding requires two things: 1) the PV must satisfy the PVC's requirements (enough capacity, matching access modes); 2) the storageClassName of the PV and the PVC must match.
When both conditions hold, the PersistentVolume controller notices the matching PV and binds it to the PVC automatically. Under the hood, binding simply means filling the PV's name into the PVC object.
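A sketch of what the bound PVC object looks like afterwards; the status fields and the recorded PV name are illustrative here, matching the `nfs` example above:

```yaml
# Illustrative only: how a bound PVC references its PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: nfs   # filled in by the PersistentVolume controller at bind time
status:
  phase: Bound      # what `kubectl get pvc` shows once binding succeeds
```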
Once the PV and PVC are bound, a Pod can consume the PVC as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: web-dev
spec:
  containers:
  - name: web-dev
    image: web:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: nfs
      mountPath: "/files"
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs
For an NFS volume, the kubelet mounts the remote directory specified by the PV onto the host, much as you would by hand with the mount command (mount -t nfs 172.22.1.2:/tmp <host dir>). Once mounted, nothing special remains from Docker's point of view: the kubelet still maps the host directory into the container, the same way docker run -v does.
StorageClass
A StorageClass is the mechanism for provisioning PVs automatically; in essence it is a template for PVs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-demo
# built-in storage plugin (provisioner)
provisioner: kubernetes.io/aws-ebs
# parameters are provisioner-specific; these are for aws-ebs
parameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: "10"
With a StorageClass in place, a PVC only needs to reference it by name; Kubernetes will create a matching PV automatically (dynamic provisioning) and bind the two:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: storage-class-demo
  resources:
    requests:
      storage: 10Gi
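For reference, the PV that dynamic provisioning creates looks roughly like the sketch below. The object name and volumeID are generated by the provisioner, so both values here are purely illustrative:

```yaml
# Illustrative: a PV as the aws-ebs provisioner might create it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-3e0555c2   # generated name, illustrative
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: storage-class-demo
  awsElasticBlockStore:
    volumeID: aws://us-east-1d/vol-0123456789abcdef0   # illustrative
    fsType: ext4
```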
Design points of StorageClass
1 Every PV and PVC has a StorageClass attribute; if none is specified, it defaults to the empty one ("").
2 The StorageClass referenced by a PV and a PVC does not have to exist as a real object for binding; matching names are enough (dynamic provisioning, of course, needs a real StorageClass).
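A consequence of point 1: if a PVC should bind only to statically created PVs and never trigger dynamic provisioning, set storageClassName explicitly to the empty string. A sketch (the PVC name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-only-pvc     # hypothetical name
spec:
  storageClassName: ""      # empty class: match only PVs that also have no class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Leaving storageClassName out entirely is different: the cluster's default StorageClass (if one is configured) may then provision a volume for the claim.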
GlusterFS
Three nodes, each with a spare raw disk.
Following the earlier steps, join the three machines to the Kubernetes cluster as nodes, and add their entries to /etc/hosts on each of them.
On all three nodes, install the GlusterFS client packages:
yum -y install glusterfs glusterfs-fuse
After installation, confirm on the master that the kube-apiserver allows privileged containers (required by the GlusterFS pods); you can grep the process for the allow-privileged flag:
ps aux | grep kube-apiserver | grep allow-privileged
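The DaemonSet below schedules only onto nodes carrying the label storagenode=glusterfs, so label the three storage nodes first (kubectl label node <node-name> storagenode=glusterfs). In manifest form, the node metadata it expects looks like this (node name hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1              # hypothetical node name
  labels:
    storagenode: glusterfs  # matched by the DaemonSet's nodeSelector
```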
Save the following as glusterfs-daemonset.yaml:
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probes validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"