storageclass

A StorageClass is a Kubernetes storage class. By creating a StorageClass, storage volumes can be provisioned dynamically for Kubernetes users instead of being created by hand.

[root@master ~]# kubectl explain storageclass

KIND:     StorageClass
VERSION:  storage.k8s.io/v1

DESCRIPTION:
     StorageClass describes the parameters for a class of storage for which
     PersistentVolumes can be dynamically provisioned.

     StorageClasses are non-namespaced; the name of the storage class according
     to etcd is in ObjectMeta.Name.

FIELDS:
   allowVolumeExpansion <boolean>
     AllowVolumeExpansion shows whether the storage class allow volume expand

   allowedTopologies    <[]Object>
     Restrict the node topologies where volumes can be dynamically provisioned.
     Each volume plugin defines its own supported topology specifications. An
     empty TopologySelectorTerm list means there is no topology restriction.
     This field is only honored by servers that enable the VolumeScheduling
     feature.

   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   mountOptions <[]string>
     Dynamically provisioned PersistentVolumes of this storage class are created
     with these mountOptions, e.g. ["ro", "soft"]. Not validated - mount of the
     PVs will simply fail if one is invalid.

   parameters   <map[string]string>
     Parameters holds the parameters for the provisioner that should create
     volumes of this storage class.

   provisioner  <string> -required-
     Provisioner indicates the type of the provisioner.

   reclaimPolicy        <string>
     Dynamically provisioned PersistentVolumes of this storage class are created
     with this reclaimPolicy. Defaults to Delete.

   volumeBindingMode    <string>
     VolumeBindingMode indicates how PersistentVolumeClaims should be
     provisioned and bound. When unset, VolumeBindingImmediate is used. This
     field is only honored by servers that enable the VolumeScheduling feature.

Official example for Amazon (AWS EBS):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate

Provisioner

A StorageClass needs a provisioner, which determines what kind of storage is used to create the PV. The provisioner can be an in-tree (built-in) provisioner or one supplied by an external vendor; external provisioners require additional configuration.

The official documentation describes the available provisioners in detail:
https://kubernetes.io/docs/concepts/storage/storage-classes/ (search for "Provisioner")
For an external provisioner, the methods under https://github.com/kubernetes-incubator/external-storage/ can be used as a reference for building the provisioner behind a StorageClass.
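A minimal sketch of the difference (the class names here are only illustrative; example.com/nfs matches the external NFS provisioner deployed later in this article):

# In-tree provisioner: ships with Kubernetes, no extra deployment needed
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# External provisioner: the name must match what the separately deployed
# provisioner program registers itself as
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: example.com/nfs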

parameters

type: io1, gp2, sc1, or st1; the default is gp2. See the official documentation for details.
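A sketch of what the parameters block might look like for an io1 volume (field names follow the AWS EBS provisioner documentation; the values are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-io1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1          # provisioned-IOPS volume type
  iopsPerGB: "10"    # IOPS per GiB, specific to io1 volumes
  fsType: ext4       # filesystem created on the volume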

reclaimPolicy (reclaim policy)

When a persistent volume (PV) is created dynamically by a storage class, its reclaimPolicy field can be specified; the reclaim policy can be either Delete or Retain. If reclaimPolicy is not specified when the StorageClass object is created, it defaults to Delete.
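For example, assuming the standard class from the AWS example above has been applied, the effective policy can be read back directly from the object:

[root@master ~]# kubectl get storageclass standard -o jsonpath='{.reclaimPolicy}{"\n"}'
Retain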

Mount Options

If a mount option is specified that the volume plugin does not support, provisioning will fail.
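A small sketch of the field (the class name is hypothetical and the options are typical NFS mount options; they are passed through without validation, so an invalid one only surfaces when the PV is mounted):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-tuned
provisioner: example.com/nfs
mountOptions:
  - hard             # retry NFS requests indefinitely
  - nfsvers=4.1      # force NFSv4.1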

Volume Binding Mode

This field controls when volume binding and dynamic provisioning happen.

By default, the Immediate mode means that volume binding and dynamic provisioning occur as soon as the PersistentVolumeClaim is created. For storage backends that are topology-constrained and not globally accessible from every node in the cluster, PersistentVolumes are then bound or provisioned without knowledge of the Pod's scheduling requirements, which can result in unschedulable Pods.

A cluster administrator can address this by specifying the WaitForFirstConsumer mode, which delays binding and provisioning of a PersistentVolume until a Pod that uses the PersistentVolumeClaim is created. PersistentVolumes are then selected or provisioned according to the topology implied by the Pod's scheduling constraints, including but not limited to resource requirements, node selectors, pod affinity and anti-affinity, and taints and tolerations.

Allow Volume Expansion

A PersistentVolume can be configured to be expandable. When this feature is set to true, users can resize a volume by editing the corresponding PVC object. A volume supports expansion when its storage class has allowVolumeExpansion set to true (see the official documentation for the supported volume types). This feature can only be used to grow a volume, not to shrink it.
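A minimal sketch of a resize, assuming a class named expandable-sc and an existing claim named data-pvc (both names are hypothetical): expansion is enabled on the class and then triggered by editing the claim's requested size.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true

# grow an existing claim bound to this class from 1Gi to 2Gi
kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

The following example shows volumeBindingMode: WaitForFirstConsumer together with allowedTopologies: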

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b

Parameters

A StorageClass has parameters that describe the volumes belonging to that class. Which parameters are accepted depends on the provisioner. For example, the value io1 for the parameter type and the parameter iopsPerGB are specific to EBS PVs. When a parameter is omitted, its default value is used. A StorageClass can define at most 512 parameters, and the total length of the parameters object, including keys and values, cannot exceed 256 KiB.

There are many kinds of provisioners; see the official website for the full list. A few are briefly introduced here.

Glusterfs

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"

resturl: URL of the Gluster REST service/Heketi service that provisions Gluster volumes on demand (Heketi can be used to manage GlusterFS). The general format should be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is installed in OpenShift/Kubernetes and exposed as a routable service, a format like http://heketi-storage-project.cloudapps.mystorage.com can be used, where the FQDN is a resolvable Heketi service URL.

restauthenabled: boolean that enables authentication to the Gluster REST server. If this value is 'true', restuser and restuserkey, or secretNamespace + secretName, must be filled in. This option is deprecated; authentication is enabled whenever restuser, restuserkey, secretName, or secretNamespace is specified.

restuser: Gluster REST service/Heketi user who has permission to create volumes in the Gluster trusted pool.

restuserkey: the Gluster REST service/Heketi user's password, used to authenticate to the REST server. This parameter is deprecated in favor of secretNamespace + secretName.

secretNamespace, secretName: identify a Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password is used when both secretNamespace and secretName are omitted. The provided Secret must have its type set to "kubernetes.io/glusterfs", for example created in this way:

kubectl create secret generic heketi-secret  --type="kubernetes.io/glusterfs"  --from-literal=key='opensesame'   --namespace=default

An example of the secret can be found in glusterfs-provisioning-secret.yaml.

clusterid: 630372ccdc720a92c681fb928f27b53f is the ID of the cluster, which Heketi will use when provisioning the volume. It can also be a list of clusterids, for example: "8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.

gidMin, gidMax: the minimum and maximum values of the GID range for the storage class. A unique value (GID) in this range (gidMin-gidMax) is used for dynamically provisioned volumes. These are optional values. If not specified, the volume is provisioned with a GID between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively.

volumetype: the volume type and its parameters can be configured with this optional value. If the volume type is not declared, the provisioner decides the volume type. For example: volumetype: replicate:3, where '3' is the replica count. For a Disperse/EC volume: volumetype: disperse:4:2, where '4' is data and '2' is the redundancy count. volumetype can also be set to none: volumetype: none. For the available volume types and administration options, refer to the Administration Guide. For more reference information, see How to configure Heketi.

When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service named gluster-dynamic-<claimname>. The dynamic endpoint and headless service are automatically deleted when the PVC is deleted.
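As a sketch of the consumer side, a claim against the slow class above could look like this (the claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi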

Ceph RBD

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"

monitors: Ceph monitors, comma delimited. This parameter is required.
adminId: Ceph client ID that is capable of creating images in the pool. Default is "admin".
adminSecretName: Secret name for adminId. This parameter is required. The provided secret must have type "kubernetes.io/rbd".
adminSecretNamespace: the namespace for adminSecretName. Default is "default".
pool: Ceph RBD pool. Default is "rbd".
userId: Ceph client ID that is used to map the RBD image. Default is the same as adminId.
userSecretName: the name of the Ceph Secret for userId to map the RBD image. It must exist in the same namespace as the PVC. This parameter is required. The provided secret must have type "kubernetes.io/rbd", for example created in this way:

kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
  --namespace=kube-system

userSecretNamespace: the namespace for userSecretName.
fsType: fsType that is supported by Kubernetes. Default: "ext4".
imageFormat: Ceph RBD image format, "1" or "2". Default is "1".
imageFeatures: this parameter is optional and should only be used if imageFormat is set to "2". The only feature currently supported is layering. Default is "", and no features are turned on.
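A sketch of the consumer side for the fast class above, assuming the Ceph user key is at hand (the key value and claim name are placeholders). First the user secret referenced by userSecretName, created in the PVC's namespace:

kubectl create secret generic ceph-secret-user --type="kubernetes.io/rbd" \
  --from-literal=key='<ceph-user-key>' --namespace=default

Then the claim itself:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi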

NFS (external provisioner)

Reference: https://jimmysong.io/kubernetes-handbook/practice/using-nfs-for-persistent-storage.html

(1) Create the ServiceAccount that runs nfs-provisioner

[root@master ~]# vim serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
[root@master ~]# kubectl apply -f serviceaccount.yaml
[root@master ~]# kubectl get serviceaccount
NAME              SECRETS   AGE
default           1         17d
nfs-provisioner   1         27s

(2) Grant RBAC permissions to the ServiceAccount

[root@master ~]# vim rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# kubectl apply -f rbac.yaml

(3) Create a Deployment whose pod runs the nfs-provisioner program (the program that carves out PVs)

[root@master ~]# mkdir -p /data/nfs_pro
[root@master ~]# vim /etc/exports
/data/nfs_pro 192.168.1.11/24(rw,no_root_squash)
[root@master ~]# systemctl restart nfs
[root@master ~]# vim nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
       app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.1.11
            - name: NFS_PATH
              value: /data/nfs_pro
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.11
            path: /data/nfs_pro
[root@master ~]# kubectl apply -f nfs-deployment.yaml
[root@master ~]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
nfs-provisioner-fd5f59b5f-qsnhx   1/1     Running   0          4s
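
Before creating the StorageClass it can help to confirm that the provisioner actually started and is watching for claims (what exactly gets logged depends on the image); a healthy provisioner typically logs that it is serving example.com/nfs:

[root@master ~]# kubectl logs deploy/nfs-provisioner | tail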

(4) Create the StorageClass

[root@master ~]# vim nfs-storageclass.yaml
[root@master ~]# cat nfs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs     # must match the PROVISIONER_NAME env value in the Deployment above
[root@master ~]# kubectl apply -f nfs-storageclass.yaml
[root@master ~]# kubectl get sc
NAME   PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    example.com/nfs   Delete          Immediate           false                  17s
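
Optionally, the class can be marked as the cluster default so that PVCs which omit storageClassName still use it (this uses the standard is-default-class annotation):

[root@master ~]# kubectl patch storageclass nfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'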

(5) Create a PVC

[root@master ~]# vim claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
At this point, checking the PVs and PVCs shows that none have been created yet:
[root@master ~]# kubectl get pv   
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             
[root@master ~]# kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
[root@master ~]# kubectl apply -f claim.yaml
After applying the claim, checking PVs and PVCs again shows they have been created dynamically:
[root@master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS   REASON   AGE
pvc-97141599-5dec-463c-8abe-c2ad12fe687d   1Gi        RWX            Delete           Bound       default/test-claim1   nfs                     115s
[root@master ~]# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim1   Bound    pvc-97141599-5dec-463c-8abe-c2ad12fe687d   1Gi        RWX            nfs            119s
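
On the NFS server, the nfs-client provisioner normally creates one sub-directory per provisioned PV under the export, so a quick look at the export directory shows where the claim's data will live (the naming pattern may vary by provisioner version):

[root@master ~]# ls /data/nfs_pro/
# expect one directory per PV, named roughly <namespace>-<pvc-name>-<pv-name>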

(6) Create a pod that uses the PV dynamically generated through the StorageClass

[root@master ~]# vim read-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: read-pod
spec:
  containers:
  - name: read-pod
    image: nginx
    volumeMounts:
      - name: nfs-pvc
        mountPath: /usr/share/nginx/html
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim1
[root@master ~]# kubectl apply -f read-pod.yaml
[root@master ~]# kubectl get pod -owide
read-pod        1/1     Running   0          10s   10.244.1.95   node1
[root@master ~]# kubectl describe pod read-pod
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim 
    ClaimName:  test-claim1
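
To confirm the pod's mount really is backed by the NFS share, a quick write and read through the pod (the file name and content are arbitrary):

[root@master ~]# kubectl exec read-pod -- sh -c 'echo hello-from-nfs > /usr/share/nginx/html/index.html'
[root@master ~]# kubectl exec read-pod -- cat /usr/share/nginx/html/index.html
hello-from-nfs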

Create a StatefulSet with dynamically generated storage

Steps (1), (2), (3), and (4) above must all have been deployed successfully before the volumeClaimTemplates below can be used.

[root@master ~]# vim statefulset-storage.yaml
apiVersion: v1
kind: Service
metadata:
  name: storage
  labels:
    app: storage
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: storage
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storage
spec:
  selector:
    matchLabels:
      app: storage
  serviceName: "storage"
  replicas: 2
  template:
    metadata:
      labels:
        app: storage
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "nfs"
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 2Gi
[root@master ~]# kubectl apply -f statefulset-storage.yaml
[root@master ~]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                 
storage       ClusterIP   None            <none>        80/TCP                 
[root@master ~]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
storage-0                         1/1     Running   0          29s
storage-1                         1/1     Running   0          27s
[root@master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                   STORAGECLASS   REASON   AGE
pvc-ab21661f-6c0e-42cd-9895-d82c511595df   2Gi        RWX            Delete           Bound       default/www-storage-1   nfs                     6m49s
pvc-e9de3930-288c-46e3-9c22-378777e2cf30   2Gi        RWX            Delete           Bound       default/www-storage-0   nfs            
[root@master ~]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-storage-0   Bound    pvc-e9de3930-288c-46e3-9c22-378777e2cf30   2Gi        RWX            nfs            10m
www-storage-1   Bound    pvc-ab21661f-6c0e-42cd-9895-d82c511595df   2Gi        RWX            nfs            8m1s
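
Because volumeClaimTemplates creates one PVC per ordinal, data written by storage-0 stays on www-storage-0 even if the pod is deleted and recreated; a rough check under that assumption:

[root@master ~]# kubectl exec storage-0 -- sh -c 'echo storage-0 > /usr/share/nginx/html/index.html'
[root@master ~]# kubectl delete pod storage-0        # the StatefulSet recreates storage-0 with the same claim
[root@master ~]# kubectl exec storage-0 -- cat /usr/share/nginx/html/index.html
storage-0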