k8s Storage with Ceph

Since version 1.4, Kubernetes has offered a more convenient way to create PVs dynamically: with a StorageClass there is no need to pre-create fixed-size PVs and wait for users to claim them with PVCs; creating a PVC directly allocates storage. There is also no need to run rbd map on each Node to map the images.

The StorageClass approach

  • Install ceph-common on the Kubernetes nodes (install this package in advance, when setting up k8s)
yum -y install ceph-common  
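A quick way to confirm the package installed and to see which client version it provides (it should match, or at least be compatible with, the cluster version, here 10.2.11):

ceph --version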

Operations on the Ceph cluster

  • Create a pool
ceph osd pool create kubernetes 64 64  
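On Luminous (12.x) and later, a new pool should also be initialized for RBD before use; on Jewel (10.2.x, as used here) the rbd pool init command does not exist and this step is skipped:

rbd pool init kubernetes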
  • Verify
[root@ceph-node-001 ~]# ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,3 kubernetes-rbd,4 kubernetes,  
  • Create a user on Ceph for CSI (the standard command on recent Ceph releases)
ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'

On this Jewel cluster the profile-based caps are rejected, since cap profiles are not supported there:

ceph auth get-or-create client.kubernetes-for-csi mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
Error EINVAL: moncap parse failed, stopped at 'profile rbd' of 'profile rbd'

  • On ceph version 10.2.11 (Jewel), correct it to the following statement
ceph auth get-or-create client.kubernetes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kubernetes' -o ceph.client.kubernetes.keyring  
  • Verify the created user
[root@ceph-node-001 ~]# ceph auth get client.kubernetes
exported keyring for client.kubernetes  
[client.kubernetes]
    key = AQBKc1ZigAn1MxAA70YFdmbF7obho7LE1arSLw==
    caps mon = "allow r"
    caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=kubernetes"
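The key alone can be printed with ceph auth get-key, which is handy when filling in the Secret on the Kubernetes side later:

ceph auth get-key client.kubernetes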
  • Get Ceph cluster information (the fsid and mon addresses are needed for the ConfigMap below)
[root@ceph-node-001 ~]# ceph mon dump
dumped monmap epoch 3  
epoch 3  
fsid 5a95e527-6aa2-48ef-9f9c-a9b82b462066  
last_changed 2022-04-12 14:15:40.358887  
created 2022-04-12 14:14:07.381472  
0: 10.3.100.86:6789/0 mon.ceph-node-001  
1: 10.3.100.87:6789/0 mon.ceph-node-002  
2: 10.3.100.89:6789/0 mon.ceph-node-003  

Operations on the K8s cluster

  • Create a new namespace
kubectl create namespace csi-ceph  
  • Create the Secret
vim csi-rbd-secret.yaml

apiVersion: v1  
kind: Secret  
metadata:  
  name: csi-rbd-secret
  namespace: csi-ceph
stringData:  
  userID: kubernetes  # the user created on the Ceph cluster, without the client. prefix
  userKey: AQBKc1ZigAn1MxAA70YFdmbF7obho7LE1arSLw==  # that user's key; no base64 encoding needed, stringData handles it

kubectl apply -f csi-rbd-secret.yaml
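Confirm the Secret landed in the right namespace:

kubectl get secret csi-rbd-secret -n csi-ceph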
  • Create the Ceph-CSI ConfigMap. Note that config.json must be valid JSON, so explanatory comments cannot live inside the literal block: clusterID is the fsid reported by ceph mon dump, and monitors lists the Ceph mon addresses.
vim csi-config-map.yaml  
apiVersion: v1  
kind: ConfigMap  
data:  
  config.json: |-
    [
      {
        "clusterID": "5a95e527-6aa2-48ef-9f9c-a9b82b462066",
        "monitors": ["10.3.100.86:6789","10.3.100.87:6789","10.3.100.89:6789"]
      }
    ]
metadata:  
  name: ceph-csi-config
  namespace: csi-ceph
  • Create the KMS ConfigMap (left empty, since volume encryption is not used here)
vim csi-kms-config-map.yaml  
apiVersion: v1  
kind: ConfigMap  
data:  
  config.json: |-
    {}
metadata:  
  name: ceph-csi-encryption-kms-config
  namespace: csi-ceph
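Apply both ConfigMaps:

kubectl apply -f csi-config-map.yaml
kubectl apply -f csi-kms-config-map.yaml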

Configure the plugin

  • RBAC
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml

kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
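Note: the upstream RBAC manifests may pin their resources to a fixed namespace (historically default). If the driver is deployed into csi-ceph as in this walkthrough, download and adjust the files instead of applying them straight from the URL, for example:

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
sed -i 's/namespace: default/namespace: csi-ceph/g' csi-provisioner-rbac.yaml
kubectl apply -f csi-provisioner-rbac.yaml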

  • Provisioner and node plugins

    If the official images cannot be pulled, they can be swapped for a mirror registry in China: registry.cn-guangzhou.aliyuncs.com/leoiceok8simages/image:tag — just substitute the image and tag.
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml

kubectl apply -f csi-rbdplugin-provisioner.yaml

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml

kubectl apply -f csi-rbdplugin.yaml
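The same namespace caveat as with the RBAC files applies to these two manifests. Afterwards, check that the provisioner Deployment and the per-node plugin DaemonSet pods come up (pod names will differ per cluster):

kubectl get pods -n csi-ceph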

Create the StorageClass

Note that StorageClass is cluster-scoped, so it carries no namespace; provisioner is a top-level field, not an annotation, and the is-default-class annotation belongs under metadata.annotations:

apiVersion: storage.k8s.io/v1  
kind: StorageClass  
metadata:  
   name: csi-rbd-sc
   annotations:
      storageclass.kubernetes.io/is-default-class: 'true'
provisioner: rbd.csi.ceph.com  
parameters:  
   clusterID: 5a95e527-6aa2-48ef-9f9c-a9b82b462066  # fsid from ceph mon dump
   pool: kubernetes                                 # the Ceph pool created earlier
   imageFeatures: layering
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: csi-ceph
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: csi-ceph
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: csi-ceph
reclaimPolicy: Delete  
allowVolumeExpansion: true   # allow volumes to be expanded later
mountOptions:  
   - discard
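Save and apply the StorageClass (the file name csi-rbd-sc.yaml is assumed), then verify it is registered and marked as default:

kubectl apply -f csi-rbd-sc.yaml
kubectl get sc csi-rbd-sc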

Create the PVC

Fields such as volumeName, the pv.kubernetes.io/* annotations, and the kubernetes.io/pvc-protection finalizer are filled in by the controllers after the claim is bound; do not set them when creating a PVC. A clean manifest looks like this:

kind: PersistentVolumeClaim  
apiVersion: v1  
metadata:  
  name: sgame-log-s10006
  namespace: jszx
  labels:
    app: x3-sgame-s10006
    version: v1
spec:  
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: csi-rbd-sc
  volumeMode: Filesystem
[root@k8s-master-001 conf]# kubectl get pvc -n jszx
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE  
sgame-log-s10006   Bound    pvc-96a3516e-a1df-4810-a681-bdcf7355262c   15Gi       RWO            csi-rbd-sc     28m
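As a final check, a minimal sketch of a Pod consuming this PVC (the pod name, image, and mount path are assumptions, not taken from the original setup):

apiVersion: v1
kind: Pod
metadata:
  name: sgame-log-test        # assumed name, for illustration only
  namespace: jszx
spec:
  containers:
    - name: app
      image: busybox          # any image works; busybox keeps the example small
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: log
          mountPath: /data/log   # assumed mount path
  volumes:
    - name: log
      persistentVolumeClaim:
        claimName: sgame-log-s10006   # the PVC created above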