1. Introduction to Ceph CSI

1.1 Features

Ceph Container Storage Interface (Ceph CSI) is the plugin that exposes a Ceph storage cluster to Kubernetes, enabling dynamic provisioning and management of Ceph-backed storage from within Kubernetes.

CSI is Kubernetes' third-party storage plugin mechanism; it gives Pods running in the cluster direct access to a Ceph storage cluster. Through the CSI interface, Kubernetes users can consume Ceph as persistent storage without needing to understand the underlying implementation of the Ceph cluster.

1.2 How it works

A Kubernetes user creates a PersistentVolumeClaim (PVC) referencing a CSI-backed StorageClass, specifying the required capacity and other parameters. The external-provisioner sidecar watches for such PVCs and issues a CreateVolume gRPC call to the CSI controller plugin, which translates it into Ceph API requests (for CephFS, creating a subvolume) and returns the result; Kubernetes then creates a matching PersistentVolume and binds it to the PVC. When a Pod using the PVC is scheduled onto a node, the kubelet calls NodeStageVolume and NodePublishVolume on the CSI node plugin running on that node, which mounts the Ceph volume into the Pod.

Through the CSI interface, users can conveniently consume a Ceph storage cluster from Kubernetes, giving applications persistent data storage. CSI also provides a set of security and management features that let Kubernetes users manage and configure access to the Ceph cluster more flexibly.
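As a concrete sketch of the flow above: the only object the user writes is a claim. A minimal PVC against a CSI-backed StorageClass (names here are illustrative; a full working example appears later in this article) looks like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim           # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph     # must name a CSI-backed StorageClass
```

Everything else — CreateVolume on the controller plugin, PV creation and binding, and the node-side mount — happens automatically once this claim is submitted.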

1.3 Main capabilities

Dynamic provisioning - when a user creates a PVC, Ceph CSI automatically creates the backing volume in the Ceph cluster, and deletes it again when the claim is released (subject to the reclaim policy).
Volume expansion - an existing PVC can be enlarged by editing its requested size; the csi-resizer sidecar forwards the expansion to Ceph.
Snapshots - volumes can be snapshotted and restored through the VolumeSnapshot API, handled by the csi-snapshotter sidecar.
Attach and mount - the attacher sidecar and the per-node plugin stage and publish volumes on each node, so Pods can mount Ceph storage like any other volume.

In short, Ceph CSI is the key component for consuming a Ceph storage cluster from Kubernetes: it lets administrators easily provision, manage, and maintain Ceph-backed storage while keeping it highly available and performant.

2. Deploying the Ceph CSI Plugin

Prerequisite: a running Ceph cluster. Pull the required container images:

docker pull quay.io/k8scsi/csi-resizer:v1.1.0

docker pull quay.io/k8scsi/csi-attacher:v3.1.0

docker pull quay.io/k8scsi/csi-node-driver-registrar:v2.1.0

docker pull quay.io/k8scsi/csi-provisioner:v2.1.0

docker pull quay.io/k8scsi/csi-snapshotter:v3.0.3

docker pull quay.io/cephcsi/cephcsi:v3.2.0
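The Deployment and DaemonSet manifests later in this article reference these images by short name (e.g. `csi-provisioner:v2.1.0`), so this walkthrough assumes the quay.io images pulled above are retagged on every node. A sketch that prints the retag commands (review the output, then pipe it to `sh` to apply):

```shell
# Print `docker tag` commands mapping the quay.io tags to the short
# names used in the manifests below.
for img in csi-resizer:v1.1.0 csi-attacher:v3.1.0 \
           csi-node-driver-registrar:v2.1.0 csi-provisioner:v2.1.0 \
           csi-snapshotter:v3.0.3; do
  echo "docker tag quay.io/k8scsi/$img $img"
done
echo "docker tag quay.io/cephcsi/cephcsi:v3.2.0 cephcsi:v3.2.0"
```

Alternatively, keep the full `quay.io/...` image references in the manifests and skip the retagging entirely.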

2.1 Create the Namespace

apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: cephcsi
  name: cephcsi

2.2 Create the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: cephcsi
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  config.json: |-
    [{"clusterID":"92bdd53a-c560-42ea-aecd-056c94808f17","monitors":["192.168.17.50:6789","192.168.17.60:6789","192.168.17.70:6789"],"cephFS":{}}]

# The configuration values above come from the monitor map:
ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 92bdd53a-c560-42ea-aecd-056c94808f17   # clusterID
last_changed 2023-04-11 16:44:54.723870
created 2023-04-11 16:43:36.126758
min_mon_release 14 (nautilus)
0: [v2:192.168.17.50:3300/0,v1:192.168.17.50:6789/0] mon.node50   # monitor address 192.168.17.50:6789
1: [v2:192.168.17.60:3300/0,v1:192.168.17.60:6789/0] mon.node60
2: [v2:192.168.17.70:3300/0,v1:192.168.17.70:6789/0] mon.node70

2.3 Create the Secret

apiVersion: v1
kind: Secret
metadata:
  name: cephfs-secret-ceph
  namespace: cephcsi
stringData:   # plaintext values; the `data:` field would require base64-encoded values instead
  adminID: admin
  adminKey: AQC4HTVkrEaTJxAAsj16ACmuxiDCmlSDEmkInA==
  userID: admin
  userKey: AQC4HTVkrEaTJxAAsj16ACmuxiDCmlSDEmkInA==

# Get the admin user's key:
ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQC4HTVkrEaTJxAAsj16ACmuxiDCmlSDEmkInA==   # adminKey and userKey
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
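Note that values placed under a Secret's `data:` field must be base64-encoded (only `stringData:` accepts plaintext, and Kubernetes encodes it for you). If you use `data:`, encode each value first:

```shell
# Base64-encode Secret values; -n prevents a trailing newline from
# being encoded into the value.
echo -n 'admin' | base64                                        # YWRtaW4=
echo -n 'AQC4HTVkrEaTJxAAsj16ACmuxiDCmlSDEmkInA==' | base64
```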

2.4 Create the StorageClass

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageType: cephfs_provisioner
  name: ceph
parameters:
  clusterID: 92bdd53a-c560-42ea-aecd-056c94808f17   # must match the ConfigMap
  csi.storage.k8s.io/controller-expand-secret-name: cephfs-secret-ceph
  csi.storage.k8s.io/controller-expand-secret-namespace: cephcsi
  csi.storage.k8s.io/node-stage-secret-name: cephfs-secret-ceph
  csi.storage.k8s.io/node-stage-secret-namespace: cephcsi
  csi.storage.k8s.io/provisioner-secret-name: cephfs-secret-ceph
  csi.storage.k8s.io/provisioner-secret-namespace: cephcsi
  fsName: cephfs
  pool: cephfs_pool
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
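The `fsName` and `pool` parameters assume a CephFS filesystem named `cephfs` backed by a data pool named `cephfs_pool` already exists on the Ceph cluster. If not, they can be created along these lines (the metadata pool name and PG counts here are illustrative):

```shell
# Create the metadata and data pools, then the CephFS filesystem.
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_pool 128
ceph fs new cephfs cephfs_metadata cephfs_pool
# Verify:
ceph fs ls
```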

2.5 Create the VolumeSnapshotClass

apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Delete
driver: cephfs.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
  annotations: {}
  name: ceph
parameters:
  clusterID: 92bdd53a-c560-42ea-aecd-056c94808f17   # must match the ConfigMap
  csi.storage.k8s.io/snapshotter-secret-name: cephfs-secret-ceph
  csi.storage.k8s.io/snapshotter-secret-namespace: cephcsi
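The snapshot API is served by the external-snapshotter CRDs and snapshot-controller, which are not installed in a default cluster. Before applying this manifest, it is worth checking that the CRDs exist:

```shell
# These CRDs must be present for VolumeSnapshotClass to be accepted;
# install them from the kubernetes-csi/external-snapshotter project if missing.
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
               volumesnapshotcontents.snapshot.storage.k8s.io \
               volumesnapshots.snapshot.storage.k8s.io
```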

2.6 Create the csi-provisioner RBAC

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-csi-provisioner
  namespace: cephcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cephfs-external-provisioner-runner-clusterrole
rules:
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - list
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - ''
    resources:
      - persistentvolumes
    verbs:
      - get
      - list
      - watch
      - create
      - delete
      - patch
  - apiGroups:
      - ''
    resources:
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
      - update
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - snapshot.storage.k8s.io
    resources:
      - volumesnapshots
    verbs:
      - get
      - list
  - apiGroups:
      - snapshot.storage.k8s.io
    resources:
      - volumesnapshotcontents
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - delete
  - apiGroups:
      - snapshot.storage.k8s.io
    resources:
      - volumesnapshotclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - storage.k8s.io
    resources:
      - volumeattachments
    verbs:
      - get
      - list
      - watch
      - update
      - patch
  - apiGroups:
      - storage.k8s.io
    resources:
      - volumeattachments/status
    verbs:
      - patch
  - apiGroups:
      - ''
    resources:
      - persistentvolumeclaims/status
    verbs:
      - update
      - patch
  - apiGroups:
      - storage.k8s.io
    resources:
      - csinodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - snapshot.storage.k8s.io
    resources:
      - volumesnapshotcontents/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cephfs-csi-provisioner-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cephfs-external-provisioner-runner-clusterrole
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-provisioner
    namespace: cephcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-external-provisioner-cfg
  namespace: cephcsi
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - get
      - list
      - create
      - delete
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - get
      - watch
      - list
      - delete
      - update
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-csi-provisioner-role-cfg
  namespace: cephcsi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-external-provisioner-cfg
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-provisioner
    namespace: cephcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-csi-provisioner-psp
  namespace: cephcsi
rules:
  - apiGroups:
      - policy
    resourceNames:
      - cephfs-csi-provisioner-psp
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-csi-provisioner-psp
  namespace: cephcsi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-csi-provisioner-psp
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-provisioner
    namespace: cephcsi

2.7 Create the nodeplugin RBAC

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-csi-nodeplugin
  namespace: cephcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cephfs-csi-nodeplugin
rules:
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cephfs-csi-nodeplugin-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cephfs-csi-nodeplugin   # must match the ClusterRole name above
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-nodeplugin
    namespace: cephcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-csi-nodeplugin-psp
  namespace: cephcsi
rules:
  - apiGroups:
      - policy
    resourceNames:
      - cephfs-csi-nodeplugin-psp
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-csi-nodeplugin-psp
  namespace: cephcsi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-csi-nodeplugin-psp
subjects:
  - kind: ServiceAccount
    name: cephfs-csi-nodeplugin
    namespace: cephcsi

2.8 Create the provisioner Service

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: csi-metrics
  name: csi-cephfsplugin-provisioner
  namespace: cephcsi
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8681
  selector:
    app: csi-cephfsplugin-provisioner

2.9 Create the cephfsplugin Service

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: csi-metrics
  name: csi-metrics-cephfsplugin
  namespace: cephcsi   # must be in the same namespace as the plugin pods it selects
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8681
  selector:
    app: csi-cephfsplugin

2.10 Create the provisioner Deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-cephfsplugin-provisioner
  namespace: cephcsi
spec:
  replicas: 2
  selector:
    matchLabels:
      app: csi-cephfsplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-cephfsplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-cephfsplugin-provisioner
              topologyKey: kubernetes.io/hostname
      containers:
        - args:
            - '--csi-address=$(ADDRESS)'
            - '--v=5'
            - '--timeout=150s'
            - '--leader-election=true'
            - '--retry-interval-start=500ms'
            - '--feature-gates=Topology=false'
          env:
            - name: ADDRESS
              value: 'unix:///csi/csi-provisioner.sock'
          image: 'csi-provisioner:v2.1.0'
          imagePullPolicy: IfNotPresent
          name: csi-provisioner
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - args:
            - '--csi-address=$(ADDRESS)'
            - '--v=5'
            - '--timeout=150s'
            - '--leader-election'
            - '--retry-interval-start=500ms'
            - '--handle-volume-inuse-error=false'
          env:
            - name: ADDRESS
              value: 'unix:///csi/csi-provisioner.sock'
          image: 'csi-resizer:v1.1.0'
          imagePullPolicy: IfNotPresent
          name: csi-resizer
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - args:
            - '--csi-address=$(ADDRESS)'
            - '--v=5'
            - '--timeout=150s'
            - '--leader-election=true'
          env:
            - name: ADDRESS
              value: 'unix:///csi/csi-provisioner.sock'
          image: 'csi-snapshotter:v3.0.3'
          imagePullPolicy: IfNotPresent
          name: csi-snapshotter
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - args:
            - '--v=5'
            - '--csi-address=$(ADDRESS)'
            - '--leader-election=true'
            - '--retry-interval-start=500ms'
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
          image: 'csi-attacher:v3.1.0'
          imagePullPolicy: IfNotPresent
          name: csi-cephfsplugin-attacher
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - args:
            - '--nodeid=$(NODE_ID)'
            - '--type=cephfs'
            - '--controllerserver=true'
            - '--endpoint=$(CSI_ENDPOINT)'
            - '--v=5'
            - '--drivername=cephfs.csi.ceph.com'
            - '--pidlimit=-1'
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: 'unix:///csi/csi-provisioner.sock'
          image: 'cephcsi:v3.2.0'
          imagePullPolicy: IfNotPresent
          name: csi-cephfsplugin
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /dev
              name: host-dev
            - mountPath: /etc/ceph-csi-config/
              name: ceph-csi-config
            - mountPath: /tmp/csi/keys
              name: keys-tmp-dir
        - args:
            - '--type=liveness'
            - '--endpoint=$(CSI_ENDPOINT)'
            - '--metricsport=8681'
            - '--metricspath=/metrics'
            - '--polltime=60s'
            - '--timeout=3s'
          env:
            - name: CSI_ENDPOINT
              value: 'unix:///csi/csi-provisioner.sock'
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: 'cephcsi:v3.2.0'
          imagePullPolicy: IfNotPresent
          name: liveness-prometheus
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      serviceAccount: cephfs-csi-provisioner
      volumes:
        - emptyDir:
            medium: Memory
          name: socket-dir
        - hostPath:
            path: /sys
          name: host-sys
        - hostPath:
            path: /lib/modules
          name: lib-modules
        - hostPath:
            path: /dev
          name: host-dev
        - configMap:
            name: ceph-csi-config
          name: ceph-csi-config
        - emptyDir:
            medium: Memory
          name: keys-tmp-dir

2.11 Create the cephfsplugin DaemonSet

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-cephfsplugin
  namespace: cephcsi
spec:
  selector:
    matchLabels:
      app: csi-cephfsplugin
  template:
    metadata:
      labels:
        app: csi-cephfsplugin
    spec:
      containers:
        - args:
            - '--v=5'
            - '--csi-address=/csi/csi.sock'
            - >-
              --kubelet-registration-path=/var/lib/kubelet/plugins/cephfs.csi.ceph.com/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          image: 'csi-node-driver-registrar:v2.1.0'
          name: driver-registrar
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /registration
              name: registration-dir
        - args:
            - '--nodeid=$(NODE_ID)'
            - '--type=cephfs'
            - '--nodeserver=true'
            - '--endpoint=$(CSI_ENDPOINT)'
            - '--v=5'
            - '--drivername=cephfs.csi.ceph.com'
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: 'unix:///csi/csi.sock'
          image: 'cephcsi:v3.2.0'
          imagePullPolicy: IfNotPresent
          name: csi-cephfsplugin
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - SYS_ADMIN
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
              name: mountpoint-dir
            - mountPath: /var/lib/kubelet/plugins
              mountPropagation: Bidirectional
              name: plugin-dir
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /dev
              name: host-dev
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /etc/ceph-csi-config/
              name: ceph-csi-config
            - mountPath: /tmp/csi/keys
              name: keys-tmp-dir
        - args:
            - '--type=liveness'
            - '--endpoint=$(CSI_ENDPOINT)'
            - '--metricsport=8681'
            - '--metricspath=/metrics'
            - '--polltime=60s'
            - '--timeout=3s'
          env:
            - name: CSI_ENDPOINT
              value: 'unix:///csi/csi.sock'
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: 'cephcsi:v3.2.0'
          imagePullPolicy: IfNotPresent
          name: liveness-prometheus
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccount: cephfs-csi-nodeplugin
      tolerations:
        - effect: ''
          key: node-role.kubernetes.io/master
          operator: Exists
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/cephfs.csi.ceph.com/
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
          name: registration-dir
        - hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
          name: plugin-dir
        - hostPath:
            path: /sys
          name: host-sys
        - hostPath:
            path: /lib/modules
          name: lib-modules
        - hostPath:
            path: /dev
          name: host-dev
        - hostPath:
            path: /run/mount
          name: host-mount
        - configMap:
            name: ceph-csi-config
          name: ceph-csi-config
        - emptyDir:
            medium: Memory
          name: keys-tmp-dir
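Once all of the manifests above are applied, a quick sanity check is that the provisioner replicas and one plugin pod per node all reach Running:

```shell
# Provisioner Deployment should show 2/2 ready; the DaemonSet should
# show one ready pod per schedulable node.
kubectl -n cephcsi get deploy csi-cephfsplugin-provisioner
kubectl -n cephcsi get ds csi-cephfsplugin
kubectl -n cephcsi get pods -o wide
```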

2.12 Create a volume claim named rabbitmq

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    k8s.kuboard.cn/pvcType: Dynamic
  name: rabbitmq-ceph
  namespace: default   # the namespace where the claim will be used
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ceph
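After applying the claim, it should transition to Bound once the provisioner has created the backing CephFS subvolume:

```shell
# STATUS should read Bound, with a dynamically named pvc-... volume.
kubectl -n default get pvc rabbitmq-ceph
# If it stays Pending, inspect the events for provisioning errors:
kubectl -n default describe pvc rabbitmq-ceph
```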

2.13 Using the claim in a workload

# Declare the claim in the pod's volumes field
volumes:
  - name: rabbitmq
    persistentVolumeClaim:
      claimName: rabbitmq-ceph
# Mount it inside the container
volumeMounts:
  - mountPath: /data
    name: rabbitmq
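Putting the two fragments together, a minimal illustrative Pod (the container image and name are placeholders, not part of the original deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq-demo        # illustrative
  namespace: default
spec:
  containers:
    - name: rabbitmq
      image: rabbitmq:3      # illustrative image
      volumeMounts:
        - mountPath: /data
          name: rabbitmq
  volumes:
    - name: rabbitmq
      persistentVolumeClaim:
        claimName: rabbitmq-ceph
```

Because the StorageClass allows ReadWriteMany, multiple pods can mount this claim concurrently.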
