Kubernetes Fails to Mount Ceph Block Storage
I followed Tony Bai's blog posts to mount Ceph storage in Kubernetes, but my pod stayed stuck in the ContainerCreating state.
1. Environment
- Tony Bai deploys both k8s and Ceph on the same two virtual machines
- In my environment, the k8s cluster and the Ceph storage cluster run on separate machines
For deploying the Ceph cluster itself you can follow Tony Bai's posts or any of the many tutorials online; here I only record the problem I hit when mounting Ceph block storage from k8s.
2. Configuration Files
# ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEUGpCVlpnRWphREJBQUtMWFd5SVFsMzRaQ2JYMitFQW1wK2c9PQo=
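The `key` value is the base64 encoding of the Ceph admin key, not the raw key itself. A quick way to produce it, assuming the default `client.admin` user and run on a Ceph monitor/admin node:

```shell
# Print the client.admin key from the cluster keyring and base64-encode it;
# the resulting string goes into the `key:` field of ceph-secret.yaml.
ceph auth get-key client.admin | base64
```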
##########################################
# ceph-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.100.81:6789
    pool: rbd
    image: ceph-image
    keyring: /etc/ceph/ceph.client.admin.keyring
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
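The PV refers to an image named ceph-image in the rbd pool, so that image must exist before the pod can mount it. A sketch of creating it on a Ceph node; the `--image-feature layering` flag is a precaution, since older kernel RBD clients only support the layering feature:

```shell
# Create the 1 GiB image the PV expects, then verify it exists.
rbd create ceph-image --size 1024 --pool rbd --image-feature layering
rbd info rbd/ceph-image
```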
##########################################
# ceph-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
##########################################
# ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox1
    image: 192.168.100.90:5000/duni/busybox:latest
    command: ["sleep", "600000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
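With the four manifests written, create the objects in dependency order (file names as used in the headers above):

```shell
# Secret and PV first, then the PVC that binds the PV, then the pod.
kubectl create -f ceph-secret.yaml
kubectl create -f ceph-pv.yaml
kubectl create -f ceph-pvc.yaml
kubectl create -f ceph-pod.yaml
```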
3. Tracking Down the Mount Failure
Check the status of the objects:
$ kubectl get pv,pvc,pods
NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                REASON    AGE
pv/ceph-pv   1Gi        RWO           Recycle         Bound     default/ceph-claim             11s

NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/ceph-claim   Bound     ceph-pv   1Gi        RWO           10s

NAME           READY     STATUS              RESTARTS   AGE
po/ceph-pod1   0/1       ContainerCreating   0          11s
ceph-pod1 is stuck in ContainerCreating.
Check the pod's events:
$ kubectl describe po/ceph-pod1
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned ceph-pod1 to duni-node1
6s 6s 1 {kubelet duni-node1} Warning FailedMount Unable to mount volumes for pod "ceph-pod1_default(6656394a-37b6-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
6s 6s 1 {kubelet duni-node1} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
Next, check the kubelet logs on the k8s node that ceph-pod1 was scheduled to:
$ journalctl -u kubelet -f
May 13 15:09:52 duni-node1 kubelet[5167]: I0513 15:09:52.650241 5167 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/e38290de-33a7-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") pod "e38290de-33a7-11e7-b652-000c2932f92e" (UID: "e38290de-33a7-11e7-b652-000c2932f92e").
May 13 15:10:15 duni-node1 kubelet[5167]: E0513 15:10:15.801855 5167 kubelet.go:1813] Unable to mount volumes for pod "ceph-pod1_default(ef4e99c4-37aa-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]; skipping pod
May 13 15:10:15 duni-node1 kubelet[5167]: E0513 15:10:15.801930 5167 pod_workers.go:184] Error syncing pod ef4e99c4-37aa-11e7-b652-000c2932f92e, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
May 13 15:10:17 duni-node1 kubelet[5167]: I0513 15:10:17.252663 5167 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/ddee5d45-3490-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") to pod "ddee5d45-3490-11e7-b652-000c2932f92e" (UID: "ddee5d45-3490-11e7-b652-000c2932f92e"). Volume is already mounted to pod, but remount was requested.
4. The Fix
The kubelet shells out to the rbd command-line tool to map the image, and that tool was missing on my k8s nodes, which is why the mount simply timed out. Install ceph-common on every k8s node:
$ yum install ceph-common
Then delete ceph-pod1 and recreate it; after a short wait its status changes to Running.
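To confirm the fix took, a couple of checks; run kubectl on the master, and rbd showmapped on the node itself (both commands are read-only queries against live cluster state, so there is no expected output to show here):

```shell
# Pod should now report STATUS Running
kubectl get pod ceph-pod1
# On the node: ceph-image should appear mapped to a /dev/rbd* device
rbd showmapped
```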