Fixing the "node(s) didn't match node selector" error in a k8s cluster
A pod in the k8s cluster was stuck in Pending. Running kubectl describe pod against it showed the following error:
0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn’t tolerate, 3 node(s) didn’t match node selector.
There are two causes here: one node carries a taint the pod does not tolerate, and the pod's YAML configures a nodeSelector bound to node labels that the remaining three nodes lack, so no node is eligible to run the pod.
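To confirm the nodeSelector side of the mismatch, you can compare the selector in the pod spec against the actual node labels. A minimal sketch (`<pod-name>` is a placeholder for your pending pod):

```shell
# Show the nodeSelector configured in the pod spec
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'

# List every node with its labels to see which ones actually match
kubectl get nodes --show-labels
```

If no node carries the labels the selector asks for, the pod can never be scheduled regardless of taints.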
Linux:~ # kubectl get nodes -o json | jq '.items[].spec'
{
  "taints": [
    {
      "effect": "NoSchedule",
      "key": "node.kubernetes.io/disk-pressure",
      "timeAdded": "2021-03-06T14:15:27Z"
    }
  ]
}
As shown above, the node carries a taint with effect NoSchedule, so no new pods will be scheduled onto it.
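Note that the node.kubernetes.io/disk-pressure taint is normally added automatically by Kubernetes when a node reports a DiskPressure condition. A rough way to check that condition per node (assuming your kubectl version supports jsonpath filter expressions):

```shell
# Print each node name with the status of its DiskPressure condition
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'
```

If the status is True, freeing disk space on that node is the real fix; removing the taint by hand will only help temporarily, because the taint will be re-applied while the condition persists.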
Run the following command to remove the node.kubernetes.io/disk-pressure taint from all nodes:
Linux:~ # kubectl taint nodes --all node.kubernetes.io/disk-pressure-
node/k8s-w1 untainted
taint "node.kubernetes.io/disk-pressure" not found
taint "node.kubernetes.io/disk-pressure" not found
taint "node.kubernetes.io/disk-pressure" not found
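A quick way to verify that the taint is gone from every node:

```shell
# The Taints line should no longer list node.kubernetes.io/disk-pressure
kubectl describe nodes | grep -i taints
```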
If the pod is still Pending, export its YAML and re-create it with kubectl apply -f xxx.yaml (if the pod is managed by a Deployment, simply kubectl delete pod and the controller will recreate it).
If it remains Pending, run kubectl describe pod again to inspect the current error.
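For the "3 node(s) didn't match node selector" part of the error, an alternative to removing the nodeSelector is to label a node so it matches. The key/value below is a hypothetical example; use whatever key your pod's nodeSelector actually requires:

```shell
# Add a matching label to a node (hypothetical key/value)
kubectl label nodes k8s-w1 disktype=ssd

# A label can later be removed with a trailing minus
kubectl label nodes k8s-w1 disktype-
```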
Original article: https://blog.csdn.net/u010383467/article/details/114502511?spm=1001.2014.3001.5501