Installing k8s on CentOS 7 virtual machines
Preparation

| OS | IP | Node role | CPU | Memory (GB) | Hostname |
|--|--|--|--|--|--|
| CentOS 7.8 | 192.168.200.211 | Master | 2 | 4 | k8s_master01 |
| CentOS 7.8 | 192.168.200.212 | Master | 2 | 4 | k8s_master02 |
| CentOS 7.8 | 192.168.200.213 | Master | 2 | 4 | k8s_master03 |
| CentOS 7.8 | 192.168.200.221 | Node | 1 | 1 | k8s_node01 |
| CentOS 7.8 | 192.168.200.222 | Node | 1 | 1 | k8s_node02 |
| CentOS 7.8 | 192.168.200.201 | LVS | 1 | 1 | lvs_master |
| CentOS 7.8 | 192.168.200.202 | LVS | 1 | 1 | lvs_backup |
| – | 192.168.200.200 | VIP | – | – | vip |
- When installing the OS, choose the compute node profile; it makes things a bit easier.
- Set the IP address, hostname, and DNS.
Interactive configuration: run nmtui and configure everything in the TUI that appears.
[root@master01 ~]#nmtui
Manual configuration:
[root@master01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
# Main settings to change
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.101
NETMASK=255.255.255.0
GATEWAY=192.168.200.2
DNS1=8.8.8.8
DNS2=114.114.114.114
Rename the NIC (optional; I only did it because the long default name bothered me).
Example: rename eno16777736 to eno33.
Check the NIC's MAC address (yes, this is a virtual NIC inside the VM, don't worry about that). Find the 00:0c:29:8d:e5:09 shown after eno16777736 and copy it; you'll need it in a moment.
[root@master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:8d:e5:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.107/24 brd 192.168.200.255 scope global dynamic eno16777736
valid_lft 311sec preferred_lft 311sec
inet6 fe80::20c:29ff:fe8d:e509/64 scope link
valid_lft forever preferred_lft forever
Now make the change:
[root@master01 ~]# vim /etc/udev/rules.d/90-eno-fix.rules
# Set NAME to whatever name you want
# Replace ATTR{address}=="00:0c:29:8d:e5:09" with your own MAC address
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:8d:e5:09", NAME="eno33"
Rename the interface config file to match the new name:
[root@master01 ~]# cd /etc/sysconfig/network-scripts && mv ifcfg-eno16777736 ifcfg-eno33
In the config file, change NAME and DEVICE to eno33:
[root@master01 network-scripts]# vim ifcfg-eno33
After a reboot the new NIC name takes effect.
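To double-check, reboot and look at the interface again (eno33 is just the example name used above):
reboot
# after the machine comes back up
ip addr show eno33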
- Update the system
[root@master01 ~]#yum update
- Disable DNS lookups in sshd
This only makes SSH connections establish a bit faster; leaving it on does no harm other than a slight delay when connecting.
[root@master01 ~]#vim /etc/ssh/sshd_config
# Add this line
UseDNS no
# Restart sshd after the change
[root@master01 ~]#systemctl restart sshd
1. Time synchronization
# For a test environment this is enough
[root@master01 ~]# systemctl restart chronyd
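A couple of optional checks for this step: enable chronyd at boot and confirm the clock is actually syncing (standard chrony/systemd commands; adjust the servers in /etc/chrony.conf if your test network cannot reach the defaults):
systemctl enable chronyd
chronyc sources -v   # lists the NTP sources and whether they are reachable
timedatectl          # "NTP synchronized: yes" means the clock is in sync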
2. Disable swap
Do this on every machine.
vim /etc/fstab
Comment out the swap line, then reboot, or run swapoff -a so the change takes effect immediately.
Or:
sed -i 's/^[^#].*swap/#&/' /etc/fstab
# Meaning:
# prepend a # to any line that contains "swap" and does not already start with #
# [^#]: any character that is not #
# ^: anchors the start of the line
# ^[^#]: a line that does not start with #
# &: the matched text
# #&: put a # in front of the match (a& would put the letter a in front)
Running free now should show the swap size as 0.
# Before disabling:
[root@master01 ~]# free -h
total used free shared buff/cache available
Mem: 3.7G 163M 3.4G 8.5M 174M 3.3G
Swap: 2.0G 0B 2.0G
#关闭
[root@master01 ~]# swapoff -a
#关闭后
[root@master01 ~]# free -h
total used free shared buff/cache available
Mem: 3.7G 161M 3.4G 8.5M 174M 3.3G
Swap: 0B 0B 0B
# Permanently disable it; alternatively open /etc/fstab in vim, comment out the swap line, and reboot for the same effect
[root@master01 ~]# sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
3. Disable SELinux
sed -i 's@SELINUX=enforcing@SELINUX=disabled@' /etc/selinux/config
# Before:
[root@master01 ~]# getenforce
Enforcing
# Disable for the current session:
[root@master01 ~]# setenforce 0
# After:
[root@master01 ~]# getenforce
Permissive
# Permanently disable:
[root@master01 ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
4. Stop firewalld
Command: systemctl stop firewalld && systemctl disable firewalld
[root@master01 ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Mon 2021-05-03 17:36:50 CST; 19min ago
Main PID: 857 (firewalld)
CGroup: /system.slice/firewalld.service
└─857 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
May 03 17:36:50 master01 systemd[1]: Started firewalld - dynamic firewall daemon.
[root@master01 ~]# systemctl stop firewalld && systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@master01 ~]#
5. Clear the iptables rules
# Flush the firewall rules and set the default forward policy
[root@master01 ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Without -t the filter table is assumed, i.e. equivalent to -t filter
# iptables -F: flush the rules in the filter table
# iptables -X: delete the custom chains in the filter table
# iptables -F -t nat: flush the rules in the nat table
# iptables -X -t nat: delete the custom chains in the nat table
# iptables -P FORWARD ACCEPT: set the default policy of the built-in FORWARD chain to ACCEPT
6. Update the hosts file (append):
cat <<EOF >> /etc/hosts
192.168.100.100 k8svip
192.168.100.101 master01
192.168.100.102 master02
192.168.100.103 master03
192.168.100.111 worker01
192.168.100.112 worker02
192.168.100.113 worker03
EOF
The master needs a dual-core CPU; with three worker nodes that makes four machines in total.
Software (all four machines need it!!!; you can install one machine and then clone it):
docker: the container engine
kubeadm
kubelet: the node agent
kubectl: the k8s command-line tool used to talk to the API server; installing it on the masters is enough
Note: if a master has only one CPU, kubeadm init aborts with the preflight error:
the number of available CPUs 1 is less than the required 2
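A quick way to confirm the CPU count before running kubeadm init (plain coreutils, nothing k8s-specific):
nproc                    # must be >= 2 on the masters
lscpu | grep '^CPU(s):'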
7. Make sure the machines can reach the Internet
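A simple connectivity check, for example against the Aliyun mirror used below (any reachable site will do):
ping -c 3 mirrors.aliyun.com
curl -I https://mirrors.aliyun.com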
8. Install Docker
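The Docker commands are not listed in this post, so here is a minimal sketch using the Aliyun docker-ce mirror (the repo URL and the systemd cgroup-driver setting are my assumptions; adapt them to your environment):
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
# kubeadm recommends the systemd cgroup driver for the container runtime
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker && systemctl start docker
docker info | grep -i cgroup   # verify the driver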
9. Enable NTP synchronization
(omitted)
10. Enable the IPVS kernel modules
Load them automatically at boot:
[root@master01 /]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
# Directory holding the IPVS kernel modules
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $mod &> /dev/null
    if [[ $? -eq 0 ]]; then
        /sbin/modprobe $mod
    fi
done
[root@master01 /]# chmod +x /etc/sysconfig/modules/ipvs.modules
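To load the modules right away without rebooting, run the script once and confirm with lsmod:
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack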
Start the installation
Open the mirror site, find kubernetes in the list, and click the help link next to it.
Aliyun mirror: https://mirrors.aliyun.com/
The help page gives the following instructions (just follow them):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
# You can install 1.18 or the latest version; the newest images may not be pullable from the mirror, so for learning stick with 1.18
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# or, to install the latest version instead:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"'> /etc/sysconfig/kubelet
Check whether the default init settings match what you actually want, in particular:
- kubernetesVersion: v1.18.0
- serviceSubnet: 10.96.0.0/12
- imageRepository: k8s.gcr.io
[root@localhost ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Shut down and clone the VMs (details omitted).
After cloning, fix the NIC configuration on each clone; it usually has problems. The easiest fix is nmtui: delete the old connection, add a new one, configure it, and reboot.
nmtui
# Pull the images
kubeadm config images pull --image-repository="registry.aliyuncs.com/google_containers"
# This takes a while, so go do something else (I got through quite a few videos before it finished)
# When it is done you should see:
[root@master01 /]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 13 months ago 117MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 13 months ago 95.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 13 months ago 162MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 13 months ago 173MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 15 months ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 15 months ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 18 months ago 288MB
# Save the images to a tarball so the other nodes can load them
docker save -o k8s.tar registry.aliyuncs.com/google_containers/kube-proxy registry.aliyuncs.com/google_containers/kube-apiserver registry.aliyuncs.com/google_containers/kube-controller-manager registry.aliyuncs.com/google_containers/kube-scheduler registry.aliyuncs.com/google_containers/etcd registry.aliyuncs.com/google_containers/coredns registry.aliyuncs.com/google_containers/pause
[root@master01 /]# ls -list -h
total 647M
52934106 647M -rw-------. 1 root root 647M Nov 14 11:48 k8s.tar
These are the v1.18.0 images I downloaded; grab them if you need them:
Link: https://pan.baidu.com/s/1wp1NQxpnteoBv1GKqHkcpQ
Extraction code: ubw0
Do a dry run before initializing:
kubeadm init --image-repository="registry.aliyuncs.com/google_containers" --kubernetes-version="v1.18.0" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap --dry-run
If nothing looks wrong, run the real initialization:
[root@master01 ~]# kubeadm init --image-repository="registry.aliyuncs.com/google_containers" --kubernetes-version="v1.18.0" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap
When initialization completes successfully you will see the output below.
Don't run anything yet; first save everything that follows "Your Kubernetes control-plane has initialized successfully!".
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.200.211:6443 --token gqqhp9.4baj3x2pmcsq9krs \
--discovery-token-ca-cert-hash sha256:9106f14341b54ba70341b79562e7c8576e0e14a70f4a4421e4f5502faba447af \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.200.211:6443 --token gqqhp9.4baj3x2pmcsq9krs \
--discovery-token-ca-cert-hash sha256:9106f14341b54ba70341b79562e7c8576e0e14a70f4a4421e4f5502faba447af
After the init succeeds, follow the on-screen instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
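If you are working as root, a common alternative (not part of the captured output above) is to point kubectl at the admin config directly:
export KUBECONFIG=/etc/kubernetes/admin.conf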
Flannel is an overlay network tool designed by the CoreOS team for Kubernetes; its goal is to give every host in a Kubernetes cluster its own complete subnet.
Flannel provides a virtual network for containers by assigning each host a subnet. It is built on Linux TUN/TAP, encapsulates IP packets in UDP to create the overlay network, and relies on etcd to keep track of how the network is allocated.
Source: https://www.jianshu.com/p/e4c7f83a2a0b
If you really cannot download the manifest, get it from Baidu Netdisk:
Link: https://pan.baidu.com/s/1tjNR329XEivdlgu9Zuc6xg  Extraction code: ds7n
[root@master01 .kube]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Once applied, the pods are created automatically
[root@master01 kubernetes]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-42j25 1/1 Running 0 63m
coredns-7ff77c879f-bwn7l 1/1 Running 0 63m
etcd-master01 1/1 Running 0 63m
kube-apiserver-master01 1/1 Running 0 63m
kube-controller-manager-master01 1/1 Running 4 63m
kube-flannel-ds-amd64-k46n6 1/1 Running 0 10m
kube-proxy-q855k 1/1 Running 0 63m
kube-scheduler-master01 1/1 Running 4 63m
[root@master01 kubernetes]#
# Once they are all running, the master node shows Ready
[root@master01 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 63m v1.18.0
At this point the master installation is complete.
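If you are curious what Flannel actually set up on the host (assuming the default vxlan backend used in the manifest below), you can inspect the subnet it was assigned and the vxlan device it created:
cat /run/flannel/subnet.env   # the per-host subnet carved out of 10.244.0.0/16
ip -d link show flannel.1     # the vxlan interface created by flannel
ip route | grep 10.244        # routes to the other hosts' pod subnets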
The contents of the yaml file are below. If you cannot download it at all, copy the text below into flannel.yaml
and run kubectl apply -f flannel.yaml;
the effect is exactly the same!
vim flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- arm64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- arm
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- ppc64le
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: kubernetes.io/arch
operator: In
values:
- s390x
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
Before doing anything on the nodes, check once more that firewalld, the iptables rules, and SELinux are disabled on them…
and check once more that Docker is running.
Continue only after these checks pass.
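A few one-liners for those checks on each node (just re-verifying what was configured earlier):
systemctl is-active firewalld   # should print inactive (or unknown)
getenforce                      # should print Permissive or Disabled
swapon -s                       # should print nothing
systemctl is-active docker      # should print active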
Load the images on each node host:
# (the scp step is not written out here)
docker load -i k8s.tar
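For reference, the copy step that was skipped could look like this (IP and path are placeholders; run it before the docker load above):
scp k8s.tar root@192.168.200.221:/root/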
Run the join command that was generated during initialization:
kubeadm join 192.168.40.128:6443 --token ih84g1.da98xytbazu4hwb5 \
--discovery-token-ca-cert-hash sha256:1b5052bda6f68a5c15ed4f63de98f2d96dfb31be9d6cae7423b3372fa1cb1376
If you forgot to save it, or the token has expired, don't worry; on the master run:
kubeadm token create --print-join-command
It prints a fresh join command; copy it and run it on each node you want to add.
[root@k8s-master01 yum.repos.d]# kubeadm token create --print-join-command
W0815 12:54:45.784585 87248 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.100.100:6443 --token f8qfep.tjuqqql9fw6lle6d --discovery-token-ca-cert-hash sha256:ced6ab112eb3f065a19b6bbd619d5aee210beb6e4635020d151273ed1d76618d
Assuming there were no errors, go back to the master and run:
[root@master01 .kube]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 134m v1.16.2
node01 Ready <none> 27m v1.16.2
node02 Ready <none> 19m v1.16.2
node03 Ready <none> 17m v1.16.2
More to be added later.