Table of Contents

I. Preparation (set up the virtual machines)

1. Create three virtual machines

2. Set the system hostnames and mutual resolution in the hosts file (all three nodes)

3. Install dependency packages (all three nodes)

4. Switch the firewall to iptables with empty rules (all three nodes)

5. Disable swap and SELinux (all three nodes)

6. Tune kernel parameters for K8S (all three nodes)

7. Set the system time zone (already China time here, so optional)

8. Disable services that are not needed

9. Configure rsyslogd and systemd journald

10. Upgrade the system kernel to 4.4 (all three nodes)

II. Install the required software

1. Prerequisites for kube-proxy to enable IPVS (all three nodes)

2. Install Docker (all three nodes)

3. Create the /etc/docker directory (all three nodes)

4. Configure the Docker daemon (all three nodes)

5. Restart the Docker service (all three nodes)

6. Install kubeadm (all three nodes)

III. Create the cluster

1. Download the image bundle (all three nodes)

2. Generate the default init template and modify it (master node)

3. Initialize the master node

Install the flannel network plugin

Node status becomes Ready

Inspect the flannel NIC

Join the worker nodes to the cluster

View cluster information on the master node

View more detailed information


I. Preparation (set up the virtual machines)

1. Create three virtual machines

One VM serves as the master node of the k8s cluster and the other two as worker nodes. I installed three CentOS VMs in VirtualBox. Give each at least two CPU cores, otherwise the master node cannot be initialized when the k8s cluster is created.

2. Set the system hostnames and mutual resolution in the hosts file (all three nodes)

hostnamectl set-hostname k8s-master01

hostnamectl set-hostname k8s-node01

hostnamectl set-hostname k8s-node02
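The hosts-file half of this step is not shown above; here is a sketch of the entries to append on all three nodes, using the node IPs that appear later in this walkthrough (adjust to your own addresses):

cat >> /etc/hosts <<EOF
192.168.31.97   k8s-master01
192.168.31.34   k8s-node01
192.168.31.213  k8s-node02
EOF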

3. Install dependency packages (all three nodes)

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

4. Switch the firewall to iptables with empty rules (all three nodes)

systemctl stop firewalld && systemctl disable firewalld

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

5. Disable swap and SELinux (all three nodes)

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
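A quick check that both changes took effect (verification commands added here, not in the original):

free -h | grep -i swap   # the Swap line should show 0 used and 0 total
getenforce               # prints Permissive now, Disabled after the next reboot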

6. Tune kernel parameters for K8S (all three nodes)

cat > kubernetes.conf <<EOF

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0 # do not use swap; it may only be used when the system hits OOM

vm.overcommit_memory=1 # do not check whether enough physical memory is available

vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

sysctl -p /etc/sysctl.d/kubernetes.conf
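If sysctl reports the net.bridge keys as unknown, load the br_netfilter module first, then spot-check the values (a sanity check added here; both keys should print 1):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward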

7. Set the system time zone (these machines were already on China time, so this step was optional)

# Set the system time zone to Asia/Shanghai

timedatectl set-timezone Asia/Shanghai

# Write the current UTC time to the hardware clock

timedatectl set-local-rtc 0

# Restart services that depend on the system time

systemctl restart rsyslog

systemctl restart crond

8. Disable services that are not needed

systemctl stop postfix && systemctl disable postfix

9. Configure rsyslogd and systemd journald

mkdir /var/log/journal # directory for persistent log storage

mkdir /etc/systemd/journald.conf.d

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF

[Journal]

# Persist logs to disk

Storage=persistent

# Compress historical logs

Compress=yes

SyncIntervalSec=5m

RateLimitInterval=30s

RateLimitBurst=1000

# Cap total disk usage at 10G

SystemMaxUse=10G

# Cap each log file at 200M

SystemMaxFileSize=200M

# Keep logs for 2 weeks

MaxRetentionSec=2week

# Do not forward logs to syslog

ForwardToSyslog=no

EOF

systemctl restart systemd-journald
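To confirm journald is now persisting logs to disk (a quick check added here, assuming an otherwise default setup):

journalctl --disk-usage   # reports space used under /var/log/journal
ls /var/log/journal       # should now contain a machine-id subdirectory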

10. Upgrade the system kernel to 4.4 (all three nodes)

The 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable, so upgrade to the long-term 4.4.x kernel from ELRepo:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains the initrd16 setting; if it does not, install again!

yum --enablerepo=elrepo-kernel install -y kernel-lt

# Boot from the new kernel by default (note: substitute the exact kernel version installed above)

grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
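If unsure of the exact menuentry title, the available entries can be listed first; a reboot is then needed for the new kernel to take effect (helper commands added here, not in the original):

awk -F\' '/^menuentry / {print $2}' /etc/grub2.cfg   # list boot entry titles
reboot
uname -r   # after the reboot, should print a 4.4.x version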

II. Install the required software:

1. Prerequisites for kube-proxy to enable IPVS (all three nodes)

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2. Install Docker (all three nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum makecache fast

yum install -y docker-ce   # installs the latest available Docker version
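Installing the latest Docker works, but kubeadm will warn about it (the init output later shows 18.09 as the latest validated release). A hedged sketch for pinning a validated version instead; the exact version string is an example and may differ in the repo:

yum list docker-ce --showduplicates | sort -r   # list the available versions
yum install -y docker-ce-18.09.9                # example pinned version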

3. Create the /etc/docker directory (all three nodes)

mkdir /etc/docker

4. Configure the Docker daemon (all three nodes)

cat > /etc/docker/daemon.json<<EOF

{

"exec-opts":["native.cgroupdriver=systemd"],

"log-driver":"json-file",

"log-opts": {"max-size":"500m", "max-file":"3"}

}

EOF


mkdir -p /etc/systemd/system/docker.service.d

5. Restart the Docker service (all three nodes)

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
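Verify that Docker picked up the systemd cgroup driver configured above:

docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd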

6. Install kubeadm (all three nodes)

Configure the Aliyun yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

Install:

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

systemctl start kubelet.service

systemctl enable kubelet.service
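Confirm the pinned versions installed (note that kubelet will restart in a loop until kubeadm init or join writes its configuration; that is expected at this point):

kubeadm version -o short    # v1.15.1
kubelet --version           # Kubernetes v1.15.1
kubectl version --client --short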

III. Create the cluster:

Initialize the k8s cluster:

1. Download the image bundle (all three nodes)

During initialization, kubeadm pulls many large images from Google's registry. The process is slow and the registry is not directly reachable from here, so download the images yourself and import them into the VMs.

Download the image bundle kubeadm-basic.images.tar.gz (source it online yourself).

For example, from Baidu Netdisk:

Link: https://pan.baidu.com/s/1luy_ot5bzWbTVm9pRVt_Rg      kubeadm-basic.images.tar.gz

Extraction code: 5zwz

Link: https://pan.baidu.com/s/1ti-mi4WBaitvyHuAOhsr3A   harbor-offline-installer-v2.3.2.tgz

Extraction code: 11at

Then extract it:

[root@k8s-master01 ~]# tar xvf kubeadm-basic.images.tar.gz

[root@k8s-master01 ~]# ls kubeadm-basic.images

apiserver.tar  coredns.tar  etcd.tar  kubec-con-man.tar  pause.tar  proxy.tar  scheduler.tar

Write a script to import the images:

[root@k8s-master01 ~]# cat load-image.sh

#!/bin/bash

# Load every image tarball from kubeadm-basic.images into the local Docker cache.

ls /root/kubeadm-basic.images > /tmp/image-list.txt

cd /root/kubeadm-basic.images

for i in $( cat /tmp/image-list.txt )

do

    docker load -i $i

done

rm -f /tmp/image-list.txt
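Make the script executable before running it as ./load-image.sh below:

chmod +x load-image.sh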

Run the script:

[root@k8s-master01 ~]# ./load-image.sh

[root@k8s-master01 ~]# docker images

REPOSITORY                                       TAG       IMAGE ID       CREATED       SIZE

quay.io/coreos/flannel                           v0.15.1   e6ea68648f0c   4 weeks ago   69.5MB

rancher/mirrored-flannelcni-flannel-cni-plugin   v1.0.0    cd5235cd7dc2   6 weeks ago   9.03MB

k8s.gcr.io/kube-controller-manager               v1.15.1   d75082f1d121   2 years ago   159MB

k8s.gcr.io/kube-proxy                            v1.15.1   89a062da739d   2 years ago   82.4MB

k8s.gcr.io/kube-apiserver                        v1.15.1   68c3eb07bfc3   2 years ago   207MB

k8s.gcr.io/kube-scheduler                        v1.15.1   b0b3c4c404da   2 years ago   81.1MB

k8s.gcr.io/coredns                               1.3.1     eb516548c180   2 years ago   40.3MB

k8s.gcr.io/etcd                                  3.3.10    2c4adeb21b4f   3 years ago   258MB

k8s.gcr.io/pause                                 3.1       da86e6ba6ca1   3 years ago   742kB

2. Generate the default init template and modify it (master node):

Generate and then edit the configuration file:

kubeadm config print init-defaults > kubeadm-config.yaml

The final configuration file (key edits from the defaults: advertiseAddress set to the master's IP 192.168.31.97, nodeRegistration.name set to k8s-master01, kubernetesVersion set to v1.15.1, podSubnet "10.244.0.0/16" added for flannel, and a KubeProxyConfiguration section appended to enable IPVS):

[root@k8s-master01 ~]# cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.31.97

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: k8s-master01

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: k8s.gcr.io

kind: ClusterConfiguration

kubernetesVersion: v1.15.1

networking:

  dnsDomain: cluster.local

  podSubnet: "10.244.0.0/16"

  serviceSubnet: 10.96.0.0/12

scheduler: {}

---

apiVersion: kubeproxy.config.k8s.io/v1alpha1

kind: KubeProxyConfiguration

featureGates:

  SupportIPVSProxyMode: true

mode: ipvs
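Optionally, preview which images this configuration requires and confirm they match the ones imported in step 1 (a check added here; kubeadm v1.15 supports it):

kubeadm config images list --config kubeadm-config.yaml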

3. Initialize the master node:

[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

Flag --experimental-upload-certs has been deprecated, use --upload-certs instead

[init] Using Kubernetes version: v1.15.1

[preflight] Running pre-flight checks

        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.11. Latest validated version: 18.09

……………………………………………………………

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Important: after the master node initializes successfully, use the command below to join the worker nodes.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.97:6443 --token abcdef.0123456789abcdef \

--discovery-token-ca-cert-hash sha256:91f7c0eabf49cc5d19275ca6e9c0b8f68d464c7969ac90d0c339b01f51fcd486
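The token in this join command is only valid for the ttl set in the config (24h0m0s). If it expires before a worker joins, a fresh join command can be printed on the master:

kubeadm token create --print-join-command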

[root@k8s-master01 ~]# docker images

REPOSITORY                           TAG       IMAGE ID       CREATED       SIZE

k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   2 years ago   82.4MB

k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   2 years ago   81.1MB

k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   2 years ago   159MB

k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   2 years ago   207MB

k8s.gcr.io/coredns                   1.3.1     eb516548c180   2 years ago   40.3MB

k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   3 years ago   258MB

k8s.gcr.io/pause                     3.1       da86e6ba6ca1   3 years ago   742kB

[root@k8s-master01 ~]# mkdir -p $HOME/.kube

[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master01 ~]# kubectl get node

NAME           STATUS     ROLES    AGE    VERSION

k8s-master01   NotReady   master   7m6s   v1.15.1

[root@k8s-master01 ~]# pwd

/root

[root@k8s-master01 ~]# ls

anaconda-ks.cfg  kubeadm-basic.images         kubeadm-config.yaml  kubernetes.conf

install-k8s      kubeadm-basic.images.tar.gz  kubeadm-init.log     load-image.sh

[root@k8s-master01 ~]# mkdir install-k8s

[root@k8s-master01 ~]# cp kubeadm-config.yaml kubeadm-init.log install-k8s

[root@k8s-master01 ~]# cd install-k8s/

[root@k8s-master01 install-k8s]# ls

kubeadm-config.yaml  kubeadm-init.log

[root@k8s-master01 install-k8s]# mkdir core

[root@k8s-master01 install-k8s]# ls

core  kubeadm-config.yaml  kubeadm-init.log

[root@k8s-master01 install-k8s]# mv * core/

[root@k8s-master01 install-k8s]# mkdir plugin

[root@k8s-master01 install-k8s]# cd plugin/

[root@k8s-master01 plugin]# ls

Install the flannel network plugin:

[root@k8s-master01 plugin]# mkdir flannel

[root@k8s-master01 plugin]# cd flannel/

[root@k8s-master01 flannel]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

--2021-12-11 20:21:59--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.108.133, ...

Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 5177 (5.1K) [text/plain]

Saving to: ‘kube-flannel.yml’

100%[====================================================================================>] 5,177       --.-K/s   in 0s

2021-12-11 20:22:00 (68.0 MB/s) - ‘kube-flannel.yml’ saved [5177/5177]

[root@k8s-master01 flannel]# ls

kube-flannel.yml

[root@k8s-master01 flannel]# pwd

/root/install-k8s/plugin/flannel

Use the flannel resource manifest kube-flannel.yml to create the corresponding pods.

Flannel builds a flat overlay network so that Pods can communicate with each other across nodes.

[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds created

[root@k8s-master01 flannel]# kubectl get pod -n kube-system

NAME                                   READY   STATUS    RESTARTS   AGE

coredns-5c98db65d4-czzr8               1/1     Running   0          97m

coredns-5c98db65d4-d44v7               1/1     Running   0          97m

etcd-k8s-master01                      1/1     Running   0          96m

kube-apiserver-k8s-master01            1/1     Running   0          96m

kube-controller-manager-k8s-master01   1/1     Running   0          96m

kube-flannel-ds-dqdp9                  1/1     Running   0          2m4s

kube-proxy-59rln                       1/1     Running   0          97m

kube-scheduler-k8s-master01            1/1     Running   0          96m

The node status is now Ready:

[root@k8s-master01 flannel]# kubectl get node

NAME           STATUS   ROLES    AGE    VERSION

k8s-master01   Ready    master   100m   v1.15.1

Inspect the flannel NIC:

[root@k8s-master01 flannel]# ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450

        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255

        inet6 fe80::3455:a8ff:fe89:b869  prefixlen 64  scopeid 0x20<link>

        ether 36:55:a8:89:b8:69  txqueuelen 1000  (Ethernet)

        RX packets 1439  bytes 100064 (97.7 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 1450  bytes 453045 (442.4 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500

        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

        ether 02:42:e0:c9:6b:f2  txqueuelen 0  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 192.168.31.97  netmask 255.255.255.0  broadcast 192.168.31.255

        inet6 fe80::3137:ef9c:4071:2a8  prefixlen 64  scopeid 0x20<link>

        ether 08:00:27:4e:28:22  txqueuelen 1000  (Ethernet)

        RX packets 45729  bytes 30420330 (29.0 MiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 7772  bytes 663834 (648.2 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450

        inet 10.244.0.0  netmask 255.255.255.255  broadcast 10.244.0.0

        inet6 fe80::806f:d4ff:fe7f:44c7  prefixlen 64  scopeid 0x20<link>

        ether 82:6f:d4:7f:44:c7  txqueuelen 0  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 12 overruns 0  carrier 0  collisions 0

Join the worker nodes to the cluster:

1. Join node01:

[root@k8s-node01 ~]# kubeadm join 192.168.31.97:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:91f7c0eabf49cc5d19275ca6e9c0b8f68d464c7969ac90d0c339b01f51fcd486

[preflight] Running pre-flight checks

        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.11. Latest validated version: 18.09

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Activating the kubelet service

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node01 ~]#

2. Join node02:

[root@k8s-node02 ~]# kubeadm join 192.168.31.97:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:91f7c0eabf49cc5d19275ca6e9c0b8f68d464c7969ac90d0c339b01f51fcd486

[preflight] Running pre-flight checks

        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.11. Latest validated version: 18.09

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Activating the kubelet service

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node02 ~]#

View cluster information on the master node:

[root@k8s-master01 flannel]# kubectl get pod -n kube-system

NAME                                   READY   STATUS    RESTARTS   AGE

coredns-5c98db65d4-czzr8               1/1     Running   0          110m

coredns-5c98db65d4-d44v7               1/1     Running   0          110m

etcd-k8s-master01                      1/1     Running   0          110m

kube-apiserver-k8s-master01            1/1     Running   0          109m

kube-controller-manager-k8s-master01   1/1     Running   0          109m

kube-flannel-ds-dqdp9                  1/1     Running   0          15m

kube-flannel-ds-khqmk                  1/1     Running   0          2m34s

kube-flannel-ds-rdtqd                  1/1     Running   0          3m6s

kube-proxy-4jnkw                       1/1     Running   0          3m6s

kube-proxy-59rln                       1/1     Running   0          110m

kube-proxy-rnkvm                       1/1     Running   0          2m34s

kube-scheduler-k8s-master01            1/1     Running   0          110m

[root@k8s-master01 flannel]#

View more detailed information:

[root@k8s-master01 flannel]# kubectl get pod -n kube-system -o wide

NAME                                   READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES

coredns-5c98db65d4-czzr8               1/1     Running   0          111m    10.244.0.3       k8s-master01   <none>           <none>

coredns-5c98db65d4-d44v7               1/1     Running   0          111m    10.244.0.2       k8s-master01   <none>           <none>

etcd-k8s-master01                      1/1     Running   0          110m    192.168.31.97    k8s-master01   <none>           <none>

kube-apiserver-k8s-master01            1/1     Running   0          110m    192.168.31.97    k8s-master01   <none>           <none>

kube-controller-manager-k8s-master01   1/1     Running   0          110m    192.168.31.97    k8s-master01   <none>           <none>

kube-flannel-ds-dqdp9                  1/1     Running   0          15m     192.168.31.97    k8s-master01   <none>           <none>

kube-flannel-ds-khqmk                  1/1     Running   0          2m47s   192.168.31.213   k8s-node02     <none>           <none>

kube-flannel-ds-rdtqd                  1/1     Running   0          3m19s   192.168.31.34    k8s-node01     <none>           <none>

kube-proxy-4jnkw                       1/1     Running   0          3m19s   192.168.31.34    k8s-node01     <none>           <none>

kube-proxy-59rln                       1/1     Running   0          111m    192.168.31.97    k8s-master01   <none>           <none>

kube-proxy-rnkvm                       1/1     Running   0          2m47s   192.168.31.213   k8s-node02     <none>           <none>

kube-scheduler-k8s-master01            1/1     Running   0          110m    192.168.31.97    k8s-master01   <none>           <none>

[root@k8s-master01 flannel]#

[root@k8s-master01 flannel]# kubectl get node

NAME           STATUS   ROLES    AGE     VERSION

k8s-master01   Ready    master   112m    v1.15.1

k8s-node01     Ready    <none>   4m49s   v1.15.1

k8s-node02     Ready    <none>   4m17s   v1.15.1
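Since kube-proxy was configured with mode: ipvs, a quick sanity check with the ipvsadm tool installed back in Part I (it should list virtual servers, e.g. the apiserver ClusterIP 10.96.0.1:443):

ipvsadm -Ln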

Save these important files; the k8s cluster will need them later:

[root@k8s-master01 ~]# pwd

/root

[root@k8s-master01 ~]# cp -r install-k8s /usr/local

[root@k8s-master01 ~]# cd /usr/local/

[root@k8s-master01 local]# ls

bin  etc  games  include  install-k8s  lib  lib64  libexec  sbin  share  src

At this point, the k8s cluster has been deployed successfully.
