Table of Contents

1. Machine overview (VMware Workstation 16.0)

2. Environment preparation (all)

 1. Configure hostname mapping

 2. Open the ports Kubernetes needs

 3. Disable SELinux

 4. Disable swap

 5. Enable forwarding and configure IPVS

3. Configure package sources (all)

4. Install Docker and configure a registry mirror (all)

5. Install the Kubernetes cluster

1. List all available Kubernetes versions

2. Install kubeadm-1.23.9, kubelet-1.23.9 and kubectl-1.23.9 (all)

3. Initialize the master node (kubeadm init) (master)

4. Join the worker nodes to the cluster (node)

6. Install the Calico network plugin (master)

7. Make the master node schedulable (optional)


1. Machine overview (VMware Workstation 16.0)

OS              IP address          Hostname
Ubuntu 20.04    192.168.111.141     master01
Ubuntu 20.04    192.168.111.142     node01
Ubuntu 20.04    192.168.111.143     node02

2. Environment preparation (all)

 1. Configure hostname mapping

root@master01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 apang

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.111.141 master01
192.168.111.142 node01
192.168.111.143 node02
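
The same three mappings have to exist on every machine. A quick way to append them on each node (a sketch; adjust the IPs and hostnames to your own environment):

cat >> /etc/hosts <<EOF
192.168.111.141 master01
192.168.111.142 node01
192.168.111.143 node02
EOF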

 2. Open the ports Kubernetes needs

Open the ports Kubernetes uses. Since I deployed these VMs directly on my PC with VMware Workstation, I simply turned the firewall off. In production the firewall must not be disabled; instead, open the following ports:

Master (control-plane) node:

TCP     6443*       Kubernetes API Server
TCP     2379-2380   etcd server client API
TCP     10250       Kubelet API
TCP     10251       kube-scheduler
TCP     10252       kube-controller-manager
TCP     10255       Read-Only Kubelet API

Worker (node) machines:

TCP     10250       Kubelet API
TCP     10255       Read-Only Kubelet API
TCP     30000-32767 NodePort Services

Open the ports:

## On the master node:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=53/tcp
firewall-cmd --permanent --add-port=53/udp
## ...plus any other ports your setup needs.
## On the worker nodes:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
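
The --permanent rules above only take effect after a reload. Assuming firewalld is the firewall in use (as the commands above suggest), something like the following applies and verifies them; on the worker nodes the NodePort range from the table above should be opened the same way:

firewall-cmd --permanent --add-port=30000-32767/tcp   ## node only: NodePort Services range
firewall-cmd --reload
firewall-cmd --list-ports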

 3. Disable SELinux

SELinux is a mechanism that protects the system, much like security software on Windows. On my Ubuntu 20.04 it is disabled by default; check whether it is disabled like this:

root@master01:~# apt -y install selinux-utils
root@master01:~# getenforce 
Disabled

If getenforce reports Enforcing, SELinux is on; run setenforce 0 to turn it off temporarily.
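
To disable it permanently on a distribution that ships SELinux, the usual approach is to set SELINUX=disabled in /etc/selinux/config, for example (a sketch; the file does not exist on systems where SELinux is not installed):

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config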

 4. Disable swap

## Check swap
root@master01:~# free  -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       316Mi       2.5Gi       1.0Mi       971Mi       3.3Gi
Swap:          3.8Gi          0B       3.8Gi
## Turn swap off temporarily
root@master01:~# swapoff -a
root@master01:~# free  -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       315Mi       2.5Gi       1.0Mi       971Mi       3.3Gi
Swap:             0B          0B          0B
## To disable swap permanently, comment out every swap line in /etc/fstab with #.
root@master01:~# cat /etc/fstab 
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-sG2jwkz8EncvlR73fIPJY43cn1PcNlBxGjh7LiAsyDMV7oBoP0gPUAZ9hMVSQKmi / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/f289d351-bfc0-4a3a-b13a-7dd1f4593dad /boot ext4 defaults 0 1
# /swap.img	none	swap	sw	0	0
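
If you prefer not to edit /etc/fstab by hand, a one-liner like the following comments out any uncommented swap entries (a sketch; double-check the result with cat /etc/fstab afterwards):

sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab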

 5. Enable forwarding and configure IPVS

iptables works through a linear list of rules and was originally designed for firewalls: the more Services a cluster has, the more rules there are, and because rules are matched top to bottom, efficiency drops as the cluster grows. IPVS uses hash tables instead, so once the number of Services reaches a certain scale its lookup speed advantage shows and Service performance improves.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

## Install ipset and ipvsadm to make working with IPVS proxy rules easier
apt install -y ipset ipvsadm vim bash-completion net-tools gcc
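
The steps above load br_netfilter but not the IPVS kernel modules themselves. For kube-proxy to actually run in IPVS mode, the modules also need to be loaded (and kube-proxy itself must be configured with mode: ipvs, which these steps alone do not do). A sketch of loading them persistently; the module names assume a 5.x kernel, where nf_conntrack replaced nf_conntrack_ipv4:

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
sudo systemctl restart systemd-modules-load.service
## verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack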

3. Configure package sources (all)

Docker source

Aliyun Docker CE mirror (link)

Kubernetes source

Aliyun Kubernetes apt mirror (link)
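
As a sketch, the two repositories can be added roughly as follows (the key and repository URLs are the standard Aliyun mirror paths; verify them against the linked pages, and note that curl and software-properties-common need to be installed first):

## Docker CE repository (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

## Kubernetes repository (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

sudo apt-get update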
 

With both sources added, apt-get update looks like this:

root@master01:~# apt-get update
Hit:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy InRelease
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9383 B]                            
Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy InRelease                                               
Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-updates InRelease
Ign:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Hit:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-backports InRelease
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages [59.4 kB]
Hit:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu jammy-security InRelease

4. Install Docker and configure a registry mirror (all)

Install Docker and enable it to start on boot:

root@master01:~# apt -y install docker-ce
root@master01:~# systemctl enable docker

Configure the registry mirror

Log in to the Aliyun Container Registry console to get your personal mirror accelerator address and put it in registry-mirrors below. Because the kubelet expects the systemd cgroup driver, native.cgroupdriver is set to systemd as well. Note that daemon.json is plain JSON, so it must not contain comments:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxxxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

After configuring, run docker info; if Registry Mirrors now shows your Aliyun accelerator address, the mirror is set up correctly.
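
For example:

docker info | grep -A 1 "Registry Mirrors"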

5. Install the Kubernetes cluster

1. List all available Kubernetes versions

apt-cache madison kubelet

I chose Kubernetes 1.23.9 because, starting with version 1.24, Kubernetes removed its built-in Docker support (dockershim) and uses containerd (or another CRI runtime) instead.

2. Install kubeadm-1.23.9, kubelet-1.23.9 and kubectl-1.23.9 (all)

After installing, you will notice that kubelet cannot start yet; don't worry, the master still has to be initialized with kubeadm init.

root@master01:~# apt -y install kubeadm=1.23.9-00 kubectl=1.23.9-00 kubelet=1.23.9-00
## Enable and start kubelet
root@master01:~# systemctl enable --now kubelet
## Check kubelet's status
root@master01:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Mon 2022-09-19 19:12:37 CST; 6s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 5272 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 5272 (code=exited, status=1/FAILURE)
        CPU: 193ms
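
To keep a later apt upgrade from moving these three packages ahead of the cluster version, it is common to pin them (an optional step, not part of the original walkthrough):

apt-mark hold kubeadm kubectl kubelet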

3. Initialize the master node (kubeadm init) (master)

  • kubeadm config upload from-file: upload a configuration file to the cluster and generate the ConfigMap.
  • kubeadm config upload from-flags: generate the ConfigMap from command-line flags.
  • kubeadm config view: view the configuration values currently stored in the cluster.
  • kubeadm config print init-defaults: print the default parameter file for kubeadm init.
  • kubeadm config print join-defaults: print the default parameter file for kubeadm join.
  • kubeadm config migrate: convert a configuration between old and new versions.
  • kubeadm config images list: list the required images.
  • kubeadm config images pull: pull the images to the local machine.
Running kubeadm config print init-defaults > kubeadm.yaml gives us kubeadm.yaml, a file containing the default initialization parameters for the cluster. (Experienced users can write a kubeadm.yaml tailored to their own production environment from scratch; I'm still learning, so I'll start from the generated default.)
    root@master01:/opt/k8s# kubeadm config print init-defaults >kubeadm.yaml
    root@master01:/opt/k8s# ll
    total 12
    drwxr-xr-x 2 root root 4096 Sep 19 19:20 ./
    drwxr-xr-x 5 root root 4096 Sep 19 19:20 ../
    -rw-r--r-- 1 root root  784 Sep 19 19:20 kubeadm.yaml

Open kubeadm.yaml and adjust the fields marked below:

    root@master01:/opt/k8s# cat kubeadm.yaml 
    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef  ## used later by kubeadm join
      ttl: 24h0m0s   ## token lifetime
      usages:
      - signing
      - authentication
    kind: InitConfiguration    ## the configuration kind
    localAPIEndpoint:
      advertiseAddress: 192.168.111.141    ## change this to your master node's IP address
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      imagePullPolicy: IfNotPresent     ## image pull policy
      name: master01               ## change this to your master node's hostname
      taints: null
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers   ## the default is the official Kubernetes registry, which is slow to pull from inside China, so I switched it to the Aliyun mirror
    kind: ClusterConfiguration
    kubernetesVersion: 1.23.0
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.220.0.0/16           ## add this line yourself: the Pod network CIDR, used later by the CNI plugin
      serviceSubnet: 10.10.0.0/12        ## the Service CIDR; both CIDRs can be any ranges you like, they do not have to match mine
    scheduler: {}

Check which images are needed to install Kubernetes:

    root@master01:/opt/k8s# kubeadm config images list
    I0919 19:44:29.558272    6867 version.go:255] remote version is much newer: v1.25.1; falling back to: stable-1.23
    k8s.gcr.io/kube-apiserver:v1.23.11              ## kube-apiserver serves the Kubernetes API and persists cluster state in etcd
    k8s.gcr.io/kube-controller-manager:v1.23.11     ## kube-controller-manager runs the controllers that reconcile cluster state
    k8s.gcr.io/kube-scheduler:v1.23.11              ## kube-scheduler places newly created Pods onto suitable nodes
    k8s.gcr.io/kube-proxy:v1.23.11                  ## kube-proxy ties in with the IPVS setup above: it programs the routing rules that let Pods reach each other and the outside world
    k8s.gcr.io/pause:3.6                            ## pause is the infrastructure container whose Linux namespaces the other containers in a Pod share
    k8s.gcr.io/etcd:3.5.1-0                         ## etcd is the highly available distributed key-value store
    k8s.gcr.io/coredns/coredns:v1.8.6               ## CoreDNS provides in-cluster DNS; containers should address each other by name, since Pod IPs change when containers are recreated

Pull the required images:

    root@master01:/opt/k8s#  kubeadm config images pull --config kubeadm.yaml
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
    [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
    [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
    [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
    [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
    root@master01:/opt/k8s# docker images
    REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.0   e6bf5ddd4098   9 months ago    135MB
    registry.aliyuncs.com/google_containers/kube-proxy                v1.23.0   e03484a90585   9 months ago    112MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.0   37c6aeb3663b   9 months ago    125MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.0   56c5af1d00b5   9 months ago    53.5MB
    registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   10 months ago   293MB
    registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   11 months ago   46.8MB
    registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   12 months ago   683kB
    

Run kubeadm init to install the master; note that kubeadm does not initialize the network plugin (CNI).

    root@master:~# kubeadm init --config kubeadm.yaml 
    [init] Using Kubernetes version: v1.23.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.111.144]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.111.144 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.111.144 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 7.506431 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
    NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: abcdef.0123456789abcdef
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.111.144:6443 --token abcdef.0123456789abcdef \
    	--discovery-token-ca-cert-hash sha256:8f7d449319554abd06707d01ac43ed3c6940dfaaffb78694a644c92b7eab29a2 
    

Because kubeadm protects the API server with CA certificates by default, kubectl has to be given credentials before it can access the master.

Configure kubectl client authentication as the init output instructs:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable command-line completion for kubectl:

    echo "source <(kubectl completion bash)" >> ~/.bashrc
    
    source ~/.bashrc

Check kubelet again; this time it has started successfully.

4. Join the worker nodes to the cluster (node)

Join the cluster with kubeadm join:

    root@node01:/etc# kubeadm join 192.168.111.141:6443 --token abcdef.0123456789abcdef \
    > --discovery-token-ca-cert-hash sha256:3ad343aa2d6bce6c3e20a09000661859fdf4a9bdb94b464737676e1c30380ec5

After joining, check the status of the nodes.
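
On the master, for example:

kubectl get nodes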

All the nodes show NotReady. Why is that?

As mentioned earlier, kubeadm does not configure the network, so the CoreDNS Pods cannot come up yet. Let's take a look:

    root@master01:/opt/k8s# kubectl get pod -n kube-system 
    NAME                               READY   STATUS    RESTARTS   AGE
    coredns-6d8c4cb4d-6hwzj            0/1     Pending   0          15m
    coredns-6d8c4cb4d-tl4xd            0/1     Pending   0          15m
    etcd-master01                      1/1     Running   0          15m
    kube-apiserver-master01            1/1     Running   0          15m
    kube-controller-manager-master01   1/1     Running   0          15m
    kube-proxy-2z9k6                   1/1     Running   0          10m
    kube-proxy-7xxtf                   1/1     Running   0          10m
    kube-proxy-z9h4q                   1/1     Running   0          15m
    kube-scheduler-master01            1/1     Running   0          15m

Sure enough, CoreDNS is stuck in Pending. Let's fix that.

6. Install the Calico network plugin (master)

The main options today are the Flannel and Calico plugins; Calico is generally the more capable of the two, so I'll use Calico.

Download the calico.yaml manifest and create the resources from it:

    root@master01:/opt/k8s# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
    root@master01:/opt/k8s# kubectl apply -f calico.yaml 
    configmap/calico-config configured
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node unchanged
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers unchanged
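
One thing worth double-checking: calico.yaml contains a commented-out CALICO_IPV4POOL_CIDR environment variable on the calico-node DaemonSet. If Pods later get addresses outside the podSubnet configured in kubeadm.yaml (10.220.0.0/16), uncomment it and set it to the same range, roughly:

            - name: CALICO_IPV4POOL_CIDR
              value: "10.220.0.0/16"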

Now check the CoreDNS Pods and the node status again:

    root@master01:/opt/k8s# kubectl get pod -n kube-system 
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-6844cbdddf-hnmw8   1/1     Running   0          3m55s
    calico-node-67dmh                          1/1     Running   0          3m55s
    calico-node-f2xm6                          1/1     Running   0          3m55s
    calico-node-mspck                          1/1     Running   0          3m55s
    coredns-6d8c4cb4d-6hwzj                    1/1     Running   0          33m
    coredns-6d8c4cb4d-tl4xd                    1/1     Running   0          33m
    etcd-master01                              1/1     Running   0          33m
    kube-apiserver-master01                    1/1     Running   0          33m
    kube-controller-manager-master01           1/1     Running   0          33m
    kube-proxy-2z9k6                           1/1     Running   0          28m
    kube-proxy-7xxtf                           1/1     Running   0          28m
    kube-proxy-z9h4q                           1/1     Running   0          33m
    kube-scheduler-master01                    1/1     Running   0          33m
    root@master01:/opt/k8s# kubectl get nodes 
    NAME       STATUS   ROLES                  AGE   VERSION
    master01   Ready    control-plane,master   34m   v1.23.9
    node01     Ready    <none>                 28m   v1.23.9
    node02     Ready    <none>                 28m   v1.23.9

7. Make the master node schedulable (optional)

By default the master node will not schedule ordinary workload Pods. Running the command below (on this cluster the master node is named master01) makes it schedulable; this touches on the concept of "taints".

    kubectl taint node master01 node-role.kubernetes.io/master:NoSchedule-
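
If you later want to keep ordinary workloads off the master again, the taint can be re-added (a sketch using this cluster's node name):

    kubectl taint node master01 node-role.kubernetes.io/master:NoSchedule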
