1. Prepare the system environment

  • Disable default sleep/hibernation (optional)
    • Disable: systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
    • Re-enable: sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target
  • Turn off the firewall (optional)
    • CentOS: systemctl stop firewalld && systemctl disable firewalld && iptables -F
    • Ubuntu: ufw disable
    • Debian: sudo iptables -P INPUT ACCEPT && sudo iptables -P FORWARD ACCEPT && sudo iptables -P OUTPUT ACCEPT && sudo iptables -F
  • Disable SELinux (optional)
    • CentOS: sed -i s#SELINUX=enforcing#SELINUX=disabled# /etc/selinux/config
  • Disable swap (required)
    • swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Synchronize time:
    • Set the timezone to UTC+8: timedatectl set-timezone "Asia/Shanghai"
    • Install ntpdate (sudo apt install ntpdate -y) and sync: ntpdate time1.aliyun.com
  • Update system packages: apt update && apt upgrade
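
A quick check that the required preparation took effect (a minimal sketch; getenforce exists only where SELinux is installed, and the SELinux change needs a reboot):

swapon --show              # prints nothing once swap is disabled
free -h | grep -i swap     # swap total should be 0
timedatectl | grep "Time zone"
getenforce                 # CentOS only; Disabled (or Permissive) after a reboot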

For ECS deployments, add a virtual (dummy) NIC on every node

Method 1 (lost after reboot)
# add a dummy NIC
ip link add dummy-pub0 type dummy
# add the public IP address; replace $public_ip with the node's public IP
ip addr add $public_ip dev dummy-pub0

Verify

ip a
Method 2 (persistent)

Edit the network interface configuration

  • Debian/Ubuntu: vim /etc/network/interfaces
auto dummy-pub0:0
iface dummy-pub0:0 inet static
address <public IP>
netmask 255.255.255.0

Restart networking (on Debian, install ifupdown first: apt install ifupdown)

/etc/init.d/networking restart
or
systemctl restart networking

Verify

ip a
  • CentOS: vim /etc/sysconfig/network-scripts/ifcfg-eth0:1
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=<public IP>
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes

Restart networking

systemctl restart network

Verify

ip a

For ECS deployments, configure iptables address translation on every node

# rewrite the source address of packets sent from the public IP to the private IP
iptables -t nat -I POSTROUTING -s <public IP>/32 -j SNAT --to-source <private IP>
# rewrite the destination address of packets sent to the public IP to the private IP
sudo iptables -t nat -I OUTPUT -d <public IP>/32 -j DNAT --to-destination <private IP>

List the rules

iptables --table nat -L --line-number

To delete a rule

iptables -t nat -D POSTROUTING <rule_number>
iptables -t nat -D OUTPUT <rule_number>

2. Set the hostnames

| CPU & RAM | Disk | Role | Hostname |
| --- | --- | --- | --- |
| 2C2G | 6G | master-node | k8s-node1 |
| 2C2G | 6G | worker-node | k8s-node2 |
| 2C2G | 6G | worker-node | k8s-node3 |
Set the hostname on each node:
# master-node
hostnamectl set-hostname k8s-node1
# worker-node
hostnamectl set-hostname k8s-node2
# worker-node
hostnamectl set-hostname k8s-node3
# check the hostname
hostname

3. Configure hosts

Set a static IP. Check the NIC name and the currently assigned IP with: ip addr

Debian/Ubuntu static IP: vim /etc/network/interfaces (use the currently assigned IP as the address; for a VM, the gateway is found under Virtual Network Editor -> NAT mode -> NAT Settings; production environments have their own gateway)

auto ens33
iface ens33 inet static
address 192.168.64.129
netmask 255.255.255.0
gateway 192.168.64.2

On all nodes, vim /etc/hosts and add the following (for an ECS deployment, use the public IPs here):

192.168.68.129   k8s-node1
192.168.68.130   k8s-node2
192.168.68.131   k8s-node3

Set DNS on all nodes: vim /etc/resolv.conf and add:

nameserver 114.114.114.114
nameserver 8.8.8.8
nameserver 8.8.4.4
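
A quick check that name resolution works between the nodes (a minimal sketch):

getent hosts k8s-node1 k8s-node2 k8s-node3   # should return the IPs added to /etc/hosts
ping -c 1 k8s-node2                          # each node should reach the others by name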

4. Configure bridge filtering and kernel IP forwarding

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Load the overlay and br_netfilter modules (so that iptables rules apply to traffic crossing Linux bridges, i.e. bridged traffic is passed to the iptables chains):

modprobe overlay
modprobe br_netfilter

Load them automatically at boot:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Verify that overlay and br_netfilter are loaded:

lsmod | grep overlay
lsmod | grep br_netfilter

Apply the parameters from all system configuration files:

sysctl --system
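
To confirm the parameters took effect, a minimal check:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# all three should report 1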

5. Load the ip_vs modules

Required only if kube-proxy runs in IPVS mode. (Note: on kernels 4.19 and newer the nf_conntrack_ipv4 module has been renamed nf_conntrack.)

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

Load them automatically at boot:

cat > /etc/modules-load.d/ip_vs.conf << EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF

Verify

lsmod | grep ip_vs
lsmod | grep ip_vs_rr
lsmod | grep ip_vs_wrr
lsmod | grep ip_vs_sh
lsmod | grep nf_conntrack_ipv4

Install ipset and ipvsadm to inspect IPVS rules:

apt install -y ipset ipvsadm
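
Once kube-proxy is running in IPVS mode, the virtual services it programs can be listed, e.g.:

ipvsadm -Ln   # lists IPVS virtual services and their backend servers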

6. Install the container runtime

Online installation of containerd

# list available packages
apt list | grep containerd
# install (the containerd.io package comes from Docker's apt repository; the distribution package is named containerd)
sudo apt-get install containerd.io
# uninstall
sudo apt-get remove containerd

Enable containerd at boot

# enable at boot and start immediately
systemctl enable --now containerd
# start
systemctl start containerd
# check status
systemctl status containerd

Check

ctr version

Offline installation of containerd (recommended)

1. Download the latest containerd from https://github.com/containerd/containerd/releases (right-click to copy the link)


# download
wget https://github.com/containerd/containerd/releases/download/v1.7.20/containerd-1.7.20-linux-amd64.tar.gz
# extract
tar Cxzvf /usr/local containerd-1.7.20-linux-amd64.tar.gz
# clean up
rm containerd-1.7.20-linux-amd64.tar.gz

Create the directory: mkdir -p /usr/local/lib/systemd/system

Create the unit file vim /usr/local/lib/systemd/system/containerd.service so containerd can be managed by systemd:

# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Enable at boot

# reload the systemd unit files
systemctl daemon-reload
# enable at boot and start immediately
systemctl enable --now containerd

2. Install runc
Download the latest runc from https://github.com/opencontainers/runc/releases (right-click to copy the link)


# download
wget https://github.com/opencontainers/runc/releases/download/v1.1.13/runc.amd64
# install
install -m 755 runc.amd64 /usr/local/sbin/runc
# clean up
rm runc.amd64
# check
runc -v

3. Install the CNI plugins
Download the latest CNI plugins from https://github.com/containernetworking/plugins/releases (right-click to copy the link)


# download
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
# create the directory
mkdir -p /opt/cni/bin
# extract
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.1.tgz
# clean up
rm cni-plugins-linux-amd64-v1.5.1.tgz

Check

ctr version

Create the containerd configuration file

Configuration reference: https://github.com/containerd/containerd/blob/main/docs/man/containerd-config.toml.5.md

Generating the default configuration is sufficient:

# create the directory
mkdir -p /etc/containerd
# generate the configuration
containerd config default > /etc/containerd/config.toml

Check

cat /etc/containerd/config.toml

Configure systemd as the cgroup driver

Edit the file: vim /etc/containerd/config.toml

# set SystemdCgroup to true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
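
If you prefer not to edit the file by hand, the same change can be made with sed (a one-line sketch that simply flips the generated default):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml   # should now show true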

Restart

systemctl restart containerd

7. Open the following ports (if the firewall was not disabled; open them manually in production)

# check whether a port is reachable
nc 127.0.0.1 6443 -v
or
nc k8s-node1 6443 -v
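
To check all of the control-plane ports listed below in one go, a small loop (a sketch; assumes nc is installed and the master-node is k8s-node1):

for p in 6443 2379 2380 10250 10257 10259; do
  nc -zv -w 2 k8s-node1 $p
done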
Control plane (master-node)

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |

Worker node(s)

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Inbound | 10256 | kube-proxy | Self, Load balancers |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |

Ports that must be open on all nodes

Calico plugin: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| TCP | Bidirectional | 179 | BGP for Calico networking | All hosts |
| IP-in-IP | Bidirectional | Protocol 4 | Calico networking with IP-in-IP enabled (default) | All hosts |
| UDP | Bidirectional | 4789 | VXLAN for Calico networking with VXLAN enabled | All hosts |
| TCP | Incoming | 5473 | Typha for Calico networking with Typha enabled | Typha agent hosts |
| UDP | Bidirectional | 51820 | IPv4 Wireguard for Calico networking with IPv4 Wireguard enabled | All hosts |
| UDP | Bidirectional | 51821 | IPv6 Wireguard for Calico networking with IPv6 Wireguard enabled | All hosts |
| UDP | Bidirectional | 4789 | VXLAN for flannel networking | All hosts |
| TCP | Incoming | 443 or 6443* | kube-apiserver host | All hosts |
| TCP | Incoming | 2379 | etcd datastore | etcd hosts |

Flannel plugin

| Protocol | Direction | Port Range | Purpose | Used By |
| --- | --- | --- | --- | --- |
| UDP | Inbound | 8285 | flannel udp backend | Flannel overlay network |
| UDP | Inbound | 8472 | flannel vxlan backend | Flannel overlay network |
8. Install crictl (a CLI for CRI-compatible container runtimes, required by kubeadm/kubelet)

GitHub releases: https://github.com/kubernetes-sigs/cri-tools/releases

# download the package
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.30.0/crictl-v1.30.0-linux-amd64.tar.gz
# install
tar zxvf crictl-v1.30.0-linux-amd64.tar.gz -C /usr/local/bin
# verify
crictl -v
# clean up
rm -f crictl-v1.30.0-linux-amd64.tar.gz
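
By default crictl probes several runtime sockets and prints warnings; pointing it at containerd explicitly avoids that (a minimal sketch using containerd's default socket path):

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF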

9. Install kubeadm, kubelet, and kubectl

  • kubeadm: the command used to bootstrap the cluster
  • kubelet: the component that runs on every machine in the cluster and starts pods and containers
  • kubectl: the command-line tool for talking to the cluster

Online installation

# install packages needed to use the Kubernetes apt repository
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# download the public signing key
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# add the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install kubeadm, kubelet, and kubectl:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# pin the versions
sudo apt-mark hold kubelet kubeadm kubectl

Enable kubelet at boot

sudo systemctl enable --now kubelet

Offline installation

Install kubeadm and kubelet:

# change to the target directory
cd /usr/local/bin
# download the latest kubeadm and kubelet
sudo curl -L --remote-name-all "https://dl.k8s.io/release/$(curl -sSL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/{kubeadm,kubelet}"
# make them executable
sudo chmod +x {kubeadm,kubelet}

Add the kubelet.service unit: vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Create the kubelet systemd drop-in directory:

sudo mkdir -p /usr/lib/systemd/system/kubelet.service.d

Configure the kubeadm drop-in: vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

If deploying on ECS, append the following to the last ExecStart line:

 --node-ip=<public IP>


Enable kubelet at boot

sudo systemctl enable --now kubelet

Verify kubeadm and kubelet

kubeadm version
kubelet --version

If the downloads fail, look up the IP of raw.githubusercontent.com, add it to /etc/hosts (vim /etc/hosts), and retry:

20.205.243.166  raw.githubusercontent.com

Install kubectl

# return to the home directory
cd
# download
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# install
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# verify
kubectl version --client

On all nodes, install ebtables, ethtool (queries and sets NIC parameters), conntrack (network connection tracking), and socat:

apt install -y ebtables ethtool conntrack socat

10. Initialize the master-node (run on the master-node only)

Initialization method 1

Pull the images

# list the images required by the configuration
kubeadm config images list
# pull the images
kubeadm config images pull
# or pull from a specific image repository
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
# list the images
crictl img

If the images cannot be pulled:

# on a machine with internet access, pull the image locally
docker pull <image>
# export the local image as a tar archive (defaults to the latest tag if no version is given)
docker save -o front.tar <image>
# upload the archive to the server and import it
sudo ctr -n k8s.io images import front.tar
# list the images
sudo ctr -n k8s.io images ls


Initialize the cluster. Set apiserver-advertise-address to the master-node's IP (for an ECS deployment, use the public IP) and control-plane-endpoint to your control-plane address:

kubeadm init --kubernetes-version v1.30.3 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=<public or node IP> \
--control-plane-endpoint=8.130.120.125 \
--service-cidr=10.96.0.0/12 \
--image-repository registry.aliyuncs.com/google_containers --v=5

Flag reference

kubeadm init \
  --kubernetes-version <Kubernetes version to install> \
  --apiserver-advertise-address=<IP the API server advertises on> \
  --service-cidr=<CIDR from which Service IPs are allocated> \
  --control-plane-endpoint=<control-plane IP or DNS name> \
  --pod-network-cidr=<CIDR for the Pod network> \
  --image-repository <image registry to pull from>

If initialization fails, run kubeadm reset -f. Note that the dummy NIC is lost after a reboot and must be re-created.

Flush the iptables rules: iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X. After flushing, the iptables address translation (see the beginning of this article) has to be configured again.

List the rules: iptables --table nat -L --line-number

Reset the IPVS tables: ipvsadm -C

Remove the kubeconfig: rm -f $HOME/.kube/config

Remove the CNI configuration: rm -rf /etc/cni/net.d

Initialization succeeded when kubeadm prints the "Your Kubernetes control-plane has initialized successfully!" message followed by a kubeadm join command; keep that join command for later.


Error: missing required cgroups: cpu

Edit /etc/default/grub (vim /etc/default/grub) and add cgroup_enable=cpu inside GRUB_CMDLINE_LINUX="" (space-separated from any existing parameters), then reboot.
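
For example (a sketch; keep any parameters already present in GRUB_CMDLINE_LINUX), then regenerate the GRUB configuration and reboot:

# /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=cpu"
# apply and reboot
update-grub     # Debian/Ubuntu (on CentOS: grub2-mkconfig -o /boot/grub2/grub.cfg)
reboot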


Initialization method 2

Pull the images

# list the images required by the configuration
kubeadm config images list
# generate a default configuration file
kubeadm config print init-defaults > init.default.yaml

Edit the configuration file (vim init.default.yaml) and change the values marked with comments below; set advertiseAddress to the master-node's IP (for an ECS deployment, use the public IP):

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4      # public or node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-node1                # this node's name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # image registry
kind: ClusterConfiguration
kubernetesVersion: 1.30.3        # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"     # Pod network CIDR
scheduler: {}

Pull the images

kubeadm config images pull --config=init.default.yaml
# list the images
crictl img

Initialize (the advertiseAddress in the config file should already be set to the master-node's IP or the public IP):

kubeadm init --config init.default.yaml --v=5

If images are still needed at init time, either pass --image-repository registry.aliyuncs.com/google_containers on the command line,

or set imageRepository: registry.aliyuncs.com/google_containers in init.default.yaml.

If needed, also change the sandbox image in the containerd configuration /etc/containerd/config.toml to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9", then systemctl restart containerd.

11. Join worker nodes to the cluster

As a regular user on the master-node, run:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

As root on the master-node, run:

export KUBECONFIG=/etc/kubernetes/admin.conf

To keep this setting across reboots:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

Run the kubeadm join command printed at the end of the init output on each worker-node you want to join; here it is run on k8s-node2 and k8s-node3 (the token is valid for 24 hours):

kubeadm join <node or public IP>:6443 --token 2gu194.xtajrtbzk7lgulyf \
        --discovery-token-ca-cert-hash sha256:95e9b2e4ffc706d8f52b406b52f280dd875d8dbfddd4aa565c1ce7446977cefd 

On the master-node, list the nodes:

kubectl get nodes

List all pods:

kubectl get pods -n kube-system


Delete a node:

kubectl delete node k8s-node2

Add a new node to the cluster:

# list tokens
kubeadm token list
# if there is none, create a new token
kubeadm token create
# compute the hash of the CA certificate public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# run on the new node
kubeadm join <node or public IP>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
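
Alternatively, a single command on the master-node creates a fresh token and prints a ready-to-use join command:

kubeadm token create --print-join-command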

Then kubectl get nodes should show the new node.


12. Modify the apiserver manifest

On the master-node, edit vim /etc/kubernetes/manifests/kube-apiserver.yaml
and add the following flag to the kube-apiserver command list:

- --bind-address=0.0.0.0


After the change, the kube-apiserver static pod restarts automatically.

Check that it is running:

kubectl get pods -n kube-system -o wide


13. Deploy the network plugin

Flannel (requires the CNI plugins)

Online installation of Flannel

Download the manifest:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Edit the manifest and add the following (set iface to the NIC name; if you created a dummy NIC, use its name; check NIC names with ip a). The two args go under the flannel container's args, and the env entry goes under its env:

        - --public-ip=$(PUBLIC_IP)
        - --iface=dummy-pub0
        - name: PUBLIC_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP


Install

kubectl apply -f kube-flannel.yml

If the image cannot be pulled, download the Docker image archive from https://github.com/flannel-io/flannel/releases (right-click to copy the link), then import it with ctr -n=k8s.io image import flanneld-v0.25.5-amd64.docker.
List images: crictl images

Uninstall

kubectl delete -f kube-flannel.yml
or
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Offline installation of Flannel

wget https://github.com/flannel-io/flannel/releases/download/v0.25.5/flannel-v0.25.5-linux-amd64.tar.gz
# create the directory
mkdir -p /usr/bin/flanneld
# extract
tar Cxzvf /usr/bin/flanneld flannel-v0.25.5-linux-amd64.tar.gz

Create the unit file: vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flannel Network Fabric
Documentation=https://github.com/coreos/flannel
Before=containerd.service
After=etcd.service
 
[Service]
Environment='DAEMON_ARGS=--etcd-endpoints=http://196.168.8.119:2379'
Type=notify
ExecStart=/usr/bin/flanneld/flanneld $DAEMON_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Enable at boot

systemctl daemon-reload
systemctl enable --now flanneld

Verify

kubectl get ns
kubectl get pods -n kube-flannel

Check that DNS (CoreDNS) is running:

kubectl get pods -n kube-system -o wide

Calico (official website)

If NetworkManager is present on the host, create the following configuration file (vim /etc/NetworkManager/conf.d/calico.conf) to keep NetworkManager from interfering with Calico's interfaces:

[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali

Check whether NetworkManager is in use:

nmcli -v 

On the master-node, install the Tigera Calico operator:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

Uninstall the Tigera Calico operator:

kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

Check that it is running; a Running status means it is up (in a new SSH session kubectl may fail to find the kubeconfig; run export KUBECONFIG=/etc/kubernetes/admin.conf first):

kubectl get ns
watch -n 1 kubectl get pods -n tigera-operator


Download the Calico installation manifest (see the configuration reference in the Calico docs):

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

Edit the manifest:

vim custom-resources.yaml

Change the IP pool CIDR to the pod-network-cidr that was used when initializing the master-node, as shown in the excerpt below.
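An excerpt of custom-resources.yaml as a sketch (the surrounding fields follow the default manifest shipped with the operator; only cidr needs to change):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 10.244.0.0/16      # must match the --pod-network-cidr used at kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled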

On the master-node, install Calico:

kubectl create -f custom-resources.yaml

Uninstall

kubectl delete -f custom-resources.yaml

Check that it is running; a Running status means it is up:

kubectl get ns
watch kubectl get pods -n calico-system
kubectl get pods -n kube-system -o wide


Verify Calico:

watch kubectl get pods -l k8s-app=calico-node -A

Check that DNS (CoreDNS) is running:

kubectl get pods -n kube-system -o wide


Verify that DNS resolution works:

# view the Service information
kubectl get svc -n kube-system
dig -t a www.baidu.com @10.96.0.10
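
In-cluster DNS can also be checked from a temporary pod (a minimal sketch; busybox's built-in nslookup is enough here):

kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default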


14. Deploy an Nginx application

Create a namespace

vim nginx-namespace.yaml

Add the following:

apiVersion: v1
kind: Namespace
metadata:
  name: nginx
  labels:
    name: nginx

Create the namespace:

kubectl create -f nginx-namespace.yaml

Create a Deployment

vim nginx-deployment.yaml

Add the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.26.1
        name: nginx
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/nginx.conf
        - name: log
          mountPath: /var/log/nginx/
        - name: html
          mountPath: /etc/nginx/html
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "nginx"
        effect: "NoSchedule"
      volumes:
      - name: conf
        hostPath:
          path: /usr/local/nginx/conf/nginx.conf
      - name: log
        hostPath:
          path: /usr/local/nginx/logs
          type: Directory
      - name: html
        hostPath:
          path: /usr/local/nginx/html
          type: Directory
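
The hostPath volumes above assume these paths already exist on whichever worker node the Pod is scheduled to; a minimal preparation sketch (run on the worker nodes):

mkdir -p /usr/local/nginx/conf /usr/local/nginx/logs /usr/local/nginx/html
echo '<h1>hello from kubernetes</h1>' > /usr/local/nginx/html/index.html
# /usr/local/nginx/conf/nginx.conf must also exist (the Pod mounts it as a single file);
# copy a standard nginx.conf there before creating the Deployment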

Deploy the application:

kubectl create -f nginx-deployment.yaml

Create a Service

vim nginx-service.yaml

Add the following:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: nginx
spec:
  ports:
  - port: 80         # Service port inside the cluster
    name: nginx-service80
    protocol: TCP
    targetPort: 80   # container port on the Pod
    nodePort: 80     # port exposed on every node (outside the default 30000-32767 range unless --service-node-port-range is changed)
  - port: 81
    name: nginx-service81
    protocol: TCP
    targetPort: 81
    nodePort: 81
  selector:
    app: nginx
  type: NodePort

Deploy the Service:

kubectl create -f nginx-service.yaml

Verify

kubectl get pods -n nginx
kubectl get svc -n nginx

Open http://<node IP>:<nodePort>/index.html in a browser.
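
Or check from the command line (a sketch; substitute a node IP and the nodePort actually assigned, as shown by kubectl get svc -n nginx):

curl http://<node IP>:80/index.html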
