Table of Contents

Offline installation of a K8s cluster with kubeadm

Environment preparation

Base environment configuration

Install Docker

Prepare base images and installation packages

Images

Package installation

Cluster initialization

Set up .kube/config

Install the network component

Join the worker nodes

Verify the cluster

Deploy the dashboard

Set the dashboard access port

Create an access account

Token access

Installation packages

Offline installation of a K8s cluster with kubeadm

Environment preparation

Use Vagrant to bring up three virtual machines running CentOS 7.3:

192.168.56.10 master

192.168.56.11 worker

192.168.56.12 worker

Base environment configuration

#########################################################################
# Disable the firewall; on a cloud server, open the required ports in the
# security group instead:
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld

# Change the hostname
hostnamectl set-hostname k8s-01
# Check the result
hostnamectl status
# Add a hosts entry for the hostname
echo "127.0.0.1   $(hostname)" >> /etc/hosts

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to inspect bridged traffic
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%85%81%E8%AE%B8-iptables-%E6%A3%80%E6%9F%A5%E6%A1%A5%E6%8E%A5%E6%B5%81%E9%87%8F
# Load the br_netfilter module
sudo modprobe br_netfilter
# Confirm it is loaded
lsmod | grep br_netfilter

# Pass bridged IPv4 traffic to iptables chains by editing /etc/sysctl.conf.
# If the keys already exist, rewrite them in place:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# If the keys do not exist yet, append them:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p

#################################################################

Install Docker

Reference:

Centos安装docker_MyySophia的博客-CSDN博客

systemctl daemon-reload
systemctl enable docker --now
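Optionally, set Docker's cgroup driver to systemd before enabling the service; kubeadm recommends systemd and otherwise prints the IsDockerSystemdCheck warning visible in the init log below. A minimal sketch, assuming the default /etc/docker/daemon.json location:

# write a minimal daemon.json selecting the systemd cgroup driver
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker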

Prepare base images and installation packages

Since the machines have no internet access, the required images and RPM packages must be prepared in advance.

Both the images and the RPM packages are downloaded on a machine with internet access and then copied to the target machines.

Images

The Kubernetes cluster version is v1.20.9 and the Docker version is 20.10.7; 13 images are needed in total. Use the following two scripts to first docker save the images and then docker load them (run the load on all three machines).

# base images
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
# calico images
docker.io/calico/cni:v3.22.1
docker.io/calico/pod2daemon-flexvol:v3.22.1
docker.io/calico/node:v3.22.1
docker.io/calico/kube-controllers:v3.22.1
# dashboard images
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.6

First, use the following script to docker save the images:

tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-controller-manager:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-scheduler:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:1.7.0
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/etcd:3.4.13-0
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2
calico/cni:v3.22.1
calico/pod2daemon-flexvol:v3.22.1
calico/node:v3.22.1
calico/kube-controllers:v3.22.1
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.6
)
for imageName in ${images[@]} ; do
  # replace '/' with '_' so the filename matches what the load script expects
  name=$(echo $imageName | sed 's/\//_/g')
  docker save -o ${name}.docker $imageName
done
EOF
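After pulling the 13 images on the machine with internet access, run the script there and copy the resulting .docker archives to each of the three machines; the destination path below is only an assumption:

chmod +x images.sh && ./images.sh
# copy the saved archives to every node (repeat per node; the path is illustrative)
scp ./*.docker root@192.168.56.10:/root/images/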

Then use the following script to docker load them on each target machine:

#!/bin/bash
images=(
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-controller-manager:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-scheduler:v1.20.9
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:1.7.0
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/etcd:3.4.13-0
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2
calico/cni:v3.22.1
calico/pod2daemon-flexvol:v3.22.1
calico/node:v3.22.1
calico/kube-controllers:v3.22.1
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.6
)
for imageName in ${images[@]} ; do
  # reconstruct the saved filename: '/' was replaced with '_' at save time
  name=$(echo $imageName | sed 's/\//_/g')
  key=.docker
  echo "docker load -i $name$key"
  docker load -i $name$key
done
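After loading, a quick count can confirm that all 13 images are present:

docker images | grep -E 'lfy_k8s_images|calico|kubernetesui' | wc -l    # expect 13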

Package installation

On a machine with internet access, use yum install --downloadonly with the package names to download the RPMs (a sketch follows), then copy them over and install them with rpm -ivh as listed below (run the installs on all three machines).
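As a sketch of the download step (assuming the online machine already has the Kubernetes and Docker yum repositories configured, and /root/k8s-rpms is an arbitrary download directory):

# download the packages and all their dependencies without installing them
yum install -y --downloadonly --downloaddir=/root/k8s-rpms \
    kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 \
    docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io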

rpm -ivh docker-ce-cli-20.10.7-3.el7.x86_64.rpm --force --nodeps
rpm -ivh containerd.io-1.4.6-3.1.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-ce-20.10.7-3.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-scan-plugin-0.17.0-3.el7.x86_64.rpm --force --nodeps
rpm -ivh ksh-20120801-142.el7.x86_64.rpm --force --nodeps
rpm -ivh 67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.19.0-0.x86_64.rpm --force --nodeps
rpm -ivh db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm --force --nodeps
rpm -ivh 02431d76ab73878211a6052a2fded564a3a2ca96438974e4b0baffb0b3cb883a-kubelet-1.20.9-0.x86_64.rpm --force --nodeps
rpm -ivh 8c6b5ba8f467558ee1418d44e30310b7a8d463fc2d2da510e8aeeaf0edbed044-kubeadm-1.20.9-0.x86_64.rpm --force --nodeps
rpm -ivh c968b9ca8bd22f047f56a929184d2b0ec8eae9c0173146f2706cec9e24b5fefb-kubectl-1.20.9-0.x86_64.rpm --force --nodeps
rpm -ivh audit-libs-python-2.8.5-4.el7.x86_64.rpm --force --nodeps
rpm -ivh checkpolicy-2.5-8.el7.x86_64.rpm --force --nodeps
rpm -ivh conntrack-tools-1.4.4-7.el7.x86_64.rpm --force --nodeps
rpm -ivh libcgroup-0.41-21.el7.x86_64.rpm --force --nodeps
rpm -ivh libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm --force --nodeps
rpm -ivh libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm --force --nodeps
rpm -ivh libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm --force --nodeps
rpm -ivh libsemanage-python-2.5-14.el7.x86_64.rpm --force --nodeps
rpm -ivh policycoreutils-python-2.5-34.el7.x86_64.rpm --force --nodeps
rpm -ivh python-IPy-0.75-6.el7.noarch.rpm --force --nodeps
rpm -ivh setools-libs-3.3.8-4.el7.x86_64.rpm --force --nodeps
rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm --force --nodeps
rpm -ivh yum-utils-1.1.31-54.el7_8.noarch.rpm --force --nodeps
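With the RPMs installed, enable kubelet so it starts on boot; it will restart in a loop until kubeadm init or kubeadm join writes its configuration, which is expected:

systemctl enable kubelet --now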

Cluster initialization

# Add the master domain-name mapping on all machines; change the IP below to your own master IP
echo "172.31.0.4  cluster-endpoint" >> /etc/hosts
[root@p1edaspk04 packages]# kubeadm init \
--apiserver-advertise-address=10.50.10.187 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local p1edaspk04] and IPs [10.96.0.1 10.50.10.187]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost p1edaspk04] and IPs [10.50.10.187 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost p1edaspk04] and IPs [10.50.10.187 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.003044 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node p1edaspk04 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node p1edaspk04 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1aqsp7.aqpc27wcm17t1fmp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
​
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token 1aqsp7.aqpc27wcm17t1fmp \
    --discovery-token-ca-cert-hash sha256:a2ba045927a20ba14c5942cfb9c405aa1734984de129715bb3be25eafb60ebeb \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 1aqsp7.aqpc27wcm17t1fmp \
    --discovery-token-ca-cert-hash sha256:a2ba045927a20ba14c5942cfb9c405aa1734984de129715bb3be25eafb60ebeb

Set up .kube/config

1. Set up .kube/config:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
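A quick sanity check that kubectl can reach the API server; the master will show NotReady until the network component below is installed:

kubectl get nodes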

Install the network component

Download the Calico YAML locally:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Modification 1:

In calico.yaml, change

- name: CALICO_IPV4POOL_CIDR
  value: "172.31.0.0/16"

so that the value matches the CIDR passed to --pod-network-cidr (192.168.0.0/16 in the init command above), for example with the sed sketch below.
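With the 192.168.0.0/16 pod CIDR used in the init command, a sed one-liner along these lines should work (verify the result afterwards, since calico.yaml formatting differs between versions):

sed -i 's#value: "172.31.0.0/16"#value: "192.168.0.0/16"#' calico.yaml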

Modification 2:

Change the image fields to point at the local images (this step is unnecessary if the images were loaded with the earlier script).

Install the Calico plugin:

kubectl apply -f calico.yaml
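You can watch the Calico pods come up before joining any nodes:

kubectl get pods -n kube-system -w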

Join the worker nodes

Run on each worker node:

kubeadm join cluster-endpoint:6443 --token 4xeuyq.rewmrveaf9euy35g \
    --discovery-token-ca-cert-hash sha256:bd393cfa9b859330e0675527add0dbbc1bce6b733d455a9650620d94453e47e8

The token expires after 24 hours. Once it has expired, create a new join command with:

kubeadm token create --print-join-command

A successful node join looks like this:

[root@hadoop101 ~]# kubeadm join cluster-endpoint:6443 --token 4xeuyq.rewmrveaf9euy35g --discovery-token-ca-cert-hash sha256:bd393cfa9b859330e0675527add0dbbc1bce6b733d455a9650620d94453e47e8
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.

  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

For a high-availability deployment, additional master nodes are also added at this step, using the join command with --control-plane.

A master node is a control-plane node:

kubeadm join cluster-endpoint:6443 --token 4xeuyq.rewmrveaf9euy35g \
    --discovery-token-ca-cert-hash sha256:bd393cfa9b859330e0675527add0dbbc1bce6b733d455a9650620d94453e47e8 \
    --control-plane

Verify the cluster

  • kubectl get nodes

    Nodes only become Ready after their pod images have been pulled; see the checks below.
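For example:

kubectl get nodes
kubectl get pods -A    # all pods should eventually reach Running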

Deploy the dashboard

First download the YAML file; the images were already prepared by the earlier scripts:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Set the dashboard access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort.
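Alternatively, a non-interactive patch makes the same change:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'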

kubectl get svc -A |grep kubernetes-dashboard

Log in through the service's NodePort:

https://<any cluster node IP>:<NodePort>

Create an access account

# Create the access account; prepare a YAML file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

Token access

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
The generated token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IkdvaXk4QnM5UE1Gb0wxaUpHeEhpQUlvZV8tc09MbEhSaFU4UWZwdjNQbVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXo2OHNqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiOTBlZTc4OS1hMTUwLTRjYWQtYmUzNC0zYTA0MDRjODE1Y2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.J4Pr4YsPOadz0AUpaoELKNfeHvYwWFIiD1cMgbkR-AL6uHNbjXXD69ZNYgy7gWdHY5QQBNvXYhJc4t7EKUi1rDsEfWA_OivsLMuIWV_hfERv6vGY78ZnijW68z-fc7hzGkhwe-fUrnXCmieTxPdw945_jb7HmRLUIQt3baZvYY88XoHOUvOz0r_T_2PEAnKsoKzdpPTcIrtaOggFENstkoAe7dX5gXXFFO_EfM15UYXiXADFLqIBLllBGd2ECKAsOR3f_ViT2_Q8VViWwCld5zqKcG0GtOYIibIwYSTUPYwhdQidd9dUPlwuOPnXoK_26TUGPnR8fwPEeul3qPAZMw
Save this token; you will need it to log in again after the session expires.

Installation packages

Link: 百度网盘 (Baidu Netdisk), extraction code: 0cxr
