Step-by-Step Binary Deployment of a Highly Available Kubernetes Cluster (Part 1)
WeChat official account: 运维开发故事 (Ops & Dev Stories). Author: double冬
1. Architecture
[image: overall architecture diagram]
1.1 Hardware environment
Create 5 virtual machines.
Network: outbound internet access is required.
Role | HOSTNAME | IP | CPU | MEM | OS | DISK |
---|---|---|---|---|---|---|
LB、DNS | zdd211-11.host.com | 10.211.55.11 | 2C | 2G | Centos7.5 | 20G |
LB、ETCD | zdd211-12.host.com | 10.211.55.12 | 2C | 2G | Centos7.5 | 20G |
Master1、Node1、ETCD | zdd211-21.host.com | 10.211.55.21 | 4C | 8G | Centos7.5 | 20G |
Master2、Node2、ETCD | zdd211-22.host.com | 10.211.55.22 | 4C | 8G | Centos7.5 | 20G |
Harbor、NFS | zdd211-200.host.com | 10.211.55.200 | 2C | 2G | Centos7.5 | 20G |
Use whatever IPs fit your own network, but remember to replace the IPs used throughout this document with your own.
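For example (not part of the original steps; the paths and target subnet are hypothetical, adjust them to your environment), a quick sed pass can rewrite the sample subnet in any config snippets you copy from this article before applying them:
#sed -i 's/10\.211\.55\./192.168.1./g' /opt/src/*.json /opt/src/*.sh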
2. Lab preparation
2.1 Prepare the virtual machines
[root@zdd211-11 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
UUID="7290e242-14de-4cd8-92f4-5406b3507f03"
DEVICE="eth0"
ONBOOT="yes"
IPADDR=10.211.55.11
PREFIX=24
GATEWAY=10.211.55.1
DNS1=10.211.55.1
2.2 OS initialization
Set the hostname on each host:
#hostnamectl set-hostname zdd211-11.host.com
#hostnamectl set-hostname zdd211-12.host.com
#hostnamectl set-hostname zdd211-21.host.com
#hostnamectl set-hostname zdd211-22.host.com
#hostnamectl set-hostname zdd211-200.host.com
Stop and disable firewalld on all hosts
#systemctl stop firewalld.service && systemctl disable firewalld
Disable SELinux on all hosts
#setenforce 0 # takes effect immediately
#sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
Turn off swap
#swapoff -a # takes effect immediately
#sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanent
The permanent changes require a reboot; verify that they took effect:
[root@zdd211-11 ~]# getenforce
Disabled
[root@zdd211-11 ~]# free -m
total used free shared buff/cache available
Mem: 1833 111 1533 8 188 1555
Swap: 0
Synchronize the system time
#ntpdate time.windows.com
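ntpdate only syncs the clock once; one simple way to keep it in sync (an assumption, not part of the original steps) is a cron entry:
#echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1' >> /var/spool/cron/root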
Install the EPEL and base repositories
#yum -y install epel-release
Install commonly used packages and tools
# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
2.3 Initialize the DNS service
2.3.1 Install bind9
On zdd211-11:
[root@zdd211-11 ~]# yum install bind -y ## install the DNS server software
2.3.2 Configure bind9
Main configuration file
On zdd211-11:
[root@zdd211-11 ~]# vim /etc/named.conf ## DNS main configuration file
options {
listen-on port 53 { 10.211.55.11; }; ## listen on this host's IP
allow-query { any; }; ## allow queries from any host
forwarders { 10.211.55.1; }; ## forward unresolved queries to the gateway
recursion yes; ## allow recursive queries
dnssec-enable no;
dnssec-validation no;
[root@zdd211-11 ~]# named-checkconf ## check the DNS configuration; no output means it is fine
Zone configuration file
#vim /etc/named.rfc1912.zones ## append the following at the bottom of the zone configuration file; change the IP to this host's address
zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.211.55.11; };
};
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 10.211.55.11; };
};
Create the zone data file for the host domain
[root@zdd211-11 ~]# vim /var/named/host.com.zone # change the serial to today's date, and adjust the hostnames and IPs at the bottom to your own
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020022901 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.211.55.11
zdd211-11 A 10.211.55.11
zdd211-12 A 10.211.55.12
zdd211-21 A 10.211.55.21
zdd211-22 A 10.211.55.22
zdd211-200 A 10.211.55.200
Create the zone data file for the business domain
[root@zdd211-11 ~]# vim /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020022101 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.211.55.11
Start bind9
[root@zdd211-11 ~]# named-checkconf ## check the DNS configuration; no output means it is fine
[root@zdd211-11 ~]# systemctl start named ## start the service
[root@zdd211-11 ~]# netstat -luntp|grep 53
tcp 0 0 10.211.55.11:53 0.0.0.0:* LISTEN 14046/named
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 14046/named
tcp6 0 0 ::1:53 :::* LISTEN 14046/named
tcp6 0 0 ::1:953 :::* LISTEN 14046/named
udp 0 0 10.211.55.11:53 0.0.0.0:* 14046/named
udp6 0 0 ::1:53
Verification
[root@zdd211-11 ~]# dig -t A zdd211-200.host.com @10.211.55.11 +short
10.211.55.200
[root@zdd211-11 ~]# dig -t A zdd211-11.host.com @10.211.55.11 +short
10.211.55.11
Configure clients to use this DNS server
- Linux hosts
[root@zdd211-11 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0 ## on every server that should use this DNS, point the NIC config at it
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
UUID="7290e242-14de-4cd8-92f4-5406b3507f03"
DEVICE="eth0"
ONBOOT="yes"
IPADDR=10.211.55.11
PREFIX=24
GATEWAY=10.211.55.1
DNS1=10.211.55.11
[root@zdd211-11 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search host.com ## add the host domain to the search list
nameserver 10.211.55.11
nameserver fe80::21c:42ff:fe00:18%eth0
Repeat the same steps on the remaining four machines:
1. In the NIC configuration file, change DNS1 to 10.211.55.11.
2. Add the host domain to the search line in /etc/resolv.conf.
- Windows and Mac hosts
Network and Sharing Center -> adapter settings -> set the DNS server to 10.211.55.11
If necessary, also set the interface metric of the virtual NIC to 10.
Checks
1. The five machines can ping each other by hostname (test at least a couple of them).
2. Your own workstation should be able to ping the domains zdd211-11.host.com and dns.od.com.
You can temporarily point your workstation's DNS at this server, but remember to change it back afterwards.
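A minimal sketch of the check mentioned above (an illustration, not from the original; it assumes the five hostnames listed earlier and that dig is available from bind-utils):
for h in zdd211-11 zdd211-12 zdd211-21 zdd211-22 zdd211-200; do
  ping -c 1 -W 1 ${h}.host.com >/dev/null && echo "${h}.host.com ping OK"
  dig -t A ${h}.host.com @10.211.55.11 +short
done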
2.4 Prepare the certificate-signing environment
2.4.1 Install cfssl
Certificate-signing toolkit cfssl, release R1.2
[root@zdd211-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
[root@zdd211-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
[root@zdd211-200 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
[root@zdd211-200 ~]# chmod +x /usr/bin/cfssl*
2.4.2 Create the JSON config file for the CA certificate signing request (CSR)
[root@zdd211-200 ~]# cd /opt/
[root@zdd211-200 opt]# mkdir certs
[root@zdd211-200 opt]# cd certs/
[root@zdd211-200 certs]# vim ca-csr.json
{
"CN": "Goldwind",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
CN: Common Name. Browsers use this field to validate whether a site is legitimate, so it is usually the domain name; it is very important.
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company)
OU: Organization Unit Name (department)
2.4.3 Generate the CA certificate and private key
[root@zdd211-200 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2020/02/29 20:23:57 [INFO] generating a new CA key and certificate from CSR
2020/02/29 20:23:57 [INFO] generate received request
2020/02/29 20:23:57 [INFO] received CSR
2020/02/29 20:23:57 [INFO] generating key: rsa-2048
2020/02/29 20:23:57 [INFO] encoded CSR
2020/02/29 20:23:57 [INFO] signed certificate with serial number 428559232580901856197456402666377998171140121588
[root@zdd211-200 certs]# ls
ca.csr ca-csr.json ca-key.pem ca.pem
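As a quick sanity check (optional, not in the original steps), the cfssl-certinfo tool downloaded earlier can dump the subject fields and expiry of the new CA certificate:
[root@zdd211-200 certs]# cfssl-certinfo -cert ca.pem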
2.5 Deploy Docker
Run the following on zdd211-200.host.com, zdd211-21.host.com and zdd211-22.host.com.
2.5.1 Install
[root@zdd211-200 ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
2.5.2 Configure
[root@zdd211-200 ~]# mkdir -p /data/docker /etc/docker
[root@zdd211-200 ~]# vim /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.200.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
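One detail worth calling out: the bip value must be unique per host. The convention in this article is 172.7.&lt;host last octet&gt;.1/24, which matches the 172.7.0.0/16 cluster CIDR and the pod IPs seen later, so on the other two hosts the daemon.json would differ roughly like this (inferred, not shown in the original):
on zdd211-21: "bip": "172.7.21.1/24"
on zdd211-22: "bip": "172.7.22.1/24"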
2.5.3 Start
[root@zdd211-200 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@zdd211-200 ~]# docker version
Client: Docker Engine - Community
Version: 19.03.6
API version: 1.40
Go version: go1.12.16
Git commit: 369ce74a3c
Built: Thu Feb 13 01:29:29 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.6
API version: 1.40 (minimum version 1.12)
Go version: go1.12.16
Git commit: 369ce74a3c
Built: Thu Feb 13 01:28:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
[root@zdd211-200 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2.6 Deploy the private image registry Harbor
According to the cluster plan, Harbor is deployed on zdd211-200.host.com.
Download the offline installer and unpack it
[root@zdd211-200 src]# wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz
[root@zdd211-200 src]# tar -xf harbor-offline-installer-v1.8.3.tgz -C /opt
[root@zdd211-200 src]# cd ..
[root@zdd211-200 opt]# mv harbor/ harbor-v1.8.3
[root@zdd211-200 opt]# ln -sf /opt/harbor-v1.8.3/ /opt/harbor ## the symlink makes future upgrades easier
Configure
[root@zdd211-200 opt]# cd harbor
[root@zdd211-200 harbor]# vim harbor.yml ## the main settings to change are:
hostname: harbor.od.com
http:
  port: 180                          ## port Harbor listens on
harbor_admin_password: Harbor12345   ## admin password; change it in production
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs
[root@zdd211-200 harbor]# mkdir -p /data/harbor/logs
Install docker-compose
[root@zdd211-200 harbor]# yum -y install docker-compose
[root@zdd211-200 harbor]# docker-compose version
docker-compose version 1.18.0, build 8dd22a9
docker-py version: 2.6.1
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
Install Harbor
[root@zdd211-200 harbor]# ./install.sh ## run the installer
Check that Harbor started
[root@zdd211-200 harbor]# docker-compose ps ## verify all components are Up
Name Command State Ports
--------------------------------------------------------------------------------------
harbor-core /harbor/start.sh Up
harbor-db /entrypoint.sh postgres Up 5432/tcp
harbor-jobservice /harbor/start.sh Up
harbor-log /bin/sh -c /usr/local/bin/ ... Up 127.0.0.1:1514->10514/tcp
harbor-portal nginx -g daemon off; Up 80/tcp
nginx nginx -g daemon off; Up 0.0.0.0:180->80/tcp
redis docker-entrypoint.sh redis ... Up 6379/tcp
registry /entrypoint.sh /etc/regist ... Up 5000/tcp
registryctl /harbor/start.sh Up
Add an internal DNS record for Harbor
On zdd211-11:
[root@zdd211-11 ~]# vim /var/named/od.com.zone ## append a record for harbor at the end and bump the serial
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020022102 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.211.55.11
harbor A 10.211.55.200
[root@zdd211-11 ~]# systemctl restart named
[root@zdd211-11 ~]# dig -t A harbor.od.com +short
10.211.55.200
Install and configure nginx
On zdd211-200:
[root@zdd211-200 harbor]# yum -y install nginx
[root@zdd211-200 harbor]# vim /etc/nginx/conf.d/harbor.od.com.conf
server {
listen 80;
server_name harbor.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
[root@zdd211-200 harbor]# nginx -t
[root@zdd211-200 harbor]# systemctl start nginx
[root@zdd211-200 harbor]# systemctl enable nginx
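A quick way to confirm the proxy chain before opening a browser (an optional check, not in the original) is to request the portal through nginx; an HTTP 200 (or a redirect) means nginx is forwarding to Harbor on port 180:
[root@zdd211-200 harbor]# curl -sI http://harbor.od.com | head -1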
Open it in a browser to verify
From your workstation, open the URL http://harbor.od.com
[screenshot: Harbor login page]
Username: admin  Password: Harbor12345
[screenshot: Harbor web console after login]
- Push an image from zdd211-200 to verify the Harbor registry
[root@zdd211-200 harbor]# docker pull nginx:1.7.9
[root@zdd211-200 harbor]# docker images |grep nginx
goharbor/nginx-photon v1.8.3 3a016e0dc7de 5 months ago 37MB
nginx 1.7.9 84581e99d807 5 years ago 91.7MB
[root@zdd211-200 harbor]# docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
[root@zdd211-200 harbor]# docker images |grep nginx
goharbor/nginx-photon v1.8.3 3a016e0dc7de 5 months ago 37MB
nginx 1.7.9 84581e99d807 5 years ago 91.7MB
harbor.od.com/public/nginx v1.7.9 84581e99d807 5 years ago 91.7MB
[root@zdd211-200 harbor]# docker login harbor.od.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@zdd211-200 harbor]# docker push harbor.od.com/public/nginx:v1.7.9
The push refers to repository [harbor.od.com/public/nginx]
5f70bf18a086: Pushed
4b26ab29a475: Pushed
ccb1d68e3fb7: Pushed
e387107e2065: Pushed
63bf84221cce: Pushed
e02dce553481: Pushed
dea2e4984e29: Pushed
v1.7.9: digest: sha256:b1f5935eb2e9e2ae89c0b3e2e148c19068d91ca502e857052f14db230443e4c2 size: 3012
[root@zdd211-200 harbor]#
- Log in to the Harbor web console to verify the pushed image
[screenshot: the public/nginx repository in Harbor]
3. Deploy the master node services
3.1 Deploy the etcd cluster
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-12.host.com | etcd leader | 10.211.55.12 |
zdd211-21.host.com | etcd follower | 10.211.55.21 |
zdd211-22.host.com | etcd follower | 10.211.55.22 |
Note: the steps below use zdd211-12.host.com as the example; the other two hosts are deployed the same way.
Create the signing config file based on the root CA
[root@zdd211-200 ~]# cd /opt/certs/
[root@zdd211-200 certs]# vim ca-config.json
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Generate the etcd certificate and key
[root@zdd211-200 certs]# vim etcd-peer-csr.json
{
"CN": "k8s-etcd",
"hosts": [
"10.211.55.11",
"10.211.55.12",
"10.211.55.21",
"10.211.55.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@zdd211-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/02/29 21:33:22 [INFO] generate received request
2020/02/29 21:33:22 [INFO] received CSR
2020/02/29 21:33:22 [INFO] generating key: rsa-2048
2020/02/29 21:33:22 [INFO] encoded CSR
2020/02/29 21:33:22 [INFO] signed certificate with serial number 9245854548474729834486793670346860279143220749
2020/02/29 21:33:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and key
[root@zdd211-200 certs]# ls -l|grep etcd
-rw-r--r-- 1 root root 1062 Feb 29 21:33 etcd-peer.csr
-rw-r--r-- 1 root root 375 Feb 29 21:30 etcd-peer-csr.json
-rw------- 1 root root 1679 Feb 29 21:33 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Feb 29 21:33 etcd-peer.pem
Create the etcd user
On zdd211-12.host.com:
[root@zdd211-12 ~]# useradd -s /sbin/nologin -M etcd
Download etcd, unpack it, and create a symlink
I downloaded etcd-v3.1.20-linux-amd64.tar.gz and placed it in /opt/src:
[root@zdd211-12 ~]# cd /opt/
[root@zdd211-12 opt]# mkdir src
[root@zdd211-12 opt]# cd src
[root@zdd211-12 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@zdd211-12 src]# tar -xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@zdd211-12 src]# cd /opt/
[root@zdd211-12 opt]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@zdd211-12 opt]# ln -s /opt/etcd-v3.1.20/ /opt/etcd
[root@zdd211-12 opt]# ll
total 0
lrwxrwxrwx 1 root root 18 Feb 29 21:53 etcd -> /opt/etcd-v3.1.20/
drwxr-xr-x 3 478493 89939 123 Oct 11 2018 etcd-v3.1.20
drwxr-xr-x 2 root root 67 Feb 29 21:48 src
[root@zdd211-12 opt]# cd etcd
Create directories and copy over the certificate and private key
On zdd211-12.host.com:
- Create the directories
[root@zdd211-12 etcd]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
- Copy the certificates
Copy ca.pem, etcd-peer-key.pem and etcd-peer.pem generated on the ops host into /opt/etcd/certs; note that the private key files must keep permission 600.
[root@zdd211-12 etcd]# cd certs/
[root@zdd211-12 certs]# scp zdd211-200:/opt/certs/ca.pem .
[root@zdd211-12 certs]# scp zdd211-200:/opt/certs/etcd-peer-key.pem .
[root@zdd211-12 certs]# scp zdd211-200:/opt/certs/etcd-peer.pem .
[root@zdd211-12 certs]# ll
total 12
-rw-r--r-- 1 root root 1342 Feb 29 22:01 ca.pem
-rw------- 1 root root 1679 Feb 29 22:05 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Feb 29 22:05 etcd-peer.pem
- Fix ownership
[root@zdd211-12 certs]# chown -R etcd:etcd /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@zdd211-12 certs]# ls -l
total 12
-rw-r--r-- 1 etcd etcd 1342 Feb 29 22:01 ca.pem
-rw------- 1 etcd etcd 1679 Feb 29 22:05 etcd-peer-key.pem
-rw-r--r-- 1 etcd etcd 1428 Feb 29 22:05 etcd-peer.pem
Create the etcd startup script
On zdd211-12.host.com:
[root@zdd211-12 etcd]# vim etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-55-12 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://10.211.55.12:2380 \
--listen-client-urls https://10.211.55.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://10.211.55.12:2380 \
--advertise-client-urls https://10.211.55.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-55-12=https://10.211.55.12:2380,etcd-server-55-21=https://10.211.55.21:2380,etcd-server-55-22=https://10.211.55.22:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
[root@zdd211-12 etcd]# chmod +x etcd-server-startup.sh
[root@zdd211-12 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@zdd211-12 etcd]# ll
total 30072
drwxr-xr-x 2 etcd etcd 66 Feb 29 22:05 certs
drwxr-xr-x 11 etcd etcd 4096 Oct 11 2018 Documentation
-rwxr-xr-x 1 etcd etcd 16406432 Oct 11 2018 etcd
-rwxr-xr-x 1 etcd etcd 14327712 Oct 11 2018 etcdctl
-rwxr-xr-x 1 etcd etcd 1006 Feb 29 22:14 etcd-server-startup.sh
-rw-r--r-- 1 etcd etcd 32632 Oct 11 2018 README-etcdctl.md
-rw-r--r-- 1 etcd etcd 5878 Oct 11 2018 README.md
-rw-r--r-- 1 etcd etcd 7892 Oct 11 2018 READMEv2-etcdctl.md
[root@zdd211-12 etcd]# chown -R etcd.etcd /data/etcd/
[root@zdd211-12 etcd]# chown -R etcd.etcd /data/logs/etcd-server/
Manage the service with supervisord
On zdd211-12.host.com:
Install the background-process manager:
[root@zdd211-12 etcd]# yum -y install supervisor
[root@zdd211-12 etcd]# systemctl start supervisord
[root@zdd211-12 etcd]# systemctl enable supervisord
Created symlink from /etc/systemd/system/multi-user.target.wants/supervisord.service to /usr/lib/systemd/system/supervisord.service.
Create the etcd-server startup configuration
On zdd211-12.host.com:
[root@zdd211-12 etcd]# vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-211-12]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Note: the startup configuration differs slightly on each etcd host; adjust it when configuring the other nodes, as sketched below.
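For example (inferred from the cluster plan and the member names shown in the cluster-health check later, so treat it as an assumption), on zdd211-21 the changes would be roughly:
# in etcd-server-startup.sh on zdd211-21:
#   --name etcd-server-55-21
#   --listen-peer-urls https://10.211.55.21:2380
#   --listen-client-urls https://10.211.55.21:2379,http://127.0.0.1:2379
#   --initial-advertise-peer-urls https://10.211.55.21:2380
#   --advertise-client-urls https://10.211.55.21:2379,http://127.0.0.1:2379
# in /etc/supervisord.d/etcd-server.ini on zdd211-21:
#   [program:etcd-server-211-21]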
Start the etcd service and check it
[root@zdd211-12 etcd]# supervisorctl update ## reload the config and start the new program
etcd-server-211-12: added process group
[root@zdd211-12 etcd]# supervisorctl status
etcd-server-211-12 STARTING
[root@zdd211-12 etcd]# tail -f /data/logs/etcd-server/etcd.stdout.log ## check the log if anything goes wrong
[root@zdd211-12 etcd]# supervisorctl status
etcd-server-211-12 RUNNING pid 16207, uptime 0:00:42
[root@zdd211-12 etcd]# netstat -ntulp |grep etcd
tcp 0 0 10.211.55.12:2379 0.0.0.0:* LISTEN 16208/./etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 16208/./etcd
tcp 0 0 10.211.55.12:2380 0.0.0.0:* LISTEN 16208/./etcd
Install, deploy and start etcd on all hosts in the cluster plan, then check
Check the cluster state
Once all three etcd nodes are up:
[root@zdd211-12 etcd]# ./etcdctl cluster-health
member b1a3627c0343c2c5 is healthy: got healthy result from http://127.0.0.1:2379
member bd1ce44b83380692 is healthy: got healthy result from http://127.0.0.1:2379
member f9c27963788aad6e is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
[root@zdd211-12 etcd]# ./etcdctl member list
b1a3627c0343c2c5: name=etcd-server-55-21 peerURLs=https://10.211.55.21:2380 clientURLs=http://127.0.0.1:2379,https://10.211.55.21:2379 isLeader=false
bd1ce44b83380692: name=etcd-server-55-22 peerURLs=https://10.211.55.22:2380 clientURLs=http://127.0.0.1:2379,https://10.211.55.22:2379 isLeader=false
f9c27963788aad6e: name=etcd-server-55-12 peerURLs=https://10.211.55.12:2380 clientURLs=http://127.0.0.1:2379,https://10.211.55.12:2379 isLeader=true
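Optionally (not in the original), a quick read/write smoke test with the v2 etcdctl bundled in this release, against the local plaintext endpoint:
[root@zdd211-12 etcd]# ./etcdctl set /k8s-test ok
[root@zdd211-12 etcd]# ./etcdctl get /k8s-test
[root@zdd211-12 etcd]# ./etcdctl rm /k8s-test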
3.2 Deploy the kube-apiserver cluster
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-21.host.com | kube-apiserver | 10.211.55.21 |
zdd211-22.host.com | kube-apiserver | 10.211.55.22 |
zdd211-11.host.com | L4 load balancer | 10.211.55.11 |
zdd211-12.host.com | L4 load balancer | 10.211.55.12 |
Note: 10.211.55.11 and 10.211.55.12 run nginx as a layer-4 load balancer, with keepalived providing the VIP 10.211.55.10 in front of the two kube-apiservers for high availability.
The steps below use zdd211-21.host.com as the example; the other compute node is deployed the same way.
Download the Kubernetes server tarball, unpack it, and create a symlink
On zdd211-21.host.com:
The download is sometimes blocked, so you may need a proxy.
[root@zdd211-21 src]# tar -xf kubernetes-server-linux-amd64-v1.15.4.tar.gz -C /opt/
[root@zdd211-21 src]# cd /opt/
[root@zdd211-21 opt]# mv kubernetes/ kubernetes-v1.15.4
[root@zdd211-21 opt]# ln -s /opt/kubernetes-v1.15.4/ /opt/kubernetes
[root@zdd211-21 opt]# ll
total 0
drwx--x--x 4 root root 28 Feb 29 20:45 containerd
lrwxrwxrwx 1 root root 18 Feb 29 23:18 etcd -> /opt/etcd-v3.1.20/
drwxr-xr-x 4 etcd etcd 166 Feb 29 23:27 etcd-v3.1.20
lrwxrwxrwx 1 root root 24 Mar 1 00:41 kubernetes -> /opt/kubernetes-v1.15.4/
drwxr-xr-x 4 root root 79 Sep 18 23:09 kubernetes-v1.15.4
drwxr-xr-x. 2 root root 6 Oct 31 2018 rh
drwxr-xr-x 2 root root 97 Mar 1 00:14 src
[root@zdd211-21 opt]# cd kubernetes
[root@zdd211-21 kubernetes]# ll
total 27196
drwxr-xr-x 2 root root 6 Sep 18 23:09 addons
-rw-r--r-- 1 root root 26639441 Sep 18 23:09 kubernetes-src.tar.gz
-rw-r--r-- 1 root root 1205293 Sep 18 23:09 LICENSES
drwxr-xr-x 3 root root 17 Sep 18 23:05 server
[root@zdd211-21 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@zdd211-21 kubernetes]# cd server/bin/
[root@zdd211-21 bin]# ll
total 1549312
-rwxr-xr-x 1 root root 43538912 Sep 18 23:09 apiextensions-apiserver
-rwxr-xr-x 1 root root 100605984 Sep 18 23:09 cloud-controller-manager
-rw-r--r-- 1 root root 8 Sep 18 23:05 cloud-controller-manager.docker_tag
-rw-r--r-- 1 root root 144495104 Sep 18 23:05 cloud-controller-manager.tar
-rwxr-xr-x 1 root root 200722064 Sep 18 23:09 hyperkube
-rwxr-xr-x 1 root root 40186304 Sep 18 23:09 kubeadm
-rwxr-xr-x 1 root root 164563360 Sep 18 23:09 kube-apiserver
-rw-r--r-- 1 root root 8 Sep 18 23:05 kube-apiserver.docker_tag
-rw-r--r-- 1 root root 208452096 Sep 18 23:05 kube-apiserver.tar
-rwxr-xr-x 1 root root 116462624 Sep 18 23:09 kube-controller-manager
-rw-r--r-- 1 root root 8 Sep 18 23:05 kube-controller-manager.docker_tag
-rw-r--r-- 1 root root 160351744 Sep 18 23:05 kube-controller-manager.tar
-rwxr-xr-x 1 root root 42985504 Sep 18 23:09 kubectl
-rwxr-xr-x 1 root root 119690288 Sep 18 23:09 kubelet
-rwxr-xr-x 1 root root 36987488 Sep 18 23:09 kube-proxy
-rw-r--r-- 1 root root 8 Sep 18 23:05 kube-proxy.docker_tag
-rw-r--r-- 1 root root 84282368 Sep 18 23:05 kube-proxy.tar
-rwxr-xr-x 1 root root 38786144 Sep 18 23:09 kube-scheduler
-rw-r--r-- 1 root root 8 Sep 18 23:05 kube-scheduler.docker_tag
-rw-r--r-- 1 root root 82675200 Sep 18 23:05 kube-scheduler.tar
-rwxr-xr-x 1 root root 1648224 Sep 18 23:09 mounter
[root@zdd211-21 bin]# rm -rf *.tar
[root@zdd211-21 bin]# rm -rf *_tag
[root@zdd211-21 bin]# ll
total 884968
-rwxr-xr-x 1 root root 43538912 Sep 18 23:09 apiextensions-apiserver
-rwxr-xr-x 1 root root 100605984 Sep 18 23:09 cloud-controller-manager
-rwxr-xr-x 1 root root 200722064 Sep 18 23:09 hyperkube
-rwxr-xr-x 1 root root 40186304 Sep 18 23:09 kubeadm
-rwxr-xr-x 1 root root 164563360 Sep 18 23:09 kube-apiserver
-rwxr-xr-x 1 root root 116462624 Sep 18 23:09 kube-controller-manager
-rwxr-xr-x 1 root root 42985504 Sep 18 23:09 kubectl
-rwxr-xr-x 1 root root 119690288 Sep 18 23:09 kubelet
-rwxr-xr-x 1 root root 36987488 Sep 18 23:09 kube-proxy
-rwxr-xr-x 1 root root 38786144 Sep 18 23:09 kube-scheduler
-rwxr-xr-x 1 root root 1648224 Sep 18 23:09 mounter
[root@zdd211-21 bin]# mkdir certs
[root@zdd211-21 bin]# cd certs/
Sign the client certificate
On zdd211-200.host.com:
Create the JSON config file for the certificate signing request
[root@zdd211-200 certs]# cd /opt/certs/
[root@zdd211-200 certs]# vim client-csr.json
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the client certificate and private key
[root@zdd211-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
2020/03/01 00:22:52 [INFO] generate received request
2020/03/01 00:22:52 [INFO] received CSR
2020/03/01 00:22:52 [INFO] generating key: rsa-2048
2020/03/01 00:22:52 [INFO] encoded CSR
2020/03/01 00:22:53 [INFO] signed certificate with serial number 260904771189188911312572941899563851435532977548
2020/03/01 00:22:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
[root@zdd211-200 certs]# ls -l |grep client
-rw-r--r-- 1 root root 993 Mar 1 00:22 client.csr
-rw-r--r-- 1 root root 280 Mar 1 00:22 client-csr.json
-rw------- 1 root root 1675 Mar 1 00:22 client-key.pem
-rw-r--r-- 1 root root 1363 Mar 1 00:22 client.pem
Sign the apiserver certificate
On zdd211-200.host.com:
Create the JSON config file for the certificate signing request
[root@zdd211-200 certs]# vim apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.211.55.10",
"10.211.55.21",
"10.211.55.22",
"10.211.55.23"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the apiserver certificate and private key
[root@zdd211-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
2020/03/01 00:34:18 [INFO] generate received request
2020/03/01 00:34:18 [INFO] received CSR
2020/03/01 00:34:18 [INFO] generating key: rsa-2048
2020/03/01 00:34:18 [INFO] encoded CSR
2020/03/01 00:34:18 [INFO] signed certificate with serial number 312812152997845240703964115854943008649545862358
2020/03/01 00:34:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@zdd211-200 certs]#
Check the generated certificate and private key
[root@zdd211-200 certs]# ls -l |grep apiserver
-rw-r--r-- 1 root root 1257 Mar 1 00:34 apiserver.csr
-rw-r--r-- 1 root root 602 Mar 1 00:32 apiserver-csr.json
-rw------- 1 root root 1679 Mar 1 00:34 apiserver-key.pem
-rw-r--r-- 1 root root 1606 Mar 1 00:34 apiserver.pem
Copy the certificates to each compute node and create the configuration
On zdd211-21.host.com:
Copy the certificates and private keys; private key files must keep permission 600.
[root@zdd211-21 certs]# pwd
/opt/kubernetes/server/bin/certs
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/ca.pem .
root@zdd211-200's password:
ca.pem 100% 1342 1.9MB/s 00:00
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/ca-key.pem .
root@zdd211-200's password:
ca-key.pem 100% 1679 2.3MB/s 00:00
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/client.pem .
root@zdd211-200's password:
client.pem 100% 1363 2.0MB/s 00:00
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/client-key.pem .
root@zdd211-200's password:
client-key.pem 100% 1675 2.3MB/s 00:00
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/apiserver.pem .
root@zdd211-200's password:
apiserver.pem 100% 1606 2.3MB/s 00:00
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/apiserver-key.pem .
root@zdd211-200's password:
apiserver-key.pem
Check the certificates and private keys
There should be 3 pairs of certificates and keys in total:
[root@zdd211-21 certs]# ll
total 24
-rw------- 1 root root 1679 Mar 1 00:48 apiserver-key.pem
-rw-r--r-- 1 root root 1606 Mar 1 00:48 apiserver.pem
-rw------- 1 root root 1679 Mar 1 00:47 ca-key.pem
-rw-r--r-- 1 root root 1342 Mar 1 00:46 ca.pem
-rw------- 1 root root 1675 Mar 1 00:47 client-key.pem
-rw-r--r-- 1 root root 1363 Mar 1 00:47 client.pem
Create the configuration
[root@zdd211-21 certs]# cd ..
[root@zdd211-21 bin]# mkdir conf
[root@zdd211-21 bin]# cd conf/
[root@zdd211-21 conf]# cat audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
Create the startup script
[root@zdd211-21 conf]# cd ../
[root@zdd211-21 bin]# vim kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./certs/ca.pem \
--requestheader-client-ca-file ./certs/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./certs/ca.pem \
--etcd-certfile ./certs/client.pem \
--etcd-keyfile ./certs/client-key.pem \
--etcd-servers https://10.211.55.12:2379,https://10.211.55.21:2379,https://10.211.55.22:2379 \
--service-account-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./certs/client.pem \
--kubelet-client-key ./certs/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./certs/apiserver.pem \
--tls-private-key-file ./certs/apiserver-key.pem \
--v 2
[root@zdd211-21 bin]# ./kube-apiserver --help|grep -A 5 target-ram-mb ## you can look up the meaning of any flag like this
--target-ram-mb int Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
Etcd flags:
--default-watch-cache-size int Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set. (default 100)
--delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
Adjust permissions and create directories
[root@zdd211-21 bin]# chmod +x kube-apiserver.sh
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
Create the supervisor configuration
[root@zdd211-21 bin]# vim /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-211-21] ; adjust the name to the actual host
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false
Start the service and check it
[root@zdd211-21 bin]# supervisorctl update
kube-apiserver-211-21: added process group
[root@zdd211-21 bin]# supervisorctl status
etcd-server-211-21 RUNNING pid 16243, uptime 0:00:36
kube-apiserver-211-21 RUNNING pid 16244, uptime 0:00:36
Install, deploy, start and check kube-apiserver on all hosts in the cluster plan
Configure the layer-4 reverse proxy
Do this on both zdd211-11 and zdd211-12; zdd211-11 is shown as the example.
Install nginx (yum -y install nginx, the same as on zdd211-200).
Configure nginx
[root@zdd211-11 ~]# vim /etc/nginx/nginx.conf ## append at the very bottom of the config file
stream {
upstream kube-apiserver {
server 10.211.55.21:6443 max_fails=3 fail_timeout=30s;
server 10.211.55.22:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
Start nginx
[root@zdd211-11 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@zdd211-11 ~]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
Configure keepalived
- Install
[root@zdd211-11 ~]# yum -y install keepalived
- Port-check script
[root@zdd211-11 ~]# vim /etc/keepalived/check_port.sh
#!/bin/bash
#keepalived port-monitoring script
#Usage:
#in keepalived.conf, define a vrrp_script block, e.g.
#vrrp_script check_port {   #create a vrrp_script check
#    script "/etc/keepalived/check_port.sh 6379"   #port to monitor
#    interval 2   #how often the check runs, in seconds
#}
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
if [ $PORT_PROCESS -eq 0 ];then
echo "Port $CHK_PORT Is Not Used,End."
exit 1
fi
else
echo "Check Port Cant Be Empty!"
fi
[root@zdd211-11 ~]# chmod +x /etc/keepalived/check_port.sh
- keepalived configuration on the master (zdd211-11)
[root@zdd211-11 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 10.211.55.11
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.211.55.11
nopreempt ## do not preempt the VIP back after recovery
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.211.55.10
}
}
[root@zdd211-11 ~]# systemctl start keepalived && systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
- keepalived configuration on the backup (zdd211-12)
[root@zdd211-12 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 10.211.55.12
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 251
mcast_src_ip 10.211.55.12
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.211.55.10
}
}
[root@zdd211-12 ~]# systemctl start keepalived && systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
Start the proxy and check the VIP
[root@zdd211-11 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:1c:42:b6:c5:48 brd ff:ff:ff:ff:ff:ff
inet 10.211.55.11/24 brd 10.211.55.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 10.211.55.10/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::c032:4140:7917:cf93/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::be6c:3cae:2851:9de9/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::e17c:787c:c2c1:540c/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
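With the VIP bound to eth0, a quick probe (optional; not in the original) confirms that nginx on port 7443 forwards to the apiservers. Any HTTPS response, even a 401/403 JSON error since no client certificate is presented, means the chain works:
[root@zdd211-11 ~]# curl -k https://10.211.55.10:7443/version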
3.3 Deploy controller-manager
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-21.host.com | kube-controller-manager | 10.211.55.21 |
zdd211-22.host.com | kube-controller-manager | 10.211.55.22 |
Note: the steps below use zdd211-21.host.com as the example; the other compute node is deployed the same way.
Create the startup script
On zdd211-21.host.com:
[root@zdd211-21 bin]# pwd
/opt/kubernetes/server/bin
[root@zdd211-21 bin]# vim kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./certs/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./certs/ca.pem \
--v 2
Adjust file permissions and create directories
[root@zdd211-21 bin]# chmod +x kube-controller-manager.sh
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager/
Create the supervisor configuration
[root@zdd211-21 bin]# vim /etc/supervisord.d/kube-conntroller-manager.ini
[program:kube-controller-manager-211-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false
Start the service and check it
[root@zdd211-21 bin]# supervisorctl update
kube-controller-manager-211-21: added process group
[root@zdd211-21 bin]# supervisorctl status
etcd-server-211-21 RUNNING pid 16243, uptime 0:57:43
kube-apiserver-211-21 RUNNING pid 16244, uptime 0:57:43
kube-controller-manager-211-21 RUNNING pid 16513, uptime 0:01:03
Install, deploy, start and check kube-controller-manager on all hosts in the cluster plan
(omitted)
3.4 Deploy kube-scheduler
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-21.host.com | Kube-scheduler | 10.211.55.21 |
zdd211-22.host.com | Kube-scheduler | 10.211.55.22 |
Note: the steps below use zdd211-21.host.com as the example; the other compute node is deployed the same way.
Create the startup script
On zdd211-21.host.com:
[root@zdd211-21 bin]# pwd
/opt/kubernetes/server/bin
[root@zdd211-21 bin]# vim kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
Adjust file permissions and create directories
[root@zdd211-21 bin]# chmod +x kube-scheduler.sh
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
Create the supervisor configuration
[root@zdd211-21 bin]# vim /etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-211-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false
Start the service and check it
[root@zdd211-21 bin]# supervisorctl update
kube-controller-manager-211-21: added process group
[root@zdd211-21 bin]# supervisorctl status
etcd-server-211-21 RUNNING pid 16243, uptime 0:57:43
kube-apiserver-211-21 RUNNING pid 16244, uptime 0:57:43
kube-controller-manager-211-21 RUNNING pid 16513, uptime 0:01:03
Install, deploy, start and check kube-scheduler on all hosts in the cluster plan
(omitted)
3.5 Check the master nodes
[root@zdd211-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@zdd211-21 bin]# which kubectl
/usr/bin/kubectl
[root@zdd211-21 bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
4. Deploy the node services
4.1 Deploy kubelet
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-21.host.com | kubelet | 10.211.55.21 |
zdd211-22.host.com | kubelet | 10.211.55.22 |
Note: the steps below use zdd211-21.host.com as the example; the other compute node is deployed the same way.
Sign the certificate
On the ops host zdd211-200.host.com:
Create the JSON config file for the certificate signing request
[root@zdd211-200 certs]# vim kubelet-csr.json
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"10.211.55.10",
"10.211.55.21",
"10.211.55.22",
"10.211.55.23",
"10.211.55.24",
"10.211.55.25",
"10.211.55.26",
"10.211.55.27",
"10.211.55.28"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kubelet certificate and private key
[root@zdd211-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2020/03/01 02:43:04 [INFO] generate received request
2020/03/01 02:43:04 [INFO] received CSR
2020/03/01 02:43:04 [INFO] generating key: rsa-2048
2020/03/01 02:43:05 [INFO] encoded CSR
2020/03/01 02:43:05 [INFO] signed certificate with serial number 87704427073277495803939532855812728907884442069
2020/03/01 02:43:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
[root@zdd211-200 certs]# ls -l|grep kubelet
-rw-r--r-- 1 root root 1115 Mar 1 02:43 kubelet.csr
-rw-r--r-- 1 root root 479 Mar 1 02:37 kubelet-csr.json
-rw------- 1 root root 1679 Mar 1 02:43 kubelet-key.pem
-rw-r--r-- 1 root root 1464 Mar 1 02:43 kubelet.pem
Copy the certificates to each compute node and create the configuration
On zdd211-21.host.com:
Copy the certificate and private key; private key files must keep permission 600.
[root@zdd211-21 certs]# pwd
/opt/kubernetes/server/bin/certs
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/kubelet.pem .
root@zdd211-200's password:
kubelet.pem
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/kubelet-key.pem .
root@zdd211-200's password:
kubelet-key.pem
[root@zdd211-21 certs]# ll
total 32
-rw------- 1 root root 1679 Mar 1 00:48 apiserver-key.pem
-rw-r--r-- 1 root root 1606 Mar 1 00:48 apiserver.pem
-rw------- 1 root root 1679 Mar 1 00:47 ca-key.pem
-rw-r--r-- 1 root root 1342 Mar 1 00:46 ca.pem
-rw------- 1 root root 1675 Mar 1 00:47 client-key.pem
-rw-r--r-- 1 root root 1363 Mar 1 00:47 client.pem
-rw------- 1 root root 1679 Mar 1 02:46 kubelet-key.pem
-rw-r--r-- 1 root root 1464 Mar 1 02:45 kubelet.pem
[root@zdd211-21 certs]# cd ..
[root@zdd211-21 bin]# cd conf/
Create the kubelet kubeconfig and distribute it
Create it on zdd211-21 and copy it to zdd211-22 ## this only needs to be done once, on one node
- set-cluster
Note: run the following in the conf directory
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.211.55.10:7443 \
--kubeconfig=kubelet.kubeconfig
- set-credentials
Note: run the following in the conf directory
kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
--client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
- Set-context
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
- Use-context
[root@zdd211-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
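Optionally (not part of the original), you can confirm that the resulting kubeconfig points at the VIP and embeds the certificates:
[root@zdd211-21 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig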
Grant the k8s-node user the node role
[root@zdd211-21 conf]# vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@zdd211-21 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
[root@zdd211-21 conf]# kubectl get clusterrolebinding k8s-node
NAME AGE
k8s-node 61s
Copy the kubeconfig to the other node
On zdd211-22.host.com:
[root@zdd211-22 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@zdd211-22 conf]# scp zdd211-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
[root@zdd211-22 conf]# ll
total 12
-rw------- 1 root root 2237 Mar 1 01:26 audit.yaml
-rw------- 1 root root 6194 Mar 1 03:12 kubelet.kubeconfig
Prepare the pause base image
On zdd211-200.host.com:
[root@zdd211-200 certs]# docker pull kubernetes/pause
[root@zdd211-200 certs]# docker images |grep kubernetes
kubernetes/pause latest f9d5de079539 5 years ago
[root@zdd211-200 certs]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@zdd211-200 certs]# docker push harbor.od.com/public/pause:latest
Create the kubelet startup script
[root@zdd211-21 bin]# pwd
/opt/kubernetes/server/bin
[root@zdd211-21 bin]# vim kubelet.sh
#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./certs/ca.pem \
--tls-cert-file ./certs/kubelet.pem \
--tls-private-key-file ./certs/kubelet-key.pem \
--hostname-override zdd211-21.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
Check the configuration, set permissions, and create the directories
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@zdd211-21 bin]# chmod +x kubelet.sh
Create the supervisor configuration
On zdd211-21.host.com:
[root@zdd211-21 bin]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-211-21]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Start the service and check it
[root@zdd211-21 bin]# supervisorctl update
kube-kubelet-211-21: added process group
[root@zdd211-21 bin]# supervisorctl status
etcd-server-211-21 RUNNING pid 19731, uptime 2:41:09
kube-apiserver-211-21 RUNNING pid 19729, uptime 2:41:09
kube-controller-manager-211-21 RUNNING pid 19736, uptime 2:41:09
kube-kubelet-211-21 RUNNING pid 19735, uptime 2:41:09
kube-scheduler-211-21 RUNNING pid 19733, uptime 2:41:09
Check the compute node
[root@zdd211-21 bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready <none> 172m v1.15.4
Install, deploy, start and check kubelet on all hosts in the cluster plan
The other node, zdd211-22.host.com, can be deployed following the same steps; remember to change the hostname in the configuration files.
Check all compute nodes
[root@zdd211-22 bin]# supervisorctl update
kube-kubelet-211-22: added process group
[root@zdd211-22 bin]# supervisorctl status
etcd-server-211-22 RUNNING pid 16026, uptime 15:39:10
kube-apiserver-211-22 RUNNING pid 16245, uptime 13:53:58
kube-controller-manager-211-22 RUNNING pid 18498, uptime 10:54:38
kube-kubelet-211-22 RUNNING pid 23091, uptime 0:19:50
kube-scheduler-211-22 RUNNING pid 18505, uptime 10:54:38
[root@zdd211-21 bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready <none> 172m v1.15.4
zdd211-22.host.com Ready <none> 16m v1.15.4
Label the nodes
[root@zdd211-21 bin]# kubectl label node zdd211-21.host.com node-role.kubernetes.io/node=
node/zdd211-21.host.com labeled
[root@zdd211-21 bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready node 177m v1.15.4
zdd211-22.host.com Ready <none> 21m v1.15.4
[root@zdd211-21 bin]# kubectl label node zdd211-21.host.com node-role.kubernetes.io/master=
node/zdd211-21.host.com labeled
[root@zdd211-21 bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready master,node 178m v1.15.4
zdd211-22.host.com Ready <none> 22m v1.15.4
[root@zdd211-21 bin]# kubectl label node zdd211-22.host.com node-role.kubernetes.io/node=
node/zdd211-22.host.com labeled
[root@zdd211-21 bin]# kubectl label node zdd211-22.host.com node-role.kubernetes.io/master=
node/zdd211-22.host.com labeled
[root@zdd211-21 bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready master,node 178m v1.15.4
zdd211-22.host.com Ready master,node 22m v1.15.4
4.2 Deploy kube-proxy
Cluster plan
Hostname | Role | IP |
---|---|---|
zdd211-21.host.com | kube-proxy | 10.211.55.21 |
zdd211-22.host.com | kube-proxy | 10.211.55.22 |
Note: the steps below use zdd211-21.host.com as the example; the other compute node is deployed the same way.
Sign the certificate
Create the JSON config file for the certificate signing request
kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kube-proxy certificate and private key
[root@zdd211-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2020/03/01 03:52:35 [INFO] generate received request
2020/03/01 03:52:35 [INFO] received CSR
2020/03/01 03:52:35 [INFO] generating key: rsa-2048
2020/03/01 03:52:35 [INFO] encoded CSR
2020/03/01 03:52:35 [INFO] signed certificate with serial number 489461653582417608550059955294808367031662016464
2020/03/01 03:52:35 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
[root@zdd211-200 certs]# ls -l|grep kube-proxy
-rw-r--r-- 1 root root 1005 Mar 1 03:52 kube-proxy-client.csr
-rw------- 1 root root 1679 Mar 1 03:52 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Mar 1 03:52 kube-proxy-client.pem
-rw-r--r-- 1 root root 267 Mar 1 03:49 kube-proxy-csr.json
Copy the certificates to each compute node and create the configuration
On zdd211-21.host.com:
Copy the certificate and private key; private key files must keep permission 600.
[root@zdd211-21 certs]# pwd
/opt/kubernetes/server/bin/certs
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/kube-proxy-client.pem .
root@zdd211-200's password:
kube-proxy-client.pem
[root@zdd211-21 certs]# scp zdd211-200:/opt/certs/kube-proxy-client-key.pem .
root@zdd211-200's password:
kube-proxy-client-key.pem
[root@zdd211-21 certs]# ls -l|grep kube-proxy
-rw------- 1 root root 1679 Mar 1 03:55 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Mar 1 03:54 kube-proxy-client.pem
[root@zdd211-21 certs]# cd ..
[root@zdd211-21 bin]# cd conf/
Create the kube-proxy kubeconfig and distribute it
Create it on zdd211-21 and copy it to zdd211-22 ## this only needs to be done once, on one node
- set-cluster
Note: run the following in the conf directory
kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
--embed-certs=true \
--server=https://10.211.55.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
- set-credentials
Note: run the following in the conf directory
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/certs/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
- Set-context
kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
- Use-context
[root@zdd211-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
K8s node role binding (the k8s-node ClusterRoleBinding was already created in the kubelet section, so nothing more is needed here)
Copy the kubeconfig to the other node
On zdd211-22.host.com:
[root@zdd211-22 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@zdd211-22 conf]# scp zdd211-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
[root@zdd211-22 conf]# ll
total 20
-rw------- 1 root root 2237 Mar 1 01:26 audit.yaml
-rw------- 1 root root 6194 Mar 1 03:12 kubelet.kubeconfig
-rw------- 1 root root 6218 Mar 1 04:02 kube-proxy.kubeconfig
Load the ipvs kernel modules
On zdd211-21.host.com:
[root@zdd211-21 conf]# cd
[root@zdd211-21 ~]# vim ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
[root@zdd211-21 ~]# chmod +x ipvs.sh
[root@zdd211-21 ~]# ./ipvs.sh
[root@zdd211-21 ~]# lsmod | grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33860 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
ip_vs 145497 24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat 26787 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack 133095 8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
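Modules loaded this way are lost on reboot; one simple way to keep them loaded (an assumption, not part of the original steps) is to run the script from rc.local at boot:
[root@zdd211-21 ~]# echo '/root/ipvs.sh' >> /etc/rc.d/rc.local
[root@zdd211-21 ~]# chmod +x /etc/rc.d/rc.local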
Create the kube-proxy startup script
[root@zdd211-21 ~]# cd /opt/kubernetes/server/bin/
[root@zdd211-21 bin]# vim kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override zdd211-21.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Check the configuration, set permissions, and create the log directory
[root@zdd211-21 bin]# chmod +x kube-proxy.sh
[root@zdd211-21 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
Create the supervisor configuration
On zdd211-21.host.com:
[root@zdd211-21 bin]# vim /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-211-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check it
[root@zdd211-21 bin]# supervisorctl update
kube-proxy-211-21: added process group
[root@zdd211-21 bin]# supervisorctl status
etcd-server-211-21 RUNNING pid 19731, uptime 2:41:09
kube-apiserver-211-21 RUNNING pid 19729, uptime 2:41:09
kube-controller-manager-211-21 RUNNING pid 19736, uptime 2:41:09
kube-kubelet-211-21 RUNNING pid 19735, uptime 2:41:09
kube-proxy-211-21 RUNNING pid 22344, uptime 0:02:20
kube-scheduler-211-21 RUNNING pid 19733, uptime 2:41:09
Check the compute node
[root@zdd211-21 bin]#yum -y install ipvsadm
[root@zdd211-21 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 10.211.55.21:6443 Masq 1 0 0
-> 10.211.55.22:6443 Masq 1 0 0
Install, deploy, start and check kube-proxy on all hosts in the cluster plan
Follow the same steps as on zdd211-21; remember to change the hostname in the configuration files.
Check all compute nodes
[root@zdd211-22 bin]# supervisorctl status
etcd-server-211-22 RUNNING pid 16026, uptime 15:30:58
kube-apiserver-211-22 RUNNING pid 16245, uptime 13:45:46
kube-controller-manager-211-22 RUNNING pid 18498, uptime 10:46:26
kube-kubelet-211-22 RUNNING pid 23091, uptime 0:11:38
kube-proxy-211-22 RUNNING pid 25108, uptime 0:02:55
kube-scheduler-211-22 RUNNING pid 18505, uptime 10:46:26
[root@zdd211-22 bin]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 10.211.55.21:6443 Masq 1 0 0
-> 10.211.55.22:6443 Masq 1 0 0
5. Verify the Kubernetes cluster
5.1 Create a resource manifest on any compute node
[root@zdd211-21 ~]# pwd
/root
[root@zdd211-21 ~]# cat nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
[root@zdd211-21 ~]# kubectl create -f nginx-ds.yaml
daemonset.extensions/nginx-ds created
5.2 Apply the manifest and check the result
[root@zdd211-21 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-8r8sc 1/1 Running 0 115s
nginx-ds-pdznf 1/1 Running 0 115s
[root@zdd211-21 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-8r8sc 1/1 Running 0 2m5s 172.7.22.2 zdd211-22.host.com <none> <none>
nginx-ds-pdznf 1/1 Running 0 2m5s 172.7.21.2 zdd211-21.host.com <none> <none>
[root@zdd211-21 ~]# curl 172.7.22.2 ## no response (pod on the other host)
^C
[root@zdd211-21 ~]# curl 172.7.21.2 ## responds (pod on the local host)
Because no network plugin is installed yet, pods on different hosts cannot reach each other.
Once zdd211-21 and zdd211-22 show the results below, today's deployment is complete.
[root@zdd211-21 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
[root@zdd211-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
zdd211-21.host.com Ready master,node 3h5m v1.15.4
zdd211-22.host.com Ready master,node 29m v1.15.4
[root@zdd211-21 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-8r8sc 1/1 Running 0 5m21s
nginx-ds-pdznf 1/1 Running 0 5m21s
A friendly reminder
If you enjoyed this article, please share it to your WeChat Moments, and follow the account for more.