Deploying a Kubernetes Cluster on CentOS 7 (Beginner's Guide)
Environment: CentOS 7
Test environment: 1 master, 3 nodes. A test Kubernetes cluster can consist of one master host and one or more node hosts (at least two are recommended). Here I use 3 node hosts, each with 4 CPU cores and 1 GB of RAM (that is what I allocated; give them 4 GB if your machine can afford it). The hosts can be physical servers or virtual machines running on a VMware virtualization platform.
tip: Make sure the master and node hosts can reach the Internet directly. Installing CentOS itself is not covered here; a guide will be added later.
1, Set up clock synchronization (run date on each of the 4 nodes)
[root@master ~]# date
Wed Sep 18 18:32:58 CST 2019
tip: The clocks are not in sync. On each node, make sure chronyd is running so the time stays synchronized, and check its status:
[root@master ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-09-18 16:35:09 CST; 2h 2min ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 6257 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 6240 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 6244 (chronyd)
Tasks: 1
Memory: 1.2M
CGroup: /system.slice/chronyd.service
└─6244 /usr/sbin/chronyd
Sep 18 16:35:09 master systemd[1]: Started NTP client/server.
Sep 18 16:35:47 master chronyd[6244]: Selected source 144.76.76.107
Sep 18 16:35:55 master chronyd[6244]: Source 144.76.76.107 replaced with 119.28.206.193
Sep 18 16:35:56 master chronyd[6244]: Can't synchronise: no selectable sources
Sep 18 16:36:00 master chronyd[6244]: Selected source 193.182.111.143
Sep 18 16:36:52 master chronyd[6244]: Can't synchronise: no selectable sources
Sep 18 16:36:54 master chronyd[6244]: Selected source 193.182.111.14
Sep 18 16:39:02 master chronyd[6244]: Selected source 119.28.206.193
Sep 18 16:40:20 master chronyd[6244]: Selected source 193.182.111.143
Sep 18 16:41:11 master chronyd[6244]: Selected source 119.28.206.193
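If chronyd is not running or not enabled on a node, start and enable it with the standard systemd commands and check the selected time sources; repeat on node01, node02 and node03, then compare the date output again:
[root@master ~]# systemctl start chronyd
[root@master ~]# systemctl enable chronyd
[root@master ~]# chronyc sources -v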
2, Hostname resolution: edit the hosts file on each node
[root@master ~]# cat /etc/hosts
192.168.200.129 master.magedu.com master
192.168.200.130 node01.magedu.com node01
192.168.200.131 node02.magedu.com node02
192.168.200.132 node03.magedu.com node03
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Map each of your nodes' IP addresses to its hostname.
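Rather than editing /etc/hosts by hand on every node, the file can be pushed out from the master; a minimal sketch, assuming root SSH access from master to node01–node03:
[root@master ~]# for n in node01 node02 node03; do scp /etc/hosts $n:/etc/hosts; done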
3, Stop and disable the iptables and firewalld services on all nodes
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# systemctl status iptables
tip: The iptables service is not installed by default; if it is present, stop and disable it as well.
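The other nodes can be handled from the master over SSH instead of logging in to each one; a sketch under the same root-SSH assumption:
[root@master ~]# for n in node01 node02 node03; do ssh $n "systemctl stop firewalld; systemctl disable firewalld"; done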
4, Disable SELinux on all nodes
Edit /etc/sysconfig/selinux and set SELINUX=disabled:
[root@master ~]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
tip: The change does not take effect until you reboot; after rebooting, check the status:
[root@master ~]# getenforce
Disabled
[root@master ~]#
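If you would rather not reboot right away, SELinux can be switched to permissive for the current boot while the config file handles future boots; note that setenforce 0 only gives permissive mode, and a true Disabled state still needs the reboot. A sketch (on CentOS 7, /etc/sysconfig/selinux is a symlink to /etc/selinux/config):
[root@master ~]# sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
[root@master ~]# setenforce 0
[root@master ~]# getenforce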
5, Disable the swap device (I did not disable it; if you skip this step, the extra --ignore-preflight-errors=Swap configuration is needed later)
Check swap usage:
[root@master ~]# free -m
total used free shared buff/cache available
Mem: 1819 714 113 9 991 814
Swap: 2047 2 2045
[root@master ~]#
To disable swap permanently, comment out the swap entry in /etc/fstab:
[root@master ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 17 01:06:07 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=408e00c4-8bd6-46a6-ba00-02f54432af0a /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
tip: This also takes effect only after a reboot.
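Alternatively, swap can be switched off immediately without a reboot, while the commented fstab entry keeps it off after the next boot; a sketch:
[root@master ~]# swapoff -a
[root@master ~]# free -m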
6, Load the ipvs kernel modules (can be skipped for now)
Create the module-loading script /etc/sysconfig/modules/ipvs.modules:
[root@master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $mod &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $mod
    fi
done
[root@master ~]#
Make the script executable and load the modules into the running kernel:
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
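To confirm the modules were actually loaded, filter lsmod for ip_vs:
[root@master ~]# lsmod | grep ip_vs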
7, Install docker-ce on all nodes
wget is not available yet, so install it first:
[root@master ~]# yum install -y wget
Download the docker-ce yum repository configuration file:
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Copy docker-ce.repo into /etc/yum.repos.d/ on the master, then push it to the other nodes:
[root@master ~]# cp docker-ce.repo /etc/yum.repos.d/
[root@master ~]# scp docker-ce.repo node01:/etc/yum.repos.d/
[root@master ~]# scp docker-ce.repo node02:/etc/yum.repos.d/
[root@master ~]# scp docker-ce.repo node03:/etc/yum.repos.d/
Then run the installation on the master and, in the same way, on each of the other nodes:
[root@node01 ~]# yum install -y docker-ce
8, Start the docker service
Edit /usr/lib/systemd/system/docker.service and add the following lines to its [Service] section:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Environment="NO_PROXY=127.0.0.0/8,192.168.200.0/24"
Note: 192.168.200.0/24 is my subnet; set it according to your own addressing.
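Editing the packaged unit file works, but the change is lost when the docker-ce package is upgraded. A systemd drop-in is an alternative that survives upgrades; a sketch with the same two settings (the drop-in file name is arbitrary):
[root@master ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@master ~]# vi /etc/systemd/system/docker.service.d/k8s.conf
[Service]
Environment="NO_PROXY=127.0.0.0/8,192.168.200.0/24"
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Run systemctl daemon-reload afterwards, just as with the direct edit.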
Check the bridge-related kernel parameters:
[root@master ~]# sysctl -a | grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.cni0.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.ens33.stable_secret"
sysctl: reading key "net.ipv6.conf.flannel/1.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
sysctl: reading key "net.ipv6.conf.veth426a064f.stable_secret"
sysctl: reading key "net.ipv6.conf.vethb7175a9c.stable_secret"
If net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are already 1, no change is needed; otherwise create /etc/sysctl.d/k8s.conf:
[root@master ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Copy the file to the other nodes:
[root@master ~]# scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/k8s.conf
[root@master ~]# scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/k8s.conf
[root@master ~]# scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/k8s.conf
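The settings can be applied on each node without a reboot; if sysctl reports the bridge keys as missing, load the br_netfilter module first:
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl --system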
Reload systemd and start docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
Copy /usr/lib/systemd/system/docker.service to the other nodes:
[root@master ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
[root@master ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
[root@master ~]# scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
Install docker-ce on each node (if not already done in step 7):
[root@node01 ~]# yum install -y docker-ce
[root@node02 ~]# yum install -y docker-ce
[root@node03 ~]# yum install -y docker-ce
tip: After installing, run systemctl daemon-reload and restart docker on every node, then verify with docker info.
Enable docker to start on boot on every node:
[root@master ~]# systemctl enable docker
[root@node01 ~]# systemctl enable docker
[root@node02 ~]# systemctl enable docker
[root@node03 ~]# systemctl enable docker
Configure the Aliyun registry mirror (accelerator) by creating /etc/docker/daemon.json:
[root@master ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://lptjipx8.mirror.aliyuncs.com"]
}
tip: If you have not registered your own accelerator address, you can use mine.
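After writing daemon.json, restart docker and check that the mirror was picked up; docker info should list it under "Registry Mirrors":
[root@master ~]# systemctl restart docker
[root@master ~]# docker info | grep -A1 "Registry Mirrors"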
9, Install Kubernetes
Find the download location for the Kubernetes packages (CentOS 7, x86_64).
Repository URL: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Using that URL, create the yum repository configuration file /etc/yum.repos.d/kubernetes.repo by hand:
[root@master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Copy this file to the other nodes:
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/kubernetes.repo
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/kubernetes.repo
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node03:/etc/yum.repos.d/kubernetes.repo
tip: The gpgkey URLs come from the doc/ directory on the same mirror site; copy them exactly as shown above.
List the available Kubernetes packages:
[root@master ~]# yum list all | grep "^kube"
Install kubeadm, kubelet and kubectl:
[root@master ~]# yum install kubeadm kubelet kubectl
Check the installed version:
[root@master ~]# rpm -q kubectl
kubectl-1.15.3-0.x86_64
[root@master ~]#
10, Initialize the master node
If you did not disable the swap device, edit /etc/sysconfig/kubelet first:
[root@master ~]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
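kubeadm also warns if the kubelet service is not enabled, so set it to start on boot on the master and on every node now (it will only run successfully once the cluster is initialized):
[root@master ~]# systemctl enable kubelet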
Initializing the cluster requires the following 7 images to be present in advance:
k8s.gcr.io/kube-scheduler v1.15.3 6ef91efad3d9 13 days ago 81.1MB
k8s.gcr.io/kube-proxy v1.15.3 aaaae9089f19 13 days ago 82.4MB
k8s.gcr.io/kube-controller-manager v1.15.3 766b3b091b23 13 days ago 159MB
k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 4 weeks ago 207MB
k8s.gcr.io/coredns 1.3.1 a773837be6c4 5 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
These must be pulled first, via an Aliyun mirror (a detailed walkthrough will be added later; one possible approach is sketched below), and only after that do we run kubeadm init.
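A minimal sketch of one way to do this: pull each image from an Aliyun mirror namespace and re-tag it as k8s.gcr.io so kubeadm finds it locally. The mirror path registry.cn-hangzhou.aliyuncs.com/google_containers and the script name are assumptions; substitute whatever mirror is reachable from your network.
[root@master ~]# vi pull-k8s-images.sh
#!/bin/bash
# Pull the v1.15.3 control-plane images from a mirror namespace (assumed here)
# and re-tag them as k8s.gcr.io so kubeadm can use them without Internet access to gcr.io.
mirror=registry.cn-hangzhou.aliyuncs.com/google_containers
images="kube-apiserver:v1.15.3 kube-controller-manager:v1.15.3 kube-scheduler:v1.15.3
kube-proxy:v1.15.3 pause:3.1 etcd:3.3.10 coredns:1.3.1"
for img in $images; do
    docker pull $mirror/$img
    docker tag $mirror/$img k8s.gcr.io/$img
    docker rmi $mirror/$img
done
[root@master ~]# bash pull-k8s-images.sh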
[root@master ~]# kubeadm init --kubernetes-version="v1.15.3" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap
If initialization succeeds, the output ends with a kubeadm join command. Save that line; it is needed later to join the node hosts to the cluster.
Create the hidden .kube/ directory and copy in the admin kubeconfig:
[root@master ~]# mkdir .kube/
[root@master ~]# cp /etc/kubernetes/admin.conf .kube/config
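At this point kubectl can reach the API server; a quick sanity check of the control-plane components and nodes:
[root@master ~]# kubectl get cs
[root@master ~]# kubectl get nodes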
11, Deploy the flannel network
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once flannel is up, the master node's status changes to Ready.
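The master becomes Ready once the flannel and coredns pods in the kube-system namespace are running, which can be watched with:
[root@master ~]# kubectl get pods -n kube-system -o wide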
Each node host needs to pull the following three images via the Aliyun mirror (see the sketch after the listing):
[root@node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.3 aaaae9089f19 13 days ago 82.4MB
quay.io/coreos/flannel v0.11.0-amd64 6f4360a7f3bb 5 months ago 52.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
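To get them onto a node, the same pull-and-retag approach as on the master can be used; a sketch, again assuming the Aliyun mirror namespace (the flannel image comes straight from quay.io):
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.3
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
[root@node01 ~]# docker pull quay.io/coreos/flannel:v0.11.0-amd64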
Then run kubeadm join on each node to add it to the cluster:
[root@node01 ~]# kubeadm join 192.168.200.129:6443 --token 6fwamt.sn9qrs19tlhzl9cu \
> --discovery-token-ca-cert-hash sha256:85febffe07dc77cc4c1dda7f11a888e85b7d9e3f64deafb9d968adefefafbfc2 --ignore-preflight-errors=Swap
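If the token from kubeadm init has expired by the time a node joins (tokens are valid for 24 hours by default), generate a fresh join command on the master:
[root@master ~]# kubeadm token create --print-join-command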
Wait a moment, then list the nodes:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 8h v1.15.3
node01 Ready <none> 6h2m v1.15.3
node02 Ready <none> 5h59m v1.15.3
node03 Ready <none> 5h59m v1.15.3
12, Configure /etc/resolv.conf
[root@master ~]# cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 192.168.200.2