Building a Kubernetes Cluster + Istio Service Mesh from Scratch (3) — Installing Istio
(win10 + virtualbox6.0 + centos7.6.1810 + docker18.09.8 + kubernetes1.15.1 + istio1.2.3)
References:
https://www.jianshu.com/p/e43f5e848da1
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://www.jianshu.com/p/1aebf568b786
https://blog.csdn.net/donglynn/article/details/47784393
https://blog.csdn.net/MC_CodeGirl/article/details/79998656
https://blog.csdn.net/andriy_dangli/article/details/85062983
https://docs.projectcalico.org/v3.8/getting-started/kubernetes/installation/calico
https://www.jianshu.com/p/70efa1b853f5
https://blog.csdn.net/weixin_44723434/article/details/94583457
https://preliminary.istio.io/zh/docs/setup/kubernetes/download/
https://www.cnblogs.com/rickie/p/istio.html
https://blog.csdn.net/lwplvx/article/details/79192182
https://blog.csdn.net/qq_36402372/article/details/82991098
https://www.cnblogs.com/assion/p/11326088.html
http://www.lampnick.com/php/823
https://blog.csdn.net/ccagy/article/details/83059349
https://www.jianshu.com/p/789bc867feaa
https://www.jianshu.com/p/dde56c521078
This series has three parts: Part 1 sets up the virtual machines and base infrastructure such as Docker and Kubernetes; Part 2 builds a three-node Kubernetes cluster on top of that; Part 3 installs the Istio service mesh on the cluster.
This article draws on the work of many other excellent authors (listed above). I built the Istio service mesh from scratch myself and documented every step in detail. I started over from the beginning for two reasons: first, many tutorials on CSDN, Jianshu, and other platforms are by now (2019-08-14) badly outdated, and blindly following them leads into a lot of pitfalls; second, practice is the best teacher.
Since I am also just starting to learn Istio, there are bound to be shortcomings; please bear with me.
1 Introduction
In the previous part we built a three-node Kubernetes cluster. Run kubectl get nodes to check the cluster status:
[root@k8s-master centos_master]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 2d23h v1.15.1
k8s-node2 Ready <none> 3m38s v1.15.1
k8s-node3 Ready <none> 3m28s v1.15.1
You can also check the status of all pods by running kubectl get pods -n kube-system:
[root@k8s-master centos_master]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7bd78b474d-4vgnh 1/1 Running 0 41h
calico-node-p59hr 1/1 Running 0 41h
calico-node-rdcqs 1/1 Running 0 4m46s
calico-node-sc79x 1/1 Running 0 4m56s
coredns-5c98db65d4-dnb85 1/1 Running 0 2d23h
coredns-5c98db65d4-jhdsl 1/1 Running 0 2d23h
etcd-k8s-master 1/1 Running 2 2d23h
kube-apiserver-k8s-master 1/1 Running 2 2d23h
kube-controller-manager-k8s-master 1/1 Running 2 2d23h
kube-proxy-78k2m 1/1 Running 2 2d23h
kube-proxy-n9ggl 1/1 Running 0 4m46s
kube-proxy-zvglw 1/1 Running 0 4m56s
kube-scheduler-k8s-master 1/1 Running 2 2d23h
As shown above, all pods are Running and the cluster is healthy.
Next, we will try to set up the Istio service mesh on this Kubernetes cluster.
2 Downloading Istio
According to the official documentation, the following command downloads and automatically extracts Istio.
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.0 sh -
However, this kept failing, so I did some digging. First, create a directory for it:
[root@k8s-master centos_master]# mkdir istio
[root@k8s-master centos_master]# cd istio
The command above still kept failing (it could not create the bin folder by itself), so I dropped the ISTIO_VERSION pin and used the plain command, which fetches the latest release (1.2.3 at the time):
curl -L https://git.io/getLatestIstio | sh -
The output looks like this:
[root@k8s-master istio]# curl -L https://git.io/getLatestIstio | sh -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 2207 100 2207 0 0 33 0 0:01:06 0:01:05 0:00:01 562
Downloading istio-1.2.3 from https://github.com/istio/istio/releases/download/1.2.3/istio-1.2.3-linux.tar.gz ... % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 614 0 614 0 0 855 0 --:--:-- --:--:-- --:--:-- 855
100 20.2M 100 20.2M 0 0 310k 0 0:01:06 0:01:06 --:--:-- 1191k
Istio 1.2.3 Download Complete!
Istio has been successfully downloaded into the istio-1.2.3 folder on your system.
Next Steps:
See https://istio.io/docs/setup/kubernetes/install/ to add Istio to your Kubernetes cluster.
To configure the istioctl client tool for your workstation,
add the /home/centos_master/istio/istio-1.2.3/bin directory to your environment path variable with:
export PATH="$PATH:/home/centos_master/istio/istio-1.2.3/bin"
Begin the Istio pre-installation verification check by running:
istioctl verify-install
Need more information? Visit https://istio.io/docs/setup/kubernetes/install/
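If the download script keeps timing out, a fallback (my assumption, reusing the release URL shown in the output above) is to fetch and extract the tarball directly:
# Download the 1.2.3 release tarball directly and unpack it
wget https://github.com/istio/istio/releases/download/1.2.3/istio-1.2.3-linux.tar.gz
tar -xzf istio-1.2.3-linux.tar.gz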
The installation directory contains the following:
The install/ directory contains the .yaml files needed to install Istio on Kubernetes.
The samples/ directory contains sample applications.
The istioctl client binary lives in the bin/ directory; istioctl is used to manually inject the Envoy sidecar and to manage routing rules and policies.
The istio.VERSION configuration file.
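As a sketch of what manual injection looks like (my-app.yaml is a hypothetical Deployment manifest, not part of this walkthrough):
# kube-inject adds the Envoy sidecar to the pod template
# before the resource is submitted to the API server
istioctl kube-inject -f my-app.yaml | kubectl apply -f -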
3 Adding istioctl to the PATH
I took some detours here too, so I will go straight to the correct steps.
Edit /etc/profile:
[root@k8s-master centos_master]# vim /etc/profile
At the bottom of the profile file, add the following line:
export PATH="$PATH:/home/centos_master/istio/istio-1.2.3/bin"
Run source so that the change takes effect immediately:
[root@k8s-master centos_master]# source /etc/profile
[root@k8s-master centos_master]# echo $PATH
/home/centos_master/istio/istio-1.2.3/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin
However, I happened to discover that after rebooting the VM this PATH entry was gone again, and I had to run source /etc/profile manually after every reboot. After more digging, I edited ~/.bashrc:
[root@k8s-master centos_master]# vim ~/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
Add the following at the end:
source /etc/profile
Reboot the VM and check the PATH directly:
[root@k8s-master centos_master]# echo $PATH
/home/centos_master/istio/istio-1.2.3/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin
This method works!
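To double-check the setup, you can also run the pre-installation verification that the download output above suggests:
# Should resolve to /home/centos_master/istio/istio-1.2.3/bin/istioctl
command -v istioctl
# Pre-installation check suggested by the installer output
istioctl verify-install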
4 Installing Istio and the First Failure
First enter the istio-1.2.3 directory, then run the following command.
[root@k8s-master istio-1.2.3]# for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/clusterrbacconfigs.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/policies.authentication.istio.io created
customresourcedefinition.apiextensions.k8s.io/meshpolicies.authentication.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rules.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/attributemanifests.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rbacconfigs.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceroles.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/servicerolebindings.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/adapters.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/instances.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/templates.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/handlers.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/sidecars.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
Next, following the official documentation, we choose the permissive mutual TLS mode and run the following command.
kubectl apply -f install/kubernetes/istio-demo.yaml
This creates a long list of resources, which I will not paste here.
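For reference, the 1.2 release also ships a strict variant: if you want mutual TLS enforced between sidecars instead of the permissive mode, the official demo manifest for that is istio-demo-auth.yaml in the same directory:
# Strict mutual TLS variant of the demo installation
kubectl apply -f install/kubernetes/istio-demo-auth.yaml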
Confirm that the following Kubernetes services have been deployed and that each has its own CLUSTER-IP:
[root@k8s-master istio-1.2.3]# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.97.235.224 <none> 3000/TCP 79s
istio-citadel ClusterIP 10.104.37.184 <none> 8060/TCP,15014/TCP 79s
istio-egressgateway ClusterIP 10.108.174.245 <none> 80/TCP,443/TCP,15443/TCP 79s
istio-galley ClusterIP 10.111.33.223 <none> 443/TCP,15014/TCP,9901/TCP 79s
istio-ingressgateway LoadBalancer 10.108.202.14 <pending> 15020:30408/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31682/TCP,15030:32111/TCP,15031:31461/TCP,15032:31828/TCP,15443:31156/TCP 79s
istio-pilot ClusterIP 10.100.25.44 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 79s
istio-policy ClusterIP 10.106.230.184 <none> 9091/TCP,15004/TCP,15014/TCP 79s
istio-sidecar-injector ClusterIP 10.98.165.45 <none> 443/TCP 78s
istio-telemetry ClusterIP 10.108.74.211 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 79s
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 78s
jaeger-collector ClusterIP 10.103.30.254 <none> 14267/TCP,14268/TCP 78s
jaeger-query ClusterIP 10.111.14.94 <none> 16686/TCP 78s
kiali ClusterIP 10.104.150.23 <none> 20001/TCP 79s
prometheus ClusterIP 10.109.119.222 <none> 9090/TCP 79s
tracing ClusterIP 10.104.157.81 <none> 80/TCP 77s
zipkin ClusterIP 10.110.58.176 <none> 9411/TCP 78s
Since we have no external load balancer, the EXTERNAL-IP of istio-ingressgateway stays <pending>. Next, change the IngressGateway service type: it defaults to LoadBalancer, and without an external load balancer it can be changed to NodePort.
Run the following commands:
kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
Afterwards, run kubectl get svc -n istio-system again; the <pending> is gone. (From now on the gateway can be reached via the node's IP:port.)
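With INGRESS_HOST and INGRESS_PORT exported, the official docs assemble a gateway address as below; a quick smoke test (my assumption: it returns 404 until a Gateway and VirtualService are configured) looks like this:
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# Expect 404 here until an Istio Gateway/VirtualService routes traffic
curl -s -o /dev/null -w "%{http_code}\n" http://$GATEWAY_URL/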
Next, confirm that the required Kubernetes pods have all been created and that their STATUS is Running:
[root@k8s-master istio-1.2.3]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6575997f54-z7cfv 1/1 Running 0 19m
istio-citadel-6c7b69b4bd-pnqdf 0/1 CrashLoopBackOff 6 19m
istio-cleanup-secrets-1.2.3-krq5d 0/1 Completed 0 19m
istio-egressgateway-7bfc9bcb-2tnxb 0/1 Running 0 19m
istio-galley-56c9d69897-brxkf 0/1 ContainerCreating 0 19m
istio-grafana-post-install-1.2.3-5qdg9 0/1 Error 4 19m
istio-ingressgateway-57546db474-xvq5x 0/1 Running 0 19m
istio-pilot-55fb644454-wxt7s 0/2 CrashLoopBackOff 4 19m
istio-policy-7f86484668-cblrt 1/2 CrashLoopBackOff 11 19m
istio-security-post-install-1.2.3-8dj78 0/1 Error 4 19m
istio-sidecar-injector-5d59f56878-gqdhs 0/1 ContainerCreating 0 19m
istio-telemetry-54bfb9b469-4hzks 1/2 CrashLoopBackOff 11 19m
istio-tracing-555cf644d-s9tq2 1/1 Running 0 19m
kiali-6cd6f9dfb5-c8plv 1/1 Running 0 19m
prometheus-7d7b9f7844-wjlkz 0/1 ContainerCreating 0 19m
Wait until everything is Running...
But it stayed stuck here; the containers were still not ready after a long time. After discussing it with others, I suspected the VMs were simply under-provisioned, so I abandoned the three-node plan and converted the cluster to a single node.
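In hindsight, before tearing things down it is worth checking why pods are stuck; a sketch using the pod names from the listing above (the container name discovery is my assumption for the 1.2 pilot pod):
# Show events such as FailedScheduling or probe failures
kubectl describe pod istio-pilot-55fb644454-wxt7s -n istio-system
# Logs of pilot's discovery container
kubectl logs istio-pilot-55fb644454-wxt7s -n istio-system -c discovery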
To remove the extra nodes, run the following five commands in order; the last one resets the node.
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete nodes k8s-node2
kubectl drain k8s-node3 --delete-local-data --force --ignore-daemonsets
kubectl delete nodes k8s-node3
kubeadm reset
The first command was extremely slow because draining kept getting stuck on the Istio pods, so I simply uninstalled Istio first; after that, the five commands above ran very quickly. The two commands to uninstall Istio are as follows (again run from the installation directory).
kubectl delete -f install/kubernetes/istio-demo.yaml
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete -f $i; done
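After the uninstall, you can confirm the istio-system namespace has emptied out before draining the nodes again:
# Should eventually report "No resources found." once cleanup finishes
kubectl get pods -n istio-system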
After that we need to recreate the cluster. The following commands are a condensed version of Part 2:
(1) Run
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
(2) Check
kubectl get pod -n kube-system
(3) Run
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
(4) Watch
watch kubectl get pods --all-namespaces
(5) Run the following to remove the master taint so that pods can be scheduled on the single node (see the check after step (6)):
kubectl taint nodes --all node-role.kubernetes.io/master-
(6) Success
[root@k8s-master /]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 8m30s v1.15.1 192.168.56.103 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://18.9.8
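As noted in step (5), removing the taint is what allows regular pods onto the lone master; a quick way to verify it took effect:
# With the taint still present this would print:
# Taints: node-role.kubernetes.io/master:NoSchedule
kubectl describe node k8s-master | grep -i taint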
Our cluster is now a single-node Kubernetes cluster again. I also bumped the master node up to 6 cores and 12 GB of RAM; this was covered earlier, so I will not repeat it here.
5 Installing Istio Again
Enter the istio-1.2.3 directory again and run:
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
kubectl apply -f install/kubernetes/istio-demo.yaml
Then run the following commands to change the IngressGateway service type to NodePort.
kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
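Rather than re-running kubectl get pods by hand, you can block until each control-plane deployment finishes rolling out (a sketch; I am assuming the deployment names match the demo-profile components listed below):
# Wait for the main control-plane deployments to become available
for d in istio-pilot istio-policy istio-telemetry istio-sidecar-injector; do
  kubectl -n istio-system rollout status deployment/"$d"
done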
Check whether all services and pods are healthy:
[root@k8s-master istio-1.2.3]# kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.101.66.191 <none> 3000/TCP 84s
istio-citadel ClusterIP 10.99.16.95 <none> 8060/TCP,15014/TCP 84s
istio-egressgateway ClusterIP 10.100.225.54 <none> 80/TCP,443/TCP,15443/TCP 84s
istio-galley ClusterIP 10.104.193.18 <none> 443/TCP,15014/TCP,9901/TCP 85s
istio-ingressgateway NodePort 10.104.2.228 <none> 15020:31309/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32393/TCP,15030:32692/TCP,15031:31599/TCP,15032:30925/TCP,15443:31672/TCP 84s
istio-pilot ClusterIP 10.100.15.238 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 84s
istio-policy ClusterIP 10.100.235.216 <none> 9091/TCP,15004/TCP,15014/TCP 84s
istio-sidecar-injector ClusterIP 10.100.87.195 <none> 443/TCP 84s
istio-telemetry ClusterIP 10.111.82.13 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 84s
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 84s
jaeger-collector ClusterIP 10.99.107.114 <none> 14267/TCP,14268/TCP 84s
jaeger-query ClusterIP 10.102.126.208 <none> 16686/TCP 84s
kiali ClusterIP 10.98.72.9 <none> 20001/TCP 84s
prometheus ClusterIP 10.96.9.215 <none> 9090/TCP 84s
tracing ClusterIP 10.110.119.30 <none> 80/TCP 83s
zipkin ClusterIP 10.102.201.97 <none> 9411/TCP 84s
All good!
Now confirm again that the required Kubernetes pods have been created and that their STATUS is Running:
[root@k8s-master istio-1.2.3]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6575997f54-gfh2k 1/1 Running 0 6m19s
istio-citadel-6c7b69b4bd-ssvxh 1/1 Running 0 6m18s
istio-cleanup-secrets-1.2.3-v6qqs 0/1 Completed 0 6m20s
istio-egressgateway-7bfc9bcb-kq2q4 0/1 Running 0 6m19s
istio-galley-56c9d69897-kbhb9 1/1 Running 0 6m19s
istio-grafana-post-install-1.2.3-hj2d6 0/1 Completed 0 6m20s
istio-ingressgateway-57546db474-sg9r4 1/1 Running 0 6m19s
istio-pilot-55fb644454-qrkq2 2/2 Running 0 6m19s
istio-policy-7f86484668-9cm95 2/2 Running 5 6m19s
istio-security-post-install-1.2.3-72fdx 0/1 Completed 0 6m20s
istio-sidecar-injector-5d59f56878-dmctx 1/1 Running 0 6m18s
istio-telemetry-54bfb9b469-667pc 2/2 Running 4 6m19s
istio-tracing-555cf644d-h2wcf 1/1 Running 0 6m18s
kiali-6cd6f9dfb5-g2k7d 1/1 Running 0 6m19s
prometheus-7d7b9f7844-jthxt 1/1 Running 0 6m18s
Success at last!!! It seems the earlier hang really was caused by giving the VMs too few resources; the only pity is that my physical machine can now only run a single node.
6 Istio Successfully Installed
From here we can deploy applications onto the mesh, or dig deeper with other work.
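For example, the samples/ directory mentioned earlier contains the classic Bookinfo demo; a typical first deployment with automatic sidecar injection looks roughly like this (a sketch, not something run in this walkthrough):
# Enable automatic Envoy sidecar injection in the default namespace
kubectl label namespace default istio-injection=enabled
# Deploy the Bookinfo sample shipped with the release
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
# Expose it through the istio-ingressgateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml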