
    A headline feature of the Tanzu Kubernetes Grid v1.5.1 release is an experimental version of Bring Your Own Host (BYOH). This solution removes the constraint that TKGm workload clusters can only run on vSphere or on public clouds such as AWS and Azure: the TKGm management cluster is still deployed on vSphere, AWS, or Azure, but workload clusters can now be deployed on any x86 or ARM physical machine, virtual machine, public cloud, or private cloud, all managed as a fleet from a central management cluster.

The BYOH Solution

  • A major enhancement by VMware to Cluster API

  • Enables Tanzu on infrastructure that Cluster API does not support

  • Extends the Tanzu platform to bare-metal infrastructure

  • Provides a seamless cross-cloud experience through the Tanzu CLI

Since BYOH is a major enhancement to Cluster API, Cluster API itself deserves an introduction first.

Introduction to Cluster API


     On October 6, 2021, the Cloud Native Computing Foundation (CNCF) announced that Cluster API v1.0 was production-ready and had officially graduated to the v1beta1 API. Having matured beyond the alpha stage, Cluster API has demonstrated growing adoption, feature maturity, and a firm commitment to community-driven, inclusive innovation. Cluster API is an open-source Kubernetes community project that provides declarative, Kubernetes-style APIs and tooling to simplify the creation, upgrade, and operation of Kubernetes clusters, with support for vSphere, AWS, Azure, GCP, OpenStack, and other platforms. The project was created in 2018 and is maintained by the Kubernetes Cluster Lifecycle Special Interest Group. Contributors include VMware, Microsoft, Weaveworks, Google, Mattermost, IBM, Red Hat, D2iQ, Equinix, Apple, Talos Systems, Spectro Cloud, Daimler TSS, Ericsson, Giant Swarm, AppsCode, Intel, Twilio, New Relic, Amazon, and many others.

Project repository: https://github.com/kubernetes-sigs/cluster-api

Cluster API Goals

(Figure: Cluster API project goals)

Cluster API Terminology

  • Custom Resource Definition (CRD)

A custom resource is an extension of the Kubernetes API. Cluster API defines its objects, such as Cluster, Machine, and MachineDeployment, through CRDs, and creates instances of these objects in the management cluster.

  • Controller

Controllers are a key Kubernetes concept. A controller tracks at least one type of Kubernetes resource; these objects carry a spec field describing the desired state, and the controller is responsible for driving the resource toward that state. Cluster API ships several controllers that run in the management cluster and manage its CRDs.

  • Management cluster

The management cluster is a Kubernetes cluster in which all of the Cluster API components (CRDs and controllers) are installed.

  • Workload cluster

A workload cluster is a Kubernetes cluster that Cluster API creates from the resource manifests supplied by the user. Users deploy and run their workloads in this cluster.

  • Provider

Cluster API connects to different cloud platforms through platform-specific providers; providers exist for AWS, VMware vSphere, Azure, Google Cloud Platform, and more. Each provider consists of one or more CRDs and controllers: CAPA (AWS), CAPV (vSphere), CAPZ (Azure), CAPG (GCP), and so on.

  • Manifest

A manifest is one or more YAML files that declare Kubernetes API objects (built-in or CRD-defined) and their configuration. Users create the declared object instances in a Kubernetes cluster with kubectl or through the API.
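
To make these terms concrete, here is a minimal, illustrative Cluster manifest (the names and CIDR are hypothetical). It wires a Cluster object to a CRD-defined control-plane object and a provider-specific infrastructure object:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ByoCluster        # provider-specific kind, e.g. VSphereCluster or AWSCluster
    name: demo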

How Cluster API Works

  • Cluster API is built on the controller pattern, a core Kubernetes mechanism.

  • Cluster API extends the Kubernetes API with custom resources defined through Custom Resource Definitions (CRDs).

  • Cluster API implements controllers that manage these CRDs; they watch for the creation or update of the corresponding custom resources and create or update Kubernetes clusters accordingly.

  • Managing Kubernetes clusters through Cluster API therefore feels much like managing built-in resources such as Pods, Deployments, and Services; see the sketch after this list.
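
Because Cluster API objects behave like built-in resources, day-2 operations reuse familiar kubectl workflows. A sketch (the MachineDeployment name is illustrative):

# Scale a workload cluster's workers the same way you would scale a Deployment
# (MachineDeployment implements the scale subresource):
kubectl scale machinedeployment my-cluster-md-0 --replicas=3
# Watch the corresponding Machine objects converge toward the desired state:
kubectl get machines -w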

(Figure: relationships between the core Cluster API CRDs)

Source: cluster-api/crd-relationships.md at master · kubernetes-sigs/cluster-api

Cluster API Architecture

(Figure: Cluster API architecture)

Cluster API Infrastructure Providers

All major cloud platforms support Cluster API; the full provider list is available at the link below.

(Figure: Cluster API provider list)

Source: https://cluster-api.sigs.k8s.io/reference/providers.html

Common clusterctl Commands

  • clusterctl init installs the Cluster API components into a Kubernetes cluster, turning it into a Cluster API management cluster

  • clusterctl upgrade upgrades the Cluster API components in a management cluster to a newer version

  • clusterctl delete removes the Cluster API components from a management cluster

  • clusterctl config cluster generates a YAML manifest for a workload cluster; applying that YAML creates the cluster (newer clusterctl releases rename this to clusterctl generate cluster)

  • clusterctl move migrates the custom resource instances managed by Cluster API from one management cluster to another
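
Put together, a typical clusterctl workflow looks like the following sketch (the provider and cluster names are illustrative):

# Turn the current cluster into a management cluster for a given provider:
clusterctl init --infrastructure vsphere
# Render a workload cluster manifest, then apply it to create the cluster:
clusterctl config cluster my-cluster > my-cluster.yaml
kubectl apply -f my-cluster.yaml
# Later, migrate the Cluster API resources to another management cluster:
clusterctl move --to-kubeconfig=target.kubeconfig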

Cluster API and VMware Tanzu

VMware Tanzu is deeply integrated with Cluster API:

(Figure: VMware Tanzu integration with Cluster API)

Bring Your Own Host Infrastructure Provider


     The BYOH project is an open-source project co-initiated by the Edge Computing Lab at VMware's China R&D center and was open-sourced under VMware Tanzu in October 2021. Bring Your Own Host (BYOH) is an infrastructure provider conforming to the Cluster API v1beta1 contract, targeting physical or virtual machines that already run a standard Linux installation.

Project repository: https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost

A detailed introduction is available in 《Kubernetes Cluster API 1.0版就绪:VMware在vSphere之外,新增贡献BYOH基础设施提供程序》.

BYOH currently supports the following:

  • TKGm 1.5.1

  • Lifecycle management of BYOH workload clusters

  • Native Kubernetes manifests and APIs

  • Single-node and multi-node control-plane clusters

  • Pre-provisioned Ubuntu 20.04 virtual and physical machines

  • Kubernetes v1.22.3

  • Antrea and Calico CNIs

  • HostPath storage

  • The ARM architecture

Test Environment

Role                               Version              Notes
vCenter                            7.0.3c               6.7U3 and 7.x are supported
ESXi                               7.0.3c               6.7U3 and 7.x are supported
AVI (NSX Advanced Load Balancer)   21.1.2               Load balancing and service publishing
bootstrap                          Ubuntu 18.04.6 LTS   Jump host used to log in to and manage the TKGm clusters
DHCP/DNS/NTP                       Windows 2012
TKGm                               1.5.1                1.5.1 is the officially recommended version; the BYOH provider is installed on it
Host (tkgc1, tkgw1)                Ubuntu 20.04         Pre-installed with Ubuntu and initialized; used to deploy the BYOH workload cluster

Deployment Steps



1

Deploy the TKGm 1.5.1 management cluster

For the deployment procedure, refer to 《Tanzu学习系列之TKGm 1.4 for vSphere 快速部署》 and install TKGm 1.5.1 the same way.

Note: BYOH requires the management cluster to use kube-vip for control-plane HA.

Set AVI_CONTROL_PLANE_HA_PROVIDER: "false" in the configuration file so that control-plane HA uses kube-vip.


The following management cluster configuration file can be used as a reference:

cat /root/.config/tanzu/tkg/clusterconfigs/5njofh5qwz.yaml
AVI_CA_DATA_B64: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMwVENDQWJtZ0F3SUJBZ0lVTUZRODJvd0V2Z1k1SnVRVEF5Wlg1TEJKYUU0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0dERVdNQlFHQTFVRUF3d05kM2QzTG5SaGJucDFMbU52YlRBZUZ3MHlNakF5TWpNd01URTNNRFJhRncwegpNakF5TWpFd01URTNNRFJhTUJneEZqQVVCZ05WQkFNTURYZDNkeTUwWVc1NmRTNWpiMjB3Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURINDh3eWJRTzRCdXJpZ01FWHFyRmkxV0U1Q28vUCtuOW8KSEtRN0JRWmh1QjREQ2M5WW5Cckl6MCsvYU1ETTFNSVVXRitOdG01L3pIcytHL1lzTy9sS0NMYmJISmdSQ1ZjbAozWXZiNXI2TGFIeEN0bEZIT0FvekY4c2MySWl3OVhBdm9IQi9KKzgrc0VtTWgwdDNYWFdjVFlHMFMzMHJFM2txCkxhYmZSQVdnNHowdXF4VzNETnRhWlFmMHNaMis3NmtFL0ZENWppUVZqejRIR1dJQ0V0ZzJTTU9FOEVkY0xxN3kKTjNrTWc1cHRpdXdjL09GOVROaTlFdjhYWUpwN05yaHdubm16ck5Kb0FRblNSYVlKQVdER0lyRzJ5Tmt0YzdCdgpkSmdPajVSQWV1S2JhdVAwOGh5VHlad0FFNnF4Y01yLy9rcjVzM1cyM2tOQzN1OTNlUkdoQWdNQkFBR2pFekFSCk1BOEdBMVVkRVFRSU1BYUhCTUNvYms0d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBNDZRNi9zT1dHZE05SmwKQ1RhNUMzcFY3ZXBLRVpLaFMzUWpGYVZ2SHJMU2I2T3pLNWpjdWVmaFJhZVNpRkc2OENra0hndm96SHJhMUd4MwpOUlZQYlVmZTR5K1h4S0J6VUpyQTd1dVhpMkpNelY4M1RQMjNQZnRUNU8wY1pTNGZzTU8vcUZ0RFl2bjc3VFpZCjl2Q0hKeCtwbGhJYnVlWHAyWGUrOEFlVmZUK2RzRlZWSHFNVzkvQ3hJUVcvb296ZDBaa0lTN005M1l5QWVmOGIKeHVDeWVDMDBGWmUwNitKSjRzR1RTMVZCRU56Sy9EQmR4RWF2UU1CbHd0RmZpWkFMb1F2b2ZXZVFkRmtyQWtORgpneW1kb3htVVVhL2Y0TFk2V2NIQlZFdUlpWUhNMUZnUDJiMzJaam9MNHBNb0JjWklzSjNTdXc3SHp1TkRuaDZlCkZwMkc0N1k9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
AVI_CLOUD_NAME: Default-Cloud
AVI_CONTROL_PLANE_HA_PROVIDER: "false"
AVI_CONTROLLER: 192.168.110.78
AVI_DATA_NETWORK: mgmt
AVI_DATA_NETWORK_CIDR: 192.168.110.0/24
AVI_ENABLE: "true"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.110.0/24
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: mgmt
AVI_PASSWORD: <encoded:Vk13YXJlMSE=>
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_USERNAME: admin
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tkgm
CLUSTER_PLAN: dev
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: photon
OS_VERSION: "3"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.110.40
VSPHERE_CONTROL_PLANE_MEM_MIB: "4096"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /tanzu
VSPHERE_DATASTORE: /tanzu/datastore/localesx03a
VSPHERE_FOLDER: /tanzu/vm
VSPHERE_INSECURE: "true"
VSPHERE_NETWORK: /tanzu/network/mgmt
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_RESOURCE_POOL: /tanzu/host/tkg/Resources
VSPHERE_SERVER: 192.168.110.22
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC5BZPO9DUbst6nnG1ImA8Iay/+Ca+xh2d0V5cHPO0/ZC1zrImF/Yhht5x2V3xm9sdYJu6uSc5qZiYpTc3B4VygttijdmYyH+QoO8Qs/0i160NWb2wByUFpCvzFRAnp+352ZCR/CO3scILmRLl1hQGm795k0DgsTXKBLx4icyej6FY5Hku81GrxKXQDJS/D7c7ejbPPv+zWxpasyt3Pdkiai9wSAsVKn9/xW7Xxq3bu9sAnJLiOUY5MnRUAEHPprf5i13gMWcZFIxm2dIg4xzmiXzVKa2BiKwrohn0QGtFAJpuIbAoZa1hbwYUKKSTjNhjwmffOAYHPZ+bVnYL9aKEf5duJdTXDiYWtl4xxSIA1TxPGkEHaxIVmWbjf1LlJk+HZxbtiBbeqX/L7bnqfnrpoJeWCM0mGHQoVjM4yM3l8JBrNm+CT9ydXLPTecpMn2XA2K8xlhdSyK4S0ADZUkaZQSs8zuJgzcRyKLryNfm1nIjWbRfv1v7R/vu2nuwJT5FznzbwDcR1Z2sY6Rc0qboXe8/LEk/jrxy4B4nheJ3wNepVfLJQKJh7O/L2bfGsCi21PCGmUcgotHTjxU+1/kPRjPeEwkrbO73+8q4BPR0x7CAgNDlvPYr06qfxaJzh8eFXc4/c2bLI3z0keYbQx3aI4Kjr74J5SL/oT/UceiAaaJQ== tkg@vcf.com
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "20"
VSPHERE_WORKER_MEM_MIB: "4096"
VSPHERE_WORKER_NUM_CPUS: "2"


2

Install the BYOH provider

In the current TKG 1.5.1 release, the BYOH provider must be installed separately (it will be integrated in a future release):

# Switch to the management cluster context, then download clusterctl
curl -L  https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.3/clusterctl-linux-amd64 -o clusterctl

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   661  100   661    0     0    861      0 --:--:-- --:--:-- --:--:--   861
100 59.0M  100 59.0M    0     0  15.9M      0  0:00:03  0:00:03 --:--:-- 26.5M

# chmod +x clusterctl
# cp clusterctl /usr/local/bin/

# mkdir ~/.cluster-api
# cd ~/.cluster-api


# List the provider repositories known to clusterctl by default (note the byoh entry)

# clusterctl config repositories
NAME           TYPE                     URL                                                                                          FILE
cluster-api    CoreProvider             https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              core-components.yaml
aws-eks        BootstrapProvider        https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 eks-bootstrap-components.yaml
kubeadm        BootstrapProvider        https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              bootstrap-components.yaml
talos          BootstrapProvider        https://github.com/talos-systems/cluster-api-bootstrap-provider-talos/releases/latest/       bootstrap-components.yaml
aws-eks        ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 eks-controlplane-components.yaml
kubeadm        ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              control-plane-components.yaml
nested         ControlPlaneProvider     https://github.com/kubernetes-sigs/cluster-api-provider-nested/releases/latest/              control-plane-components.yaml
talos          ControlPlaneProvider     https://github.com/talos-systems/cluster-api-control-plane-provider-talos/releases/latest/   control-plane-components.yaml
aws            InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest/                 infrastructure-components.yaml
azure          InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/latest/               infrastructure-components.yaml
byoh           InfrastructureProvider   https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/latest/       infrastructure-components.yaml
digitalocean   InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/releases/latest/        infrastructure-components.yaml
docker         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api/releases/latest/                              infrastructure-components-development.yaml
gcp            InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-gcp/releases/latest/                 infrastructure-components.yaml
hetzner        InfrastructureProvider   https://github.com/syself/cluster-api-provider-hetzner/releases/latest/                      infrastructure-components.yaml
ibmcloud       InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud/releases/latest/            infrastructure-components.yaml
maas           InfrastructureProvider   https://github.com/spectrocloud/cluster-api-provider-maas/releases/latest/                   infrastructure-components.yaml
metal3         InfrastructureProvider   https://github.com/metal3-io/cluster-api-provider-metal3/releases/latest/                    infrastructure-components.yaml
nested         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-nested/releases/latest/              infrastructure-components.yaml
openstack      InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-openstack/releases/latest/           infrastructure-components.yaml
packet         InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-packet/releases/latest/              infrastructure-components.yaml
sidero         InfrastructureProvider   https://github.com/talos-systems/sidero/releases/latest/                                     infrastructure-components.yaml
vsphere        InfrastructureProvider   https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/latest/             infrastructure-components.yaml

# Install the byoh provider
[root@harbor ~]# clusterctl init --infrastructure byoh
Fetching providers
Skipping installing cert-manager as it is already installed
Installing Provider="infrastructure-byoh" Version="v0.1.0" TargetNamespace="byoh-system"

# Verify the byoh controller pod is running
# kubectl get pod -A|grep byoh
byoh-system                         byoh-controller-manager-6b59775cfd-m9sqk


3

Initialize the Ubuntu 20.04 hosts

The hosts are the target machines for the workload cluster. Two hosts were prepared for this test, one as a control-plane node and one as a worker node; both must be initialized as follows.

1) Before a TKGm workload cluster can be deployed, each host needs initial setup, including a container runtime. Both Docker and containerd are supported; the steps below install containerd.

# 1. Turn off swap and comment out the swap entry in /etc/fstab.
swapoff -a
vi /etc/fstab
# 2. Reboot the system.
reboot
 
#3 Load required modules for containerd
#https://kubernetes.io/docs/setup/production-environment/container-runtimes/
sudo modprobe overlay
sudo modprobe br_netfilter
 
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
 
#4. setup network forwarding
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
 
sudo sysctl --system
 
#5. install containerd
sudo apt-get update
sudo apt-get install -y containerd
 
#6 Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
 
sudo vi /etc/containerd/config.toml
 
# At the end of the section
#         [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
# add the following two lines (the leading whitespace matters):
#           [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#             SystemdCgroup = true
 
#7 restart containerd
sudo systemctl restart containerd
 
# 8. Add the Kubernetes apt signing key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

#9. Add the Kubernetes apt repository
sudo add-apt-repository 'deb https://apt.kubernetes.io/ kubernetes-xenial main'

#10. Install kubelet, kubeadm, and kubectl
apt-cache policy kubelet | head -n 20   # lists the most recent kubelet versions
apt install -y kubelet=1.22.3-00 kubeadm=1.22.3-00 kubectl=1.22.3-00
apt-mark hold containerd kubelet kubeadm kubectl
 
#11. Enable the containerd and kubelet services
systemctl enable kubelet.service
systemctl enable containerd.service
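
After the packages are installed, a quick sanity check is worthwhile (a sketch; exact version strings will vary):

# Confirm the runtime and kubelet versions before registering the host:
containerd --version
kubeadm version -o short        # expect v1.22.3
systemctl is-enabled containerd kubelet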

2) Configure DNS.

First edit /etc/systemd/resolved.conf and add the DNS server information, for example:

DNS=192.168.110.10

Save the file and exit.

Then run the following commands as root on the Ubuntu host:

systemctl restart systemd-resolved

systemctl enable systemd-resolved


mv /etc/resolv.conf /etc/resolv.conf.bak

ln -s /run/systemd/resolve/resolv.conf /etc/
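
To confirm the change took effect (a sketch; Ubuntu 20.04 ships resolvectl):

# The configured server should appear under "DNS Servers":
resolvectl status | grep -A1 'DNS Servers'
# /etc/resolv.conf should now be a symlink into /run/systemd/resolve:
ls -l /etc/resolv.conf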

3) Host preparation

// SSH to host
> ssh test@<Host IP>
 
// switch to root
> sudo su -
 
// Install the packages below if not already present
> sudo apt-get install socat ebtables ethtool conntrack

 
// Update the hostname to be compliant with the lowercase RFC 1123 subdomain format, if it is not already.
> hostnamectl set-hostname tkgc1
 
// Update /etc/hosts with the new hostname, e.g.:
> cat /etc/hosts
127.0.0.1 localhost
127.0.0.1 tkgc1

4

Start the BYOH agent on the hosts

# Log in to the bootstrap machine and switch to the management cluster context
kubectl config  use-context tkgm-admin@tkgm
Switched to context "tkgm-admin@tkgm".

# Copy the management cluster kubeconfig to each host
[root@harbor ~]# scp -rp /root/.kube/config test@192.168.110.182:/home/test/
test@192.168.110.182's password:
config                                                                                                                                                    100% 9348     1.7MB/s   00:00
[root@harbor ~]# scp -rp /root/.kube/config test@192.168.110.183:/home/test/
test@192.168.110.183's password:
config                                                                                                                                                    100% 9348     1.1MB/s   00:00
[root@harbor ~]# cd /home/tkg1.5.1/
# Copy the byoh-hostagent-linux-amd64 binary to each host
[root@harbor tkg1.5.1]# scp byoh-hostagent-linux-amd64 test@192.168.110.182:/home/test/
test@192.168.110.182's password:
byoh-hostagent-linux-amd64                                                                                                                                100%   54MB  57.8MB/s   00:00
[root@harbor tkg1.5.1]# scp byoh-hostagent-linux-amd64 test@192.168.110.183:/home/test/
test@192.168.110.183's password:
byoh-hostagent-linux-amd64                                                                                                                                100%   54MB  61.5MB/s   00:00

Start the agent in the background on each host, pointing it at the management cluster kubeconfig. Once the agent is running, the host automatically registers itself with the TKGm management cluster; the remaining operations are then performed from the management cluster.

root@tkgc1:/home/test# ./byoh-hostagent-v0.1.0_vmware.3-linux-amd64  -kubeconfig config > agent.log 2>&1 & tail -f agent.log
[1] 10036
I0316 07:50:20.401133   10036 host_registrar.go:37] Registering ByoHost
I0316 07:50:20.436764   10036 host_registrar.go:71] Add Network Info
I0316 07:50:23.172202   10036 deleg.go:130] controller-runtime/metrics "msg"="metrics server is starting to listen"  "addr"=":8080"
I0316 07:50:23.186209   10036 deleg.go:130]  "msg"="starting metrics server"  "path"="/metrics"
I0316 07:50:23.188246   10036 controller.go:178] controller/byohost "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{}}}
I0316 07:50:23.188382   10036 controller.go:186] controller/byohost "msg"="Starting Controller" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:23.290517   10036 controller.go:220] controller/byohost "msg"="Starting workers" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "worker count"=1
I0316 07:50:23.290954   10036 host_reconciler.go:49] controller/byohost "msg"="Reconcile request received" "name"="tkgc1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:23.298177   10036 host_reconciler.go:88] controller/byohost "msg"="Machine ref not yet set" "name"="tkgc1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:23.333688   10036 host_reconciler.go:49] controller/byohost "msg"="Reconcile request received" "name"="tkgc1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:23.339974   10036 host_reconciler.go:88] controller/byohost "msg"="Machine ref not yet set" "name"="tkgc1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"


root@tkgw1:/home/test# ./byoh-hostagent-v0.1.0_vmware.3-linux-amd64  -kubeconfig config > agent.log 2>&1 & tail -f agent.log
[1] 24481
I0316 07:50:42.304242   24481 host_registrar.go:37] Registering ByoHost
I0316 07:50:42.325015   24481 host_registrar.go:71] Add Network Info
I0316 07:50:45.051883   24481 deleg.go:130] controller-runtime/metrics "msg"="metrics server is starting to listen"  "addr"=":8080"
I0316 07:50:45.072028   24481 deleg.go:130]  "msg"="starting metrics server"  "path"="/metrics"
I0316 07:50:45.076249   24481 controller.go:178] controller/byohost "msg"="Starting EventSource" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{},"status":{}}}
I0316 07:50:45.076340   24481 controller.go:186] controller/byohost "msg"="Starting Controller" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:45.178214   24481 controller.go:220] controller/byohost "msg"="Starting workers" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost" "worker count"=1
I0316 07:50:45.180967   24481 host_reconciler.go:49] controller/byohost "msg"="Reconcile request received" "name"="tkgw1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:45.188157   24481 host_reconciler.go:88] controller/byohost "msg"="Machine ref not yet set" "name"="tkgw1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:45.219180   24481 host_reconciler.go:49] controller/byohost "msg"="Reconcile request received" "name"="tkgw1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
I0316 07:50:45.227955   24481 host_reconciler.go:88] controller/byohost "msg"="Machine ref not yet set" "name"="tkgw1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"

# On the TKGm management cluster, inspect the BYOH controller logs
# kubectl logs byoh-controller-manager-6b59775cfd-qmfkk -n byoh-system -c manager
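
Before creating the workload cluster, it is worth confirming that both hosts registered as ByoHost objects (a sketch; output abbreviated):

# From the management cluster context:
kubectl get byohosts
NAME    AGE
tkgc1   2m
tkgw1   2m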


5

Create the BYOH workload cluster

On the bootstrap machine, switch to the management cluster context and perform the following steps to create the BYOH workload cluster.

1) Download the cluster template on the bootstrap machine:

wget  https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/download/v0.1.0/cluster-template.yaml
--2022-03-16 22:51:56--  https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/download/v0.1.0/cluster-template.yaml
Resolving github.com (github.com)... 140.82.114.3
Connecting to github.com (github.com)|140.82.114.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/332894939/8ab758dd-cfb9-4c90-b85e-eedae42c9633?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220316%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220316T145241Z&X-Amz-Expires=300&X-Amz-Signature=8279106adfb308bcd1e6dd031b55ddb98704dd0e950ab7974746a3e9d7a02f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=332894939&response-content-disposition=attachment%3B%20filename%3Dcluster-template.yaml&response-content-type=application%2Foctet-stream [following]
--2022-03-16 22:51:56--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/332894939/8ab758dd-cfb9-4c90-b85e-eedae42c9633?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220316%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220316T145241Z&X-Amz-Expires=300&X-Amz-Signature=8279106adfb308bcd1e6dd031b55ddb98704dd0e950ab7974746a3e9d7a02f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=332894939&response-content-disposition=attachment%3B%20filename%3Dcluster-template.yaml&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4690 (4.6K) [application/octet-stream]
Saving to: ‘cluster-template.yaml.1’

100%[==================================================================================================================================================>] 4,690       --.-K/s   in 0s

2022-03-16 22:51:57 (28.9 MB/s) - ‘cluster-template.yaml.1’ saved [4690/4690]

2) Edit cluster-template.yaml as needed.

Make sure the pod and service CIDRs do not overlap with the node network (here, 192.168.110.0/24):

spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 172.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  controlPlaneRef:

3) Set the environment variables: the BYOH workload cluster name, the node counts, and the control-plane VIP (make sure the address is unused). The current release supports only Kubernetes v1.22.3:

export CLUSTER_NAME="byoh-wc"
export NAMESPACE="default" 
export KUBERNETES_VERSION="v1.22.3"
export CONTROL_PLANE_MACHINE_COUNT=1 
export WORKER_MACHINE_COUNT=1 
export CONTROL_PLANE_ENDPOINT_IP=192.168.110.45
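
Before applying, the template can be rendered once to verify the substitution (a sketch; byoh-wc.yaml is a scratch file):

# Render the manifest with the variables substituted, without applying it:
cat cluster-template.yaml | envsubst > byoh-wc.yaml
grep -nE 'byoh-wc|v1.22.3|192.168.110.45' byoh-wc.yaml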

4) Create the BYOH cluster from the template:

# cat cluster-template.yaml | envsubst | kubectl apply -f -
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/byoh-wc-md-0 created
cluster.cluster.x-k8s.io/byoh-wc created
machinedeployment.cluster.x-k8s.io/byoh-wc-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/byoh-wc-control-plane created
byocluster.infrastructure.cluster.x-k8s.io/byoh-wc created
byomachinetemplate.infrastructure.cluster.x-k8s.io/byoh-wc-md-0 created
byomachinetemplate.infrastructure.cluster.x-k8s.io/byoh-wc-control-plane created

5) Because images are pulled from a public Harbor registry, provisioning can take around an hour depending on network conditions. Verify that the cluster was created successfully.

Note: since no CNI is installed by default, the nodes report NotReady.

# Export the workload cluster kubeconfig from the management cluster
# kubectl get secret byoh-wc-kubeconfig -o jsonpath='{.data.value}' | base64 -d > byoh-wc.kubeconfig

# Inspect the workload cluster nodes with the exported kubeconfig
# kubectl --kubeconfig=byoh-wc.kubeconfig get node
NAME    STATUS     ROLES                  AGE   VERSION
tkgc1   NotReady    <none>                 3d    v1.22.3
tkgw1   NotReady    control-plane,master   3d    v1.22.3
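
Provisioning progress can also be watched from the management cluster (a sketch; output omitted):

# Overall cluster topology and health:
clusterctl describe cluster byoh-wc
# Machine objects progress to Running as the registered hosts are claimed:
kubectl get machines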

6

Install the Antrea CNI

Both Calico and Antrea CNIs are supported; this test installs Antrea. Once it is installed, the nodes become Ready.

#wget https://github.com/antrea-io/antrea/releases/download/v1.2.3/antrea.yml
#sed -i 's/projects.registry.vmware.com\/antrea\/antrea-ubuntu\:v1.2.3/projects-stg.registry.vmware.com\/tkg\/antrea-advanced-debian\:v1.2.3_vmware.4/g' antrea.yml
# kubectl --kubeconfig=byoh-wc.kubeconfig apply -f antrea.yml

# kubectl --kubeconfig=byoh-wc.kubeconfig get node
NAME    STATUS   ROLES                  AGE   VERSION
tkgc1   Ready    <none>                 3d    v1.22.3
tkgw1   Ready    control-plane,master   3d    v1.22.3

7

Install and configure AKO

Configure AKO to connect to the AVI controller. In the current release AKO cannot be deployed automatically from the management cluster, so it is installed and configured manually with Helm.


# Pull the AKO helm chart locally
# (assumes the chart repository was added first, e.g. helm repo add ako https://projects.registry.vmware.com/chartrepo/ako)
#helm pull ako/ako --version=1.6.1
# tar xzvf ako-1.6.1.tgz
# Install AKO
# helm install ako ./ako --version 1.6.1 --set ControllerSettings.controllerHost=192.168.110.78 --set avicredentials.username=admin --set avicredentials.password=VMware1! --set ControllerSettings.controllerVersion="21.1.2" --set AKOSettings.clusterName=byoh-wc --set NetworkSettings.subnetIP=192.168.110.0 --set NetworkSettings.subnetPrefix=24 --set NetworkSettings.networkName=mgmt --set NetworkSettings.vipNetworkList[0].cidr="192.168.110.0/24" --set NetworkSettings.vipNetworkList[0].networkName="mgmt"   --set ControllerSettings.cloudName=Default-Cloud --set AKOSettings.layer7Only=false --set AKOSettings.disableStaticRouteSync=false --set ControllerSettings.serviceEngineGroupName=byohsg --set ControllerSettings.tenantsPerCluster=false --set ControllerSettings.tenantName=admin --set L7Settings.shardVSSize=SMALL --namespace=avi-system
NAME: ako
LAST DEPLOYED: Fri Mar 18 19:11:19 2022
NAMESPACE: avi-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Verify the deployment succeeded
# kubectl get pod -n avi-system
NAME    READY   STATUS    RESTARTS       AGE
ako-0   1/1     Running   0             10s


8

Publish a test application

Deploy an application exposed through a Service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      imagePullSecrets:
      - name: externalimgpull
      containers:
      - name: hello-kubernetes
        image: bitnami/nginx
        ports:
        - containerPort: 8080


 # kubectl apply -f tkghello.yaml
service/hello-kubernetes created
deployment.apps/hello-kubernetes created
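
The Service should receive an external IP from the AVI VIP network; a sketch of the expected result (the address and node port are illustrative):

# kubectl get svc hello-kubernetes
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
hello-kubernetes   LoadBalancer   10.133.45.10   192.168.110.46   80:30080/TCP   1m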

Access the application:

(Figure: accessing the published application)



9

Tanzu package management

BYOH does not currently support Tanzu package management out of the box; it can be enabled through manual configuration:

# Install carvel kapp-controller
#  kubectl apply -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/download/v0.30.0/release.yml
namespace/kapp-controller created
namespace/kapp-controller-packaging-global created
apiservice.apiregistration.k8s.io/v1alpha1.data.packaging.carvel.dev created
service/packaging-api created
customresourcedefinition.apiextensions.k8s.io/internalpackagemetadatas.internal.packaging.carvel.dev created
customresourcedefinition.apiextensions.k8s.io/internalpackages.internal.packaging.carvel.dev created
customresourcedefinition.apiextensions.k8s.io/apps.kappctrl.k14s.io created
customresourcedefinition.apiextensions.k8s.io/packageinstalls.packaging.carvel.dev created
customresourcedefinition.apiextensions.k8s.io/packagerepositories.packaging.carvel.dev created
deployment.apps/kapp-controller created
serviceaccount/kapp-controller-sa created
clusterrole.rbac.authorization.k8s.io/kapp-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kapp-controller-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/pkg-apiserver:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/pkgserver-auth-reader created

# Add the Tanzu package repository
# tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v1.5.0
- Adding package repository 'tanzu-standard' I0318 19:35:43.689320 1010571 request.go:665] Waited for 1.022663845s due to client-side throttling, not priority and fairness, request: GET:https://192.168.110.45:6443/apis/storage.k8s.io/v1beta1?timeout=32s

| Validating provided settings for the package repository
| Creating package repository resource
/ Waiting for 'PackageRepository' reconciliation for 'tanzu-standard'
| 'PackageRepository' resource install status: Reconciling

Added package repository 'tanzu-standard' in namespace 'default'
[byoh-wc-admin@byoh-wc|default] [root@harbor yaml]# tanzu package repository list -A
- Retrieving repositories... I0318 19:36:41.001823 1010650 request.go:665] Waited for 1.028699125s due to client-side throttling, not priority and fairness, request: GET:https://192.168.110.45:6443/apis/packaging.carvel.dev/v1alpha1?timeout=32s

  NAME            REPOSITORY                                               TAG     STATUS               DETAILS  NAMESPACE
  tanzu-standard  projects.registry.vmware.com/tkg/packages/standard/repo  v1.5.0  Reconcile succeeded           default

# List the packages available from the repository

# tanzu package available list -A
- Retrieving available packages... I0318 19:37:16.771923 1010714 request.go:665] Waited for 1.013074407s due to client-side throttling, not priority and fairness, request: GET:https://192.168.110.45:6443/apis/crd.antrea.io/v1beta1?timeout=32s
| Retrieving available packages...
  NAME                           DISPLAY-NAME  SHORT-DESCRIPTION                                                                                           LATEST-VERSION         NAMESPACE 
  cert-manager.tanzu.vmware.com  cert-manager  Certificate management                                                                                      1.5.3+vmware.2-tkg.1   default   
  contour.tanzu.vmware.com       contour       An ingress controller                                                                                       1.18.2+vmware.1-tkg.1  default   
  external-dns.tanzu.vmware.com  external-dns  This package provides DNS synchronization functionality.                                                    0.10.0+vmware.1-tkg.1  default   
  fluent-bit.tanzu.vmware.com    fluent-bit    Fluent Bit is a fast Log Processor and Forwarder                                                            1.7.5+vmware.2-tkg.1   default   
  grafana.tanzu.vmware.com       grafana       Visualization and analytics software                                                                        7.5.7+vmware.2-tkg.1   default   
  harbor.tanzu.vmware.com        harbor        OCI Registry                                                                                                2.3.3+vmware.1-tkg.1   default   
  multus-cni.tanzu.vmware.com    multus-cni    This package provides the ability for enabling attaching multiple network interfaces to pods in Kubernetes  3.7.1+vmware.2-tkg.2   default   
  prometheus.tanzu.vmware.com    prometheus    A time series database for your metrics
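
From here, individual packages can be installed with the tanzu CLI; a hedged example using the cert-manager package listed above:

# tanzu package install cert-manager --package-name cert-manager.tanzu.vmware.com --version 1.5.3+vmware.2-tkg.1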

To complete the integration of the remaining components, refer to parts (一) through (四) of the 《Tanzu学习系列之TKGm 1.4 for vSphere 组件集成》 series.

End of article.


更多推荐