Deploying single-node OpenStack on Kubernetes with openstack-helm
Deploying single-node OpenStack with Helm
The goal of OpenStack-Helm is to provide a collection of Helm charts that deploy OpenStack and related services on Kubernetes in a simple, resilient, and flexible way.
Instead of running on existing infrastructure, OpenStack can run on top of a standalone Kubernetes cluster. In this setup, the OpenStack services are deployed as containers in the Kubernetes cluster, which lets an organization take advantage of the scalability, flexibility, and ease of management that Kubernetes provides while still using OpenStack for its cloud computing needs.
Related projects:
- loci: builds the container images for the OpenStack components that openstack-helm depends on
- openstack-helm-images: builds the container images for the other components that openstack-helm depends on; it uses the loci project to build the OpenStack component images
- openstack-helm-infra: host configuration and Kubernetes cluster deployment
- openstack-helm: chart definitions and deployment of the OpenStack components
Project repositories:
loci:https://github.com/openstack/loci
openstack-helm-images:https://github.com/openstack/openstack-helm-images
openstack-helm-infra:https://github.com/openstack/openstack-helm-infra
openstack-helm:https://github.com/openstack/openstack-helm
Official documentation:
https://docs.openstack.org/openstack-helm/latest/
Environment preparation
The official documentation covers single-node and multi-node deployments, as well as NFS- and Ceph-backed storage. This walkthrough uses a single node with NFS storage.
Node plan: a single master node with one NIC and one root disk.
IP Address | Role | CPU | Memory
---|---|---|---
192.168.72.50 | master1 | 8 | 16G
Operating system and component versions:
- OS: Ubuntu 22.04
- Kubernetes: v1.27.3
- containerd: v1.6.21
- CNI plugin: flannel
The recommended minimum system requirements for a full deployment are:
- 16 GB RAM
- 8 CPU cores
- 48 GB disk
This walkthrough deploys OpenStack on top of an existing Kubernetes cluster, which needs to be set up beforehand. The Kubernetes node information is as follows:
root@ubuntu:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu Ready control-plane 22h v1.27.3 192.168.72.50 <none> Ubuntu 22.04.2 LTS 5.15.0-76-generic containerd://1.6.21
Check the pod status:
root@ubuntu:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-b25hf 1/1 Running 0 8m44s
kube-system coredns-5d78c9869d-b5fgj 1/1 Running 0 9m39s
kube-system coredns-5d78c9869d-wqhth 1/1 Running 0 9m38s
kube-system etcd-ubuntu 1/1 Running 0 9m53s
kube-system kube-apiserver-ubuntu 1/1 Running 0 9m53s
kube-system kube-controller-manager-ubuntu 1/1 Running 2 (64s ago) 9m53s
kube-system kube-proxy-klqqq 1/1 Running 0 9m39s
kube-system kube-scheduler-ubuntu 1/1 Running 2 (65s ago) 9m55s
Label the Kubernetes node:
kubectl label nodes ubuntu openstack-control-plane=enabled
kubectl label nodes ubuntu openstack-compute-node=enabled
kubectl label nodes ubuntu openvswitch=enabled
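A quick optional check (not part of the official steps) that the labels were applied:
kubectl get nodes --show-labels | grep openstack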
The endpoint URLs of the OpenStack-Helm services all use the *.openstack.svc.cluster.local domain, so configure local hosts resolution:
cat >>/etc/hosts<<EOF
192.168.72.50 keystone.openstack.svc.cluster.local
192.168.72.50 heat.openstack.svc.cluster.local
192.168.72.50 glance.openstack.svc.cluster.local
192.168.72.50 nova.openstack.svc.cluster.local
192.168.72.50 neutron.openstack.svc.cluster.local
EOF
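Optionally, confirm that the names now resolve to the node IP:
getent hosts keystone.openstack.svc.cluster.local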
Deployment prerequisites
Reference: https://docs.openstack.org/openstack-helm/latest/install/common-requirements.html
Reference: https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
On the host or master node, install recent versions of Git, CA certificates, and Make as needed:
sudo apt-get update
sudo apt-get install --no-install-recommends -y \
ca-certificates \
git \
make \
jq \
nmap \
curl \
uuid-runtime \
bc \
python3-pip
Clone the git repositories containing the OpenStack-Helm charts:
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/openstack/openstack-helm.git
Determine the OpenStack release to install; in this example it is zed:
$ cd openstack-helm
$ cat tools/deployment/common/setup-client.sh | grep OPENSTACK_RELEASE
-c${UPPER_CONSTRAINTS_FILE:=https://releases.openstack.org/constraints/upper/${OPENSTACK_RELEASE:-zed}} \
$ cat tools/deployment/common/get-values-overrides.sh | grep OPENSTACK_RELEASE:
: "${OPENSTACK_RELEASE:="zed"}"
Create the namespaces in advance:
kubectl create ns openstack
kubectl create ns nfs
kubectl create ns ceph
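Verify that the namespaces exist:
kubectl get ns openstack nfs ceph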
Install the OpenStack client
Reference: https://docs.openstack.org/openstack-helm/latest/install/developer/kubernetes-and-common-setup.html
Configure a pip mirror (Aliyun, useful for faster downloads in mainland China):
mkdir ~/.pip
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host=mirrors.aliyun.com
index-url=https://mirrors.aliyun.com/pypi/simple/
EOF
Fix the dependency error "x86_64-linux-gnu-gcc: No such file or directory":
apt install -y build-essential python3-dev
This step can be performed by running the script directly:
$ cd openstack-helm
$ ./tools/deployment/developer/common/020-setup-client.sh
Verify the client installation:
root@ubuntu:~# openstack --version
openstack 6.0.0
Install the ingress controller
This step can be performed by running the script directly:
./tools/deployment/component/common/ingress.sh
Check the created pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=ingress
NAME READY STATUS RESTARTS AGE
ingress-7ffbc58c5b-nfl7l 1/1 Running 0 20h
ingress-error-pages-7b9b69bc7-jk2dl 1/1 Running 0 20h
Install the NFS Provisioner
Reference: https://docs.openstack.org/openstack-helm/latest/install/developer/deploy-with-nfs.html
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
Check the created pods:
root@ubuntu:~# kubectl -n nfs get pods -l application=nfs
NAME READY STATUS RESTARTS AGE
nfs-provisioner-544dbbb76f-8h5v2 1/1 Running 1 (5h53m ago) 16h
Check the created StorageClass:
root@ubuntu:~# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
general nfs/nfs-provisioner Delete Immediate false 15h
Install the NFS client on all nodes:
apt install -y nfs-common
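Optionally, before installing the database you can confirm that dynamic provisioning against the general StorageClass works; a minimal sketch (the test-nfs-pvc name is arbitrary):
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: general
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-nfs-pvc     # should report Bound after a few seconds
kubectl delete pvc test-nfs-pvc  # clean up the test claim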
Install MariaDB
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/050-mariadb.sh
Check the deployed MariaDB pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=mariadb
NAME READY STATUS RESTARTS AGE
mariadb-ingress-5bc446f46d-qd6tt 1/1 Running 0 15h
mariadb-ingress-5bc446f46d-zzhw5 1/1 Running 0 15h
mariadb-ingress-error-pages-69479bc687-vmv6n 1/1 Running 0 15h
mariadb-server-0 1/1 Running 0 15h
Install RabbitMQ
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/060-rabbitmq.sh
Check the deployed RabbitMQ pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=rabbitmq
NAME READY STATUS RESTARTS AGE
rabbitmq-cluster-wait-858b9 0/1 Completed 0 15h
rabbitmq-rabbitmq-0 1/1 Running 0 15h
rabbitmq-rabbitmq-1 1/1 Running 0 15h
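To check cluster health, rabbitmqctl can be run inside one of the pods (assuming the image ships rabbitmqctl, as the upstream RabbitMQ images do):
kubectl -n openstack exec rabbitmq-rabbitmq-0 -- rabbitmqctl cluster_status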
Install Memcached
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/070-memcached.sh
Check the deployed Memcached pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=memcached
NAME READY STATUS RESTARTS AGE
memcached-memcached-768976ddbc-27cc2 1/1 Running 0 15h
Install Keystone
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/080-keystone.sh
Check the deployed Keystone pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=keystone
NAME READY STATUS RESTARTS AGE
keystone-api-597bff77d9-vqqbn 1/1 Running 0 15h
keystone-bootstrap-tk25v 0/1 Completed 0 15h
keystone-credential-setup-94p5r 0/1 Completed 0 15h
keystone-db-init-w9ldx 0/1 Completed 0 15h
keystone-db-sync-2z88l 0/1 Completed 0 15h
keystone-domain-manage-6lmwd 0/1 Completed 0 15h
keystone-fernet-rotate-28218480-4x6zw 0/1 Completed 0 7h17m
keystone-fernet-setup-tqz6q 0/1 Completed 0 15h
keystone-rabbit-init-wzjqb 0/1 Completed 0 15h
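At this point the OpenStack client installed earlier should already be able to talk to Keystone; a quick smoke test using the openstack_helm cloud entry written by the setup-client script (see the clouds.yaml section further below):
export OS_CLOUD=openstack_helm
openstack token issue
openstack endpoint list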
Install Heat
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/090-heat.sh
Check the deployed Heat pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=heat
NAME READY STATUS RESTARTS AGE
heat-api-785d7b9c8c-7zr6m 1/1 Running 0 14h
heat-bootstrap-6h82r 0/1 Completed 0 14h
heat-cfn-78dd5f6977-vpbwj 1/1 Running 0 14h
heat-db-init-hr68c 0/1 Completed 0 14h
heat-db-sync-rjlzw 0/1 Completed 0 14h
heat-domain-ks-user-shh9q 0/1 Completed 0 14h
heat-engine-7d64d4b567-v2xdn 1/1 Running 0 14h
heat-engine-cleaner-28218905-8755q 0/1 Completed 0 13m
heat-engine-cleaner-28218910-t5xz4 0/1 Completed 0 8m19s
heat-engine-cleaner-28218915-vz2gs 0/1 Completed 0 3m19s
heat-ks-endpoints-q2qgg 0/6 Completed 0 14h
heat-ks-service-p8gml 0/2 Completed 0 14h
heat-ks-user-q5pm6 0/1 Completed 0 14h
heat-rabbit-init-7mfd4 0/1 Completed 0 14h
heat-trustee-ks-user-lll92 0/1 Completed 0 14h
heat-trusts-p4mx5 0/1 Completed 0 14h
Install Horizon
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/100-horizon.sh
Check the deployed Horizon pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=horizon
NAME READY STATUS RESTARTS AGE
horizon-55d5cc85c9-q24gt 1/1 Running 0 14h
horizon-db-init-k9lgn 0/1 Completed 0 14h
horizon-db-sync-85dgm 0/1 Completed 0 14h
horizon-test 0/1 Completed 0 14h
The horizon-int service corresponds to the Dashboard; its type is NodePort and it is exposed on port 31000.
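This can be confirmed from the service definition:
kubectl -n openstack get svc horizon-int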
Install Glance
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/120-glance.sh
In the script, /tmp/glance.yaml defines the storage backend used by Glance; the storage types supported in values.yaml are as follows:
# radosgw, rbd, swift or pvc
---
storage: swift
Glance currently supports several backend storage options:
- pvc: a simple Kubernetes PVC storage backend.
- rbd: uses Ceph RBD to store images.
- radosgw: uses Ceph RGW to store images.
- swift: uses the object storage provided by OpenStack Swift to store images.
Check the deployed Glance pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=glance
NAME READY STATUS RESTARTS AGE
glance-api-85459565c7-zpqs9 1/1 Running 0 14h
glance-bootstrap-4k2d5 0/1 Completed 0 14h
glance-db-init-bj5xf 0/1 Completed 0 14h
glance-db-sync-zcrxj 0/1 Completed 0 14h
glance-ks-endpoints-9nqfw 0/3 Completed 0 14h
glance-ks-service-24w7s 0/1 Completed 0 14h
glance-ks-user-5dqs7 0/1 Completed 0 14h
glance-metadefs-load-pqn75 0/1 Completed 0 14h
glance-rabbit-init-t5w9c 0/1 Completed 0 14h
glance-storage-init-pqkph 0/1 Completed 0 14h
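A simple way to exercise Glance is to upload a small test image with the OpenStack client; a sketch using a CirrOS image (the download URL and version are only examples and may change):
export OS_CLOUD=openstack_helm
curl -LO http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img
openstack image create cirros-test \
  --disk-format qcow2 --container-format bare --public \
  --file cirros-0.6.2-x86_64-disk.img
openstack image list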
Install OpenvSwitch
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/140-openvswitch.sh
Check the deployed OpenvSwitch pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=openvswitch
NAME READY STATUS RESTARTS AGE
openvswitch-xxjxd 2/2 Running 0 14h
Install Libvirt
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/150-libvirt.sh
Check the deployed Libvirt pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=libvirt
NAME READY STATUS RESTARTS AGE
libvirt-libvirt-default-9qx8g 1/1 Running 0 14h
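To check that the libvirt daemon inside the pod is responsive, virsh can be run via kubectl exec (a sketch; the pod name is looked up by label):
LIBVIRT_POD=$(kubectl -n openstack get pods -l application=libvirt -o name | head -n1)
kubectl -n openstack exec ${LIBVIRT_POD} -- virsh version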
Install the Compute Kit (Nova and Neutron)
This step can be performed by running the script directly:
./tools/deployment/developer/nfs/160-compute-kit.sh
Check the deployed Placement pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=placement
NAME READY STATUS RESTARTS AGE
placement-api-5978b49bbf-bnrk6 1/1 Running 0 14h
placement-db-init-l6wr8 0/1 Completed 0 14h
placement-db-sync-rhpcc 0/1 Completed 0 14h
placement-ks-endpoints-pgdsj 0/3 Completed 0 14h
placement-ks-service-9nl8f 0/1 Completed 0 14h
placement-ks-user-2tvzn 0/1 Completed 0 14h
Check the deployed Nova pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=nova
NAME READY STATUS RESTARTS AGE
nova-api-metadata-557f6d5f85-xr8k5 1/1 Running 1 (14h ago) 14h
nova-api-osapi-657dbd96b7-fjwvd 1/1 Running 0 14h
nova-bootstrap-tsx26 0/1 Completed 0 14h
nova-cell-setup-28218780-6447w 0/1 Completed 0 141m
nova-cell-setup-28218840-npjp7 0/1 Completed 0 81m
nova-cell-setup-28218900-g65ws 0/1 Completed 0 21m
nova-cell-setup-7gvsh 0/1 Completed 0 14h
nova-compute-default-2sc2r 1/1 Running 0 14h
nova-conductor-cf584b9bb-qhc8k 1/1 Running 0 14h
nova-db-init-k69zs 0/3 Completed 0 14h
nova-db-sync-9ns7l 0/1 Completed 0 14h
nova-ks-endpoints-hkr9h 0/3 Completed 0 14h
nova-ks-service-hpwct 0/1 Completed 0 14h
nova-ks-user-dg69j 0/1 Completed 0 14h
nova-novncproxy-58fbb5c47c-2vxgm 1/1 Running 0 14h
nova-rabbit-init-c6cxx 0/1 Completed 0 14h
nova-scheduler-666494fc78-v98tt 1/1 Running 0 14h
nova-service-cleaner-28218780-mccsr 0/1 Completed 0 141m
nova-service-cleaner-28218840-qwf6g 0/1 Completed 0 81m
nova-service-cleaner-28218900-qvbgx 0/1 Completed 0 21m
Check the deployed Neutron pods:
root@ubuntu:~# kubectl -n openstack get pods -l application=neutron
NAME READY STATUS RESTARTS AGE
neutron-db-init-xx6ml 0/1 Completed 0 14h
neutron-db-sync-5k4ts 0/1 Completed 0 14h
neutron-dhcp-agent-default-fzcsm 1/1 Running 0 14h
neutron-ks-endpoints-cj9fc 0/3 Completed 0 14h
neutron-ks-service-2nkrs 0/1 Completed 0 14h
neutron-ks-user-tnk8n 0/1 Completed 0 14h
neutron-l3-agent-default-2k2qf 1/1 Running 0 14h
neutron-metadata-agent-default-krqxh 1/1 Running 0 14h
neutron-netns-cleanup-cron-default-w8lw5 1/1 Running 0 14h
neutron-ovs-agent-default-mlp5r 1/1 Running 0 14h
neutron-rabbit-init-z7xh2 0/1 Completed 0 14h
neutron-server-69954b8998-8q5fw 1/1 Running 0 14h
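With Placement, Nova, and Neutron up, the control plane can be spot-checked from the CLI:
export OS_CLOUD=openstack_helm
openstack compute service list   # nova-conductor, nova-scheduler and nova-compute should be up
openstack network agent list     # OVS, DHCP, L3 and metadata agents should be alive
openstack hypervisor list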
Set up the gateway for the public network.
Install the required package:
apt install -y net-tools
Modify the script; first review its contents:
$ cat ./tools/deployment/developer/nfs/170-setup-gateway.sh
The following two parameters can be left at their defaults:
: ${OSH_EXT_SUBNET:="172.24.4.0/24"}
: ${OSH_BR_EX_ADDR:="172.24.4.1/24"}
In a containerd environment, the docker commands need to be replaced with nerdctl; if nerdctl is not installed, install it first.
setup_gateway_file="./tools/deployment/developer/nfs/170-setup-gateway.sh"
sed -i "s#sudo docker#sudo nerdctl#g" ${setup_gateway_file}
Ubuntu 22.04 does not have the /etc/kubernetes/kubelet-resolv.conf configuration file, so change it to /run/systemd/resolve/resolv.conf:
sed -i "s#/etc/kubernetes/kubelet-resolv.conf#/run/systemd/resolve/resolv.conf#g" ${setup_gateway_file}
Change the OPENSTACK_RELEASE parameter so it matches the release used above:
sed -i 's#OPENSTACK_RELEASE:=.*}#OPENSTACK_RELEASE:="zed"}#g' ${setup_gateway_file}
Once the modifications are done, this step can be performed by running the script directly:
./tools/deployment/developer/nfs/170-setup-gateway.sh
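A quick sanity check after the script completes, assuming it assigns OSH_BR_EX_ADDR to the br-ex bridge as in the default openstack-helm gateway setup:
ip addr show br-ex   # should list 172.24.4.1/24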
Exercise the Cloud
Reference: https://docs.openstack.org/openstack-helm/latest/install/developer/exercise-the-cloud.html
Once OpenStack-Helm is deployed, the cloud can be exercised with the OpenStack client or with the same Heat templates that are used in the validation gates.
This step can be performed by running the script directly:
./tools/deployment/developer/common/900-use-it.sh
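After the script finishes, the resources it created can be listed from the CLI (the Heat stacks are shown further below):
export OS_CLOUD=openstack_helm
openstack server list
openstack floating ip list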
Check the created resources
List the Helm releases:
root@ubuntu:~# helm -n openstack ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
glance openstack 1 2023-08-27 04:29:12.398845063 +0800 CST deployed glance-0.4.11 v1.0.0
heat openstack 3 2023-08-27 04:22:23.925115391 +0800 CST deployed heat-0.3.5 v1.0.0
horizon openstack 1 2023-08-27 04:26:37.470730839 +0800 CST deployed horizon-0.3.11 v1.0.0
ingress-openstack openstack 3 2023-08-27 03:43:35.238327143 +0800 CST deployed ingress-0.2.15 v0.42.0
keystone openstack 3 2023-08-27 04:04:47.314694648 +0800 CST deployed keystone-0.3.3 v1.0.0
libvirt openstack 1 2023-08-27 04:40:43.912371161 +0800 CST deployed libvirt-0.1.21 v1.0.0
mariadb openstack 1 2023-08-27 03:43:51.218554184 +0800 CST deployed mariadb-0.2.31 v10.6.7
memcached openstack 1 2023-08-27 03:53:23.83252168 +0800 CST deployed memcached-0.1.13 v1.5.5
neutron openstack 1 2023-08-27 04:47:56.961418962 +0800 CST deployed neutron-0.3.18 v1.0.0
nova openstack 1 2023-08-27 04:44:56.082746508 +0800 CST deployed nova-0.3.16 v1.0.0
openvswitch openstack 1 2023-08-27 04:37:35.824852186 +0800 CST deployed openvswitch-0.1.17 v1.0.0
placement openstack 1 2023-08-27 04:43:06.732705729 +0800 CST deployed placement-0.3.7 v1.0.0
rabbitmq openstack 1 2023-08-27 03:51:47.759042033 +0800 CST deployed rabbitmq-0.1.29 v3.9.0
List the pods:
root@ubuntu:~# kubectl -n openstack get pods
NAME READY STATUS RESTARTS AGE
glance-api-85459565c7-zpqs9 1/1 Running 0 15h
glance-bootstrap-4k2d5 0/1 Completed 0 15h
glance-db-init-bj5xf 0/1 Completed 0 15h
glance-db-sync-zcrxj 0/1 Completed 0 15h
glance-ks-endpoints-9nqfw 0/3 Completed 0 15h
glance-ks-service-24w7s 0/1 Completed 0 15h
glance-ks-user-5dqs7 0/1 Completed 0 15h
glance-metadefs-load-pqn75 0/1 Completed 0 15h
glance-rabbit-init-t5w9c 0/1 Completed 0 15h
glance-storage-init-pqkph 0/1 Completed 0 15h
heat-api-785d7b9c8c-7zr6m 1/1 Running 0 15h
heat-bootstrap-6h82r 0/1 Completed 0 15h
heat-cfn-78dd5f6977-vpbwj 1/1 Running 0 15h
heat-db-init-hr68c 0/1 Completed 0 15h
heat-db-sync-rjlzw 0/1 Completed 0 15h
heat-domain-ks-user-shh9q 0/1 Completed 0 15h
heat-engine-7d64d4b567-v2xdn 1/1 Running 0 15h
heat-engine-cleaner-28218930-26gqf 0/1 Completed 0 10m
heat-engine-cleaner-28218935-jvxtd 0/1 Completed 0 5m34s
heat-engine-cleaner-28218940-kvgwh 0/1 Completed 0 34s
heat-ks-endpoints-q2qgg 0/6 Completed 0 15h
heat-ks-service-p8gml 0/2 Completed 0 15h
heat-ks-user-q5pm6 0/1 Completed 0 15h
heat-rabbit-init-7mfd4 0/1 Completed 0 15h
heat-trustee-ks-user-lll92 0/1 Completed 0 15h
heat-trusts-p4mx5 0/1 Completed 0 15h
horizon-55d5cc85c9-q24gt 1/1 Running 0 15h
horizon-db-init-k9lgn 0/1 Completed 0 15h
horizon-db-sync-85dgm 0/1 Completed 0 15h
horizon-test 0/1 Completed 0 15h
ingress-7ffbc58c5b-nfl7l 1/1 Running 0 15h
ingress-error-pages-7b9b69bc7-jk2dl 1/1 Running 0 15h
keystone-api-597bff77d9-vqqbn 1/1 Running 0 15h
keystone-bootstrap-tk25v 0/1 Completed 0 15h
keystone-credential-setup-94p5r 0/1 Completed 0 15h
keystone-db-init-w9ldx 0/1 Completed 0 15h
keystone-db-sync-2z88l 0/1 Completed 0 15h
keystone-domain-manage-6lmwd 0/1 Completed 0 15h
keystone-fernet-rotate-28218480-4x6zw 0/1 Completed 0 7h40m
keystone-fernet-setup-tqz6q 0/1 Completed 0 15h
keystone-rabbit-init-wzjqb 0/1 Completed 0 15h
libvirt-libvirt-default-9qx8g 1/1 Running 0 14h
mariadb-ingress-5bc446f46d-qd6tt 1/1 Running 0 15h
mariadb-ingress-5bc446f46d-zzhw5 1/1 Running 0 15h
mariadb-ingress-error-pages-69479bc687-vmv6n 1/1 Running 0 15h
mariadb-server-0 1/1 Running 0 15h
memcached-memcached-768976ddbc-27cc2 1/1 Running 0 15h
neutron-db-init-xx6ml 0/1 Completed 0 14h
neutron-db-sync-5k4ts 0/1 Completed 0 14h
neutron-dhcp-agent-default-fzcsm 1/1 Running 0 14h
neutron-ks-endpoints-cj9fc 0/3 Completed 0 14h
neutron-ks-service-2nkrs 0/1 Completed 0 14h
neutron-ks-user-tnk8n 0/1 Completed 0 14h
neutron-l3-agent-default-2k2qf 1/1 Running 0 14h
neutron-metadata-agent-default-krqxh 1/1 Running 0 14h
neutron-netns-cleanup-cron-default-w8lw5 1/1 Running 0 14h
neutron-ovs-agent-default-mlp5r 1/1 Running 0 14h
neutron-rabbit-init-z7xh2 0/1 Completed 0 14h
neutron-server-69954b8998-8q5fw 1/1 Running 0 14h
nova-api-metadata-557f6d5f85-xr8k5 1/1 Running 1 (14h ago) 14h
nova-api-osapi-657dbd96b7-fjwvd 1/1 Running 0 14h
nova-bootstrap-tsx26 0/1 Completed 0 14h
nova-cell-setup-28218780-6447w 0/1 Completed 0 160m
nova-cell-setup-28218840-npjp7 0/1 Completed 0 100m
nova-cell-setup-28218900-g65ws 0/1 Completed 0 40m
nova-cell-setup-7gvsh 0/1 Completed 0 14h
nova-compute-default-2sc2r 1/1 Running 0 14h
nova-conductor-cf584b9bb-qhc8k 1/1 Running 0 14h
nova-db-init-k69zs 0/3 Completed 0 14h
nova-db-sync-9ns7l 0/1 Completed 0 14h
nova-ks-endpoints-hkr9h 0/3 Completed 0 14h
nova-ks-service-hpwct 0/1 Completed 0 14h
nova-ks-user-dg69j 0/1 Completed 0 14h
nova-novncproxy-58fbb5c47c-2vxgm 1/1 Running 0 14h
nova-rabbit-init-c6cxx 0/1 Completed 0 14h
nova-scheduler-666494fc78-v98tt 1/1 Running 0 14h
nova-service-cleaner-28218780-mccsr 0/1 Completed 0 160m
nova-service-cleaner-28218840-qwf6g 0/1 Completed 0 100m
nova-service-cleaner-28218900-qvbgx 0/1 Completed 0 40m
openvswitch-xxjxd 2/2 Running 0 15h
placement-api-5978b49bbf-bnrk6 1/1 Running 0 14h
placement-db-init-l6wr8 0/1 Completed 0 14h
placement-db-sync-rhpcc 0/1 Completed 0 14h
placement-ks-endpoints-pgdsj 0/3 Completed 0 14h
placement-ks-service-9nl8f 0/1 Completed 0 14h
placement-ks-user-2tvzn 0/1 Completed 0 14h
rabbitmq-cluster-wait-858b9 0/1 Completed 0 15h
rabbitmq-rabbitmq-0 1/1 Running 0 15h
rabbitmq-rabbitmq-1 1/1 Running 0 15h
List the services:
root@ubuntu:~# kubectl -n openstack get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cloudformation ClusterIP 10.96.3.158 <none> 80/TCP,443/TCP 15h
glance ClusterIP 10.96.1.163 <none> 80/TCP,443/TCP 15h
glance-api ClusterIP 10.96.2.14 <none> 9292/TCP 15h
heat ClusterIP 10.96.2.13 <none> 80/TCP,443/TCP 15h
heat-api ClusterIP 10.96.0.122 <none> 8004/TCP 15h
heat-cfn ClusterIP 10.96.2.166 <none> 8000/TCP 15h
horizon ClusterIP 10.96.0.153 <none> 80/TCP,443/TCP 15h
horizon-int NodePort 10.96.3.42 <none> 80:31000/TCP 15h
ingress ClusterIP 10.96.0.175 <none> 80/TCP,443/TCP,10246/TCP 16h
ingress-error-pages ClusterIP None <none> 80/TCP 16h
ingress-exporter ClusterIP 10.96.0.247 <none> 10254/TCP 16h
keystone ClusterIP 10.96.0.106 <none> 80/TCP,443/TCP 15h
keystone-api ClusterIP 10.96.3.225 <none> 5000/TCP 15h
mariadb ClusterIP 10.96.3.12 <none> 3306/TCP 16h
mariadb-discovery ClusterIP None <none> 3306/TCP,4567/TCP 16h
mariadb-ingress-error-pages ClusterIP None <none> 80/TCP 16h
mariadb-server ClusterIP 10.96.1.12 <none> 3306/TCP 16h
memcached ClusterIP 10.96.2.153 <none> 11211/TCP 15h
metadata ClusterIP 10.96.3.25 <none> 80/TCP,443/TCP 14h
neutron ClusterIP 10.96.2.233 <none> 80/TCP,443/TCP 14h
neutron-server ClusterIP 10.96.1.167 <none> 9696/TCP 14h
nova ClusterIP 10.96.3.15 <none> 80/TCP,443/TCP 14h
nova-api ClusterIP 10.96.1.210 <none> 8774/TCP 14h
nova-metadata ClusterIP 10.96.2.29 <none> 8775/TCP 14h
nova-novncproxy ClusterIP 10.96.0.192 <none> 6080/TCP 14h
novncproxy ClusterIP 10.96.3.130 <none> 80/TCP,443/TCP 14h
placement ClusterIP 10.96.1.170 <none> 80/TCP,443/TCP 15h
placement-api ClusterIP 10.96.2.35 <none> 8778/TCP 15h
rabbitmq ClusterIP None <none> 5672/TCP,25672/TCP,15672/TCP,15692/TCP 15h
rabbitmq-mgr-7b1733 ClusterIP 10.96.0.223 <none> 80/TCP,443/TCP 15h
List the PVCs:
root@ubuntu:~# kubectl -n openstack get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
glance-images Bound pvc-7d9a7656-24fd-4472-b966-0a7a5ba3caf3 2Gi RWO general 15h
mysql-data-mariadb-server-0 Bound pvc-c41c7d34-e175-47c0-a555-a710ecd9553f 5Gi RWO general 15h
rabbitmq-data-rabbitmq-rabbitmq-0 Bound pvc-36681105-6785-4331-b868-b8b38ba4a0ba 768Mi RWO general 15h
rabbitmq-data-rabbitmq-rabbitmq-1 Bound pvc-36ff02b0-93c9-41f7-bd0f-fe6ba9f71304 768Mi RWO general 15h
List the ingresses:
root@ubuntu:~# kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
ceph ceph-ingress-ceph <none> *.ceph.svc.cluster.local 192.168.72.50 80 15h
openstack cloudformation <none> cloudformation,cloudformation.openstack,cloudformation.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack glance <none> glance,glance.openstack,glance.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack heat <none> heat,heat.openstack,heat.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack horizon <none> horizon,horizon.openstack,horizon.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack keystone <none> keystone,keystone.openstack,keystone.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack metadata <none> metadata,metadata.openstack,metadata.openstack.svc.cluster.local 192.168.72.50 80 14h
openstack neutron <none> neutron,neutron.openstack,neutron.openstack.svc.cluster.local 192.168.72.50 80 14h
openstack nova <none> nova,nova.openstack,nova.openstack.svc.cluster.local 192.168.72.50 80 14h
openstack novncproxy <none> novncproxy,novncproxy.openstack,novncproxy.openstack.svc.cluster.local 192.168.72.50 80 14h
openstack openstack-ingress-openstack <none> *.openstack.svc.cluster.local 192.168.72.50 80 15h
openstack placement <none> placement,placement.openstack,placement.openstack.svc.cluster.local 192.168.72.50 80 14h
openstack rabbitmq-mgr-7b1733 <none> rabbitmq-mgr-7b1733,rabbitmq-mgr-7b1733.openstack,rabbitmq-mgr-7b1733.openstack.svc.cluster.local 192.168.72.50 80 15h
Access Horizon
Open the following address in a web browser; the username/password is admin/password and the domain is default:
http://192.168.72.50:31000
Log in to Horizon.
View the instances.
View the network topology.
Configure hosts resolution on your local workstation:
192.168.72.50 novncproxy.openstack.svc.cluster.local
Click to open the instance console and test access to the external network.
OpenStack CLI
Temporary configuration:
export OS_CLOUD=openstack_helm
openstack compute service list
Check the clouds.yaml configuration:
root@ubuntu:~# cat /etc/openstack/clouds.yaml
clouds:
openstack_helm:
region_name: RegionOne
identity_api_version: 3
auth:
username: 'admin'
password: 'password'
project_name: 'admin'
project_domain_name: 'default'
user_domain_name: 'default'
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
Permanent configuration:
cat >/etc/profile.d/adminrc.sh<<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source /etc/profile
Test the commands:
root@ubuntu:~# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 14a19983d32b431494ce690a8a29df0a | admin |
+----------------------------------+-------+
List the services:
root@ubuntu:~# openstack service list
+----------------------------------+-----------+----------------+
| ID | Name | Type |
+----------------------------------+-----------+----------------+
| 008f44fdbc5d462786e6563df768ec38 | neutron | network |
| 556287ed138f45309fe201818912687b | glance | image |
| ba006d48fc094179b336e402a1a21114 | heat | orchestration |
| c536e3eb8c8941369937977d22627775 | keystone | identity |
| dcc2e06909244bbbbc80c62dcf9494ee | placement | placement |
| e090b288846a40a189ee9f242e7dea2c | nova | compute |
| ff8604dcf5aa49469474fc3467b1569f | heat-cfn | cloudformation |
+----------------------------------+-----------+----------------+
List the stacks:
root@ubuntu:~# openstack stack list
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+
| 30a666fa-c512-4b84-b167-7443a13c38d2 | heat-basic-vm-deployment | 4e71287d807e4f9fa530874c3471a67b | CREATE_COMPLETE | 2023-08-27T05:22:51Z | None |
| 346176f1-3bca-4f03-9c6c-e45c44d5c9a0 | heat-subnet-pool-deployment | 4e71287d807e4f9fa530874c3471a67b | CREATE_COMPLETE | 2023-08-27T05:22:36Z | None |
| fc07a5cf-3e88-4528-aa46-5c16e4ceac05 | heat-public-net-deployment | 4e71287d807e4f9fa530874c3471a67b | CREATE_COMPLETE | 2023-08-27T05:22:27Z | None |
+--------------------------------------+-----------------------------+----------------------------------+-----------------+----------------------+--------------+