OpenStack (Train): Networking (Neutron) Service Introduction and Installation
OpenStack Neutron is the service that provides network connectivity for OpenStack virtual machines. By analogy with real-world networking, just as computers and phones must connect to a router before they can reach the Internet, virtual machines must connect to a virtual network before they can communicate with each other or reach external networks. Neutron is the service that provides that connection. It lets users manage instance networking flexibly, including allocating IP addresses, defining subnets, creating routers and other network devices, and controlling network access. Neutron also supports several network virtualization technologies, so multiple virtual networks can be isolated from, or connected to, one another.
Flat, VLAN, VXLAN, and GRE are all network virtualization technologies in OpenStack Neutron, used to build virtual network connections between instances, or between instances and external networks.
The four technologies can be summarized as follows:
Flat is the simplest: packets are sent straight onto the physical network, with no traffic isolation or management.
VLAN is based on virtual LANs and divides the virtual network into multiple logical networks; it suits virtual networking within a single physical network.
VXLAN and GRE encapsulate traffic in UDP and IP respectively to divide the virtual network into multiple logical networks; they suit virtual networks that span physical networks or data centers.
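Purely as an illustration of how the network type is selected (a sketch that assumes the services installed below are already running, admin credentials are loaded, and a physical network named provider is configured; the demo-* names are hypothetical, and the VLAN/VXLAN variants additionally require those type drivers to be enabled in the ML2 configuration):
[root@controller ~]# openstack network create --provider-network-type flat --provider-physical-network provider demo-flat
[root@controller ~]# openstack network create --provider-network-type vlan --provider-physical-network provider --provider-segment 100 demo-vlan
[root@controller ~]# openstack network create --provider-network-type vxlan --provider-segment 200 demo-vxlan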
Installation and configuration (controller)
Preparation
(1) Create the database
① Connect to the database from the operating system terminal
[root@controller ~]# mysql -u root -p000000
② Create the neutron database
MariaDB [(none)]> CREATE DATABASE neutron;
③ Grant access to the neutron database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
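If you also want to allow local connections from the controller itself (the official installation guide grants both '%' and 'localhost'), you can additionally run:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';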
④ Exit the database
MariaDB [(none)]> exit
(2) Load the admin user's environment variables
[root@controller ~]# source admin-openrc.sh
(3) Create the service credentials
① Create the neutron user
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 95213d68498d48efae7f986fc56c20ba |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
② Add the admin role to the neutron user and the service project
[root@controller ~]# openstack role add --project service --user neutron admin
③ Create the neutron service entity for the Networking service
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 4cd0a74d9e824a2186bd5c8a25f1ce8c |
| name | neutron |
| type | network |
+-------------+----------------------------------+
(4) Create the Networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 5d704d8177b94d22814c77200fbf7502 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a868e8135c7242be99881bf452a826cb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 247feaa6e95346b78eb5dbb6aecbfbb7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a868e8135c7242be99881bf452a826cb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2132cb74585342ccb6728889a7726fd3 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a868e8135c7242be99881bf452a826cb |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
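As an optional sanity check (not part of the official steps), you can list the three endpoints just created:
[root@controller ~]# openstack endpoint list --service network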
Configure the Networking service components
The Neutron Networking service supports two deployment architectures: provider networks and self-service networks.
- Provider networks are the simpler option: instances attach to a public (provider) network, and there are no self-service networks, routers, or floating IP addresses. Only administrators or otherwise privileged users can manage the provider network. In this architecture instances can only communicate with external networks through the provider network and cannot directly reach anything beyond it.
- Self-service networks build on the simpler architecture by adding layer-3 services that enable self-service networks. Administrators and privileged users can manage self-service networks, attach them to instances, and reach external networks through them. Self-service networks also support routers and floating IP addresses, which allows more flexible network management.
Self-service networks are more flexible than provider networks and can implement more complex topologies for a wider range of use cases.
This guide uses the provider network architecture.
All of the following operations are performed on the controller node.
(1) Install the packages
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:
① In the [database] section, configure database access:
[root@controller ~]# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron
② In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins (service_plugins is left empty, so routing and overlapping IP addresses are not enabled in this provider network setup):
[DEFAULT]
core_plugin = ml2
service_plugins =
③ In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
④ In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
⑤ In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# These two options make Neutron notify the Nova Compute service about port status and data changes.
# Specifically, when the state of an instance's network port changes,
# for example when a port is created, updated, or deleted, Neutron sends the
# relevant information to Nova so that Nova can update the instance's state.
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000
⑥ In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridging mechanism to build layer-2 (bridging and switching) virtual networking for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
① In the [ml2] section, enable flat and VLAN networks:
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
② In the [ml2] section, disable self-service networks:
[ml2]
tenant_network_types =
③ In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
mechanism_drivers = linuxbridge
④ In the [ml2] section, enable the port security extension driver:
[ml2]
extension_drivers = port_security
⑤ In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
flat_networks = provider
⑥ In the [securitygroup] section, enable ipset to improve the efficiency of security group rules:
[securitygroup]
enable_ipset = true
(4) Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
① In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34 # Defines the mapping from the provider network name to the physical interface used by the Linux bridge agent.
# Replace ens34 with the name of the provider physical network interface on your node.
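If you are unsure of the interface name on your own node (ens34 is specific to this environment), you can list the interfaces first:
[root@controller ~]# ip addr show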
② In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
③ In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# firewall_driver: the firewall driver to use
# enable_security_group: whether the security group feature is enabled
(5) Configure kernel parameters
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# When these parameters are set to 1, packets forwarded by a Linux bridge pass through iptables or ip6tables,
# which makes access control and network isolation possible.
[root@controller ~]# modprobe br_netfilter # Load the br_netfilter kernel module.
[root@controller ~]# sysctl -p
# Reload the kernel parameters from /etc/sysctl.conf
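A small optional addition (not in the official steps) that makes the module load survive a reboot on a systemd-based system, plus a quick check that the parameter took effect:
[root@controller ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # loaded at boot by systemd-modules-load
[root@controller ~]# sysctl net.bridge.bridge-nf-call-iptables  # should report a value of 1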
(6) Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# interface_driver specifies the interface driver Neutron uses (the Linux bridge driver here)
# dhcp_driver specifies the DHCP driver Neutron uses (Dnsmasq)
# enable_isolated_metadata enables the metadata service on isolated networks
(7) Configure the metadata agent
The metadata agent provides configuration information to instances.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
① In the [DEFAULT] section, configure the metadata host and shared secret:
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
# nova_metadata_host is the host that serves the Nova metadata API, i.e. the controller node's IP address or hostname
# metadata_proxy_shared_secret is the shared secret for the metadata proxy,
# i.e. the key used when the metadata service and the metadata proxy communicate with each other
(8) Configure the Compute service to use the Networking service
The Nova Compute service must already be installed to complete this step.
Edit the /etc/nova/nova.conf file and complete the following actions:
① In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the shared secret:
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
Finalize the installation
① The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
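To confirm the link is in place, you can check it with:
[root@controller ~]# ls -l /etc/neutron/plugin.ini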
② Populate the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
③ Restart the Compute API service:
[root@controller ~]# systemctl restart openstack-nova-api.service
④ Start the Networking services and configure them to start on boot:
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service && systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# Check the log files yourself; if there are no errors, the services are healthy.
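A couple of optional commands for that check (a quick sketch, not from the official guide), scanning the Neutron logs for errors and confirming the services are active:
[root@controller ~]# grep -i error /var/log/neutron/*.log
[root@controller ~]# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service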
Installation and configuration (compute)
Install and configure the common components of the Networking service.
(1) Install the packages
[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
(2) Configure the common components
The common Neutron configuration includes the authentication mechanism, message queue, and plug-in.
① Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, comment out any connection options, because compute nodes do not access the database directly.
② In the [DEFAULT] section, configure RabbitMQ message queue access:
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
③ In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
④ In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the core networking component
(1) Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file
① In the [linux_bridge] section, map the provider virtual network to the physical network interface:
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34
# The Linux bridge agent will use the ens34 interface to connect to the provider physical network,
# enabling traffic exchange between virtual and physical networks.
② In the [vxlan] section, disable VXLAN overlay networks (this provider network deployment does not use an overlay, so Layer 2 population is not configured either):
[vxlan]
enable_vxlan = false
③ In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure kernel parameters
[root@compute ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# sysctl -p
Configure the Compute service component
(1) Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure access parameters:
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
Finalize the installation
Restart the Compute service:
[root@compute ~]# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start on boot:
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service && systemctl enable neutron-linuxbridge-agent.service
Verification
(1) Load the environment file:
[root@controller ~]# source admin-openrc.sh
(2) On the controller node, list the agents to verify that the Neutron agents launched successfully:
[root@controller ~]# openstack network agent list
# The output should list the Linux bridge agents on both the controller and compute nodes, plus the DHCP and metadata agents on the controller.
Launch an instance
Create a flavor:
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
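Optionally confirm the flavor is registered:
[root@controller ~]# openstack flavor list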
Create a key pair:
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | d2:a2:a8:1b:ae:4d:08:52:57:5b:ce:07:76:df:e1:d6 |
| name | mykey |
| user_id | fef5f8c16d3d4b9d849bc1488bf50a21 |
+-------------+-------------------------------------------------+
# Verify:
[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | d2:a2:a8:1b:ae:4d:08:52:57:5b:ce:07:76:df:e1:d6 |
+-------+-------------------------------------------------+
Add security group rules
Add rules to the default security group.
Permit ICMP (ping):
[root@controller ~]# openstack security group rule create --proto icmp default
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2023-06-29T02:14:06Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 29966f26-1836-4e74-b8d3-de4552778917 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='0769b940829c4078a4aa573e83d6520c', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 0769b940829c4078a4aa573e83d6520c |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 9fd799d7-c00c-40f8-bf06-6d3282de8679 |
| tags | [] |
| updated_at | 2023-06-29T02:14:06Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Permit secure shell (SSH) access:
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2023-06-29T02:18:24Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 16fe8b3a-1b6f-4517-aa14-8e01c22f7724 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='0769b940829c4078a4aa573e83d6520c', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 0769b940829c4078a4aa573e83d6520c |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 9fd799d7-c00c-40f8-bf06-6d3282de8679 |
| tags | [] |
| updated_at | 2023-06-29T02:18:24Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
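To review the rules now attached to the default security group (an optional check):
[root@controller ~]# openstack security group rule list default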
Create the network
[root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat flat-provider
# Create the subnet
[root@controller ~]# openstack subnet create --network flat-provider --allocation-pool start=192.168.200.50,end=192.168.200.100 --dns-nameserver 114.114.114.114 --gateway 192.168.200.2 --subnet-range 192.168.200.0/24 flat-subnet
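The net-id passed to the server create command below is the ID of the flat-provider network created above (the UUID shown is specific to this environment); it can be looked up with:
[root@controller ~]# openstack network list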
Launch the instance
[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=f711b92a-f20b-4704-9cb5-d295d423268a --security-group default --key-name mykey server1
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | qcHi7Xb84BDi |
| config_drive | |
| created | 2023-06-29T02:28:35Z |
| flavor | m1.nano (0) |
| hostId | |
| id | eded4b15-e614-495f-abcf-7af4335844e2 |
| image | cirros (1e4266c4-6dbd-4f27-830e-e857dc546f5b) |
| key_name | mykey |
| name | server1 |
| progress | 0 |
| project_id | 0769b940829c4078a4aa573e83d6520c |
| properties | |
| security_groups | name='9fd799d7-c00c-40f8-bf06-6d3282de8679' |
| status | BUILD |
| updated | 2023-06-29T02:28:35Z |
| user_id | fef5f8c16d3d4b9d849bc1488bf50a21 |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------+--------+------------------------------+--------+---------+
| eded4b15-e614-495f-abcf-7af4335844e2 | server1 | ACTIVE | flat-provider=192.168.200.79 | cirros | m1.nano |
+--------------------------------------+---------+--------+------------------------------+--------+---------+
Access the instance using the virtual console
[root@controller ~]# openstack console url show server1
+-------+-------------------------------------------------------------------------------------------+
| Field | Value |
+-------+-------------------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?path=%3Ftoken%3Da9d0b3b4-97f6-4865-b8ce-0252a66bb058 |
+-------+-------------------------------------------------------------------------------------------+
# Fix:
[root@compute ~]# virsh capabilities
pc-i440fx-rhel7.2.0
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
hw_machine_type = x86_64=pc-i440fx-rhel7.2.0 # Change the machine type (the value comes from virsh capabilities above)
cpu_mode = host-passthrough # Use the host CPU model (passthrough)
# Restart the Compute services:
[root@compute ~]# systemctl restart openstack-nova-*
Create the instance again
[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=f711b92a-f20b-4704-9cb5-d295d423268a --security-group default --key-name mykey server2
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | wp3UChp8p9Vk |
| config_drive | |
| created | 2023-06-29T02:42:23Z |
| flavor | m1.nano (0) |
| hostId | |
| id | e9ae5fa0-2787-453e-8b89-6c4058544622 |
| image | cirros (1e4266c4-6dbd-4f27-830e-e857dc546f5b) |
| key_name | mykey |
| name | server2 |
| progress | 0 |
| project_id | 0769b940829c4078a4aa573e83d6520c |
| properties | |
| security_groups | name='9fd799d7-c00c-40f8-bf06-6d3282de8679' |
| status | BUILD |
| updated | 2023-06-29T02:42:23Z |
| user_id | fef5f8c16d3d4b9d849bc1488bf50a21 |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
[root@controller ~]# openstack console url show server2
+-------+-------------------------------------------------------------------------------------------+
| Field | Value |
+-------+-------------------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?path=%3Ftoken%3D023b6a9e-d474-41de-8f78-387c864f0906 |
+-------+-------------------------------------------------------------------------------------------+
Verify from the instance console that networking works: the instance can reach the local network and can also ping external addresses.
Access the instance remotely over SSH:
[root@controller ~]# ssh cirros@192.168.200.92
The authenticity of host '192.168.200.92 (192.168.200.92)' can't be established.
RSA key fingerprint is SHA256:D0GT9nWiGUGvVrvUb8ZUerhDX84Rit3IsCwVSUdcomQ.
RSA key fingerprint is MD5:17:06:0e:00:48:02:51:ac:46:c7:a4:82:c1:4b:45:c5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.200.92' (RSA) to the list of known hosts.
$
Reference video for this article: https://www.bilibili.com/video/BV1fL4y1i7NZ?p=7&vd_source=7c7cb4224e0c273f28886e581838b110