These notes follow the 老男孩 (Old Boy) video course together with the official documentation. If you spot any mistakes or omissions, feel free to point them out.
Video: https://www.bilibili.com/video/BV1LJ411Y7og?p=12

Compute service (nova)

Components

  1. nova-api (the "leader"): accepts and responds to all Compute service requests and manages the lifecycle of virtual machines (cloud instances).
  2. nova-compute (one per hypervisor host): does the real work of managing virtual machines, including creating them (it does this by calling libvirt, and libvirt manages the VMs).
  3. nova-scheduler: the nova scheduler (picks the most suitable nova-compute host on which to create a VM).
    1. When nova-compute starts, it registers the host's information in the database and then reports the host's usage (CPU, memory, disk) once a minute.
    2. nova-scheduler only needs to query the nova-compute records in the database to know the detailed usage of every host. The more free capacity a host has for each metric, the higher its score, and the highest-scoring host is chosen to run the new VM (see the sketch after this list).
  4. nova-conductor: acts as a proxy for nova-compute when updating VM state in the database.
    1. By design, nova-compute's usage data and the VMs it creates all end up in the database, so in theory every nova-compute node would need its own database connection. With many compute nodes (say 500-1000), the database credentials stored on every node would be easy for an attacker to obtain. nova-conductor was introduced to remove the risk of nova-compute nodes accessing the database directly, and it also reduces the load caused by many nodes hitting the database at the same time.
  5. nova-network: managed instance networking in early OpenStack releases (deprecated; replaced by neutron).
  6. nova-consoleauth and nova-novncproxy: together provide a web-based VNC console for the instances.
    1. nova-novncproxy: the web-based VNC proxy (noVNC client).
    2. nova-consoleauth: handles authorization and issues console access tokens, implementing access control.
  7. nova-api-metadata: accepts metadata requests sent from instances (works together with neutron-metadata-agent to customize instances).
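
To make the scoring idea in point 3 concrete, here is a minimal, hypothetical Python sketch. It is not nova's real scheduler code; the host data and the weights are invented purely for illustration.

# Toy version of nova-scheduler's "filter, then weigh" behaviour (illustrative only).
hosts = [
    {"name": "compute01", "free_vcpus": 8, "free_ram_mb": 16384, "free_disk_gb": 200},
    {"name": "compute02", "free_vcpus": 2, "free_ram_mb": 4096,  "free_disk_gb": 50},
]
flavor = {"vcpus": 2, "ram_mb": 2048, "disk_gb": 20}   # requested instance size

def fits(host, flavor):
    # Filter step: drop hosts that cannot hold the requested flavor at all.
    return (host["free_vcpus"] >= flavor["vcpus"]
            and host["free_ram_mb"] >= flavor["ram_mb"]
            and host["free_disk_gb"] >= flavor["disk_gb"])

def score(host):
    # Weigh step: more free capacity -> higher score (weights are made up).
    return (host["free_vcpus"] * 10
            + host["free_ram_mb"] / 1024.0
            + host["free_disk_gb"] / 10.0)

candidates = [h for h in hosts if fits(h, flavor)]
best = max(candidates, key=score)
print("schedule the instance to: %s" % best["name"])

The real scheduler is configurable (it runs a chain of filters and then weighers), but the overall flow is the same idea: discard hosts that cannot fit the request, then pick the best-scoring one.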

Controller node

一、 Create the databases and grant privileges

[root@controller01 ~]# mysql -uroot -phl044sdvwTT1LZ7Oa4wp
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    ->   IDENTIFIED BY 'dwu9Rn3Q7Vt998nKJjdH';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    ->   IDENTIFIED BY 'dwu9Rn3Q7Vt998nKJjdH';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    ->   IDENTIFIED BY 'dwu9Rn3Q7Vt998nKJjdH';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    ->   IDENTIFIED BY 'dwu9Rn3Q7Vt998nKJjdH';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> ^DBye
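
(Optional) A quick way to confirm the grants took effect is to log in as the new nova user; you should see the nova and nova_api databases in the output:

[root@controller01 ~]# mysql -unova -pdwu9Rn3Q7Vt998nKJjdH -h controller01 -e "SHOW DATABASES;"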

二、 Create the user in keystone and assign the role

[root@controller01 ~]# openstack user create --domain default \
>   --password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 8cba1e7341c14ab993124909c705919a |
| enabled   | True                             |
| id        | c687cda81f05406b98ec1da628326996 |
| name      | nova                             |
+-----------+----------------------------------+

The password is prompted for and not echoed; the password used here is SpDyzwPFXUKUmTW2tAuI.

[root@controller01 ~]# openstack role add --project service --user nova admin

This command produces no output.
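
If you want to double-check the assignment, the client can list it (the exact options available depend on your python-openstackclient version):

[root@controller01 ~]# openstack role assignment list --user nova --project service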

三、 Create the service in keystone and register the API endpoints

[root@controller01 ~]# openstack service create --name nova \
>   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | a901bb50a48f4ecf9e6c56cb3c4302f3 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   compute public http://controller01:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 6b90f49da12c45798d0132834a6b5597            |
| interface    | public                                      |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | a901bb50a48f4ecf9e6c56cb3c4302f3            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://controller01:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   compute internal http://controller01:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 3994cf2e3aa64108908c780ea4f806fa            |
| interface    | internal                                    |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | a901bb50a48f4ecf9e6c56cb3c4302f3            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://controller01:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
[root@controller01 ~]# openstack endpoint create --region RegionOne \
>   compute admin http://controller01:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 7fc57af033674b58974a0b02ada2224b            |
| interface    | admin                                       |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | a901bb50a48f4ecf9e6c56cb3c4302f3            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://controller01:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+

The nova API service listens on port 8774.
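
You can confirm that the three endpoints were registered with:

[root@controller01 ~]# openstack endpoint list | grep nova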

四、 Install the packages for the services

[root@controller01 ~]# yum install openstack-nova-api openstack-nova-conductor \
>   openstack-nova-console openstack-nova-novncproxy \
>   openstack-nova-scheduler

Note that the nova-compute component is not installed here; it will be installed on the compute node.

五、 Edit the configuration

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
...
enabled_apis = osapi_compute,metadata

In the [api_database] and [database] sections, configure database access:

[api_database]
...
connection = mysql+pymysql://nova:dwu9Rn3Q7Vt998nKJjdH@controller01/nova_api

[database]
...
connection = mysql+pymysql://nova:dwu9Rn3Q7Vt998nKJjdH@controller01/nova

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = R4odtEJzSDNTe9LoHYfF

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller01:5000
auth_url = http://controller01:35357
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = SpDyzwPFXUKUmTW2tAuI

In the [DEFAULT] section, set my_ip to the IP address of the controller node's management interface:

[DEFAULT]
...
my_ip = 192.168.137.11
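
If you are not sure which address the management interface has, listing the interfaces on the controller is enough (interface names vary by environment):

[root@controller01 ~]# ip -4 addr show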

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
  • nova.virt.firewall.NoopFirewallDriver is a Python import path: the first parts refer to /usr/lib/python2.7/site-packages/nova/virt/, firewall refers to firewall.py in that directory, and NoopFirewallDriver is one of the classes defined in that Python module.

PS: By default, the Compute service uses an internal firewall service. Since the Networking service includes a firewall service, you must disable Compute's built-in firewall by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
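
To illustrate what such a dotted path means, here is a generic Python snippet (not nova's actual driver-loading code) that resolves it by hand; it will only run on a node where the nova package is installed:

import importlib

path = "nova.virt.firewall.NoopFirewallDriver"
module_name, class_name = path.rsplit(".", 1)   # "nova.virt.firewall" / "NoopFirewallDriver"
module = importlib.import_module(module_name)   # imports .../site-packages/nova/virt/firewall.py
driver_cls = getattr(module, class_name)         # the NoopFirewallDriver class defined in that file
print(driver_cls)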

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:

[glance]
...
api_servers = http://controller01:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
  • The "lock" here is a file lock: before performing certain operations, nova creates a lock file under this path; if another process finds the lock already held, it does not proceed. The setting exists to prevent the same operation from being executed twice at the same time.
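
A minimal sketch of the file-lock idea, using only the Python standard library (this is not the oslo.concurrency code nova actually uses, and the lock file name below is made up):

import fcntl
import sys

# Take an exclusive, non-blocking lock on a file under the configured lock path.
lock_file = open("/var/lib/nova/tmp/example.lock", "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    # Another process already holds the lock: do not run the same task twice.
    sys.exit("lock is held by another process, exiting")

print("lock acquired, doing the work...")
# The lock is released when the file is closed / the process exits.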

六、 Sync the databases and create the tables

  • api_db
[root@controller01 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller01 ~]# mysql -uroot -phl044sdvwTT1LZ7Oa4wp nova_api -e "show  tables;"
+--------------------+
| Tables_in_nova_api |
+--------------------+
| build_requests     |
| cell_mappings      |
| flavor_extra_specs |
| flavor_projects    |
| flavors            |
| host_mappings      |
| instance_mappings  |
| migrate_version    |
| request_specs      |
+--------------------+
  • nova
[root@controller01 ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
[root@controller01 ~]# mysql -uroot -phl044sdvwTT1LZ7Oa4wp nova -e "show  tables;"
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| allocations                                |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| inventories                                |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| resource_provider_aggregates               |
| resource_providers                         |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
  • The official documentation has the same note here: ignore any deprecation messages in this output.

七、 Start the services

  • Enable the services to start at boot
[root@controller01 ~]# systemctl enable openstack-nova-api.service \
>   openstack-nova-consoleauth.service openstack-nova-scheduler.service \
>   openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
  • Start the services
[root@controller01 ~]# systemctl start openstack-nova-api.service \
>   openstack-nova-consoleauth.service openstack-nova-scheduler.service \
>   openstack-nova-conductor.service openstack-nova-novncproxy.service
  • Check that the services are running
[root@controller01 ~]# systemctl status openstack-nova-api.service   openstack-nova-consoleauth.service openstack-nova-scheduler.service   openstack-nova-conductor.service openstack-nova-novncproxy.service
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:05:48 CST; 12s ago
Main PID: 5095 (nova-api)
   CGroup: /system.slice/openstack-nova-api.service
           ├─5095 /usr/bin/python2 /usr/bin/nova-api
           ├─5149 /usr/bin/python2 /usr/bin/nova-api
           ├─5150 /usr/bin/python2 /usr/bin/nova-api
           ├─5174 /usr/bin/python2 /usr/bin/nova-api
           └─5175 /usr/bin/python2 /usr/bin/nova-api
Oct 22 13:05:38 controller01 systemd[1]: Starting OpenStack Nova API Server...
Oct 22 13:05:47 controller01 sudo[5151]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Oct 22 13:05:48 controller01 sudo[5156]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
Oct 22 13:05:48 controller01 systemd[1]: Started OpenStack Nova API Server.
● openstack-nova-consoleauth.service - OpenStack Nova VNC console auth Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:05:45 CST; 15s ago
Main PID: 5096 (nova-consoleaut)
   CGroup: /system.slice/openstack-nova-consoleauth.service
           └─5096 /usr/bin/python2 /usr/bin/nova-consoleauth
Oct 22 13:05:38 controller01 systemd[1]: Starting OpenStack Nova VNC console auth Server...
Oct 22 13:05:45 controller01 systemd[1]: Started OpenStack Nova VNC console auth Server.
● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:05:46 CST; 14s ago
Main PID: 5097 (nova-scheduler)
   CGroup: /system.slice/openstack-nova-scheduler.service
           └─5097 /usr/bin/python2 /usr/bin/nova-scheduler
Oct 22 13:05:38 controller01 systemd[1]: Starting OpenStack Nova Scheduler Server...
Oct 22 13:05:46 controller01 systemd[1]: Started OpenStack Nova Scheduler Server.
● openstack-nova-conductor.service - OpenStack Nova Conductor Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:05:45 CST; 16s ago
Main PID: 5098 (nova-conductor)
   CGroup: /system.slice/openstack-nova-conductor.service
           ├─5098 /usr/bin/python2 /usr/bin/nova-conductor
           ├─5140 /usr/bin/python2 /usr/bin/nova-conductor
           └─5141 /usr/bin/python2 /usr/bin/nova-conductor
Oct 22 13:05:38 controller01 systemd[1]: Starting OpenStack Nova Conductor Server...
Oct 22 13:05:45 controller01 systemd[1]: Started OpenStack Nova Conductor Server.
● openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:05:38 CST; 23s ago
Main PID: 5099 (nova-novncproxy)
   CGroup: /system.slice/openstack-nova-novncproxy.service
           └─5099 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
Oct 22 13:05:38 controller01 systemd[1]: Started OpenStack Nova NoVNC Proxy Server

Verification

[root@controller01 ~]# nova service-list
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller01 | internal | enabled | up    | 2021-11-15T03:32:10.000000 | -               |
| 2  | nova-conductor   | controller01 | internal | enabled | up    | 2021-11-15T03:32:10.000000 | -               |
| 4  | nova-scheduler   | controller01 | internal | enabled | up    | 2021-11-15T03:32:10.000000 | -               |
| 7  | nova-compute     | computer01   | nova     | enabled | up    | 2021-11-15T03:32:14.000000 | -               |
+----+------------------+--------------+----------+---------+-------+----------------------------+-----------------+
  • Although five services were started, nova service-list only shows three of them on the controller, because the nova service-list request is sent to nova-api and it is nova-api that returns the result (the listing above was apparently captured after the compute node had been added, which is why nova-compute also appears). If nova-api is down, the command fails with an error
    complaining that the port cannot be reached.
[root@controller01 ~]# systemctl stop openstack-nova-api
[root@controller01 ~]# nova service-list
ERROR (ConnectFailure): Unable to establish connection to http://controller01:8774/v2.1/cfb654cc503f4da8aaed7fde4a01c1f7: HTTPConnectionPool(host='controller01', port=8774): Max retries exceeded with url: /v2.1/cfb654cc503f4da8aaed7fde4a01c1f7 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fd6f103be10>: Failed to establish a new connection: [Errno 111] Connection refused',)
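
Don't forget to start the API service again after this test:

[root@controller01 ~]# systemctl start openstack-nova-api.service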

As for the novnc (noVNC proxy) service, it is enough to check whether port 6080 is listening; you can also simply point a browser at the host's IP on port 6080 to see whether the service is working.
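
For example (the port check and URL below assume the controller hostname used in this environment):

[root@controller01 ~]# ss -lntp | grep 6080
[root@controller01 ~]# curl -I http://controller01:6080/vnc_auto.html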

(Screenshot of the noVNC page, not included here.)

Compute node

一、 Install the packages

yum install openstack-nova-compute

Note that among the packages pulled in by this install you will see libvirt and the related qemu packages that are used to manage the virtual machines.

二、 Edit the configuration file

Edit the /etc/nova/nova.conf file and complete the following actions (the snippets below are taken from the official guide and use the hostname controller; in this environment that hostname is controller01):

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit


[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = R4odtEJzSDNTe9LoHYfF

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

Replace NOVA_PASS with the password you chose for the nova user in the Identity service; in this environment that is SpDyzwPFXUKUmTW2tAuI.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on the compute node (the official guide points at the first node of its example architecture, which in this environment corresponds to 192.168.137.11). In short, set it to the compute node's own IP address.

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Note: By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable Compute's built-in firewall by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

In the [vnc] section, enable and configure remote console access:
Apart from this section, the configuration is the same as on the controller node, so pay attention to it!
Of course, if the controller node also has the nova-compute service installed, you can copy its nova.conf directly and only change my_ip.

[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller01:6080/vnc_auto.html

The server component listens on all IP addresses, while the proxy component listens only on the management interface IP address of the compute node. The base URL indicates the location where a web browser can access remote consoles of instances on this compute node.

Note: If the host running your browser cannot resolve the controller01 hostname, replace controller01 with the management-network IP address of your controller node.

In the [glance] section, configure the location of the Image service API:

[glance]
...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

三、 Start the services

  • Enable libvirtd and nova-compute to start at boot
[root@computer01 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
  • Start libvirtd and nova-compute
[root@computer01 ~]# systemctl start libvirtd.service openstack-nova-compute.service

  • Check that the services are running

[root@computer01 ~]# systemctl status libvirtd.service openstack-nova-compute.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2021-10-22 13:27:37 CST; 12s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
Main PID: 2623 (libvirtd)
    Tasks: 17 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           └─2623 /usr/sbin/libvirtd
Oct 22 13:27:37 computer01 systemd[1]: Starting Virtualization daemon...
Oct 22 13:27:37 computer01 systemd[1]: Started Virtualization daemon.
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-10-22 13:27:39 CST; 10s ago
Main PID: 2640 (nova-compute)
    Tasks: 22
   CGroup: /system.slice/openstack-nova-compute.service
           └─2640 /usr/bin/python2 /usr/bin/nova-compute
Oct 22 13:27:37 computer01 systemd[1]: Starting OpenStack Nova Compute Server...
Oct 22 13:27:39 computer01 nova-compute[2640]: /usr/lib/python2.7/site-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have iterated over the result of pkg_resources.parse_...
Oct 22 13:27:39 computer01 nova-compute[2640]: stacklevel=1,
Oct 22 13:27:39 computer01 systemd[1]: Started OpenStack Nova Compute Server.
Hint: Some lines were ellipsized, use -l to show in full.

Addendum:
My environment is virtualized with VMware and virtualization is already enabled in the VM settings by default, so I can ignore this step. For a production environment, however, you should pay attention to it.
Determine whether your compute node supports hardware acceleration for virtual machines:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

The official documentation states:
If this command returns a value of one or greater, your compute node supports hardware acceleration and no additional configuration is needed.

If this command returns a value of zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.

I found an article that explains the difference between QEMU and KVM.

  • This touches on deeper virtualization topics that I have not explored yet.

In that case, edit the [libvirt] section in the /etc/nova/nova.conf file as follows:

[libvirt]
...
virt_type = qemu
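
If you change virt_type after nova-compute is already running, a restart is presumably needed for the change to take effect:

[root@computer01 ~]# systemctl restart openstack-nova-compute.service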

Verify operation

Go back to the controller node

and run the command:

[root@controller01 ~]# openstack compute service list
+----+------------------+--------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host         | Zone     | Status  | State | Updated At                 |
+----+------------------+--------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller01 | internal | enabled | up    | 2021-10-22T05:38:12.000000 |
|  2 | nova-conductor   | controller01 | internal | enabled | up    | 2021-10-22T05:38:11.000000 |
|  4 | nova-scheduler   | controller01 | internal | enabled | up    | 2021-10-22T05:38:12.000000 |
|  7 | nova-compute     | computer01   | nova     | enabled | up    | 2021-10-22T05:38:15.000000 |
+----+------------------+--------------+----------+---------+-------+----------------------------+

Note: The output should show three service components enabled on the controller node and one service component enabled on the compute node.
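
Optionally, the compute node should also show up as a hypervisor:

[root@controller01 ~]# openstack hypervisor list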
