Building an OpenStack Cloud Platform (Mitaka)
Core components: Keystone, Glance, Nova, and networking
Introduction
What is OpenStack?
Cloud computing: informally, cloud computing is a pay-per-use model providing convenient, on-demand network access to a shared pool of configurable computing resources.
Types of cloud computing:
- Public cloud: a cloud offered to users by a third-party provider, generally accessible over the Internet and free or low-cost. Its core attribute is shared resource services.
- Private cloud: built for exclusive use by a single customer, giving the most effective control over data, security, and quality of service. The company owns the infrastructure and controls how applications are deployed on it. A private cloud can sit behind the firewall of the enterprise data center or in a secure colocation facility. Its core attribute is dedicated resources.
- Hybrid cloud: a blend of public and private clouds, and the main model and direction of cloud computing in recent years. Private clouds mainly serve enterprise users who, for security reasons, prefer to keep their data in-house, yet still want access to public-cloud compute resources. A hybrid cloud mixes and matches the two to get the best of both: a tailored solution that is both economical and secure.
Service models of cloud computing:
- IaaS (Infrastructure as a Service): provides cloud hosts and other infrastructure resources.
- PaaS (Platform as a Service): provides middleware and service components.
- SaaS (Software as a Service): provides software.
OpenStack is an IaaS solution.
OpenStack is an open-source cloud computing management platform made up of several major components that work together. It supports almost every type of cloud environment, and the project aims to deliver a management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.
How does OpenStack differ from RHV?
RHV centralizes virtualization resources under unified administrator control, with resources allocated by the administrator, like a planned economy. OpenStack is self-service: users take what they need and pay as they go, like a market economy.
What problem does OpenStack solve?
OpenStack is an open-source project that provides software for building and managing public and private clouds. Its community spans more than 130 companies and 1,350 developers, all of whom use OpenStack as a common front end for Infrastructure-as-a-Service (IaaS) resources. The project's primary goals are to simplify cloud deployment and to make clouds scale well. This article aims to provide the guidance needed to set up and manage your own public or private cloud on top of OpenStack.
Which projects make up OpenStack?
- Keystone: identity service. Keystone authenticates and authorizes all OpenStack services, and also serves as the endpoint catalog for every service.
- Glance: image service. Glance stores and retrieves virtual machine disk images from multiple locations.
- Nova: compute service. A complete tool for managing and accessing OpenStack compute resources, handling scheduling, creation, and deletion of instances.
- Neutron: networking service. Neutron provides network connectivity between the other OpenStack services.
- Dashboard: the web management interface.
- Swift: a highly fault-tolerant object storage service that stores and retrieves unstructured data objects via a RESTful API.
- Cinder: block storage service. Provides persistent block storage through a self-service API.
- Ceilometer: telemetry service, used for monitoring and billing.
- Heat: orchestration service, used to orchestrate stacks of resources.
Start from the official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment.html
Deploying OpenStack demands fairly powerful hardware; here we deploy only a controller node, a compute node, and a storage node.
The controller node handles scheduling (it runs the core services), the compute node runs the cloud instances, and the storage node stores the data.
Environment
Security
OpenStack services support various security methods, including password, policy, and encryption; the supporting services, including the database server and the message broker, support at least password security.
Since there are many passwords to manage, it helps to record them in a table.
You can also generate passwords with openssl:
[root@server1 ~]# openssl rand -hex 10
9275b356d387e3b78215
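If you prefer to script password generation, Python's standard library can produce the same kind of value as `openssl rand -hex 10` (a sketch, not part of the official guide):

```python
import secrets

# Equivalent of `openssl rand -hex 10`: 10 random bytes rendered
# as 20 hex characters, suitable as a generated service password.
password = secrets.token_hex(10)
print(password)
```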
Networking
The controller node needs two network interfaces:
[root@server1 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:8b:de:3d brd ff:ff:ff:ff:ff:ff
inet 172.25.254.1/24 brd 172.25.254.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe8b:de3d/64 scope link
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state D
[root@server1 network-scripts]# cat ifcfg-eth0
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
eth0 is the interface I just added.
[root@server1 network-scripts]# ifup eth0 /bring the interface up
[root@server1 network-scripts]# ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b2:b9:4e brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:feb2:b94e/64 scope link
valid_lft forever preferred_lft forever
Configure name resolution:
[root@server1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.67 rhel7host
172.25.254.1 controller
172.25.254.2 compute1
172.25.254.3 block1
[root@server1 ~]# hostnamectl set-hostname controller
Time synchronization
Install Chrony to keep time synchronized across the OpenStack cluster.
On the host machine:
[root@rhel7host redhat]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-07-15 11:47:17 CST; 8h ago
Process: 825 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 761 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 779 (chronyd)
CGroup: /system.slice/chronyd.service
└─779 /usr/sbin/chronyd
[root@rhel7host redhat]# vim /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst /use Aliyun's NTP servers
server ntp3.aliyun.com iburst
...
# Allow NTP client access from local network.
allow 172.25/16 /allow hosts in the 172.25 network to sync time from this machine
On the controller node:
[root@server1 network-scripts]# yum install chrony -y
[root@server1 network-scripts]# vim /etc/chrony.conf
server 172.25.254.67 iburst /sync time from the host machine
[root@server1 network-scripts]# systemctl enable --now chronyd /enable at boot
[root@server1 network-scripts]# chronyc sources -v
210 Number of sources = 1
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* rhel7host 3 6 17 31 -2665ns[-5183ns] +/- 41ms
Apart from the controller node, the other nodes use the same configuration. This is just one approach; you can choose differently, for example synchronizing time only within the OpenStack cluster.
Install the OpenStack packages
We use the Mitaka release of OpenStack:
[root@rhel7host mitaka]# pwd
/var/ftp/pub/openstack/mitaka /the packages live at this path on the host machine
Configure the yum repository on server1:
[root@server1 yum.repos.d]# cat openstack.repo
[openstack]
name=mitaka
baseurl=ftp://172.25.254.67/pub/openstack/mitaka
gpgcheck=0
[root@server1 yum.repos.d]# yum repolist
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
openstack | 2.9 kB 00:00:00
openstack/primary_db | 141 kB 00:00:00
repo id repo name status
openstack mitaka 279
westos westos 5,152
repolist: 5,431
- First, update the system:
[root@server1 ~]# yum upgrade
- Install the OpenStack command-line client:
yum install python-openstackclient
- Make sure SELinux is disabled:
[root@server1 ~]# getenforce
Disabled
SQL database
- Install the packages:
yum install mariadb mariadb-server python2-PyMySQL
- Create and edit /etc/my.cnf.d/openstack.cnf:
[root@server1 ~]# vim /etc/my.cnf.d/openstack.cnf
[root@server1 ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.25.254.1 /bind to the controller node's address
default-storage-engine = innodb /storage engine
innodb_file_per_table /one tablespace file per table
max_connections = 4096 /maximum number of connections
collation-server = utf8_general_ci
character-set-server = utf8 /character set
[root@server1 ~]# systemctl enable --now mariadb /enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@server1 ~]# mysql_secure_installation
Set root password? [Y/n] y
New password: caoaoyuan
Re-enter new password: caoaoyuan
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
Message queue
Install the package:
[root@server1 ~]# yum install rabbitmq-server -y
Start the message queue service and configure it to start at boot:
[root@server1 ~]# systemctl enable --now rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
Add the openstack user:
[root@server1 ~]# rabbitmqctl add_user openstack openstack /the last argument is the password; here it matches the username
Creating user "openstack" ...
Grant the openstack user configure, write, and read permissions:
[root@server1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
The three ".*" patterns grant configure, write, and read access, respectively, to every resource in the vhost.
[root@server1 ~]# rabbitmq-plugins list /list available plugins
[root@server1 ~]# rabbitmq-plugins enable rabbitmq_management /enable the web management console, which starts the corresponding plugins
The default account and password for the console are both guest.
There you can see the permissions we just set.
Memcached
The Identity service that follows uses Memcached to cache authentication tokens. The memcached service runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it. Tokens used to be kept in MySQL, which added load and required periodic purging; Memcached can instead expire entries after a maximum lifetime, solving that problem.
Install the packages:
[root@server1 ~]# yum install memcached python-memcached -y
[root@server1 ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1" /comment this out so memcached listens on all interfaces and remote connections succeed
Start the Memcached service and configure it to start at boot:
[root@server1 ~]# systemctl enable --now memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@server1 ~]# netstat -tnlp |grep 11211
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 5320/memcached
tcp6 0 0 :::11211 :::* LISTEN 5320/memcached
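The expiry behaviour described above can be sketched as a toy TTL cache in Python. This illustrates the idea behind caching tokens in Memcached, not how memcached itself is implemented:

```python
import time

# Toy TTL cache illustrating why Memcached suits token storage:
# entries expire on read instead of requiring periodic purging.
class TTLCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        # remember the value together with its expiry timestamp
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]  # expired: drop it lazily
            return None
        return value

cache = TTLCache()
cache.set("token-abc123", {"user": "admin"}, ttl_seconds=300)
print(cache.get("token-abc123"))  # → {'user': 'admin'}
```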
The Identity service: Keystone
The identity service's main job is authentication and authorization: it issues tokens, and it maintains a catalog of every service's API endpoints, so that components can reach each other through those APIs.
Before you configure the OpenStack Identity service, you must create a database and an administration token. All data is kept in the database; every component you deploy gets a database of its own.
[root@server1 ~]# mysql -uroot -pcaoaoyuan
/Create the keystone database:
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
/Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
The % wildcard also allows connections from external hosts.
/Generate a random value to use as the administration token during initial configuration:
[root@controller ~]# openssl rand -hex 10
f8eea09f945c8d29431b
Install and configure the components
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = f8eea09f945c8d29431b /the initial administration token; must match the value generated above and is used only during bootstrap
[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone
/format: user:password@host/database
[token]
provider = fernet /choose the token provider
Populate the Identity service database as the keystone user:
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@controller ~]# mysql -p
MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone |
+------------------------+
| access_token |
.......
| service_provider |
| token |
| trust |
| trust_role |
| user | /the tables have been generated
| user_group_membership |
| whitelisted_config |
+------------------------+
Initialize the Fernet keys:
[root@controller ~]# cd /etc/keystone/
[root@controller keystone]# ls
default_catalog.templates keystone.conf keystone-paste.ini logging.conf policy.json sso_callback_template.html
[root@controller keystone]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller keystone]# ls
default_catalog.templates fernet-keys keystone.conf keystone-paste.ini logging.conf policy.json sso_callback_template.html
The fernet-keys directory has been generated.
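These on-disk keys are what the fernet provider uses to sign and encrypt tokens, which is why no token table needs to grow in the database. A toy standard-library sketch of the signing half of the idea (not Keystone's actual code, and real fernet tokens are AES-encrypted as well):

```python
import base64
import hashlib
import hmac
import json

# Toy illustration of the fernet idea: the token itself carries a
# signed payload and is validated with a key kept on disk, so no
# token record has to be stored server-side.
key = b"example-stand-in-for-a-fernet-key"  # hypothetical key material

def issue(payload):
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def validate(token):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue({"user": "admin", "project": "admin"})
print(validate(token)["user"])  # → admin
```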
Configure the Apache HTTP server
[root@controller keystone]# vim /etc/httpd/conf/httpd.conf
#ServerName www.example.com:80
ServerName controller /set ServerName to the controller node
[root@controller conf.d]# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357 /listen on both ports
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public /serves public access
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin /serves administrative access
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
[root@controller conf.d]# systemctl enable --now httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@controller conf.d]# netstat -tnlp | grep 5000
tcp6 0 0 :::5000 :::* LISTEN 16119/httpd
[root@controller conf.d]# netstat -tnlp | grep 35357
tcp6 0 0 :::35357 :::* LISTEN 16119/httpd
Create the service entity and API endpoints
Prerequisites
[root@controller conf.d]# export OS_TOKEN=f8eea09f945c8d29431b /configure the authentication token
[root@controller conf.d]# export OS_URL=http://controller:35357/v3 /configure the endpoint URL
[root@controller conf.d]# export OS_IDENTITY_API_VERSION=3 /configure the Identity API version
In your OpenStack environment, the Identity service manages a catalog of services; services use this catalog to locate the other services available in your environment.
Create the service entity for the Identity service:
[root@controller conf.d]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | eb2a914b7f0d4752868784b47a31a703 |
| name | keystone |
| type        | identity                         |
+-------------+----------------------------------+
[root@controller conf.d]# openstack service list
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| eb2a914b7f0d4752868784b47a31a703 | keystone | identity |
+----------------------------------+----------+----------+
Create the API endpoints for the Identity service:
[root@controller conf.d]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7be3b4c7194344dcb883a3fa63123467 |
| interface | public | /the endpoint for public use
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb2a914b7f0d4752868784b47a31a703 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller conf.d]# openstack endpoint create --region RegionOne \
> identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6bdd28b8ece7497181ca3f97347caa37 |
| interface | internal |
| region | RegionOne | /the endpoint for internal use
| region_id | RegionOne |
| service_id | eb2a914b7f0d4752868784b47a31a703 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller conf.d]# openstack endpoint create --region RegionOne \
> identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cd6130854ba143f0bdfd6c9603fa6e95 |
| interface | admin |
| region | RegionOne | /the endpoint for admin use
| region_id | RegionOne |
| service_id | eb2a914b7f0d4752868784b47a31a703 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+
[root@controller conf.d]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 6bdd28b8ece7497181ca3f97347caa37 | RegionOne | keystone | identity | True | internal | http://controller:5000/v3 |
| 7be3b4c7194344dcb883a3fa63123467 | RegionOne | keystone | identity | True | public | http://controller:5000/v3 |
| cd6130854ba143f0bdfd6c9603fa6e95 | RegionOne | keystone | identity | True | admin | http://controller:35357/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
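Conceptually, a client resolves a URL from this catalog by service type and interface (public for external users, internal for service-to-service traffic, admin for privileged operations). A minimal sketch, where the catalog entries mirror the table above and the helper function is hypothetical:

```python
# A simplified view of the service catalog built above: one endpoint
# per (service_type, interface) pair, each carrying a URL.
catalog = [
    {"service_type": "identity", "interface": "public",
     "url": "http://controller:5000/v3"},
    {"service_type": "identity", "interface": "internal",
     "url": "http://controller:5000/v3"},
    {"service_type": "identity", "interface": "admin",
     "url": "http://controller:35357/v3"},
]

def endpoint_for(service_type, interface):
    # pick the URL matching the requested service type and interface
    for ep in catalog:
        if ep["service_type"] == service_type and ep["interface"] == interface:
            return ep["url"]
    raise LookupError("no matching endpoint")

print(endpoint_for("identity", "admin"))  # → http://controller:35357/v3
```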
Create a domain, projects, users, and roles
[root@controller conf.d]# openstack domain list
/no domains exist yet
[root@controller conf.d]#
[root@controller conf.d]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True | /create the default domain
| id | c80fd98d2b4b4946839e90c6a4eea979 |
| name | default |
+-------------+----------------------------------+
[root@controller conf.d]# openstack project create --domain default \
> --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+ /Create the admin project:
| description | Admin Project |
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | 5bd822f5700d4153bc85403de17b35db |
| is_domain | False |
| name | admin |
| parent_id | c80fd98d2b4b4946839e90c6a4eea979 |
+-------------+----------------------------------+
[root@controller conf.d]# openstack user create --domain default \
> --password admin admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | 7d45ebaf9a974ce69b8f6197c7f8719f |
| name | admin | /Create the admin user:
+-----------+----------------------------------+
[root@controller conf.d]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | c08749bee57d4d2ead99fb744da4bd00 |
| name | admin | /Create the admin role:
+-----------+----------------------------------+
[root@controller conf.d]# openstack role add --project admin --user admin admin /add the admin role to the admin project and user
[root@controller conf.d]# openstack project create --domain default \
> --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | 1785606c122c413c9a134bf92e6f5374 |
| is_domain | False |
| name | service | /Create the service project:
| parent_id | c80fd98d2b4b4946839e90c6a4eea979 |
+-------------+----------------------------------+
[root@controller conf.d]# openstack project create --domain default \
> --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | cdc49f2f64d24621b0f54bcc72b6b5df |
| is_domain | False |
| name | demo | /Create the demo project: an unprivileged project and user
| parent_id | c80fd98d2b4b4946839e90c6a4eea979 |
+-------------+----------------------------------+
[root@controller conf.d]# openstack user create --domain default \
> --password demo demo
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | aa9345f4858145ca8a4e295f63a44647 |
| name | demo | /Create the demo user
+-----------+----------------------------------+
[root@controller conf.d]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | bd16a57f93ba41a9b1a1bf16af4b5dd2 |
| name | user | /Create the user role
+-----------+----------------------------------+
[root@controller conf.d]# openstack role add --project demo --user demo user /add the user role to the demo project and user
Verify operation
Unset the temporary OS_TOKEN and OS_URL environment variables:
[root@controller conf.d]# unset OS_TOKEN OS_URL
As the admin user, request an authentication token from Keystone:
[root@controller conf.d]# openstack --os-auth-url http://controller:35357/v3 \
> --os-project-domain-name default --os-user-domain-name default \
> --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-07-16T04:52:21.551634Z |
| id | gAAAAABfD871WyI80v9BBl7Kykt3JMvSJa_VOs7Mr1qx-IdCJm4Q_RzXELO4JRdEXnUEkRgm9xDwKu6Hafi3wftADCQ48_gbTKxOwoqwDPGkccv5bnc7ydWf-Awoqg-Y0oLlNjwc4v8e1XVrus3Y6QZLY1r5qKU7vwDwR_WuRU8rV654x9vHIak |
| project_id | 5bd822f5700d4153bc85403de17b35db |
| user_id | 7d45ebaf9a974ce69b8f6197c7f8719f |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
By default the tokens are cached in Memcached.
As the demo user, request an authentication token:
[root@controller conf.d]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name default --os-user-domain-name default \
> --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-07-16T04:53:10.651151Z |
| id | gAAAAABfD88m_PHLeX07fQc9rVtznPrOGJcZjx4bGNj42V5lTj6U7XQQTpgINUgNVSRYvaiP1TRg-Z5KjJDvs3FPICMbONMoqCvt2Pypt8U9u5-8hTMJJniN0_EUAflfilI3ZCGoVWbQ5qmJBGQ8ze6LjPXZgaYItj4KfTwmaQWei5XAnA3LaJE |
| project_id | cdc49f2f64d24621b0f54bcc72b6b5df |
| user_id | aa9345f4858145ca8a4e295f63a44647 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
List the users:
[root@controller conf.d]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin user list
Password:
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 7d45ebaf9a974ce69b8f6197c7f8719f | admin |
| aa9345f4858145ca8a4e295f63a44647 | demo |
+----------------------------------+-------+
Authenticating this way on every openstack command is tedious, so the official guide has us create client environment scripts that export these variables:
Create the OpenStack client environment scripts
[root@controller ~]# cat admin-openrc /the admin script
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller ~]# vim demo-openrc /the demo user script
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller ~]# source admin-openrc /load the admin credentials
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 7d45ebaf9a974ce69b8f6197c7f8719f | admin |
| aa9345f4858145ca8a4e295f63a44647 | demo | /now the list works directly
+----------------------------------+-------+
[root@controller ~]# source demo-openrc
[root@controller ~]# openstack user list /as the unprivileged user, this is denied:
You are not authorized to perform the requested action: identity:list_users (HTTP 403) (Request-ID: req-3c904770-943e-49a9-9b92-d35ac7749b17)
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 1785606c122c413c9a134bf92e6f5374 | service |
| 5bd822f5700d4153bc85403de17b35db | admin |
| cdc49f2f64d24621b0f54bcc72b6b5df | demo |
+----------------------------------+---------+
The Glance image service
The OpenStack Image service includes the following components:
- glance-api: accepts Image API calls for image discovery, retrieval, and storage.
- glance-registry: stores, processes, and retrieves metadata about images, such as size and type.
- Database: stores image metadata. You can choose the database you prefer; most deployments use MySQL or SQLite.
- Storage repository for image files: many repository types are supported, including normal filesystems, object storage, RADOS block devices, HTTP, and Amazon S3. Note that some repositories support only read-only usage.
- Metadata definition service: a common API for vendors, administrators, services, and users to define custom metadata. This metadata can be applied to different resources such as images, artifacts, volumes, flavors, and aggregates. A definition includes the new property's key, description, constraints, and the resource types it can be associated with.
Install and configure
Create the glance database:
(openstack) [root@controller ~]# mysql -u root -p
Enter password:
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> Ctrl-C -- exit!
Aborted
[root@controller ~]# mysql -uglance -pglance
MariaDB [(none)]> /it is enough that we can log in
Create the service credentials
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --domain default --password glance glance
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | 12b1ebac364f42d3b499347824d4f13a |
| name | glance | /Create the glance user
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user glance admin /add the admin role to the glance user and service project
[root@controller ~]# openstack service create --name glance \
> --description "OpenStack Image" image /create the glance service entity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 0783cb66f29049fcb4a2c9c0789aeb14 |
| name | glance |
| type | image |
+-------------+----------------------------------+
Create the API endpoints for the Image service
[root@controller ~]# openstack endpoint create --region RegionOne \
> image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 40b0c5637a20483b97e91907a966021a |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0783cb66f29049fcb4a2c9c0789aeb14 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 36a2e7acde264897983487c2efaf6b6f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0783cb66f29049fcb4a2c9c0789aeb14 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d18ac764e04d4b1d973b83e85349c63b |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0783cb66f29049fcb4a2c9c0789aeb14 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# yum install openstack-glance -y
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:glance@controller/glance /configure database access (this option belongs in the [database] section)
[keystone_authtoken] /connect to Keystone for authentication and authorization
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http /in the [glance_store] section, configure the local file system store and the location of image files
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
The same database and Keystone settings go into /etc/glance/glance-registry.conf as well:
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance /populate the database as the glance user
[root@controller ~]# systemctl enable --now openstack-glance-api.service openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@controller ~]# netstat -tnlp|grep 9292
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 17924/python2
Verify operation
Download the source image:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload it:
[root@controller ~]# openstack image create "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --public /make the image publicly visible
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-07-16T05:52:49Z |
| disk_format | qcow2 |
| file | /v2/images/5b9aae7b-761b-4551-8a5d-e569b63fc070/file |
| id | 5b9aae7b-761b-4551-8a5d-e569b63fc070 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 5bd822f5700d4153bc85403de17b35db |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-07-16T05:52:49Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
[root@controller ~]# cd /var/lib/glance/images/
[root@controller images]# ls
5b9aae7b-761b-4551-8a5d-e569b63fc070
[root@controller images]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 5b9aae7b-761b-4551-8a5d-e569b63fc070 | cirros | active |
+--------------------------------------+--------+--------+
Success.
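The checksum field in the upload output above is the MD5 digest of the uploaded file, so a downloaded copy can be verified against it. A minimal sketch, using in-memory bytes in place of the real image file:

```python
import hashlib

# Verify a downloaded image against the `checksum` field Glance
# reports (an MD5 digest of the file contents).
def image_checksum(data):
    return hashlib.md5(data).hexdigest()

data = b"pretend these bytes are cirros-0.3.4-x86_64-disk.img"
expected = image_checksum(data)  # in practice: the checksum from the API
assert image_checksum(data) == expected
print("checksum OK:", expected)
```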
The Nova compute service
The OpenStack Compute service consists of the following components:
- nova-api service: accepts and responds to end-user compute API calls. It supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions. It enforces some policies and initiates most orchestration activities, such as launching an instance.
- nova-api-metadata service: accepts metadata requests from instances.
...
Install and configure the controller node
Prerequisites
[root@controller ~]# mysql -uroot -p
Enter password:
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
-> IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
-> IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
-> IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
[root@controller ~]# mysql -unova -p nova
Enter password:
MariaDB [nova]>
[root@controller ~]# mysql -unova -p nova_api
Enter password:
MariaDB [nova_api]> /it is enough that we can log in
Create the service credentials:
[root@controller ~]# openstack user create --domain default \
> --password nova nova /create the nova user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | e9821d5458e241fc8c7938a8999f6176 |
| name | nova |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user nova admin /add the admin role to the nova user
[root@controller ~]# openstack service create --name nova \
> --description "OpenStack Compute" compute /create the nova service entity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 93792920eb764f3c8515e2532e5ab48e |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Create the Compute service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | ef90b4491aa9434b9ffd50f1c6b23fee |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 93792920eb764f3c8515e2532e5ab48e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 36adfeaaf2454ee39977af8e70db4368 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 93792920eb764f3c8515e2532e5ab48e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 1886fea1ccdc47e6898f1c98ea2640e0 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 93792920eb764f3c8515e2532e5ab48e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
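The three endpoint-create commands above differ only in the interface name, so they can be driven by a loop. The sketch below only prints the commands as a dry run; removing the `echo` would execute them, which requires the admin credentials to be sourced on a real controller.

```shell
# Build the three Compute endpoint-create commands (public/internal/admin).
# Printed with `echo` as a dry run; drop the echo to actually run them
# (requires admin credentials sourced first).
url='http://controller:8774/v2.1/%(tenant_id)s'
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne \
       compute "$iface" "$url"
done
```

The same pattern applies to the Networking endpoints created later in this guide.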
Install and configure the components
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata / enable only the compute and metadata APIs
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.25.254.1
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver / enable the Networking service and disable nova's built-in firewall
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database] / configure database access
connection = mysql+pymysql://nova:nova@controller/nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack / configure RabbitMQ message queue access
rabbit_password = openstack
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service / configure Identity service access
username = nova
password = nova
[vnc] / configure the VNC proxy to use the controller node's management IP
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance] / configure the location of the Image service API
api_servers = http://controller:9292
Configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
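With this many keys it is easy to miss one. A quick grep-based check can catch that; the sketch below writes a small sample to a hypothetical temp path so it can run anywhere — on a real controller, point `conf` at /etc/nova/nova.conf instead.

```shell
# Verify that the required keys from the sections above are present.
# Uses a hypothetical sample file so the check runs anywhere; on the
# controller, set conf=/etc/nova/nova.conf instead.
conf=/tmp/nova-sample.conf
cat > "$conf" <<'EOF'
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.25.254.1
use_neutron = True
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
EOF
for key in enabled_apis rpc_backend auth_strategy my_ip use_neutron lock_path; do
  grep -q "^$key" "$conf" && echo "$key: ok" || echo "$key: MISSING"
done
```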
Populate the Compute databases as the nova user:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@controller ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2020-07-16T07:17:52.000000 |
| 2 | nova-conductor | controller | internal | enabled | up | 2020-07-16T07:17:52.000000 |
| 3 | nova-scheduler | controller | internal | enabled | up | 2020-07-16T07:17:52.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
The compute services are running normally.
Install and configure a compute node
We start another virtual machine to act as the compute node. Note that its CPU mode must be set to host-passthrough; otherwise you will have to configure nova to use qemu later.
[root@compute1 ~]# vim /etc/hosts
[root@compute1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.67 rhel7host
172.25.254.1 controller
172.25.254.2 compute1
172.25.254.3 block1 / set up local name resolution
[root@controller ~]# scp /etc/yum.repos.d/openstack.repo compute1:/etc/yum.repos.d/ / configure the yum repository
[root@compute1 ~]# yum upgrade
[root@compute1 ~]# yum install chrony -y
[root@compute1 ~]# vim /etc/chrony.conf
server 172.25.254.67 iburst
[root@compute1 ~]# systemctl restart chronyd
Install and configure the components:
[root@compute1 ~]# yum install openstack-nova-compute -y
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.25.254.2
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Finalize the installation:
[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
2 / confirms this compute node supports hardware acceleration for virtual machines; we have two CPUs
[root@compute1 ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type=kvm / use KVM virtualization; if the CPU mode is not host-passthrough, set qemu here instead
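The kvm-versus-qemu decision above can be made automatically from /proc/cpuinfo. A minimal sketch — it only prints the value to put in the [libvirt] section, it does not edit nova.conf:

```shell
# Decide virt_type for /etc/nova/nova.conf [libvirt]: kvm when the CPU
# exposes the vmx (Intel) or svm (AMD) flags, qemu otherwise.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
  virt=kvm
else
  virt=qemu
fi
echo "virt_type=$virt"
```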
[root@compute1 ~]# systemctl enable --now libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
libvirtd.service provides a virtualization interface that makes management easier.
[root@compute1 ~]# yum install -y libvirt-client / install the client so we can inspect the instances created later
[root@compute1 ~]# virsh list
Id Name State
---------------------------------------------------- / no instances yet
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2020-07-16T08:42:54.000000 |
| 2 | nova-conductor | controller | internal | enabled | up | 2020-07-16T08:42:54.000000 |
| 3 | nova-scheduler | controller | internal | enabled | up | 2020-07-16T08:42:54.000000 |
| 6 | nova-compute | compute1 | nova | enabled | up | 2020-07-16T08:42:58.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
The compute node registered successfully.
Networking service
It includes the following components:
neutron-server
Receives and routes API requests to the appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents
Plug and unplug ports, create networks and subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technology used; OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and VMware NSX products.
The common agents are the L3 (layer 3) agent, the DHCP (dynamic host IP addressing) agent, and a plug-in agent.
Message queue
Used by most OpenStack Networking installations to route information between neutron-server and the various agents. It also serves as a database for certain plug-ins to store networking state.
OpenStack Networking mainly interacts with OpenStack Compute to provide network connectivity to its instances.
Install and configure the controller node
Create the database:
[root@controller ~]# mysql -u root -p
Enter password:
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
-> IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
-> IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> Ctrl-C -- exit!
Aborted
[root@controller ~]# mysql -u neutron -p neutron
Enter password:
MariaDB [neutron]> / being able to log in is enough
Create the service credentials:
[root@controller ~]# openstack user create --domain default --password neutron neutron
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | c80fd98d2b4b4946839e90c6a4eea979 |
| enabled | True |
| id | 9ae043c57c6e4cef9b01960fe349db53 |
| name | neutron |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user neutron admin / add the admin role to the neutron user
Create the neutron service entity:
[root@controller ~]# openstack service create --name neutron \
> --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 0a299286098c497fb753d989c3534377 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
Create the Networking service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
> network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | db64468bf6ac448ba1f22ac183ff393e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0a299286098c497fb753d989c3534377 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0e1d98aa999d41a49f4c2cf12bc9881b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0a299286098c497fb753d989c3534377 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | fe1f53ef8e3b4c1bad3d2a5f426b78cf |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0a299286098c497fb753d989c3534377 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
Next we set up provider (public) networking, which places all instances on the same network segment:
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = / enable the ML2 plug-in and disable additional plug-ins
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron / configure database access
[oslo_messaging_rabbit]
rabbit_host = controller / configure RabbitMQ message queue access
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken] / configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_type = password / configure Networking to notify Compute of network topology changes
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp / configure the lock path
Configure the Modular Layer 2 (ML2) plug-in:
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan / enable flat and VLAN networks
tenant_network_types = / disable self-service (private) networks
mechanism_drivers = linuxbridge / enable the Linux bridge mechanism
extension_drivers = port_security / enable the port security extension driver
[ml2_type_flat]
flat_networks = provider / configure the provider virtual network as a flat network
[securitygroup]
enable_ipset = True / enable ipset to make security group rules more efficient; our second NIC (the one without an IP) will be mapped in for VM network access
Configure the Linux bridge agent:
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0 / map the provider virtual network to the physical network interface
[securitygroup] / enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False / disable VXLAN overlay networks
Configure the DHCP agent, which dynamically assigns IP addresses to instances:
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
This configures the Linux bridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata so that instances on the provider network can reach metadata over the network.
Configure the metadata agent:
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = westos / configure the metadata host and set the shared secret
Configure the Compute service to use Networking, enable the metadata proxy, and set the shared secret:
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = westos
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symlink does not exist, create it with the following command:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Because we changed its configuration file, restart the Compute API service:
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 05dadf42-2c97-41ab-b33c-727919d7e7f7 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
| d0837e5a-0351-408c-9bb8-1b808beb0e73 | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
| d2a9b415-d227-4815-b449-a43e1d39b9fe | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
All agents started normally.
Install and configure the compute node
[root@compute1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
[root@compute1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the provider (public) network option:
Configure the Linux bridge agent:
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure Compute on the compute node to use Networking:
[root@compute1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
[root@compute1 ~]# systemctl enable --now neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 05dadf42-2c97-41ab-b33c-727919d7e7f7 | Metadata agent | controller | | :-) | True | neutron-metadata-agent |
| 753f0cca-82ee-44f5-a77e-856924003134 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent |
| d0837e5a-0351-408c-9bb8-1b808beb0e73 | DHCP agent | controller | nova | :-) | True | neutron-dhcp-agent |
| d2a9b415-d227-4815-b449-a43e1d39b9fe | Linux bridge agent | controller | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
The compute node's agent has been discovered as well. Your OpenStack environment now includes the core components necessary to launch a basic instance. Next, we can boot an instance.
Launch an instance
Create the virtual network
We have been configuring provider (public) networking throughout, so we now create a provider network rather than a self-service (private) one.
[root@controller ~]# neutron net-create --shared --provider:physical_network provider \
> --provider:network_type flat provider
Created a new network: / create the network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-07-16T09:54:53 |
| description | |
| id | 411727ab-294b-4e20-89ce-de39122f2cf6 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 5bd822f5700d4153bc85403de17b35db |
| updated_at | 2020-07-16T09:54:53 |
+---------------------------+--------------------------------------+
Create a subnet on the network:
[root@controller ~]# neutron subnet-create --name provider \
> --allocation-pool start=172.25.254.100,end=172.25.254.200 \
> --dns-nameserver 114.114.114.114 --gateway 172.25.254.67 \
> provider 172.25.254.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "172.25.254.100", "end": "172.25.254.200"} |
| cidr | 172.25.254.0/24 |
| created_at | 2020-07-16T09:58:35 |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 172.25.254.67 |
| host_routes | |
| id | 8121a48b-406d-4eac-8955-36d53b30e8c7 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | 411727ab-294b-4e20-89ce-de39122f2cf6 |
| subnetpool_id | |
| tenant_id | 5bd822f5700d4153bc85403de17b35db |
| updated_at | 2020-07-16T09:58:35 |
+-------------------+------------------------------------------------------+
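A quick sanity check on the pool defined above: the range .100 to .200 inside 172.25.254.0/24 leaves this many assignable addresses for instances (the gateway .67 and the rest of the /24 stay outside the pool).

```shell
# Count the assignable addresses in the allocation pool
# 172.25.254.100 - 172.25.254.200 (both ends inclusive).
start=100
end=200
echo "pool size: $((end - start + 1)) addresses"
```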
Create the m1.nano flavor
The smallest default flavor requires 512 MB of memory. For environments whose compute nodes have less than 4 GB of memory, we recommend creating an m1.nano flavor that needs only 64 MB. For testing purposes only, use the m1.nano flavor to boot the CirrOS image:
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
Generate a key pair
Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.
[root@controller ~]# source demo-openrc
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | e0:c6:84:35:88:b2:08:c4:3a:84:85:09:4c:d6:62:a5 |
| name | mykey |
| user_id | aa9345f4858145ca8a4e295f63a44647 |
+-------------+-------------------------------------------------+
[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | e0:c6:84:35:88:b2:08:c4:3a:84:85:09:4c:d6:62:a5 |
+-------+-------------------------------------------------+
This generates a key pair and uploads the public key to OpenStack under the name mykey, enabling passwordless access to instances.
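The key generation can also be done fully non-interactively. A sketch below writes to a throwaway path so an existing ~/.ssh/id_rsa is not overwritten; the final upload step is shown as a comment, since it needs a running controller with demo-openrc sourced.

```shell
# Generate an RSA key pair non-interactively at a hypothetical
# throwaway path (so the real ~/.ssh/id_rsa is untouched).
key=/tmp/demo-key
rm -f "$key" "$key.pub"
ssh-keygen -q -t rsa -N "" -f "$key"
# The public key is a single "ssh-rsa ..." line:
head -c 7 "$key.pub" && echo
# Upload step (requires demo-openrc sourced on the controller):
# openstack keypair create --public-key "$key.pub" mykey
```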
Add security group rules
Permit ICMP (ping) access to the instances from outside:
[root@controller ~]# openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 6c3c4512-ac1f-4474-8b7d-9c76162f6b0d |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | e079876a-a035-48b2-8681-7a6e8afd9c5b |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
Permit secure shell (SSH) access:
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 8cbdd408-d1e1-4df1-a4ea-20f8a87a104f |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | e079876a-a035-48b2-8681-7a6e8afd9c5b |
| port_range | 22:22 |
| remote_security_group | |
+-----------------------+--------------------------------------+
Launch an instance
A flavor specifies a virtual resource allocation profile, including processors, memory, and storage.
List available flavors:
[root@controller ~]# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True | / the flavor we defined
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
List available images:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 5b9aae7b-761b-4551-8a5d-e569b63fc070 | cirros | active |
+--------------------------------------+--------+--------+
List available networks:
[root@controller ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 411727ab-294b-4e20-89ce-de39122f2cf6 | provider | 8121a48b-406d-4eac-8955-36d53b30e8c7 |
+--------------------------------------+----------+--------------------------------------+
List available security groups:
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| e079876a-a035-48b2-8681-7a6e8afd9c5b | default | Default security group | cdc49f2f64d24621b0f54bcc72b6b5df |
+--------------------------------------+---------+------------------------+----------------------------------+
Create the instance:
[root@controller ~]# openstack server create --flavor m1.nano --image cirros \
> --nic net-id=411727ab-294b-4e20-89ce-de39122f2cf6 --security-group default \
> --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Field | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | G2dqfrhJEC87 |
| config_drive | |
| created | 2020-07-16T10:13:40Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 9face239-809c-4852-994c-04440a6fb699 |
| image | cirros (5b9aae7b-761b-4551-8a5d-e569b63fc070) |
| key_name | mykey |
| name | provider-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | cdc49f2f64d24621b0f54bcc72b6b5df |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2020-07-16T10:13:40Z |
| user_id | aa9345f4858145ca8a4e295f63a44647 |
+--------------------------------------+-----------------------------------------------+
openstack server create
--flavor m1.nano / use the m1.nano flavor
--image cirros / use the cirros image
--nic net-id=411727ab-294b-4e20-89ce-de39122f2cf6 / the network
--security-group default / the security group
--key-name mykey / the key pair
provider-instance / the instance name
[root@controller ~]# openstack server list / check the instance status
+--------------------------------------+-------------------+--------+-------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-------------------+--------+-------------------------+
| 9face239-809c-4852-994c-04440a6fb699 | provider-instance | ACTIVE | provider=172.25.254.101 |
+--------------------------------------+-------------------+--------+-------------------------+
[root@controller ~]# openstack console url show provider-instance / get the console URL for the instance
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=575f8909-b1af-4bb0-8307-a08ea732128d |
+-------+---------------------------------------------------------------------------------+
[root@rhel7host Downloads]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.1 server1 controller / set up name resolution on the physical host
Open the URL we just obtained:
This is our cloud instance; the console prints the default user name and password right on the screen.
The IP it received is 172.25.254.101; .100 was already taken by a port on the network.
[root@controller ~]# ping 172.25.254.101 / ping works
PING 172.25.254.101 (172.25.254.101) 56(84) bytes of data.
64 bytes from 172.25.254.101: icmp_seq=1 ttl=64 time=0.574 ms
[root@controller ~]# ssh root@172.25.254.101
The authenticity of host '172.25.254.101 (172.25.254.101)' can't be established.
RSA key fingerprint is SHA256:Y8kKCEqRI7dE/w0kvj+IxaEUyD18KbITuCPHkINMaSg.
RSA key fingerprint is MD5:1f:f4:8b:b4:76:48:9c:6f:3c:cc:ea:74:95:c5:9f:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.254.101' (RSA) to the list of known hosts.
Please login as 'cirros' user, not as root / direct root login is not allowed
^CConnection to 172.25.254.101 closed.
[root@controller ~]# ssh cirros@172.25.254.101
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:20:55:e5 brd ff:ff:ff:ff:ff:ff
inet 172.25.254.101/24 brd 172.25.254.255 scope global eth0
inet6 fe80::f816:3eff:fe20:55e5/64 scope link
valid_lft forever preferred_lft forever
$ poweroff
poweroff: Operation not permitted
$ sudo poweroff / the superuser can power off
[root@controller ~]# source demo-openrc / a new shell must load the demo credentials again
[root@controller ~]# openstack server list
+--------------------------------------+-------------------+---------+-------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-------------------+---------+-------------------------+
| 9face239-809c-4852-994c-04440a6fb699 | provider-instance | SHUTOFF | provider=172.25.254.101 |
+--------------------------------------+-------------------+---------+-------------------------+
[root@controller ~]# openstack server start provider-instance
[root@controller ~]# openstack server list
+--------------------------------------+-------------------+--------+-------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-------------------+--------+-------------------------+
| 9face239-809c-4852-994c-04440a6fb699 | provider-instance | ACTIVE | provider=172.25.254.101 |
+--------------------------------------+-------------------+--------+-------------------------+
The instance runs on the compute1 host:
[root@compute1 ~]# virsh list
Id Name State
----------------------------------------------------
2 instance-00000001 running
Under the hood, it is simply a virtual machine using KVM virtualization.