OpenStack Single-Node Deployment (Ocata)
1: Single-Node Topology
External network: 192.168.1.0/24
Internal network: 172.16.1.0/24
2: Lab Environment Preparation
2.1: Create VMs and Install the OS
Use CentOS 7.2.
Pass the kernel parameters net.ifnames=0 biosdevname=0 at install time so the installed system uses standardized NIC names (eth*).
2.2: Initial System Preparation
2.2.1: Basic initialization
See CentOS系统初始化.md.
2.2.2: Configure host name resolution
- The nodes communicate over the internal network, so resolve host names to internal addresses:
]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.101 node101.yqc.com node101
172.16.1.102 node102.yqc.com node102
172.16.1.103 node103.yqc.com node103
172.16.1.104 node104.yqc.com node104
2.3: Configure VM Networking
2.3.1: OpenStack controller network configuration
bond0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="192.168.1.101"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO="none"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
eth1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO="none"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
bond1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond1
BOOTPROTO="none"
NAME="bond1"
DEVICE="bond1"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="172.16.1.101"
PREFIX="24"
eth2 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
BOOTPROTO="none"
NAME="eth2"
DEVICE="eth2"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
eth3 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth3
BOOTPROTO="none"
NAME="eth3"
DEVICE="eth3"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
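The four slave configurations above differ only in the device name and the bond they join. As a sketch, they can be generated in one loop; OUTDIR is a hypothetical target directory used here for illustration, and would be /etc/sysconfig/network-scripts on a real host:

```shell
#!/bin/sh
# Sketch: generate the four slave ifcfg files from one loop.
# OUTDIR is an assumption; use /etc/sysconfig/network-scripts on a real host.
OUTDIR=${OUTDIR:-.}
for pair in eth0:bond0 eth1:bond0 eth2:bond1 eth3:bond1; do
    slave=${pair%%:*}      # NIC name, e.g. eth0
    master=${pair##*:}     # bond it joins, e.g. bond0
    cat > "$OUTDIR/ifcfg-$slave" <<EOF
BOOTPROTO="none"
NAME="$slave"
DEVICE="$slave"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="$master"
USERCTL="no"
SLAVE="yes"
EOF
done
```

Review the generated files before restarting the network service, since a typo in a slave config can take the bond down.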
Restart the network service and verify
]# systemctl restart network
]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 192.168.1.101 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fefd:eee3 prefixlen 64 scopeid 0x20<link>
inet6 240e:324:79e:f400:20c:29ff:fefd:eee3 prefixlen 64 scopeid 0x0<global>
ether 00:0c:29:fd:ee:e3 txqueuelen 0 (Ethernet)
RX packets 44 bytes 4362 (4.2 KiB)
RX errors 0 dropped 13 overruns 0 frame 0
TX packets 33 bytes 3156 (3.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond1: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 172.16.1.101 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::20c:29ff:fefd:eef7 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:fd:ee:f7 txqueuelen 0 (Ethernet)
RX packets 6 bytes 726 (726.0 B)
RX errors 0 dropped 5 overruns 0 frame 0
TX packets 10 bytes 748 (748.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:fd:ee:e3 txqueuelen 1000 (Ethernet)
RX packets 3591 bytes 357562 (349.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1114 bytes 182969 (178.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:fd:ee:e3 txqueuelen 1000 (Ethernet)
RX packets 13 bytes 1231 (1.2 KiB)
RX errors 0 dropped 13 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:fd:ee:f7 txqueuelen 1000 (Ethernet)
RX packets 1 bytes 243 (243.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10 bytes 748 (748.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:fd:ee:f7 txqueuelen 1000 (Ethernet)
RX packets 5 bytes 483 (483.0 B)
RX errors 0 dropped 5 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 4 bytes 208 (208.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4 bytes 208 (208.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# Verify external connectivity;
]# ping www.baidu.com
PING www.a.shifen.com (180.101.49.11) 56(84) bytes of data.
64 bytes from 180.101.49.11: icmp_seq=1 ttl=52 time=36.2 ms
64 bytes from 180.101.49.11: icmp_seq=2 ttl=52 time=36.2 ms
2.3.2: Infrastructure services node network configuration
bond0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="172.16.1.102"
PREFIX="24"
eth0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO="none"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
eth1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO="none"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
]# systemctl restart network
]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 172.16.1.102 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::20c:29ff:feb3:c6d8 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:b3:c6:d8 txqueuelen 0 (Ethernet)
RX packets 311 bytes 31282 (30.5 KiB)
RX errors 0 dropped 8 overruns 0 frame 0
TX packets 192 bytes 32376 (31.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:b3:c6:d8 txqueuelen 1000 (Ethernet)
RX packets 303 bytes 30611 (29.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 204 bytes 33264 (32.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:b3:c6:d8 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 671 (671.0 B)
RX errors 0 dropped 8 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# Verify internal connectivity;
]# ping 172.16.1.101
PING 172.16.1.101 (172.16.1.101) 56(84) bytes of data.
64 bytes from 172.16.1.101: icmp_seq=1 ttl=64 time=3.74 ms
64 bytes from 172.16.1.101: icmp_seq=2 ttl=64 time=0.604 ms
2.3.3: OpenStack compute node 1 network configuration
bond0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="192.168.1.103"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO="none"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
eth1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO="none"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
bond1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond1
BOOTPROTO="none"
NAME="bond1"
DEVICE="bond1"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="172.16.1.103"
PREFIX="24"
eth2 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
BOOTPROTO="none"
NAME="eth2"
DEVICE="eth2"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
eth3 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth3
BOOTPROTO="none"
NAME="eth3"
DEVICE="eth3"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
Restart the network service and verify
]# systemctl restart network
]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 192.168.1.103 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fea9:a679 prefixlen 64 scopeid 0x20<link>
inet6 240e:324:79e:f400:20c:29ff:fea9:a679 prefixlen 64 scopeid 0x0<global>
ether 00:0c:29:a9:a6:79 txqueuelen 0 (Ethernet)
RX packets 44 bytes 4094 (3.9 KiB)
RX errors 0 dropped 17 overruns 0 frame 0
TX packets 28 bytes 2546 (2.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
bond1: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 172.16.1.103 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::20c:29ff:fea9:a68d prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:a9:a6:8d txqueuelen 0 (Ethernet)
RX packets 4 bytes 240 (240.0 B)
RX errors 0 dropped 4 overruns 0 frame 0
TX packets 11 bytes 818 (818.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:a9:a6:79 txqueuelen 1000 (Ethernet)
RX packets 5089 bytes 523314 (511.0 KiB)
RX errors 0 dropped 1 overruns 0 frame 0
TX packets 903 bytes 143417 (140.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:a9:a6:79 txqueuelen 1000 (Ethernet)
RX packets 16 bytes 1392 (1.3 KiB)
RX errors 0 dropped 16 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:a9:a6:8d txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 818 (818.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0c:29:a9:a6:8d txqueuelen 1000 (Ethernet)
RX packets 4 bytes 240 (240.0 B)
RX errors 0 dropped 4 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 6 bytes 312 (312.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 312 (312.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# Verify external connectivity;
]# ping www.baidu.com
PING www.a.shifen.com (180.101.49.11) 56(84) bytes of data.
64 bytes from 180.101.49.11: icmp_seq=1 ttl=52 time=100 ms
64 bytes from 180.101.49.11: icmp_seq=2 ttl=52 time=36.2 ms
# Verify internal connectivity;
2.3.4: OpenStack compute node 2 network configuration
bond0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="192.168.1.104"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth0 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO="none"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
eth1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO="none"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond0"
USERCTL="no"
SLAVE="yes"
bond1 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-bond1
BOOTPROTO="none"
NAME="bond1"
DEVICE="bond1"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="172.16.1.104"
PREFIX="24"
eth2 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
BOOTPROTO="none"
NAME="eth2"
DEVICE="eth2"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
eth3 configuration
]# vim /etc/sysconfig/network-scripts/ifcfg-eth3
BOOTPROTO="none"
NAME="eth3"
DEVICE="eth3"
ONBOOT="yes"
NM_CONTROLLED="no"
MASTER="bond1"
USERCTL="no"
SLAVE="yes"
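The bond configurations in sections 2.3.1, 2.3.3, and 2.3.4 are identical except for the last octet of the address. A sketch that renders ifcfg-bond0 and ifcfg-bond1 for one node; HOST_ID and OUTDIR are assumptions for illustration (HOST_ID would be 101, 103, or 104, and OUTDIR would be /etc/sysconfig/network-scripts on a real host):

```shell
#!/bin/sh
# Sketch: render the two bond configs for one node; only the last
# octet (HOST_ID) changes between node101, node103, and node104.
HOST_ID=${HOST_ID:-101}
OUTDIR=${OUTDIR:-.}

# bond0: external network, carries the default gateway and DNS
cat > "$OUTDIR/ifcfg-bond0" <<EOF
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="192.168.1.$HOST_ID"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
EOF

# bond1: internal network, deliberately has no gateway
cat > "$OUTDIR/ifcfg-bond1" <<EOF
BOOTPROTO="none"
NAME="bond1"
DEVICE="bond1"
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR="172.16.1.$HOST_ID"
PREFIX="24"
EOF
```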
Restart the network service and verify
]# systemctl restart network
# Verify external connectivity;
]# ping www.baidu.com
# Verify internal connectivity;
2.3.5: Verify name resolution
Ping the other three servers' domain and host names from each server; using node101 as an example:
]# ping node102
PING node102.yqc.com (172.16.1.102) 56(84) bytes of data.
64 bytes from node102.yqc.com (172.16.1.102): icmp_seq=1 ttl=64 time=0.898 ms
64 bytes from node102.yqc.com (172.16.1.102): icmp_seq=2 ttl=64 time=0.824 ms
]# ping node103
PING node103.yqc.com (192.168.1.103) 56(84) bytes of data.
64 bytes from 192.168.1.103: icmp_seq=1 ttl=64 time=0.543 ms
64 bytes from 192.168.1.103: icmp_seq=2 ttl=64 time=1.54 ms
]# ping node104
PING node104.yqc.com (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=64 time=0.592 ms
2.3.6: Manually sync NTP
Although a crontab job to sync NTP automatically was already set up during system initialization, manually sync NTP once before installing OpenStack, in case the job interval has left the servers out of sync.
Also, since there is no NTP server on the internal network, the infrastructure services node syncs time from node101.
Configure node101 as the internal NTP server
]# yum install chrony -y
]# vim /etc/chrony.conf
# Configure node101 to sync time from the external NTP server 192.168.1.254;
server 192.168.1.254 iburst
# Allow the internal subnet 172.16.1.0/24 to sync time from node101;
allow 172.16.1.0/24
# Start the chronyd service;
]# systemctl start chronyd
]# systemctl enable chronyd
Configure the NTP cron job on the other nodes
# Manual sync;
]# ntpdate 172.16.1.101
29 Oct 21:26:20 ntpdate[9636]: step time server 172.16.1.101 offset 4502.960252 sec
# Add the crontab entry;
]# echo "*/30 * * * * /usr/sbin/ntpdate 172.16.1.101 && /usr/sbin/hwclock -w" > /var/spool/cron/root
]# crontab -l
*/30 * * * * /usr/sbin/ntpdate 172.16.1.101 && /usr/sbin/hwclock -w
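Since chrony is already running on node101, an alternative to the ntpdate cron job would be to run chronyd on the other nodes pointed at node101. A sketch of the client configuration (not what this guide uses):

```
# /etc/chrony.conf on node102-node104 (sketch)
server 172.16.1.101 iburst
```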
At this point the base environment for the single-node OpenStack build is ready.
3: OpenStack Environment Preparation
3.1: Install MariaDB on the Infrastructure Services Node
- For easier installation, switch node102's network to the external network first; after the yum installation completes, switch it back to the internal address:
node102's NIC must first be switched to bridged mode;
]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO="none"
NAME="bond0"
DEVICE="bond0"
ONBOOT="yes"
BONDING_MASTER="yes"
BONDING_OPTS="mode=1 miimon=100"
IPADDR="192.168.1.102"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
]# systemctl restart network
3.1.1: Install mariadb-server
]# yum install mariadb mariadb-server -y
3.1.2: Edit the main configuration file
]# cp /etc/my.cnf /etc/my.cnf.bak
]# vim /etc/my.cnf
[mysqld]
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
datadir=/data/mysql
innodb_file_per_table=1
# skip-grant-tables
relay-log=/data/mysql
server-id=10
log-error=/data/mysql-log/mysql_error.log
log-bin=/data/mysql-binlog/master-log
# general_log=ON
# general_log_file=/data/general_mysql.log
long_query_time=5
slow_query_log=1
slow_query_log_file=/data/mysql-log/slow_mysql.log
max_connections=1000
bind-address=172.16.1.102
[client]
port=3306
socket=/var/lib/mysql/mysql.sock
[mysqld_safe]
log-error=/data/mysql-log/mysqld_safe.log
pid-file=/var/run/mariadb/mariadb.pid
3.1.3: Edit the OpenStack custom configuration file
]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
# Listen address
bind-address = 172.16.1.102
# Default storage engine
default-storage-engine = innodb
# One tablespace file per table
innodb_file_per_table = on
# Maximum connections
max_connections = 4096
# Case-insensitive collation
collation-server = utf8_general_ci
# Character set
character-set-server = utf8
3.1.4: Create the data directories and set ownership
]# mkdir -pv /data/{mysql,mysql-log,mysql-binlog}
]# chown mysql:mysql /data/mysql* -R
3.1.5: Start MariaDB and verify the port
]# systemctl start mariadb
]# systemctl enable mariadb
]# ss -tnl
3.1.6: Run the secure installation
]# mysql_secure_installation
3.2: Install RabbitMQ on the Infrastructure Services Node
3.2.1: Configure the EPEL yum repository
Installing RabbitMQ requires the EPEL repository.
]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
]# yum clean all
]# yum makecache
3.2.2: Install rabbitmq-server
]# yum install rabbitmq-server httpd -y
3.2.3: Start RabbitMQ and the web management plugin
]# systemctl start rabbitmq-server httpd
]# systemctl enable rabbitmq-server httpd
]# rabbitmq-plugins enable rabbitmq_management
]# ss -tnl
]# rabbitmq-plugins list
With the web plugin enabled, RabbitMQ listens on three ports: 5672, 15672, and 25672;
3.2.4: Add the openstack user to RabbitMQ and grant permissions
]# rabbitmqctl add_user openstack 123456
]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
RabbitMQ ships with a default guest/guest user;
3.3: Install Memcached on the Infrastructure Services Node
3.3.1: Install memcached
]# yum install memcached -y
3.3.2: Edit the configuration file
]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="512"
OPTIONS="-l 172.16.1.102"
3.3.3: Start Memcached and verify
]# systemctl start memcached
]# systemctl enable memcached
]# ss -tnl
Memcached's default port is 11211;
3.4: Prepare the Controller and Compute Nodes
3.4.1: Install the connection modules on the controller
]# yum install mariadb python2-PyMySQL python-memcached -y
python2-PyMySQL is the Python module for connecting to MySQL;
python-memcached is the Python module for connecting to Memcached;
3.4.2: Install the Ocata yum repository on the controller and compute nodes
]# vim /etc/yum.repos.d/Openstack-Ocata.repo
[ocata]
name=Openstack-Ocata
baseurl=https://mirrors.aliyun.com/centos-vault/7.4.1708/cloud/x86_64/openstack-ocata/
gpgcheck=0
]# yum clean all
]# yum makecache
3.4.3: Install the OpenStack client on the controller and compute nodes
]# yum install python-openstackclient -y
3.4.4: Install the OpenStack SELinux management package on the controller and compute nodes
]# yum install openstack-selinux -y
4: Deploy the Keystone Identity Service
4.1: Prepare the Keystone Database
4.1.1: Create the database and grant the keystone user access
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
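The same create-and-grant pattern recurs for every service database (glance in section 5.1, and each later component). A sketch that writes the SQL for a list of services to a file; the service list and the shared password 123456 are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: write the CREATE/GRANT statements for each service database
# to service-dbs.sql; feed that file to `mysql -uroot -p` when ready.
PASS=123456
{
    for svc in keystone glance; do
        printf "CREATE DATABASE %s;\n" "$svc"
        printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%%' IDENTIFIED BY '%s';\n" \
            "$svc" "$svc" "$PASS"
    done
    printf "FLUSH PRIVILEGES;\n"
} > service-dbs.sql
```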
4.1.2: Verify remote database access
- Connect to the database remotely from the controller node101:
]# mysql -h172.16.1.102 -ukeystone -p
]# mysql -hnode102 -ukeystone -p
]# mysql -hnode102.yqc.com -ukeystone -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
4.2: Install and Configure Keystone
4.2.1: Install keystone on the controller
]# yum install openstack-keystone httpd mod_wsgi -y
4.2.2: Edit the keystone configuration file
- Generate a temporary token:
]# openssl rand -hex 10
48ed35f5a9afb2b6973c
- Edit /etc/keystone/keystone.conf:
]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 48ed35f5a9afb2b6973c
[database]
connection = mysql+pymysql://keystone:123456@node102.yqc.com/keystone
[token]
provider = fernet
4.2.3: Initialize the keystone database
- Initialize the database:
]# su -s /bin/sh -c "keystone-manage db_sync" keystone
- Verify the result:
]# mysql -hnode102 -ukeystone -p
MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone |
+------------------------+
| access_token |
| assignment |
| config_register |
| consumer |
| credential |
| endpoint |
| endpoint_group |
| federated_user |
| federation_protocol |
……
Keystone's log file is /var/log/keystone/keystone.log;
4.2.4: Initialize the Fernet keys
- Initialize the keys:
]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
- Verify:
]# ll /etc/keystone/fernet-keys/
total 8
-rw------- 1 keystone keystone 44 Oct 31 18:51 0
-rw------- 1 keystone keystone 44 Oct 31 18:51 1
The official documentation bootstraps the Identity service automatically with a single command. Don't run it on a first install; while OpenStack is still unfamiliar it's better to go through the steps by hand. The command also requires ADMIN_PASS, and at this point the admin password is not yet known;
The official bootstrap command is shown below:
Bootstrap the Identity service
]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
4.3: Configure the Apache Service
4.3.1: Edit the httpd main configuration file:
]# vim /etc/httpd/conf/httpd.conf
ServerName node101.yqc.com:80
4.3.2: Symlink wsgi-keystone.conf into the httpd configuration directory:
]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
4.3.3: Start httpd and verify:
]# systemctl start httpd
]# systemctl enable httpd
]# ss -tnl | egrep "(5000|80|35357)"
LISTEN 0 511 :::35357 :::*
LISTEN 0 511 :::5000 :::*
LISTEN 0 511 :::80 :::*
4.3.4: Set environment variables
These variables grant admin privileges for the commands that follow;
]# export OS_TOKEN=48ed35f5a9afb2b6973c
]# export OS_URL=http://node101.yqc.com:35357/v3
]# export OS_IDENTITY_API_VERSION=3
Be sure to set these environment variables before performing any of the operations below;
4.4: Create Domains, Projects, Users, and Roles
4.4.1: Create the domain
Syntax: openstack domain create --description "description" domain-name;
Create the default domain
]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 6917eaeda8b04ebe9dc41e023f5868ea |
| name | default |
+-------------+----------------------------------+
4.4.2: Create the projects
Syntax: openstack project create --domain domain-name --description "description" project-name;
Create the admin project
]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | bcee9729f8c8470eafea545466d5f855 |
| is_domain | False |
| name | admin |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
Create the service project
]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 37cd35560d4e4622a83673327b57bef7 |
| is_domain | False |
| name | service |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
Create the demo project
This project can be used for demos and testing;
]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 9cf63e46aed845879746d9b55eb0a965 |
| is_domain | False |
| name | demo |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
4.4.3: Create the users
Create the admin user
]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
| name | admin |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Create the demo user
]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 3705d5392dfd4907b226e37b53e39112 |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4.4.4: Create the roles
A project can contain multiple roles;
Currently only roles already defined in /etc/keystone/policy.json can be created;
Create the admin role
]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 2c9f9ca5a58f4b33be77e8fb7adc7e89 |
| name | admin |
+-----------+----------------------------------+
Create the user role
]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 70e281d91b524a888280bbfb58683c7b |
| name | user |
+-----------+----------------------------------+
4.4.5: Assign roles to users
- Grant the admin role to the admin user in the admin project:
]# openstack role add --project admin --user admin admin
- Grant the user role to the demo user in the demo project:
]# openstack role add --project demo --user demo user
4.5: Create the Service Component Users
4.5.1: The glance user
- Create the glance user:
]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 2ca24824fb8a41d083021766dbe55ad6 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- Add the glance user to the service project with the admin role:
]# openstack role add --project service --user glance admin
4.5.2: The nova user
- Create the nova user:
]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | e431251a86854294b2ebb32872c83ad6 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- Add the nova user to the service project with the admin role:
]# openstack role add --project service --user nova admin
4.5.3: The placement user
- Create the placement user:
]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 1e1f2bdd24ca4faab3304ed4fe574037 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- Add the placement user to the service project with the admin role:
]# openstack role add --project service --user placement admin
4.5.4: The neutron user
- Create the neutron user:
]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 9861e7b9516542dd8879d535c8ec76b1 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- Add the neutron user to the service project with the admin role:
]# openstack role add --project service --user neutron admin
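Sections 4.5.1 through 4.5.4 repeat the same two commands per component. As a sketch, the commands can be generated into a file for review; the service list is taken from this section, and the commands are printed rather than executed because --password-prompt is interactive:

```shell
#!/bin/sh
# Sketch: generate component-users.sh with the user-create and
# role-add commands for each service user; review it, then run it
# after sourcing the admin environment variables.
for svc in glance nova placement neutron; do
    echo "openstack user create --domain default --password-prompt $svc"
    echo "openstack role add --project service --user $svc admin"
done > component-users.sh
```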
4.6: Create and Register the Keystone Identity Service
Register the keystone service endpoints with OpenStack;
4.6.1: Create the keystone identity service
]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 63994bdfcca54de8a8da4218c0f523d7 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
]# openstack service list
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| 63994bdfcca54de8a8da4218c0f523d7 | keystone | identity |
+----------------------------------+----------+----------+
4.6.2: Create the endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne identity public http://node101.yqc.com:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7351f018a87344e48f44cec769014f10 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 63994bdfcca54de8a8da4218c0f523d7 |
| service_name | keystone |
| service_type | identity |
| url | http://node101.yqc.com:5000/v3 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne identity internal http://node101.yqc.com:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1d70ad2fdcfa420da1237f60d0993520 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 63994bdfcca54de8a8da4218c0f523d7 |
| service_name | keystone |
| service_type | identity |
| url | http://node101.yqc.com:5000/v3 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne identity admin http://node101.yqc.com:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0ea4e98a2f8a4f82b919bbfe98992986 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 63994bdfcca54de8a8da4218c0f523d7 |
| service_name | keystone |
| service_type | identity |
| url | http://node101.yqc.com:35357/v3 |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone | identity | True | admin | http://node101.yqc.com:35357/v3 |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone | identity | True | internal | http://node101.yqc.com:5000/v3 |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone | identity | True | public | http://node101.yqc.com:5000/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
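Every service registers the same three endpoint interfaces; for keystone, only the admin interface uses a different port. A sketch that generates the three commands into a file for review (node101.yqc.com matches this deployment):

```shell
#!/bin/sh
# Sketch: generate the three identity endpoint-create commands;
# public/internal use port 5000, admin uses 35357.
for spec in public:5000 internal:5000 admin:35357; do
    iface=${spec%%:*}    # interface type
    port=${spec##*:}     # matching port
    echo "openstack endpoint create --region RegionOne identity $iface http://node101.yqc.com:$port/v3"
done > identity-endpoints.sh
```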
4.7: Verify the Keystone Identity Service
Open a new terminal session for this verification, because the previous session has OS_TOKEN set;
- Set the OS_IDENTITY_API_VERSION variable:
]# export OS_IDENTITY_API_VERSION=3
4.7.1: Verify the admin user
- The admin user authenticates on port 35357:
]# openstack --os-auth-url http://node101.yqc.com:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:16:28+0000 |
| id | gAAAAABfoL18MNpyCH6AQ2IsgjBs0TdtHxWkBK10pXUDMdX22nqQxPjYBpEAzxyOT3JOmMfcpXx8ZR1TGvhuKPvI5IXUVOd3QmcbRMmUrrylhPTWk_ItEUqYeUUmsVI43IBe-_v5HVrE5WgHaNt- |
| | TCsKs0k-sgZeCEZL1xM6etUikERRSMoqVhc |
| project_id | bcee9729f8c8470eafea545466d5f855 |
| user_id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
4.7.2: Verify the demo user
- The demo user authenticates on port 5000:
]# openstack --os-auth-url http://node101.yqc.com:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Password:
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:25:36+0000 |
| id | gAAAAABfoL-gGJ02jBYOeX9_qsfK776Y4_lWqc6SjUF45rwMLi48CE3O7Okq9_oP6MAw0QCvn2jnAduleH3EZ- |
| | qmlE7hYWccNDN4goLMAIhKlyZwknb_cLe7AzfQm5HvM4W2OJEJxrDtJhsSamhyN4KPB6bcN_NYU-rVzGWOeipA0NJ8KEXNZbg |
| project_id | 9cf63e46aed845879746d9b55eb0a965 |
| user_id | 3705d5392dfd4907b226e37b53e39112 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
4.8: Set the Environment Variables with Scripts
4.8.1: The admin user script
- Create the script:
]# vim admin-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://node101.yqc.com:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
]# chmod a+x admin-ocata.sh
- Test the script:
If the script works, authentication succeeds without prompting for a password;
]# source admin-ocata.sh
]# openstack --os-auth-url http://node101.yqc.com:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:27:59+0000 |
| id | gAAAAABfoMAvSPY1dbMTCPeqxwqO9PSgjI1sgAywi7wfxsJmlj1dGRft24GYkmFbTQ6RGJ9QWXsHqWQClELOHMiXhBELNa3KkWTvhc5PljzS- |
| | U_0diHmUFeB5uFoMzj71ACaiPazKCijNYCvrGkl4I_n9oXJ80fDUtHThA4_10h2CNDZuDRhkDc |
| project_id | bcee9729f8c8470eafea545466d5f855 |
| user_id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
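These rc scripts all follow the same pattern, so a quick sanity check is to source the file and confirm that all of the OS_* variables are exported. A minimal sketch, using a throwaway copy of the script with the values from this document:

```shell
# Write a throwaway copy of the admin rc file and source it.
cat > /tmp/admin-ocata-test.sh <<'EOF'
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://node101.yqc.com:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
. /tmp/admin-ocata-test.sh
# All eight variables should now be visible in the environment.
env | grep -c '^OS_'
```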
4.8.2: demo User Script
- Create the script:
]# vim demo-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://node101.yqc.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
]# chmod a+x demo-ocata.sh
- Test the script:
]# source demo-ocata.sh
]# openstack --os-auth-url http://node101.yqc.com:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:29:05+0000 |
| id | gAAAAABfoMBxP1_MrsiddGcjmm8eIcyM8FNChDM_bB-HIMy4ltrZqZshctIOiQd_qUaPd5-GAHNzjGCS2ti7F0ODcq8aIN9uejBgeR5Qx-gHC67FJJSTX9qHpIn144ugvjxwhnrvz5kg0O05-- |
| | Vd6TGd8AmJ48UzVkn7qWIfFmye7cGR_V_tD8s |
| project_id | 9cf63e46aed845879746d9b55eb0a965 |
| user_id | 3705d5392dfd4907b226e37b53e39112 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
Part 5: Deploying the glance Image Service
5.1: glance Database Preparation
5.1.1: Create the Database and Grant Privileges to the glance User
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
5.1.2: Verify Remote Database Connection
- Connect to the database remotely from the controller node101:
]# mysql -h172.16.1.102 -uglance -p
]# mysql -hnode102 -uglance -p
]# mysql -hnode102.yqc.com -uglance -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| glance |
+--------------------+
5.2: Create and Register the glance Image Service
5.2.1: Set the admin User Environment Variables
]# source admin-ocata.sh
5.2.2: Create the glance Image Service
Note: if the glance service is created in a shell where the following variables were set manually earlier,
]# export OS_TOKEN=dfd1b9b42cdbfdaf028f
]# export OS_URL=http://node101.yqc.com:35357/v3
]# export OS_IDENTITY_API_VERSION=3
the command fails:
]# openstack service create --name glance --description "OpenStack Image" image
__init__() got an unexpected keyword argument 'user_domain_name'
This is most likely a conflict with the variables set by admin-ocata.sh; unset OS_TOKEN and OS_URL (or open a fresh shell) before running the command again:
]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | bd1616aed2b542bd8ddfdf58552c5e05 |
| name | glance |
| type | image |
+-------------+----------------------------------+
5.2.3: Create the Endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne image public http://node101.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | da2c7440ddda44a9a43de718e2b24e55 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd1616aed2b542bd8ddfdf58552c5e05 |
| service_name | glance |
| service_type | image |
| url | http://node101.yqc.com:9292 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne image internal http://node101.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 51fc58ff766d446aa8a3420babe85690 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd1616aed2b542bd8ddfdf58552c5e05 |
| service_name | glance |
| service_type | image |
| url | http://node101.yqc.com:9292 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne image admin http://node101.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85a7d40a19ef45a69b50a0e473481f1c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd1616aed2b542bd8ddfdf58552c5e05 |
| service_name | glance |
| service_type | image |
| url | http://node101.yqc.com:9292 |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone | identity | True | admin | http://node101.yqc.com:35357/v3 |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone | identity | True | internal | http://node101.yqc.com:5000/v3 |
| 51fc58ff766d446aa8a3420babe85690 | RegionOne | glance | image | True | internal | http://node101.yqc.com:9292 |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone | identity | True | public | http://node101.yqc.com:5000/v3 |
| 85a7d40a19ef45a69b50a0e473481f1c | RegionOne | glance | image | True | admin | http://node101.yqc.com:9292 |
| da2c7440ddda44a9a43de718e2b24e55 | RegionOne | glance | image | True | public | http://node101.yqc.com:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
5.3: Install and Configure glance
5.3.1: Install glance on the Controller
]# yum install -y openstack-glance
5.3.2: Edit the glance Configuration Files
- Edit /etc/glance/glance-api.conf:
]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123456@node102.yqc.com/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
- Edit /etc/glance/glance-registry.conf:
]# vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123456@node102.yqc.com/glance
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
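The `connection` lines above use the SQLAlchemy URL format `dialect+driver://user:password@host/dbname`. As a sketch, the pieces can be pulled apart with plain shell parameter expansion to double-check a value before writing it into the config:

```shell
# Split the SQLAlchemy connection URL used in glance-api.conf /
# glance-registry.conf into its components.
conn="mysql+pymysql://glance:123456@node102.yqc.com/glance"
rest="${conn#*://}"            # strip "mysql+pymysql://"
user="${rest%%:*}"             # text before the first ":"  -> glance
hostpart="${rest#*@}"          # text after the "@"
host="${hostpart%%/*}"         # text before the "/"        -> node102.yqc.com
db="${rest##*/}"               # text after the last "/"    -> glance
echo "user=$user host=$host db=$db"
```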
5.3.3: Initialize the glance Database
- Populate the database:
]# su -s /bin/sh -c "glance-manage db_sync" glance
- Verify the result:
]# mysql -hnode102 -uglance -p
MariaDB [(none)]> use glance;
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| alembic_version |
| artifact_blob_locations |
| artifact_blobs |
| artifact_dependencies |
| artifact_properties |
| artifact_tags |
| artifacts |
| image_locations |
……
5.3.4: Start glance and Verify the Ports
- Start glance and enable it at boot:
]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
]# systemctl start openstack-glance-api.service openstack-glance-registry.service
- Verify the ports:
]# ss -tnl | egrep '(9292|9191)'
LISTEN 0 4096 *:9292 *:*
LISTEN 0 4096 *:9191 *:*
glance-api listens on port 9292; glance-registry listens on port 9191.
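For scripted health checks, the two port numbers can be kept in a small lookup function. A sketch, with the port values verified above:

```shell
# Map a glance component name to its listening port.
glance_port() {
  case "$1" in
    api)      echo 9292 ;;
    registry) echo 9191 ;;
    *)        return 1 ;;
  esac
}
glance_port api        # prints 9292
glance_port registry   # prints 9191
```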
5.3.5: glance Logs
]# ll /var/log/glance/
total 8
-rw-r--r-- 1 glance glance 2698 Nov 3 11:11 api.log
-rw-r--r-- 1 glance glance 2100 Nov 3 11:11 registry.log
5.4: Verify the glance Image Service
5.4.1: Create an Image
- Download the cirros test image:
]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
- Create an image named cirros:
]# source admin-ocata.sh
]# openstack image create "cirros" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-11-03T06:02:34Z |
| disk_format | qcow2 |
| file | /v2/images/3dfd3361-7d85-4342-afcf-9532bcddd3d1/file |
| id | 3dfd3361-7d85-4342-afcf-9532bcddd3d1 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | bcee9729f8c8470eafea545466d5f855 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-11-03T06:02:34Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
5.4.2: Verify the Image
- List the images:
]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros |
+--------------------------------------+--------+
]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | active |
+--------------------------------------+--------+--------+
- Show the image details:
]# openstack image show cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-11-03T06:02:34Z |
| disk_format | qcow2 |
| file | /v2/images/3dfd3361-7d85-4342-afcf-9532bcddd3d1/file |
| id | 3dfd3361-7d85-4342-afcf-9532bcddd3d1 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | bcee9729f8c8470eafea545466d5f855 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-11-03T06:02:34Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
Part 6: Deploying the nova Controller
6.1: nova Database Preparation
6.1.1: Create the Databases and Grant Privileges to the nova User
- Create the nova_api database:
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Create the nova database:
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Create the nova_cell0 database:
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Flush privileges:
MariaDB [(none)]> flush privileges;
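The three GRANT statements differ only in the database name, so they can be generated in a loop and piped into mysql if preferred. A sketch that only echoes the statements (password 123456 as above):

```shell
# Emit the GRANT statements for the three nova databases.
for db in nova_api nova nova_cell0; do
  printf "GRANT ALL PRIVILEGES ON %s.* TO 'nova'@'%%' IDENTIFIED BY '123456';\n" "$db"
done
```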
6.1.2: Verify Remote Database Connection
]# mysql -hnode102 -unova -p
Enter password:
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| nova |
| nova_api |
| nova_cell0 |
+--------------------+
4 rows in set (0.00 sec)
6.2: Create and Register the nova Compute Service
6.2.1: Set the admin User Environment Variables
]# source admin-ocata.sh
6.2.2: Create the nova Compute Service
]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | d43200a4b66e44f3847e74e8549e4bf2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
6.2.3: Create the Endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne compute public http://node101.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 779de714e4b84d6d810331c895e6dbb8 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url | http://node101.yqc.com:8774/v2.1 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne compute internal http://node101.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c8f19491bb1a4c9fa6857fc1e259953c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url | http://node101.yqc.com:8774/v2.1 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne compute admin http://node101.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6d775e5928844b59809e2d8705dda6c1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url | http://node101.yqc.com:8774/v2.1 |
+--------------+----------------------------------+
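Since every service needs the same endpoint registered for the public, internal, and admin interfaces, the three commands above can also be generated from a loop. A sketch that echoes the commands rather than executing them, using the compute URL from this deployment:

```shell
# Build the three endpoint-create commands for the compute service.
url="http://node101.yqc.com:8774/v2.1"
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne compute $iface $url"
done
```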
6.3: Create and Register the placement Service
6.3.1: Set the admin User Environment Variables
]# source admin-ocata.sh
6.3.2: Create the placement Service
]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | d67215a5119f438f8c94a7624f67c6f9 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
6.3.3: Create the Endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne placement public http://node101.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 89d36c4ff6d64491bec7b1efd2ba765a |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url | http://node101.yqc.com:8778 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne placement internal http://node101.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 304693e971e548c4a2c47321268eb2ad |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url | http://node101.yqc.com:8778 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne placement admin http://node101.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c10ce3f5bf14431a897eede9335cf36c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url | http://node101.yqc.com:8778 |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone | identity | True | admin | http://node101.yqc.com:35357/v3 |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone | identity | True | internal | http://node101.yqc.com:5000/v3 |
| 304693e971e548c4a2c47321268eb2ad | RegionOne | placement | placement | True | internal | http://node101.yqc.com:8778 |
| 51fc58ff766d446aa8a3420babe85690 | RegionOne | glance | image | True | internal | http://node101.yqc.com:9292 |
| 6d775e5928844b59809e2d8705dda6c1 | RegionOne | nova | compute | True | admin | http://node101.yqc.com:8774/v2.1 |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone | identity | True | public | http://node101.yqc.com:5000/v3 |
| 779de714e4b84d6d810331c895e6dbb8 | RegionOne | nova | compute | True | public | http://node101.yqc.com:8774/v2.1 |
| 85a7d40a19ef45a69b50a0e473481f1c | RegionOne | glance | image | True | admin | http://node101.yqc.com:9292 |
| 89d36c4ff6d64491bec7b1efd2ba765a | RegionOne | placement | placement | True | public | http://node101.yqc.com:8778 |
| c10ce3f5bf14431a897eede9335cf36c | RegionOne | placement | placement | True | admin | http://node101.yqc.com:8778 |
| c8f19491bb1a4c9fa6857fc1e259953c | RegionOne | nova | compute | True | internal | http://node101.yqc.com:8774/v2.1 |
| da2c7440ddda44a9a43de718e2b24e55 | RegionOne | glance | image | True | public | http://node101.yqc.com:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
6.4: Install and Configure the nova Controller
6.4.1: Install the nova Controller Packages
]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
6.4.2: Edit the nova Controller Configuration File
]# vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:123456@node102.yqc.com
rpc_backend=rabbit
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:123456@node102.yqc.com/nova_api
[database]
connection = mysql+pymysql://nova:123456@node102.yqc.com/nova
[glance]
api_servers=http://node101.yqc.com:9292
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://node101.yqc.com:35357/v3
username = placement
password = 123456
[vnc]
enabled=true
vncserver_listen=172.16.1.101
vncserver_proxyclient_address=172.16.1.101
6.4.3: Configure apache to Allow Access to the placement API
- Add the following configuration at the bottom of 00-nova-placement-api.conf (it grants httpd access to the placement WSGI script installed under /usr/bin):
]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
- Restart the httpd service:
]# systemctl restart httpd
6.4.4: Initialize the Databases
Populate the nova_api database:
]# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the nova cell0 database:
]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create cell1:
]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
7016aa29-4ed7-4926-b46a-4ab1b21f6178
Populate the nova database:
]# su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |
| cell1 | 575bc0ae-7ec2-4716-8a1c-68b50a6774dc |
+-------+--------------------------------------+
6.4.5: Start the nova Controller Services
- Start the nova controller services and enable them at boot:
]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
- Verify the ports:
]# ss -tnl | egrep '(6080|8774|8775)'
LISTEN 0 100 *:6080 *:*
LISTEN 0 128 *:8774 *:*
LISTEN 0 128 *:8775 *:*
nova-novncproxy listens on 6080; nova-api listens on 8774 (API) and 8775 (metadata).
6.4.6: Write a nova Controller Restart Script
]# vim nova-restart.sh
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# chmod a+x nova-restart.sh
6.4.7: nova Controller Logs
]# ll /var/log/nova
total 60
-rw-r--r-- 1 nova nova 6723 Nov 3 15:07 nova-api.log
-rw-r--r-- 1 nova nova 1468 Nov 3 15:07 nova-conductor.log
-rw-r--r-- 1 nova nova 1049 Nov 3 15:07 nova-consoleauth.log
-rw-r--r-- 1 nova nova 36213 Nov 3 14:55 nova-manage.log
-rw-r--r-- 1 nova nova 899 Nov 3 15:07 nova-novncproxy.log
-rw-r--r-- 1 root root 0 Nov 3 14:49 nova-placement-api.log
-rw-r--r-- 1 nova nova 1193 Nov 3 15:07 nova-scheduler.log
6.5: Verify the nova Controller
6.5.1:nova service-list
]# nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-03T07:34:18.000000 | - |
| 2 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-03T07:34:09.000000 | - |
| 3 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-03T07:34:10.000000 | - |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
6.5.2:openstack compute service list
]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-05T07:29:34.000000 |
| 2 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-05T07:29:34.000000 |
| 3 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-05T07:29:35.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
6.5.3: Check the RabbitMQ Connections
Web management console: http://172.16.1.102:15672 (default credentials: guest/guest).
Part 7: Deploying the nova Compute Node
Confirm whether the compute node supports hardware acceleration:
]# egrep -c '(vmx|svm)' /proc/cpuinfo
4
A non-zero count means the CPU supports hardware virtualization.
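The count from egrep determines the libvirt virtualization type: non-zero means KVM acceleration is available; zero means falling back to plain QEMU via the `virt_type` option in the `[libvirt]` section of nova.conf. A sketch with the count hard-coded so it runs anywhere:

```shell
# Interpret the vmx/svm flag count; on a real node use:
#   count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
count=4
if [ "$count" -gt 0 ]; then
  echo "hardware acceleration available: keep virt_type=kvm"
else
  echo "no acceleration: set virt_type=qemu in /etc/nova/nova.conf [libvirt]"
fi
```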
7.1: Install and Configure the nova Compute Service
7.1.1: Install the nova Compute Service
]# yum install openstack-nova-compute -y
The initial installation failed with "Requires: qemu-kvm-rhev >= 2.9.0";
this was resolved by adding a [virt] repo to the yum configuration:
]# vim /etc/yum.repos.d/CentOS-7-ali.repo
[virt]
name=solve qemu-kvm-rhev >= 2.9.0
baseurl=http://mirrors.sohu.com/centos/7/virt/x86_64/kvm-common/
gpgcheck=0
7.1.2: Edit the nova Compute Service Configuration File
]# vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:123456@node102.yqc.com
[api]
auth_strategy=keystone
[glance]
api_servers=http://node101.yqc.com:9292
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://node101.yqc.com:35357/v3
username = placement
password = 123456
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=172.16.1.103
novncproxy_base_url=http://172.16.1.101:6080/vnc_auto.html
7.1.3: Start the nova Compute Service
]# systemctl enable libvirtd.service openstack-nova-compute.service
]# systemctl start libvirtd.service openstack-nova-compute.service
7.2: Add the Compute Node to the cell Database
7.2.1: Confirm the Compute Node Is in the Database
]# source admin-ocata.sh
]# openstack hypervisor list
+----+---------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+---------------+-------+
| 1 | node103.yqc.com | QEMU | 192.168.1.103 | up |
+----+---------------------+-----------------+---------------+-------+
Why is the Host IP listed here the external address rather than the internal 172.16.1.103? Most likely because nova's my_ip option was left at its default, which picks the address of the interface holding the default route; setting my_ip = 172.16.1.103 in nova.conf should make it report the internal address.
7.2.2: Discover the Compute Node from the Controller
Discover it manually with the command:
]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 575bc0ae-7ec2-4716-8a1c-68b50a6774dc
Found 1 computes in cell: 575bc0ae-7ec2-4716-8a1c-68b50a6774dc
Checking host mapping for compute host 'node103.yqc.com': 9dab6c0d-b405-4fac-b811-f54f0c833198
Creating host mapping for compute host 'node103.yqc.com': 9dab6c0d-b405-4fac-b811-f54f0c833198
Or enable periodic automatic discovery:
]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=300
7.2.3: Verify the nova Compute Node
]# nova host-list
+-----------------+-------------+----------+
| host_name | service | zone |
+-----------------+-------------+----------+
| node101.yqc.com | consoleauth | internal |
| node101.yqc.com | conductor | internal |
| node101.yqc.com | scheduler | internal |
| node103.yqc.com | compute | nova |
+-----------------+-------------+----------+
]# nova image-list
WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-glanceclient or openstackclient instead
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | active |
+--------------------------------------+--------+--------+
- Check that the service components registered successfully:
]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-03T08:50:11.000000 |
| 2 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-03T08:50:02.000000 |
| 3 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-03T08:50:11.000000 |
| 8 | nova-compute | node103.yqc.com | nova | enabled | up | 2020-11-03T08:50:02.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
]# nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-03T08:48:21.000000 | - |
| 2 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-03T08:48:22.000000 | - |
| 3 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-03T08:48:21.000000 | - |
| 8 | nova-compute | node103.yqc.com | nova | enabled | up | 2020-11-03T08:48:22.000000 | - |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
- List the API endpoints in the identity service to verify connectivity:
]# openstack catalog list
+-----------+-----------+----------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+----------------------------------------------+
| glance | image | RegionOne |
| | | admin: http://node101.yqc.com:9292 |
| | | RegionOne |
| | | public: http://node101.yqc.com:9292 |
| | | RegionOne |
| | | internal: http://node101.yqc.com:9292 |
| | | |
| placement | placement | RegionOne |
| | | internal: http://node101.yqc.com:8778 |
| | | RegionOne |
| | | admin: http://node101.yqc.com:8778 |
| | | RegionOne |
| | | public: http://node101.yqc.com:8778 |
| | | |
| nova | compute | RegionOne |
| | | public: http://node101.yqc.com:8774/v2.1 |
| | | RegionOne |
| | | admin: http://node101.yqc.com:8774/v2.1 |
| | | RegionOne |
| | | internal: http://node101.yqc.com:8774/v2.1 |
| | | |
| keystone | identity | RegionOne |
| | | internal: http://node101.yqc.com:5000/v3 |
| | | RegionOne |
| | | public: http://node101.yqc.com:5000/v3 |
| | | RegionOne |
| | | admin: http://node101.yqc.com:35357/v3 |
| | | |
+-----------+-----------+----------------------------------------------+
- Check that the cells and placement API are working correctly:
]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+
Part 8: Deploying the neutron Controller
8.1: neutron Database Preparation
8.1.1: Create the Database and Grant Privileges to the neutron User
]# mysql -uroot -p
Enter password:
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
8.1.2: Verify Remote Database Connection
]# mysql -hnode102 -uneutron -p
Enter password:
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| neutron |
+--------------------+
8.2: Create and Register the neutron Service
8.2.1: Set the admin User Environment Variables
]# source admin-ocata.sh
8.2.2: Create the neutron Service
]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 0dae6638d0244e4dbd11d2ec679e787a |
| name | neutron |
| type | network |
+-------------+----------------------------------+
8.2.3: Create the Endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne network public http://node101.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cc5894ab79624b41a78989494f0cfc0d |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://node101.yqc.com:9696 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne network internal http://node101.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b0a90dff6e6d423eae2a4f8820919676 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://node101.yqc.com:9696 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne network admin http://node101.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c7197b1a347741c58643971d8b25e3a6 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://node101.yqc.com:9696 |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone | identity | True | admin | http://node101.yqc.com:35357/v3 |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone | identity | True | internal | http://node101.yqc.com:5000/v3 |
| 304693e971e548c4a2c47321268eb2ad | RegionOne | placement | placement | True | internal | http://node101.yqc.com:8778 |
| 51fc58ff766d446aa8a3420babe85690 | RegionOne | glance | image | True | internal | http://node101.yqc.com:9292 |
| 6d775e5928844b59809e2d8705dda6c1 | RegionOne | nova | compute | True | admin | http://node101.yqc.com:8774/v2.1 |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone | identity | True | public | http://node101.yqc.com:5000/v3 |
| 779de714e4b84d6d810331c895e6dbb8 | RegionOne | nova | compute | True | public | http://node101.yqc.com:8774/v2.1 |
| 85a7d40a19ef45a69b50a0e473481f1c | RegionOne | glance | image | True | admin | http://node101.yqc.com:9292 |
| 89d36c4ff6d64491bec7b1efd2ba765a | RegionOne | placement | placement | True | public | http://node101.yqc.com:8778 |
| b0a90dff6e6d423eae2a4f8820919676 | RegionOne | neutron | network | True | internal | http://node101.yqc.com:9696 |
| c10ce3f5bf14431a897eede9335cf36c | RegionOne | placement | placement | True | admin | http://node101.yqc.com:8778 |
| c7197b1a347741c58643971d8b25e3a6 | RegionOne | neutron | network | True | admin | http://node101.yqc.com:9696 |
| c8f19491bb1a4c9fa6857fc1e259953c | RegionOne | nova | compute | True | internal | http://node101.yqc.com:8774/v2.1 |
| cc5894ab79624b41a78989494f0cfc0d | RegionOne | neutron | network | True | public | http://node101.yqc.com:9696 |
| da2c7440ddda44a9a43de718e2b24e55 | RegionOne | glance | image | True | public | http://node101.yqc.com:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
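The three endpoint create commands differ only in the interface name, so they can be collapsed into one loop (a sketch; it assumes the admin credentials from admin-ocata.sh are already sourced):

```shell
# Create the public, internal and admin network endpoints in one pass;
# print a note instead of aborting if one creation fails.
for iface in public internal admin; do
  openstack endpoint create --region RegionOne \
    network "$iface" http://node101.yqc.com:9696 \
    || echo "endpoint create failed for $iface"
done
```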
7.3: Install and configure the neutron controller (provider networks)
7.3.1: Install the neutron controller packages
]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
7.3.2: Configure the server component
]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@node102.yqc.com
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@node102.yqc.com/neutron
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://node101.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
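Editing these ini files by hand is error-prone; tools such as crudini (or openstack-config from openstack-utils) can set options non-interactively. The minimal helper below is a hypothetical sketch, shown against a throw-away file rather than the real neutron.conf, just to illustrate the idea:

```shell
# set_opt is a hypothetical helper (crudini does this more robustly):
# ensure [section] exists, then append "key = value" right after it.
set_opt() {
  local file=$1 section=$2 kv=$3
  grep -q "^\[$section\]" "$file" || printf '[%s]\n' "$section" >> "$file"
  sed -i "/^\[$section\]/a $kv" "$file"
}

demo=$(mktemp)                      # demo file instead of the real neutron.conf
set_opt "$demo" DEFAULT 'core_plugin = ml2'
grep -A1 '^\[DEFAULT\]' "$demo"    # shows the section header and the new key
```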
7.3.3: Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build the layer-2 virtual networking infrastructure for instances.
]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
7.3.4: Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networks for instances and handles security group rules.
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:bond1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
7.3.5: Configure the DHCP agent
]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
7.3.6: Configure the metadata agent
]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = node101.yqc.com
metadata_proxy_shared_secret = 20201103
7.3.7: Configure nova to use neutron
]# vim /etc/nova/nova.conf
[neutron]
url = http://node101.yqc.com:9696
auth_url = http://node101.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 20201103
7.3.8: Initialize the neutron database
- Create the symlink:
The networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:
]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Initialize the database:
]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- Verify the result:
]# mysql -hnode102 -uneutron -p
Enter password:
MariaDB [(none)]> use neutron;
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
……
7.3.9: Restart the nova API service
- Restart the service:
]# systemctl restart openstack-nova-api.service
- Watch the log for errors at the same time:
]# tail -f /var/log/nova/nova-api.log
7.3.10: Start the neutron services and enable them at boot
]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
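Once started, every unit can be checked in a single loop (a convenience sketch, not part of the original procedure):

```shell
# Report the state of each neutron unit started above.
for svc in neutron-server neutron-linuxbridge-agent \
           neutron-dhcp-agent neutron-metadata-agent; do
  systemctl is-active --quiet "$svc" \
    && echo "$svc: active" \
    || echo "$svc: NOT running"
done
```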
7.3.11: Check the neutron logs for errors
- neutron log files:
]# ll /var/log/neutron/
total 32
-rw-r--r-- 1 neutron neutron 3543 Nov 3 17:58 dhcp-agent.log
-rw-r--r-- 1 neutron neutron 4735 Nov 3 17:58 linuxbridge-agent.log
-rw-r--r-- 1 neutron neutron 3254 Nov 3 17:58 metadata-agent.log
-rw-r--r-- 1 neutron neutron 14901 Nov 3 17:57 server.log
- Check for errors:
]# tail -f /var/log/neutron/*.log
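tail -f follows the logs interactively; for a quick one-shot scan, the same files can be grepped for error markers (a sketch):

```shell
# One-shot scan: print any ERROR/TRACE lines, or a short all-clear message.
grep -iE 'ERROR|TRACE' /var/log/neutron/*.log || echo "no errors found"
```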
7.3.12: neutron controller restart script
]# vim neutron-restart.sh
#!/bin/bash
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# chmod a+x neutron-restart.sh
7.4: Install and configure the neutron controller (self-service networks)
7.4.1: Install the neutron controller packages
]# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
7.4.2: Configure the server component
]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@node102.yqc.com
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@node102.yqc.com/neutron
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://node101.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
7.4.3: Configure the Modular Layer 2 (ML2) plug-in
]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
[ml2_type_vxlan]
vni_ranges = 1:1000
7.4.4: Configure the Linux bridge agent
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:bond1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.101
l2_population = true
7.4.5: Configure the layer-3 agent
]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
7.4.6: Configure the DHCP agent
]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
7.4.7: Configure the metadata agent
]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = node101.yqc.com
metadata_proxy_shared_secret = 20201103
7.4.8: Configure nova to use neutron
]# vim /etc/nova/nova.conf
[neutron]
url = http://node101.yqc.com:9696
auth_url = http://node101.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 20201103
7.4.9: Initialize the neutron database
- Create the symlink:
The networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:
]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Initialize the database:
]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- Verify the result:
]# mysql -hnode102 -uneutron -p
Enter password:
MariaDB [(none)]> use neutron;
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
……
7.4.10: Restart the nova API service
- Restart the service:
]# systemctl restart openstack-nova-api.service
- Watch the log for errors at the same time:
]# tail -f /var/log/nova/nova-api.log
7.4.11: Start the neutron services and enable them at boot
Compared with provider networks, self-service networks start one additional service, neutron-l3-agent.service.
]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
7.5: Verify the neutron controller
This step requires that the time on all servers be synchronized.
]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 466d9c49-582c-47f9-a367-a5f89a72001d | DHCP agent | node101.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| 8283d230-6bc6-4e97-832a-705d091ef6d0 | Metadata agent | node101.yqc.com | | :-) | True | neutron-metadata-agent |
| ce7ef08f-4d6d-4b34-a8f6-292fa74021a1 | Linux bridge agent | node101.yqc.com | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
The first installation attempt failed. I did not find the cause at the time; I only suspected that the install/configure steps had been done out of order, and did not check carefully:
]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
The server has either erred or is incapable of performing the requested operation.
Neutron server returns request_ids: ['req-6158d463-eb53-43de-9881-c50ed977f72b']
The second attempt failed in the same way. This time I was confident the preceding steps were correct, so I searched for the root cause carefully:
The neutron log contained this error: ValueError: Unable to parse connection string: "http://node102.yqc.com:11211"
The neutron configuration file had: memcached_servers = http://node102.yqc.com:11211
Changing it to memcached_servers = node102.yqc.com:11211 and restarting the neutron controller fixed the problem.
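The same fix as a one-liner (a sketch; guarded so it is a no-op if the file is absent):

```shell
# Strip the bogus "http://" prefix from memcached_servers in neutron.conf.
[ -f /etc/neutron/neutron.conf ] \
  && sed -i 's|memcached_servers = http://|memcached_servers = |' /etc/neutron/neutron.conf \
  || true
```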
8: Deploy the neutron compute node
8.1: Install and configure the neutron compute service
8.1.1: Install the neutron compute packages
]# yum install openstack-neutron-linuxbridge ebtables ipset -y
8.1.2: Edit the neutron compute service configuration file
Compute nodes do not access the database directly, so there is no [database] section.
]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@node102.yqc.com
[keystone_authtoken]
auth_uri = http://node101.yqc.com:5000
auth_url = http://node101.yqc.com:35357
memcached_servers = node102.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
8.1.3: Configure the Linux bridge agent (provider networks)
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:bond1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
8.1.4: Configure the Linux bridge agent (self-service networks)
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:bond1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.103
l2_population = true
8.1.5: Configure nova to use neutron
- Edit the nova compute service configuration file:
]# vim /etc/nova/nova.conf
[neutron]
url = http://node101.yqc.com:9696
auth_url = http://node101.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
- Restart the nova compute service:
]# systemctl restart openstack-nova-compute.service
8.1.6: Start the neutron compute service and enable it at boot
]# systemctl enable neutron-linuxbridge-agent.service
]# systemctl start neutron-linuxbridge-agent.service
8.1.7: neutron compute service logs
- The log file is created after the neutron compute service starts:
]# ll /var/log/neutron/
total 4
-rw-r--r-- 1 neutron neutron 1667 Nov 5 17:29 linuxbridge-agent.log
- Check for errors:
]# tail -f /var/log/neutron/*.log
8.2: Verify the neutron compute node
8.2.1: Verify on the neutron controller that the compute node registered successfully
]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 466d9c49-582c-47f9-a367-a5f89a72001d | DHCP agent | node101.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| 8283d230-6bc6-4e97-832a-705d091ef6d0 | Metadata agent | node101.yqc.com | | :-) | True | neutron-metadata-agent |
| 9e2ae00a-c756-4efb-8eeb-3830c1b2e9f4 | Linux bridge agent | node103.yqc.com | | :-) | True | neutron-linuxbridge-agent |
| ce7ef08f-4d6d-4b34-a8f6-292fa74021a1 | Linux bridge agent | node101.yqc.com | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
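Since the neutron CLI prints a deprecation warning, the same listing is also available through the unified client, which the Ocata install guide itself uses:

```shell
# Equivalent, non-deprecated check with the openstack client.
openstack network agent list
```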
8.2.2: Verify that the neutron-server process is running
List the loaded extensions to verify that the neutron-server process started successfully:
]# openstack extension list --network
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
| Name | Alias | Description |
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Tag support | tag | Enables to set tag on resources. |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Address scope | address-scope | Address scopes extension. |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources |
| | | that have Neutron standard attributes. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. |
| Neutron Extra DHCP opts | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options |
| | | to DHCP clients can be specified (e.g. tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant |
| | | access to resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
9: Deploy the dashboard
9.1: Install and configure the dashboard
9.1.1: Install the dashboard
]# yum install openstack-dashboard -y
9.1.2: Edit the dashboard configuration file
]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "172.16.1.101"
ALLOWED_HOSTS = ['*',]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'node102.yqc.com:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
……
……
}
TIME_ZONE = "Asia/Shanghai"
9.1.3: Restart httpd
]# systemctl restart httpd.service
9.2: Verify the dashboard
- Open http://172.16.1.101/dashboard in a browser on the client.
- Log in to OpenStack.
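Before switching to a browser, the endpoint can be smoke-tested from any shell (a sketch; a 200 or a 3xx redirect code indicates the dashboard is serving):

```shell
# Print only the HTTP status code returned by the dashboard URL.
curl -s -o /dev/null -w '%{http_code}\n' http://172.16.1.101/dashboard
```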