Openstack 高可用部署(Ocata版)
一:高可用拓扑
外部网络:192.168.1.0/24
内部网络:172.16.1.0/24
二:实验环境准备
2.1:创建虚拟机并安装操作系统
使用CentOS 7.2
安装时传递内核参数 net.ifnames=0 biosdevname=0 ,以使安装后的系统网卡名称标准化(eth*)。
2.2:系统初始环境准备
2.2.1:基本初始化
参见《CentOS系统初始化.md》
2.2.2:配置域名/主机名解析
虚拟机环境中搭建了一台DNS服务器192.168.1.254,192.168.1.0/24网段的主机名可直接通过该DNS解析。
2.3:设置虚拟机网络环境
2.3.1:node101 - Openstack 控制端网络设置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.101"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.101"
PREFIX="24"
2.3.2:node102 - Openstack 控制端网络设置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.102"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.102"
PREFIX="24"
2.3.3:node103 - Openstack 计算节点网络设置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.103"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.103"
PREFIX="24"
2.3.4:node104 - Openstack 计算节点网络设置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.104"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.104"
PREFIX="24"
2.3.5:node105 - 负载均衡节点网络配置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.105"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.105"
PREFIX="24"
2.3.6:node106 - 负载均衡节点网络配置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.106"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.106"
PREFIX="24"
2.3.7:node107 - 基础服务节点网络配置
eth0 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.107"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
eth1 网卡配置
]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.107"
PREFIX="24"
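上述各节点的网卡配置只有IP最后一段不同,可以用如下脚本批量生成(示意脚本:OUTDIR 为假设的输出目录,实际使用时应将生成的文件放入各节点的 /etc/sysconfig/network-scripts/ 并命名为 ifcfg-eth0 / ifcfg-eth1):

```shell
#!/bin/bash
# 批量生成 node101~node107 的 ifcfg-eth0 / ifcfg-eth1 配置(示意脚本);
# OUTDIR 为假设的输出目录,便于先生成、再分发到各节点;
OUTDIR="${OUTDIR:-./network-scripts}"
mkdir -p "$OUTDIR"

for i in $(seq 101 107); do
  # eth0:外部网络 192.168.1.0/24,带网关和DNS;
  cat > "$OUTDIR/ifcfg-eth0.node$i" <<EOF
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="192.168.1.$i"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.1.254"
EOF
  # eth1:内部网络 172.16.1.0/24,不配置网关;
  cat > "$OUTDIR/ifcfg-eth1.node$i" <<EOF
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
IPADDR="172.16.1.$i"
PREFIX="24"
EOF
done
```

生成后将对应文件复制到各节点即可,避免逐台手敲配置出现遗漏(例如网卡名写错)。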
2.3.8:手动同步NTP
虽然系统初始化时已配置crontab计划任务自动同步NTP时间,但计划任务存在执行间隔,为确保各服务器时间一致,在开始安装配置Openstack前先手动同步一次。
另外,因为内网没有NTP服务器,将node101配置为内网NTP服务端,其他节点向node101同步时间。
将node101 配置为内网NTP服务端
]# yum install chrony -y
]# vim /etc/chrony.conf
# 配置node101向外网NTP服务器192.168.1.254同步时间;
server 192.168.1.254 iburst
# 允许内网网段172.16.1.0/24向node101同步时间;
allow 172.16.1.0/24
# 即使服务端没有同步到指定的网络时间,也允许向客户端同步本机时间;
local stratum 10
# 启动chronyd服务;
]# systemctl start chronyd
]# systemctl enable chronyd
其他节点配置NTP自动同步任务计划
# 手动同步;
]# ntpdate 172.16.1.101
29 Oct 21:26:20 ntpdate[9636]: step time server 172.16.1.101 offset 4502.960252 sec
# 添加crontab计划任务;
]# echo "*/30 * * * * /usr/sbin/ntpdate 172.16.1.101 && /usr/sbin/hwclock -w" > /var/spool/cron/root
]# crontab -l
*/30 * * * * /usr/sbin/ntpdate 172.16.1.101 && /usr/sbin/hwclock -w
至此,Openstack高可用的基础环境准备完毕。
三:Openstack 环境准备
3.1:各控制端和计算节点环境准备
3.1.1:部署ocata版本的yum源
在各控制端和计算节点上部署ocata版本的yum源。
]# vim /etc/yum.repos.d/Openstack-Ocata.repo
[ocata]
name=Openstack-Ocata
baseurl=https://mirrors.aliyun.com/centos-vault/7.4.1708/cloud/x86_64/openstack-ocata/
gpgcheck=0
]# yum clean all
]# yum makecache
3.1.2:安装相关连接模块
在各控制端上安装相关连接模块。
]# yum install mariadb python2-PyMySQL python-memcached -y
python2-PyMySQL 为 Python 连接 MySQL 模块;
python-memcached 为 Python 连接 Memcached 模块;
3.1.3:安装 Openstack 客户端
在各控制端和计算节点安装Openstack客户端
]# yum install python-openstackclient -y
3.1.4:安装 openstack SElinux 管理包
在各控制端和计算节点安装
]# yum install openstack-selinux -y
3.2:部署 MariaDB
在基础服务节点上安装配置 MariaDB。
3.2.1:安装mariadb-server
]# yum install mariadb mariadb-server -y
3.2.2:编辑主配置文件
]# cp /etc/my.cnf /etc/my.cnf.bak
]# vim /etc/my.cnf
[mysqld]
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
datadir=/data/mysql
innodb_file_per_table=1
# skip-grant-tables
relay-log=/data/mysql
server-id=10
log-error=/data/mysql-log/mysql_error.log
log-bin=/data/mysql-binlog/master-log
# general_log=ON
# general_log_file=/data/general_mysql.log
long_query_time=5
slow_query_log=1
slow_query_log_file=/data/mysql-log/slow_mysql.log
max_connections=1000
bind-address=192.168.1.107
[client]
port=3306
socket=/var/lib/mysql/mysql.sock
[mysqld_safe]
log-error=/data/mysql-log/mysqld_safe.log
pid-file=/var/run/mariadb/mariadb.pid
3.2.3:编辑 Openstack自定义配置文件
]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
# 指定监听地址
bind-address = 192.168.1.107
# 默认引擎
default-storage-engine = innodb
# 开启每个表都有独立表空间
innodb_file_per_table = on
# 最大连接数
max_connections = 4096
# 不区分大小写排序
collation-server = utf8_general_ci
# 设置编码
character-set-server = utf8
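注意 /etc/my.cnf 与 /etc/my.cnf.d/openstack.cnf 中存在重复设置项(如 bind-address、innodb_file_per_table、max_connections),MariaDB 按文件读取顺序以后出现的值为准。可用如下脚本粗略检查两个配置文件中重复定义的项(示意脚本,文件路径为假设,仅比较 key 部分并忽略注释行):

```shell
#!/bin/bash
# 粗略找出两个 .cnf 文件中重复定义的配置项(示意);
# MySQL/MariaDB 选项名中 "-" 与 "_" 等价,比较前先统一为 "_";
f1="${1:-/etc/my.cnf}"
f2="${2:-/etc/my.cnf.d/openstack.cnf}"

keys() {
  # 过滤注释、段名和空行,取 "key=value" 的 key,去空白并排序去重;
  grep -Ev '^[[:space:]]*(#|\[|$)' "$1" 2>/dev/null \
    | awk -F'=' '{gsub(/[ \t]/,"",$1); gsub(/-/,"_",$1); print $1}' \
    | sort -u
}

keys "$f1" > /tmp/cnf-keys-1.$$
keys "$f2" > /tmp/cnf-keys-2.$$
# comm -12 输出两个文件共有的 key,即重复定义的配置项;
comm -12 /tmp/cnf-keys-1.$$ /tmp/cnf-keys-2.$$
rm -f /tmp/cnf-keys-1.$$ /tmp/cnf-keys-2.$$
```

输出的每一行都是在两个文件中都出现的配置项,应确认二者取值一致,或只保留一处。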
3.2.4:创建数据目录并授权
]# mkdir -pv /data/{mysql,mysql-log,mysql-binlog}
]# chown mysql:mysql /data/mysql* -R
3.2.5:启动 MariaDB 并验证端口
]# systemctl start mariadb
]# systemctl enable mariadb
]# ss -tnl
3.2.6:初始化安全配置
]# mysql_secure_installation
3.3:部署 RabbitMQ
在基础服务节点上安装配置RabbitMQ。
3.3.1:配置 EPEL yum源
安装RabbitMQ需要配置EPEL源。
]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
]# yum clean all
]# yum makecache
3.3.2:安装rabbitmq-server
]# yum install rabbitmq-server httpd -y
3.3.3:启动 RabbitMQ 及 web管理插件
]# systemctl start rabbitmq-server httpd
]# systemctl enable rabbitmq-server httpd
]# rabbitmq-plugins enable rabbitmq_management
]# ss -tnl
]# rabbitmq-plugins list
RabbitMQ在启动并打开web插件后,有三个监听端口:5672、15672、25672;
3.3.4:RabbitMQ 中添加openstack用户并授权
]# rabbitmqctl add_user openstack 123456
]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
RabbitMQ默认有一个guest/guest用户;
3.3.5:验证RabbitMQ
- 客户端浏览器打开http://192.168.1.107:15672,用户:guest/guest
3.4:部署 Memcached
在基础服务节点上安装配置 Memcached。
3.4.1:安装memcached
]# yum install memcached -y
3.4.2:编辑配置文件
]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="512"
OPTIONS="-l 192.168.1.107"
3.4.3:启动 Memcached 并验证
]# systemctl start memcached
]# systemctl enable memcached
]# ss -tnl
memcached默认端口为11211;
3.5:部署 NFS
在基础服务节点上安装配置NFS;
3.5.1:安装 NFS
在基础服务节点安装nfs-utils;两台控制端也需安装,以便挂载NFS存储;
]# yum install nfs-utils -y
3.5.2:配置 NFS
]# vim /etc/exports
/openstack/glance *(rw,no_root_squash)
]# mkdir /openstack/glance -pv
# 161 为 glance 用户的 UID/GID,提前授权以便后续 glance 服务写入镜像;
]# chown 161:161 /openstack/glance
3.5.3:启动NFS并设为开机启动
]# systemctl start nfs
]# systemctl enable nfs
3.5.4:验证NFS挂载点
- 在node101上查看NFS挂载点:
]# showmount -e node107
Export list for node107:
/openstack/glance *
3.6:部署 Keepalived
在各负载均衡节点安装配置 Keepalived。
3.6.1:编译安装Keepalived
]# wget -O /usr/local/src/keepalived-1.3.6.tar.gz http://www.keepalived.org/software/keepalived-1.3.6.tar.gz
]# cd /usr/local/src
]# tar xvf keepalived-1.3.6.tar.gz
]# cd keepalived-1.3.6
]# yum install libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel tree sudo psmisc lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel iproute -y
]# ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
]# cp /usr/local/src/keepalived-1.3.6/keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig
]# cp /usr/local/src/keepalived-1.3.6/keepalived/keepalived.service /usr/lib/systemd/system/
]# cp /usr/local/src/keepalived-1.3.6/bin/keepalived /usr/sbin/
3.6.2:配置keepalived
node105 - master节点
- 编辑配置文件:
]# mkdir /etc/keepalived
]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@node105.yqc.com
   }
   notification_email_from root@node105.yqc.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node105.yqc.com
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VIP_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 2
    unicast_src_ip 192.168.1.105
    unicast_peer {
        192.168.1.106
    }
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0 label eth0:0
    }
}
- 启动keepalived并设为开机启动:
]# systemctl start keepalived
]# systemctl enable keepalived
node106 - backup节点
- 编辑配置文件:
]# mkdir /etc/keepalived
]# scp node105:/etc/keepalived/keepalived.conf /etc/keepalived/
]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@node106.yqc.com
   }
   notification_email_from root@node106.yqc.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node106.yqc.com
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VIP_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 80
    advert_int 2
    unicast_src_ip 192.168.1.106
    unicast_peer {
        192.168.1.105
    }
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.100/24 dev eth0 label eth0:0
    }
}
- 启动keepalived并设为开机启动:
]# systemctl start keepalived
]# systemctl enable keepalived
3.6.3:为VIP配置域名解析
- 在DNS服务器中添加A记录:
]# vim /var/named/yqc.com.zone
$TTL 3600
$ORIGIN yqc.com.
@ IN SOA ns1.yqc.com. dnsadmin.yqc.com. (
        2020102901
        1H
        10M
        3D
        1D )
  IN NS ns1.yqc.com.
  IN MX 10 mx1
ns1 IN A 192.168.1.254
mx1 IN A 192.168.1.254
node101 IN A 192.168.1.101
node102 IN A 192.168.1.102
node103 IN A 192.168.1.103
node104 IN A 192.168.1.104
node105 IN A 192.168.1.105
node106 IN A 192.168.1.106
node107 IN A 192.168.1.107
node108 IN A 192.168.1.108
node111 IN A 192.168.1.111
vip100 IN A 192.168.1.100
- 在DNS服务器中添加PTR记录:
]# vim /var/named/192.168.zone
$TTL 3600
$ORIGIN 168.192.in-addr.arpa.
@ IN SOA ns1.yqc.com. dnsadmin.yqc.com. (
        2020102901
        1H
        10M
        3D
        1D )
      IN NS ns1.yqc.com.
1.254 IN PTR ns1.yqc.com.
      IN PTR mx1.yqc.com.
1.101 IN PTR node101.yqc.com.
1.102 IN PTR node102.yqc.com.
1.103 IN PTR node103.yqc.com.
1.104 IN PTR node104.yqc.com.
1.105 IN PTR node105.yqc.com.
1.106 IN PTR node106.yqc.com.
1.107 IN PTR node107.yqc.com.
1.108 IN PTR node108.yqc.com.
1.111 IN PTR node111.yqc.com.
1.100 IN PTR vip100.yqc.com.
- 重启named服务:
]# systemctl restart named
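正向A记录与反向PTR记录逐条手工维护容易不一致,可以从同一份"主机名 IP"列表同时生成两组记录片段(示意脚本,输出文件名为演示用,生成后可粘贴进对应的区域文件):

```shell
#!/bin/bash
# 从同一份主机列表生成 A 记录与 PTR 记录片段(示意);
hosts="node101 192.168.1.101
node102 192.168.1.102
node103 192.168.1.103
node104 192.168.1.104
node105 192.168.1.105
node106 192.168.1.106
node107 192.168.1.107
node108 192.168.1.108
node111 192.168.1.111
vip100 192.168.1.100"

# A 记录:主机名 IN A IP;
echo "$hosts" | awk '{printf "%-8s IN A   %s\n", $1, $2}' > a-records.zone

# PTR 记录:相对 168.192.in-addr.arpa. 的 owner 为 IP 后两段倒序;
echo "$hosts" | awk -F'[ .]' '{printf "%s.%-4s IN PTR %s.yqc.com.\n", $5, $4, $1}' > ptr-records.zone
```

这样正反向记录始终来自同一数据源,新增主机时只需在列表中加一行并重新生成。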
3.6.4:验证VIP
- 查看MASTER节点的网络配置:
]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:ca:0b:76 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.105/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.1.100/24 scope global secondary eth0:0
valid_lft forever preferred_lft forever
inet6 240e:324:79e:f400:20c:29ff:feca:b76/64 scope global mngtmpaddr dynamic
valid_lft 259120sec preferred_lft 172720sec
inet6 fe80::20c:29ff:feca:b76/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:ca:0b:80 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.105/24 brd 172.16.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feca:b80/64 scope link
valid_lft forever preferred_lft forever
- 从其他节点验证VIP的连通性:
[root@node101 ~]# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=1.83 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=0.445 ms
- 验证VIP域名的连通性:
]# ping vip100
PING vip100.yqc.com (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.971 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=1.25 ms
64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=0.564 ms
]# ping vip100.yqc.com
PING vip100.yqc.com (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.489 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=2.78 ms
3.7:部署 HAProxy
在各负载均衡节点安装配置 HAProxy。
3.7.1:编译安装HAProxy
]# cd /usr/local/src/
]# tar xvf haproxy-1.8.20.tar.gz
]# cd haproxy-1.8.20/
]# make ARCH=x86_64 TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 PREFIX=/usr/local/haproxy
]# make install PREFIX=/usr/local/haproxy
]# cp haproxy /usr/sbin/
]# useradd haproxy -s /sbin/nologin -u 100 -g 100
]# mkdir /etc/haproxy /var/lib/haproxy
]# chown haproxy.users /var/lib/haproxy/ -R
3.7.2:准备HAProxy启动脚本
]# vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
]# systemctl daemon-reload
3.7.3:配置HAProxy
]# vim /etc/haproxy/haproxy.cfg
global
  maxconn 100000
  uid 100
  gid 100
  daemon
  nbproc 1
  pidfile /run/haproxy.pid
  log 127.0.0.1 local3 info
  chroot /usr/local/haproxy
  stats socket /var/lib/haproxy/haproxy.socket mode 600 level admin
defaults
  option redispatch
  option abortonclose
  option http-keep-alive
  option forwardfor
  maxconn 100000
  mode http
  timeout connect 10s
  timeout client 20s
  timeout server 30s
  timeout check 5s
listen stats
  bind :9999
  stats enable
  #stats hide-version
  stats uri /haproxy-status
  stats realm HAProxy\ Stats\ Page
  stats auth haadmin:123456
  stats auth admin:123456
  stats refresh 30s
  stats admin if TRUE
3.7.4:配置rsyslog日志
]# vim /etc/rsyslog.conf
# 加载imudp模块并监听UDP 514端口,以接收HAProxy日志;
$ModLoad imudp
$UDPServerRun 514
# 将local3设备的日志写入指定文件;
local3.*    /var/log/haproxy.log
]# systemctl restart rsyslog
3.7.5:配置内核参数
以下两个内核参数在系统初始化时已添加;如未添加,按如下方式配置:
]# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
]# sysctl -p
3.7.6:启动HAProxy并设为开机启动
]# systemctl enable haproxy
]# systemctl start haproxy
]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 20480 *:9999 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 100 [::1]:25 [::]:*
3.8:HAProxy 配置基础服务代理
]# vim /etc/haproxy/haproxy.cfg
3.8.1:MySQL代理
#openstack-mysql================================================================
frontend openstack_mysql
  bind 192.168.1.100:3306
  mode tcp
  default_backend openstack_mysql_node
backend openstack_mysql_node
  mode tcp
  balance source
  server 192.168.1.107 192.168.1.107:3306 check inter 2000 fall 3 rise 5
3.8.2:RabbitMQ 代理
#openstack-rabbitmq================================================================
listen openstack_rabbitmq
  bind 192.168.1.100:5672
  mode tcp
  log global
  server 192.168.1.107 192.168.1.107:5672 check inter 2000 fall 3 rise 5
3.8.3:Memcached 代理
#openstack-memcached================================================================
frontend openstack_memcached
  bind 192.168.1.100:11211
  mode tcp
  default_backend openstack_memcached_node
backend openstack_memcached_node
  mode tcp
  balance source
  server 192.168.1.107 192.168.1.107:11211 check inter 2000 fall 3 rise 5
3.8.4:重载HAProxy
]# systemctl reload haproxy
3.8.5:验证
- 在node101上通过VIP访问memcached:
]# telnet vip100 11211
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
四:部署 keystone 认证服务
4.1:keystone 数据库准备
4.1.1:创建数据库并授权 keystone 用户
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
4.1.2:验证数据库远程连接
- 在控制端 node101 上通过VIP远程连接数据库:
]# mysql -hvip100 -ukeystone -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
4.2:安装及配置 keystone(node101)
两台控制端都安装好程序包;
先在node101上执行所有配置操作,完成后打包配置目录,同步到node102;
4.2.1:安装 keystone
]# yum install openstack-keystone httpd mod_wsgi -y
openstack-keystone 是 keystone 服务;
httpd 是 web 服务;
mod_wsgi 是 Python 应用的 WSGI 网关模块;
4.2.2:编辑 keystone 配置文件
- 生成临时token:
]# openssl rand -hex 10
659cdc8e1afd2b113c8b
- 编辑 /etc/keystone/keystone.conf :
]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 659cdc8e1afd2b113c8b
[database]
connection = mysql+pymysql://keystone:123456@vip100.yqc.com/keystone
[token]
provider = fernet
4.2.3:初始化 keystone 数据库
- 初始化数据库:
]# su -s /bin/sh -c "keystone-manage db_sync" keystone
- 验证结果:
]# mysql -hvip100 -ukeystone -p
MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone |
+------------------------+
| access_token |
| assignment |
| config_register |
| consumer |
| credential |
| endpoint |
| endpoint_group |
| federated_user |
| federation_protocol |
……
keystone的日志文件为:/var/log/keystone/keystone.log;
4.2.4:初始化 Fernet key
- 初始化证书:
]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
- 验证:
]# ll /etc/keystone/fernet-keys/
total 8
-rw------- 1 keystone keystone 44 Oct 31 18:51 0
-rw------- 1 keystone keystone 44 Oct 31 18:51 1
官方文档采用 keystone-manage bootstrap 命令自动初始化keystone认证服务;初次安装建议不执行该命令,而是手动完成各步骤,以便熟悉openstack;并且该命令需要指定ADMIN_PASS,而到目前为止还未设置admin用户的密码;
以下为官方文档自动初始化keystone认证服务命令:
Bootstrap the Identity service
]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
4.2.5:打包 keystone 配置目录
]# cd /etc/keystone/
]# tar zcvf node101-keystone.tar.gz ./*
4.3:安装及配置 keystone(node102)
4.3.1:安装 keystone
]# yum install openstack-keystone httpd mod_wsgi -y
4.3.2:同步 node101 的 keystone 配置
]# cd /etc/keystone/
]# scp node101:/etc/keystone/node101-keystone.tar.gz ./
]# tar zxvf node101-keystone.tar.gz
4.4:配置 Apache 服务(两台)
通过 Apache(mod_wsgi)运行 keystone 的 Python 应用;
4.4.1:编辑httpd主配置文件:
- node101配置:
]# vim /etc/httpd/conf/httpd.conf
ServerName 192.168.1.101:80
- node102配置
]# vim /etc/httpd/conf/httpd.conf
ServerName 192.168.1.102:80
4.4.2:软链接 wsgi-keystone.conf 至 httpd 配置文件目录中:
]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
4.4.3:启动 httpd 并验证:
]# systemctl start httpd
]# systemctl enable httpd
]# ss -tnl | egrep "(5000|80|35357)"
LISTEN 0 511 :::35357 :::*
LISTEN 0 511 :::5000 :::*
LISTEN 0 511 :::80 :::*
4.5:创建域、项目、用户和角色
4.5.1:设置环境变量
设置包含临时token的环境变量,以获得admin权限;
仅需在执行操作的控制端上设置即可;两个控制端对应的是同一套Openstack,以下创建操作只需执行一次;
- 这里设置在node101上:
]# export OS_TOKEN=659cdc8e1afd2b113c8b
]# export OS_URL=http://node101.yqc.com:35357/v3
]# export OS_IDENTITY_API_VERSION=3
一定要在设置环境变量之后再进行后续操作;
4.5.2:创建域
语法:openstack domain create --description “描述信息” 域名;
创建default域
]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 6917eaeda8b04ebe9dc41e023f5868ea |
| name | default |
+-------------+----------------------------------+
4.5.3:创建项目
语法:openstack project create --domain 域 --description “描述” 项目名;
创建admin项目
]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | bcee9729f8c8470eafea545466d5f855 |
| is_domain | False |
| name | admin |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
创建service项目
]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 37cd35560d4e4622a83673327b57bef7 |
| is_domain | False |
| name | service |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
创建demo项目
该项目可用于演示和测试等;
]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 9cf63e46aed845879746d9b55eb0a965 |
| is_domain | False |
| name | demo |
| parent_id | 6917eaeda8b04ebe9dc41e023f5868ea |
+-------------+----------------------------------+
4.5.4:创建用户
创建admin用户
]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
| name | admin |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
创建demo用户
]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 3705d5392dfd4907b226e37b53e39112 |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
4.5.5:创建角色
一个项目里面可以有多个角色;
目前只能创建 /etc/keystone/policy.json 文件中已定义的角色;
创建admin角色
]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 2c9f9ca5a58f4b33be77e8fb7adc7e89 |
| name | admin |
+-----------+----------------------------------+
创建user角色
]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 70e281d91b524a888280bbfb58683c7b |
| name | user |
+-----------+----------------------------------+
4.5.6:授权用户
- 将admin角色赋予admin项目的admin用户:
]# openstack role add --project admin --user admin admin
- 将user角色赋予demo项目的demo用户:
]# openstack role add --project demo --user demo user
4.6:创建各组件用户
4.6.1:glance用户
- 创建glance用户:
]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 2ca24824fb8a41d083021766dbe55ad6 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- 将glance用户添加到service项目并授权admin角色:
]# openstack role add --project service --user glance admin
4.6.2:nova用户
- 创建nova用户:
]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | e431251a86854294b2ebb32872c83ad6 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- 将nova用户添加到service项目并授权admin角色:
]# openstack role add --project service --user nova admin
4.6.3:placement用户
- 创建placement用户:
]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 1e1f2bdd24ca4faab3304ed4fe574037 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- 将placement用户添加到service项目并授权admin角色:
]# openstack role add --project service --user placement admin
4.6.4:neutron用户
- 创建neutron用户:
]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | 6917eaeda8b04ebe9dc41e023f5868ea |
| enabled | True |
| id | 9861e7b9516542dd8879d535c8ec76b1 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
- 将neutron用户添加到service项目并授权admin角色:
]# openstack role add --project service --user neutron admin
4.7:创建keystone认证服务并注册
将 keystone 服务地址注册到 openstack;
4.7.1:创建keystone认证服务
]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 63994bdfcca54de8a8da4218c0f523d7 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
]# openstack service list
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| 63994bdfcca54de8a8da4218c0f523d7 | keystone | identity |
+----------------------------------+----------+----------+
4.7.2:创建endpoint
注意:端点地址使用指向VIP的域名;
- 创建公共端点:
]# openstack endpoint create --region RegionOne identity public http://vip100.yqc.com:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4d0a914ef9414e578a66ca8ab6a8ac21 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5191765d1b344541b2d7de7fc40f361d |
| service_name | keystone |
| service_type | identity |
| url | http://vip100.yqc.com:5000/v3 |
+--------------+----------------------------------+
- 创建内部端点:
]# openstack endpoint create --region RegionOne identity internal http://vip100.yqc.com:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0ce4c62b38784c9aacea0eaef29f5581 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5191765d1b344541b2d7de7fc40f361d |
| service_name | keystone |
| service_type | identity |
| url | http://vip100.yqc.com:5000/v3 |
+--------------+----------------------------------+
- 创建管理端点:
]# openstack endpoint create --region RegionOne identity admin http://vip100.yqc.com:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 23e84e5911bd4111972e277d295a766a |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5191765d1b344541b2d7de7fc40f361d |
| service_name | keystone |
| service_type | identity |
| url | http://vip100.yqc.com:35357/v3 |
+--------------+----------------------------------+
- 验证:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone | identity | True | admin | http://vip100.yqc.com:35357/v3 |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone | identity | True | internal | http://vip100.yqc.com:5000/v3 |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone | identity | True | public | http://vip100.yqc.com:5000/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
4.8:HAProxy 配置 keystone 代理
- 配置HAProxy:
]# vim /etc/haproxy/haproxy.cfg
#openstack-keystone================================================================
listen keystone-public-internal-url
  bind 192.168.1.100:5000
  mode tcp
  log global
  balance source
  server keystone1 192.168.1.101:5000 check inter 5000 rise 3 fall 3
  server keystone2 192.168.1.102:5000 check inter 5000 rise 3 fall 3
listen keystone-admin-url
  bind 192.168.1.100:35357
  mode tcp
  log global
  balance source
  server keystone1 192.168.1.101:35357 check inter 5000 rise 3 fall 3
  server keystone2 192.168.1.102:35357 check inter 5000 rise 3 fall 3
- 重载HAProxy:
]# systemctl reload haproxy
- 验证:
]# telnet 192.168.1.100 5000
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
]# telnet 192.168.1.100 35357
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
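上面 keystone 的两个 listen 段结构完全相同,只是端口不同;后续各组件的代理也遵循同一模板,可用脚本按"名称 端口"批量生成(示意脚本,server 命名方式为演示用,VIP与后端地址沿用本文环境):

```shell
#!/bin/bash
# 按统一模板生成结构相同的 HAProxy listen 段(示意);
VIP="192.168.1.100"
BACKENDS="192.168.1.101 192.168.1.102"

gen_listen() {
  # $1=listen段名称 $2=端口;为每个后端生成一条 server 行;
  local name="$1" port="$2" i=1
  echo "listen $name"
  echo "  bind $VIP:$port"
  echo "  mode tcp"
  echo "  log global"
  echo "  balance source"
  for b in $BACKENDS; do
    echo "  server ${name}$i $b:$port check inter 5000 rise 3 fall 3"
    i=$((i+1))
  done
}

{
  gen_listen keystone-public-internal-url 5000
  gen_listen keystone-admin-url 35357
} > keystone-haproxy.cfg
```

生成的片段可追加到 /etc/haproxy/haproxy.cfg 后再 reload;新增组件代理时只需多调用一次 gen_listen。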
4.9:验证keystone认证服务
必须新打开一个终端会话做验证操作,因为之前的会话中已设置OS_TOKEN等环境变量,会干扰密码认证;
这里直接在node102上做验证;
- 设置OS_IDENTITY_API_VERSION变量:
]# export OS_IDENTITY_API_VERSION=3
4.9.1:验证admin用户
- admin用户的验证使用35357端口:
]# openstack --os-auth-url http://vip100.yqc.com:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:16:28+0000 |
| id | gAAAAABfoL18MNpyCH6AQ2IsgjBs0TdtHxWkBK10pXUDMdX22nqQxPjYBpEAzxyOT3JOmMfcpXx8ZR1TGvhuKPvI5IXUVOd3QmcbRMmUrrylhPTWk_ItEUqYeUUmsVI43IBe-_v5HVrE5WgHaNt- |
| | TCsKs0k-sgZeCEZL1xM6etUikERRSMoqVhc |
| project_id | bcee9729f8c8470eafea545466d5f855 |
| user_id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
4.9.2:验证demo用户
- demo用户的验证使用5000端口:
]# openstack --os-auth-url http://vip100.yqc.com:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Password:
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:25:36+0000 |
| id | gAAAAABfoL-gGJ02jBYOeX9_qsfK776Y4_lWqc6SjUF45rwMLi48CE3O7Okq9_oP6MAw0QCvn2jnAduleH3EZ- |
| | qmlE7hYWccNDN4goLMAIhKlyZwknb_cLe7AzfQm5HvM4W2OJEJxrDtJhsSamhyN4KPB6bcN_NYU-rVzGWOeipA0NJ8KEXNZbg |
| project_id | 9cf63e46aed845879746d9b55eb0a965 |
| user_id | 3705d5392dfd4907b226e37b53e39112 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
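Under the hood, both `openstack token issue` calls simply POST a password-auth JSON body to keystone's v3 `/auth/tokens` endpoint. As a hedged sketch (the helper name is my own; the hosts, users and passwords are this lab's values), the request body can be built and inspected like this:

```shell
#!/bin/bash
# Build the keystone v3 password-auth request body that `openstack token issue`
# sends under the hood. Values match this lab (domain "default", password auth).
build_auth_body() {
  local user="$1" project="$2" password="$3"
  cat <<EOF
{"auth": {"identity": {"methods": ["password"],
  "password": {"user": {"name": "${user}",
    "domain": {"name": "default"}, "password": "${password}"}}},
  "scope": {"project": {"name": "${project}", "domain": {"name": "default"}}}}
EOF
}

# Print the body for the admin user (not sent anywhere here):
build_auth_body admin admin 123456
```

Piping this body to `curl -si -d @- -H 'Content-Type: application/json' http://vip100.yqc.com:35357/v3/auth/tokens` should return the token in the `X-Subject-Token` response header.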
4.10: Set environment variables with scripts
4.10.1: admin user script
- Create the script:
]# vim admin-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://vip100.yqc.com:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
]# chmod a+x admin-ocata.sh
- Test the script:
On success, authentication completes without prompting for a password.
]# source admin-ocata.sh
]# openstack --os-auth-url http://vip100.yqc.com:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:27:59+0000 |
| id | gAAAAABfoMAvSPY1dbMTCPeqxwqO9PSgjI1sgAywi7wfxsJmlj1dGRft24GYkmFbTQ6RGJ9QWXsHqWQClELOHMiXhBELNa3KkWTvhc5PljzS- |
| | U_0diHmUFeB5uFoMzj71ACaiPazKCijNYCvrGkl4I_n9oXJ80fDUtHThA4_10h2CNDZuDRhkDc |
| project_id | bcee9729f8c8470eafea545466d5f855 |
| user_id | 8a42f4ea98184e0f8e677d2fc1ae9fc1 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
4.10.2: demo user script
- Create the script:
]# vim demo-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://vip100.yqc.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
]# chmod a+x demo-ocata.sh
- Test the script:
]# source demo-ocata.sh
]# openstack --os-auth-url http://vip100.yqc.com:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-11-03T03:29:05+0000 |
| id | gAAAAABfoMBxP1_MrsiddGcjmm8eIcyM8FNChDM_bB-HIMy4ltrZqZshctIOiQd_qUaPd5-GAHNzjGCS2ti7F0ODcq8aIN9uejBgeR5Qx-gHC67FJJSTX9qHpIn144ugvjxwhnrvz5kg0O05-- |
| | Vd6TGd8AmJ48UzVkn7qWIfFmye7cGR_V_tD8s |
| project_id | 9cf63e46aed845879746d9b55eb0a965 |
| user_id | 3705d5392dfd4907b226e37b53e39112 |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
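Both RC scripts export the same set of OS_* variables, so a small helper (the function name is my own convention, not part of any OpenStack tooling) can confirm that a script defines everything the client needs before it is sourced in a live session:

```shell
#!/bin/bash
# Check that an OpenStack RC script exports every variable the client expects.
# The file is sourced in a subshell, so the current session stays clean.
check_rc() {
  local rc_file="$1" missing=0 var
  for var in OS_PROJECT_DOMAIN_NAME OS_USER_DOMAIN_NAME OS_PROJECT_NAME \
             OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_IDENTITY_API_VERSION; do
    # Source the file inside the $() subshell, then expand the variable.
    if [ -z "$(. "$rc_file" >/dev/null 2>&1; eval echo "\$$var")" ]; then
      echo "MISSING: $var"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "OK: $rc_file"
}
```

Running `check_rc admin-ocata.sh` should print `OK: admin-ocata.sh` when nothing is missing, and list each absent variable otherwise.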
5: Deploy the glance image service
5.1: Prepare the glance database
5.1.1: Create the database and grant privileges to the glance user
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
5.1.2: Verify remote database connectivity
- Connect to the database remotely from controller node101:
]# mysql -hvip100 -uglance -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| glance |
+--------------------+
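The database/user pattern above repeats for every service in this guide (glance here, nova and its two extra databases in section 6). A small generator, assuming this lab's convention that the user name equals the database name and the host is '%', keeps the statements consistent; pipe its output into `mysql -uroot -p`:

```shell
#!/bin/bash
# Emit the CREATE/GRANT statements for one OpenStack service database,
# following this lab's convention: user name == db name, host '%'.
gen_service_db_sql() {
  local db="$1" password="$2"
  cat <<EOF
CREATE DATABASE ${db};
GRANT ALL ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${password}';
FLUSH PRIVILEGES;
EOF
}

# Example: gen_service_db_sql glance 123456 | mysql -uroot -p
gen_service_db_sql glance 123456
```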
5.2: Create and register the glance image service
5.2.1: Set the admin user environment variables
- This time the operations are performed on node102, to verify that both controllers can manage OpenStack:
]# source admin-ocata.sh
5.2.2: Create the glance image service
]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | bd1616aed2b542bd8ddfdf58552c5e05 |
| name | glance |
| type | image |
+-------------+----------------------------------+
5.2.3: Create the endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne image public http://vip100.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 42ef7c524ead4f879778f7983f4d23ac |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 780ae9727e7941049c8e515a7798748b |
| service_name | glance |
| service_type | image |
| url | http://vip100.yqc.com:9292 |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne image internal http://vip100.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3a79c5a9b66a4a9d91a84ef7b64a15a4 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 780ae9727e7941049c8e515a7798748b |
| service_name | glance |
| service_type | image |
| url | http://vip100.yqc.com:9292 |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne image admin http://vip100.yqc.com:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 96db5eff0602442abb34d2ff443230f3 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 780ae9727e7941049c8e515a7798748b |
| service_name | glance |
| service_type | image |
| url | http://vip100.yqc.com:9292 |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| 0ce4c62b38784c9aacea0eaef29f5581 | RegionOne | keystone | identity | True | internal | http://vip100.yqc.com:5000/v3 |
| 23e84e5911bd4111972e277d295a766a | RegionOne | keystone | identity | True | admin | http://vip100.yqc.com:35357/v3 |
| 3a79c5a9b66a4a9d91a84ef7b64a15a4 | RegionOne | glance | image | True | internal | http://vip100.yqc.com:9292 |
| 42ef7c524ead4f879778f7983f4d23ac | RegionOne | glance | image | True | public | http://vip100.yqc.com:9292 |
| 4d0a914ef9414e578a66ca8ab6a8ac21 | RegionOne | keystone | identity | True | public | http://vip100.yqc.com:5000/v3 |
| 96db5eff0602442abb34d2ff443230f3 | RegionOne | glance | image | True | admin | http://vip100.yqc.com:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
5.3: Install and configure glance (node101)
5.3.1: Install glance
]# yum install -y openstack-glance
5.3.2: Edit the glance configuration files
- Edit /etc/glance/glance-api.conf:
]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123456@vip100.yqc.com/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
- Edit /etc/glance/glance-registry.conf:
]# vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:123456@vip100.yqc.com/glance
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
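Note that the [database], [keystone_authtoken] and [paste_deploy] sections are identical in glance-api.conf and glance-registry.conf. A convenience sketch (not an official tool; the function name and arguments are mine) can print the shared fragment once so both files stay in sync:

```shell
#!/bin/bash
# Print the config sections shared by glance-api.conf and glance-registry.conf.
gen_glance_common() {
  local vip="$1" dbpass="$2" svcpass="$3"
  cat <<EOF
[database]
connection = mysql+pymysql://glance:${dbpass}@${vip}/glance
[keystone_authtoken]
auth_uri = http://${vip}:5000
auth_url = http://${vip}:35357
memcached_servers = ${vip}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = ${svcpass}
[paste_deploy]
flavor = keystone
EOF
}

# Example: gen_glance_common vip100.yqc.com 123456 123456 >> /etc/glance/glance-registry.conf
gen_glance_common vip100.yqc.com 123456 123456
```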
5.3.3: Initialize the glance database
- Initialize the database:
]# su -s /bin/sh -c "glance-manage db_sync" glance
- Verify the result:
]# mysql -hvip100 -uglance -p
MariaDB [(none)]> use glance;
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance |
+----------------------------------+
| alembic_version |
| artifact_blob_locations |
| artifact_blobs |
| artifact_dependencies |
| artifact_properties |
| artifact_tags |
| artifacts |
| image_locations |
……
5.3.4: Mount the NFS storage
- Create the glance image directory:
]# mkdir -pv /var/lib/glance/images
]# chown glance:glance /var/lib/glance/images
- Mount the NFS storage on each of the two controllers:
]# vim /etc/fstab
node107:/openstack/glance /var/lib/glance/images nfs defaults,_netdev 0 0
]# mount -a
]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 49G 1.8G 47G 4% /
devtmpfs devtmpfs 1.4G 0 1.4G 0% /dev
tmpfs tmpfs 1.4G 0 1.4G 0% /dev/shm
tmpfs tmpfs 1.4G 8.7M 1.4G 1% /run
tmpfs tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/sda1 xfs 485M 124M 362M 26% /boot
tmpfs tmpfs 284M 0 284M 0% /run/user/0
node107:/openstack/glance nfs4 49G 1.7G 47G 4% /var/lib/glance/images
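A wrong mount point in /etc/fstab would silently leave uploaded images on local disk, so the entry can be sanity-checked before running `mount -a`. The function below is a sketch (field layout per fstab(5)) that takes the fstab path as an argument, which makes it easy to dry-run:

```shell
#!/bin/bash
# Verify that an fstab file maps an NFS export onto the glance image directory.
check_glance_fstab() {
  local fstab="$1"
  # fstab fields: device  mountpoint  fstype  options  dump  pass
  awk '$3 == "nfs" && $2 == "/var/lib/glance/images" {
         print "found: " $1 " -> " $2 " (" $4 ")"; ok = 1 }
       END { exit ok ? 0 : 1 }' "$fstab"
}

# Example: check_glance_fstab /etc/fstab && mount -a
```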
5.3.5: Start glance and verify the ports
- Start glance and enable it at boot:
]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
]# systemctl start openstack-glance-api.service openstack-glance-registry.service
- Verify the ports:
]# ss -tnl | egrep '(9292|9191)'
LISTEN 0 4096 *:9292 *:*
LISTEN 0 4096 *:9191 *:*
glance-api listens on port 9292; glance-registry listens on port 9191.
5.3.6: glance logs
]# ll /var/log/glance/
total 8
-rw-r--r-- 1 glance glance 2698 Nov 3 11:11 api.log
-rw-r--r-- 1 glance glance 2100 Nov 3 11:11 registry.log
5.3.7: Archive the glance configuration directory
]# cd /etc/glance/
]# tar zcvf node101-glance.tar.gz ./*
5.4: Install and configure glance (node102)
5.4.1: Install glance
]# yum install -y openstack-glance
5.4.2: Sync the glance configuration from node101
]# cd /etc/glance/
]# scp node101:/etc/glance/node101-glance.tar.gz ./
]# tar zxvf node101-glance.tar.gz
5.4.3: Mount the NFS storage
- Create the glance image directory:
]# mkdir -pv /var/lib/glance/images
]# chown glance:glance /var/lib/glance/images
- Mount the NFS storage on each of the two controllers:
]# vim /etc/fstab
node107:/openstack/glance /var/lib/glance/images nfs defaults,_netdev 0 0
]# mount -a
]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 49G 1.8G 47G 4% /
devtmpfs devtmpfs 1.4G 0 1.4G 0% /dev
tmpfs tmpfs 1.4G 0 1.4G 0% /dev/shm
tmpfs tmpfs 1.4G 8.7M 1.4G 1% /run
tmpfs tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/sda1 xfs 485M 124M 362M 26% /boot
tmpfs tmpfs 284M 0 284M 0% /run/user/0
node107:/openstack/glance nfs4 49G 1.7G 47G 4% /var/lib/glance/images
5.4.4: Start glance and verify the ports
- Start glance and enable it at boot:
]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
]# systemctl start openstack-glance-api.service openstack-glance-registry.service
- Verify the ports:
]# ss -tnl | egrep '(9292|9191)'
LISTEN 0 4096 *:9292 *:*
LISTEN 0 4096 *:9191 *:*
5.5: Configure HAProxy for glance
- Configure HAProxy:
]# vim /etc/haproxy/haproxy.cfg
#openstack-glance================================================================
listen glance-api
bind 192.168.1.100:9292
mode tcp
log global
balance source
server glance1 192.168.1.101:9292 check inter 5000 rise 3 fall 3
server glance2 192.168.1.102:9292 check inter 5000 rise 3 fall 3
listen glance-registry
bind 192.168.1.100:9191
mode tcp
log global
balance source
server glance1 192.168.1.101:9191 check inter 5000 rise 3 fall 3
server glance2 192.168.1.102:9191 check inter 5000 rise 3 fall 3
- Reload HAProxy:
]# systemctl reload haproxy
- Verify:
]# telnet vip100 9191
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
]# telnet vip100 9292
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
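The interactive telnet checks can also be scripted with bash's built-in /dev/tcp pseudo-device (a bash-specific feature; the `timeout` wrapper guards against firewalled ports that neither accept nor refuse the connection):

```shell
#!/bin/bash
# Probe a TCP port using bash's /dev/tcp; prints "open" or "closed" per port.
check_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Example for the glance ports behind the VIP:
#   for p in 9191 9292; do check_port vip100 "$p"; done
```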
5.6: Verify the glance image service
5.6.1: Create an image
- Download the cirros test image:
]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
- Create an image named cirros:
]# source admin-ocata.sh
]# openstack image create "cirros" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-11-03T06:02:34Z |
| disk_format | qcow2 |
| file | /v2/images/3dfd3361-7d85-4342-afcf-9532bcddd3d1/file |
| id | 3dfd3361-7d85-4342-afcf-9532bcddd3d1 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | bcee9729f8c8470eafea545466d5f855 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-11-03T06:02:34Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
Note: the image could be created successfully on node101, but the creation failed on node102.
5.6.2: Verify the image
- List the images:
]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros |
+--------------------------------------+--------+
]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | active |
+--------------------------------------+--------+--------+
- Show the image details:
]# openstack image show cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2020-11-03T06:02:34Z |
| disk_format | qcow2 |
| file | /v2/images/3dfd3361-7d85-4342-afcf-9532bcddd3d1/file |
| id | 3dfd3361-7d85-4342-afcf-9532bcddd3d1 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | bcee9729f8c8470eafea545466d5f855 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2020-11-03T06:02:34Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
6: Deploy the nova controller
6.1: Prepare the nova databases
6.1.1: Create the databases and grant privileges to the nova user
- Create the nova_api database:
]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Create the nova database:
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Create the nova_cell0 database:
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
- Flush privileges:
MariaDB [(none)]> flush privileges;
6.1.2: Verify remote database connectivity
]# mysql -hvip100 -unova -p
Enter password:
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| nova |
| nova_api |
| nova_cell0 |
+--------------------+
4 rows in set (0.00 sec)
6.2: Create and register the nova compute service
6.2.1: Set the admin user environment variables
]# source admin-ocata.sh
6.2.2: Create the nova compute service
]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | d43200a4b66e44f3847e74e8549e4bf2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
6.2.3: Create the endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne compute public http://vip100.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 779de714e4b84d6d810331c895e6dbb8 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url          | http://vip100.yqc.com:8774/v2.1  |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne compute internal http://vip100.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c8f19491bb1a4c9fa6857fc1e259953c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url          | http://vip100.yqc.com:8774/v2.1  |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne compute admin http://vip100.yqc.com:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6d775e5928844b59809e2d8705dda6c1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d43200a4b66e44f3847e74e8549e4bf2 |
| service_name | nova |
| service_type | compute |
| url          | http://vip100.yqc.com:8774/v2.1  |
+--------------+----------------------------------+
6.3: Create and register the placement service
6.3.1: Set the admin user environment variables
]# source admin-ocata.sh
6.3.2: Create the placement service
]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | d67215a5119f438f8c94a7624f67c6f9 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
6.3.3: Create the endpoints
- Create the public endpoint:
]# openstack endpoint create --region RegionOne placement public http://vip100.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 89d36c4ff6d64491bec7b1efd2ba765a |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url          | http://vip100.yqc.com:8778       |
+--------------+----------------------------------+
- Create the internal endpoint:
]# openstack endpoint create --region RegionOne placement internal http://vip100.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 304693e971e548c4a2c47321268eb2ad |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url          | http://vip100.yqc.com:8778       |
+--------------+----------------------------------+
- Create the admin endpoint:
]# openstack endpoint create --region RegionOne placement admin http://vip100.yqc.com:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c10ce3f5bf14431a897eede9335cf36c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | d67215a5119f438f8c94a7624f67c6f9 |
| service_name | placement |
| service_type | placement |
| url          | http://vip100.yqc.com:8778       |
+--------------+----------------------------------+
- Verify:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
| 0ea4e98a2f8a4f82b919bbfe98992986 | RegionOne | keystone     | identity     | True    | admin     | http://vip100.yqc.com:35357/v3   |
| 1d70ad2fdcfa420da1237f60d0993520 | RegionOne | keystone     | identity     | True    | internal  | http://vip100.yqc.com:5000/v3    |
| 304693e971e548c4a2c47321268eb2ad | RegionOne | placement    | placement    | True    | internal  | http://vip100.yqc.com:8778       |
| 51fc58ff766d446aa8a3420babe85690 | RegionOne | glance       | image        | True    | internal  | http://vip100.yqc.com:9292       |
| 6d775e5928844b59809e2d8705dda6c1 | RegionOne | nova         | compute      | True    | admin     | http://vip100.yqc.com:8774/v2.1  |
| 7351f018a87344e48f44cec769014f10 | RegionOne | keystone     | identity     | True    | public    | http://vip100.yqc.com:5000/v3    |
| 779de714e4b84d6d810331c895e6dbb8 | RegionOne | nova         | compute      | True    | public    | http://vip100.yqc.com:8774/v2.1  |
| 85a7d40a19ef45a69b50a0e473481f1c | RegionOne | glance       | image        | True    | admin     | http://vip100.yqc.com:9292       |
| 89d36c4ff6d64491bec7b1efd2ba765a | RegionOne | placement    | placement    | True    | public    | http://vip100.yqc.com:8778       |
| c10ce3f5bf14431a897eede9335cf36c | RegionOne | placement    | placement    | True    | admin     | http://vip100.yqc.com:8778       |
| c8f19491bb1a4c9fa6857fc1e259953c | RegionOne | nova         | compute      | True    | internal  | http://vip100.yqc.com:8774/v2.1  |
| da2c7440ddda44a9a43de718e2b24e55 | RegionOne | glance       | image        | True    | public    | http://vip100.yqc.com:9292       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------+
6.4: Install and configure the nova controller (node101)
6.4.1: Install the nova controller packages
]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
6.4.2: Edit the nova controller configuration file
]# vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:123456@node107.yqc.com
rpc_backend=rabbit
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:123456@vip100.yqc.com/nova_api
[database]
connection = mysql+pymysql://nova:123456@vip100.yqc.com/nova
[glance]
api_servers=http://vip100.yqc.com:9292
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://vip100.yqc.com:35357/v3
username = placement
password = 123456
[vnc]
enabled=true
vncserver_listen=192.168.1.101
vncserver_proxyclient_address=192.168.1.101
6.4.3: Configure Apache to allow access to the placement API
- Append the following configuration to the bottom of 00-nova-placement-api.conf:
]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
- Restart the httpd service:
]# systemctl restart httpd
6.4.4: Initialize the databases
- Populate the nova_api database:
]# su -s /bin/sh -c "nova-manage api_db sync" nova
- Register the nova cell0 database:
]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
- Create cell1:
]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
7016aa29-4ed7-4926-b46a-4ab1b21f6178
- Populate the nova database:
]# su -s /bin/sh -c "nova-manage db sync" nova
- Verify that nova cell0 and cell1 registered correctly:
]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |
| cell1 | 575bc0ae-7ec2-4716-8a1c-68b50a6774dc |
+-------+--------------------------------------+
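For deployment scripts, the cell table above can also be checked non-interactively instead of by eye. The helper below (my own sketch) reads `nova-manage cell_v2 list_cells` style output on stdin and succeeds only when both required cells are present:

```shell
#!/bin/bash
# Check `nova-manage cell_v2 list_cells` output (read on stdin) for both cells.
verify_cells() {
  local table
  table=$(cat)
  echo "$table" | grep -q '| cell0 |' &&
  echo "$table" | grep -q '| cell1 |' &&
  echo "both cells registered"
}

# Example:
#   su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova | verify_cells
```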
6.4.5: Start the nova controller services
- Start the nova controller services and enable them at boot:
]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
- Verify the ports:
]# ss -tnl | egrep '(6080|8774|8775)'
LISTEN 0 100 *:6080 *:*
LISTEN 0 128 *:8774 *:*
LISTEN 0 128 *:8775 *:*
nova-novncproxy listens on port 6080; nova-api listens on ports 8774 and 8775.
6.4.6: Write a nova controller restart script
]# vim nova-restart.sh
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# chmod a+x nova-restart.sh
6.4.7: nova controller logs
]# ll /var/log/nova
total 60
-rw-r--r-- 1 nova nova 6723 Nov 3 15:07 nova-api.log
-rw-r--r-- 1 nova nova 1468 Nov 3 15:07 nova-conductor.log
-rw-r--r-- 1 nova nova 1049 Nov 3 15:07 nova-consoleauth.log
-rw-r--r-- 1 nova nova 36213 Nov 3 14:55 nova-manage.log
-rw-r--r-- 1 nova nova 899 Nov 3 15:07 nova-novncproxy.log
-rw-r--r-- 1 root root 0 Nov 3 14:49 nova-placement-api.log
-rw-r--r-- 1 nova nova 1193 Nov 3 15:07 nova-scheduler.log
6.5: Install and configure the nova controller (node102)
6.5.1: Install the nova controller packages
]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
6.5.2: Sync the nova configuration files from node101
- Copy the configuration files from node101:
]# scp node101:/etc/nova/nova.conf /etc/nova/nova.conf
]# scp node101:/etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf
- Update the vnc configuration:
]# vim /etc/nova/nova.conf
[vnc]
enabled=true
vncserver_listen=192.168.1.102
vncserver_proxyclient_address=192.168.1.102
- Restart the httpd service:
]# systemctl restart httpd
6.5.3: Start the nova controller services
- Start the nova controller services and enable them at boot:
]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
- Verify the ports:
]# ss -tnl | egrep '(6080|8774|8775)'
LISTEN 0 100 *:6080 *:*
LISTEN 0 128 *:8774 *:*
LISTEN 0 128 *:8775 *:*
nova-novncproxy listens on port 6080; nova-api listens on ports 8774 and 8775.
6.5.4: Write a nova controller restart script
]# vim nova-restart.sh
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
]# chmod a+x nova-restart.sh
6.6: Configure HAProxy for nova
- Configure HAProxy:
]# vim /etc/haproxy/haproxy.cfg
#openstack-nova================================================================
listen nova-vnc
bind 192.168.1.100:6080
mode tcp
log global
balance source
server nova1 192.168.1.101:6080 check inter 5000 rise 3 fall 3
server nova2 192.168.1.102:6080 check inter 5000 rise 3 fall 3
listen nova-endpoints
bind 192.168.1.100:8774
mode tcp
log global
balance source
server nova1 192.168.1.101:8774 check inter 5000 rise 3 fall 3
server nova2 192.168.1.102:8774 check inter 5000 rise 3 fall 3
listen nova-api
bind 192.168.1.100:8775
mode tcp
log global
balance source
server nova1 192.168.1.101:8775 check inter 5000 rise 3 fall 3
server nova2 192.168.1.102:8775 check inter 5000 rise 3 fall 3
listen placement
bind 192.168.1.100:8778
mode tcp
log global
balance source
server placement1 192.168.1.101:8778 check inter 5000 rise 3 fall 3
server placement2 192.168.1.102:8778 check inter 5000 rise 3 fall 3
- Reload HAProxy:
]# systemctl reload haproxy
- Verify:
]# telnet vip100 6080
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
]# telnet vip100 8774
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
]# telnet vip100 8775
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
6.7: Verify the nova controller
6.7.1: nova service-list
]# nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | node102.yqc.com | internal | enabled | up | 2020-11-08T16:02:01.000000 | - |
| 2 | nova-scheduler | node102.yqc.com | internal | enabled | up | 2020-11-08T16:02:01.000000 | - |
| 3 | nova-conductor | node102.yqc.com | internal | enabled | up | 2020-11-08T16:02:02.000000 | - |
| 5 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-08T16:02:03.000000 | - |
| 11 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-08T16:01:57.000000 | - |
| 12 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-08T16:01:57.000000 | - |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
6.7.2: openstack compute service list
]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | node102.yqc.com | internal | enabled | up | 2020-11-08T16:06:12.000000 |
| 2 | nova-scheduler | node102.yqc.com | internal | enabled | up | 2020-11-08T16:06:13.000000 |
| 3 | nova-conductor | node102.yqc.com | internal | enabled | up | 2020-11-08T16:06:13.000000 |
| 5 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-08T16:06:15.000000 |
| 11 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-08T16:06:10.000000 |
| 12 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-08T16:06:10.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
6.7.3: Check the RabbitMQ connections
Web management address: http://192.168.1.107:15672
Credentials: guest/guest
7: Deploy the nova compute node
- Confirm whether the compute node supports hardware acceleration (a result greater than 0 means it does):
]# egrep -c '(vmx|svm)' /proc/cpuinfo
4
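`egrep -c` counts matching lines, i.e. the number of logical CPUs whose flags include vmx (Intel VT-x) or svm (AMD-V). Wrapped as a function that accepts an alternative cpuinfo path (my own sketch), other scripts can branch on the result:

```shell
#!/bin/bash
# Count CPUs exposing hardware-virtualization flags (vmx = Intel, svm = AMD).
hw_accel_cpus() {
  local cpuinfo="${1:-/proc/cpuinfo}"
  grep -Ec '(vmx|svm)' "$cpuinfo"
}

# Example:
#   [ "$(hw_accel_cpus)" -gt 0 ] && echo "KVM acceleration available"
```

If the count is 0, nova-compute cannot use KVM and has to fall back to pure QEMU emulation.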
7.1: Install and configure the nova compute service
7.1.1: Install the nova compute service
]# yum install openstack-nova-compute -y
The initial installation failed with: "Requires: qemu-kvm-rhev >= 2.9.0".
This was resolved by adding a yum repository:
]# vim /etc/yum.repos.d/CentOS-7-ali.repo
[virt]
name=solve qemu-kvm-rhev >= 2.9.0
baseurl=http://mirrors.sohu.com/centos/7/virt/x86_64/kvm-common/
gpgcheck=0
7.1.2: Edit the nova compute service configuration file
]# vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:123456@node107.yqc.com
[api]
auth_strategy=keystone
[glance]
api_servers=http://vip100.yqc.com:9292
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://vip100.yqc.com:35357/v3
username = placement
password = 123456
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.103
novncproxy_base_url=http://vip100.yqc.com:6080/vnc_auto.html
7.1.3: Start the nova compute service
]# systemctl enable libvirtd.service openstack-nova-compute.service
]# systemctl start libvirtd.service openstack-nova-compute.service
7.2: Add the compute node to the cell database
7.2.1: Confirm on the controller that the compute node is in the database
]# source admin-ocata.sh
]# openstack hypervisor list
+----+---------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+---------------+-------+
| 1 | node103.yqc.com | QEMU | 192.168.1.103 | up |
+----+---------------------+-----------------+---------------+-------+
Why is the Host IP listed here the external address rather than the internal address 172.16.1.103?
7.2.2: Discover the compute node from the controller
- Discover manually with a command:
]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 575bc0ae-7ec2-4716-8a1c-68b50a6774dc
Found 1 computes in cell: 575bc0ae-7ec2-4716-8a1c-68b50a6774dc
Checking host mapping for compute host 'node103.yqc.com': 9dab6c0d-b405-4fac-b811-f54f0c833198
Creating host mapping for compute host 'node103.yqc.com': 9dab6c0d-b405-4fac-b811-f54f0c833198
定期主动发现
]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=300
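改完配置后,可以用下面的小函数从 INI 风格的配置文件中读出指定 section 下某个选项的值,确认改动已写入(ini_get 为本文演示自写的简易实现,OpenStack 并不自带,也不处理续行等复杂语法):

```shell
# ini_get <文件> <section> <key>:打印 INI 文件中指定 section 下 key 的值
ini_get() {
  awk -F '=' -v section="$2" -v key="$3" '
    /^\[/ { in_sec = ($0 == "[" section "]") }          # 进入/离开目标 section
    in_sec && $1 ~ ("^[ \t]*" key "[ \t]*$") {          # 命中目标 key
      gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2; exit
    }' "$1"
}

# 用法示例:确认定期发现间隔已生效
# ini_get /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval
```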
7.2.3:验证 nova 计算节点
]# nova host-list
+-----------------+-------------+----------+
| host_name | service | zone |
+-----------------+-------------+----------+
| node102.yqc.com | consoleauth | internal |
| node102.yqc.com | scheduler | internal |
| node102.yqc.com | conductor | internal |
| node101.yqc.com | conductor | internal |
| node101.yqc.com | consoleauth | internal |
| node101.yqc.com | scheduler | internal |
| node103.yqc.com | compute | nova |
+-----------------+-------------+----------+
]# nova image-list
WARNING: Command image-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-glanceclient or openstackclient instead
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 3dfd3361-7d85-4342-afcf-9532bcddd3d1 | cirros | active |
+--------------------------------------+--------+--------+
- 查看服务组件是否成功注册:
]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:23.000000 |
| 2 | nova-scheduler | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:22.000000 |
| 3 | nova-conductor | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:21.000000 |
| 5 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:18.000000 |
| 11 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:17.000000 |
| 12 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:21.000000 |
| 13 | nova-compute | node103.yqc.com | nova | enabled | up | 2020-11-08T16:21:20.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
]# nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:33.000000 | - |
| 2 | nova-scheduler | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:32.000000 | - |
| 3 | nova-conductor | node102.yqc.com | internal | enabled | up | 2020-11-08T16:21:31.000000 | - |
| 5 | nova-conductor | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:28.000000 | - |
| 11 | nova-consoleauth | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:27.000000 | - |
| 12 | nova-scheduler | node101.yqc.com | internal | enabled | up | 2020-11-08T16:21:31.000000 | - |
| 13 | nova-compute | node103.yqc.com | nova | enabled | up | 2020-11-08T16:21:30.000000 | - |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
- 列出身份认证服务中的 API 端点来验证身份认证服务的连通性
]# openstack catalog list
+-----------+-----------+---------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+---------------------------------------------+
| glance | image | RegionOne |
| | | public: http://vip100.yqc.com:9292 |
| | | RegionOne |
| | | admin: http://vip100.yqc.com:9292 |
| | | RegionOne |
| | | internal: http://vip100.yqc.com:9292 |
| | | |
| nova | compute | RegionOne |
| | | admin: http://vip100.yqc.com:8774/v2.1 |
| | | RegionOne |
| | | internal: http://vip100.yqc.com:8774/v2.1 |
| | | RegionOne |
| | | public: http://vip100.yqc.com:8774/v2.1 |
| | | |
| placement | placement | RegionOne |
| | | admin: http://vip100.yqc.com:8778 |
| | | RegionOne |
| | | internal: http://vip100.yqc.com:8778 |
| | | RegionOne |
| | | public: http://vip100.yqc.com:8778 |
| | | |
| keystone | identity | RegionOne |
| | | internal: http://vip100.yqc.com:5000/v3 |
| | | RegionOne |
| | | public: http://vip100.yqc.com:5000/v3 |
| | | RegionOne |
| | | admin: http://vip100.yqc.com:35357/v3 |
| | | |
+-----------+-----------+---------------------------------------------+
- 检查 cells 和 placement API 是否工作正常
]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+
八:部署 neutron 控制端
8.1:neutron数据库准备
8.1.1:创建数据库并授权neutron用户
]# mysql -uroot -p
Enter password:
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
8.1.2:验证数据库远程连接
]# mysql -hvip100 -uneutron -p
Enter password:
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| neutron |
+--------------------+
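上面的建库、授权步骤也可以写成一个小函数批量生成 SQL,再通过管道交给 mysql 执行(gen_db_sql 为演示用草稿,库名、用户、密码请按实际环境调整):

```shell
# gen_db_sql <库名> <用户> <密码>:生成 OpenStack 服务库的建库与授权 SQL
gen_db_sql() {
  local db="$1" user="$2" pass="$3"
  cat <<EOF
CREATE DATABASE IF NOT EXISTS ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'%' IDENTIFIED BY '${pass}';
FLUSH PRIVILEGES;
EOF
}

# 用法示例(确认输出无误后,交给 mysql 执行):
# gen_db_sql neutron neutron 123456 | mysql -uroot -p
```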
8.2:创建neutron服务并注册
8.2.1:设置admin用户环境变量
]# source admin-ocata.sh
8.2.2:创建neutron服务
]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 0dae6638d0244e4dbd11d2ec679e787a |
| name | neutron |
| type | network |
+-------------+----------------------------------+
8.2.3:创建endpoint
- 创建公共端点:
]# openstack endpoint create --region RegionOne network public http://vip100.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cc5894ab79624b41a78989494f0cfc0d |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://vip100.yqc.com:9696 |
+--------------+----------------------------------+
- 创建内部端点:
]# openstack endpoint create --region RegionOne network internal http://vip100.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b0a90dff6e6d423eae2a4f8820919676 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://vip100.yqc.com:9696 |
+--------------+----------------------------------+
- 创建管理端点:
]# openstack endpoint create --region RegionOne network admin http://vip100.yqc.com:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c7197b1a347741c58643971d8b25e3a6 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0dae6638d0244e4dbd11d2ec679e787a |
| service_name | neutron |
| service_type | network |
| url | http://vip100.yqc.com:9696 |
+--------------+----------------------------------+
- 验证:
]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 183f4a66268e4d598a3c8b8290aa67f1 | RegionOne | keystone | identity | True | internal | http://vip100.yqc.com:5000/v3 |
| 5ffb55cf4775440187526f23f4f503d3 | RegionOne | nova | compute | True | admin | http://vip100.yqc.com:8774/v2.1 |
| 85a23f0ae0c04311a16d90f07216ff04 | RegionOne | placement | placement | True | admin | http://vip100.yqc.com:8778 |
| 86c30e5f28c447a38166b865a1a1abff | RegionOne | glance | image | True | public | http://vip100.yqc.com:9292 |
| 9fbfdad9de49453f8d55222c9077375c | RegionOne | glance | image | True | admin | http://vip100.yqc.com:9292 |
| a31da398d8984b7aa77f1e0e37adcf6d | RegionOne | neutron | network | True | admin | http://vip100.yqc.com:9696 |
| ab154afa15074193a3294810a59fec10 | RegionOne | keystone | identity | True | public | http://vip100.yqc.com:5000/v3 |
| b438d5cca02f4981b78efbaf3f6a5f68 | RegionOne | placement | placement | True | internal | http://vip100.yqc.com:8778 |
| d1a99edcd79b4892bdfe4fb5a935dc39 | RegionOne | glance | image | True | internal | http://vip100.yqc.com:9292 |
| d55a8ed7791d45628ffc880420164103 | RegionOne | nova | compute | True | internal | http://vip100.yqc.com:8774/v2.1 |
| daa4879a83144ce2b8d7348ab0192823 | RegionOne | placement | placement | True | public | http://vip100.yqc.com:8778 |
| e2870074a6284882829642a0eed3e44f | RegionOne | neutron | network | True | internal | http://vip100.yqc.com:9696 |
| eb18d1707bf54ead9dbbef32ad8299b6 | RegionOne | nova | compute | True | public | http://vip100.yqc.com:8774/v2.1 |
| eda129ac99654d4baee45d2495989063 | RegionOne | neutron | network | True | public | http://vip100.yqc.com:9696 |
| ee845eda1fe948e59684a58ede8f3067 | RegionOne | keystone | identity | True | admin | http://vip100.yqc.com:35357/v3 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
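public、internal、admin 三个 endpoint 除 interface 外参数完全相同,可以用循环批量生成创建命令,减少手工重复(gen_endpoint_cmds 为演示用函数,只打印命令不执行;URL、region 为本文环境值):

```shell
# gen_endpoint_cmds <服务类型> <URL>:打印三类 endpoint 的创建命令
gen_endpoint_cmds() {
  local svc="$1" url="$2" iface
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $svc $iface $url"
  done
}

# 用法示例:确认输出无误后,可将其交给 sh 执行
# gen_endpoint_cmds network http://vip100.yqc.com:9696 | sh
```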
8.3:安装及配置neutron控制端(node101 提供者网络)
8.3.1:安装neutron控制端
]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
8.3.2:配置服务组件
]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@node107.yqc.com
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@vip100.yqc.com/neutron
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
8.3.3:配置 Modular Layer 2 (ML2) 插件
ML2 插件使用 Linuxbridge 机制来为实例创建 layer2 虚拟网络基础设施;
]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external,internal
[securitygroup]
enable_ipset = true
8.3.4:配置Linuxbridge代理
Linuxbridge代理为实例建立layer-2虚拟网络并且处理安全组规则。
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = external:eth0,internal:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
8.3.5:配置 DHCP 代理
]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
8.3.6:配置元数据代理
]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = vip100.yqc.com
metadata_proxy_shared_secret = 20201109
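metadata_proxy_shared_secret 的作用:neutron-metadata-agent 用该密钥对实例 ID 做 HMAC-SHA256 签名,放进 X-Instance-ID-Signature 请求头,nova 的 metadata API 用同一密钥重新计算并比对,因此两端密钥必须一致。签名过程可以用 openssl 自行演示(instance ID 为虚构示例):

```shell
# 模拟 neutron-metadata-agent 对 instance-id 的 HMAC-SHA256 签名
secret="20201109"
instance_id="11111111-2222-3333-4444-555555555555"   # 虚构的实例 ID
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$sig"    # 64 位十六进制签名,nova 端用同一密钥计算的结果应与之一致
```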
8.3.7:配置 nova 调用 neutron
]# vim /etc/nova/nova.conf
[neutron]
url = http://vip100.yqc.com:9696
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 20201109
8.3.8:初始化neutron数据库
- 创建软链接
网络服务初始化脚本需要一个符号链接 /etc/neutron/plugin.ini,指向 ML2 插件配置文件 /etc/neutron/plugins/ml2/ml2_conf.ini;
]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- 初始化数据库:
]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- 验证结果:
]# mysql -hvip100 -uneutron -p
Enter password:
MariaDB [(none)]> use neutron;
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
……
8.3.9:重启nova API 服务
- 重启服务:
]# systemctl restart openstack-nova-api.service
- 同时查看日志有无报错:
]# tail -f /var/log/nova/nova-api.log
8.3.10:启动 neutron 服务并设置为开机启动
]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
8.3.11:验证neutron日志有无报错
- neutron日志:
]# ll /var/log/neutron/
total 32
-rw-r--r-- 1 neutron neutron 3543 Nov 3 17:58 dhcp-agent.log
-rw-r--r-- 1 neutron neutron 4735 Nov 3 17:58 linuxbridge-agent.log
-rw-r--r-- 1 neutron neutron 3254 Nov 3 17:58 metadata-agent.log
-rw-r--r-- 1 neutron neutron 14901 Nov 3 17:57 server.log
- 查看有无报错:
]# tail -f /var/log/neutron/*.log
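除了逐个 tail,也可以写一个小函数扫描日志中的 ERROR/CRITICAL 行,便于快速确认各服务启动是否正常(scan_errors 为演示用函数,匹配关键字按 oslo.log 的默认格式写,可按需扩展):

```shell
# scan_errors <日志文件...>:打印日志中的 ERROR/CRITICAL 行,没有报错时返回非 0
scan_errors() {
  grep -hE ' (ERROR|CRITICAL) ' "$@"
}

# 用法示例:
# scan_errors /var/log/neutron/*.log || echo "日志中没有发现报错"
```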
8.3.12:neutron 控制端重启脚本
]# vim neutron-restart.sh
#!/bin/bash
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# chmod a+x neutron-restart.sh
8.3.13:打包 node101 的 neutron 配置目录
]# cd /etc/neutron/
]# tar zcvf node101-neutron.tar.gz ./*
8.4:安装及配置neutron控制端(node101 自服务网络)
8.4.1:安装neutron控制端
]# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
8.4.2:配置服务组件
]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@node107.yqc.com
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@vip100.yqc.com/neutron
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
8.4.3:配置 Modular Layer 2 (ML2) 插件
]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
[ml2_type_vxlan]
vni_ranges = 1:1000
8.4.4:配置Linuxbridge代理
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.101
l2_population = true
8.4.5:配置layer-3代理
]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
8.4.6:配置DHCP代理
]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
8.4.7:配置元数据代理
]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = vip100.yqc.com
metadata_proxy_shared_secret = 20201103
8.4.8:配置 nova 调用 neutron
]# vim /etc/nova/nova.conf
[neutron]
url = http://vip100.yqc.com:9696
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 20201103
8.4.9:初始化neutron数据库
- 创建软链接
网络服务初始化脚本需要一个符号链接 /etc/neutron/plugin.ini,指向 ML2 插件配置文件 /etc/neutron/plugins/ml2/ml2_conf.ini;
]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- 初始化数据库:
]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- 验证结果:
]# mysql -hnode102 -uneutron -p
Enter password:
MariaDB [(none)]> use neutron;
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron |
+-----------------------------------------+
| address_scopes |
| agents |
| alembic_version |
| allowedaddresspairs |
| arista_provisioned_nets |
| arista_provisioned_tenants |
| arista_provisioned_vms |
……
8.4.10:重启nova API 服务
- 重启服务:
]# systemctl restart openstack-nova-api.service
- 同时查看日志有无报错:
]# tail -f /var/log/nova/nova-api.log
8.4.11:启动 neutron 服务并设置为开机启动
自服务网络比提供者网络多启动一个服务neutron-l3-agent.service;
]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
8.5:安装及配置neutron控制端(node102 两种网络类型通用)
8.5.1:安装neutron控制端
]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
8.5.2:同步 node101 的 neutron 配置
]# cd /etc/neutron/
]# scp node101:/etc/neutron/node101-neutron.tar.gz ./
]# tar zxvf node101-neutron.tar.gz
8.5.3:配置 nova 调用 neutron
]# vim /etc/nova/nova.conf
[neutron]
url = http://vip100.yqc.com:9696
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 20201109
8.5.4:重启nova API 服务
- 重启服务:
]# systemctl restart openstack-nova-api.service
- 同时查看日志有无报错:
]# tail -f /var/log/nova/nova-api.log
8.5.5:启动 neutron 服务并设置为开机启动
]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
8.5.6:验证neutron日志有无报错
- 查看有无报错:
]# tail -f /var/log/neutron/*.log
8.5.7:neutron 控制端重启脚本
]# vim neutron-restart.sh
#!/bin/bash
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
]# chmod a+x neutron-restart.sh
8.6:HAProxy 配置 Neutron 代理
- 配置HAProxy:
]# vim /etc/haproxy/haproxy.cfg
#openstack-neutron================================================================
listen neutron
bind 192.168.1.100:9696
mode tcp
log global
balance source
server neutron1 192.168.1.101:9696 check inter 5000 rise 3 fall 3
server neutron2 192.168.1.102:9696 check inter 5000 rise 3 fall 3
- 重载HAProxy:
]# systemctl reload haproxy
- 验证:
]# telnet vip100 9696
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
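telnet 验证需要手动退出,脚本化检查时可以改用 bash 的 /dev/tcp 伪设备做非交互的端口探测(check_port 为演示用函数,3 秒超时是假定值):

```shell
# check_port <主机> <端口>:端口可连通返回 0,否则返回非 0
check_port() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# 用法示例:
# check_port vip100 9696 && echo "neutron 端口正常"
```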
8.7:验证 neutron 控制端
- 验证 neutron 控制端是否注册成功
此步骤要求各服务器时间必须一致;
]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 078744af-fba8-4cc2-9923-3f60281e7960 | Metadata agent | node102.yqc.com | | :-) | True | neutron-metadata-agent |
| 1801b32d-fe32-429b-9a78-bdf1c135ba1a | DHCP agent | node102.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| ac87dcd7-5835-492f-ac15-222e4eac4ff5 | Linux bridge agent | node102.yqc.com | | :-) | True | neutron-linuxbridge-agent |
| d716dc9c-9df0-4291-b429-eb9cc3f9399d | Metadata agent | node101.yqc.com | | :-) | True | neutron-metadata-agent |
| ee4657bd-4c9a-4f9b-80b6-92d0c1e9c5d1 | DHCP agent | node101.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| f9313a15-7854-433d-8de5-492370a34d3e | Linux bridge agent | node101.yqc.com | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
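agent 较多时,可以用下面的小函数从 agent-list 输出中筛出非存活的行(alive 字段为 xxx 或 :-( );解析逻辑按上表的列格式写,属演示草稿:

```shell
# dead_agents:从标准输入的 agent 列表表格中打印 alive 字段为 xxx 或 :-( 的行
dead_agents() {
  grep -E '\| *(xxx|:-\() *\|'
}

# 用法示例:
# neutron agent-list 2>/dev/null | dead_agents || echo "所有 agent 均存活"
```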
九:部署 neutron 计算节点
9.1:安装及配置neutron计算服务
9.1.1:安装neutron计算服务
]# yum install openstack-neutron-linuxbridge ebtables ipset -y
9.1.2:编辑neutron计算服务配置文件
计算节点不直接访问数据库,所以没有[database]配置;
]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@node107.yqc.com
[keystone_authtoken]
auth_uri = http://vip100.yqc.com:5000
auth_url = http://vip100.yqc.com:35357
memcached_servers = vip100.yqc.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
9.1.3:配置 linuxbridge 代理(提供者网络)
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = external:eth0,internal:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
9.1.4:配置 linuxbridge 代理(自服务网络)
]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = external:eth0,internal:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.103
l2_population = true
9.1.5:配置 nova 调用 neutron
- 编辑nova计算服务配置文件:
]# vim /etc/nova/nova.conf
[neutron]
url = http://vip100.yqc.com:9696
auth_url = http://vip100.yqc.com:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
- 重启nova计算服务:
]# systemctl restart openstack-nova-compute.service
9.1.6:启动 neutron 计算服务并设置为开机启动
]# systemctl enable neutron-linuxbridge-agent.service
]# systemctl start neutron-linuxbridge-agent.service
9.1.7:neutron计算服务日志
- 启动neutron计算服务后生成日志文件:
]# ll /var/log/neutron/
total 4
-rw-r--r-- 1 neutron neutron 1667 Nov 5 17:29 linuxbridge-agent.log
- 查看有无报错:
]# tail -f /var/log/neutron/*.log
9.2:验证 neutron 计算节点
9.2.1:控制端验证 neutron 计算节点是否注册成功
]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 078744af-fba8-4cc2-9923-3f60281e7960 | Metadata agent | node102.yqc.com | | :-) | True | neutron-metadata-agent |
| 0e05b142-b268-4352-91e8-4a0292f92e66 | Linux bridge agent | node103.yqc.com | | :-) | True | neutron-linuxbridge-agent |
| 1801b32d-fe32-429b-9a78-bdf1c135ba1a | DHCP agent | node102.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| ac87dcd7-5835-492f-ac15-222e4eac4ff5 | Linux bridge agent | node102.yqc.com | | :-) | True | neutron-linuxbridge-agent |
| d716dc9c-9df0-4291-b429-eb9cc3f9399d | Metadata agent | node101.yqc.com | | :-) | True | neutron-metadata-agent |
| ee4657bd-4c9a-4f9b-80b6-92d0c1e9c5d1 | DHCP agent | node101.yqc.com | nova | :-) | True | neutron-dhcp-agent |
| f9313a15-7854-433d-8de5-492370a34d3e | Linux bridge agent | node101.yqc.com | | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
9.2.2:验证 neutron server 进程是否正常运行
列出加载的扩展来验证 neutron-server 进程是否正常启动;
]# openstack extension list --network
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
| Name | Alias | Description |
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Tag support | tag | Enables to set tag on resources. |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Address scope | address-scope | Address scopes extension. |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources |
| | | that have Neutron standard attributes. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. |
| Neutron Extra DHCP opts | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options |
| | | to DHCP clients can be specified (e.g. tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant |
| | | access to resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
+-------------------------------------------------------------+---------------------------+--------------------------------------------------------------------+
十:部署 dashboard 仪表盘
10.1:安装及配置 dashboard(node101 提供者网络)
10.1.1:安装 dashboard
]# yum install openstack-dashboard -y
10.1.2:编辑 dashboard 配置文件
]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "vip100.yqc.com"
ALLOWED_HOSTS = ['*',]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'vip100.yqc.com:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
……
……
}
TIME_ZONE = "Asia/Shanghai"
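local_settings 本质是一个 Python 模块,改完后建议先做一次语法检查再重启 httpd,避免 dashboard 直接报 500(check_settings 为演示用函数;CentOS 7 下解释器一般是 python2,函数里做了简单探测):

```shell
# check_settings <文件>:用 Python 编译检查 Django 配置文件的语法
check_settings() {
  local py
  py=$(command -v python2 || command -v python || command -v python3) || return 2
  "$py" -m py_compile "$1" && echo "语法检查通过: $1"
}

# 用法示例:
# check_settings /etc/openstack-dashboard/local_settings && systemctl restart httpd.service
```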
10.1.3:重启httpd
]# systemctl restart httpd.service
10.2:安装及配置 dashboard(node102 两种网络类型通用)
10.2.1:安装 dashboard
]# yum install openstack-dashboard -y
10.2.2:同步 node101 的 dashboard 配置文件
]# scp node101:/etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings
10.2.3:重启httpd
]# systemctl restart httpd.service
10.3:配置 HAProxy 代理 dashboard
- 配置HAProxy:
]# vim /etc/haproxy/haproxy.cfg
#openstack-dashboard================================================================
listen dashboard
bind 192.168.1.100:80
mode tcp
log global
balance source
server dashboard1 192.168.1.101:80 check inter 5000 rise 3 fall 3
server dashboard2 192.168.1.102:80 check inter 5000 rise 3 fall 3
- 重载HAProxy:
]# systemctl reload haproxy
- 验证:
]# telnet vip100 80
Trying 192.168.1.100...
Connected to vip100.
Escape character is '^]'.
10.4:验证 dashboard
- 客户端浏览器打开http://192.168.1.100/dashboard
- 登录openstack:
十一:启动实例
11.1:创建虚拟网络(提供者网络)
搭建Openstack时,网络规划如下:
外网网段:192.168.1.0/24;
内网网段:172.16.1.0/24;
11.1.1:导入 admin 凭证
]# source admin-ocata.sh
11.1.2:创建提供者网络
- 创建外部网络 external-net(网段:192.168.1.0/24)
]# openstack network create --share --external --provider-physical-network external --provider-network-type flat external-net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-11-08T18:03:56Z |
| description | |
| dns_domain | None |
| id | 100a6d84-4145-4958-aa44-a611fa5d47d3 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| mtu | 1500 |
| name | external-net |
| port_security_enabled | True |
| project_id | acac1eb6c81540429c3323084bed23d9 |
| provider:network_type | flat |
| provider:physical_network | external |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 4 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| updated_at | 2020-11-08T18:03:56Z |
+---------------------------+--------------------------------------+
- 创建内部网络 internal-net(网段:172.16.1.0/24)
]# openstack network create --share --external --provider-physical-network internal --provider-network-type flat internal-net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-11-06T12:09:27Z |
| description | |
| dns_domain | None |
| id | 42c6d283-d007-4695-b27c-af4179e68857 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| mtu | 1500 |
| name | internal-net |
| port_security_enabled | True |
| project_id | acac1eb6c81540429c3323084bed23d9 |
| provider:network_type | flat |
| provider:physical_network | internal |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 4 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| updated_at | 2020-11-06T12:09:27Z |
+---------------------------+--------------------------------------+
11.1.3: Create subnets on the networks
- Create the external subnet external-sub:
]# openstack subnet create --network external-net \
--allocation-pool start=192.168.1.201,end=192.168.1.250 \
--dns-nameserver 192.168.1.254 --gateway 192.168.1.1 \
--subnet-range 192.168.1.0/24 external-sub
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.1.201-192.168.1.250 |
| cidr | 192.168.1.0/24 |
| created_at | 2020-11-08T18:09:54Z |
| description | |
| dns_nameservers | 192.168.1.254 |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | external-sub |
| network_id | 100a6d84-4145-4958-aa44-a611fa5d47d3 |
| project_id | acac1eb6c81540429c3323084bed23d9 |
| revision_number | 2 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| updated_at | 2020-11-08T18:09:54Z |
+-------------------+--------------------------------------+
- Create the internal subnet internal-sub:
]# openstack subnet create --network internal-net \
--allocation-pool start=172.16.1.201,end=172.16.1.250 \
--dns-nameserver 172.16.1.254 --gateway 172.16.1.1 \
--subnet-range 172.16.1.0/24 internal-sub
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 172.16.1.201-172.16.1.250 |
| cidr | 172.16.1.0/24 |
| created_at | 2020-11-08T18:11:22Z |
| description | |
| dns_nameservers | 172.16.1.254 |
| enable_dhcp | True |
| gateway_ip | 172.16.1.1 |
| host_routes | |
| id | 6b406b0e-c55b-42cd-b4f6-5147be3e3489 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | internal-sub |
| network_id | 42c6d283-d007-4695-b27c-af4179e68857 |
| project_id | acac1eb6c81540429c3323084bed23d9 |
| revision_number | 2 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| updated_at | 2020-11-08T18:11:22Z |
+-------------------+--------------------------------------+
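Before handing an allocation pool to `openstack subnet create`, the pool boundaries can be sanity-checked against the /24 CIDR with plain bash (a sketch; `ip_to_int` and `in_cidr24` are hypothetical helper names):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Check that an address falls inside a /24 network (second arg is the
# network address, e.g. 192.168.1.0).
in_cidr24() {
  local ip net
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  [ $(( ip & 0xFFFFFF00 )) -eq "$net" ]
}

in_cidr24 192.168.1.201 192.168.1.0 && echo "pool start OK"
in_cidr24 192.168.1.250 192.168.1.0 && echo "pool end OK"
```

Neutron rejects a pool outside the subnet range anyway, but checking up front gives a clearer error in a provisioning script.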
11.1.4: Verify the networks
Command-line verification
- openstack network list
]# openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------+--------------------------------------+
| 100a6d84-4145-4958-aa44-a611fa5d47d3 | external-net | 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 |
| 42c6d283-d007-4695-b27c-af4179e68857 | internal-net | 6b406b0e-c55b-42cd-b4f6-5147be3e3489 |
+--------------------------------------+--------------+--------------------------------------+
- openstack subnet list
]# openstack subnet list
+--------------------------------------+--------------+--------------------------------------+----------------+
| ID | Name | Network | Subnet |
+--------------------------------------+--------------+--------------------------------------+----------------+
| 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 | external-sub | 100a6d84-4145-4958-aa44-a611fa5d47d3 | 192.168.1.0/24 |
| 6b406b0e-c55b-42cd-b4f6-5147be3e3489 | internal-sub | 42c6d283-d007-4695-b27c-af4179e68857 | 172.16.1.0/24 |
+--------------------------------------+--------------+--------------------------------------+----------------+
- neutron net-list
]# neutron net-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------+----------------------------------+-----------------------------------------------------+
| id | name | tenant_id | subnets |
+--------------------------------------+--------------+----------------------------------+-----------------------------------------------------+
| 100a6d84-4145-4958-aa44-a611fa5d47d3 | external-net | acac1eb6c81540429c3323084bed23d9 | 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 192.168.1.0/24 |
| 42c6d283-d007-4695-b27c-af4179e68857 | internal-net | acac1eb6c81540429c3323084bed23d9 | 6b406b0e-c55b-42cd-b4f6-5147be3e3489 172.16.1.0/24 |
+--------------------------------------+--------------+----------------------------------+-----------------------------------------------------+
- neutron subnet-list
]# neutron subnet-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------+----------------------------------+----------------+----------------------------------------------------+
| id | name | tenant_id | cidr | allocation_pools |
+--------------------------------------+--------------+----------------------------------+----------------+----------------------------------------------------+
| 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 | external-sub | acac1eb6c81540429c3323084bed23d9 | 192.168.1.0/24 | {"start": "192.168.1.201", "end": "192.168.1.250"} |
| 6b406b0e-c55b-42cd-b4f6-5147be3e3489 | internal-sub | acac1eb6c81540429c3323084bed23d9 | 172.16.1.0/24 | {"start": "172.16.1.201", "end": "172.16.1.250"} |
+--------------------------------------+--------------+----------------------------------+----------------+----------------------------------------------------+
Web UI verification
11.2: Create a flavor
11.2.1: Create the m1.nano flavor
]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
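If this section is re-run, the command above fails because flavor id 0 already exists. A small idempotent wrapper avoids that (a sketch; `ensure_flavor` is a hypothetical helper):

```shell
# Create the flavor only if it does not already exist.
ensure_flavor() {
  local name=$1
  if ! openstack flavor show "$name" >/dev/null 2>&1; then
    openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 "$name"
  fi
}

# Usage (on a node with the CLI installed and admin credentials sourced):
# ensure_flavor m1.nano
```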
11.2.2: Verify the flavor
Command-line verification
]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
Web UI verification
11.3: Create a key pair
11.3.1: Source the demo project credentials
]# source demo-ocata.sh
11.3.2: Generate an SSH key
]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
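As invoked above, ssh-keygen still prompts for the output path. Passing -f makes it fully non-interactive, which matters in provisioning scripts (a sketch; a temporary directory is used here for illustration):

```shell
# Non-interactive key generation: -f supplies the output path so
# ssh-keygen never prompts; -N "" sets an empty passphrase.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
ls "$KEYDIR"
```

In the deployment itself the path would be /root/.ssh/id_rsa, matching the `openstack keypair create` command that follows.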
11.3.3: Add the public key
]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 7e:87:32:cc:ac:37:b1:57:28:f6:30:74:f5:cb:30:4f |
| name | mykey |
| user_id | 59245b39d1c04b6791d8d56f57414fa8 |
+-------------+-------------------------------------------------+
11.3.4: Verify the key pair
Command-line verification
]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 7e:87:32:cc:ac:37:b1:57:28:f6:30:74:f5:cb:30:4f |
+-------+-------------------------------------------------+
Web UI verification
11.4: Add security group rules
11.4.1: Permit ICMP (ping)
]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2020-11-06T06:09:13Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 58ae030b-3aef-477c-a8d1-e64cdce0f65b |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | f70f1bff40564807a09e10eefbe1b417 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| updated_at | 2020-11-06T06:09:13Z |
+-------------------+--------------------------------------+
11.4.2: Permit SSH
]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2020-11-06T06:09:55Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | a58415a6-5bd7-4f26-9573-0d89a83aba0a |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | f70f1bff40564807a09e10eefbe1b417 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| updated_at | 2020-11-06T06:09:55Z |
+-------------------+--------------------------------------+
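The two rules above follow the same pattern, so opening several TCP ports can be wrapped in one loop (a sketch; `open_tcp` is a hypothetical helper, and the `|| echo` keeps the loop going when a rule already exists):

```shell
# Open a list of ingress TCP ports on the default security group.
open_tcp() {
  local port
  for port in "$@"; do
    openstack security group rule create --proto tcp --dst-port "$port" default \
      || echo "port $port: rule not added (may already exist)"
  done
}

# Usage (with credentials sourced):
# open_tcp 22 80 443
```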
11.4.3: Verify the security group rules
Command-line verification
]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 94f58a33-f430-4299-ad5c-aae9511d0094 | default | Default security group | f70f1bff40564807a09e10eefbe1b417 |
+--------------------------------------+---------+------------------------+----------------------------------+
]# openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| 0b04789c-055a-4854-bd44-d9db2268bf86 | None | None | | 94f58a33-f430-4299-ad5c-aae9511d0094 | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| 58ae030b-3aef-477c-a8d1-e64cdce0f65b | icmp | 0.0.0.0/0 | | None | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| 7101631f-c8e5-4c2b-945a-854fcb8d1dd8 | None | None | | None | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| 80b728b4-5f78-4a36-8039-cc0e3484fa5d | None | None | | 94f58a33-f430-4299-ad5c-aae9511d0094 | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| a58415a6-5bd7-4f26-9573-0d89a83aba0a | tcp | 0.0.0.0/0 | 22:22 | None | 94f58a33-f430-4299-ad5c-aae9511d0094 |
| f83ee61e-c53e-419b-906e-581e0a475286 | None | None | | None | 94f58a33-f430-4299-ad5c-aae9511d0094 |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
Web UI verification
11.5: Launch instances
11.5.1: Determine instance options
List available flavors
]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
List available images
]# openstack image list
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 960434ae-56e7-49a2-8388-db376ac2a406 | cirros1 | active |
| 3168eab6-7ccd-4379-addd-b92266bc6f51 | cirros2 | active |
| 54461727-4f32-4cb9-8510-3ce5d66d39cb | cirros3 | active |
+--------------------------------------+---------+--------+
List available networks
]# openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------+--------------------------------------+
| 100a6d84-4145-4958-aa44-a611fa5d47d3 | external-net | 34b8c157-9c46-4d4f-9a72-95efaa9f5d32 |
| 42c6d283-d007-4695-b27c-af4179e68857 | internal-net | 6b406b0e-c55b-42cd-b4f6-5147be3e3489 |
+--------------------------------------+--------------+--------------------------------------+
List available security groups
]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 62d98b23-3efe-4b4f-8de1-2f62f1df9d55 | default | Default security group | 9a94f1a1e271459580613778bf7c3392 |
+--------------------------------------+---------+------------------------+----------------------------------+
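Rather than copying the UUIDs out of these tables by hand, an ID can be looked up by name with the client's `-f value -c id` output options (a sketch; `net_id` is a hypothetical helper):

```shell
# Look up a network ID by name so the UUID need not be pasted into
# --nic net-id=... manually.
net_id() { openstack network show "$1" -f value -c id; }

# Usage (with credentials sourced):
# openstack server create --flavor m1.nano --image cirros1 \
#   --nic net-id=$(net_id external-net) --security-group default \
#   --key-name mykey test-node1
```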
11.5.2: Launch the instances
- Create test-node1 on the external network:
]# openstack server create --flavor m1.nano --image cirros1 \
--nic net-id=100a6d84-4145-4958-aa44-a611fa5d47d3 --security-group default \
--key-name mykey test-node1
+-----------------------------+------------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 5wKttJm78ZBH |
| config_drive | |
| created | 2020-11-08T18:38:39Z |
| flavor | m1.nano (0) |
| hostId | |
| id | e7388a96-00d9-4e2c-b7b9-0d828e3e4ded |
| image | cirros1 (960434ae-56e7-49a2-8388-db376ac2a406) |
| key_name | mykey |
| name | test-node1 |
| progress | 0 |
| project_id | 9a94f1a1e271459580613778bf7c3392 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2020-11-08T18:38:39Z |
| user_id | 69e61c6f12594c768bb39efb4e865a9b |
| volumes_attached | |
+-----------------------------+------------------------------------------------+
- Create test-node2 on the internal network:
]# openstack server create --flavor m1.nano --image cirros1 \
--nic net-id=42c6d283-d007-4695-b27c-af4179e68857 --security-group default \
--key-name mykey test-node2
+-----------------------------+------------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | gxddhwM8B4hq |
| config_drive | |
| created | 2020-11-08T18:45:10Z |
| flavor | m1.nano (0) |
| hostId | |
| id | e75be726-99c8-48d6-b067-d2a8473410e1 |
| image | cirros1 (960434ae-56e7-49a2-8388-db376ac2a406) |
| key_name | mykey |
| name | test-node2 |
| progress | 0 |
| project_id | 9a94f1a1e271459580613778bf7c3392 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2020-11-08T18:45:10Z |
| user_id | 69e61c6f12594c768bb39efb4e865a9b |
| volumes_attached | |
+-----------------------------+------------------------------------------------+
11.5.3: Check instance status
]# openstack server list
+--------------------------------------+------------+--------+-----------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+------------+--------+-----------------------+------------+
| d200efdc-3a44-4880-87b5-3b9e7d4bc3f4 | test-node1 | ACTIVE | provider=172.16.1.211 | cirros |
+--------------------------------------+------------+--------+-----------------------+------------+
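A new instance sits in BUILD for a short while before becoming ACTIVE; a polling loop can wait for it before moving on to the console step (a sketch; `wait_active` is a hypothetical helper):

```shell
# Poll an instance until it leaves BUILD; give up after ~150 seconds.
wait_active() {
  local name=$1 status i
  for i in $(seq 1 30); do
    status=$(openstack server show "$name" -f value -c status)
    case "$status" in
      ACTIVE) echo "$name is ACTIVE"; return 0 ;;
      ERROR)  echo "$name entered ERROR" >&2; return 1 ;;
    esac
    sleep 5
  done
  echo "$name still not ACTIVE" >&2
  return 1
}

# Usage:
# wait_active test-node1
```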
11.5.4: Access instances via the virtual console
Show the instance's console URL
]# openstack console url show test-node1
+-------+-----------------------------------------------------------------------------------+
| Field | Value |
+-------+-----------------------------------------------------------------------------------+
| type | novnc |
| url | http://172.16.1.101:6080/vnc_auto.html?token=a2e9be73-634c-4fb4-9893-3531231a2887 |
+-------+-----------------------------------------------------------------------------------+
Open the console URL in a browser
The default CirrOS username/password is cirros / cubswin:)
Verify that the instance can ping its gateway
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:7A:C8:4A
inet addr:172.16.1.203 Bcast:172.16.1.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe7a:c84a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:77 errors:0 dropped:0 overruns:0 frame:0
TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8915 (8.7 KiB) TX bytes:8258 (8.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ ping -c 5 172.16.1.1
PING 172.16.1.1 (172.16.1.1): 56 data bytes
64 bytes from 172.16.1.1: seq=0 ttl=128 time=1.306 ms
64 bytes from 172.16.1.1: seq=1 ttl=128 time=1.825 ms
64 bytes from 172.16.1.1: seq=2 ttl=128 time=1.401 ms
64 bytes from 172.16.1.1: seq=3 ttl=128 time=0.406 ms
64 bytes from 172.16.1.1: seq=4 ttl=128 time=1.269 ms
--- 172.16.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.406/1.241/1.825 ms