How to Elegantly Deploy an OpenStack Private Cloud I -- Kolla
This article is a fully open-source, easy-to-deploy, reproducible, step-by-step deployment guide. The work is split into modules and proceeds incrementally, so you can dig in as deep as you like.
To support development of our big data platform and management tooling without adding cost, we pulled a few decommissioned physical machines from the company warehouse to serve as an experimental environment. These machines are fairly powerful, so dedicating each one to a single big data service would be generous but wasteful. There also are not many of them, which makes it impossible to validate platform features at any real scale. On top of that, platform development involves countless rounds of cluster teardown and reinstall, and if a physical machine gets broken in the process it is painful to repair.
Given that background, I started investigating cloud platforms. OpenStack has a formidable reputation, so I decided to give it a try. After a stretch of persistent effort, I finally produced this fully open-source, easy-to-deploy, reproducible, step-by-step guide, organized into modules that build on each other. It is a long read, so settle in. The full OpenStack series is expected to cover: OpenStack installation, debugging notes, UI customization, VM image building, private cloud usage, and big data platform construction, with more installments to follow.
This is the first chapter: installing OpenStack. Based on some upfront research and a little prior hands-on experience, I chose Kolla Ansible to automate the entire OpenStack deployment (installing each component by hand was never seriously considered; it is just too painful...).
1. Environment
1.1 Hosts
Three hosts are used, each with two network interfaces.
| Hostname | IP1 | IP2 | Role |
| --- | --- | --- | --- |
| dba-bigdata-node1 | 192.168.10.11 | vip: 192.168.10.20 | control + compute |
| dba-bigdata-node2 | 192.168.10.12 | | control + compute |
| dba-bigdata-node3 | 192.168.10.13 | | compute |
Control: runs the control-plane programs, including the core control services, the networking service (Neutron), monitoring, and deployment.
Compute: responsible for actually running the virtual machines, mainly covering storage (implemented with LVM in this article) and compute (Nova).
The overall network architecture is shown in the diagram below.
The disks use the default RAID layout:
/dev/sda: RAID 5 across 4 disks, mounted at /
/dev/sdb: RAID 5 across 8 disks, configured as LVM to back the cloud volume service
1.2 Software
1.2.1 Version selection
Most of our systems still run CentOS 7, so the installation targets that release. The specific versions are:
| Software | Version | Notes |
| --- | --- | --- |
| kolla | 9.4.0 | Kolla version |
| openstack | train | OpenStack release to install |
| OS | centos7 | Operating system version |
| install type | source | binary: install RPM packages via yum; source: build from source |
2. Deployment
2.1 Host preparation
The steps below are base host configuration. Unless stated otherwise, every step in this section must be run on all hosts.
2.1.1 Set hostnames
Set the hostname and configure hostname resolution:
[root@dba_bigdata_node1 192.168.10.11 ~]# hostnamectl set-hostname dba_bigdata_node1
[root@dba_bigdata_node1 192.168.10.11 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.11 dba_bigdata_node1
192.168.10.12 dba_bigdata_node2
192.168.10.13 dba_bigdata_node3
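A quick sanity check that the /etc/hosts entries resolve on every host (hostnames as defined above):

# ping each node once by name; prints ok/FAILED per host
for h in dba_bigdata_node1 dba_bigdata_node2 dba_bigdata_node3; do ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"; done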
2.1.2 Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
2.1.3 Disable SELinux
setenforce takes effect immediately but is lost after a reboot; /etc/selinux/config takes effect after a reboot.
[root@dba_bigdata_node1 192.168.10.11 ~]# sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
[root@dba_bigdata_node1 192.168.10.11 ~]# sed -i 's/^SELINUXTYPE=.*/SELINUXTYPE=disabled/g' /etc/selinux/config
[root@dba_bigdata_node1 192.168.10.11 ~]# grep --color=auto '^SELINUX' /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=disabled
[root@dba_bigdata_node1 192.168.10.11 ~]# setenforce 0
setenforce: SELinux is disabled
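Note that setenforce 0 only changes the runtime state (here it reports "SELinux is disabled" because SELinux was already off on this host); the config file change takes over after the next reboot. To verify the current state:

getenforce
sestatus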
2.1.4 Install common packages
yum install gcc vim wget net-tools ntpdate git -y
2.1.5 Configure passwordless SSH
Generate the key pair; run this on node1 only.
[root@dba_bigdata_node1 192.168.10.11 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3gnHbYES+WuYpfVVEltj7PMjwoOZWRc75c7IhtYYynw root@dba_bigdata_node1
The key's randomart image is:
+---[RSA 2048]----+
|          .. o+. |
|       .. . .o=o |
|      ... .  Bo  |
|      o++   =.+  |
|      S*@oX.= o  |
|      .+%oE.* +. |
|       ..= + . . |
|                 |
|                 |
+----[SHA256]-----+
Distribute the key to the other hosts; run on node1 only.
[root@dba_bigdata_node1 192.168.10.11 ~]# ssh-copy-id root@dba-bigdata-node1
[root@dba_bigdata_node1 192.168.10.11 ~]# ssh-copy-id root@dba-bigdata-node2
[root@dba_bigdata_node1 192.168.10.11 ~]# ssh-copy-id root@dba-bigdata-node3
Verify that node1 can log in to the other hosts:
[root@dba_bigdata_node1 192.168.10.11 ~]# ssh 192.168.10.12
Unable to get valid context for root
Last login: Fri Sep 9 13:48:22 2022 from 30.90.9.77
[root@dba_bigdata_node2 192.168.10.12 ~]#
Adjust the SSH daemon configuration on all hosts:
[root@dba_bigdata_node2 192.168.10.12 ~]# sed -i 's/#ClientAliveInterval 0/ClientAliveInterval 60/g' /etc/ssh/sshd_config
[root@dba_bigdata_node2 192.168.10.12 ~]# sed -i 's/#ClientAliveCountMax 3/ClientAliveCountMax 60/g' /etc/ssh/sshd_config
[root@dba_bigdata_node2 192.168.10.12 ~]# systemctl daemon-reload && systemctl restart sshd && systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2022-09-09 14:07:43 CST; 5ms ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 4449 (sshd)
   CGroup: /system.slice/sshd.service
           └─4449 /usr/sbin/sshd -D
Sep 09 14:07:43 dba_bigdata_node2 systemd[1]: Starting OpenSSH server daemon...
Sep 09 14:07:43 dba_bigdata_node2 sshd[4449]: Server listening on 0.0.0.0 port 22.
Sep 09 14:07:43 dba_bigdata_node2 sshd[4449]: Server listening on :: port 22.
Sep 09 14:07:43 dba_bigdata_node2 systemd[1]: Started OpenSSH server daemon.
[root@dba_bigdata_node2 192.168.10.12 ~]#
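To confirm the keepalive settings took effect, dump the effective sshd configuration (sshd -T prints the options in lowercase):

sshd -T | grep -i clientalive
# expected: clientaliveinterval 60 / clientalivecountmax 60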
2.1.6 Switch the Yum repositories
Point the base repository at the Aliyun or USTC mirror:
Aliyun: baseurl=http://mirrors.cloud.aliyuncs.com/alinux
USTC: baseurl=https://mirrors.ustc.edu.cn/centos
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-Base.repo
Rebuild the yum cache:
yum clean all; yum makecache
Install the EPEL repository:
yum install -y epel-release
Upgrade the system packages:
yum update -y
2.1.7 Install the Python environment
yum install python2-devel libffi-devel openssl-devel libselinux-python -y
yum install python-pip -y
Configure the Tsinghua pip mirror to speed things up:
mkdir ~/.pip
cat > ~/.pip/pip.conf << EOF
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host=pypi.tuna.tsinghua.edu.cn
EOF
Upgrade pip:
pip install --upgrade "pip < 21.0"
pip install pbr
2.2 Time synchronization
2.2.1 Install the chrony service
[root@dba_bigdata_node1 192.168.10.11 ~]# yum -y install chrony
2.2.2 Point chrony at the Aliyun NTP servers
[root@dba_bigdata_node1 192.168.10.11 ~]# cp /etc/chrony.conf{,.bak}
[root@dba_bigdata_node1 192.168.10.11 ~]# echo "
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp6.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony
">/etc/chrony.conf
2.2.3 Start the service
systemctl enable chronyd && systemctl restart chronyd && systemctl status chronyd
2.2.4 Check the chrony sources and sync the hardware clock
[root@dba_bigdata_node1 192.168.10.11 ~]# chronyc sources -v
[root@dba_bigdata_node1 192.168.10.11 ~]# yum install ntpdate
[root@dba_bigdata_node1 192.168.10.11 ~]# ntpdate ntp1.aliyun.com
# write the system time to the hardware clock
[root@dba_bigdata_node1 192.168.10.11 ~]# hwclock -w
2.2.5 Configure a cron job
crontab -e
0 */1 * * * ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
0 */1 * * * ntpdate ntp2.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
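To confirm the clock is actually being disciplined, check chrony's tracking status and the system/RTC time (output will differ per host):

chronyc tracking
timedatectl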
2.3 Docker CE
2.3.1 Configure the yum repository
Add the docker-ce repository and switch it to a domestic mirror:
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
2.3.2 Remove any existing Docker packages
[root@dba_bigdata_node1 192.168.10.11 ~]# yum remove docker docker-common docker-selinux docker-engine -y
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-common
No Match for argument: docker-selinux
No Match for argument: docker-engine
No Packages marked for removal
2.3.3 Install Docker
yum install docker-ce -y
2.3.4 Configure Docker registry mirrors and insecure registries
mkdir /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "insecure-registries": ["172.16.0.200:5000","192.168.10.11:4000"]
}
EOF
Here 172.16.0.200:5000 is another private Docker registry; multiple insecure registries can be listed at the same time.
2.3.5 Enable shared mount propagation for Docker
mkdir -p /etc/systemd/system/docker.service.d
cat >> /etc/systemd/system/docker.service.d/kolla.conf << EOF
[Service]
MountFlags=shared
EOF
2.3.6 Start the Docker service
systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
Check the Docker info:
[root@dba_bigdata_node1 192.168.10.11 ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.17
 Storage Driver: overlay2
2.3.7 Docker registry
# pull the registry image
docker pull registry
Create the configuration file:
[root@dba_bigdata_node1 192.168.10.11 kolla]# vim /etc/docker/registry/config.yml
service: registry
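The single line above appears to be an excerpt of the registry configuration (its log service field). For reference, a minimal config.yml for the official registry image looks roughly like the sketch below; the storage path and listen port are the image defaults, so adjust as needed:

version: 0.1
log:
  fields:
    service: registry
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000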
Start the internal registry:
docker run -d \
--name registry \
-p 4000:5000 \
-v /etc/docker/registry/config.yml:/etc/docker/registry/config.yml \
registry:latest
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7db4f7e1abbf registry:latest "/entrypoint.sh /etc…" 2 seconds ago Up 1 second registry
Add the internal registry to Docker's insecure-registries list:
[root@dba_bigdata_node1 192.168.10.11 kolla]# vim /etc/docker/daemon.json
"insecure-registries": ["172.16.212.241:5000","192.168.10.11:4000"]
Restart Docker:
[root@dba_bigdata_node1 192.168.10.11 kolla]# systemctl restart docker
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker start registry
Check the Docker info again:
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker info
Insecure Registries:
172.16.212.241:5000
192.168.10.11:5000
127.0.0.0/8
Test that the registry works:
# list the local docker images
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry latest 3a0f7b0a13ef 4 weeks ago 24.1MB
# re-tag the image with the internal registry prefix
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker tag registry 192.168.10.11:4000/registry
# push the local image to the internal registry
[root@dba_bigdata_node1 192.168.10.11 kolla]# docker push 192.168.10.11:4000/registry
Using default tag: latest
The push refers to repository [192.168.10.11:4000/registry]
73130e341eaf: Pushed
692a418a42be: Pushed
d3db20e71506: Pushed
145b66c455f7: Pushed
994393dc58e7: Pushed
latest: digest: sha256:29f25d3b41a11500cc8fc4e19206d483833a68543c30aefc8c145c8b1f0b1450 size: 1363
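To confirm the push actually landed in the local registry, query the registry's v2 catalog API (assuming the registry listens on port 4000 as configured above):

curl http://192.168.10.11:4000/v2/_catalog
# {"repositories":["registry"]}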
2.3.8 Docker registry UI
Set up a Docker registry UI behind an nginx proxy, so the images in the registry can be browsed from a web page.
# run the registry UI container
docker run -p 81:80 \
--name registry-ui \
-e REGISTRY_URL="http://192.168.10.11:5000" \
-d 172.16.212.241:5000/docker-registry-ui:latest
Open http://192.168.10.11:81/ in a browser to view the details of the images in the registry.
2.4 Ansible
2.4.1 Install Ansible
Ansible only needs to be installed and configured on node1.
[root@dba_bigdata_node1 192.168.10.11 ~]# yum install -y ansible
2.4.2 Adjust the Ansible defaults
sed -i 's/#host_key_checking = False/host_key_checking = True/g' /etc/ansible/ansible.cfg
sed -i 's/#pipelining = False/pipelining = True/g' /etc/ansible/ansible.cfg
sed -i 's/#forks = 5/forks = 100/g' /etc/ansible/ansible.cfg
2.4.3 Configure the Ansible inventory
[root@dba_bigdata_node1 192.168.10.11 ~]# echo "dba_bigdata_node[1:3]" >>/etc/ansible/hosts
2.4.4 Verify the installation
[root@dba_bigdata_node1 192.168.10.11 ~]# ansible all -m ping
dba_bigdata_node1 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
dba_bigdata_node3 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
dba_bigdata_node2 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
2.5 Kolla Ansible
The following steps only need to be run on node1.
2.5.1 Install kolla-ansible
[root@dba_bigdata_node1 192.168.10.11 ~]# pip install kolla-ansible==9.4.0
If the installation fails, upgrade the build tooling and retry:
pip install --upgrade setuptools
pip install --upgrade pip
pip install pbr==5.6.0
2.5.2 Copy the default configuration
mkdir -p /etc/kolla
chown $USER:$USER /etc/kolla
cp -r /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
cp /usr/share/kolla-ansible/ansible/inventory/* .
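After copying, /etc/kolla should contain globals.yml and passwords.yml, and the inventory templates (all-in-one, multinode) should now sit in the current directory; a quick check:

ls /etc/kolla/
ls .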
2.5.3 Configure the deployment inventory
Edit /root/multinode:
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostname must be resolvable from your deployment host
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
# The above can also be specified as follows:
#control[01:03] ansible_user=kolla
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
[compute]
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
[monitoring]
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
[storage]
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
[deployment]
dba_bigdata_node1
dba_bigdata_node2
dba_bigdata_node3
2.5.4 Configure the Kolla globals
vim /etc/kolla/globals.yml
[root@dba_bigdata_node1 192.168.10.11 kolla]# cat globals.yml |egrep -v '^#|^$'
---
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "192.168.10.20"
docker_namespace: "kolla"
network_interface: "enp7s0f0"
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
cluster_interface: "{{ network_interface }}"
tunnel_interface: "{{ network_interface }}"
dns_interface: "{{ network_interface }}"
neutron_external_interface: "enp7s0f1"
neutron_plugin_agent: "openvswitch"
neutron_tenant_network_types: "vxlan,vlan,flat"
keepalived_virtual_router_id: "51"
enable_haproxy: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_ceilometer: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_cinder_backend_lvm: "yes"
enable_gnocchi: "yes"
enable_heat: "yes"
enable_neutron_dvr: "yes"
enable_neutron_qos: "yes"
enable_neutron_agent_ha: "yes"
enable_neutron_provider_networks: "yes"
enable_nova_ssh: "yes"
cinder_volume_group: "cinder-volumes"
nova_compute_virt_type: "qemu"
nova_console: "novnc"
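Note that network_interface (enp7s0f0) and neutron_external_interface (enp7s0f1) above are specific to this hardware; confirm the actual interface names on your hosts before editing globals.yml:

ip -br link
# or: ip addr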
Generate random passwords:
kolla-genpwd
[root@dba_bigdata_node1 192.168.10.11 kolla]# cat /etc/kolla/passwords.yml
.....
zfssa_iscsi_password: XL8XeUdxaowGT9HUsOnGpxJyalvlm9YV6xNvZZx0
zun_database_password: lhxgYCRaiN4ACvFgISIbgimGxkUe1z3pVgU0CYxd
zun_keystone_password: ZDNuGXjOUyme28dUX1fWKbXdDDvItAxxtjYRwiUx
Change the dashboard admin login password:
sed -i 's/^keystone_admin_password.*/keystone_admin_password: 123456/' /etc/kolla/passwords.yml
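Confirm the password was updated:

grep '^keystone_admin_password' /etc/kolla/passwords.yml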
2.5.5 LVM volumes
Instance creation failed when the volumes were backed by Ceph in this environment, so the backend was switched to LVM. Note that the cinder-volumes volume group must exist on every host in the [storage] group, so repeat the LVM commands below on each storage node.
Create the LVM physical volume:
pvcreate /dev/sdb
Create the volume group:
vgcreate cinder-volumes /dev/sdb
Check that it was created:
[root@dba-bigdata-node1 192.168.10.11 /]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 3 0 wz--n- 1.63t 0
cinder-volumes 1 2 0 wz--n- <3.82t <195.21g
[root@dba-bigdata-node1 192.168.10.11 /]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 1.63t 0
/dev/sdb cinder-volumes lvm2 a-- <3.82t <195.21g
[root@dba-bigdata-node1 192.168.10.11 /]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 1.58t
root centos -wi-ao---- 50.00g
swap centos -wi-ao---- 4.00g
cinder-volumes-pool cinder-volumes twi-aotz-- <3.63t 0.00 10.41
volume-beb1939a-b444-4ea0-b326-86f924fe3a50 cinder-volumes Vwi-a-tz-- 100.00g cinder-volumes-pool 0.00
Set the Kolla configuration to use LVM (these options also appear in globals.yml above):
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_cinder_backend_lvm: "yes"
cinder_volume_group: "cinder-volumes"
2.5.6 Configure the nova compute settings
mkdir /etc/kolla/config
mkdir /etc/kolla/config/nova
cat >> /etc/kolla/config/nova/nova-compute.conf << EOF
[libvirt]
virt_type = qemu
cpu_mode = none
EOF
2.5.7 Disable "create new volume" by default
mkdir /etc/kolla/config/horizon/
cat >> /etc/kolla/config/horizon/custom_local_settings << EOF
LAUNCH_INSTANCE_DEFAULTS = {'create_volume': False,}
EOF
2.5.8 Create the Ceph configuration
cat >> /etc/kolla/config/ceph.conf << EOF
[global]
osd pool default size = 3
osd pool default min size = 2
mon_clock_drift_allowed = 2
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
mon clock drift warn backoff = 30
EOF
2.6 Installing OpenStack
2.6.1 Pre-pull the Docker images
Download the Docker images in advance and distribute them to every node to speed up the installation:
kolla-ansible -i /root/multinode pull
2.6.2 Bootstrap dependencies on each node
kolla-ansible -i /root/multinode bootstrap-servers
2.6.3 Run the pre-installation checks
kolla-ansible -i /root/multinode prechecks
2.6.4 Deploy
Before deploying, pull all the required Docker images and push them to every host to speed up the deployment:
kolla-ansible -i /root/multinode pull
Then log in to the hosts and check the result with docker images.
To make future reinstalls easier, push these images to the local Docker registry.
Re-tag: docker tag <original-name>:<tag> <new-name>:<tag>. The command below only prints the docker tag commands; see the note after the push step for how to execute them.
docker images |grep ^kolla|grep 'train ' |awk '{print "docker tag "$1 ":" $2 " 192.168.10.11:4000/" $1 ":" $2 }'
Print the corresponding push commands for the local registry:
docker images |grep ^kolla|grep 'train ' |awk '{print "docker push 192.168.10.11:4000/" $1 ":" $2 }'
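The two awk commands above only print the docker tag / docker push commands; to actually run them, pipe the output to bash (a sketch, assuming the local registry address used above):

docker images |grep ^kolla|grep 'train ' |awk '{print "docker tag "$1 ":" $2 " 192.168.10.11:4000/" $1 ":" $2 }' | bash
docker images |grep ^kolla|grep 'train ' |awk '{print "docker push 192.168.10.11:4000/" $1 ":" $2 }' | bash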
Check that everything has been pushed.
Now run the actual deployment; expect it to take roughly half an hour:
kolla-ansible -i /root/multinode deploy
2.6.5 Post-deployment initialization
[root@dba-bigdata-node1 192.168.10.11 ~]# kolla-ansible -i multinode post-deploy
This generates, on the local machine, the environment variables needed by the OpenStack CLI:
[root@dba-bigdata-node1 192.168.10.11 ~]# cat /etc/kolla/admin-openrc.sh
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD='4rfv$RFV094'
export OS_AUTH_URL=http://192.168.10.20:35357/v3
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_AUTH_PLUGIN=password
Install the OpenStack command-line client:
yum install centos-release-openstack-train -y
yum makecache fast
yum install python-openstackclient -y
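With the client installed, source the generated credentials and run a few read-only calls to confirm the control plane is healthy (these are standard OpenStack CLI commands, not specific to this deployment):

source /etc/kolla/admin-openrc.sh
openstack service list
openstack compute service list
openstack network agent list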
Adjust the external network settings in the initialization script, then run it: /usr/share/kolla-ansible/init-runonce
ENABLE_EXT_NET=${ENABLE_EXT_NET:-1}
EXT_NET_CIDR=${EXT_NET_CIDR:-'10.0.2.0/24'}
EXT_NET_RANGE=${EXT_NET_RANGE:-'start=10.0.2.150,end=10.0.2.199'}
EXT_NET_GATEWAY=${EXT_NET_GATEWAY:-'10.0.2.1'}
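When init-runonce finishes it will have created a cirros image, the demo networks, the default flavors, and a keypair named mykey (names as in the stock Train init-runonce script); assuming those defaults, a quick smoke test is to boot a tiny instance:

openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1
openstack server list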
Finally, open the OpenStack dashboard in a browser at http://192.168.10.20.