Related notes:

Some of the multiple-choice questions may contain minor errors; use them with discretion — the hands-on tasks are what matter most.

Single-choice questions (200 points):

1. When defining the requirements for a bank's container platform, which statement about the aspects to consider is incorrect? (10 points)
A. The ability to manage large-scale container clusters
B. To meet the financial industry's regulatory and security requirements, the platform must consider application high availability and business continuity, multi-tenant security isolation, isolation between business tiers, firewall policies, vulnerability scanning, image security, 4A management of back-end operations, and audit logging
C. Support for the bank's own application release system, continuous-integration system, application modeling standards, and high-availability management policies
D. Integration with non-unified identity authentication, and tenant, role, and resource-quota definitions that differ from the other systems of the financial cloud (correct answer)

 

2. Which of the following best describes when the Monitor project process group is performed? (10 points)
A. Continuously throughout the project (correct answer)
B. As each deliverable is completed
C. At planned milestones or between project phases
D. At the end of each project phase

 

3. Which statement about switched Ethernet is correct? (10 points)
A. Each independent segment is isolated (correct answer)
B. The more stations there are, the lower the average bandwidth
C. It is constrained by CSMA/CD
D. Its coverage is limited by the collision domain

 

4. 192.168.0.0/24 is subnetted with the mask 255.255.255.240; the number of available subnets is ( ), and the number of usable host addresses in each subnet is ( ). (10 points)
A. 14 16
B. 254 14
C. 16 14 (correct answer)
D. 14 62
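The arithmetic behind the answer can be checked directly: going from /24 to /28 borrows 4 subnet bits and leaves 4 host bits. A minimal shell sketch (variable names are my own):

```shell
# Subnetting 192.168.0.0/24 with mask 255.255.255.240 (/28):
orig_prefix=24
new_prefix=28
# Borrowed bits give the subnet count; remaining host bits give usable hosts.
subnets=$(( 1 << (new_prefix - orig_prefix) ))   # 2^4 = 16 subnets
hosts=$(( (1 << (32 - new_prefix)) - 2 ))        # 2^4 - 2 = 14 usable hosts
echo "subnets=$subnets hosts=$hosts"             # prints: subnets=16 hosts=14
```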

 

5. Which of the following descriptions of Linux commands for viewing CPU, memory, swap, and disk information is incorrect? (10 points)
A. cat /proc/cpuinfo — Linux command that shows CPU parameters
B. cat /proc/meminfo — Linux command that shows system memory information
C. du -h — shows disk information (correct answer)
D. cat /proc/swaps — shows information on all swap partitions

 

6. For a host to communicate through its LAN with another LAN, what needs to be done? (10 points)
A. Configure a DNS server
B. Define a route from the host to its own network
C. Define a route from the host to the gateway of its own network (correct answer)
D. Define a route from the host to the gateway of the destination network

 

7. SQL's data-manipulation statements include SELECT, INSERT, UPDATE, and DELETE; the most important and most frequently used of these is (10 points)
A. SELECT (correct answer)
B. INSERT
C. UPDATE
D. DELETE

 

8. Which built-in MySQL capability is the foundation for building large, high-performance applications? (10 points)
A. Replication (correct answer)
B. Repair
C. Communication
D. Security

 

9. Overall, master-slave database replication is divided into how many steps? (10 points)
A. 1
B. 2
C. 3 (correct answer)
D. 4

 

10. Which of the following is not an OpenStack component? (10 points)
A. Nova
B. Swift
C. Keystone
D. EC2 (correct answer)

 

11. Which of the following is the Keystone command for creating a user? (10 points)
A. openstack user create (correct answer)
B. openstack project create
C. openstack role create
D. openstack admin create

 

12. In a real production environment, which of the following is not one of Neutron's network categories? (10 points)
A. Instance communication network
B. Internal storage network
C. Internal management network
D. Instance public network (correct answer)

 

13. Which statement about Tencent Cloud billing is incorrect? (10 points)
A. Billing is divided into prepaid and postpaid
B. Billing is divided into monthly/yearly subscriptions and pay-as-you-go
C. Prepaid generally takes the form of a monthly/yearly subscription; postpaid is generally pay-as-you-go
D. Pay-as-you-go always works out cheaper than a subscription (correct answer)

 

14. Which of the following is not a typical benefit for customers moving to the cloud? (10 points)
A. Off-site disaster recovery
B. Lower investment
C. Simplified operations
D. Enhanced data-center management (correct answer)

 

15. Which of the following is not a problem CDN mainly solves? (10 points)
A. Users are physically far from the business servers, so requests take many network hops and latency is high and unstable
B. The server's CPU usage is too high (correct answer)
C. The user's carrier differs from the business server's carrier, so requests must be relayed between carriers
D. The business server's bandwidth and processing capacity are limited, so a flood of user requests slows responses and lowers availability

 

16. Which statement about the Auto Scaling service is correct? (10 points)
A. Servers in the Auto Scaling service are made of a special flexible material
B. Auto Scaling can be billed pay-as-you-go or by monthly/yearly subscription
C. Auto Scaling is a service that automatically adds or removes instances according to server load (correct answer)
D. All of the above are wrong

 

17. The Harbor private registry is a project of which company? (10 points)
A. Google
B. Redhat
C. Microsoft
D. VMware (correct answer)

 

18. In a Dockerfile, the Docker build description file, FROM means (10 points)
A. Defining the base image (correct answer)
B. The author or maintainer
C. A Linux command to run
D. Adding files or directories
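The four options correspond to the Dockerfile instructions FROM, MAINTAINER, RUN, and ADD. A hypothetical minimal Dockerfile, written from the shell so each line can be matched to an option (the image, e-mail, and file names are placeholders of my own):

```shell
# Write an example Dockerfile; each instruction maps to one option above.
cat > Dockerfile.example <<'EOF'
FROM centos:7
MAINTAINER someone@example.com
RUN yum install -y nginx
ADD site.tar.gz /usr/share/nginx/html/
EOF
# FROM must be the build's starting point: the base image.
grep '^FROM' Dockerfile.example
```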

 

19. Ansible is a(n) ____ tool. (10 points)
A. Automated operations (correct answer)
B. Automated testing
C. Automation engine
D. Automated deployment

 

20. The shell passes the commands a user types to the kernel for execution; after receiving a command, the kernel schedules the ____ to carry out the operation. (10 points)
A. Software
B. Network
C. Hardware (correct answer)
D. Information

 

Multiple-choice questions (200 points):

1. Which of the following statements about tunnel (overlay) solutions are incorrect? (10 points)
A. If workloads on the container platform need to communicate with workloads running on other platforms, routes for reaching the containers from outside must be configured; otherwise container addresses are not directly routable from outside the platform
B. Because container packets are encapsulated inside host packets when crossing the underlying network, the container packets' addresses are visible to an ordinary firewall (correct answer)
C. Because container packets must be encapsulated inside the underlying host packets, the packets get longer, but the transmission rate is unaffected (correct answer)
D. Because container packets must be encapsulated inside the underlying host packets, the packets get longer and the transmission rate is affected

 

2. Which of the following statements about routed solutions are correct? (10 points)
A. Routing achieves cross-host container connectivity at Layer 3 or Layer 2, with no NAT and none of the packet encapsulation and decapsulation an overlay solution needs (correct answer)
B. Every container has a routable IP address that is reachable from outside the container platform (correct answer)
C. The advantages of this approach are small performance loss, external routability, and that traditional firewalls keep working normally (correct answer)
D. This approach consumes few IP addresses, and changes to network entities in the container network can be perceived from outside the platform

 

3. Which of the following are network addresses? (10 points)
A. 192.168.1.0/23
B. 172.16.1.0/24 (correct answer)
C. 172.52.1.128/25 (correct answer)
D. 10.1.4.0/22 (correct answer)
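An address is a network address when all of its host bits are zero. A rough shell sketch that checks this (the `is_network` helper is my own):

```shell
# is_network ADDR PREFIXLEN: succeed if ADDR has all host bits zero.
is_network() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq "$ip" ]
}
# 192.168.1.0/23: the 1 in the third octet falls in the 9 host bits -> not a network address.
is_network 192.168.1.0 23 && echo "A: yes" || echo "A: no"
# 172.52.1.128/25: .128 is exactly the subnet boundary, host bits all zero -> network address.
is_network 172.52.1.128 25 && echo "C: yes" || echo "C: no"
```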

 

4. Which of the following are switch forwarding modes? (10 points)
A. Cut-through (fast forwarding) (correct answer)
B. Cell forwarding
C. Store-and-forward (correct answer)
D. Fragment-free (correct answer)

 

5. On Linux, the DHCP server program is dhcpd and its configuration file is dhcpd.conf. Suppose the file contains: subnet 192.168.2.0 netmask 255.255.255.0 { range 192.168.2.100 192.168.2.200; }. Which statements about this configuration are correct? (10 points)
A. Dynamic addresses are assigned to hosts in subnet "192.168.2.0/16"
B. Dynamic addresses are assigned to hosts in subnet "192.168.2.0/24" (correct answer)
C. 100 IP addresses are available for dynamic assignment in the subnet
D. 101 IP addresses are available for dynamic assignment in the subnet (correct answer)
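The pool defined by a `range` statement includes both endpoints, which is why the count is 101 rather than 100. A quick shell check:

```shell
# range 192.168.2.100 192.168.2.200 hands out both endpoints, so:
first=100
last=200
pool=$(( last - first + 1 ))   # inclusive count
echo "pool=$pool"              # prints: pool=101
```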

 

6. Nginx's core modules include (10 points)
A. The HTTP module (correct answer)
B. The EVENT module (correct answer)
C. The MAIL module (correct answer)
D. The FLAT module

 

7. Which of the following are advantages of Zabbix? (10 points)
A. Simple to install (correct answer)
B. Highly integrated (correct answer)
C. Complex to install
D. Poorly integrated

 

8. Which of the following can produce a Kickstart file? (10 points)
A. Writing it by hand (correct answer)
B. The system-config-kickstart graphical tool (correct answer)
C. Automatic generation by Anaconda, the Red Hat installer (correct answer)
D. Automatic writing

 

9. Which of the following tools are part of Libvirt, the KVM component? (10 points)
A. libvirtd (correct answer)
B. The API library (correct answer)
C. virsh (correct answer)
D. qemu-kvm

 

10. Which of the following are basic functions of Keystone? (10 points)
A. Identity authentication (correct answer)
B. Authorization (correct answer)
C. Service catalog
D. Resource scheduling (correct answer)

 

11. OpenStack manages the underlying compute, storage, and network resources centrally mainly through which of the following? (10 points)
A. The command line (CLI) (correct answer)
B. The programming interface (API) (correct answer)
C. The web interface (GUI) (correct answer)
D. Remote control

 

12. Swift uses the REST architecture, which follows the CRUD principle. Which of the following operations does CRUD require on resources? (10 points)
A. Create (correct answer)
B. Read (correct answer)
C. Update (correct answer)
D. Delete (correct answer)
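In a REST API like Swift's, the four CRUD operations map onto HTTP verbs. A hedged sketch of the usual mapping for Swift's object API (where PUT creates or replaces an object and POST updates its metadata):

```shell
# Map CRUD operations to the HTTP verbs Swift's object API uses.
for op in Create Read Update Delete; do
  case $op in
    Create) verb=PUT ;;     # PUT uploads (creates/replaces) an object
    Read)   verb=GET ;;     # GET downloads an object
    Update) verb=POST ;;    # POST updates object metadata
    Delete) verb=DELETE ;;  # DELETE removes an object
  esac
  echo "$op -> $verb"
done
```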

 

13. Zabbix agents are deployed on the monitoring targets; which of the following can they actively monitor? (10 points)
A. Local resources (correct answer)
B. Applications (correct answer)
C. Local disks
D. Local network

 

14. Which of the following are database products offered by Tencent Cloud? (10 points)
A. The relational database MySQL (correct answer)
B. The elastic cache Redis (correct answer)
C. The document database MongoDB (correct answer)
D. The column-oriented database HBase (correct answer)

 

15. What are the main advantages of Tencent Cloud servers? (10 points)
A. Hardware configuration such as cloud disks can be scaled up and down, for better elasticity (correct answer)
B. Availability of up to 99.95%, stable and reliable (correct answer)
C. Multi-carrier access with second-level BGP switching (correct answer)
D. Secure and easy to use (correct answer)

 

16. What are the application scenarios for Auto Scaling? (10 points)
A. Scaling out or in ahead of planned demand (correct answer)
B. Improving server performance
C. Automatically replacing unhealthy CVM instances (correct answer)
D. Handling traffic surges at low cost (correct answer)

 

17. What benefits does the IaaS layer of cloud computing bring? (10 points)
A. Centralized, automated resource management (correct answer)
B. Rapid provisioning of infrastructure (correct answer)
C. Higher resource utilization and lower energy consumption (correct answer)
D. Shared hardware resources (correct answer)

 

18. Which of the following are core components and concepts inside OpenShift? (10 points)
A. Master Node (correct answer)
B. Scheduler
C. Compute Node (correct answer)
D. Etcd (correct answer)

 

19. Which of the following sections make up a playbook? (10 points)
A. Target section (correct answer)
B. Variable section (correct answer)
C. Task section (correct answer)
D. Handler section (correct answer)

 

20. A shell establishes communication between which two of the following? (10 points)
A. The user (correct answer)
B. The operating-system kernel (correct answer)
C. Applications
D. Computation

 

Hands-on tasks (600 points):

1. Switch management (40 points)

In eNSP, configure an S5700 switch. Create VLAN 2, VLAN 3, and VLAN 1004 with a single command. Using a port group, set ports 1-5 to access mode and add them to VLAN 2. Set port 10 to trunk mode and permit VLAN 3. Create Layer-3 VLANIF 2 with IP address 172.16.2.1/24 and Layer-3 VLANIF 1004 with IP address 192.168.4.2/30. Add a default route with next hop 192.168.4.1 (use full commands). Submit the commands and their output as text in the answer box.

SW1 configuration:

<Huawei>system-view
[Huawei]sysname SW1
[SW1]vlan batch 2 3 1004
[SW1]port-group 1
[SW1-port-group-1]group-member GigabitEthernet 0/0/1 to GigabitEthernet 0/0/5
[SW1-port-group-1]port link-type access
[SW1-port-group-1]port default vlan 2
[SW1-port-group-1]quit
[SW1]interface GigabitEthernet 0/0/10
[SW1-GigabitEthernet0/0/10]port link-type trunk
[SW1-GigabitEthernet0/0/10]port trunk allow-pass vlan 3
[SW1-GigabitEthernet0/0/10]quit
[SW1]interface Vlanif 2
[SW1-Vlanif2]ip address 172.16.2.1 24
[SW1-Vlanif2]quit
[SW1]interface Vlanif 1004
[SW1-Vlanif1004]ip address 192.168.4.2 30
[SW1-Vlanif1004]quit
[SW1]ip route-static 0.0.0.0 0 192.168.4.1

 

2. Network management (40 points)

On switch SW1, configure VLAN 20 with address 172.17.20.253/24, a VRRP virtual gateway of 172.17.20.254, VRID 1, and priority 120. Configure VLAN 17 with address 172.17.17.253/24, a VRRP virtual gateway of 172.17.17.254, and VRID 2. Configure MSTP with VLAN 20 in instance 1 and VLAN 17 in instance 2; SW1 is primary for VLAN 20 and backup for VLAN 17. Submit the commands as text in the answer box.

  • The topology and full content of this task are too long to reproduce here — refer to the PDF for the rest.

SW1 configuration:

[SW1]interface Vlanif 20
[SW1-Vlanif20]ip address 172.17.20.253 24
[SW1-Vlanif20]vrrp vrid 1 virtual-ip 172.17.20.254
[SW1-Vlanif20]vrrp vrid 1 priority 120
[SW1-Vlanif20]vrrp vrid 1 track interface GigabitEthernet 0/0/4 reduced 15
[SW1-Vlanif20]quit
[SW1]interface Vlanif 17
[SW1-Vlanif17]ip address 172.17.17.253 24
[SW1-Vlanif17]vrrp vrid 2 virtual-ip 172.17.17.254
[SW1-Vlanif17]quit
[SW1]stp region-configuration
[SW1-mst-region]region-name RG1
[SW1-mst-region]instance 1 vlan 20
[SW1-mst-region]instance 2 vlan 17
[SW1-mst-region]active region-configuration
[SW1-mst-region]quit
[SW1]stp instance 1 root primary
[SW1]stp instance 2 root secondary
[SW1]stp pathcost-standard legacy
[SW1]stp enable

 

3. YUM repository management (40 points)

Suppose you have an image file centos7.2-1511.iso; use it to configure a yum repository, mounting the image at /opt/centos. There is also an ftp source at 192.168.100.200 whose ftp configuration sets anon_root=/opt; /opt contains an iaas directory (which in turn contains a repodata directory). Write your local.repo file so that packages from both sources can be used for installation. Submit the contents of local.repo as text in the answer box.

  • We use Xserver1 to simulate the existing ftp source; mount (or simulate) the centos source yourself. Once you know the steps, just follow the task requirements.

Xserver1:

[root@xserver1 ~]# yum install -y vsftpd
[root@xserver1 ~]# vim /etc/vsftpd/vsftpd.conf 
anon_root=/opt
[root@xserver1 ~]# systemctl restart vsftpd
[root@xserver1 ~]# systemctl enable vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
[root@xserver1 ~]# systemctl stop firewalld
[root@xserver1 ~]# systemctl disable firewalld
# Note: set the SELinux mode (takes effect only after a reboot):
[root@xserver1 ~]# vim /etc/selinux/config 
SELINUX=Permissive
# Note: set the mode temporarily (no reboot needed):
[root@xserver1 ~]# setenforce 0
[root@xserver1 ~]# getenforce 
Permissive

 

Xserver2:

[root@xserver2 ~]# systemctl stop firewalld
[root@xserver2 ~]# systemctl disable firewalld
# Note: set the SELinux mode (takes effect only after a reboot):
[root@xserver2 ~]# vim /etc/selinux/config 
SELINUX=Permissive
# Note: set the mode temporarily (no reboot needed):
[root@xserver2 ~]# setenforce 0
[root@xserver2 ~]# getenforce 
Permissive
[root@xserver2 ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /opt/centos/
mount: /dev/loop0 is write-protected, mounting read-only
[root@xserver2 ~]# cat /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[iaas]
name=iaas
baseurl=ftp://192.168.100.200/iaas
enabled=1
gpgcheck=0

 

4. KVM management (40 points)

Using the provided virtual machine and packages, install the KVM service and start a KVM virtual machine. Boot the VM with the provided cirros image and the qemu-ifup-NAT script; after it boots, log in and run ip addr list, then submit that command's output as text in the answer box.

  • Upload kvm_yum to the root directory and rename qemu-ifup-NAT.txt to qemu-ifup-NAT.sh:
[root@localhost ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[kvm]
name=kvm
baseurl=file:///root/kvm_yum
enabled=1
gpgcheck=0
[root@localhost ~]# yum install  -y  qemu-kvm  openssl libvirt
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# ln  -sv  /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
‘/usr/bin/qemu-kvm’ -> ‘/usr/libexec/qemu-kvm’
[root@localhost ~]# mv qemu-ifup-NAT.txt qemu-ifup-NAT.sh
[root@localhost ~]# chmod +x qemu-ifup-NAT.sh
[root@localhost ~]# qemu-kvm -m 1024 -drive file=/root/cirros-0.3.4-x86_64-disk.img,if=virtio  -net nic,model=virtio  -net tap,script=qemu-ifup-NAT.sh  -nographic
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
Back to tcg accelerator.
Warning:dnsmasq is already running. No need to run it again.

SeaBIOS (version 1.11.0-2.el7)

iPXE (http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+3FF94780+3FED4780 C980                                                                          

Booting from Hard Disk...
GRUB Loading stage1.5.

GRUB loading, please wait...
Starting up ..
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-80-virtual (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1)
[    0.000000] Command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
[    0.000000]  BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
[    0.000000]  BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[    0.000000]  BIOS-e820: 0000000000100000 - 000000003fffc000 (usable)
[    0.000000]  BIOS-e820: 000000003fffc000 - 0000000040000000 (reserved)
[    0.000000]  BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.4 present.
[    0.000000] No AGP bridge found
[    0.000000] last_pfn = 0x3fffc max_arch_pfn = 0x400000000
[    0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[    0.000000] found SMP MP-table at [ffff8800000f63b0] f63b0
[    0.000000] init_memory_mapping: 0000000000000000-000000003fffc000
[    0.000000] RAMDISK: 37c92000 - 37ff0000
[    0.000000] ACPI: RSDP 00000000000f6210 00014 (v00 BOCHS )
[    0.000000] ACPI: RSDT 000000003ffffad7 00030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
[    0.000000] ACPI: FACP 000000003ffff177 00074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
[    0.000000] ACPI: DSDT 000000003fffe040 01137 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
[    0.000000] ACPI: FACS 000000003fffe000 00040
[    0.000000] ACPI: SSDT 000000003ffff1eb 00874 (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
[    0.000000] ACPI: APIC 000000003ffffa5f 00078 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at 0000000000000000-000000003fffc000
[    0.000000] Initmem setup node 0 0000000000000000-000000003fffc000
[    0.000000]   NODE_DATA [000000003fff7000 - 000000003fffbfff]
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000010 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   empty
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[2] active PFN ranges
[    0.000000]     0: 0x00000010 -> 0x0000009f
[    0.000000]     0: 0x00000100 -> 0x0003fffc
[    0.000000] ACPI: PM-Timer IO Port: 0x608
[    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] Allocating PCI resources starting at 40000000 (gap: 40000000:bffc0000)
[    0.000000] Booting paravirtualized kernel on bare hardware
[    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
[    0.000000] PERCPU: Embedded 27 pages/cpu @ffff88003fc00000 s78848 r8192 d23552 u2097152
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 257926
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty1 console=ttyS0
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Checking aperture...
[    0.000000] No AGP bridge found
[    0.000000] Memory: 1012360k/1048560k available (6576k kernel code, 452k absent, 35748k reserved, 6)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:4352 nr_irqs:256 16
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [tty1] enabled
[    0.000000] console [ttyS0] enabled
[    0.000000] allocated 8388608 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] Fast TSC calibration failed
[    0.000000] TSC: Unable to calibrate against PIT
[    0.000000] TSC: using PMTIMER reference calibration
[    0.000000] Detected 3390.935 MHz processor.
[    0.028662] Calibrating delay loop (skipped), value calculated using timer frequency.. 6781.87 Bogo)
[    0.029371] pid_max: default: 32768 minimum: 301
[    0.032001] Security Framework initialized
[    0.033556] AppArmor: AppArmor initialized
[    0.033685] Yama: becoming mindful.
[    0.036002] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.036002] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
[    0.036002] Mount-cache hash table entries: 256
[    0.036002] Initializing cgroup subsys cpuacct
[    0.036002] Initializing cgroup subsys memory
[    0.036002] Initializing cgroup subsys devices
[    0.036002] Initializing cgroup subsys freezer
[    0.036002] Initializing cgroup subsys blkio
[    0.036002] Initializing cgroup subsys perf_event
[    0.036002] mce: CPU supports 10 MCE banks
[    0.036002] SMP alternatives: switching to UP code
[    0.085713] Freeing SMP alternatives: 24k freed
[    0.086261] ACPI: Core revision 20110623
[    0.102019] ftrace: allocating 26610 entries in 105 pages
[    0.118594] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.156009] CPU0: AMD QEMU Virtual CPU version 1.5.3 stepping 03
[    0.156009] APIC calibration not consistent with PM-Timer: 139ms instead of 100ms
[    0.156009] APIC delta adjusted to PM-Timer: 6249992 (8743536)
[    0.156009] Performance Events: Broken PMU hardware detected, using software events only.
[    0.156492] NMI watchdog disabled (cpu0): hardware events not enabled
[    0.156993] Brought up 1 CPUs
[    0.157131] Total of 1 processors activated (6781.87 BogoMIPS).
[    0.166195] devtmpfs: initialized
[    0.179266] EVM: security.selinux
[    0.179359] EVM: security.SMACK64
[    0.179427] EVM: security.capability
[    0.187359] print_constraints: dummy: 
[    0.188202] RTC time: 10:13:19, date: 05/22/20
[    0.189085] NET: Registered protocol family 16
[    0.192172] ACPI: bus type pci registered
[    0.193548] PCI: Using configuration type 1 for base access
[    0.202485] bio: create slab <bio-0> at 0
[    0.204800] ACPI: Added _OSI(Module Device)
[    0.204908] ACPI: Added _OSI(Processor Device)
[    0.204996] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.205084] ACPI: Added _OSI(Processor Aggregator Device)
[    0.226640] ACPI: Interpreter enabled
[    0.226764] ACPI: (supports S0 S5)
[    0.227204] ACPI: Using IOAPIC for interrupt routing
[    0.248481] ACPI: No dock devices found.
[    0.248617] HEST: Table not found.
[    0.248734] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.250625] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.252569] pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7]
[    0.252759] pci_root PNP0A03:00: host bridge window [io  0x0d00-0xffff]
[    0.252901] pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff]
[    0.253036] pci_root PNP0A03:00: host bridge window [mem 0x40000000-0xfebfffff]
[    0.258912] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
[    0.259224] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
[    0.284776]  pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e)
[    0.297311] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[    0.297999] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.298483] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.298952] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[    0.299238] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[    0.301682] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[    0.301984] vgaarb: loaded
[    0.302057] vgaarb: bridge control possible 0000:00:02.0
[    0.303180] i2c-core: driver [aat2870] using legacy suspend method
[    0.303331] i2c-core: driver [aat2870] using legacy resume method
[    0.304483] SCSI subsystem initialized
[    0.305723] usbcore: registered new interface driver usbfs
[    0.306042] usbcore: registered new interface driver hub
[    0.306686] usbcore: registered new device driver usb
[    0.308657] PCI: Using ACPI for IRQ routing
[    0.313022] NetLabel: Initializing
[    0.313105] NetLabel:  domain hash size = 128
[    0.313177] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.313905] NetLabel:  unlabeled traffic allowed by default
[    0.369076] AppArmor: AppArmor Filesystem Enabled
[    0.369652] pnp: PnP ACPI init
[    0.369833] ACPI: bus type pnp registered
[    0.375223] pnp: PnP ACPI: found 6 devices
[    0.375341] ACPI: ACPI bus type pnp unregistered
[    0.394684] Switching to clocksource acpi_pm
[    0.395149] NET: Registered protocol family 2
[    0.406185] IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.410624] TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
[    0.412790] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.413739] TCP: Hash tables configured (established 131072 bind 65536)
[    0.413889] TCP reno registered
[    0.414060] UDP hash table entries: 512 (order: 2, 16384 bytes)
[    0.414254] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
[    0.415193] NET: Registered protocol family 1
[    0.415484] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.415638] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.415842] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.422966] Trying to unpack rootfs image as initramfs...
[    0.429911] audit: initializing netlink socket (disabled)
[    0.430417] type=2000 audit(1590142399.428:1): initialized
[    0.561790] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.578889] VFS: Disk quotas dquot_6.5.2
[    0.579617] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.589481] fuse init (API version 7.17)
[    0.590323] msgmni has been set to 1977
[    0.613806] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[    0.614254] io scheduler noop registered
[    0.614375] io scheduler deadline registered (default)
[    0.614735] io scheduler cfq registered
[    0.615718] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.616355] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.618371] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    0.618865] ACPI: Power Button [PWRF]
[    0.634672] ERST: Table is not found!
[    0.634790] GHES: HEST is not enabled!
[    0.636666] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[    0.636902] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11
[    0.725351] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[    0.725602] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10
[    0.726699] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.748340] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    0.805971] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    0.834595] Linux agpgart interface v0.103
[    0.849149] brd: module loaded
[    0.861034] loop: module loaded
[    0.874798]  vda: vda1
[    0.894087] Freeing initrd memory: 3448k freed
[    0.896830] scsi0 : ata_piix
[    0.897847] scsi1 : ata_piix
[    0.898286] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc060 irq 14
[    0.898422] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc068 irq 15
[    0.901392] Fixed MDIO Bus: probed
[    0.901708] tun: Universal TUN/TAP device driver, 1.6
[    0.901797] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    0.908111] PPP generic driver version 2.4.2
[    0.909758] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.910003] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.910201] uhci_hcd: USB Universal Host Controller Interface driver
[    0.910680] usbcore: registered new interface driver libusual
[    0.911227] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    0.913292] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.913554] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.914713] mousedev: PS/2 mouse device common for all mice
[    0.916900] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    0.918165] rtc_cmos 00:01: RTC can wake from S4
[    0.919431] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
[    0.919785] rtc0: alarms up to one day, 114 bytes nvram
[    0.920627] device-mapper: uevent: version 1.0.3
[    0.922066] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel@redhat.com
[    0.922602] cpuidle: using governor ladder
[    0.922717] cpuidle: using governor menu
[    0.922791] EFI Variables Facility v0.08 2004-May-17
[    0.924595] TCP cubic registered
[    0.925278] NET: Registered protocol family 10
[    0.931603] NET: Registered protocol family 17
[    0.931764] Registering the dns_resolver key type
[    0.934698] registered taskstats version 1
[    1.013651]   Magic number: 4:873:224
[    1.014233] rtc_cmos 00:01: setting system clock to 2020-05-22 10:13:20 UTC (1590142400)
[    1.014437] powernow-k8: Processor cpuid 6d3 not supported
[    1.015351] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[    1.015464] EDD information not available.
[    1.064256] ata2.00: ATAPI: QEMU DVD-ROM, 1.5.3, max UDMA/100
[    1.065808] ata2.00: configured for MWDMA2
[    1.070890] scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     1.5. PQ: 0 ANSI: 5
[    1.074738] sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[    1.074937] cdrom: Uniform CD-ROM driver Revision: 3.20
[    1.077154] sr 1:0:0:0: Attached scsi generic sg0 type 5
[    1.086555] Freeing unused kernel memory: 928k freed
[    1.102980] Write protecting the kernel read-only data: 12288k
[    1.128858] Freeing unused kernel memory: 1596k freed
[    1.147351] Freeing unused kernel memory: 1184k freed

info: initramfs: up at 1.23
NOCHANGE: partition 1 is size 64260. it cannot be grown
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 1.90
info: container: none
Starting logging: OK
modprobe: module virtio_blk not found in modules.dep
modprobe: module virtio_net not found in modules.dep
WARN: /etc/rc3.d/S10-load-modules failed
Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 2.91
no results found for mode=local. up 3.16. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 192.168.122.89...
Lease of 192.168.122.89 obtained, lease time 3600
cirros-ds 'net' up at 6.76
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 7.00. request failed
failed 2/20: up 19.34. request failed
failed 3/20: up 31.84. request failed
failed 4/20: up 44.06. request failed
failed 5/20: up 56.30. request failed
failed 6/20: up 68.69. request failed
failed 7/20: up 81.10. request failed
failed 8/20: up 93.30. request failed
failed 9/20: up 105.62. request failed
failed 10/20: up 117.91. request failed
failed 11/20: up 130.15. request failed
failed 12/20: up 142.46. request failed
failed 13/20: up 154.73. request failed
failed 14/20: up 167.19. request failed
failed 15/20: up 179.45. request failed
failed 16/20: up 191.85. request failed
failed 17/20: up 204.14. request failed
failed 18/20: up 216.41. request failed
failed 19/20: up 228.79. request failed
failed 20/20: up 241.07. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 253.35. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== system information ===
Platform: Red Hat KVM
Container: none
Arch: x86_64
CPU(s): 1 @ 3390.935 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type: AMD-V
RAM Size: 995MB
Disks:
NAME MAJ:MIN       SIZE LABEL         MOUNTPOINT
vda  253:0     41126400               
vda1 253:1     32901120 cirros-rootfs /
sr0   11:0   1073741312               
=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgl4Xw51xDUhlJtF6/Yw9tjhW4yvcVKZaqXGS4GpyenRqnuJxqkpsDD4QFehMHOV+yjs
ssh-dss AAAAB3NzaC1kc3MAAACBAMuvp15NiGynOnim7Fk/P8jd2+sZMMQNm4aNiqFkARD/wgIBrFNTvapQWoJBGZNKeMafPKzueys
-----END SSH HOST KEY KEYS-----
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,192.168.122.89,24,fe80::5054:ff:fe12:3456
ip-route:default via 192.168.122.1 dev eth0 
ip-route:192.168.122.0/24 dev eth0  src 192.168.122.89 
=== datasource: None None ===
=== cirros: current=0.3.4 uptime=255.67 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \ 
\___//_//_/  /_/   \____/___/ 
   http://cirros-cloud.net

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password: 
$ ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.89/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::5054:ff:fe12:3456/64 scope link 
       valid_lft forever preferred_lft forever
$ route  -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
# Note: to exit the guest system, press Ctrl+A, then X:

 

5. Master-slave database management (40 points)

Using the two provided virtual machines, install the mariadb database on both and configure them as master and slave so that the two databases stay in sync. When done, run "show slave status \G" on the slave node to query its replication status, and submit the result as text in the answer box.

  • Upload gpmall-repo (which contains the mariadb packages) to the /root directory:

Mysql1:

[root@xiandian ~]# hostnamectl set-hostname mysql1
[root@mysql1 ~]# login
[root@mysql1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111    mysql1
192.168.1.112    mysql2
[root@mysql1 ~]# systemctl stop firewalld
[root@mysql1 ~]# systemctl disable firewalld
[root@mysql1 ~]# setenforce 0
[root@mysql1 ~]# vim /etc/selinux/config 
SELINUX=Permissive
[root@mysql1~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[mariadb]
name=mariadb
baseurl=file:///root/gpmall-repo
enabled=1
gpgcheck=0
[root@mysql1 ~]# yum install -y mariadb mariadb-server
[root@mysql1 ~]# systemctl restart mariadb
[root@mysql1 ~]# mysql_secure_installation
[root@mysql1 ~]# vim /etc/my.cnf
# Note: add the following under [mysqld]:
log_bin = mysql-bin
binlog_ignore_db = mysql
server_id = 10
[root@mysql1 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.65-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> grant all privileges on *.* to 'root'@'%' identified by '000000';
Query OK, 0 rows affected (0.00 sec)

# Note: if you skip the hosts file above, you can use the IP address instead of the hostname mysql2; the user name is arbitrary — it only exists for the replication connection
MariaDB [(none)]> grant replication slave on *.* to 'user'@'mysql2' identified by '000000';
Query OK, 0 rows affected (0.00 sec)

 

Mysql2:

[root@xiandian ~]# hostnamectl set-hostname mysql2
[root@mysql2 ~]# login
[root@mysql2 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111   mysql1
192.168.1.112   mysql2
[root@mysql2 ~]# systemctl stop firewalld
[root@mysql2 ~]# systemctl disable firewalld
[root@mysql2 ~]# setenforce 0
[root@mysql2 ~]# vim /etc/selinux/config 
SELINUX=Permissive
[root@mysql2~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[mariadb]
name=mariadb
baseurl=file:///root/gpmall-repo
enabled=1
gpgcheck=0
[root@mysql2 ~]# yum install -y mariadb mariadb-server
[root@mysql2 ~]# systemctl restart mariadb
[root@mysql2 ~]# mysql_secure_installation
[root@mysql2 ~]# vim /etc/my.cnf
# Note: add the following under [mysqld]:
log_bin = mysql-bin
binlog_ignore_db = mysql
server_id = 20
[root@mysql2 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.65-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# Note: if you skip the hosts file, use the IP address instead of the hostname mysql1; the user and password here must match the 'user' account created on mysql1 above
MariaDB [(none)]> change master to master_host='mysql1',master_user='user',master_password='000000';
Query OK, 0 rows affected (0.02 sec)
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql1
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 245
Relay_Log_File: mariadb-relay-bin.000005
Relay_Log_Pos: 529
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 245
Relay_Log_Space: 1256
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 30
1 row in set (0.00 sec)
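上面判断复制是否正常,关键是Slave_IO_Running和Slave_SQL_Running两个字段都为Yes。下面是一段示意脚本(非题目要求,仅供参考),把这个检查自动化;函数从标准输入读取show slave status\G的输出文本,实际环境中可用mysql客户端管道取得:

```shell
#!/bin/sh
# 示意脚本:解析 show slave status\G 的输出,
# Slave_IO_Running / Slave_SQL_Running 两个复制线程均为 Yes 才判定复制正常。
# 实际用法(示意,假设可以 root/000000 登录):
#   mysql -uroot -p000000 -e 'show slave status\G' | slave_ok
slave_ok() {
  status=$(cat)   # 从标准输入读入整段状态文本
  if echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
     echo "$status" | grep -q 'Slave_SQL_Running: Yes'; then
    echo "replication ok"
  else
    echo "replication broken"
  fi
}
```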

 

验证结果(主从是否同步):

Mysql1:

[root@mysql1 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 26
Server version: 5.5.65-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database test;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> use test;
Database changed
MariaDB [test]> create table demotables(id int not null primary key,name varchar(10),addr varchar(20));
Query OK, 0 rows affected (0.01 sec)
MariaDB [test]> insert into demotables values(1,'zhangsan','lztd');
Query OK, 0 rows affected (0.00 sec)
MariaDB [test]> select * from demotables;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | zhangsan | lztd |
+----+----------+------+
1 row in set (0.00 sec)

 

Mysql2:

[root@mysql2 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 24
Server version: 5.5.65-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [test]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demotables     |
+----------------+
1 row in set (0.00 sec)
MariaDB [test]> select * from demotables;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | zhangsan | lztd |
+----+----------+------+
1 row in set (0.00 sec)
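上面的验证思路是"主库写、从库查、看结果是否一致"。这个对比也可以写成一个小函数(示意,假设两台库都允许root/000000远程登录,查询语句需是确定性的):

```shell
#!/bin/sh
# 示意脚本:对主从各执行同一条确定性查询,输出一致即可认为该表已同步。
same_result() {
  if [ "$1" = "$2" ]; then echo "in-sync"; else echo "diverged"; fi
}
# 实际用法(示意,查询上文创建的 test.demotables):
#   m=$(mysql -h mysql1 -uroot -p000000 -e 'select * from test.demotables')
#   s=$(mysql -h mysql2 -uroot -p000000 -e 'select * from test.demotables')
#   same_result "$m" "$s"
```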

 

6.读写分离数据库管理(40分)

使用提供的虚拟机与软件包,基于上一题构建的主从数据库,进一步完成Mycat读写分离数据库的配置安装(题目附有所需的schema.xml配置文件,server.xml文件不再给出,文件内容此处从略)。配置读写分离数据库完毕后,使用netstat -ntpl命令查询端口启动情况,最后将netstat -ntpl命令的返回结果以文本形式提交到答题框。

Mycat & Mysql1 & Mysql2都执行以下操作:

# 注释:这一步其实配不配都可以,取决于后面用主机名还是IP地址
[root@mycat ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111    mysql1
192.168.1.112    mysql2
192.168.1.113    mycat

 

Mycat:

  • 上传含有mariadb子目录的gpmall-repo目录和Mycat-server-1.6-RELEASE-20161028204710-linux.gz到/root目录下,并配置yum源:
[root@mycat ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111   mysql1
192.168.1.112   mysql2
192.168.1.113   mycat
[root@mycat~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[mariadb]
name=mariadb
baseurl=file:///root/gpmall-repo
enabled=1
gpgcheck=0
[root@mycat ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@mycat ~]# tar -zvxf Mycat-server-1.6-RELEASE-20161028204710-linux.gz -C /usr/local/
[root@mycat ~]# chmod -R 777 /usr/local/mycat/
[root@mycat ~]# vim /etc/profile
export MYCAT_HOME=/usr/local/mycat/
[root@mycat ~]# source /etc/profile
[root@mycat ~]# vim /usr/local/mycat/conf/schema.xml
<?xml version='1.0'?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<!--注释:name="USERDB"指的是逻辑数据库,后面添加的dataNode="dn1"绑定真实数据库-->
<schema name="USERDB" checkSQLschema="true" sqlMaxLimit="100" 
dataNode="dn1"></schema>
<!--注释:name="dn1"与上面逻辑数据库中引用的名称对应,database="test"为真实数据库名-->
<dataNode name="dn1" dataHost="localhost1" database="test" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" 
dbDriver="native" writeType="0" switchType="1" slaveThreshold="100">
 <heartbeat>select user()</heartbeat>
 <writeHost host="hostM1" url="192.168.1.111:3306" user="root" password="000000">
 <readHost host="hostS1" url="192.168.1.112:3306" user="root" password="000000" />
 </writeHost>
</dataHost>
</mycat:schema>
[root@mycat ~]# chown root:root /usr/local/mycat/conf/schema.xml
# 注释:修改root用户的访问密码与数据库
[root@mycat ~]# vim /usr/local/mycat/conf/server.xml
        <user name="root">
                <property name="password">000000</property>
                <property name="schemas">USERDB</property>

                <!-- 表级 DML 权限设置 -->
                <!--            
                <privileges check="false">
                        <schema name="TESTDB" dml="0110" >
                                <table name="tb01" dml="0000"></table>
                                <table name="tb02" dml="1111"></table>
                        </schema>
                </privileges>           
                 -->
        </user>
# 注释:删除后面的<user name="user"></user>标签及其内容
[root@mycat ~]# /bin/bash /usr/local/mycat/bin/mycat start
Starting Mycat-server...
[root@mycat ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1114/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1992/master         
tcp        0      0 127.0.0.1:32000         0.0.0.0:*               LISTEN      3988/java           
tcp6       0      0 :::45929                :::*                    LISTEN      3988/java           
tcp6       0      0 :::9066                 :::*                    LISTEN      3988/java           
tcp6       0      0 :::40619                :::*                    LISTEN      3988/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1114/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1992/master         
tcp6       0      0 :::1984                 :::*                    LISTEN      3988/java           
tcp6       0      0 :::8066                 :::*                    LISTEN      3988/java   
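上面netstat输出里需要重点确认的是Mycat的服务端口8066和管理端口9066。下面是一段示意脚本(非题目要求),从netstat的输出中自动检查这两个端口:

```shell
#!/bin/sh
# 示意脚本:从 netstat -ntlp 的输出确认 Mycat 服务端口 8066 与管理端口 9066 均处于 LISTEN。
mycat_ports_up() {
  out=$(cat)
  for p in 8066 9066; do
    echo "$out" | grep -q ":$p .*LISTEN" || { echo "port $p missing"; return 1; }
  done
  echo "mycat listening"
}
# 用法:netstat -ntlp | mycat_ports_up
```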
# 注释:验证结果(读写分离是否成功):
[root@mycat ~]# yum install -y MariaDB-client
# 注释:查看逻辑库
[root@mycat ~]# mysql -h 127.0.0.1 -P8066 -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (OpenCloundDB)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+----------+
| DATABASE |
+----------+
| USERDB   |
+----------+
1 row in set (0.003 sec)

MySQL [(none)]> use USERDB
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [USERDB]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| demotables     |
+----------------+
1 row in set (0.007 sec)

MySQL [USERDB]> select * from demotables;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | zhangsan | lztd |
|  2 | xiaohong | lztd |
|  3 | xiaoli   | lztd |
|  4 | lihua    | nnzy |
+----+----------+------+
4 rows in set (0.060 sec)

MySQL [USERDB]> insert into demotables values(5,'tomo','hfdx');
Query OK, 1 row affected (0.013 sec)

MySQL [USERDB]> select * from demotables;
+----+----------+------+
| id | name     | addr |
+----+----------+------+
|  1 | zhangsan | lztd |
|  2 | xiaohong | lztd |
|  3 | xiaoli   | lztd |
|  4 | lihua    | nnzy |
|  5 | tomo     | hfdx |
+----+----------+------+
5 rows in set (0.004 sec)

MySQL [USERDB]> exit;
Bye
# 注释:查询对数据库读写操作的分离信息
[root@mycat ~]# mysql -h 127.0.0.1 -P9066 -uroot -p000000 -e 'show @@datasource;'
+----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
| DATANODE | NAME   | TYPE  | HOST          | PORT | W/R  | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
+----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
| dn1      | hostM1 | mysql | 192.168.1.111 | 3306 | W    |      0 |   10 | 1000 |      45 |         0 |          1 |
| dn1      | hostS1 | mysql | 192.168.1.112 | 3306 | R    |      0 |    6 | 1000 |      43 |         4 |          0 |
+----------+--------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
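读写分离是否生效,看上表即可:W行(写节点)的READ_LOAD为0,读压力全部落在R行(读节点)上,这正是balance="3"的语义。下面用一段示意awk脚本把这个判断自动化(仅供参考,字段位置按上表的列顺序假设):

```shell
#!/bin/sh
# 示意脚本:解析 show @@datasource 输出,W/R 列为 W 的写节点其 READ_LOAD 应为 0,
# 即读压力全部由读节点承担。
rw_split_ok() {
  awk -F'|' '$7 ~ /^[[:space:]]*W[[:space:]]*$/ {
               gsub(/[[:space:]]/, "", $12)   # 第12列为 READ_LOAD
               if ($12 != "0") bad = 1
             }
             END { print (bad ? "writer served reads" : "read/write split ok") }'
}
# 用法:mysql -h127.0.0.1 -P9066 -uroot -p000000 -e 'show @@datasource;' | rw_split_ok
```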

 

一些参数注释:

sqlMaxLimit 配置默认查询数量
database 为真实数据库名
balance="0" 不开启读写分离机制,所有读操作都发送到当前可用的writeHost上
balance="1" 全部的readHost与stand by writeHost都参与select语句的负载均衡。简单来说,在双主双从模式下(M1->S1,M2->S2,且M1与M2互为主备),正常情况下M2、S1、S2都参与select语句的负载均衡
balance="2" 所有读操作都随机地在writeHost、readHost上分发
balance="3" 所有读请求随机地分发到writeHost对应的readHost执行,writeHost不承担读压力;注意balance=3只在1.4及以后版本中有效,1.3版本没有
writeType="0" 所有写操作发送到配置的第一个writeHost,第一个挂了则切换到仍然存活的第二个writeHost,重新启动后以切换后的为准,切换记录保存在配置文件dnindex.properties中
writeType="1" 所有写操作都随机地发送到配置的writeHost

7.Zookeeper集群(40分)

继续使用上题的三台虚拟机,使用提供的软件包,完成Zookeeper集群的安装与配置,配置完成后,在相应的目录使用./zkServer.sh status命令查看三个Zookeeper节点的状态,将三个节点的状态以文本形式提交到答题框。

  • 可以继续使用上面主从读写分离的环境,没有影响;我这里没有沿用只是做题时没注意,做法都一样。

Zookeeper1:

[root@xiandian ~]# hostnamectl set-hostname zookeeper1
[root@xiandian ~]# bash
[root@zookeeper1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10    zookeeper1
192.168.1.20    zookeeper2
192.168.1.30    zookeeper3
# 注释:在zookeeper1节点上传gpmall-repo,然后用vsftpd进行共享,我上传到/opt
[root@zookeeper1 ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
enabled=1
gpgcheck=0
[root@zookeeper1 ~]# yum repolist
[root@zookeeper1 ~]# yum install -y vsftpd
[root@zookeeper1 ~]# vim /etc/vsftpd/vsftpd.conf 
# 注释:添加:
anon_root=/opt
[root@zookeeper1 ~]# systemctl restart vsftpd
[root@zookeeper1 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@zookeeper1 ~]# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
# 注释:将zookeeper-3.4.14.tar.gz上传至三个节点或者设置nfs进行共享
[root@zookeeper1 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
[root@zookeeper1 ~]# cd zookeeper-3.4.14/conf/
[root@zookeeper1 conf]# mv zoo_sample.cfg zoo.cfg
[root@zookeeper1 conf]# vim zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.1.10:2888:3888
server.2=192.168.1.20:2888:3888
server.3=192.168.1.30:2888:3888
[root@zookeeper1 conf]# grep -n '^[a-zA-Z]' zoo.cfg 
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=192.168.1.10:2888:3888
30:server.2=192.168.1.20:2888:3888
31:server.3=192.168.1.30:2888:3888
[root@zookeeper1 conf]# cd 
[root@zookeeper1 ~]# mkdir /tmp/zookeeper
[root@zookeeper1 ~]# vim /tmp/zookeeper/myid
1
[root@zookeeper1 ~]# cat /tmp/zookeeper/myid 
1
[root@zookeeper1 ~]# cd zookeeper-3.4.14/bin/
[root@zookeeper1 bin]# ./zkServer.sh start
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower

 

Zookeeper2:

[root@xiandian ~]# hostnamectl set-hostname zookeeper2
[root@xiandian ~]# bash
[root@zookeeper2 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10    zookeeper1
192.168.1.20    zookeeper2
192.168.1.30    zookeeper3
[root@zookeeper2 ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[gpmall]
name=gpmall
baseurl=ftp://zookeeper1/gpmall-repo
enabled=1
gpgcheck=0
[root@zookeeper2 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@zookeeper2 ~]# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
# 注释:将zookeeper-3.4.14.tar.gz上传至三个节点或者设置nfs进行共享
[root@zookeeper2 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
[root@zookeeper2 ~]# cd zookeeper-3.4.14/conf/
[root@zookeeper2 conf]# mv zoo_sample.cfg zoo.cfg
[root@zookeeper2 conf]# vim zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.1.10:2888:3888
server.2=192.168.1.20:2888:3888
server.3=192.168.1.30:2888:3888
[root@zookeeper2 conf]# grep -n '^[a-zA-Z]' zoo.cfg 
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=192.168.1.10:2888:3888
30:server.2=192.168.1.20:2888:3888
31:server.3=192.168.1.30:2888:3888
[root@zookeeper2 conf]# cd 
[root@zookeeper2 ~]# mkdir /tmp/zookeeper
[root@zookeeper2 ~]# vim /tmp/zookeeper/myid
2
[root@zookeeper2 ~]# cat /tmp/zookeeper/myid 
2
[root@zookeeper2 ~]# cd zookeeper-3.4.14/bin/
[root@zookeeper2 bin]# ./zkServer.sh start
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper2 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

 

Zookeeper3:

[root@xiandian ~]# hostnamectl set-hostname zookeeper3
[root@xiandian ~]# bash
[root@zookeeper3 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10    zookeeper1
192.168.1.20    zookeeper2
192.168.1.30    zookeeper3
[root@zookeeper3 ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[gpmall]
name=gpmall
baseurl=ftp://zookeeper1/gpmall-repo
enabled=1
gpgcheck=0
[root@zookeeper3 ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@zookeeper3 ~]# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
# 注释:将zookeeper-3.4.14.tar.gz上传至三个节点或者设置nfs进行共享
[root@zookeeper3 ~]# tar -zvxf zookeeper-3.4.14.tar.gz
[root@zookeeper3 ~]# cd zookeeper-3.4.14/conf/
[root@zookeeper3 conf]# mv zoo_sample.cfg zoo.cfg
[root@zookeeper3 conf]# vim zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.1.10:2888:3888
server.2=192.168.1.20:2888:3888
server.3=192.168.1.30:2888:3888
[root@zookeeper3 conf]# grep -n '^[a-zA-Z]' zoo.cfg 
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=192.168.1.10:2888:3888
30:server.2=192.168.1.20:2888:3888
31:server.3=192.168.1.30:2888:3888
[root@zookeeper3 conf]# cd 
[root@zookeeper3 ~]# mkdir /tmp/zookeeper
[root@zookeeper3 ~]# vim /tmp/zookeeper/myid
3
[root@zookeeper3 ~]# cat /tmp/zookeeper/myid 
3
[root@zookeeper3 ~]# cd zookeeper-3.4.14/bin/
[root@zookeeper3 bin]# ./zkServer.sh start
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper3 bin]# ./zkServer.sh status
zookeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
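回顾上面三个节点的操作:zoo.cfg三份完全相同,唯一的差异是dataDir下的myid(1/2/3)。这一步可以用一个小函数生成,避免逐台手工编辑(示意脚本,实际可配合scp/ssh分发):

```shell
#!/bin/sh
# 示意脚本:为 Zookeeper 节点生成 myid 文件并回显内容,便于核对。
gen_myid() {
  id="$1"; dir="$2"
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"
  cat "$dir/myid"
}
# 用法(在 zookeeper2 上):gen_myid 2 /tmp/zookeeper
```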

 

8.Kafka集群(40分)

继续使用上题的三台虚拟机,使用提供软件包,完成Kafka集群的安装与配置,配置完成后,在相应的目录使用 ./kafka-topics.sh --create --zookeeper 你的IP:2181 --replication-factor 1 --partitions 1 --topic test创建topic,将输入命令后的返回结果以文本形式提交到答题框。

  • 将kafka_2.11-1.1.1.tgz上传至三个节点(可以在上题的基础上做Kafka,因为Kafka依赖于Zookeeper):

Zookeeper1:

[root@zookeeper1 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
[root@zookeeper1 ~]# cd kafka_2.11-1.1.1/config/
[root@zookeeper1 config]# vim server.properties
# 注释:把broker.id=0和zookeeper.connect=localhost:2181用#注释掉(可在vim中用/加关键字进行查找),并添加三行新的内容:
#broker.id=0
#zookeeper.connect=localhost:2181
broker.id = 1
zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
listeners = PLAINTEXT://192.168.1.10:9092

[root@zookeeper1 config]# cd /root/kafka_2.11-1.1.1/bin/
[root@zookeeper1 bin]# ./kafka-server-start.sh -daemon ../config/server.properties 
[root@zookeeper1 bin]# jps
17645 QuorumPeerMain
18029 Kafka
18093 Jps
# 注释:创建topic(下面的IP请设置为自己节点的IP):
[root@zookeeper1 bin]# ./kafka-topics.sh --create --zookeeper 192.168.1.10:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
# 注释:测试结果:
[root@zookeeper1 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.10:2181
test

 

Zookeeper2:

[root@zookeeper2 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
[root@zookeeper2 ~]# cd kafka_2.11-1.1.1/config/
[root@zookeeper2 config]# vim server.properties
# 注释:把broker.id=0和zookeeper.connect=localhost:2181用#注释掉(可在vim中用/加关键字进行查找),并添加三行新的内容:
#broker.id=0
#zookeeper.connect=localhost:2181
broker.id = 2
zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
listeners = PLAINTEXT://192.168.1.20:9092

[root@zookeeper2 config]# cd /root/kafka_2.11-1.1.1/bin/
[root@zookeeper2 bin]# ./kafka-server-start.sh -daemon ../config/server.properties 
[root@zookeeper2 bin]# jps
3573 Kafka
3605 Jps
3178 QuorumPeerMain
# 注释:测试结果:
[root@zookeeper2 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.20:2181
test

 

Zookeeper3:

[root@zookeeper3 ~]# tar -zvxf kafka_2.11-1.1.1.tgz
[root@zookeeper3 ~]# cd kafka_2.11-1.1.1/config/
[root@zookeeper3 config]# vim server.properties
# 注释:把broker.id=0和zookeeper.connect=localhost:2181用#注释掉(可在vim中用/加关键字进行查找),并添加三行新的内容:
#broker.id=0
#zookeeper.connect=localhost:2181
broker.id = 3
zookeeper.connect = 192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181
listeners = PLAINTEXT://192.168.1.30:9092

[root@zookeeper3 config]# cd /root/kafka_2.11-1.1.1/bin/
[root@zookeeper3 bin]# ./kafka-server-start.sh -daemon ../config/server.properties 
[root@zookeeper3 bin]# jps 
3904 QuorumPeerMain
4257 Kafka
4300 Jps
# 注释:测试结果:
[root@zookeeper3 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.30:2181
test
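三个节点的server.properties只有broker.id和listeners两处不同,zookeeper.connect完全一样。按本文的IP规划(最后一段为10/20/30,除以10恰好是期望的broker.id),可以用一个函数生成每个节点的差异配置段(示意脚本,IP与编号的对应关系是本文环境下的假设):

```shell
#!/bin/sh
# 示意脚本:按节点 IP 生成 server.properties 中三行有差异的配置。
kafka_node_conf() {
  ip="$1"
  last=${ip##*.}            # 取 IP 的最后一段
  printf 'broker.id=%s\n' "$((last / 10))"
  printf 'zookeeper.connect=192.168.1.10:2181,192.168.1.20:2181,192.168.1.30:2181\n'
  printf 'listeners=PLAINTEXT://%s:9092\n' "$ip"
}
# 用法:kafka_node_conf 192.168.1.20 >> config/server.properties
```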

 

9.应用商城系统(40分)

继续使用上题的三台虚拟机,使用提供的软件包,基于集群应用系统部署。部署完成后,进行登录,(订单中填写的收货地址填写自己学校的地址,收货人填写自己的实际联系方式)最后使用curl命令去获取商城首页的返回信息,将curl http://你自己的商城IP/#/home获取到的结果以文本形式提交到答题框。

  • 这个题目验证答案的方式一样,用单节点也一样,所以没必要去弄集群
[root@mall ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111   mall
192.168.1.111   kafka.mall
192.168.1.111   redis.mall
192.168.1.111   mysql.mall
192.168.1.111   zookeeper.mall
[root@mall ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[gpmall]
name=gpmall
baseurl=file:///root/gpmall-repo
enabled=1
gpgcheck=0 
[root@mall ~]# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@mall ~]# java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[root@mall ~]# yum install -y redis
[root@mall ~]# yum install -y nginx
[root@mall ~]# yum install -y mariadb mariadb-server
[root@mall ~]# tar -zvxf zookeeper-3.4.14.tar.gz
[root@mall ~]# cd zookeeper-3.4.14/conf
[root@mall conf]# mv zoo_sample.cfg zoo.cfg
[root@mall conf]# cd /root/zookeeper-3.4.14/bin/
[root@mall bin]# ./zkServer.sh start
[root@mall bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: standalone
[root@mall bin]# cd
[root@mall ~]# tar -zvxf kafka_2.11-1.1.1.tgz
[root@mall ~]# cd kafka_2.11-1.1.1/bin/
[root@mall bin]# ./kafka-server-start.sh -daemon ../config/server.properties
[root@mall bin]# jps 
7249 Kafka
17347 Jps
6927 QuorumPeerMain
[root@mall bin]# cd
[root@mall ~]# vim /etc/my.cnf
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
[root@mall ~]# systemctl restart mariadb
[root@mall ~]# systemctl enable mariadb
[root@mall ~]# mysqladmin -uroot password 123456
[root@mall ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.3.18-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database gpmall;
Query OK, 1 row affected (0.002 sec)
MariaDB [(none)]> grant all privileges on *.* to root@localhost identified by '123456' with grant option;
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '123456' with grant option;
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> use gpmall;
Database changed
MariaDB [gpmall]> source /root/gpmall-xiangmubao-danji/gpmall.sql
MariaDB [gpmall]> Ctrl-C -- exit!
[root@mall ~]# vim /etc/redis.conf
# 注释:将bind 127.0.0.1这一行注释掉;将protected-mode yes 改为 protected-mode no
#bind 127.0.0.1
protected-mode no
[root@mall ~]# systemctl restart redis
[root@mall ~]# systemctl enable redis
Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.
[root@mall ~]# rm -rf /usr/share/nginx/html/*
[root@mall ~]# cp -rf gpmall-xiangmubao-danji/dist/* /usr/share/nginx/html/
[root@mall ~]# vim /etc/nginx/conf.d/default.conf
# 注释:在server块中添加三个location块
server {
...
    location /user {
        proxy_pass http://127.0.0.1:8082;
    }   

    location /shopping {
        proxy_pass http://127.0.0.1:8081;
    }

    location /cashier {
        proxy_pass http://127.0.0.1:8083;
    }
...
}
[root@mall ~]# systemctl restart nginx
[root@mall ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@mall ~]# cd gpmall-xiangmubao-danji/
[root@mall gpmall-xiangmubao-danji]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[1] 3531
[root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’

[root@mall gpmall-xiangmubao-danji]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2] 3571
[root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’

[root@mall gpmall-xiangmubao-danji]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[3] 3639
[root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’

[root@mall gpmall-xiangmubao-danji]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4] 3676
[root@mall gpmall-xiangmubao-danji]# nohup: ignoring input and appending output to ‘nohup.out’

[root@mall gpmall-xiangmubao-danji]# jobs
[1]   Running          nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[2]   Running          nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[3]-   Running          nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[4]+  Running          nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[root@mall gpmall-xiangmubao-danji]# curl http://192.168.1.111/#/home
<!DOCTYPE html><html><head><meta charset=utf-8><title>1+x-示例项目</title><meta name=keywords content=""><meta name=description content=""><meta http-equiv=X-UA-Compatible content="IE=Edge"><meta name=wap-font-scale content=no><link rel="shortcut icon " type=images/x-icon href=/static/images/favicon.ico><link href=/static/css/app.8d4edd335a61c46bf5b6a63444cd855a.css rel=stylesheet></head><body><div id=app></div><script type=text/javascript src=/static/js/manifest.2d17a82764acff8145be.js></script><script type=text/javascript src=/static/js/vendor.4f07d3a235c8a7cd4efe.js></script><script type=text/javascript src=/static/js/app.81180cbb92541cdf912f.js></script></body></html><style>body{
min-width:1250px;}</style>
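判断部署是否成功,关键是curl返回的首页HTML里包含站点标题。这个检查同样可以脚本化(示意脚本,标题文本以上面curl的返回结果为准):

```shell
#!/bin/sh
# 示意脚本:部署后的冒烟检查——从标准输入读取首页 HTML,确认包含站点标题。
home_ok() {
  if grep -q '<title>1+x-示例项目</title>'; then
    echo "home page ok"
  else
    echo "home page broken"
  fi
}
# 用法:curl -s http://192.168.1.111/ | home_ok
```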

 

10.Neutron服务运维(40分)

使用提供的“all-in-one”虚拟机,使用Neutron命令,查询网络服务的列表信息中的“binary”一列,并且查询网络sharednet1详细信息。然后查询网络服务DHCP agent的详细信息。将以上操作命令及结果以文本形式填入答题框

[root@xiandian ~]# neutron agent-list -c binary
+---------------------------+
| binary |
+---------------------------+
| neutron-l3-agent |
| neutron-openvswitch-agent |
| neutron-dhcp-agent |
| neutron-metadata-agent |
+---------------------------+
[root@xiandian ~]# neutron net-list
+--------------------------------------+------------+---------+
| id | name | subnets |
+--------------------------------------+------------+---------+
| bd923693-d9b1-4094-bd5b-22a038c44827 | sharednet1 | |
+--------------------------------------+------------+---------+
[root@xiandian ~]# neutron net-show bd923693-d9b1-4094-bd5b-22a038c44827
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-02-23T04:58:17 |
| description | |
| id | bd923693-d9b1-4094-bd5b-22a038c44827 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | sharednet1 |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | physnet1 |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 20b1ab08ea644670addb52f6d2f2ed61 |
| updated_at | 2017-02-23T04:58:17 |
+---------------------------+--------------------------------------+
[root@xiandian ~]# neutron agent-list
+-----------+----------------+----------+-------------------+-------+-------------------+--------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+-----------+----------------+----------+-------------------+-------+-------------------+--------------+
| 7dd3ea38-c6fc-4a73-a530-8b007afeb778 | L3 agent | xiandian | nova | :-) | True | neutron-l3-agent |
| 8c0781e7-8b3e-4c9f-a8da-0d4cdc570afb | Open vSwitch agent | xiandian | | :-) | True | neutron-openvswitch-agent |
| a3504292-e108-4ad1-ae86-42ca9ccfde78 | DHCP agent | xiandian | nova | :-) | True | neutron-dhcp-agent |
| be17aa73-deba-411a-ac10-fd523079085d | Metadata agent | xiandian | | :-) | True | neutron-metadata-agent |
+-----------+----------------+----------+-------------------+-------+-------------------+--------------+
[root@xiandian ~]# neutron agent-show a3504292-e108-4ad1-ae86-42ca9ccfde78
+---------------------+----------------------------------------------------------+
| Field | Value |
+---------------------+----------------------------------------------------------+
| admin_state_up | True |
| agent_type | DHCP agent |
| alive | True |
| availability_zone | nova |
| binary | neutron-dhcp-agent |
| configurations | { |
| | "subnets": 1, |
| | "dhcp_lease_duration": 86400, |
| | "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq", |
| | "networks": 1, |
| | "log_agent_heartbeats": false, |
| | "ports": 2 |
| | } |
| created_at | 2017-02-23 04:57:05 |
| description | |
| heartbeat_timestamp | 2019-09-28 21:33:06 |
| host | xiandian |
| id | a3504292-e108-4ad1-ae86-42ca9ccfde78 |
| started_at | 2017-02-23 04:57:05 |
| topic | dhcp_agent |
+---------------------+----------------------------------------------------------+
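顺带一提,neutron客户端在较新的OpenStack版本中已被废弃,统一改用openstack客户端。下面的函数只是本题几条命令的等价速查(示意,不会发起任何API调用,<...>为占位符):

```shell
#!/bin/sh
# 示意:neutron 命令到 openstack 统一客户端命令的对照速查。
neutron_to_osc() {
  case "$1" in
    agent-list) echo "openstack network agent list" ;;
    agent-show) echo "openstack network agent show <agent-id>" ;;
    net-list)   echo "openstack network list" ;;
    net-show)   echo "openstack network show <network>" ;;
    *)          echo "unknown" ;;
  esac
}
```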

 

11.Cinder服务运维(40分)

使用提供的“all-in-one”虚拟机,使用Cinder命令,创建一个2 GB的云硬盘extend-demo,并且查看云硬盘信息,创建了一个名为“lvm”的卷类型。通过cinder命令查看现有的卷类型。创建一块带“lvm”标识名为type_test_demo的云硬盘,最后使用命令查看所创建的云硬盘。将以上操作命令及结果以文本形式填入答题框。

[root@xiandian ~]# cinder create --name cinder-volume-demo 2
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-09-28T18:59:13.000000 |
| description | None |
| encrypted | False |
| id | 5df3295d-3c92-41f5-95af-c371a3e8b47f |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | cinder-volume-demo |
| os-vol-host-attr:host | xiandian@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 0ab2dbde4f754b699e22461426cd0774 |
| replication_status | disabled |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | 2019-09-28T18:59:14.000000 |
| user_id | 53a1cf0ad2924532aa4b7b0750dec282 |
| volume_type | None |
+--------------------------------+--------------------------------------+
[root@xiandian ~]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name               | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 5df3295d-3c92-41f5-95af-c371a3e8b47f | available | cinder-volume-demo | 2    | -           | false    |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
[root@xiandian ~]# cinder type-create lvm
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm | - | True |
+--------------------------------------+------+-------------+-----------+
[root@xiandian ~]# cinder type-list
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| b247520f-84dd-41cb-a706-4437e7320fa8 | lvm | - | True |
+--------------------------------------+------+-------------+-----------+
[root@xiandian ~]# cinder create --name type_test_demo --volume-type lvm 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-09-28T19:15:14.000000 |
| description | None |
| encrypted | False |
| id | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | type_test_demo |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 0ab2dbde4f754b699e22461426cd0774 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | 53a1cf0ad2924532aa4b7b0750dec282 |
| volume_type | lvm |
+--------------------------------+--------------------------------------+
[root@xiandian ~]# cinder show type_test_demo
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-09-28T19:15:14.000000 |
| description | None |
| encrypted | False |
| id | 12d09316-1c9f-43e1-93bd-24e54cbf7ef6 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | type_test_demo |
| os-vol-host-attr:host | xiandian@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 0ab2dbde4f754b699e22461426cd0774 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2019-09-28T19:15:15.000000 |
| user_id | 53a1cf0ad2924532aa4b7b0750dec282 |
| volume_type | lvm |
+--------------------------------+--------------------------------------+
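In the transcripts above, a volume first reports `creating` right after `cinder create` and flips to `available` shortly afterwards. A small polling helper saves re-running `cinder show` by hand. This is only a sketch: the cinder client is stubbed below so the loop logic runs anywhere; on the all-in-one node you would delete the stub and let the real CLI be called.

```shell
# Stub standing in for the real cinder client (illustration only; delete
# this function on the all-in-one node so the real CLI is used).
cinder() { echo "| status | available |"; }

# Poll a volume until it reports "available", up to ~60 s.
wait_for_volume() {
  name=$1
  for attempt in $(seq 1 30); do
    # Pull the value out of the "| status | ... |" row of "cinder show".
    status=$(cinder show "$name" | awk '/ status / {print $4}')
    if [ "$status" = "available" ]; then
      echo "$name is available"
      return 0
    fi
    sleep 2
  done
  echo "$name did not become available" >&2
  return 1
}

wait_for_volume cinder-volume-demo
```

The same loop works for type_test_demo or any other volume name.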

 

12. Object Storage Management (40 points)

Using the provided "all-in-one" VM, use openstack commands to create a container named examtest and query it, then upload an aaa.txt file (you may create it yourself) to the container and query it. Submit each command and its returned output in turn as text to the answer box.

[root@xiandian ~]# openstack container create examtest
+---------------------------------------+-----------+------------------------------------+
| account                               | container | x-trans-id                         |  
+---------------------------------------+-----------+------------------------------------+
| AUTH_0ab2dbde4f754b699e22461426cd0774 | examtest  | tx9e7b54f8042d4a6ca5ccf-005a93daf3 |
+---------------------------------------+-----------+------------------------------------+
[root@xiandian ~]# openstack container list
+----------+
| Name     |
+----------+
| examtest |
+----------+
[root@xiandian ~]# openstack object create examtest aaa.txt
+---------+-----------+----------------------------------+
| object  | container | etag                             |
+---------+-----------+----------------------------------+
| aaa.txt | examtest  | 45226aa24b72ce0ccc4ff73eefe2e26f |
+---------+-----------+----------------------------------+
[root@xiandian ~]# openstack object list examtest
+---------+
| Name    |
+---------+
| aaa.txt |
+---------+
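The etag column printed by `openstack object create` is the MD5 digest of the uploaded file (Swift computes it server-side for plain objects), so a local md5sum gives a quick integrity check. A minimal sketch:

```shell
# Recreate the sample file and compute its MD5 locally; the value should
# match the etag printed by "openstack object create" above.
echo "some test content" > aaa.txt
local_md5=$(md5sum aaa.txt | awk '{print $1}')
echo "local MD5: $local_md5"
```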

 

13. Docker Installation (40 points)

Using the provided VM and software packages, configure the YUM repositories yourself and install the docker-ce service. When the installation is complete, submit the output of the docker info command as text to the answer box.

  • First upload Docker.tar.gz to the /root directory and unpack it:
[root@xiandian ~]# tar -zvxf Docker.tar.gz
[root@xiandian ~]# vim /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
[docker]
name=docker
baseurl=file:///root/Docker
enabled=1
gpgcheck=0
[root@xiandian ~]# iptables -F
[root@xiandian ~]# iptables -X
[root@xiandian ~]# iptables -Z
[root@xiandian ~]# iptables-save 
# Generated by iptables-save v1.4.21 on Fri May 15 02:00:29 2020
*filter
:INPUT ACCEPT [20:1320]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [11:1092]
COMMIT
# Completed on Fri May 15 02:00:29 2020
[root@xiandian ~]# vim /etc/selinux/config 
SELINUX=disabled
# Note: disable the swap partition:
[root@xiandian ~]# vim /etc/fstab 
#/dev/mapper/centos-swap swap            swap    defaults        0 0
[root@xiandian ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1824          95        1591           8         138        1589
Swap:             0           0           0
# Note: upgrade the system and reboot before configuring packet forwarding, otherwise the two bridge rules may fail with an error:
[root@xiandian ~]# yum upgrade -y
[root@xiandian ~]# reboot
[root@xiandian ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@xiandian ~]# modprobe br_netfilter
[root@xiandian ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
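Note that `modprobe br_netfilter` loads the module only for the current boot; after a reboot, `sysctl -p` would fail on the two net.bridge.* keys until the module is loaded again. A hedged way to persist it on CentOS 7, using systemd's standard modules-load.d path:

```shell
# Persist the br_netfilter module across reboots so the two
# net.bridge.* sysctl keys keep applying (CentOS 7 / systemd path):
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```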
[root@xiandian ~]# yum install -y yum-utils device-mapper-persistent-data
[root@xiandian ~]# yum install -y docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
[root@xiandian ~]# systemctl daemon-reload
[root@xiandian ~]# systemctl restart docker
[root@xiandian ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@xiandian ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.09.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-100765090-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.73MB
 Data Space Total: 107.4GB
 Data Space Available: 24.34GB
 Metadata Space Used: 17.36MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.13GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.164-RHEL7 (2019-08-27)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-1127.8.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.777GiB
Name: xiandian
ID: OUR6:6ERV:3UCH:WJCM:TDLL:5ATV:E7IQ:HLAR:JKQB:OBK2:HZ7G:JC3Q
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
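The two warnings at the end of `docker info` come from the loop-lvm devicemapper setup, which Docker discourages for production. One hedged remedy, assuming the backing xfs filesystem was created with ftype=1, is to switch to the overlay2 driver via /etc/docker/daemon.json; note that existing images and containers are not migrated by the switch.

```shell
# Select the overlay2 storage driver to silence the devicemapper warnings
# (existing images/containers are not migrated automatically).
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
systemctl restart docker
docker info | grep "Storage Driver"
```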

 

14. Deploying a Swarm Cluster (40 points)

Using the provided VM and software packages, install docker-ce, deploy a Swarm cluster, and install the Portainer graphical management tool. After deployment, log in to ip:9000 with a browser to reach the Swarm console. Submit the output of curl swarm-ip:9000 as text to the answer box.

Master:

[root@master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111   master
192.168.1.112   node
[root@master ~]# yum install -y chrony
[root@master ~]# vim /etc/chrony.conf 
# Note: comment out the four server lines above and add the following in a blank area:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
local stratum 10
allow all
[root@master ~]# systemctl restart chronyd 
[root@master ~]# systemctl enable chronyd
[root@master ~]# timedatectl set-ntp true
[root@master ~]# vim /lib/systemd/system/docker.service
# Note: change
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# Note: to
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# ./image.sh
[root@master ~]# docker swarm init --advertise-addr 192.168.1.111
Swarm initialized: current node (vclsb89nhs306kei93iv3rwa5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6d9c93ecv0e1ux4u8z5wj4ybhbkt2iadlnh74omjipyr3dwk4u-euf7iax6ubmta5qbcrbg4t3j4 192.168.1.111:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@master ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6d9c93ecv0e1ux4u8z5wj4ybhbkt2iadlnh74omjipyr3dwk4u-euf7iax6ubmta5qbcrbg4t3j4 192.168.1.111:2377
[root@master ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
vclsb89nhs306kei93iv3rwa5 *   master              Ready               Active              Leader              18.09.6
j98yunqmdkh1ztr7thhbzumcw     node                Ready               Active                                  18.09.6
[root@master ~]# docker volume create portainer_data
portainer_data
[root@master ~]# docker service create --name portainer --publish 9000:9000 \
  --replicas=1 --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=volume,src=portainer_data,dst=/data \
  portainer/portainer -H unix:///var/run/docker.sock

k77m7aydf2idm1x02j60cmwsj
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged

 

Node:

[root@node ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.111   master
192.168.1.112   node
[root@node ~]# yum install -y chrony
[root@node ~]# vim /etc/chrony.conf 
# Note: comment out the four server lines above and add the following in a blank area:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
[root@node ~]# systemctl restart chronyd
[root@node ~]# systemctl enable chronyd
[root@node ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* master                       11   6   177    42    +17us[  +60us] +/-   52ms
[root@node ~]# vim /lib/systemd/system/docker.service
# Note: change
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# Note: to
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
[root@node ~]# systemctl daemon-reload
[root@node ~]# systemctl restart docker
[root@node ~]# docker swarm join --token SWMTKN-1-6d9c93ecv0e1ux4u8z5wj4ybhbkt2iadlnh74omjipyr3dwk4u-euf7iax6ubmta5qbcrbg4t3j4 192.168.1.111:2377
This node joined a swarm as a worker.
[root@master ~]# curl 192.168.1.111:9000
<!DOCTYPE html><html lang="en" ng-app="portainer">
<head>
  <meta charset="utf-8">
  <title>Portainer</title>
  <meta name="description" content="">
  <meta name="author" content="Portainer.io">
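Right after "Service converged", Portainer can still take a few seconds to publish port 9000, so a first curl may hit "connection refused". A short retry loop is more robust. curl is stubbed below purely so the loop is runnable anywhere; drop the stub on the swarm manager to hit the real endpoint.

```shell
# Stub standing in for the real curl (illustration only; remove on the host).
curl() { echo '<!DOCTYPE html><html lang="en" ng-app="portainer">'; }

page=""
for attempt in 1 2 3 4 5; do
  # Break as soon as curl succeeds and returned a non-empty body.
  page=$(curl -s 192.168.1.111:9000) && [ -n "$page" ] && break
  sleep 2
done
echo "$page"
```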

 

15. Docker Harbor Installation (40 points)

Using the provided VM and software packages, deploy the Docker Harbor image registry service. After installation, submit the [Step 4] section of the output of ./install.sh --with-notary --with-clair as text to the answer box.

  • Unpacking Docker.tar.gz while installing docker-ce produces an image.sh script (which automatically pushes images to the local registry):
[root@zookeeper1 ~]# ./image.sh
[root@zookeeper1 ~]# mkdir  -p  /data/ssl
[root@zookeeper1 ~]# cd /data/ssl/
[root@zookeeper1 ssl]# which openssl
/usr/bin/openssl
[root@zookeeper1 ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 2235 -out ca.crt
Generating a 4096 bit RSA private key
...................................................................................................................++
............................................................................................................................................++
writing new private key to 'ca.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN		# Country
State or Province Name (full name) []:Guangxi	# State or province
Locality Name (eg, city) [Default City]:Liuzhou	# City
Organization Name (eg, company) [Default Company Ltd]:lztd	# Company name
Organizational Unit Name (eg, section) []:xxjsxy	# Organizational unit
Common Name (eg, your name or your server's hostname) []:	# Server hostname or domain
Email Address []:		# Email address

[root@zookeeper1 ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout www.yidaoyun.com.key  -out www.yidaoyun.com.csr
Generating a 4096 bit RSA private key
.........................................................++
......++
writing new private key to 'www.yidaoyun.com.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN		# Country
State or Province Name (full name) []:Guangxi	# State or province
Locality Name (eg, city) [Default City]:Liuzhou	# City
Organization Name (eg, company) [Default Company Ltd]:lztd	# Company name
Organizational Unit Name (eg, section) []:xxjsxy	# Organizational unit
Common Name (eg, your name or your server's hostname) []:www.yidaoyun.com	# Server hostname or domain
Email Address []:		# Email address

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@zookeeper1 ssl]# openssl x509 -req -days 2235 -in www.yidaoyun.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out www.yidaoyun.com.crt
Signature ok
subject=/C=CN/ST=Guangxi/L=Liuzhou/O=lztd/OU=xxjsxy/CN=www.yidaoyun.com
Getting CA Private Key
[root@zookeeper1 ssl]# cp -rvf www.yidaoyun.com.crt /etc/pki/ca-trust/source/anchors/
‘www.yidaoyun.com.crt’ -> ‘/etc/pki/ca-trust/source/anchors/www.yidaoyun.com.crt’
[root@zookeeper1 ssl]# update-ca-trust enable
[root@zookeeper1 ssl]# update-ca-trust extract
# Note: upload docker-compose-Linux-x86_64 and rename it to docker-compose:
[root@zookeeper1 ~]# mv docker-compose-Linux-x86_64 /usr/local/bin/
[root@zookeeper1 ~]# mv /usr/local/bin/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
[root@zookeeper1 ~]# chmod +x /usr/local/bin/docker-compose
[root@zookeeper1 ~]# docker-compose --version
docker-compose version 1.26.0-rc4, build d279b7a8
[root@zookeeper1 opt]# tar  -zvxf  harbor-offline-installer-v1.5.3.tgz  -C  /opt/
[root@zookeeper1 opt]# ll
total 1097260
drwxr-xr-x. 8 root root       4096 May 14 08:03 centos
drwx--x--x  4 root root         26 May 19 23:16 containerd
-rw-r--r--. 1 root root 1123583789 May 15 04:26 Docker.tar.gz
drwxr-xr-x  4 root root       4096 May 20 03:55 harbor
[root@zookeeper1 opt]# cd harbor/
[root@zookeeper1 harbor]# ll
total 895708
drwxr-xr-x 3 root root        22 May 20 03:55 common
-rw-r--r-- 1 root root      1185 Sep 12  2018 docker-compose.clair.yml
-rw-r--r-- 1 root root      1725 Sep 12  2018 docker-compose.notary.yml
-rw-r--r-- 1 root root      3596 Sep 12  2018 docker-compose.yml
drwxr-xr-x 3 root root       150 Sep 12  2018 ha
-rw-r--r-- 1 root root      6956 Sep 12  2018 harbor.cfg
-rw-r--r-- 1 root root 915878468 Sep 12  2018 harbor.v1.5.3.tar.gz
-rwxr-xr-x 1 root root      5773 Sep 12  2018 install.sh
-rw-r--r-- 1 root root     10764 Sep 12  2018 LICENSE
-rw-r--r-- 1 root root       482 Sep 12  2018 NOTICE
-rw-r--r-- 1 root root   1247461 Sep 12  2018 open_source_license
-rwxr-xr-x 1 root root     27840 Sep 12  2018 prepare
[root@zookeeper1 harbor]# vim harbor.cfg
# Note: edit the configuration as follows:
hostname = 192.168.1.111
ui_url_protocol = https
ssl_cert = /data/ssl/www.yidaoyun.com.crt
ssl_cert_key = /data/ssl/www.yidaoyun.com.key
harbor_admin_password = 000000
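For repeatability, the five harbor.cfg edits above can be scripted with sed instead of vim. The stand-in file below uses assumed default-like values purely so the commands run as-is; on the real host, point sed at /opt/harbor/harbor.cfg.

```shell
# Stand-in harbor.cfg with assumed default-like values (illustration only).
cat > harbor.cfg <<'EOF'
hostname = reg.mydomain.com
ui_url_protocol = http
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
harbor_admin_password = Harbor12345
EOF

# Apply the same five changes made interactively in vim above.
sed -i \
  -e 's|^hostname = .*|hostname = 192.168.1.111|' \
  -e 's|^ui_url_protocol = .*|ui_url_protocol = https|' \
  -e 's|^ssl_cert = .*|ssl_cert = /data/ssl/www.yidaoyun.com.crt|' \
  -e 's|^ssl_cert_key = .*|ssl_cert_key = /data/ssl/www.yidaoyun.com.key|' \
  -e 's|^harbor_admin_password = .*|harbor_admin_password = 000000|' \
  harbor.cfg

grep -E '^(hostname|ui_url_protocol|ssl_cert|harbor_admin_password)' harbor.cfg
```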
[root@zookeeper1 harbor]# ./prepare 
Generated and saved secret to file: /data/secretkey
Generated configuration file: ./common/config/nginx/nginx.conf
Generated configuration file: ./common/config/adminserver/env
Generated configuration file: ./common/config/ui/env
Generated configuration file: ./common/config/registry/config.yml
Generated configuration file: ./common/config/db/env
Generated configuration file: ./common/config/jobservice/env
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/log/logrotate.conf
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/ui/app.conf
Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
The configuration files are ready, please use docker-compose to start the service.
[root@zookeeper1 harbor]# ./install.sh --with-notary --with-clair
[Step 0]: checking installation environment ...
Note: docker version: 18.09.6
Note: docker-compose version: 1.26.0

[Step 1]: loading Harbor images ...
dba693fc2701: Loading layer  133.4MB/133.4MB
5773887c4c41: Loading layer  30.09MB/30.09MB
6fc2abbcae42: Loading layer  15.37MB/15.37MB
d85f176a11ec: Loading layer  15.37MB/15.37MB
Loaded image: vmware/harbor-adminserver:v1.5.3
462b14c85230: Loading layer  410.1MB/410.1MB
c2e0c8cb2903: Loading layer  9.216kB/9.216kB
11bdb24cded2: Loading layer  9.216kB/9.216kB
5d8f974b49ef: Loading layer   7.68kB/7.68kB
ee04f13f4147: Loading layer  1.536kB/1.536kB
799db4dfe41a: Loading layer  11.78kB/11.78kB
f7d813585bdd: Loading layer   2.56kB/2.56kB
6300bbdbd7ab: Loading layer  3.072kB/3.072kB
Loaded image: vmware/harbor-db:v1.5.3
1d7516778a05: Loading layer  30.09MB/30.09MB
f7ec8d1b47d0: Loading layer  20.91MB/20.91MB
22b0ad749c21: Loading layer  20.91MB/20.91MB
Loaded image: vmware/harbor-jobservice:v1.5.3
2d449d67c05a: Loading layer  89.58MB/89.58MB
0bfd4e706575: Loading layer  3.072kB/3.072kB
6100e173c230: Loading layer   59.9kB/59.9kB
86fe093d1358: Loading layer  61.95kB/61.95kB
Loaded image: vmware/redis-photon:v1.5.3
Loaded image: photon:1.0
3bf3086a6569: Loading layer  30.09MB/30.09MB
641d0f77d675: Loading layer  10.95MB/10.95MB
89efbaabea87: Loading layer   17.3MB/17.3MB
1276e51f4dc2: Loading layer  15.87kB/15.87kB
49e187d04e78: Loading layer  3.072kB/3.072kB
e62fbfea411d: Loading layer  28.24MB/28.24MB
Loaded image: vmware/notary-signer-photon:v0.5.1-v1.5.3
Loaded image: vmware/mariadb-photon:v1.5.3
201f6ade61d8: Loading layer  102.5MB/102.5MB
81221fbb5879: Loading layer  6.656kB/6.656kB
2268e3c9e521: Loading layer  2.048kB/2.048kB
9fca06f4b193: Loading layer   7.68kB/7.68kB
Loaded image: vmware/postgresql-photon:v1.5.3
11d6e8a232c9: Loading layer  30.09MB/30.09MB
42650b04d53d: Loading layer  24.41MB/24.41MB
a1cd8af19e29: Loading layer  7.168kB/7.168kB
4b1cda90ba19: Loading layer  10.56MB/10.56MB
1351f0f3006a: Loading layer   24.4MB/24.4MB
Loaded image: vmware/harbor-ui:v1.5.3
e335f4c3af7d: Loading layer  79.93MB/79.93MB
2aea487bc2c4: Loading layer  3.584kB/3.584kB
d2efec3de68b: Loading layer  3.072kB/3.072kB
d0d71a5ce1dd: Loading layer  4.096kB/4.096kB
19930367abf0: Loading layer  3.584kB/3.584kB
03e5b7640db5: Loading layer  9.728kB/9.728kB
Loaded image: vmware/harbor-log:v1.5.3
5aebe8cc938c: Loading layer  11.97MB/11.97MB
Loaded image: vmware/nginx-photon:v1.5.3
ede6a57cbd7e: Loading layer  30.09MB/30.09MB
4d6dd4fc1d87: Loading layer   2.56kB/2.56kB
c86a69f49f60: Loading layer   2.56kB/2.56kB
0cf6e04c5927: Loading layer  2.048kB/2.048kB
6fbff4fe9739: Loading layer   22.8MB/22.8MB
6f527a618092: Loading layer   22.8MB/22.8MB
Loaded image: vmware/registry-photon:v2.6.2-v1.5.3
e29a8834501b: Loading layer  12.16MB/12.16MB
aaf67f1da2c7: Loading layer   17.3MB/17.3MB
8d5718232133: Loading layer  15.87kB/15.87kB
fc89aca1dd12: Loading layer  3.072kB/3.072kB
076eb5a76f6d: Loading layer  29.46MB/29.46MB
Loaded image: vmware/notary-server-photon:v0.5.1-v1.5.3
454c81edbd3b: Loading layer  135.2MB/135.2MB
e99db1275091: Loading layer  395.4MB/395.4MB
051e4ee23882: Loading layer  9.216kB/9.216kB
6cca4437b6f6: Loading layer  9.216kB/9.216kB
1d48fc08c8bc: Loading layer   7.68kB/7.68kB
0419724fd942: Loading layer  1.536kB/1.536kB
543c0c1ee18d: Loading layer  655.2MB/655.2MB
4190aa7e89b8: Loading layer  103.9kB/103.9kB
Loaded image: vmware/harbor-migrator:v1.5.0
45878c64fc3c: Loading layer  165.3MB/165.3MB
fc3d407ce98f: Loading layer  10.93MB/10.93MB
d7a0785bb902: Loading layer  2.048kB/2.048kB
a17e0f23bc84: Loading layer  48.13kB/48.13kB
57c7181f2336: Loading layer  10.97MB/10.97MB
Loaded image: vmware/clair-photon:v2.0.5-v1.5.3

[Step 2]: preparing environment ...
Clearing the configuration file: ./common/config/adminserver/env
Clearing the configuration file: ./common/config/ui/env
Clearing the configuration file: ./common/config/ui/app.conf
Clearing the configuration file: ./common/config/ui/private_key.pem
Clearing the configuration file: ./common/config/db/env
Clearing the configuration file: ./common/config/jobservice/env
Clearing the configuration file: ./common/config/jobservice/config.yml
Clearing the configuration file: ./common/config/registry/config.yml
Clearing the configuration file: ./common/config/registry/root.crt
Clearing the configuration file: ./common/config/nginx/cert/www.yidaoyun.com.crt
Clearing the configuration file: ./common/config/nginx/cert/www.yidaoyun.com.key
Clearing the configuration file: ./common/config/nginx/nginx.conf
Clearing the configuration file: ./common/config/log/logrotate.conf
loaded secret from file: /data/secretkey
Generated configuration file: ./common/config/nginx/nginx.conf
Generated configuration file: ./common/config/adminserver/env
Generated configuration file: ./common/config/ui/env
Generated configuration file: ./common/config/registry/config.yml
Generated configuration file: ./common/config/db/env
Generated configuration file: ./common/config/jobservice/env
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/log/logrotate.conf
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/ui/app.conf
Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
Copying sql file for notary DB
Generated certificate, key file: ./cert_tmp/notary-signer-ca.key, cert file: ./cert_tmp/notary-signer-ca.crt
Generated certificate, key file: ./cert_tmp/notary-signer.key, cert file: ./cert_tmp/notary-signer.crt
Copying certs for notary signer
Copying notary signer configuration file
Generated configuration file: ./common/config/notary/server-config.json
Copying nginx configuration file for notary
Generated configuration file: ./common/config/nginx/conf.d/notary.server.conf
Generated and saved secret to file: /data/defaultalias
Generated configuration file: ./common/config/notary/signer_env
Generated configuration file: ./common/config/clair/postgres_env
Generated configuration file: ./common/config/clair/config.yaml
Generated configuration file: ./common/config/clair/clair_env
The configuration files are ready, please use docker-compose to start the service.

[Step 3]: checking existing instance of Harbor ...

[Step 4]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating network "harbor_harbor-clair" with the default driver
Creating network "harbor_harbor-notary" with the default driver
Creating network "harbor_notary-mdb" with the default driver
Creating network "harbor_notary-sig" with the default driver
Creating harbor-log ... done
Creating redis              ... done
Creating clair-db           ... done
Creating notary-db          ... done
Creating harbor-db          ... done
Creating registry           ... done
Creating harbor-adminserver ... done
Creating notary-signer      ... done
Creating clair              ... done
Creating harbor-ui          ... done
Creating notary-server      ... done
Creating nginx              ... done
Creating harbor-jobservice  ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at https://192.168.1.111. 
For more details, please visit https://github.com/vmware/harbor .
