Preface:

One of the most important parts of virtual machine technology is network configuration. After all, the whole point of building virtual machines is to use and manage them, right? If the network doesn't work, there is no way to do either.

Before diving in, two concepts need a quick introduction; they run through all of KVM virtualization.

First, a virtual machine has to be installed inside a host environment. The physical machine is referred to as the host, and the virtual machine as the guest.

The host (referred to simply as the host below) is carved up by the libvirtd service, according to its hardware resources such as CPU core count, disk space, and memory size, into a number of guest machines, i.e. virtual machines. Those guests are then managed and configured through the management interfaces that libvirt provides. Managing KVM virtual machines generally means starting and stopping them, scaling resources up or down, allocating resources, configuring the network, cloning, and building template machines, as sketched below.
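
As a quick illustration, these are typical virsh commands for such management tasks (a representative sample; the domain name demo is just a placeholder):

virsh list --all                 # list all defined guests and their state
virsh start demo                 # power on a guest
virsh shutdown demo              # gracefully stop a guest
virsh setvcpus demo 4 --config   # scale vCPUs (takes effect on next boot)
virsh edit demo                  # edit the guest's XML definition
virt-clone --original demo --name demo2 --auto-clone   # clone a guest (guest must be shut off)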

Because host and guest are so tightly coupled, network configuration is done on the host as well. After all, a cook can only prepare dishes from the ingredients you supply; you can't expect a cook to conjure a meal out of thin air.

KVM network models:

I.

NAT network mode

First, let's look at the network configuration of a host right after the KVM environment has been installed:

[root@slave1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.17/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:a5:21:b4:7d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff

Clearly I have two services installed on this server: a Docker environment and a KVM environment. The docker0 NIC is the virtual NIC dedicated to the Docker service, so we will leave it aside here. virbr0 and virbr0-nic are the two NICs created by the libvirt service of the KVM environment. The host itself has only the physical NIC ens33 plus the loopback interface lo.

Note that virbr0 is a purely virtual NIC: you will not find a configuration file for it in the usual Linux NIC configuration directory. It is managed entirely by the libvirtd service, and its main role is to provide NAT networking to guest machines. Note: NAT, not a bridge onto the physical network.
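
You can see this for yourself by dumping the definition of libvirt's default network; on a stock install it looks roughly like this (the UUID will differ, and the MAC matches the virbr0 address shown above):

[root@slave1 ~]# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:f3:93:e0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>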

If you do not specify a network mode when installing a virtual machine, that is, network is left at the default, the guest will use the virbr0 NIC, and all guest network traffic passes through it. This causes a fairly serious limitation: with this kind of network, only the host and its guests can form a network. Other servers on the host's subnet cannot reach the guests (say the host is server A and servers B, C, ... sit on the same subnet; B, C, ... cannot reach the virtual machines inside A), simply because the guests sit behind NAT. The masquerading is implemented with iptables rules that libvirtd installs, as shown below.
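
The rules look roughly like this (output trimmed; the exact rule set varies by libvirt version). Guests' outbound traffic is masqueraded behind the host's own address, which is exactly why nobody outside can initiate a connection back in:

[root@slave1 ~]# iptables -t nat -S POSTROUTING | grep 192.168.122
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE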

[root@master kvm-1.5.3]# virt-install --help |grep net
  --pxe                 Boot from the network using the PXE protocol
  -w NETWORK, --network NETWORK
                        Configure a guest network interface. Ex:
                        --network bridge=mybr0
                        --network network=my_libvirt_virtual_net
                        --network network=mynet,model=virtio,mac=00:11...
                        --network none
                        --network help

For example, installing a guest like this:

virt-install --virt-type kvm --name centos --ram 1024 --disk /opt/CentOS-7-x86_64-GenericCloud-1905.qcow2,format=qcow2 --network network=default --graphics vnc,listen=0.0.0.0 --vncport=5922 --noautoconsole --os-type=linux --os-variant=centos7.0 --boot hd

default here means the NAT network mode.
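
Before installing, you can check that this default network exists and is active (typical output on a fresh install; formatting varies slightly with the libvirt version):

[root@master kvm-1.5.3]# virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes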

In this example we will start a NAT-mode virtual machine from an XML configuration file, with the following content:

[root@master opt]# cat ~/linux_mini.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit linux_mini
or other application using the libvirt API.
-->

<domain type='kvm'>
  <name>newer</name>
  <uuid>187ca777-a965-4777-8e95-c1f0cfe2a363</uuid>
  <memory unit='KiB'>548576</memory>
  <currentMemory unit='KiB'>548576</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/opt/newer.linux.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:89:52:23'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5992' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </rng>
  </devices>
</domain>
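
With the XML file in place, defining and starting the guest is straightforward (the domain name newer comes from the <name> element above):

[root@master opt]# virsh define ~/linux_mini.xml
[root@master opt]# virsh start newer
[root@master opt]# virsh list --all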

After connecting to the guest with a VNC client and checking its IP address (screenshot omitted), the guest turns out to have 192.168.122.19.

The address 192.168.122.19 cannot be reached with an SSH tool such as Xshell from other machines: the default NAT network is private to the host, libvirt only masquerades the guests' outbound traffic, and the physical LAN has no return route into 192.168.122.0/24.

This virtual machine is installed on the host 192.168.217.16, so from another host, 192.168.217.17, there is no way to ping the guest at 192.168.122.19; the only way in from outside is the VNC endpoint 192.168.217.16:5992.
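
Incidentally, the VNC console is not the only way to learn a NAT guest's address; libvirt can report the DHCP lease directly, assuming a libvirt version new enough to have this subcommand:

[root@master opt]# virsh net-dhcp-leases default
(the guest's MAC 52:54:00:89:52:23 from the XML above should map to 192.168.122.19 here)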


II.

Bridge network mode

Bridge mode requires the host to be configured with a virtual bridge NIC that is bound to one of the host's real physical NICs. When installing a guest, you simply tell it to use that bridge NIC.

For example, if the host's IP address is 192.168.217.17 and the real physical NIC is named ens33, the configuration should look like this:

[root@slave1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BRIDGE="br0"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="d4876b9f-42d8-446c-b0ae-546e812bc954"
DEVICE="ens33"
ONBOOT="yes"
[root@slave1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0 
TYPE="Bridge"
NAME="br0"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
UUID="a276650e-af08-4270-8bac-08aa6197f2bc"
DEVICE="br0"
ONBOOT="yes"
PREFIX="24"
IPADDR=192.168.217.17
NETMASK=255.255.255.0
GATEWAY=192.168.217.2
DNS1=61.128.114.166
DNS2=8.8.8.8

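On CentOS 7 with the legacy network service, this change is applied by restarting networking (or by rebooting; if NetworkManager owns the interfaces, reload and bring up the connections with nmcli instead):

[root@slave1 ~]# systemctl restart network
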
After the restart, the NICs now look like this:

[root@slave1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:a5:21:b4:7d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.17/24 brd 192.168.217.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link 
       valid_lft forever preferred_lft forever
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
    link/ether fe:54:00:80:06:c6 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe80:6c6/64 scope link 
       valid_lft forever preferred_lft forever

For the guest I used a Linux KVM template machine I built myself, with the disk file test.img. The install command:

virt-install --name test01 --os-variant=linux --ram 2048 --vcpus=2 --disk path=/opt/test.img --network=bridge:br0 --vnc --vncport=5923 --vnclisten=0.0.0.0 --force --import --autostart

The parameters worth highlighting in the command above (a quick verification follows the list):

--network=bridge:br0

--vncport=5923 --vnclisten=0.0.0.0

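Once the guest is running, you can verify that its tap device (vnet0 in the ip a output above) has been enslaved to the bridge. A quick check, assuming bridge-utils is installed (the bridge id is derived from the br0 MAC shown earlier):

[root@slave1 ~]# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29e99e89       no              ens33
                                                        vnet0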

After installation completes, the generated KVM configuration file looks like this:

[root@slave1 ~]# cat /etc/libvirt/qemu/test01.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit test01
or other application using the libvirt API.
-->

<domain type='kvm'>
  <name>test01</name>
  <uuid>91ee89da-55d7-4222-b1be-1217bc6b43d3</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>SandyBridge</model>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/opt/test.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:80:06:c6'/>
      <source bridge='br0'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5923' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Now a VNC client can connect to the KVM guest through the host IP 192.168.217.17 on port 5923. Checking the guest's IP (screenshot omitted) shows that it is on the same subnet as the host.

That means tools like Xshell can now connect to the KVM guest directly; after connecting over SSH, the guest's IP is confirmed to be 192.168.217.129.
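
From any machine on the 192.168.217.0/24 subnet, a plain SSH session to the guest now works:

ssh root@192.168.217.129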

With the default NAT mode, SSH tools like Xshell cannot connect from other machines; you can only reach the guest from the host itself or through the VNC console. Bridge mode, in effect, turns the guest into an independent host on the network.

Summary

A KVM virtual machine needs an appropriately chosen network mode. Compared with NAT, bridge mode is more widely applicable and more flexible. NAT is the default mode after KVM is installed: it supports host-guest communication and lets guests reach the Internet, but it does not allow the outside world to reach the guests. From that angle, NAT is naturally somewhat more secure than bridge mode.
