DPDK Series Part 5: An Introduction to qemu-kvm Networking
1. Foreword
qemu is a piece of virtualization software that can emulate the full hardware of a PC in user space, while KVM extends the Linux kernel with new CPU execution modes to make virtualization more efficient. Combined as qemu-kvm, the two provide a highly efficient Linux virtualization solution.
Virtual networking is one of the most important pieces of virtual machine infrastructure that qemu provides; this article gives a brief analysis of qemu-kvm's network virtualization technology.
Reposted from https://blog.csdn.net/cloudvtech
2. Overview of qemu-kvm networking
qemu-kvm's networking scheme has two parts: the emulated network interface that the guest sees inside the virtual machine (specified with -device in qemu), and the host-side backend that provides the packet path for that interface (specified with -netdev in qemu).
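The two halves are tied together by a shared id: each -device names the -netdev it attaches to. A minimal illustrative invocation (the id net0 and the image path guest.img are placeholders, not taken from the examples in this article):
# 'net0' binds the guest-visible virtio NIC to the user-mode backend
qemu-system-x86_64 -m 1024 -enable-kvm \
    -drive file=guest.img,format=qcow2 \
    -netdev user,id=net0 \
    -device virtio-net-pci,netdev=net0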
2.1 qemu-kvm device
The emulated network interfaces of qemu-kvm are configured as follows:
-device driver[,prop[=value][,...]]
add device (based on driver)
prop=value,... sets driver properties
use '-device help' to print all possible drivers
use '-device driver,help' to print all possible properties
The emulated network interfaces supported by qemu-kvm are listed below:
/root/qemu-2.11.0/x86_64-softmmu/qemu-system-x86_64 -device ?
…
Network devices:
name "e1000", bus PCI, alias "e1000-82540em", desc "Intel Gigabit Ethernet"
name "e1000-82544gc", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000-82545em", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000e", bus PCI, desc "Intel 82574L GbE Controller"
name "i82550", bus PCI, desc "Intel i82550 Ethernet"
name "i82551", bus PCI, desc "Intel i82551 Ethernet"
name "i82557a", bus PCI, desc "Intel i82557A Ethernet"
name "i82557b", bus PCI, desc "Intel i82557B Ethernet"
name "i82557c", bus PCI, desc "Intel i82557C Ethernet"
name "i82558a", bus PCI, desc "Intel i82558A Ethernet"
name "i82558b", bus PCI, desc "Intel i82558B Ethernet"
name "i82559a", bus PCI, desc "Intel i82559A Ethernet"
name "i82559b", bus PCI, desc "Intel i82559B Ethernet"
name "i82559c", bus PCI, desc "Intel i82559C Ethernet"
name "i82559er", bus PCI, desc "Intel i82559ER Ethernet"
name "i82562", bus PCI, desc "Intel i82562 Ethernet"
name "i82801", bus PCI, desc "Intel i82801 Ethernet"
name "ne2k_isa", bus ISA
name "ne2k_pci", bus PCI
name "pcnet", bus PCI
name "rocker", bus PCI, desc "Rocker Switch"
name "rtl8139", bus PCI
name "usb-bt-dongle", bus usb-bus
name "usb-net", bus usb-bus
name "virtio-net-device", bus virtio-bus
name "virtio-net-pci", bus PCI, alias "virtio-net"
name "vmxnet3", bus PCI, desc "VMWare Paravirtualized Ethernet v3"
…
2.2 qemu-kvm netdev
The host-side backends supported by qemu-kvm are listed below (including the legacy -net configuration style):
-netdev user,id=str[,ipv4[=on|off]][,net=addr[/mask]][,host=addr]
[,ipv6[=on|off]][,ipv6-net=addr[/int]][,ipv6-host=addr]
[,restrict=on|off][,hostname=host][,dhcpstart=addr]
[,dns=addr][,ipv6-dns=addr][,dnssearch=domain][,tftp=dir]
[,bootfile=f][,hostfwd=rule][,guestfwd=rule][,smb=dir[,smbserver=addr]]
configure a user mode network backend with ID 'str',
its DHCP server and optional services
-netdev tap,id=str[,fd=h][,fds=x:y:...:z][,ifname=name][,script=file][,downscript=dfile]
[,br=bridge][,helper=helper][,sndbuf=nbytes][,vnet_hdr=on|off][,vhost=on|off]
[,vhostfd=h][,vhostfds=x:y:...:z][,vhostforce=on|off][,queues=n]
[,poll-us=n]
configure a host TAP network backend with ID 'str'
connected to a bridge (default=br0)
use network scripts 'file' (default=/etc/qemu-ifup)
to configure it and 'dfile' (default=/etc/qemu-ifdown)
to deconfigure it
use '[down]script=no' to disable script execution
use network helper 'helper' (default=/usr/local/libexec/qemu-bridge-helper) to
configure it
use 'fd=h' to connect to an already opened TAP interface
use 'fds=x:y:...:z' to connect to already opened multiqueue capable TAP interfaces
use 'sndbuf=nbytes' to limit the size of the send buffer (the
default is disabled 'sndbuf=0' to enable flow control set 'sndbuf=1048576')
use vnet_hdr=off to avoid enabling the IFF_VNET_HDR tap flag
use vnet_hdr=on to make the lack of IFF_VNET_HDR support an error condition
use vhost=on to enable experimental in kernel accelerator
(only has effect for virtio guests which use MSIX)
use vhostforce=on to force vhost on for non-MSIX virtio guests
use 'vhostfd=h' to connect to an already opened vhost net device
use 'vhostfds=x:y:...:z' to connect to multiple already opened vhost net devices
use 'queues=n' to specify the number of queues to be created for multiqueue TAP
use 'poll-us=n' to specify the maximum number of microseconds that could be
spent on busy polling for vhost net
-netdev bridge,id=str[,br=bridge][,helper=helper]
configure a host TAP network backend with ID 'str' that is
connected to a bridge (default=br0)
using the program 'helper' (default=/usr/local/libexec/qemu-bridge-helper)
-netdev l2tpv3,id=str,src=srcaddr,dst=dstaddr[,srcport=srcport][,dstport=dstport]
[,rxsession=rxsession],txsession=txsession[,ipv6=on/off][,udp=on/off]
[,cookie64=on/off][,counter][,pincounter][,txcookie=txcookie]
[,rxcookie=rxcookie][,offset=offset]
configure a network backend with ID 'str' connected to
an Ethernet over L2TPv3 pseudowire.
Linux kernel 3.3+ as well as most routers can talk
L2TPv3. This transport allows connecting a VM to a VM,
VM to a router and even VM to Host. It is a nearly-universal
standard (RFC3931). Note - this implementation uses static
pre-configured tunnels (same as the Linux kernel).
use 'src=' to specify source address
use 'dst=' to specify destination address
use 'udp=on' to specify udp encapsulation
use 'srcport=' to specify source udp port
use 'dstport=' to specify destination udp port
use 'ipv6=on' to force v6
L2TPv3 uses cookies to prevent misconfiguration as
well as a weak security measure
use 'rxcookie=0x012345678' to specify a rxcookie
use 'txcookie=0x012345678' to specify a txcookie
use 'cookie64=on' to set cookie size to 64 bit, otherwise 32
use 'counter=off' to force a 'cut-down' L2TPv3 with no counter
use 'pincounter=on' to work around broken counter handling in peer
use 'offset=X' to add an extra offset between header and data
-netdev socket,id=str[,fd=h][,listen=[host]:port][,connect=host:port]
configure a network backend to connect to another network
using a socket connection
-netdev socket,id=str[,fd=h][,mcast=maddr:port[,localaddr=addr]]
configure a network backend to connect to a multicast maddr and port
use 'localaddr=addr' to specify the host address to send packets from
-netdev socket,id=str[,fd=h][,udp=host:port][,localaddr=host:port]
configure a network backend to connect to another network
using an UDP tunnel
-netdev vhost-user,id=str,chardev=dev[,vhostforce=on|off]
configure a vhost-user network, backed by a chardev 'dev'
-netdev hubport,id=str,hubid=n
configure a hub port on QEMU VLAN 'n'
-net nic[,vlan=n][,macaddr=mac][,model=type][,name=str][,addr=str][,vectors=v]
old way to create a new NIC and connect it to VLAN 'n'
(use the '-device devtype,netdev=str' option if possible instead)
-net dump[,vlan=n][,file=f][,len=n]
dump traffic on vlan 'n' to file 'f' (max n bytes per packet)
-net none use it alone to have zero network devices. If no -net option
is provided, the default is '-net nic -net user'
-net [user|tap|bridge|socket][,vlan=n][,option][,option][,...]
old way to initialize a host network interface
(use the -netdev option if possible instead)
3. qemu-kvm backend configuration examples
3.1 User-mode backend
In this mode qemu emulates a complete protocol stack in user space as the backend. The guest's traffic is NATed by qemu: the guest and the host can reach each other, but external hosts cannot initiate connections into the guest unless explicit port forwarding (hostfwd) is configured:
/usr/libexec/qemu-kvm \
    -name centos -smp 2,cores=2 -m 1024 \
    -drive file=/data/qcow2.img,media=disk,format=qcow2,if=none,id=systemdisk \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x04,drive=systemdisk,id=systemdiskvirtio,bootindex=0 \
    -netdev user,id=user1,hostfwd=tcp::5555-:22 \
    -device e1000,netdev=user1 \
    -boot c
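The hostfwd rule above forwards host port 5555 to port 22 in the guest, so once the guest's sshd is up it can be reached from the host like this (the account name is whatever exists inside the guest):
# log in to the guest through the user-mode NAT
ssh -p 5555 root@127.0.0.1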
3.2 TAP-mode backend
A tap device is created as the backend; this tap device can then be connected to a bridge, an fd, vhost, or other devices and modules to build a packet path for the VM's network interface:
/usr/libexec/qemu-kvm \
    -name centos \
    -smp 2,cores=2 -m 1024 \
    -drive file=/home/kvmdisk/qcow2.img,media=disk,format=qcow2,if=none,id=systemdisk \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x04,drive=systemdisk,id=systemdiskvirtio,bootindex=0 \
    -netdev tap,id=tap1,script=/script/ovs-ifup,downscript=/script/ovs-ifdown \
    -device e1000,netdev=tap1,mac=`generate_mac` \
    -boot c
Here "script=/script/ovs-ifup,downscript=/script/ovs-ifdown" specifies the scripts that set up and tear down the tap device, and `generate_mac` is a helper that generates a MAC address.
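The contents of those scripts and of generate_mac are not shown in the original; a plausible minimal sketch, assuming the tap is to be attached to an Open vSwitch bridge named br0 (the bridge name and the MAC prefix are assumptions):
#!/bin/sh
# /script/ovs-ifup (sketch): qemu passes the tap interface name as $1
ip link set "$1" up
ovs-vsctl add-port br0 "$1"

#!/bin/sh
# /script/ovs-ifdown (sketch): detach the tap and bring it down
ovs-vsctl del-port br0 "$1"
ip link set "$1" down

# generate_mac (sketch): random MAC under the 52:54:00 prefix
# conventionally used for KVM guests
generate_mac() {
    printf '52:54:00:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
}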
3.3 vhost-user backend
With this backend the data path bypasses the kernel, which both improves the performance of east-west traffic and makes it easier to integrate data-plane acceleration technologies such as hugepages and DPDK with virtualized networking:
/root/qemu-2.11.0/x86_64-softmmu/qemu-system-x86_64 -m 1024 -smp 1 -hda ./centos.qcow2 -boot c -enable-kvm -no-reboot -net none -nographic \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/myportnameone \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc \
-vnc :1 \
-netdev tap,id=tapnet,ifname=vnet0,script=no -device rtl8139,netdev=tapnet,mac=52:54:00:12:34:58
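The chardev socket path above (/usr/local/var/run/openvswitch/myportnameone) is the kind of socket that OVS-DPDK creates for a vhost-user port; a sketch of the switch-side setup under that assumption (the bridge name br0 is also an assumption):
# create a DPDK (netdev datapath) bridge and a vhost-user port on it;
# OVS then listens on the socket in its run dir for qemu to connect
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 myportnameone -- set Interface myportnameone type=dpdkvhostuser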
A condensed form of the same configuration (note that the vhost-user netdev takes a chardev bound to the unix socket rather than a file= option, and guest memory must come from a shared, hugepage-backed memory object):
qemu-system-x86_64 -m 1024 \
    -object memory-backend-file,id=mem,size=1G,mem-path=/hugetlbfs,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=chr0,path=/path/to/socket \
    -netdev vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0
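Both vhost-user examples assume hugepages have been reserved and mounted on the host; a minimal sketch (the page count is illustrative):
# reserve 2MB hugepages and mount hugetlbfs at the path used above
echo 1024 > /proc/sys/vm/nr_hugepages
mkdir -p /hugetlbfs
mount -t hugetlbfs hugetlbfs /hugetlbfs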