Kubernetes Series, Part 12: An Overview of multus-cni, a Multi-NIC Solution for Pods
1. Introduction
In production, isolating networks with different functions is a necessary measure for both security and performance, e.g. separating the management, control, and data networks (and, where required, separating the data-in and data-out sides of the data network). Such isolation is easy to achieve on physical machines and VMs, but for pods it runs into limitations when using a container platform such as Kubernetes: a Kubernetes pod still does not support multiple network interfaces by default, even though the demand for multi-NIC pods in the industry is strong.
Against this background, Intel's multus-cni builds on CNI to satisfy this need to a considerable extent.
Source: https://blog.csdn.net/cloudvtech
2. Overview of multus-cni
multus can provide multiple network interfaces for a pod running on Kubernetes: it combines several CNI plugins to configure different kinds of networks for the same pod. Recent versions of multus also support Kubernetes CRDs, so pods with different roles can be given different single- or multi-network configurations, which opens up much more room for networking solutions on Kubernetes.
multus supports almost every CNI plugin:
- Plugins from the CNI project (e.g. Flannel, DHCP, Macvlan)
- Third-party plugins (e.g. Calico, Weave, Cilium, Contiv)
- Other networking options (e.g. SR-IOV, SR-IOV with DPDK, OVS-DPDK, and VPP)
multus uses the concept of "delegates" to chain several CNI plugins together, designating one of them as the masterplugin; that plugin provides the pod's primary network, which is the one Kubernetes is aware of.
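The "delegates" idea can be sketched in a few lines of Python (stdlib only; the plugin entries are abbreviated from the full configuration used later in this article):

```python
import json

# Sketch of a multus CNI configuration: "delegates" lists the CNI plugins
# invoked for each pod, and exactly one entry carries "masterplugin": true --
# that plugin provides the primary interface whose IP Kubernetes reports.
multus_conf = {
    "name": "multus-demo",
    "type": "multus",
    "delegates": [
        {"type": "macvlan", "master": "ens33"},     # secondary interface
        {"type": "flannel", "masterplugin": True},  # primary pod network
    ],
}

masters = [d for d in multus_conf["delegates"] if d.get("masterplugin")]
assert len(masters) == 1, "multus expects exactly one masterplugin delegate"
print(json.dumps(multus_conf, indent=2))
```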
3. Configuring multus-cni
Test environment versions:
[root@k8s-master /]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
3.1 安装multus-cni
yum install go
git clone https://github.com/Intel-Corp/multus-cni.git
cd multus-cni/
./build
Then copy the resulting binary to /opt/cni/bin on every Kubernetes worker node.
Note: to keep the environment clean, run the following command to remove any existing CNI configuration on the workers:
rm -f /etc/cni/net.d/*
3.2 Creating a service account with sufficient permissions
multus-rbac.yaml
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f multus-rbac.yaml
# Create the pod using the same namespace used by the multus serviceaccount:
# $ kubectl create --namespace kube-system -f multus.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: multus
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - update
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
  - kind: ServiceAccount
    name: multus
    namespace: kube-system
Create the account:
kubectl apply -f multus-rbac.yaml
3.3 Configuring the multus network
multus.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-multus-cfg
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  cni-conf.json: |
    {
      "name": "multus-demo",
      "type": "multus",
      "delegates": [
        {
          "type": "macvlan",
          "master": "ens33",
          "mode": "vepa",
          "isGateway": true,
          "ipMasq": false,
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.166.0/24",
            "rangeStart": "192.168.166.21",
            "rangeEnd": "192.168.166.29",
            "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "192.168.166.2"
          }
        },
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
spec:
  template:
    metadata:
      labels:
        tier: node
        app: multus
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: multus
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.10.0-amd64
          command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
        - name: install-cni
          image: quay.io/coreos/flannel:v0.10.0-amd64
          command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-multus.conf; while true; do sleep 3600; done" ]
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-multus-cfg
This multus network combines two CNI plugins, flannel and macvlan, so pods created from now on should come up with both kinds of network interfaces.
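The host-local IPAM section in cni-conf.json hands out macvlan addresses sequentially between rangeStart and rangeEnd; a quick sketch using only Python's ipaddress module shows how small that pool is:

```python
from ipaddress import ip_address, ip_network

# Values taken from the "ipam" block of cni-conf.json above
subnet = ip_network("192.168.166.0/24")
start = ip_address("192.168.166.21")
end = ip_address("192.168.166.29")

# host-local allocates sequentially from rangeStart to rangeEnd inclusive,
# so at most this many pods per node can hold a macvlan address at once
pool = [ip_address(i) for i in range(int(start), int(end) + 1)]
assert all(ip in subnet for ip in pool)
print(len(pool))  # 9 addresses available
```

With replicas: 2 this pool is plenty, but scaling the deployment well past nine pods would exhaust the macvlan range even though flannel addresses would remain available.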
Create the multus network:
[root@k8s-master multus]# kubectl apply -f multus.yaml
serviceaccount "multus" created
configmap "kube-multus-cfg" created
daemonset.extensions "kube-multus-ds" created
[root@k8s-master multus]# kubectl get pod -n kube-system | grep multus
kube-multus-ds-5p8fs 2/2 Running 0 2m
kube-multus-ds-s5crf 2/2 Running 0 2m
4. Creating Pods
nginx.yaml
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
kubectl apply -f nginx.yaml
You can see that the new pods' primary interface is the flannel one, and the IP Kubernetes knows about is the one assigned by flannel:
[root@k8s-master multus]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-j8z67 1/1 Running 0 1h 10.244.1.2 k8s-node1
nginx-pxlgl 1/1 Running 0 1h 10.244.2.2 k8s-node2
Enter the nginx pod and inspect its network:
root@nginx-j8z67:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:f4:01:02 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a8fd:bff:fe4e:3e1b/64 scope link
valid_lft forever preferred_lft forever
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 96:b8:81:34:a5:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.166.21/24 scope global net0
valid_lft forever preferred_lft forever
inet6 fe80::94b8:81ff:fe34:a5ee/64 scope link
valid_lft forever preferred_lft forever
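The two interfaces, eth0 (flannel) and net0 (macvlan), can also be picked out of output like the above mechanically; a small parsing sketch (the sample text is abridged from the ip a output shown here):

```python
import re

# Abridged `ip a` output from inside the pod above
ip_a = """\
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450
    inet 10.244.1.2/24 scope global eth0
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500
    inet 192.168.166.21/24 scope global net0
"""

# Map each global IPv4 address to the interface that carries it
addrs = dict(re.findall(r"inet (\d+\.\d+\.\d+\.\d+)/\d+ scope global (\w+)", ip_a))
print(addrs)  # {'10.244.1.2': 'eth0', '192.168.166.21': 'net0'}
```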
From another node, access both IPs. First the flannel IP:
[root@k8s-node2 ~]# curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
......
</body>
</html>
Then the macvlan IP:
[root@k8s-node2 ~]# curl 192.168.166.21
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
......
</body>
</html>