Setting up a RabbitMQ cluster on Kubernetes 1.28
1. Environment
1.1 Kubernetes 1.28
1.2 RabbitMQ 3.8
1.3 Namespace: default
1.4 Note: make sure the nodes have enough free memory. I have only two nodes (one master, one worker); the worker originally had 8 GB with about 3-4 GB free, and the cluster simply would not start. After I enlarged the VM's memory and rebooted, the RabbitMQ cluster came up. The reboot may have played a part, but I believe the real cause was insufficient memory.
1.5 The cluster has NFS storage integrated, with a StorageClass named managed-nfs-storage (a quick check follows this list).
1.6 Image: docker pull registry.cn-beijing.aliyuncs.com/dotbalo/rabbitmq:3.8
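Before applying the manifests, it may help to confirm these prerequisites. A minimal sketch, assuming kubectl is pointed at the cluster; managed-nfs-storage is the StorageClass name referenced later in the volumeClaimTemplates:

# Confirm the NFS StorageClass exists
kubectl get storageclass managed-nfs-storage
# Check allocatable memory on the nodes (the cluster would not start for me when the node was low on memory)
kubectl describe nodes | grep -A 7 Allocatable
# Optionally pre-pull the image on every node
docker pull registry.cn-beijing.aliyuncs.com/dotbalo/rabbitmq:3.8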
2. Prepare the YAML files
2.1 01-rabbitmq-configmap.yaml
Note: for the cluster_formation.k8s.host setting below, I initially used kubernetes.default.svc.cluster.local and the nodes could never connect; after switching to kubernetes.default.svc it worked. I am not sure whether this is a quirk of the newer Kubernetes version.
kind: ConfigMap
apiVersion: v1
metadata:
  name: rabbitmq-cluster-config
  namespace: default
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    default_user = admin
    default_pass = 123!@#
    ## Cluster formation. See https://www.rabbitmq.com/cluster-formation.html to learn more.
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    #cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.host = kubernetes.default.svc
    ## Should RabbitMQ node name be computed from the pod's hostname or IP address?
    ## IP addresses are not stable, so using [stable] hostnames is recommended when possible.
    ## Set to "hostname" to use pod hostnames.
    ## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME
    ## environment variable.
    cluster_formation.k8s.address_type = hostname
    ## How often should node cleanup checks run?
    cluster_formation.node_cleanup.interval = 30
    ## Set to false if automatic removal of unknown/absent nodes
    ## is desired. This can be dangerous, see
    ## * https://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup
    ## * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    ## See https://www.rabbitmq.com/ha.html#master-migration-data-locality
    queue_master_locator = min-masters
    ## See https://www.rabbitmq.com/access-control.html#loopback-users
    loopback_users.guest = false
    cluster_formation.randomized_startup_delay_range.min = 0
    cluster_formation.randomized_startup_delay_range.max = 2
    # hostname_suffix: built from the headless service name and the namespace (default here)
    cluster_formation.k8s.hostname_suffix = .rabbitmq-cluster.default.svc.cluster.local
    # memory high watermark
    vm_memory_high_watermark.absolute = 100MB
    # free disk space limit
    disk_free_limit.absolute = 2GB
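Regarding the cluster_formation.k8s.host note above: you can check which service name actually resolves from inside the cluster before settling on a value. A quick sketch, assuming the busybox image can be pulled in your environment:

# Resolve the API server service name used by peer discovery from a throwaway pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc
# If this fails but kubernetes.default.svc.cluster.local resolves, use the longer form in rabbitmq.conf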
2.2 02-rabbitmq-service.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rabbitmq-cluster
  name: rabbitmq-cluster
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: rmqport
      port: 5672
      targetPort: 5672
  selector:
    app: rabbitmq-cluster
---
# Note: this NodePort exposes the management UI (15672). To connect applications to MQ, use the NodePort service below.
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rabbitmq-cluster
  name: rabbitmq-cluster-manage
  namespace: default
spec:
  ports:
    - name: http
      port: 15672
      protocol: TCP
      targetPort: 15672
  selector:
    app: rabbitmq-cluster
  type: NodePort
---
# Note: this NodePort exposes port 5672 so that applications can reach MQ through it.
kind: Service
apiVersion: v1
metadata:
  labels:
    app: rabbitmq-cluster
  name: rabbitmq-server-port
  namespace: default
spec:
  ports:
    - name: rmq-server-port
      port: 5672
      protocol: TCP
      targetPort: 5672
  selector:
    app: rabbitmq-cluster
  type: NodePort
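The actual NodePort numbers are assigned by Kubernetes when the services are created. Once everything is applied (section 3), they can be looked up with the service names defined above:

# Management UI (15672) and AMQP (5672) NodePorts
kubectl get svc rabbitmq-cluster-manage rabbitmq-server-port -n default
# Applications outside the cluster connect to <any-node-ip>:<NodePort mapped to 5672>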
2.3 03-rabbitmq-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq-cluster
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-cluster
subjects:
  - kind: ServiceAccount
    name: rabbitmq-cluster
    namespace: default
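The rabbitmq_peer_discovery_k8s plugin uses this ServiceAccount to read the endpoints of the headless service. One way to verify the grant after these objects are created:

# Should print "yes" if the Role and RoleBinding are wired up correctly
kubectl auth can-i get endpoints -n default --as=system:serviceaccount:default:rabbitmq-cluster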
2.4 04-rabbitmq-cluster-sts.yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: rabbitmq-cluster
  name: rabbitmq-cluster
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rabbitmq-cluster
  serviceName: rabbitmq-cluster
  template:
    metadata:
      labels:
        app: rabbitmq-cluster
    spec:
      containers:
        - args:
            - -c
            - cp -v /etc/rabbitmq/rabbitmq.conf ${RABBITMQ_CONFIG_FILE}; exec docker-entrypoint.sh rabbitmq-server
          command:
            - sh
          env:
            - name: TZ
              value: 'Asia/Shanghai'
            - name: RABBITMQ_ERLANG_COOKIE
              value: 'SWvCP0Hrqv43NG7GybHC95ntCJKoW8UyNFWnBEWG8TY='
            - name: K8S_SERVICE_NAME
              value: rabbitmq-cluster
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_NODENAME
              value: rabbit@$(POD_NAME).$(K8S_SERVICE_NAME).$(POD_NAMESPACE).svc.cluster.local
            - name: RABBITMQ_CONFIG_FILE
              value: /var/lib/rabbitmq/rabbitmq.conf
          # My private registry copy; replace with the image from section 1.6 if you do not have one
          image: 192.168.2.73:80/library/rabbitmq:3.8
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - rabbitmq-diagnostics
                - status
            # See https://www.rabbitmq.com/monitoring.html for monitoring frequency recommendations.
            initialDelaySeconds: 60
            periodSeconds: 60
            timeoutSeconds: 15
          name: rabbitmq
          ports:
            - containerPort: 15672
              name: http
              protocol: TCP
            - containerPort: 5672
              name: amqp
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - rabbitmq-diagnostics
                - status
            initialDelaySeconds: 20
            periodSeconds: 60
            timeoutSeconds: 10
          volumeMounts:
            - mountPath: /etc/rabbitmq
              name: config-volume
              readOnly: false
            - mountPath: /var/lib/rabbitmq
              name: rabbitmq-storage
              readOnly: false
            - name: timezone
              mountPath: /etc/localtime
              readOnly: true
      serviceAccountName: rabbitmq-cluster
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-volume
          configMap:
            items:
              - key: rabbitmq.conf
                path: rabbitmq.conf
              - key: enabled_plugins
                path: enabled_plugins
            name: rabbitmq-cluster-config
        - name: timezone
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-storage
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: "managed-nfs-storage"
        resources:
          requests:
            storage: 2Gi
These are all the YAML files needed to set up the RabbitMQ cluster.
3. Apply the YAML files
3.1 Run the command
kubectl apply -f .
3.2 Check the cluster status
Startup can be a little slow; give it a few minutes and all three nodes should come up.
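The pods are created in order by the StatefulSet, so you can watch them come up one by one, using the label defined in the manifests above:

# Wait until rabbitmq-cluster-0/1/2 are all Running and READY 1/1
kubectl get pods -l app=rabbitmq-cluster -n default -w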
3.3 Exec into a pod and check the cluster status
Run kubectl exec -it rabbitmq-cluster-0 -- /bin/bash
Run rabbitmqctl cluster_status
You should see that all the nodes have joined the cluster.
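If a node is missing from the output, the usual suspect in this setup is a mismatch between RABBITMQ_NODENAME and the DNS name the headless service assigns to the pod. A quick check, assuming the hostname utility is present in the image:

# Expected output: rabbitmq-cluster-0.rabbitmq-cluster.default.svc.cluster.local
# (this must match the hostname_suffix configured in the ConfigMap)
kubectl exec rabbitmq-cluster-0 -- hostname -f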
4. Access the RabbitMQ management UI in a browser
4.1 Find the NodePort
Run kubectl get svc rabbitmq-cluster-manage and note the NodePort mapped to 15672, then open http://<node-ip>:<NodePort> in the browser.
4.2 Log in with the username and password configured in 01-rabbitmq-configmap.yaml
Note: I ran into a problem here. The username and password configured in 01-rabbitmq-configmap.yaml were rejected at login, so my initial suspicion is that the default user never took effect.
Workaround:
1. Run kubectl exec -it rabbitmq-cluster-0 -- /bin/bash
2. Add a user: rabbitmqctl add_user ldy 123456 (username ldy, password 123456)
3. Tag it as an administrator: rabbitmqctl set_user_tags ldy administrator
4. Log in again in the browser with the new account and password.
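This is enough to log into the management UI. If applications should also connect over AMQP as this user, it additionally needs permissions on a vhost; for example, on the default vhost "/" (ldy is the example user created above):

# Grant configure/write/read permissions on the default vhost
rabbitmqctl set_permissions -p / ldy ".*" ".*" ".*"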