Huawei Cloud CCE environment setup notes
Environment preparation
- Purchase a CCE cluster on Huawei Cloud and set the node passwords
- After installation, adjust the security group rules that CCE creates by default
- One operations management host:
Runs nginx as a remote store for YAML files, so they can be applied directly with kubectl apply
Runs the docker service for building images
Runs NFS and mounts Huawei Cloud file storage, serving as the file-operations host
Node security group changes:
Allow servers in the VPC to reach the container network
Open the random ports of NodePort services (by default 30000-32767) to the company's public IPs
Open internal traffic between the security groups that need to communicate
Restrict the SSH port so that only the company can access it
Control-plane node security group changes:
Allow internal security-group communication
Environment setup
- Install the kubectl plugin directly from the CCE console; after installation, note down the username, password, and access endpoint
- Install the dashboard add-on and adjust the dashboard's user permissions for later use by the development team; the YAML is attached below
kubectl edit clusterrole kubernetes-dashboard-minimal -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2021-07-06T08:52:07Z"
  labels:
    release: cceaddon-dashboard
  name: kubernetes-dashboard-minimal
  resourceVersion: "33743869"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kubernetes-dashboard-minimal
  uid: af5bd859-6966-470f-989d-0247d9671082
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resourceNames:
  - kubernetes-dashboard-key-holder
  - kubernetes-dashboard-certs
  - kubernetes-dashboard-csrf
  resources:
  - secrets
  verbs:
  - get
  - update
  - delete
- apiGroups:
  - ""
  resourceNames:
  - kubernetes-dashboard-settings
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - services
  - persistentvolumeclaims
  - events
  - replicationcontrollers
  - namespaces
  - persistentvolumes
  - nodes
  - endpoints
  - resourcequotas
  - limitranges
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - deployments/scale
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - apps
  resources:
  - statefulsets
  - daemonsets
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - clusterroles
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - heapster
  - dashboard-metrics-scraper
  resources:
  - services
  verbs:
  - proxy
- apiGroups:
  - ""
  resourceNames:
  - heapster
  - 'http:heapster:'
  - 'https:heapster:'
  - dashboard-metrics-scraper
  - http:dashboard-metrics-scraper
  resources:
  - services/proxy
  verbs:
  - get
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
- Enable the SWR Huawei image registry (jenkins builds images and pushes them straight to the registry). Generate a permanent access key for the registry and add it to the jenkins server, and add a Huawei sub-account that can separately access and manage the SWR registry
- Set up ELK
When the log-collection volume is small, we use the simplest architecture: elasticsearch, filebeat, and kibana
When the volume is large, we can use filebeat -> kafka -> logstash -> elasticsearch, with kibana for display
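As a sketch of that larger pipeline's first hop, filebeat's output section could point at kafka instead of elasticsearch (the broker addresses and topic naming here are assumptions for illustration, not values from this setup):

```yaml
# Hypothetical filebeat.yml fragment: ship logs to kafka; logstash then
# consumes the topic and writes to elasticsearch.
output.kafka:
  hosts: ["kafka-1:9092", "kafka-2:9092"]  # placeholder broker addresses
  topic: "cce-logs-%{[fields.env]}"        # placeholder topic name
  required_acks: 1
  compression: gzip
```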
filebeat
Start one filebeat on every host node, with every container's logs mounted onto a host directory
Deploy filebeat on each host as a DaemonSet in CCE
Build the image with the following Dockerfile:
FROM debian:jessie
# If you change the version, download the matching Linux 64-bit sha from the official site and replace FILEBEAT_SHA1
ENV FILEBEAT_VERSION=7.10.0 \
    FILEBEAT_SHA1=509f0d7f2a16d70850c127dd20bea7c735fc749f8d90f8e797196d11887ceccf32d8d71e1177ae9dbe7c8d081133b7d75e431997123512fc17ee1e04e96a6bc5
ADD filebeat-7.10.0-linux-x86_64.tar.gz /opt/
RUN set -x && \
    sed -i "s@http://ftp.debian.org@http://mirrors.aliyun.com@g" /etc/apt/sources.list && \
    sed -i "s@http://security.debian.org@http://mirrors.aliyun.com@g" /etc/apt/sources.list && \
    sed -i "s@http://deb.debian.org@http://mirrors.aliyun.com@g" /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y apt-transport-https && \
    apt-get install -y wget && \
    cd /opt && \
    cd filebeat-* && \
    cp filebeat /bin && \
    cd /opt && \
    rm -rf filebeat* && \
    apt-get purge -y wget && \
    apt-get autoremove -y && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
    mkdir /data
COPY docker-entrypoint.sh /
ENTRYPOINT ["/bin/bash","/docker-entrypoint.sh"]
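Despite its name, the FILEBEAT_SHA1 value above is a SHA-512 digest, and the Dockerfile never actually checks it. A minimal sketch of how such a verification step could look (the demo file below is a placeholder, not the real tarball):

```shell
# Stand-in for the downloaded tarball.
printf 'demo-tarball-contents' > /tmp/filebeat-demo.tar.gz
# The digest as it would be published on the download page.
SUM=$(sha512sum /tmp/filebeat-demo.tar.gz | awk '{print $1}')
# sha512sum -c exits non-zero (failing the build) on a mismatch.
echo "$SUM  /tmp/filebeat-demo.tar.gz" | sha512sum -c -
```

In a RUN step, a failed check aborts the image build, which is the point of shipping the digest alongside the version.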
filebeat container entrypoint script
#!/bin/bash
ENV=${ENV:-"test"} # environment the logs are collected from
PROJ_NAME=${PROJ_NAME:-"no-define"} # project name
PASSWD=${PASSWD:-"H1BsfFprRCk"}
MULTILINE=${MULTILINE:-"^\["} # multiline match: a line matching this pattern starts a new log entry; other lines are appended to the previous one
cat >/etc/filebeat.yaml << EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /data/*.log
  - /data/*/*.log
  - /data/*/*/*.log
  - /data/*/*/*/*.log
  - /data/*/*/*/*/*.log
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: ${MULTILINE}
  multiline.negate: true
  multiline.match: after
output.elasticsearch:
  hosts: ["192.168.1.213:9200"]
  index: "re_ks_cce_${ENV}_%{+YYYY.MM.dd}"
  username: "elastic"
  password: "${PASSWD}"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch:
setup.ilm.enabled: false
setup.template.name: "re_ks_cce_${ENV}"
setup.template.pattern: "re_ks_cce_${ENV}*"
setup.template.enabled: false
setup.template.overwrite: true
EOF
set -xe
if [[ "$1" == "" ]]; then
exec filebeat -c /etc/filebeat.yaml
else
exec "$@"
fi
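The ${VAR:-default} expansions at the top of the script are plain bash parameter defaulting: the default is used only when the variable is unset or empty. A minimal demonstration:

```shell
unset ENV
echo "${ENV:-test}"    # unset, so the default applies: prints "test"
ENV=prod
echo "${ENV:-test}"    # set, so the value wins: prints "prod"
```

This is what lets the same image serve every environment: the DaemonSet just sets ENV, PROJ_NAME, and MULTILINE per deployment.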
The installation packages can be downloaded from the Tsinghua mirror site
Create the DaemonSet manually on the CCE console
Install es and kibana from binary packages
- Install the Apollo configuration center: find the binary package and start it standalone
- Install nginx-ingress directly as a plugin on the Huawei Cloud console
- Load balancers: one exposed only to the company, one used only internally, and one dedicated to nginx-ingress
- Configure the domain records at the DNS provider
Configure jenkins
Install the pipeline plugin in jenkins and create one job per service
Pipeline script
def createVersion() {
    return new Date().format('yyyyMMdd_HHmm')
}
pipeline {
    agent any
    environment {
        _version = createVersion()
    }
    parameters {
        string defaultValue: 'https://gitlab.bjkcwl.com/kcwl-service/kcwl-carrier.git', description: 'git address; change it here if needed, e.g. https://gitlab.bjkcwl.com/kcwl-service/kcwl-carrier.git', name: 'git_address', trim: false
        string defaultValue: 'v5.x-beta', description: 'branch name, default v5.x-beta', name: 'branch', trim: false
        string defaultValue: 'kc_base_jre:8u181-release-v0.3', description: 'base image', name: 'base_image', trim: false
        string defaultValue: 'v5.2', description: 'image tag, e.g. v5.2', name: 'add_tag', trim: false
        string defaultValue: 'kcwl-carrier', description: 'project jar name, e.g. kcwl-carrier', name: 'project_name', trim: false
        string defaultValue: 'kcwl-carrier', description: 'deployment name of the project', name: 'deployment_name', trim: false
        string defaultValue: 'kcwl-test', description: 'namespace of the project, default kcwl-test; ask an administrator if unsure', name: 'namespace', trim: false
    }
    stages {
        stage('clean') {
            steps {
                cleanWs()
            }
        }
        stage('pull code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/${branch}']], extensions: [], userRemoteConfigs: [[credentialsId: '7a7a70cc-0d39-4e1f-89a5-d74d80615580', url: '${git_address}']]])
            }
        }
        stage('maven') {
            steps {
                sh label: '', script: 'mvn clean install -Dmaven.test.skip=true'
            }
        }
        stage('package') {
            steps {
                sh label: '', script: """cd ./target && mkdir app && mv ${params.project_name}.jar ./app"""
            }
        }
        stage('image') {
            steps {
                writeFile file: './Dockerfile', text: """FROM swr.cn-north-4.myhuaweicloud.com/kcwl-test/${params.base_image}
ADD ./target/app /software/app"""
                sh label: '', script: """docker build -t swr.cn-north-4.myhuaweicloud.com/kcwl-test/${params.project_name}:${branch}_${add_tag}_${_version} . && docker push swr.cn-north-4.myhuaweicloud.com/kcwl-test/${params.project_name}:${branch}_${add_tag}_${_version}"""
            }
        }
        stage('update_deployment') {
            steps {
                sh label: '', script: """ssh 121.36.48.177 'python3 /data/scripts/set_deployment.py ${params.deployment_name} ${params.namespace} swr.cn-north-4.myhuaweicloud.com/kcwl-test/${params.project_name}:${branch}_${add_tag}_${_version}' """
            }
        }
    }
}
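The resulting image tags follow the scheme ${branch}_${add_tag}_${_version}, where createVersion() stamps the build time as yyyyMMdd_HHmm. The same tag can be reproduced in shell (the parameter values below are the job defaults, used here for illustration):

```shell
# Mirror the pipeline's createVersion(): yyyyMMdd_HHmm.
_version=$(date +%Y%m%d_%H%M)
branch=v5.x-beta
add_tag=v5.2
tag="${branch}_${add_tag}_${_version}"
# Full image reference as pushed by the 'image' stage.
echo "swr.cn-north-4.myhuaweicloud.com/kcwl-test/kcwl-carrier:${tag}"
```

Because the timestamp is minute-granular, every build gets a distinct tag, which is what lets set_deployment.py trigger a rollout by changing the image reference.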
Set up the grafana + prometheus monitoring stack
First install the exporter plugins; the versions are all in the YAML files
Deploy to every host as a DaemonSet, node-export.yaml:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    daemon: "node-exporter"
    grafanak8sapp: "true"
spec:
  selector:
    matchLabels:
      daemon: "node-exporter"
      grafanak8sapp: "true"
  template:
    metadata:
      name: node-exporter
      labels:
        daemon: "node-exporter"
        grafanak8sapp: "true"
    spec:
      volumes:
      - name: proc
        hostPath:
          path: /proc
          type: ""
      - name: sys
        hostPath:
          path: /sys
          type: ""
      - name: root
        hostPath:
          type: ""
          path: /
      containers:
      - name: node-exporter
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/prom/node-exporter:v1.0.1
        imagePullPolicy: IfNotPresent
        args:
        - --path.procfs=/host_proc
        - --path.sysfs=/host_sys
        - --path.rootfs=/host/root
        ports:
        - name: node-exporter
          hostPort: 9100
          containerPort: 9100
          protocol: TCP
        volumeMounts:
        - name: sys
          readOnly: true
          mountPath: /host_sys
        - name: proc
          readOnly: true
          mountPath: /host_proc
        - name: root
          readOnly: true
          mountPath: /host/root
      imagePullSecrets:
      - name: default-secret
      hostNetwork: true
      hostPID: true
Install the kube-state-metrics plugin
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  - mutatingwebhookconfigurations
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - volumeattachments
  - storageclasses
  verbs:
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - networkpolicies
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  labels:
    grafanak8sapp: "true"
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      grafanak8sapp: "true"
      app: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        grafanak8sapp: "true"
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/kube-state-metrics:v1.9.8
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
      imagePullSecrets:
      - name: default-secret
      serviceAccountName: kube-state-metrics
Install the cadvisor exporter
Install it on every node as a DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: monitoring
  labels:
    app: cadvisor
spec:
  selector:
    matchLabels:
      name: cadvisor
  template:
    metadata:
      labels:
        name: cadvisor
    spec:
      hostNetwork: true
      containers:
      - name: cadvisor
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/cadvisor:v0.33.0
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 400Mi
            cpu: 400m
          limits:
            memory: 1000Mi
            cpu: 800m
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
        ports:
        - name: http
          containerPort: 4195
          protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 4195
          initialDelaySeconds: 5
          periodSeconds: 10
        args:
        - -disable_metrics=udp
        - -housekeeping_interval=10s
        - -port=4195
      imagePullSecrets:
      - name: default-secret
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/docker
Install the blackbox plugin for service health-check monitoring
blackbox has a web UI that needs to be exposed; set an IP allowlist/denylist as needed
YAML files
cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
  namespace: monitoring
data:
  blackbox.yml: |-
    modules:
      http_2xx:
        prober: http
        timeout: 2s
        http:
          valid_http_versions: ["HTTP/1.1", "HTTP/2"]
          valid_status_codes: [200,301,302]
          method: GET
          preferred_ip_protocol: "ip4"
      tcp_connect:
        prober: tcp
        timeout: 2s
dp.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: blackbox-exporter
  namespace: monitoring
  labels:
    app: blackbox-exporter
  annotations:
    deployment.kubernetes.io/revision: "1"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      volumes:
      - name: config
        configMap:
          name: blackbox-exporter
          defaultMode: 420
      containers:
      - name: blackbox-exporter
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/blackbox-exporter:v0.18.0
        imagePullPolicy: IfNotPresent
        args:
        - --config.file=/etc/blackbox_exporter/blackbox.yml
        - --log.level=info
        - --web.listen-address=:9115
        ports:
        - name: blackbox-port
          containerPort: 9115
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 50Mi
        volumeMounts:
        - name: config
          mountPath: /etc/blackbox_exporter
        readinessProbe:
          tcpSocket:
            port: 9115
          initialDelaySeconds: 5
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
      imagePullSecrets:
      - name: default-secret
svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: blackbox-exporter
  namespace: monitoring
spec:
  selector:
    app: blackbox-exporter
  ports:
  - name: blackbox-port
    protocol: TCP
    port: 9115
ingress.yaml
Create it directly on the CCE console page, and restrict access to an IP allowlist
Install the prometheus server
File storage must be purchased and mounted at prometheus's data storage directory
Set the ingress allowlist/denylist according to your situation
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "5"
  labels:
    name: prometheus
  name: prometheus
  namespace: monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: prometheus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      nodeName: 192.168.3.65
      containers:
      - name: prometheus
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/prometheus:v2.23.0
        imagePullPolicy: IfNotPresent
        command:
        - /bin/prometheus
        args:
        - --config.file=/data/etc/prometheus.yml
        - --storage.tsdb.path=/data/prom-db
        - --storage.tsdb.min-block-duration=60m
        - --storage.tsdb.retention=360h
        - --web.enable-lifecycle
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: cce-sfs-kqsvrz6z-monitor
          subPath: prometheus
        resources:
          requests:
            cpu: "1000m"
            memory: "8Gi"
          limits:
            cpu: "3000m"
            memory: "16Gi"
      imagePullSecrets:
      - name: default-secret
      securityContext:
        runAsUser: 0
      serviceAccountName: prometheus
      volumes:
      - name: cce-sfs-kqsvrz6z-monitor
        persistentVolumeClaim:
          claimName: cce-sfs-krcwvcll-v0dl-monitor
svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
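A sketch of what the scrape configuration in /data/etc/prometheus.yml could look like for the exporters above; the job names, target addresses, and probe URL are illustrative assumptions, not values from this cluster. The blackbox job uses the standard relabeling pattern: the listed target becomes the ?target= parameter, and the actual scrape address is rewritten to the exporter:

```yaml
# Hypothetical prometheus.yml fragment tying together the exporters above.
scrape_configs:
- job_name: node-exporter               # DaemonSet on hostPort 9100
  static_configs:
  - targets: ['192.168.3.65:9100']      # placeholder node address
- job_name: kube-state-metrics
  static_configs:
  - targets: ['kube-state-metrics.monitoring:8080']
- job_name: blackbox-http               # probe targets via blackbox-exporter
  metrics_path: /probe
  params:
    module: [http_2xx]                  # module defined in the ConfigMap above
  static_configs:
  - targets: ['http://example.internal/health']  # placeholder URL
  relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    replacement: blackbox-exporter.monitoring:9115
```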
Install grafana
File storage must be purchased and mounted at grafana's data storage directory
rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node
dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: swr.cn-north-4.myhuaweicloud.com/kcwl-test/grafana:7.5.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: cce-sfs-grafana
      imagePullSecrets:
      - name: default-secret
      securityContext:
        runAsUser: 0
      volumes:
      - name: cce-sfs-grafana
        persistentVolumeClaim:
          claimName: cce-sfs-monitor-grafana # replace with the name of the PVC you created
svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
Set the ingress allowlist/denylist according to your situation
Once grafana has started, exec into the grafana container and install the following plugins
kubectl -n monitoring exec -it grafana -- /bin/bash
# run the following commands inside the container
grafana-cli plugins install grafana-kubernetes-app (no longer works on new versions)
grafana-cli plugins install devopsprodigy-kubegraf-app (install this on new versions)
grafana-cli plugins install grafana-clock-panel
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install briangann-gauge-panel
grafana-cli plugins install natel-discrete-panel
Configure k8s in jenkins
Install the kubernetes plugin
A certificate needs to be generated in advance
Install cfssl
This tool makes generating certificates very convenient; pem and crt certificates use the same encoding and can be used directly
vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "XS",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
The O field in the certificate request sets the certificate's group to system:masters.
The predefined RBAC ClusterRoleBinding binds the group system:masters to the ClusterRole cluster-admin, which grants this certificate full cluster permissions.
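For reference, that built-in binding has roughly this shape (reproduced from the standard Kubernetes bootstrap RBAC, not dumped from this cluster):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```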
Create the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key --profile=kubernetes admin-csr.json | cfssljson -bare admin
This generates the following 3 files:
admin.csr
admin-key.pem
admin.pem
Convert to pkcs12 format with openssl:
openssl pkcs12 -export -out ./jenkins-admin.pfx -inkey ./admin-key.pem -in ./admin.pem -passout pass:secret
Download jenkins-admin.pfx to your desktop
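A self-contained sketch of the same conversion, using a throwaway self-signed certificate in place of the real admin key and cert:

```shell
cd "$(mktemp -d)"
# Throwaway key and self-signed cert standing in for admin-key.pem / admin.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout admin-key.pem \
  -out admin.pem -days 1 -subj "/CN=admin/O=system:masters"
# Same conversion as above: bundle key and cert into a PKCS#12 file.
openssl pkcs12 -export -out ./jenkins-admin.pfx \
  -inkey ./admin-key.pem -in ./admin.pem -passout pass:secret
```

The pfx bundle is what the jenkins kubernetes plugin accepts as a "certificate" credential; the export password (here "secret") must be entered alongside it.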
Configure jenkins (hostAliases so its pods can resolve the internal domains):
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "192.168.2.87"
    hostnames:
    - "jenkins.bjkcwl.com"
  - ip: "192.168.2.38"
    hostnames:
    - "gitlab.bjkcwl.com"