Background

DolphinScheduler 3.0.0 has already been deployed to Kubernetes via Helm; the detailed steps are in my other document: Deploying DS. After exposing port 12345 through a NodePort and logging in to DS, using "Resource Center" – "Upload File" fails on submit with the message "storage not enabled".
Following the official documentation, I configured both MinIO storage and local storage for the resource files, and both produced the same "storage not enabled" message.

Modify the storage configuration in values.yaml

I am using MinIO for storage, so it is configured the S3 way; the reference configuration is in the official Kubernetes deployment guide.

common:
  ## Configmap
  configmap:
    DOLPHINSCHEDULER_OPTS: ""
    DATA_BASEDIR_PATH: "/dolphinscheduler/tmp"
    ### MinIO storage configuration
    RESOURCE_STORAGE_TYPE: "S3"
    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
    FS_DEFAULT_FS: "s3a://dolphinscheduler"
    FS_S3A_ENDPOINT: "http://192.168.1.11:31911"
    FS_S3A_ACCESS_KEY: "minio"
    FS_S3A_SECRET_KEY: "12345678"
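
Before reinstalling, you can render the chart locally to confirm these values actually land in the generated ConfigMap; a quick check, assuming it is run from the chart directory:

helm template dol . -n dol | grep -B 2 -A 8 "RESOURCE_STORAGE_TYPE"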

Redeploy DS

helm uninstall dol -n dol   # the first dol is the Helm release name, the second is the namespace
helm install dol . -n dol   # reinstall DS from the chart directory
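
After the install, a quick sanity check confirms all the pods come back up before testing the UI again:

kubectl get pods -n dol    # wait until the api, master, worker, and alert pods are Running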

Check the ConfigMap contents

kubectl get configmap dol-common -n dol -o yaml

The output shows the storage settings have been injected into the ConfigMap. Uploading a file, however, still fails with "storage not enabled".
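
To read just the storage key instead of scanning the whole YAML, jsonpath also works; a small convenience command, assuming the key names shown above:

kubectl get configmap dol-common -n dol -o jsonpath='{.data.RESOURCE_STORAGE_TYPE}'    # prints S3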

Troubleshooting

Reading the official documentation

I went through the official documentation in detail, asked our architect for help, and pulled the code from GitHub to analyze, but still got nowhere. Then I found a similar issue in the project's GitHub issue tracker, which gave me the idea that the api and worker pods might not be referencing this ConfigMap at all.

Log in to the api and worker Pods to verify the guess

kubectl exec -it pod_name -n dol -- /bin/bash

Inside the pods, /opt/dolphinscheduler/conf/common.properties still holds the default configuration; the contents of the ConfigMap are not being used.
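
The same check can be run non-interactively, which is handy when repeating it across the api and worker pods (pod_name is a placeholder):

kubectl exec pod_name -n dol -- grep '^resource.storage.type' /opt/dolphinscheduler/conf/common.properties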

Solution

There are several possible fixes; I chose to create a new ConfigMap and mount it into the pods:

1. Copy the current common.properties file out of the pod and modify it
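One way to copy the file out is kubectl cp (pod_name is a placeholder for an api or worker pod):

kubectl cp dol/pod_name:/opt/dolphinscheduler/conf/common.properties /home/k8s/common.properties

The modified file is shown below, with the changed sections marked: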
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# user data local directory path, please make sure the directory exists and have read write permissions
data.basedir.path=/dolphinscheduler/tmp

# resource storage type: HDFS, S3, NONE
#-------------------------- modified section --------------------------------------
resource.storage.type=S3
#-------------------------- end of modification -----------------------------------
# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
#-------------------------- modified section --------------------------------------
resource.upload.path=/dolphinscheduler
#-------------------------- end of modification -----------------------------------
# whether to startup kerberos
hadoop.security.authentication.startup.state=false

# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf

# login user from keytab username
login.user.keytab.username=hdfs-mycluster@ESZ.COM

# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab

# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=hdfs
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
#-------------------------- modified section --------------------------------------
fs.defaultFS=s3a://dolphinscheduler
aws.access.key.id=root
aws.secret.access.key=12345678
aws.region=us-east-1
aws.endpoint=http://192.168.1.1:31911
#-------------------------- end of modification -----------------------------------
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s
# job history status url when application number threshold is reached(default 10000, maybe it was set to 1000)
yarn.job.history.status.address=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s

# datasource encryption enable
datasource.encryption.enable=false

# datasource encryption salt
datasource.encryption.salt=!@#$%^&*

# data quality option
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar

#data-quality.error.output.path=/tmp/data-quality-error-data

# Network IP gets priority, default inner outer

# Whether hive SQL is executed in the same session
support.hive.oneSession=false

# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true

# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=

# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default

# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh

# development state
development.state=false

# rpc port
alert.rpc.port=50052

# Url endpoint for zeppelin RESTful API
zeppelin.rest.url=http://localhost:8080

Note: the api and worker pods can share the same configuration file.

Save the file above as /home/k8s/common.properties.

Create the ConfigMap:
kubectl create configmap comm-config --from-file=/home/k8s/common.properties -n dol
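
You can then confirm the file landed in the ConfigMap (the key defaults to the file name):

kubectl describe configmap comm-config -n dol    # should list a common.properties key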

2. Modify the template files

Modify the Helm chart templates, adding the ConfigMap mount to both the api and worker templates.
Taking the worker as an example:

cd kubernetes/dolphinscheduler/templates
vim statefulset-dolphinscheduler-worker.yaml
## Add a mount entry under the volumeMounts field
- mountPath: /opt/dolphinscheduler/conf/common.properties
  subPath: common.properties
  name: comm-config

## Add the volume under the volumes: field; do not put it under the volumeClaimTemplates section by mistake.
- name: comm-config
  configMap:
    name: comm-config
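
For orientation, this is roughly where the two snippets sit inside the worker StatefulSet; a trimmed sketch, with the container name and surrounding fields following whatever the chart already defines:

spec:
  template:
    spec:
      containers:
        - name: dolphinscheduler-worker   # use the container name from your template
          volumeMounts:
            # ...existing mounts...
            - mountPath: /opt/dolphinscheduler/conf/common.properties
              subPath: common.properties
              name: comm-config
      volumes:
        # ...existing volumes...
        - name: comm-config
          configMap:
            name: comm-config

The subPath mount replaces only common.properties and leaves the rest of /opt/dolphinscheduler/conf untouched.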
3. Redeploy DS

Redeploy with Helm, and file uploads now work. Note that you first need to create a tenant (user group) under "Security Center" and add the current user to it before uploads are allowed.
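
After the pods restart, the grep from the troubleshooting step should now return the S3 value instead of the default (pod_name again a placeholder):

kubectl exec pod_name -n dol -- grep '^resource.storage.type' /opt/dolphinscheduler/conf/common.properties    # resource.storage.type=S3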

Remaining issue

When using MinIO as the DS file store, the bucket cannot be customized: even though the bucket is defined via fs.defaultFS=s3a://dolphinscheduler and the matching bucket exists in MinIO, DS still automatically creates and uses a bucket named "dolphinscheduler-test".
