Overview of bugs encountered when setting up Kafka and Spark with Spring Boot

Caused by: org.apache.kafka.common.KafkaException: org.codehaus.jackson.map.deser.std.StringDeserializer is not an instance of org.apache.kafka.common.serialization.Deserializer

org.codehaus.jackson.map.deser.std.StringDeserializer is not an instance of org.apache.kafka.common.serialization.Deserializer, which means the wrong class was imported: the Jackson StringDeserializer has the same simple name as Kafka's but does not implement the Kafka interface.
Note that the deserializer you configure should be the one from the org.apache.kafka.common.serialization package (org.apache.kafka.common.serialization.StringDeserializer).
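A minimal sketch of a consumer configuration that references the correct class (the broker address and group id below are placeholders, not values from the original setup):

```java
import java.util.Properties;

public class KafkaConsumerConfigExample {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "demo-group");              // placeholder group id
        // Correct: the Kafka serialization package.
        // Wrong:   org.codehaus.jackson.map.deser.std.StringDeserializer
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("key.deserializer"));
    }
}
```

If your IDE auto-imports StringDeserializer, double-check the import statement it picked, since both classes are offered by name.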

System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.

This happens because the memory available to the Spark driver is below Spark's required minimum.
After defining your SparkConf variable, add a call of the form variableName.set("spark.testing.memory", "2147480000"), e.g. conf.set("spark.testing.memory", "2147480000"). Alternatively, raise the driver memory itself with the --driver-memory option or spark.driver.memory, as the error message suggests.
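A sketch of where that call goes in a Java driver (the app name and master are placeholder values; note that spark.testing.memory bypasses Spark's minimum-memory check and is intended for testing, so spark.driver.memory is the safer choice outside local development):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkMemoryConfig {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("demo")      // placeholder app name
                .setMaster("local[*]"); // placeholder master
        // Workaround from the text: raise the memory seen by Spark's startup check.
        conf.set("spark.testing.memory", "2147480000");
        // Supported alternative: raise the driver memory itself.
        // conf.set("spark.driver.memory", "1g");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.stop();
    }
}
```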

java.io.IOException: Could not locate executable D:\hadoop\hadoop-3.3.0\bin\winutils.exe in the Hadoop binaries.

This kind of startup error occurs because Hadoop's bundled binaries are not compatible with Windows; download winutils.exe and place it in Hadoop's bin directory, making sure HADOOP_HOME points at that installation.
Download link
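Besides placing winutils.exe under the bin directory, a common companion step is to point hadoop.home.dir at the Hadoop installation before any Spark/Hadoop code initializes. A minimal sketch (the path is the one from the error message above; adjust it to your machine):

```java
public class HadoopHomeSetup {
    public static void main(String[] args) {
        // Must run before Spark/Hadoop classes are loaded.
        // Path taken from the error message; adjust for your installation.
        System.setProperty("hadoop.home.dir", "D:\\hadoop\\hadoop-3.3.0");
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```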

Exception in thread "main" java.lang.NoClassDefFoundError: scala/Cloneable

This error means the Spark version does not match the Scala version of the other dependencies on the classpath; align them, for example by downgrading Spark to a build compiled against the same Scala version as the rest of your configuration.
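For a Maven-based Spring Boot project, a sketch of keeping the Scala suffix consistent across Spark artifacts (the artifact names follow Spark's naming scheme, but the specific versions below are illustrative assumptions, not values from the original setup):

```xml
<!-- The _2.12 suffix is the Scala version Spark was compiled against.
     Every Spark artifact (and any other Scala dependency) must use the same suffix. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>3.3.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>3.3.0</version>
</dependency>
```

Mixing suffixes (e.g. spark-core_2.12 with a _2.13 artifact) is a typical cause of NoClassDefFoundError on core Scala types such as scala/Cloneable.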
