Preface

This Wednesday, while debugging an api CrashLoopBackOff issue and drafting some PromQL queries, a developer on the team reported that the ES cluster in environment 3 had gone down again. The symptom was that data queries failed with:

circuit_breaking_exception[[parent] Data too large, data for [<transport_request>] would be [1318476937/1.2gb], which is larger than the limit of [1237372108/1.1gb], real usage: [1318456248/1.2gb], new bytes reserved: [20689/20.2kb]];
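The exception is thrown by the parent circuit breaker. A quick way to see how close each breaker is to its limit is the node breaker stats; a sketch, reusing the address and credentials that appear in the commands later in this post:

# Show per-node circuit breaker limits and estimated usage
curl "http://elastic:passwd@10.0.0.2:9200/_nodes/stats/breaker?pretty"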

Analysis

This ES query problem has actually shown up quite a few times already. First, the basic setup:

[root@10-0-0-13 ~]# curl http://elastic:passwd@10.0.0.2:9200/_cat/indices
green  open logstash-2021.02.09            twxisEqWQyW9ZgWcmzXaQQ 5 1  8004 0   6.5mb   2.8mb
yellow open logstash-2021.02.02            Mt-sXNwXSESrKFGLfZJI2A 5 1   296 0 408.4kb 225.3kb
green  open logstash-2021.02.11            s2yPSkHpT96p5gZgUg6GBw 5 1  2467 0   1.4mb 631.7kb
green  open logstash-2021.02.22            sJxC4xivTyqXaGzTMKCNlg 5 1  7720 0   6.3mb   2.7mb
yellow open logstash-2021.02.20            KFkfENrtTIidras9vSTCLQ 5 1           1.6mb        
green  open logstash-2021.02.03            zzegW9P2Rv6Ot0jGhF5GSA 5 1   119 0   367kb 211.1kb
green  open logstash-2021.02.12            3q0XTx1QRaezjSNGxsmM8A 5 1  5258 0     2mb   1.1mb
  • A cluster of three nodes
  • 5 primary shards with 1 replica per index
  • Indices are created per day
  • Each index holds only a few hundred KB of data, well under 1 MB
  • The aggregation is on the request_id field (mapped as keyword in the mapping below)
  • Both the timestamp and request_id fields are indexed
curl "http://elastic:passwd@10.0.0.2:9200/logstash-2021.02.16/_mapping?pretty"
{
  "logstash-2021.02.16" : {
    "mappings" : {
      "fluentd" : {
        "_all" : {
          "enabled" : false
        },
        "dynamic_templates" : [
          {
            "message_full" : {
              "match" : "message_full",
              "mapping" : {
                "fields" : {
                  "keyword" : {
                    "ignore_above" : 2048,
                    "type" : "keyword"
                  }
                },
                "type" : "text"
              }
            }
          },
          {
            "message" : {
              "match" : "message",
              "mapping" : {
                "type" : "text"
              }
            }
          },
          {
            "strings" : {
              "match_mapping_type" : "string",
              "mapping" : {
                "type" : "keyword"
              }
            }
          }
        ],
        "properties" : {
          "@timestamp" : {
            "type" : "date"
          },
          "error_code" : {
            "type" : "long"
          },
          "function_name" : {
            "type" : "keyword"
          },
          "key_merged" : {
            "type" : "boolean"
          },
          "kind" : {
            "type" : "keyword"
          },
          "level" : {
            "type" : "keyword"
          },
          "message" : {
            "type" : "text"
          },
          "namespace_name" : {
            "type" : "keyword"
          },
          "request_id" : {
            "type" : "keyword"
          },
          "src" : {
            "type" : "keyword"
          },
          "time_nano" : {
            "type" : "long"
          }
        }
      },
      "_default_" : {
        "_all" : {
          "enabled" : false
        },
        "dynamic_templates" : [
          {
            "message_full" : {
              "match" : "message_full",
              "mapping" : {
                "fields" : {
                  "keyword" : {
                    "ignore_above" : 2048,
                    "type" : "keyword"
                  }
                },
                "type" : "text"
              }
            }
          },
          {
            "message" : {
              "match" : "message",
              "mapping" : {
                "type" : "text"
              }
            }
          },
          {
            "strings" : {
              "match_mapping_type" : "string",
              "mapping" : {
                "type" : "keyword"
              }
            }
          }
        ]
      }
    }
  }
}

Shard distribution:

curl "http://elastic:password@localhost:9200/_cat/shards?index=*&s=node,store:desc"
logstash-2021.02.09 4 r STARTED    2484  898.1kb 10.0.0.9 1612253205000016909
logstash-2021.02.09 4 p STARTED    2484  898.1kb 10.0.0.5 1612253205000016809
logstash-2021.02.09 2 r STARTED    2490  903.6kb 10.0.0.5 1612253205000016809
logstash-2021.02.09 2 p STARTED    2490  903.6kb 10.0.0.2 1612253205000016709
logstash-2021.02.09 1 r STARTED    2400  878.3kb 10.0.0.5 1612253205000016809
logstash-2021.02.09 1 p STARTED    2400  878.3kb 10.0.0.2 1612253205000016709
logstash-2021.02.09 3 r STARTED    2839 1014.9kb 10.0.0.9 1612253205000016909
logstash-2021.02.09 3 p STARTED    2839 1009.1kb 10.0.0.2 1612253205000016709
logstash-2021.02.09 0 r STARTED    2765  986.1kb 10.0.0.9 1612253205000016909
logstash-2021.02.15 4 r STARTED     668  171.4kb 10.0.0.5 1612253205000016809
logstash-2021.02.15 4 p STARTED     668  171.4kb 10.0.0.2 1612253205000016709
logstash-2021.02.15 2 r STARTED     951  230.6kb 10.0.0.9 1612253205000016909
logstash-2021.02.15 2 p STARTED     951  230.6kb 10.0.0.2 1612253205000016709
logstash-2021.02.15 1 r STARTED     659  171.4kb 10.0.0.5 1612253205000016809
···

As you can see, the shards are spread quite evenly and the mapping looks fine.
The query looks like this:

{
   "aggs":{
      "by_request_id":{
         "aggs":{
            "error_code_filter":{
               "bucket_selector":{
                  "buckets_path":{
                     "max_error_code":"max_error-code"
                  },
                  "script":"params.max_error_code > 0"
               }
            },
            "logs":{
               "top_hits":{
                  "size":1000,
                  "sort":[
                     {
                        "time_nano":{
                           "order":"asc"
                        }
                     }
                  ]
               }
            },
            "max_error_code":{
               "max":{
                  "field":"error_code"
               }
            },
            "min_time_nano":{
               "min":{
                  " field":"time_nano"
               }
            },
            "min_time_nano_sort":{
               "bucket_sort":{
                  "from":0,
                  "size":20,
                  "sort":[
                     {
                        "min_time_nano":{
                           "order":"desc"
                        }
                     }
                  ]
               }
            }
         },
         "composite":{
            "size":10000,
            "sources":{
               "request_id":{
                  "terms":{
                     "field":"request_id"
                  }
               }
            }
         }
      }
   },
   "query":{
      "bool":{
         "must":[
            {
               "match":{
                  "namespace_name":"test-func"
               }
            },
            {
               "match":{
                  "functionname":"api-pod"
               }
            },
            {
               "range":{
                  "@timestamp":{
                     "gte":"2021-02-23 19:39:17",
                     "lte":"2021-02-24 19:39:17"
                  }
               }
            }
         ]
      }
   }
}

The query is also issued against a specific index:

		res, err := esClient.Search(
			esClient.Search.WithContext(ctx),
			esClient.Search.WithIndex(logstashIndex),
			esClient.Search.WithBody(&queryStr),
			esClient.Search.WithScroll(time.Minute),
		)

The query is not complicated, yet it keeps failing with Data too large. As usual I asked Google first and found a post describing a situation much like ours. The likely causes it gave were:

Clearly the specific request is likely not the problem here. There are two main possible causes here:
1, Something else is holding on to excessive amounts of memory. Notice that some parts of ES auto-scales with heap size.
2, The GC cannot (or did not) keep up with garbage in the heap causing the node to go above the circuit breaker limit.
In other words, something is holding on to a large amount of heap, and GC cannot (or does not) reclaim it in time, so the node's memory usage climbs above the circuit breaker limit. The advice was to try tuning the JVM parameters or to add memory. Looking at our ES startup parameters, the heap is set to Xms = Xmx = 4G (leaving roughly another 4G of RAM for Lucene), and the relevant JVM flags are all in place:

/usr/local/services/jdk1.8.0_232/bin/java 
-Xms4000m 
-Xmx4000m 
-XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=75 
-XX:+UseCMSInitiatingOccupancyOnly 
-Des.networkaddress.cache.ttl=60 
-Des.networkaddress.cache.negative.ttl=10 
-XX:+AlwaysPreTouch 
-Xss1m 
-Djava.awt.headless=true 
-Dfile.encoding=UTF-8 
-Djna.nosys=true 
-XX:-OmitStackTraceInFastThrow 
-Dio.netty.noUnsafe=true 
-Dio.netty.noKeySetOptimization=true 
-Dio.netty.recycler.maxCapacityPerThread=0 
-Dlog4j.shutdownHookEnabled=false 
-Dlog4j2.disable.jmx=true 
-Djava.io.tmpdir=/tmp/elasticsearch-1347575564397636849 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=data 
-XX:ErrorFile=logs/hs_err_pid%p.log 
-XX:+PrintGCDetails 
-XX:+PrintGCDateStamps 
-XX:+PrintTenuringDistribution 
-XX:+PrintGCApplicationStoppedTime 
-Xloggc:logs/gc.log 
-XX:+UseGCLogFileRotation 
-XX:NumberOfGCLogFiles=32 
-XX:GCLogFileSize=64m 
-Des.path.home=/data1/containers/1612253205000016809/es 
-Des.path.conf=/data1/containers/1612253205000016809/es/config 
-Des.distribution.flavor=oss -Des.distribution.type=zip 
-cp /data1/containers/1612253205000016809/es/lib/* org.elasticsearch.bootstrap.Elasticsearch 
-d

That left adding memory, which was also the poster's ultimate fix. However, the ES clusters in our other environments hold even more data with the same configuration and never hit this problem, which was puzzling. So I consulted an ES expert, whose advice was to try raising indices.breaker.fielddata.limit and clearing all index caches, or simply to add memory, since in most cases it comes down to not having enough of it. After changing the setting and clearing the caches as suggested, the ES queries indeed stopped failing. But during testing, if queries were fired repeatedly in quick succession, the error came right back, which left only one option: add memory.
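For reference, the adjustment was roughly the following: raise the fielddata breaker limit and clear the index caches. This is a sketch reusing the address and credentials from the earlier commands, and 70% is an illustrative value rather than the exact one we set:

# Raise the fielddata circuit breaker limit (the default is 60% of the heap)
curl -X PUT "http://elastic:passwd@10.0.0.2:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "indices.breaker.fielddata.limit": "70%"
    }
}'

# Clear the fielddata / query / request caches on all indices
curl -X POST "http://elastic:passwd@10.0.0.2:9200/_cache/clear?pretty"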

Temporary workaround

We then wondered whether something was wrong with this particular ES deployment and decided to rebuild it, i.e. deploy a fresh ES. This is a test environment, so the data could be deleted. Our ES is provisioned through the Kubernetes Service Catalog, so we had to delete the ServiceBinding and ServiceInstance and create new ones. After a flurry of operations we had a brand-new ES instance, and of course it no longer reported the error, simply because it held no data at all.
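The rebuild itself is just deleting and re-creating the Service Catalog resources; a sketch with hypothetical resource names and namespace, not our actual manifests:

# Delete the binding first, then the instance (names and namespace are hypothetical)
kubectl delete servicebinding es-binding -n test-env3
kubectl delete serviceinstance es-instance -n test-env3

# Re-create both from the original manifests
kubectl apply -f es-serviceinstance.yaml
kubectl apply -f es-servicebinding.yaml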

Unresolved issues

Fundamentally, the problem was not solved; we never found the root cause of the Data too large error. Rebuilding ES brought things back to normal, but that approach isn't really acceptable: in a customer environment the ES data may be important and cannot simply be deleted. If this happens there, what options are left besides adding memory? So I'm noting the problem down for now; looking back at it later, it may well turn out not to be so hard.

Tips

One more small point worth sharing: if you query an index that doesn't exist, the request fails with:

index_not_found_exception: no such index

This one also held us up for a while. Indices are created from the template on the first write, so if no data is written on a given day, that day's index never exists and any query against it returns this error. A colleague suggested a scheduled job to create it, although from what I found, auto-creation also seemed to be configurable:

curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
    "persistent": {
        "action.auto_create_index": "YourIndexName,index10,-index1*,+ind*" 
    }
}'

I tried it but it didn't work, or maybe I used it wrong; in any case, when I checked the next day the index had not been created. (In hindsight this is expected: action.auto_create_index only takes effect when a document is indexed into a missing index, not when a query hits one, so it cannot help on a day with no writes.) So I changed tack: maybe the problem could be handled on the query side instead. A quick search showed there is indeed a switch for it:

		res, err := esClient.Search(
			esClient.Search.WithContext(ctx),
			esClient.Search.WithIndex(logstashIndex),
			esClient.Search.WithBody(&queryStr),
			esClient.Search.WithScroll(time.Minute),
			esClient.Search.WithIgnoreUnavailable(true),
		)

esClient.Search.WithIgnoreUnavailable(true)
Simply ignoring indices that don't exist solves it. Perfect; it really pays not to get led astray by the first idea!
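The Go option maps directly onto the ignore_unavailable parameter of the search REST API, so the behaviour can also be checked with plain curl; a sketch, where the date is just an example of an index that may not exist:

# Returns an empty result instead of index_not_found_exception when the index is missing
curl "http://elastic:passwd@10.0.0.2:9200/logstash-2021.02.25/_search?ignore_unavailable=true&pretty"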

Querying a nonexistent index raises an error

  • Create the index automatically every day ( Not Good! )
  • Have the query ignore missing indices ( Good! )

Follow-up

In the very first week after the ServiceInstance was rebuilt, ES blew up yet again with exactly the same error. This time we pulled in an ES specialist to take a look. It was still the circuit breaker throwing the exception, but _cat/nodes?h=* did reveal one problem:

curl http://elastic:jqfxkxwaqj3j@10.0.0.10:48888/_cat/nodes?h=*
2AK7 47290 10.0.0.9 9300 10.0.0.9:9200 6.8.2 oss zip f1ae577 1.8.0_232 49gb 2.7gb 46.3gb 5.59 642.9mb 16 3.8gb 30.8gb 99 31.2gb 968 0 600000 8 0.51 0.65 0.73 6.6d mdi * 1614257757000018209 0b 19.1kb 0 0b 0 30.4mb     0  235  4431 793 11.1s 0 0s 0 0s 0 0s 0 0 28ms 75 0 22.3s 67271 0 0 0 0b 3335 4084383 2gb 1.2m 35888 1.8m 0 1 0 0  38ms  45 0 0  6.1s  4777 0 2.3h  87 30 113.8kb 0b 0b 0b 0 0s 0
qGnQ 10354 10.0.0.2 9300 10.0.0.2:9200 6.8.2 oss zip f1ae577 1.8.0_232 49gb 2.7gb 46.3gb 5.58 614.1mb 15 3.8gb 30.8gb 99 31.2gb 962 0 600000 9 0.43 0.58 0.83 6.6d mdi - 1614257757000018009 0b  5.7kb 0 0b 0 29.1mb 11495 1370 16843 776 11.3s 0 0s 0 0s 0 0s 0 0   0s  0 0 25.2s 67265 0 0 0 0b 3347 4178531 2gb 1.3m 35845 1.9m 0 1 0 0 126ms 262 0 0 10.4s 18539 0 8.7h 322 36 131.2kb 0b 0b 0b 0 0s 0
CsPz       10.0.0.5 9300 -             6.8.2                                                                                                                       mdi - 1614257757000018109

That is, of the three nodes, 10.0.0.5 was down, while the two surviving nodes' heap usage was only 16% and 15%:

2AK7 47290 10.0.0.9 9300 10.0.0.9:9200 16 
qGnQ 10354 10.0.0.2 9300 10.0.0.2:9200 15 
CsPz       10.0.0.5 9300 -        
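The full h=* output is hard to scan; a narrower column list (standard _cat/nodes column names) makes the node state and heap usage much easier to read. A sketch reusing the address from the command above:

# name, ip, role, master flag and heap usage per node
curl "http://elastic:jqfxkxwaqj3j@10.0.0.10:48888/_cat/nodes?v&h=name,ip,node.role,master,heap.percent,heap.max"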

Yet a Data too large error means memory usage was too high. After dumping the heap of the failed node and going through the ES logs, we could more or less pin the problem on this query:

[2021-03-04T00:18:37,854][DEBUG][o.e.a.s.TransportSearchAction] [1614257757000018109] [logstash-2021.02.26][0], node[2AK74_9QSTST0w2kXYfXLw], [R], s[STARTED], a[id=KcUhKP0pRSaUD3_vR--XMA]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[logstash-*], indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=15, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, source={"size":0,"query":{"bool":{"must":[{"match":{"namespace_name":{"query":"hadong-ha-15453f-fun","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","auto_generate_synonyms_phrase_query":true,"boost":1.0}}},{"match":{"function_name":{"query":"tcb-api-pod","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","auto_generate_synonyms_phrase_query":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},"aggregations":{"by_request_id":{"composite":{"size":10000,"sources":[{"request_id":{"terms":{"field":"request_id","missing_bucket":false,"order":"asc"}}}]},"aggregations":{"logs":{"top_hits":{"from":0,"size":1000,"version":false,"seq_no_primary_term":false,"explain":false,"sort":[{"time_nano":{"order":"asc"}}]}},"max_error_code":{"max":{"field":"error_code"}},"min_time_nano":{"min":{"field":"time_nano"}},"error_code_filter":{"bucket_selector":{"buckets_path":{"max_error_code":"max_error_code"},"script":{"source":"params.max_error_code > 0","lang":"painless"},"gap_policy":"skip"}},"min_time_nano_sort":{"bucket_sort":{"sort":[{"min_time_nano":{"order":"desc"}}],"from":0,"size":20,"gap_policy":"SKIP"}}}}}}}] lastShard [true]

The specialist pointed out that "composite":{"size":10000,…} is far too large and can easily cause memory problems: since the data is sharded across the cluster, the coordinating node (10.0.0.5 here) has to pull partial results from the other two nodes and merge all those buckets locally, and that merge is memory-hungry.
Their recommendation was that for memory-heavy aggregations such as top_hits, the straightforward option is simply to add memory. In the end, though, our fix started from the query itself: we reduced the amount of data aggregated per request, and that resolved the problem.
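For illustration, the change amounted to paging the composite aggregation in much smaller chunks and keeping the top_hits size small, roughly along these lines; 1000 and 100 are illustrative numbers, not the exact values we settled on:

curl "http://elastic:jqfxkxwaqj3j@10.0.0.10:48888/logstash-2021.02.26/_search?pretty" -H 'Content-Type: application/json' -d'
{
    "size": 0,
    "aggs": {
        "by_request_id": {
            "composite": {
                "size": 1000,
                "sources": [ { "request_id": { "terms": { "field": "request_id" } } } ]
            },
            "aggs": {
                "logs": { "top_hits": { "size": 100, "sort": [ { "time_nano": { "order": "asc" } } ] } },
                "max_error_code": { "max": { "field": "error_code" } }
            }
        }
    }
}'

# Later pages continue from the previous response by adding its after_key, e.g.
#   "composite": { "size": 1000, "after": { "request_id": "<last value>" }, ... }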
