Failed bulk item: RemoteTransportException
I tried several changes to the elasticsearch.yml configuration but continued to get the above error. To review the status of each node, I ran the following command:
curl -XGET http://ip_address:9200/_nodes/stats?pretty
To identify which node was causing the rejections, I checked the thread_pool.bulk stats for each node and found that one of the nodes had a high rejection count:
"thread_pool" : {
"bulk" : {
"threads" : 100,
"queue" : 0,
"active" : 0,
"rejected" : 1482,
"largest" : 100,
"completed" : 121512
},
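Rather than scrolling through the full stats output, the rejecting node can be picked out with a one-liner. This is just a sketch: it assumes the jq utility is installed, which is not part of the original setup.

# List bulk thread pool rejections per node (assumes jq is available)
curl -s 'http://ip_address:9200/_nodes/stats/thread_pool' \
  | jq '.nodes[] | {name: .name, bulk_rejected: .thread_pool.bulk.rejected}'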
Next, I checked the node's memory profile under the jvm.mem stats:
"mem" : {
"heap_used_in_bytes" : 3336515992,
"heap_used_percent" : 62,
"heap_committed_in_bytes" : 5333843968,
"heap_max_in_bytes" : 5333843968,
"non_heap_used_in_bytes" : 55600912,
"non_heap_committed_in_bytes" : 84336640,
"pools" : {
"young" : {
"used_in_bytes" : 121153000,
"max_in_bytes" : 279183360,
"peak_used_in_bytes" : 279183360,
"peak_max_in_bytes" : 279183360
},
"survivor" : {
"used_in_bytes" : 21682640,
"max_in_bytes" : 34865152,
"peak_used_in_bytes" : 34865152,
"peak_max_in_bytes" : 34865152
},
"old" : {
"used_in_bytes" : 3193680352,
"max_in_bytes" : 5019795456,
"peak_used_in_bytes" : 3865208384,
"peak_max_in_bytes" : 5019795456
}
}
},
I could see that the node was running out of memory: heap usage was at 62%, exceeding the 50% of the heap I had reserved for indexing.
/etc/elasticsearch/elasticsearch.yml configuration:
indices.memory.index_buffer_size: 50%
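To keep an eye on heap pressure across all nodes while an import is running, the _cat/nodes API reports the same numbers more compactly. A minimal sketch using the 2.x _cat column names:

# Show per-node heap usage at a glance
curl 'http://ip_address:9200/_cat/nodes?v&h=name,heap.percent,heap.max'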
After finding the problem node, I looked at its log files under the /var/log/elasticsearch folder, where I could see the exception:
Caused by: java.lang.OutOfMemoryError: Java heap space
The startup logs also showed warnings that the JVM memory could not be locked:
[2016-02-29 02:35:39,599][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-02-29 02:35:39,600][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-02-29 02:35:39,600][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-02-29 02:35:39,600][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
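Note that the limits alone are not enough on Elasticsearch 2.x: the process only attempts the lock when bootstrap.mlockall is enabled in elasticsearch.yml (the setting was renamed bootstrap.memory_lock in later releases):

# /etc/elasticsearch/elasticsearch.yml
bootstrap.mlockall: true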
Double-check that mlockall is true on each node:
[root@webserver logs]# curl http://x.x.x.x:9200/_nodes/process?pretty
{
  "cluster_name" : "cpi",
  "nodes" : {
    "BPKQWgTFTtmT4uh2w7WRLw" : {
      "name" : "cpi1",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 28226,
        "mlockall" : true
      }
    },
    "0LVlWGZvSRu3E8lC6bVzfw" : {
      "name" : "cpi2",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 17308,
        "mlockall" : true
      }
    },
    "07kUt8PPShqMQ8-W_cigWA" : {
      "name" : "cpi3",
      "transport_address" : "x.x.x.x:9300",
      "host" : "x.x.x.x",
      "ip" : "x.x.x.x",
      "version" : "2.1.2",
      "build" : "63c285e",
      "http_address" : "x.x.x.x:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 20625,
        "mlockall" : true
      }
    }
  }
}
I modified /etc/security/limits.conf and added the following lines:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
root - memlock unlimited
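The new limits only apply to sessions started after the change (and after restarting Elasticsearch). Depending on your PAM setup, a quick way to verify them for the elasticsearch user is:

# Check the memlock and open-files limits as the elasticsearch user
su -s /bin/bash elasticsearch -c 'ulimit -l; ulimit -n'
# expected output: unlimited, then 65535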
Also, I reduced the number of concurrent threads importing into Elasticsearch from 8 to 5, which relieved the pressure on the bulk thread pool.
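For reference, here is a minimal way to cap import concurrency from the shell. This is only a sketch: the file names and the assumption that the data is already split into newline-delimited bulk files are hypothetical, not part of my original import job.

# Run at most 5 parallel bulk requests at a time
ls bulk-*.ndjson | xargs -P 5 -I{} \
  curl -s -XPOST 'http://ip_address:9200/_bulk' --data-binary @{} -o /dev/null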