Elasticsearch operation commands

Optimization commands

The refresh interval, i.e. how often data in the in-memory buffer is made visible to search, defaults to 1s. Raising it to 30s increases write throughput; the trade-off is that newly written data only becomes searchable up to 30s after it is written, so the freshness of new data drops slightly.

PUT /logstash-test/_settings
{
  "index" : {
    "refresh_interval" : "30s"
  }
}
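
To restore the default later, setting the value to null reverts it to the default of 1s (a usage sketch against the same example index):

PUT /logstash-test/_settings
{
  "index" : {
    "refresh_interval" : null
  }
}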

Translog (write-ahead log) persistence settings; the current configuration is:

  • flush to disk once the translog grows beyond 1GB;
  • fsync the translog to disk every 30s; this reduces disk I/O, at the cost that a node failure can lose up to 30s of data written before the failure
  • use asynchronous rather than per-request syncing
PUT /logstash-test/_settings
{
  "index": {
    "translog": {
      "flush_threshold_size": "1024mb",
      "sync_interval": "30s",
      "durability": "async"
    }
  }
}
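
To confirm the settings took effect, they can be read back; filter_path just trims the response to the translog block (a sketch):

GET /logstash-test/_settings?filter_path=**.translog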

Set the number of replicas to 1

PUT /logstash-test/_settings
{
   "number_of_replicas" : "1"
}
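
For a one-off bulk import this can be pushed further: temporarily set replicas to 0, run the import, then restore the value with the command above (a sketch against the same example index):

PUT /logstash-test/_settings
{
  "number_of_replicas" : "0"
}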

Restart the elasticsearch cluster

#Before restarting a node, temporarily disable automatic shard allocation by setting cluster.routing.allocation.enable to none. Otherwise, once the node stops, its shards are reallocated to other nodes, and after the node comes back up it has to wait for the other nodes to finish RECOVERING before RELOCATING, i.e. the shards are rebuilt elsewhere and then moved back, wasting a lot of time

#Run the following in the Kibana console:
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}

#Run the following from the command line:
curl -XPUT http://127.0.0.1:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "none"
  }
}'
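
Before stopping the node, you can verify the setting is in place; flat_settings just compacts the output (a sketch):

curl -XGET 'http://127.0.0.1:9200/_cluster/settings?flat_settings=true'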

#Restart the elasticsearch service on each node, one at a time
systemctl restart elasticsearch

#After the cluster is back up, change the setting back

#Run the following in the Kibana console:
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}

#Run the following from the command line:
curl -XPUT http://127.0.0.1:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}'
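
After allocation is re-enabled, wait for the cluster to return to green before restarting the next node; the health API can block until the status is reached (a sketch):

curl -XGET 'http://127.0.0.1:9200/_cluster/health?wait_for_status=green&timeout=300s'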

Create an index template

PUT _template/hrb_template
{
  "order" : 1,
  "index_patterns": ["hrb_*", "hrb-*"],
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1,
    "index": {
      "translog": {
        "flush_threshold_size": "1024mb",
        "sync_interval": "30s",
        "durability": "async"
      }
    }
  }
}
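
To verify the template was stored, read it back (a sketch):

GET _template/hrb_template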
#View the field mappings of an index
GET nginx-access-2020.03.11/_mapping

#View the settings of an index
GET logstash-test/_settings

#View segment information for an index
GET hrb-service-perbank-2028.10.10/_segments

#Force-merge matching indices down to one segment each (only do this on indices that are no longer being written to)
POST hrb-mobile-2020.*/_forcemerge?max_num_segments=1

#Show each node's max file descriptor limit
GET _nodes/stats/process?filter_path=**.max_file_descriptors

#Retry allocation of shards whose allocation previously failed
POST /_cluster/reroute?retry_failed=true
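
If shards remain unassigned even after retrying, the allocation explain API reports why the first unassigned shard cannot be allocated (a sketch):

GET _cluster/allocation/explain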