A Complete Hands-On Record of Upgrading Elasticsearch from 5.2 to 7.13 in Test and Production (Elasticsearch upgrade)

The Elasticsearch version currently in use is 5.2, accessed from a project built with the Java Transport Client and Spring Boot.
Upgrading the ES engine to the latest 7.13 requires changes at the code level. Since the project runs Spring Boot 1.4.2, the Java REST Client [7.13] was chosen.
Because this is an upgrade across major versions, the ES official site gives the following upgrade guidance:

If you are running a version prior to 6.0, upgrade to 6.8 and reindex your old indices or bring up a new 7.13.4 cluster and reindex from remote.

Chosen data-migration approach: build a new cluster and reindex the old indices into it ("bring up a new 7.13.4 cluster and reindex from remote").

ES practice in the test environment

1. Deploy the ES cluster

  The test environment is a two-node cluster; start two virtual machines
  with the IPs 192.168.10.167 and 192.168.10.168.
  • Download the tar package for the version you need, and download the matching version of Kibana.
  • Extract ES to /usr/local/es:
    tar -zxvf elasticsearch-7.13.0-linux-x86_64.tar.gz -C /usr/local/es
    For security reasons Elasticsearch refuses to start as root, so create a new user and grant it the permissions needed to run the cluster.
useradd es
passwd es 

# create directories for data and logs
mkdir -p /var/data/elasticsearch
mkdir -p /var/log/elasticsearch

# change ownership to the es user
chown -R es /usr/local/es/
chown -R es /var/log/elasticsearch
chown -R es /var/data/elasticsearch
  • Configuration
    JDK: the new version ships with a bundled JDK, so here we simply edit the startup script ./bin/elasticsearch
# vim ./bin/elasticsearch and add the following
export JAVA_HOME=/usr/local/es/elasticsearch-7.13.0/jdk
export PATH=$JAVA_HOME/bin:$PATH

The ES configuration files live under $ES_HOME/config.

The latest Elasticsearch has three main configuration files:
elasticsearch.yml — ES settings
jvm.options — ES JVM settings
log4j2.properties — ES logging settings

Copy the configuration over from the 5.2 cluster, paying attention to setting names that changed between versions (for example, the 5.x discovery.zen.ping.unicast.hosts is replaced by discovery.seed_hosts, and cluster.initial_master_nodes is new in 7.x).
elasticsearch.yml:

# cluster name
cluster.name: microants-es-004
# node name
node.name: node-01

# data directory
path.data: /var/data/elasticsearch
# log directory
path.logs: /var/log/elasticsearch

# network settings
network.host: 192.168.10.167
http.port: 9200
# whitelist of remote hosts allowed for reindex-from-remote
reindex.remote.whitelist: ["192.168.10.154:9200","192.168.10.155:9200"]

discovery.seed_hosts: ["192.168.10.167", "192.168.10.168"]
cluster.initial_master_nodes: ["node-01", "node-02"]

jvm.options

-Xms2g
-Xmx2g

Other settings that may be needed:
1. Raise the open-file-descriptor limit.
2. Raise the maximum number of memory map areas (vm.max_map_count).

# edit /etc/security/limits.conf and append:
* soft nofile 65536
* hard nofile 65536

# append this line at the end of /etc/sysctl.conf
vm.max_map_count=262144
# run /sbin/sysctl -p to apply it immediately
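
To confirm the new limits are actually in effect (a quick sanity check, not in the original notes; run it in a fresh shell session as the es user):

# log in again as the es user so limits.conf is re-read, then:
ulimit -n                 # should print 65536
sysctl vm.max_map_count   # should print vm.max_map_count = 262144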
  • Start
# switch to the es user before starting, otherwise startup will fail
su es
./bin/elasticsearch -d
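
Check that the node is up by querying its HTTP port, for instance with curl (command assumed, not from the original notes); the response should look like this:

curl http://192.168.10.167:9200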
{
  "name": "node-01",
  "cluster_name": "microants-es-004",
  "cluster_uuid": "_na_",
  "version": {
    "number": "7.13.0",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "5ca8591c6fcdb1260ce95b08a8e023559635c6f3",
    "build_date": "2021-05-19T22:22:26.081971330Z",
    "build_snapshot": false,
    "lucene_version": "8.8.2",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}

Now start the second node, 192.168.10.168, keeping cluster.name identical in its elasticsearch.yml.
Verify via http://192.168.10.167:9200/_cat/nodes, which returns the cluster nodes:

192.168.10.168 19 45 15 0.43 0.19 0.11 cdfhilmrstw - node-02
192.168.10.167 23 49  0 0.24 0.11 0.07 cdfhilmrstw * node-01
Next, deploy Kibana:

tar -zxvf kibana-7.13.0-linux-x86_64.tar.gz -C /opt/soft

Configure it: vim ./config/kibana.yml

server.port: 5601
server.host: "192.168.10.167"

elasticsearch.hosts: ["http://192.168.10.167:9200"]

i18n.locale: "zh-CN"

Start it (ideally create a dedicated kibana user first; see the sketch below)

nohup ./bin/kibana &
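
A minimal sketch of running Kibana under its own user with output captured to a log file (the user name and log file name are assumptions, not from the original notes):

useradd kibana
chown -R kibana /opt/soft/kibana-7.13.0-linux-x86_64
su kibana
nohup ./bin/kibana > kibana.out 2>&1 &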

Cluster security setup

  1. Start ES
    ./bin/elasticsearch
  2. Generate users and passwords automatically
    ./bin/elasticsearch-setup-passwords auto
    The results are printed to the console; save them. For example:
Changed password for user apm_system
PASSWORD apm_system = IywjBT4YDU86NDw7ox

Changed password for user kibana_system
PASSWORD kibana_system = SQmORp23LcZyPZU48l

Changed password for user kibana
PASSWORD kibana = SQmORp2Nb3LcPZU48l

Changed password for user logstash_system
PASSWORD logstash_system = cEtJPTbzktxc7aPuQx

Changed password for user beats_system
PASSWORD beats_system = 5wDk7jQNu4iP5J7eLg

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = p1HkCWzYVtt8SPGEtN

Changed password for user elastic
PASSWORD elastic = PRmId87ogKerJboyLw
  3. On any node, generate a CA certificate:
    ./bin/elasticsearch-certutil ca
    This produces an elastic-stack-ca.p12 file.
  4. Generate the node certificate from it:
    ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
    This produces an elastic-certificates.p12 file.
  5. Copy elastic-certificates.p12 into the config directory of every node.
  6. If the certificate has a password, run ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password (and likewise for xpack.security.transport.ssl.truststore.secure_password).
  7. Restart ES and verify that the nodes can still communicate (the security settings each node needs are sketched below).
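
For reference, a hedged sketch of the security-related settings each node would carry. These are the standard X-Pack settings and assume the certificate file names generated above; adjust the paths and the kibana_system password to your own output:

# elasticsearch.yml (every node)
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

# kibana.yml — once passwords exist, Kibana also needs credentials to reach ES
elasticsearch.username: "kibana_system"
elasticsearch.password: "SQmORp23LcZyPZU48l"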

2. Data migration

  • Plugin parity
Check which plugins the old cluster uses:
http://192.168.10.154:9200/_cat/plugins
which returns:
node-0001 analysis-icu      5.2.0 
node-0001 analysis-ik       5.2.0 
node-0001 analysis-kuromoji 5.2.0 
node-0001 analysis-pinyin   5.2.1
 
node-0002 analysis-icu      5.2.0 
node-0002 analysis-ik       5.2.0 
node-0002 analysis-pinyin   5.2.1     
First consult the official plugin documentation:
https://www.elastic.co/guide/en/elasticsearch/plugins/7.13/installation.html

# plugin installation syntax
sudo bin/elasticsearch-plugin install [plugin_name]
  • Install analysis-icu
[root@localhost elasticsearch-7.13.0]# sudo bin/elasticsearch-plugin install analysis-icu
-> Installing analysis-icu
-> Downloading analysis-icu from elastic
[=================================================] 100%   
-> Installed analysis-icu
-> Please restart Elasticsearch to activate any plugins installed
  • Install analysis-ik and analysis-pinyin
./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.13.0/elasticsearch-analysis-ik-7.13.0.zip
# the download failed a few times because of a slow network,
# so download the zips manually from the GitHub releases page: https://github.com/medcl/elasticsearch-analysis-ik/releases
# create plugin folders: cd your-es-root/plugins/ && mkdir ik
yum install -y unzip zip
# unzip the plugins into your-es-root/plugins/ik (and .../pinyin)
unzip elasticsearch-analysis-ik-7.13.0.zip -d /usr/local/es/elasticsearch-7.13.0/plugins/ik/
unzip elasticsearch-analysis-pinyin-7.13.0.zip -d /usr/local/es/elasticsearch-7.13.0/plugins/pinyin/
# restart ES
ps -ef | grep elastic
kill -9 4093
su es
./bin/elasticsearch -d

Check the result: http://192.168.10.167:9200/_cat/plugins

node-01 analysis-icu    7.13.0
node-01 analysis-ik     7.13.0
node-01 analysis-pinyin 7.13.0

node-02 analysis-ik     7.13.0
node-02 analysis-pinyin 7.13.0

To manually reindex your old indices in place:
1. Create an index with 7.x compatible mappings.
2. Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing.
3. Use the reindex API to copy documents from the 5.x index into the new index. You can use a script to perform any necessary modifications to the document data and metadata during reindexing.
4. Reset the refresh_interval and number_of_replicas to the values used in the old index.
5. Wait for the index status to change to green.
6. In a single update aliases request:
   - Delete the old index.
   - Add an alias with the old index name to the new index.
   - Add any aliases that existed on the old index to the new index.
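
For step 6, the delete of the old index and the alias additions can go into one _aliases call, for example (the index names here are purely illustrative):

POST /_aliases
{
  "actions": [
    { "remove_index": { "index": "my_index_old" } },
    { "add": { "index": "my_index_v2", "alias": "my_index_old" } }
  ]
}

The indices on the old 5.2 cluster that need to be migrated (as listed, for example, by GET _cat/indices?v):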

green  open  seller_subject_index_v1     efIwSm31QAiNwEdus0BVNg 5 1   6002    23     3mb   1.5mb
green  open  search_keyword_index_v1     BTalZTHlRrCTaxgFrcH-jA 5 1   3415     0   2.1mb     1mb
green  open  platform_coupon_index_v1    lD15Hyl6TtSWV2GP79aX8Q 5 1     57     0 192.4kb  96.2kb
	.
	.
	.
	.
green  open  syscate_index_v1            dEWSZp1sSq-lA9TINGyt0A 5 1   4660    15     9mb   4.5mb
green  open  cars                        gSqMCkN-SSa3EhH6Vm1nCw 5 1      8     0  47.3kb  23.6kb
green  open  product_keyword_hint_v1     ubQgLk4FRVaAciEHP7imQw 1 1    280     0 324.5kb 162.2kb
green  open  subject_index_v1            u-M6le0JSxmuxLxGLkNCUQ 5 1   1445    79  17.1mb   8.5mb

The seller_subject_index_v1 index is used as the worked example here.

Create an index with 7.x compatible mappings.

  • The ES 5.2 settings and mapping

# via the Kibana console
GET seller_subject_index_v1/_settings
{
  "seller_subject_index_v1": {
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "seller_subject_index_v1",
        "creation_date": "1557301817602",
        "analysis": {
          "analyzer": {
            "keyword_analyzer": {
              "filter": [
                "lowercase"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            },
            "comma_analyzer": {
              "filter": [
                "lowercase"
              ],
              "pattern": ",",
              "type": "pattern"
            },
            "semicolon_analyzer": {
              "filter": [
                "lowercase"
              ],
              "pattern": ";",
              "type": "pattern"
            }
          }
        },
        "number_of_replicas": "1",
        "uuid": "efIwSm31QAiNwEdus0BVNg",
        "version": {
          "created": "5020199"
        }
      }
    }
  }
}
GET seller_subject_index_v1/_mapping
{
  "seller_subject_index_v1": {
    "mappings": {
      "seller_subject": {
        "_all": {
          "enabled": false
        },
        "_routing": {
          "required": true
        },
        "properties": {
          "buyer_uid": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "check_time": {
            "type": "long"
          },
          "create_time": {
            "type": "long"
          },
          "modify_time": {
            "type": "long"
          },
          "origin": {
            "type": "byte",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "publish_type": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "seller_uid": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "sort_time": {
            "type": "long"
          },
          "status": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "status_reason": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "subject_id": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          }
        }
      }
    }
  }
}
  • The ES 7.13 settings and mapping — compared with 5.2, the document type (seller_subject) and the _all field are gone, since mapping types were removed in 7.x, and refresh_interval is set to -1 for the duration of the reindex:

PUT seller_subject_index_v1
{
  "settings": {
      "index": {
        "refresh_interval":-1,
        "number_of_shards": "5",
        "analysis": {
          "analyzer": {
            "keyword_analyzer": {
              "filter": [
                "lowercase"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            },
            "comma_analyzer": {
              "filter": [
                "lowercase"
              ],
              "pattern": ",",
              "type": "pattern"
            },
            "semicolon_analyzer": {
              "filter": [
                "lowercase"
              ],
              "pattern": ";",
              "type": "pattern"
            }
          }
        },
        "number_of_replicas": "1"
      }
  },
  "mappings": {
        "_routing": {
          "required": true
        },
        "properties": {
          "buyer_uid": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "check_time": {
            "type": "long"
          },
          "create_time": {
            "type": "long"
          },
          "modify_time": {
            "type": "long"
          },
          "origin": {
            "type": "byte",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "publish_type": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "seller_uid": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "sort_time": {
            "type": "long"
          },
          "status": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "status_reason": {
            "type": "short",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          },
          "subject_id": {
            "type": "long",
            "fields": {
              "comma": {
                "type": "text",
                "term_vector": "with_positions_offsets",
                "analyzer": "comma_analyzer"
              }
            }
          }
        }
      }
}



POST /_aliases
{
  "actions": [
    { "add": { "index": "seller_subject_index_v1", "alias": "seller_subject_index" } }
  ]
}

Last but not least, do not forget the alias mapping.

## migrate the data of seller_subject_index_v1 — execute in the Kibana console

POST _reindex
{
  "source":{
    "remote":{
      "host":"http://192.168.10.154:9200",
      "socket_timeout": "1m",
      "connect_timeout": "10s"
    },
   "index":"seller_subject_index_v1"
  },
  "dest":{
    "index":"seller_subject_index_v1"
  }
}

PUT /seller_subject_index_v1/_settings
{
  "index":{
    "refresh_interval": "1s"
  }
}
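
Once the reindex has finished, it is worth confirming that the document count in the new index matches the old cluster before moving on (a quick verification step, not in the original notes):

GET seller_subject_index_v1/_count
GET _cat/indices/seller_subject_index_v1?v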
  • Repeat the same steps for the remaining indices.
  • Summary and key points (see the sketch after this list):
    1. If Kibana returns {"statusCode":502,"error":"Bad Gateway","message":"Client request timeout"}, add the wait_for_completion=false parameter so the reindex runs as a background task.
    2. When an index holds more than one document type, it has to be split, adding a type filter to the source.
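
A hedged sketch combining both points — running the reindex asynchronously and pulling only one of the old document types; the type name and the task query below are illustrative, not from the original notes:

POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": { "host": "http://192.168.10.154:9200" },
    "index": "seller_subject_index_v1",
    "query": { "term": { "_type": "seller_subject" } }
  },
  "dest": { "index": "seller_subject_index_v1" }
}

# the call returns a task id immediately; progress can be followed with
GET _tasks?detailed=true&actions=*reindex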

Application-level code changes

Official documentation: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html

This section describes how to migrate existing code from the TransportClient to the Java High Level REST Client released with the version 5.6.0 of Elasticsearch.

  • How to migrate

Adapting existing code to use the RestHighLevelClient instead of the TransportClient requires the following steps:
1. Update dependencies
2. Update client initialization
3. Update application code

Since the Java High Level REST Client does not support request builders, applications that use them must be changed to use requests constructors instead

**At the code level**, the change is that the TransportClient request builders are no longer available, e.g.:
IndexRequestBuilder indexRequestBuilder   = transportClient.prepareIndex();
DeleteRequestBuilder deleteRequestBuilder = transportClient.prepareDelete();
SearchRequestBuilder searchRequestBuilder = transportClient.prepareSearch();
They are replaced by the Java High Level REST Client request constructors, e.g.:
IndexRequest request = new IndexRequest("index").id("id"); 
request.source("{\"field\":\"value\"}", XContentType.JSON);
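
For the other two steps, here is a minimal sketch of the client initialization (the dependency would be org.elasticsearch.client:elasticsearch-rest-high-level-client:7.13.0). The host addresses and the elastic password are taken from the cluster set up above; the class name and wiring are illustrative and should be adapted to the project's Spring configuration:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClientFactory {

    // Builds a RestHighLevelClient against the new 7.13 cluster,
    // authenticating with the elastic user generated earlier.
    public static RestHighLevelClient create() {
        BasicCredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "PRmId87ogKerJboyLw"));

        return new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("192.168.10.167", 9200, "http"),
                        new HttpHost("192.168.10.168", 9200, "http"))
                    .setHttpClientConfigCallback(httpClientBuilder ->
                        httpClientBuilder.setDefaultCredentialsProvider(credentials)));
    }
}

Application code then sends requests through this client, e.g. client.index(request, RequestOptions.DEFAULT) for the IndexRequest shown above.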
posted @ 2021-11-15 12:04 千里送e毛