ELK Installation and Deployment (to be updated)
| Beat | Purpose |
| --- | --- |
| Packetbeat | Collects network traffic data |
| Heartbeat | Uptime monitoring |
| Filebeat | Collects log file data |
| Winlogbeat | Collects Windows event data |
| Metricbeat | Collects metrics |
| Auditbeat | Collects audit data |
| Component | Program / version |
| --- | --- |
| es7.x | es7.3.2 |
| jdk8 | jdk11 or later / jdk12 |
vim /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_121
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
# Make the profile take effect
source /etc/profile

vim /data/apps/elasticsearch-7.9.0/bin
# Point Elasticsearch at a custom JDK 11
export JAVA_HOME=/data/apps/jdk-11.0.8
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib

# Kernel settings required by Elasticsearch (sysctl.conf)
vm.max_map_count=655360
fs.file-max=655360

# Unpack
tar -zxvf elasticsearch-7.9.0-linux-x86_64.tar.gz -C elasticsearch
cd /data/apps/elasticsearch-7.9.0/
chown -R elk:elk elasticsearch
# As root, adjust the read/write permissions; 777 across the board is suggested here: chmod -R 777 <directory>

# Switch to the elk user
su elk
cd /data/apps/elasticsearch-7.9.0/config/
vim elasticsearch.yml
# Master node role settings
node.master: true
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
# Cross-origin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Data (non-master) node role settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# Cross-origin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

Full elasticsearch.yml of the master node:
# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
# Cluster name; the default is elasticsearch
cluster.name: "array-es-cluster"
# ------------------------------------ Node ------------------------------------
# Node name
node.name: "master"
# Each node may carry custom attributes, used later for shard-allocation filtering
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Location of the index data assigned to this node
path.data: /data/apps/elasticsearch-7.9.0/data
# Location of the log files
path.logs: /data/apps/elasticsearch-7.9.0/logs
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup so it is not swapped out
#bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
network.host: "10.241.42.41"
http.port: 9200
# --------------------------------- Discovery ----------------------------------
# New in Elasticsearch 7: the addresses of the master-eligible nodes, used for discovery when this node starts
discovery.seed_hosts: ["10.241.42.41:9300", "10.241.42.165:9300"]
# New in Elasticsearch 7: the master-eligible node names used to bootstrap the cluster on first startup
cluster.initial_master_nodes: ["master"]
# Master node role settings
node.master: true
node.data: false
node.ingest: false
node.ml: false
cluster.remote.connect: false
# Cross-origin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Headers required by the elasticsearch-head plugin
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Security: enable the X-Pack authentication mechanism
#xpack.security.enabled: true
#xpack.security.transport.ssl.enabled: true
# SSL and CA certificate configuration
#xpack.ssl.key: elasticsearch/elasticsearch.key
#xpack.ssl.certificate: elasticsearch/elasticsearch.crt
#xpack.ssl.certificate_authorities: ca/ca.crt
# For more information, consult the discovery and cluster formation module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
# Allow recovery to proceed once N nodes have started
gateway.recover_after_nodes: 2
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
# Whether indices may be deleted/closed via wildcards or _all; the default (false) allows it, set to true to forbid it
#action.destructive_requires_name: true
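A minimal sketch of bringing the master node up and checking that it answers; the paths, the elk user and the IP come from this guide, while the pid-file name es.pid is arbitrary:
su - elk
cd /data/apps/elasticsearch-7.9.0
./bin/elasticsearch -d -p es.pid       # start in the background, write the PID to es.pid
curl http://10.241.42.41:9200          # should return the node and cluster JSON banner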
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: "array-es-cluster"
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: "node-1"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/apps/elk/elasticsearch-7.9.0/data
#
# Path to log files:
#
path.logs: /data/apps/elk/elasticsearch-7.9.0/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
## Lock the memory so it is not swapped out to the swap partition
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: "10.241.42.165"
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.241.42.165:9300","10.241.42.41:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
# Must list the node.name of a master-eligible node; here that is "master" (the data-only node-1 cannot be used)
cluster.initial_master_nodes: ["master"]
# Data (non-master) node role settings
node.master: false
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false
# Cross-origin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security: enable the X-Pack authentication mechanism
#xpack.security.enabled: true
#xpack.security.transport.ssl.enabled: true
# SSL and CA certificate configuration
#xpack.ssl.key: elasticsearch/elasticsearch.key
#xpack.ssl.certificate: elasticsearch/elasticsearch.crt
#xpack.ssl.certificate_authorities: ca/ca.crt
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 2
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
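Once both nodes are configured and started, one way to confirm that they actually formed a single cluster (IPs from the configs above):
curl 'http://10.241.42.165:9200/_cat/nodes?v'            # should list both master and node-1
curl 'http://10.241.42.165:9200/_cluster/health?pretty'  # status should be green (or yellow)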
Logstash settings (logstash.yml):
# Pipeline batch size and batch delay can be set with the hierarchical form:
pipeline:
  batch:
    size: 125   # pipeline batch size
    delay: 5    # pipeline batch delay
# Or expressed with the equivalent flat keys:
pipeline.batch.size: 125
pipeline.batch.delay: 5
# Node name, unique within the cluster; defaults to the hostname of the Logstash machine
node.name: logstash-node1
# Data path used by Logstash and its plugins; defaults to the data directory under the Logstash home
path.data: /usr/local/logstash-7.9.0/data/
# Pipeline ID, defaults to main
pipeline.id: main
# Total number of workers for inputs, outputs and filters, i.e. the Logstash worker threads; defaults to the number of CPU cores on the host
pipeline.workers: 16
# Maximum number of events a single worker thread collects from the inputs before running filters and outputs; larger batches cost more heap, which can be tuned via the heap size in jvm.options
pipeline.batch.size: 125
# How long (in milliseconds) to wait for the next event before flushing an undersized batch to filters+outputs
pipeline.batch.delay: 50
# When true, a forced shutdown exits even while events are still in memory, losing data; the default false refuses to exit during a forced shutdown until all in-flight events have been flushed safely
pipeline.unsafe_shutdown: false
# Directory of pipeline configuration files; Logstash reads every file in this directory, so keep nothing but pipeline configs here
path.config: /usr/local/logstash-7.9.0/conf.d/
# Validate the configuration (including the pipeline configs) at startup and then exit; defaults to false
config.test_and_exit: true
# Periodically check whether the configuration has changed and reload the pipeline; defaults to false
config.reload.automatic: true
# How often Logstash checks the configuration for changes; defaults to 3 seconds
config.reload.interval: 600s
# When true, log the fully compiled configuration as debug messages
config.debug: false
# Internal queuing model for event buffering: memory or persisted (disk); memory is faster than disk and is the default
queue.type: memory
# Directory for the data files of the persistent queue; defaults to the queue directory under the Logstash home
path.queue: /usr/local/logstash-7.9.0/queue/
# Size of the page data files used by the persistent queue (queue.type: persisted); queue data consists of append-only files split into pages
queue.page_capacity: 64mb
# Maximum number of unread events in the queue when the persistent queue is enabled (queue.type: persisted); default 0 = unlimited
queue.max_events: 0
# Total capacity of the queue in bytes; default 1G, size it according to your workload
queue.max_bytes: 1024mb
# Maximum number of ACKed events before a checkpoint is forced when the persistent queue is enabled (queue.type: persisted); 0 = unlimited, default 1024
queue.checkpoint.acks: 1024
# Maximum number of written events before a checkpoint is forced when the persistent queue is enabled (queue.type: persisted); 0 = unlimited, default 1024
queue.checkpoint.writes: 1024
# With the persistent queue enabled (queue.type: persisted), the interval in milliseconds at which a checkpoint is forced on the head page; the default for periodic checkpoints is 1000 ms
queue.checkpoint.interval: 1000
# Flag that tells Logstash to enable plugin support for the dead letter queue (DLQ); default false
dead_letter_queue.enable: false
# Maximum size of each dead letter queue; entries that would grow the queue beyond this size are dropped; default 1024mb
dead_letter_queue.max_bytes: 1024mb
# Directory where the dead letter queue stores its data files
path.dead_letter_queue: /usr/local/logstash-7.9.0/letter-queue
# Bind address of the metrics REST endpoint; default 127.0.0.1
http.host: "127.0.0.1"
# Bind port of the metrics REST endpoint; default 9600
http.port: 9600
# Log level; one of the following, default info
log.level: info
fatal
error
warn
info (default)
debug
trace
# Logstash log directory; defaults to the logs directory under the Logstash home
path.logs: /usr/local/logstash-7.9.0/logs
# Logstash plugin path
path.plugins: []
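With logstash.yml in place, a minimal sketch of validating a pipeline, starting Logstash and hitting the metrics REST endpoint configured above; the pipeline file name pipeline.conf is hypothetical:
cd /usr/local/logstash-7.9.0
./bin/logstash -f conf.d/pipeline.conf --config.test_and_exit   # validate the pipeline config
nohup ./bin/logstash -f conf.d/pipeline.conf &                  # run in the background
curl 'http://127.0.0.1:9600/_node/stats/pipelines?pretty'       # metrics endpoint (http.host/http.port above)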
filter {
date {
match => [ "logdate", "MMM dd yyyy HH:mm:ss" ]
}
}
{"logdate":"Jan 01 2018 12:02:03"}
{
"@version" => "1",
"host" => "Node2",
"@timestamp" => 2018-01-01T04:02:03.000Z,
"logdate" => "Jan 01 2018 12:02:03"
}
match => [ "logdate", "MMM dd yyyy HH:mm:ss" ,"MMM d yyyy HH:mm:ss","ISO8601"]
55.3.244.1 GET /index.html 15824 0.043
filter {
grok {
match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
}
}
client: 55.3.244.1
method: GET
request: /index.html
bytes: 15824
duration: 0.043
vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
# Nginx logs
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}
%{} marks a field; the text between two %{} captures is the delimiter
Apr 26 12:20:02 localhost systemd[1]: Starting system activity accounting tool...
filter {
dissect {
mapping => { "message" => "%{ts} %{+ts} %{+ts} %{src} %{prog}[%{pid}]: %{msg}" }
}
}
{
"msg" => "Starting system activity accounting tool...",
"@timestamp" => 2017-04-26T19:33:39.257Z,
"src" => "localhost",
"@version" => "1",
"host" => "localhost.localdomain",
"pid" => "1",
"message" => "Apr 26 12:20:02 localhost systemd[1]: Starting system activity accounting tool...",
"type" => "stdin",
"prog" => "systemd",
"ts" => "Apr 26 12:20:02"
}
Apr 26 12:20:02
%{ts} %{+ts} %{+ts} # the + appends the matched value to the ts field
{
"ts":"Apr 26 12:20:02"
}
two three one go
%{+order/2} %{+order/3} %{+order/1} %{+order/4} # the number after / gives the order in which the values are joined
{
"order": "one two three go"
}
a=1&b=2
%{?key1}=%{&key1}&%{?key2}=%{&key2} # %{?} discards the matched value but keeps the name as a key for later use; %{&} assigns the matched value to the field named by the corresponding %{?} match
{
"a":"1",
"b":"2"
}
# dissect handles empty matched values automatically
John Smith,Big Oaks,Wood Lane,Hambledown,Canterbury,CB34RY
%{name},%{addr1},%{addr2},%{addr3},%{city},%{zip}
Jane Doe,4321 Fifth Avenue,,,New York,87432
{
"name":"Jane Doe",
"addr1":"4321 Fifth Avenue",
"addr2":"",
"addr3":"",
"city":"New York",
"zip":"87432"
}
# The field values produced by dissect are all strings; the convert_datatype option can be used to convert their types
filter{
dissect{
convert_datatype => {"age" => "int"}
}
}
convert            # type conversion
gsub               # string substitution
split/join/merge   # split a string / join an array into a string / merge arrays
rename             # rename a field
update/replace     # update or replace a field's content
remove_field       # delete a field
convert: converts the type of a field; its value is a hash; only integer, float, string and boolean are supported
filter{
mutate{
convert => {"age" => "integer"}
}
}
filter {
mutate {
gsub => [
# replace all forward slashes with underscore
"fieldname", "/", "_",
# replace backslashes, question marks, hashes, and minuses
# with a dot "."
"fieldname2", "[\\?#-]", "."
]
}
}
filter {
mutate {
split => { "fieldname" => "," }
}
}
filter {
mutate {
update => { "sample" => "My new message" }
update => { "message" => "source from c:%{source_host}" } # %{source_host} references a field value of the Logstash event
}
}
input {
stdin{type=>stdin}
}
filter{
dissect{ mapping => {"message" => "%{a}-%{b}-%{c}"} }
mutate{ replace => {"d" =>"source from c:%{c}"} }
}
output{
stdout{codec=>rubydebug}
}
hi-hello-123
{
"a" => "hi",
"b" => "hello",
"@timestamp" => 2018-06-29T02:01:24.473Z,
"c" => "123",
"d" => "source from c:123",
"@version" => "1",
"host" => "Node2",
"message" => "hi-hello-123",
"type" => "stdin"
}
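One way to try the stdin pipeline above from a shell, assuming the configuration was saved to a hypothetical file test.conf in the Logstash directory:
cd /usr/local/logstash-7.9.0
echo 'hi-hello-123' | ./bin/logstash -f test.conf   # prints the rubydebug event shown above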
filter{
ruby{
code => 'size = event.get("message").size;
event.set("message_size",size)'
}
}
ruby {
code => "event.set('@read_timestamp',event.get('@timestamp'))"
}
Linux-related commands:
Check whether a port is in use: netstat -anp | grep 9100
List the ports already in use: netstat -nultp
Reload the firewall (Ubuntu): sudo ufw reload
Enable the firewall (Ubuntu): sudo ufw enable
Open port 9600 (Ubuntu): sudo ufw allow 9600
1. Components
Filebeat:
Pros: written in Go, a lightweight component.
Cons: no complex data-processing capability.
Logstash:
Pros: runs stably and provides complete data-transport and data-filtering functionality.
Cons: written in Java, a heavyweight component that consumes a lot of memory, CPU and other resources at runtime.
Note: in this setup the log files are taken as input and shipped to Logstash (see the Filebeat sketch below).
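As a rough illustration of the Filebeat-to-Logstash flow described above, the sketch below writes a minimal filebeat.yml; the log path, the Logstash host and the Beats port 5044 are assumptions, and the Logstash side would need a matching beats input on that port.
# Minimal filebeat.yml sketch (assumed paths and port); adjust before use
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log            # log files to ship (assumption)
output.logstash:
  hosts: ["10.241.42.165:5044"]   # Logstash beats input (assumed port)
EOF
filebeat test config -c filebeat.yml    # validate the file
filebeat test output -c filebeat.yml    # check the connection to Logstash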
Kibana:
1. Installation
The Kibana version must match the Elasticsearch version; that is the officially supported configuration.
Running different major versions of Kibana and Elasticsearch is not supported (e.g. Kibana 5.x with Elasticsearch 2.x), and even with the same major version, running a Kibana minor version newer than the Elasticsearch minor version (e.g. Kibana 5.1 with Elasticsearch 5.0) is not supported either.
2. Configuration
Edit the configuration: vim /etc/kibana/kibana.yml (a minimal kibana.yml sketch follows this list)
List the ports already in use: netstat -nultp
3. Chinese localization
- Kibana ships with the Chinese translation resource files (under the /kibana/x-pack/plugins/translations/translations directory of your Kibana installation).
- Set the option i18n.locale: "zh-CN" in your kibana.yml, then restart Kibana: sudo -i service kibana stop, sudo -i service kibana start.
4. Startup
Run ./kibana from the bin directory; nohup kibana & runs it in the background; kibana-plugin install plugin_location installs a plugin.
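A minimal kibana.yml sketch matching the steps above; server.host 0.0.0.0 and the Elasticsearch URL (the master node configured earlier) are assumptions:
cat > /etc/kibana/kibana.yml <<'EOF'
server.port: 5601                                   # default Kibana port
server.host: "0.0.0.0"                              # listen on all interfaces (assumption)
elasticsearch.hosts: ["http://10.241.42.41:9200"]   # ES master node from the config above (assumption)
i18n.locale: "zh-CN"                                # Chinese UI, as described in step 3
EOF
sudo -i service kibana start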
Notes on the Logstash filter plugins used above:
The ruby filter is the most flexible plugin: it lets you modify the Logstash event object in any way you like, in Ruby code; see the official documentation for details.
geoip is a commonly used plugin; it adds geographic information derived from an IP address, such as latitude/longitude and city name, which is convenient for geographic analysis:
filter {
geoip {
source => "clientip"
}
}
The json filter parses a field whose content is JSON:
filter {
json {
source => "message"    # field to parse
target => "msg_json"   # field that stores the parsed result; by default it is placed at the same level as message
}
}
mutate is the most frequently used filter; it can perform all kinds of operations on fields, such as renaming, deleting, replacing and updating. The main operations are:
gsub
split
update/replace
rename
merge
join
Notes:
dissect parses data based on delimiters, which avoids the heavy CPU cost of grok parsing.
It uses delimiters to extract unstructured event data into fields. The dissect filter does not use regular expressions and is very fast. However, if the structure of the data varies from line to line, the grok filter is more suitable.
The dissect syntax is fairly simple: it consists of a series of fields and delimiters.
dissect has some limitations: it is mainly suited to cases where every line has a similar format and the delimiters are clear and simple.
For example, for a log message such as the systemd line shown earlier, the configuration shown there parses the message, and after the dissect filter is applied the event is dissected into the fields listed in that example.
grok parses unstructured event data into fields. It works very well for syslog, Apache and other web-server logs, MySQL logs, and in general any log format written for humans rather than machines.
The configuration shown earlier parses the sample message into fields; after the filter is applied, the event has the fields listed in that example.
Grok syntax:
%{SYNTAX:SEMANTIC}
%{NUMBER:duration}
%{NUMBER:duration:float}
Common patterns are listed in the grok-patterns file referenced earlier.
We can also define custom patterns, such as the nginx patterns added earlier.
The same can also be done in the installed Kibana.
The date filter parses a date from a field and uses it as the Logstash timestamp of the event; the configuration shown earlier parses a field named logdate, and the returned result is the event shown there.
Notes on the main options:
match
target
timezone
cd into the bin directory, then install the filter plugins:
./logstash-plugin install logstash-filter-mutate
./logstash-plugin install logstash-filter-date
./logstash-plugin install logstash-filter-grok
./logstash-plugin install logstash-filter-json
./logstash-plugin install logstash-filter-geoip
./logstash-plugin install logstash-filter-useragent
Output and codec plugin installation:
./logstash-plugin install logstash-output-graphite
./logstash-plugin install logstash-output-statsd
./logstash-plugin install logstash-output-elasticsearch
./logstash-plugin install logstash-codec-json
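To confirm the installations, the plugin list command can be run from the same bin directory (a quick check, not required):
./logstash-plugin list --verbose | grep -E 'filter|output|codec'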
Common errors and solutions:

(1) bootstrap check failed
Solution: vim /etc/security/limits.conf and append the following at the end:
* hard nofile 65536
* soft nproc 2048
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited

(2) future versions of Elasticsearch will require Java 11; your Java version from JAVA_HOME does not meet this requirement
ES 7.9 requires Java 11 or later at runtime; reinstalling JDK 11 resolves it.

(3) the default discovery settings are unsuitable for production use
Solution: configure the discovery hosts in elasticsearch.yml (discovery.seed_hosts / cluster.initial_master_nodes, as shown above).

(4) can not run elasticsearch as root
The user that starts ES must not be root.

(5) After starting ES there is no error in the background log, but the foreground just will not come up; check whether the port rule actually took effect.
Command to open the port: /sbin/iptables -I INPUT -p tcp --dport 9600 -j ACCEPT
If port 9200 is not occupied, it is recommended to keep the default 9200.

(6) max number of threads [1024] for user [coder] is too low, increase to at least [4096]
Solution:
ulimit -a   # check the current user and process limits on the system
/etc/security/limits.d/ is empty by default; create a new test-limits.conf there and add the nproc settings, e.g.:
* soft nproc 4096
root soft nproc unlimited
The change takes effect after a restart.

(7) max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Solution: edit sysctl.conf and add the vm.max_map_count setting (e.g. vm.max_map_count=655360, as shown earlier).

(8) GC option in jvm.options does not match the JDK version:
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Solution: adjust the UseConcMarkSweepGC setting in jvm.options to suit the JDK version in use.

(9) Wrong Java version when installing a plugin:
data/apps/elasticsearch-7.9.0/bin$ ./elasticsearch-plugin install analysis-icu future versions of Elasticsearch will require Java 11; your Java version from [/usr/lib/jvm/java-8-openjdk-amd64/jre] does not meet this requirement
Solution: install gradle.

(10) Caused by: while parsing a block mapping
in 'reader', line 17, column 1:
cluster.name: "elasticsearch"
^
expected <block end>, but found '<block mapping start>'
in 'reader', line 75, column 2:
node.master: true
^
Solution: the yml format is wrong; remove the offending spaces.

(11) java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.RollingFileAppender
Solution: delete everything under elasticsearch/logs and under elasticsearch/data, change the owner of the whole elasticsearch directory (and give it 777 permissions), then restart ES.

(12) npm error when installing the head-master plugin: phantomjs-prebuilt@2.1.16 install: 'node install.js'
Solution: delete the node-modules directory, or run npm install phantomjs-prebuilt@2.1.14 --ignore-scripts and then npm install once more; it then goes through.

(13) The master node cannot be discovered:
org.elasticsearch.discovery.MasterNotDiscoveredException: ClusterBlockException[index [.kibana_task_manager_1] blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
Solution:
a. Disable the firewall and set SELINUX=disabled in /etc/selinux/config.
Check the local port/firewall status (Ubuntu): sudo ufw status
Disable the firewall: sudo ufw disable
b. Fix the discovery hosts in the elasticsearch.yml configuration; pay attention to the list format.

(14) The previous process still holds the node:
java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/apps/elasticsearch-7.9.0/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes]
Solution: ps -ef | grep elasticsearch, then kill the old process.

(15) The cluster names of the nodes do not match:
remote cluster name [Array_data_2] does not match local cluster name [Array_data_1]
Solution: keep the cluster name consistent and do not change it arbitrarily: if the master node's cluster name is A, the other nodes' cluster name must also be A.

(16) org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
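A quick way to confirm that the limit-related fixes above actually took effect for the elk user (expected values come from the settings above):
sudo sysctl -p                    # reload sysctl.conf (vm.max_map_count, fs.file-max)
sysctl vm.max_map_count           # expect 655360 (at least 262144)
su - elk -c 'ulimit -n'           # max open files, expect 65536
su - elk -c 'ulimit -u'           # max user processes, expect at least 4096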
Analyzer plugin (IK):
Download: https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.9.0
Note: the analyzer version must match the ES version, otherwise ES reports an error on startup.
Go to the ES plugin directory: cd /data/apps/elasticsearch-7.9.0/plugins/
mkdir analysis_ik/
Put the downloaded zip into that directory and unzip it, then restart ES.

head plugin:
Address: https://github.com/mobz/elasticsearch-head
cd /data/apps/elasticsearch-head-master
curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
apt install nodejs
Check the versions:
node -v
npm -v
Install grunt: npm install -g grunt-cli
Finally run: npm install
Modify: vi Gruntfile.js and add hostname: '0.0.0.0'
vi _site/app.js and change the IP: this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.241.42.165:9200"
Start:
npm run start
nohup npm run start (run in the background)
Note: the connection address defaults to http://localhost:9200 and must be changed to the ES IP of the current machine.

cerebro:
Download: https://github.com/lmenezes/cerebro/releases
First start it in the foreground:
./cerebro
Start cerebro:
./bin/cerebro -Dhttp.port=9200 -Dhttp.address=${node_name} &
Then start it in the background: nohup ./cerebro &
Note: the address field must be filled in completely, e.g. 0.0.0.0:9000.
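After restarting ES, a quick smoke test of the IK analyzer; ik_max_word is provided by the plugin, the host is the data node used above, and the text is just sample input:
curl -X POST 'http://10.241.42.165:9200/_analyze?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国国歌"}'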
