Elasticsearch+head+logstash
Adapted from: https://www.cnblogs.com/xiaojianfeng/p/9435507.html
1. Install Elasticsearch
Download: https://www.docker.elastic.co/
After extracting the archive: bin is the script directory (start/stop); config holds the configuration files; plugins holds plugins.
2. Configure elasticsearch.yml (save as UTF-8 without BOM)
cluster.name: haha  # cluster name
node.name: node1  # node name
network.host: 0.0.0.0  # bind address; 0.0.0.0 accepts connections from any address
http.port: 9200  # HTTP port exposed to clients
transport.tcp.port: 9300  # port for internal inter-node communication
node.master: true  # whether this node is eligible to be elected master
node.data: true  # whether this node stores index data (data node)
discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300", "0.0.0.0:9301"]  # initial list of master-eligible nodes in the cluster
discovery.zen.minimum_master_nodes: 1  # minimum number of master-eligible nodes: (master_eligible_nodes / 2) + 1, e.g. 2 when there are 3
node.ingest: true  # whether this node may act as an ingest node (document preprocessing)
bootstrap.memory_lock: false  # whether to lock memory for ES only, preventing the heap from swapping to disk
node.max_local_storage_nodes: 2  # maximum number of nodes allowed to store data on this machine
path.data: <path to the data directory>
path.logs: <path to the log directory>
http.cors.enabled: true  # allow cross-origin requests (required by the head plugin)
http.cors.allow-origin: /.*/
In jvm.options, set -Xms and -Xmx to the same value, no more than half of physical memory.
ES logs via log4j; remember to set the log level to error.
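For example, on a machine with 8 GB of RAM, the relevant jvm.options lines would look like the following (the 4g figure is illustrative, chosen to stay at half of physical memory):

```
# jvm.options: equal heap bounds, at most half of physical RAM
-Xms4g
-Xmx4g
```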
3. Install the head plugin:
<1> Install Node.js
Download and extract:
wget https://nodejs.org/dist/v6.10.2/node-v6.10.2-linux-x64.tar.xz
xz -d node-v6.10.2-linux-x64.tar.xz
tar xvf node-v6.10.2-linux-x64.tar
mv node-v6.10.2-linux-x64 /usr/local/node
Configure the environment variables and apply them:
vim /etc/profile
# add the following two lines, then reload:
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin
source /etc/profile
Verify the installed versions:
node -v
npm -v
<2> Download the head plugin
Install git first if it is not already installed:
yum install -y git
git clone https://github.com/mobz/elasticsearch-head.git
<3> Install grunt
cd elasticsearch-head
npm install -g grunt --registry=https://registry.npm.taobao.org
<4> Install the plugin dependencies
npm install
If there is no grunt binary under node_modules/grunt in the elasticsearch-head directory, run: npm install grunt --save
npm run start
Open http://localhost:9100/ in a browser.
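By default the head server only listens on localhost. To reach the UI from other machines, the connect section of elasticsearch-head's Gruntfile.js is commonly extended with a hostname entry (the hostname line is the addition; the other options mirror the shipped file):

```javascript
// Gruntfile.js, connect section: listen on all interfaces
connect: {
    server: {
        options: {
            hostname: '*',   // addition: accept connections from any host
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
```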
4. logstash: syncs data between the database and ES; the installed version must match the ES version.
<1> Install Ruby: https://rubyinstaller.org/
<2> logstash 6.x does not ship with logstash-input-jdbc; install it separately: .\logstash-plugin.bat install logstash-input-jdbc
<3> In the logstash/config folder, create haha_template.json (the JSON that defines the mapping):
{
"mappings" : {
"doc" : {
"properties" : {
"name" : {
"analyzer" : "ik_max_word",
"search_analyzer" : "ik_smart",
"type" : "text"
},
"pic" : {
"index" : false,
"type" : "keyword"
},
"price" : {
"type" : "float"
},
"pub_time" : {
"format" : "yyyy-MM-dd HH:mm:ss",
"type" : "date"
}
}
}
},
"template" : "haha"
}
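As a sanity check, the template above parses as JSON once the comma after the "price" object is in place. A quick standalone Python check (the template is embedded as a string here rather than read from haha_template.json):

```python
import json

# The mapping template from <3>, embedded as a string for a standalone check.
template = json.loads("""
{
  "template": "haha",
  "mappings": {
    "doc": {
      "properties": {
        "name":     {"type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart"},
        "pic":      {"type": "keyword", "index": false},
        "price":    {"type": "float"},
        "pub_time": {"type": "date", "format": "yyyy-MM-dd HH:mm:ss"}
      }
    }
  }
}
""")

props = template["mappings"]["doc"]["properties"]
# Every field the SQL sync produces should have an explicit mapping.
for field in ("name", "pic", "price", "pub_time"):
    assert field in props, f"missing mapping for {field}"
print(template["template"])            # index pattern the template applies to
print(props["pub_time"]["format"])     # date format expected from MySQL
```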
<4> In the logstash/config folder, create mysql.conf:
input {
stdin {
}
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/haha?useUnicode=true&characterEncoding=utf-8&useSSL=true&serverTimezone=UTC"
# the user we wish to execute our statement as
jdbc_user => "root"
jdbc_password => "mysql"
# the path to our downloaded jdbc driver
jdbc_driver_library => "F:/develop/maven/repository3/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar"
# the name of the driver class for mysql
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_paging_enabled => "true"
jdbc_page_size => "50000"
# SQL file to execute (alternative to an inline statement)
#statement_filepath => "/conf/haha.sql"
statement => "select * from course_pub where timestamp > date_add(:sql_last_value, INTERVAL 8 HOUR)"
# schedule (cron syntax; here, every minute)
schedule => "* * * * *"
record_last_run => true
last_run_metadata_path => "D:/ElasticSearch/logstash-6.2.1/config/logstash_metadata"
}
}
output {
elasticsearch {
# ES host(s) and port
hosts => "localhost:9200"
#hosts => ["localhost:9200","localhost:9202","localhost:9203"]
# ES index name (matches the "template" pattern from <3>)
index => "haha"
document_id => "%{id}"
document_type => "doc"
template => "D:/ElasticSearch/logstash-6.2.1/config/haha_template.json"
template_name => "haha"  # matches the "template" property in <3>
template_overwrite =>"true"
}
stdout {
# log output
codec => json_lines
}
}
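The incremental import that the jdbc input performs with :sql_last_value can be sketched in Python (an illustration only, with an in-memory sqlite3 table standing in for the MySQL course_pub table; sync_once and the variable names are hypothetical, not logstash internals):

```python
import sqlite3

# In-memory table standing in for course_pub in MySQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE course_pub (id INTEGER, name TEXT, timestamp TEXT)")
db.executemany("INSERT INTO course_pub VALUES (?, ?, ?)", [
    (1, "java", "2019-01-01 10:00:00"),
    (2, "python", "2019-01-02 10:00:00"),
])

# logstash persists the last run time in last_run_metadata_path;
# here it is just a variable.
sql_last_value = "1970-01-01 00:00:00"

def sync_once(last_value):
    """Fetch rows newer than last_value, as the scheduled jdbc query does."""
    rows = db.execute(
        "SELECT id, name, timestamp FROM course_pub WHERE timestamp > ?",
        (last_value,),
    ).fetchall()
    # document_id => "%{id}" makes the write idempotent: re-syncing the
    # same row overwrites the same ES document instead of duplicating it.
    docs = {row[0]: {"name": row[1]} for row in rows}
    new_last = max((row[2] for row in rows), default=last_value)
    return docs, new_last

docs, sql_last_value = sync_once(sql_last_value)
print(sorted(docs))   # both rows are picked up on the first run
docs, _ = sync_once(sql_last_value)
print(sorted(docs))   # nothing new on the second run
```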
<5> Start command: logstash.bat -f ../config/mysql.conf
