Configuring MySQL binlog capture with Debezium

1. Installing the Confluent platform

Download address:

(1) Edit connect-distributed.properties (confluent-5.5.2-2.12/etc/kafka/connect-distributed.properties)

Set the directory where connector plugins are installed:

plugin.path=/path/to/plugins/

(2) Start Kafka Connect

# Start in the background
bin/connect-distributed.sh -daemon etc/kafka/connect-distributed.properties
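Once the worker is up, its REST interface (port 8083 by default) should answer. A minimal Python sketch to poll it; the URL is an assumption based on the default listener:

```python
import urllib.request
import urllib.error

def connect_worker_up(url="http://localhost:8083/", timeout=3):
    """Return True if the Kafka Connect REST endpoint responds, else False.

    The default URL assumes the worker's standard listener; adjust as needed.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

This is handy in a startup script: loop until `connect_worker_up()` returns True before registering connectors.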

2. MySQL setup

(1) Enable binlog

vi /etc/my.cnf

[mysqld] 
server-id=223344 
log_bin=mysql-bin 
binlog_format=ROW 
binlog_row_image=FULL 
expire_logs_days=10
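As a sanity check, the settings above can be verified programmatically. A sketch using Python's configparser to parse a my.cnf-style snippet (mirroring the config above) and confirm the values Debezium needs:

```python
import configparser

# Mirrors the [mysqld] block shown above
MY_CNF = """
[mysqld]
server-id=223344
log_bin=mysql-bin
binlog_format=ROW
binlog_row_image=FULL
expire_logs_days=10
"""

def check_binlog_settings(cnf_text):
    """Return a list of problems with the [mysqld] binlog settings (empty = OK)."""
    cp = configparser.ConfigParser()
    cp.read_string(cnf_text)
    mysqld = cp["mysqld"]
    problems = []
    if "log_bin" not in mysqld:
        problems.append("log_bin is not set (binlog disabled)")
    if mysqld.get("binlog_format", "").upper() != "ROW":
        problems.append("binlog_format must be ROW for Debezium")
    if mysqld.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image should be FULL")
    if "server-id" not in mysqld:
        problems.append("server-id is required for replication")
    return problems

print(check_binlog_settings(MY_CNF))  # []
```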

(2) Create a replication user (avoid naming it literally 'user')

CREATE USER 'user'@'%' IDENTIFIED BY 'password';

(3) Grant privileges

GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user'@'%';

(4) Flush privileges

FLUSH PRIVILEGES;

(5) Check whether binlog is enabled

SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM information_schema.global_variables WHERE variable_name='log_bin';

(6) Starting with MySQL 5.7.6, information_schema.global_variables is deprecated; for compatibility, enable show_compatibility_56 first

SET GLOBAL show_compatibility_56=ON;

(7) Run the check from step (5) again to confirm binlog is enabled
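The query returns ON (or the log file base name) when binlog is enabled. A small sketch, with no live connection, that interprets the returned value and decides whether the show_compatibility_56 step applies to a given 5.7-series server version:

```python
import re

def binlog_enabled(variable_value):
    """Interpret the log_bin value returned by the status query above."""
    return str(variable_value).upper() not in ("OFF", "0", "NONE", "")

def needs_compat_56(version):
    """information_schema.global_variables is deprecated from MySQL 5.7.6 on.

    Handles suffixed versions like '5.7.30-log'; intended for 5.x servers.
    """
    parts = [int(re.match(r"\d+", p).group()) for p in version.split(".")[:3]]
    return tuple(parts) >= (5, 7, 6)

print(binlog_enabled("ON"), needs_compat_56("5.7.30"))  # True True
```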

3. Configuring the MySQL sync task

(1) Create a test.json file

{
    "name": "cbc-connector",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "xxx.xxx.xxx.xxx",
        "database.port": "3306",
        "database.user": "xxxx",
        "database.password": "xxxx",
        "database.server.id": "111111",
        "database.server.name": "xx",
        "database.whitelist": "xx",
        "database.history.kafka.bootstrap.servers": "ip:port",
        "database.history.kafka.topic": "topic_name",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
        "transforms.unwrap.add.fields": "op,table,snapshot,source.ts_ms",
        "transforms.unwrap.add.headers": "db",
        "transforms.unwrap.delete.handling.mode": "rewrite"
    }
}

More configuration options:
https://debezium.io/documentation/reference/1.3/connectors/mysql.html
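Rather than hand-editing the JSON, the request body can be generated. A sketch that builds the same connector config as a Python dict and serializes it for the REST call; the sample argument values below are assumptions standing in for the xxx placeholders above:

```python
import json

def mysql_connector_config(name, host, user, password, server_id,
                           server_name, db_whitelist, kafka_servers,
                           history_topic, port=3306):
    """Build a Debezium MySQL connector registration body (mirrors test.json)."""
    return {
        "name": name,
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "database.hostname": host,
            "database.port": str(port),
            "database.user": user,
            "database.password": password,
            "database.server.id": str(server_id),
            "database.server.name": server_name,
            "database.whitelist": db_whitelist,
            "database.history.kafka.bootstrap.servers": kafka_servers,
            "database.history.kafka.topic": history_topic,
            "transforms": "unwrap",
            "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
            "transforms.unwrap.add.fields": "op,table,snapshot,source.ts_ms",
            "transforms.unwrap.add.headers": "db",
            "transforms.unwrap.delete.handling.mode": "rewrite",
        },
    }

# Example values only; replace with your own environment's settings
body = json.dumps(mysql_connector_config(
    "cbc-connector", "127.0.0.1", "debezium", "secret",
    111111, "demo", "inventory", "localhost:9092", "schema-changes.demo"))
```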

(2) Managing the task via the REST API

(1) Create (start) a task

curl -X POST -H "Content-Type: application/json" --data "@/root/java/test.json" http://ip:port/connectors

(2) Delete a task

curl -X DELETE http://ip:port/connectors/<connector-name>

(3) Check task status

curl -s http://ip:port/connectors/<connector-name>/status

(4) Get Connect worker information

curl -s http://ip:port/

(5) List the connector plugins installed on the Connect worker

curl -s http://ip:port/connector-plugins

(6) Get a connector's tasks and their configuration

curl -s http://ip:port/connectors/<connector-name>/tasks

(7) Get a connector's configuration

curl -s http://ip:port/connectors/<connector-name>/config

(8) Pause a connector

curl -s -X PUT http://ip:port/connectors/<connector-name>/pause

(9) Resume a connector

curl -s -X PUT http://ip:port/connectors/<connector-name>/resume

(10) Update a connector's configuration (using FileStreamSourceConnector as an example)

curl -s -X PUT -H "Content-Type: application/json" --data \
'{"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector",
"key.converter.schemas.enable":"true",
"file":"demo-file.txt",
"tasks.max":"2",
"value.converter.schemas.enable":"true",
"name":"file-stream-demo-distributed",
"topic":"demo-2-distributed",
"value.converter":"org.apache.kafka.connect.json.JsonConverter",
"key.converter":"org.apache.kafka.connect.json.JsonConverter"}' \
http://ip:port/connectors/file-stream-demo-distributed/config
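The endpoints above follow a regular pattern, so a thin client can cover them all. A sketch (class and method names are my own) that maps each operation to its HTTP method and URL; sending the request is left to urllib and should only be attempted against a reachable worker:

```python
import json
import urllib.request

class ConnectClient:
    """Minimal Kafka Connect REST client covering the calls listed above."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def endpoint(self, action, name=None):
        """Return (method, url) for a given operation."""
        routes = {
            "create":  ("POST",   "/connectors"),
            "delete":  ("DELETE", f"/connectors/{name}"),
            "status":  ("GET",    f"/connectors/{name}/status"),
            "config":  ("GET",    f"/connectors/{name}/config"),
            "tasks":   ("GET",    f"/connectors/{name}/tasks"),
            "pause":   ("PUT",    f"/connectors/{name}/pause"),
            "resume":  ("PUT",    f"/connectors/{name}/resume"),
            "plugins": ("GET",    "/connector-plugins"),
        }
        method, path = routes[action]
        return method, self.base_url + path

    def request(self, action, name=None, payload=None):
        """Issue the call; requires a running Connect worker."""
        method, url = self.endpoint(action, name)
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(url, data=data, method=method,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode()
```

For example, `ConnectClient("http://ip:port").request("pause", "cbc-connector")` issues the same PUT as the pause curl command above.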
posted @ 2022-03-05 22:11  bystanderC