Using Canal with .NET Core (Part 1)
Some notes from the official docs:
QuickStart · alibaba/canal Wiki (github.com)
The official docs say this about the MySQL side:
For a self-hosted MySQL, first enable binlog writing and set binlog-format to ROW mode. Configure my.cnf as follows:

[mysqld]
log-bin=mysql-bin # enable binlog
binlog-format=ROW # use ROW mode
server_id=1 # required for MySQL replication; must not collide with canal's slaveId

After this step I could not connect with the canal user no matter what I tried; in the end I just connected with the root user directly.
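For reference, the canal QuickStart wiki suggests creating a dedicated MySQL account with replication privileges (a sketch using the wiki's example username/password canal/canal; adjust to your environment):

```sql
-- From the canal QuickStart wiki: a dedicated replication account for canal
CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- or simply: GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```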

Previously I used canal 1.1.5 and hit all kinds of errors connecting to MySQL.

Error 1:
Caused by: java.net.SocketTimeoutException: Timeout occurred, failed to read total 4 bytes in 5000 milliseconds, actual read only 0 bytes

Error 2:
ERROR c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - dump address 172.16.0.20:3306 has an error, retrying. caused by

Error 3:
Caused by: java.io.IOException: Error When doing Client Authentication:
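As an aside: the "Error When doing Client Authentication" failure is often attributed to MySQL 8's default caching_sha2_password auth plugin, which older canal versions cannot negotiate. A commonly suggested workaround (an assumption on my part, and not what ultimately worked for me) is to switch the account back to mysql_native_password:

```sql
-- Hypothetical account 'canal'@'%'; substitute your own user and password
ALTER USER 'canal'@'%' IDENTIFIED WITH mysql_native_password BY 'canal';
FLUSH PRIVILEGES;
```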
Docker整合canal 踩坑实录_NiudiezzZ的博客-CSDN博客
docker安装并使用阿里巴巴Canal连接Mysql_canal 连接 docker mysql_随影随行的博客-CSDN博客
同步部署在docker中的mysql时canal连接异常_youAreRidiculous的博客-CSDN博客
canal环境搭建及出现问题解决_爬台阶的蚂蚁的博客-CSDN博客
docker安装Alibaba Canal的步骤 – 编码砖家 (codingbrick.com)
Canal使用报错解决办法 - 健身男儿挑灯夜读 - 博客园 (cnblogs.com)
3、canal读取mysql采坑大全 - 简书 (jianshu.com)
None of the above solved the problem, and I finally lost patience. My guess was that the canal version simply didn't match the MySQL version, so I downloaded the latest MySQL and the latest canal.
Source code:
alibaba/canal: Alibaba MySQL binlog incremental subscription & consumption component (github.com)
Prebuilt releases:
Releases · alibaba/canal (github.com)
The canal version I downloaded (screenshot not preserved).

The MySQL version used (screenshot not preserved).
These SQL statements are useful for checking the setup.

Check that binlog writing is enabled:
show variables like "%log_bin%";

Check that binlog-format is set to ROW:
show variables like "%binlog_format%";

Check the MySQL server_id:
show variables like "%server_id%";

Check the current binlog file and position:
show master status;

Check the table-name case-sensitivity settings:
show variables like "%case%";
Canal mainly provides incremental data synchronization and subscription, implemented on top of the MySQL binlog.
Next comes the canal configuration.



The instance.properties file is as follows:
#################################################
## mysql serverId , v1.0.26+ will autoGen
canal.instance.mysql.slaveId=1234

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=192.168.0.192:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=root
canal.instance.dbPassword=123456
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=shopproject.Orders
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
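The canal.instance.filter.regex entry above limits capture to shopproject.Orders, and the black regex excludes MySQL's replication bookkeeping tables. As a rough illustration of how these comma-separated regexes match against schema.table names (a Python sketch only; canal's actual matcher is part of its Java code, and match_table is a hypothetical helper):

```python
import re

def match_table(filter_expr: str, schema: str, table: str) -> bool:
    """Illustrative sketch: test a comma-separated regex list against 'schema.table'."""
    name = f"{schema}.{table}"
    return any(re.fullmatch(pat, name) for pat in filter_expr.split(","))

# shopproject.Orders matches that table ('.' also happens to match the literal dot)
print(match_table("shopproject.Orders", "shopproject", "Orders"))     # True
# shopproject\..* would match every table in the schema
print(match_table(r"shopproject\..*", "shopproject", "Users"))        # True
# the black regex mysql\.slave_.* covers the replication bookkeeping tables
print(match_table(r"mysql\.slave_.*", "mysql", "slave_master_info"))  # True
```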


The canal.properties configuration:
#################################################
#########       common argument      #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
## meory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecing config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########       destinations        #############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

##################################################
#########        MQ Properties       #############
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########            Kafka           #############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf

# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT

##################################################
#########           RocketMQ         #############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

##################################################
#########           RabbitMQ         #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =

##################################################
#########            Pulsar          #############
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
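With the configuration in place, the server can be started from the unpacked canal.deployer directory. A sketch following the standard release layout, assuming the default example instance:

```shell
# start the canal server (run from the unpacked canal.deployer directory)
sh bin/startup.sh

# verify the server and the example instance came up
tail -f logs/canal/canal.log
tail -f logs/example/example.log

# stop it later with:
# sh bin/stop.sh
```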
Finally, a bit of code to test it.

Add the NuGet package:
<PackageReference Include="CanalSharp" Version="1.2.0" />
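The same package reference can be added from the command line (assuming the dotnet CLI is available):

```shell
dotnet add package CanalSharp --version 1.2.0
```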
using CanalSharp.Connections;
using CanalSharp.Protocol;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using CSRedis;
using ShopWepAPI;

namespace OrderWorkAPI
{
    public class OrderSync
    {
        /// <summary>
        /// Subscribe to canal and print row changes for the Orders table.
        /// Runs as a fire-and-forget background loop.
        /// </summary>
        public static async Task Start()
        {
            var loggerFactory = LoggerFactory.Create(builder =>
            {
                builder
                    .AddFilter("Microsoft", LogLevel.Debug)
                    .AddFilter("System", LogLevel.Information)
                    .AddConsole();
            });
            var logger = loggerFactory.CreateLogger<SimpleCanalConnection>();
            var conn = new SimpleCanalConnection(new SimpleCanalOptions("127.0.0.1", 11111, "101"), logger);
            // Connect to the canal server
            await conn.ConnectAsync();
            // Subscribe
            await conn.SubscribeAsync("shopproject.*\\.Orders.*");
            while (true)
            {
                try
                {
                    // Fetch data without acknowledging yet
                    var message = await conn.GetWithoutAckAsync(1024);
                    foreach (var entry in message.Entries)
                    {
                        // Skip transaction markers
                        if (entry.EntryType == EntryType.Transactionbegin ||
                            entry.EntryType == EntryType.Transactionend)
                        {
                            continue;
                        }
                        RowChange rowChange = RowChange.Parser.ParseFrom(entry.StoreValue);
                        foreach (var rowData in rowChange.RowDatas)
                        {
                            if (rowChange.EventType == EventType.Delete)
                            {
                                // Deleted rows
                                PrintColumn(rowData.BeforeColumns.ToList());
                            }
                            else if (rowChange.EventType == EventType.Insert)
                            {
                                // Inserted rows
                                PrintColumn(rowData.AfterColumns.ToList());
                            }
                            else
                            {
                                // Updated rows
                                //logger.LogInformation("-------> before");
                                //PrintColumn(rowData.BeforeColumns.ToList());
                                logger.LogInformation("-------> after update");
                                PrintColumn(rowData.AfterColumns.ToList());
                            }
                        }
                    }
                    await conn.AckAsync(message.Id);
                }
                catch (Exception e)
                {
                    logger.LogError(e, "Error.");
                    // Reconnect here on failure; only the cluster connection object supports this
                }
            }
        }

        /// <summary>
        /// Business-specific handling of the changed columns.
        /// </summary>
        /// <param name="columns"></param>
        private static void PrintColumn(List<Column> columns)
        {
            foreach (var column in columns)
            {
                Console.WriteLine($"{column.Name} : {column.Value} update= {column.Updated}");
                //if (column.Name.Equals("ProductId"))
                //{
                //    seckill.ProductId = Convert.ToInt32(column.Value);
                //}
                //else if (column.Name.Equals("SeckillStock"))
                //{
                //    seckill.SeckillStock = Convert.ToInt32(column.Value);
                //}
            }
        }
    }
}
Call OrderSync.Start(); from Startup:
using infrastructure.FrameWork.Cap;
using infrastructure.FrameWork.Redis;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using OrderWorkAPI;
using ShopWepAPI.EFCore;
using ShopWepAPI.IReposity;
using ShopWepAPI.IService;
using ShopWepAPI.Models;
using ShopWepAPI.Reposity;
using ShopWepAPI.Service;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace ShopWepAPI
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new OpenApiInfo { Title = "ShopWepAPI", Version = "v1" });
            });
            services.AddTransient(typeof(IRepository<Order>), typeof(OrderReposity));
            services.AddTransient(typeof(IService<Order>), typeof(OrderService));
            services.AddCsRedis(Configuration);
            services.AddDbContext<ShopDBContext>(options =>
            {
                string con = Configuration.GetConnectionString("con");
                options.UseLazyLoadingProxies().UseMySql(con, MySqlServerVersion.AutoDetect(con));
            });
            // services.AddCustomCap(Configuration);

            // canal sync
            OrderSync.Start();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseSwagger();
                app.UseSwaggerUI(c =>
                {
                    c.RoutePrefix = "";
                    c.SwaggerEndpoint("/swagger/v1/swagger.json", "ShopWepAPI v1");
                });
            }
            app.UseHttpsRedirection();
            app.UseRouting();
            app.UseAuthorization();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }
}
The data is received successfully.

By the way, this video is worth watching:
8_08-大数据采集技术-Canal(TCP模式 创建项目&Canal封装数据格式分析)_大数据Canal教程丨Alibaba大厂数据实时同步神器_哔哩哔哩_bilibili