Does canal-adapter have to be configured with the source database info when consuming data from Kafka? #5314
Comments
Try deploying an otter instance on the node where canal-deployer or the Kafka resources live; wouldn't that be able to do the job?
The earlier Kafka error is resolved; it happened because the Kafka listener address was not configured. After configuring the IP address the connection works, but now a new error appears when canal-adapter synchronizes data, as follows:
@callmedba otter can't solve this, can it? Only Kafka is reachable between the two networks, so data can only be transferred through Kafka, and otter can't read data out of Kafka, right?
The rdb config file is as follows: Mirror schema synchronize config, dataSourceKey: defaultDS
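For context, a full mirror-mode rdb mapping file for canal-adapter usually looks like the sketch below; the destination, groupId and database values here are placeholders for illustration, not values taken from this issue.

```yaml
# Mirror schema synchronize config -- hypothetical example of a complete rdb mapping
# file (e.g. conf/rdb/mirror.yml); destination and database are placeholder values.
dataSourceKey: defaultDS     # key of the source datasource declared in application.yml
destination: example         # canal instance name or MQ topic being consumed
groupId: g1                  # must match canalAdapters.groups.groupId in application.yml
outerAdapterKey: mysql1      # must match the "key" of the rdb adapter in application.yml
concurrent: true
dbMapping:
  mirrorDb: true             # mirror the whole schema instead of mapping a single table
  database: mytest           # schema name to mirror on the target side
```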
Question
I need to synchronize data between two MySQL instances that sit on different networks. Setting up master-slave replication would mean exposing the MySQL port to the external network, which I want to avoid. Instead, I'd like the source MySQL to feed canal-deployer, canal-deployer to publish to Kafka, and canal-adapter to consume from Kafka and sync into the other MySQL. canal-adapter is deployed in the replica MySQL's network, which has no connectivity to the source MySQL's network. Does canal-adapter have to be configured with the source database's connection info?
My configuration is below. Can it not synchronize this way?
```yaml
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: kafka # tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    canal.tcp.server.host: 127.0.0.1:11111
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    kafka.bootstrap.servers: 192.168.126.129:9092
    kafka.enable.auto.commit: false
    kafka.auto.commit.interval.ms: 1000
    kafka.auto.offset.reset: latest
    kafka.request.timeout.ms: 40000
    kafka.session.timeout.ms: 30000
    kafka.isolation.level: read_committed
    kafka.max.poll.records: 1000
    # rocketMQ consumer
    rocketmq.namespace:
    rocketmq.namesrv.addr: 192.168.126.129:9876
    rocketmq.batch.size: 1000
    rocketmq.enable.message.trace: false
    rocketmq.customized.trace.topic:
    rocketmq.access.channel:
    rocketmq.subscribe.filter:
    # rabbitMQ consumer
    rabbitmq.host:
    rabbitmq.virtual.host:
    rabbitmq.username:
    rabbitmq.password:
    rabbitmq.resource.ownerId:
  canalAdapters:
    groups:
      outerAdapters:
      - name: logger
      - name: rdb
        key: mysql1
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://127.0.0.1:3306/hucb?useUnicode=true
          jdbc.username: hucb
          jdbc.password: 123456
          druid.stat.enable: false
          druid.stat.slowSqlMillis: 1000
```
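For reference, the stock canal-adapter application.yml template also declares the source-side datasource under canal.conf, as sketched below; the URL, schema and credentials here are placeholders, and whether this section can be omitted when the adapter only consumes flat messages from Kafka is exactly what this issue is asking.

```yaml
canal.conf:
  # ... other settings as above ...
  srcDataSources:               # source-side datasource(s), referenced by dataSourceKey
    defaultDS:
      url: jdbc:mysql://127.0.0.1:3306/mytest?useUnicode=true   # placeholder source MySQL
      username: root            # placeholder credentials
      password: 121212
```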
After startup, the following warning is logged repeatedly:
```
2024-10-30 07:37:15.011 [Thread-3] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=c9f4a0, groupId=g1] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-10-30 07:37:15.985 [Thread-3] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=c9f4a0, groupId=g1] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-10-30 07:37:16.933 [Thread-3] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=c9f4a0, groupId=g1] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
```
I configured the remote Kafka address, so why is it still trying to connect to a local Kafka? How should this be configured?
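The warning shows the consumer being redirected to localhost/127.0.0.1:9092 even though it bootstrapped against 192.168.126.129:9092, which is the typical symptom of the broker advertising its default listener address back to clients; one of the comments above notes that configuring the Kafka listener address resolved this error. A minimal broker-side sketch, assuming a single plaintext listener and that 192.168.126.129 is the address reachable from the adapter's network:

```properties
# Kafka broker server.properties -- sketch, values are assumptions for this setup
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker returns to clients in metadata; without it, clients that
# bootstrap successfully can still be redirected to localhost.
advertised.listeners=PLAINTEXT://192.168.126.129:9092
```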