Project Overview
Canal provides real-time Change Data Capture (CDC) for MySQL and MariaDB. It parses database binlogs, exposes incremental data streams and powers downstream pipelines for replication, caching, search indexing and analytics.
What Canal Does
- Emulates a MySQL slave to fetch and parse binlogs.
- Transforms binlog events into high-level change messages.
- Delivers change messages to clients or sinks in real time.
Problems It Solves
- Eliminates custom polling for data changes.
- Decouples your database from downstream systems (Kafka, search engines, caches).
- Provides low-latency, incremental data synchronization without application changes.
Major Components
Server
Core service that:
- Connects to source databases and reads binlogs.
- Parses events into Entry and RowChange objects.
- Exposes a proprietary protocol over TCP for clients and connectors.
Admin
Web UI and CLI for
- Configuring Canal instances and destinations.
- Monitoring metrics: network bandwidth, sink/dump status, delay, throughput.
Adapters
Plug-in modules (the client-adapter suite) that deliver parsed changes to downstream targets such as relational databases (e.g. PostgreSQL), Elasticsearch and HBase, mapping Canal’s protocol messages to each target’s native format.
Connectors
Built-in sinks for popular messaging systems:
- Kafka
- RocketMQ
- Custom HTTP/webhook adapters
Client
Java library (com.alibaba.otter:canal.client) that
- Implements the Canal protocol.
- Provides CanalConnector to subscribe, fetch and ack messages.
Maven dependency:
<dependency>
<groupId>com.alibaba.otter</groupId>
<artifactId>canal.client</artifactId>
<version>1.1.9-SNAPSHOT</version>
</dependency>
Protocol
Defines binary message formats (illustrated in the sketch below):
- Message: header with destination and position.
- Entry: events for DDL/DML.
- RowChange: payload with before/after images.
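As a concrete illustration of this hierarchy, here is a minimal sketch (not taken from the project sources) that walks a fetched Message down to its RowChange payload using the Java client's protocol classes:

import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.Message;

public class ProtocolWalk {
    static void walk(Message message) throws Exception {
        // Message: batch id plus a list of Entry events
        System.out.println("batchId=" + message.getId());
        for (Entry entry : message.getEntries()) {
            CanalEntry.Header header = entry.getHeader();
            // Header: binlog file/offset (position) plus the affected schema/table
            System.out.printf("%s:%d %s.%s type=%s%n",
                    header.getLogfileName(), header.getLogfileOffset(),
                    header.getSchemaName(), header.getTableName(),
                    entry.getEntryType());
            if (entry.getEntryType() == CanalEntry.EntryType.ROWDATA) {
                // RowChange: DDL flag/SQL, or DML rows with before/after images
                RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
                System.out.println("isDdl=" + rowChange.getIsDdl()
                        + " eventType=" + rowChange.getEventType());
            }
        }
    }
}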
Typical Use Cases
- Streaming to Kafka: route database changes into Kafka topics for microservices or stream processors.
- Cache Invalidation: push updated keys to Redis or Memcached when underlying rows change (see the sketch after this list).
- Search Indexing: synchronize MySQL tables with Elasticsearch or Solr in real time.
- Cross-DB Replication: mirror MySQL data into PostgreSQL or other heterogeneous stores.
- Real-time Analytics: feed OLAP engines (ClickHouse, Druid) with continuous inserts.
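To make the cache-invalidation case concrete, the sketch below is illustrative only: the CacheClient interface and the schema.table:pk key scheme are assumptions, not part of Canal. It derives a cache key from each changed row's primary-key column and evicts it.

import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;
import java.util.List;

public class CacheInvalidator {

    // Hypothetical cache client; replace with Jedis, Lettuce, a Memcached client, etc.
    interface CacheClient {
        void delete(String key);
    }

    private final CacheClient cache;

    CacheInvalidator(CacheClient cache) {
        this.cache = cache;
    }

    // Derive a "schema.table:pk" cache key per changed row and evict it.
    void onRowChange(Entry entry) throws Exception {
        RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
        String table = entry.getHeader().getSchemaName() + "." + entry.getHeader().getTableName();
        for (RowData rowData : rowChange.getRowDatasList()) {
            // DELETE events only carry before images; INSERT/UPDATE carry after images
            List<Column> columns = rowData.getAfterColumnsCount() > 0
                    ? rowData.getAfterColumnsList() : rowData.getBeforeColumnsList();
            for (Column column : columns) {
                if (column.getIsKey()) { // primary-key column identifies the cached row
                    cache.delete(table + ":" + column.getValue());
                }
            }
        }
    }
}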
Quick Start: Java Client Example
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import java.net.InetSocketAddress;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class SimpleCanalClient {
    public static void main(String[] args) {
        // Connect to Canal Server
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111),
                "example_destination", "", "");
        connector.connect();
        connector.subscribe(".*\\..*"); // subscribe to all schemas and tables
        connector.rollback();           // reset the cursor to the last acknowledged position
        while (true) {
            // fetch up to 1000 entries without auto-ack, block up to 1s
            Message message = connector.getWithoutAck(1000, 1000L, TimeUnit.MILLISECONDS);
            List<Entry> entries = message.getEntries();
            if (entries.isEmpty()) {
                continue;
            }
            entries.forEach(entry -> {
                // process Entry: DDL/DML, parse RowChange
                System.out.printf("Received entry: %s%n", entry);
            });
            connector.ack(message.getId()); // acknowledge the processed batch
        }
    }
}
Getting Started
Fast path from zero to a running Canal stack.
1. Local Quick-Start with Embedded Server & Example Client
This approach runs Canal server in “local” mode alongside a Java example client on your machine.
Prerequisites
• JDK 8+
• Maven 3.5+
Build deployer and example modules:
# From project root
mvn clean package -pl deployer,example -Denv=dev
Start Canal Server in local mode:
# Adjust path if version changes
cd deployer/target/canal-*/
sh bin/startup.sh
# Logs → deployer/logs/canal.log
Start the Example Client (Simple mode):
cd example/target/canal-example-*/
sh bin/startup.sh Simple
# Connects to localhost:11111, subscribes to default destination
You should see binlog entries printed to stdout as tables change in your local MySQL.
2. Docker One-Liner (Server + Admin)
Spin up Canal-Admin and Canal-Server containers with built-in entrypoints.
- Start Canal-Admin
docker run -d \
--name canal-admin \
-p 8089:8089 \
canal/canal-admin:latest
- Start Canal-Server via provided script
# Grant execute
chmod +x docker/run.sh
# Run server, pointing at admin
./docker/run.sh \
--name canal-server \
--env canal.admin.manager=host.docker.internal:8089 \
--publish 11110:11110 \
--publish 11111:11111 \
--publish 11112:11112
Verify
curl http://localhost:11110/
# Should respond from Canal-Server admin API
3. Helm/Kubernetes Installation
Deploy a full stack (Zookeeper, MySQL, Canal-Admin, Canal-Server) in Kubernetes.
# Namespace
kubectl create ns canal
# Zookeeper
helm install zk -n canal charts/zookeeper
# MySQL
helm install mysql -n canal charts/mysql
# Canal-Admin
helm install canal-admin -n canal charts/canal-admin
# Canal-Server (auto-registers to admin)
helm install canal-server -n canal charts/canal-server \
--set server.config="canal.admin.manager=canal-admin.canal.svc.cluster.local:8089\ncanal.admin.register.auto=true"
Check pods
kubectl get pods -n canal
4. Basic Binlog Subscription Test
Use the Java client API to subscribe and print binlog events.
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;
import java.net.InetSocketAddress;

public class SimpleSubscriber {
    public static void main(String[] args) {
        // Connect to local server
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111),
                "example", "", "");
        connector.connect();
        connector.subscribe(".*\\..*"); // all schemas and tables
        while (true) {
            Message msg = connector.get(100); // fetch up to 100 entries (auto-ack)
            if (msg.getId() != -1) {
                for (Entry entry : msg.getEntries()) {
                    if (entry.getEntryType() != EntryType.ROWDATA) continue;
                    System.out.println(entry);
                }
            }
        }
    }
}
Compile and run:
# Assuming example module is on classpath
javac -cp example/target/canal-example-*.jar SimpleSubscriber.java
java -cp .:example/target/canal-example-*.jar SimpleSubscriber
You should see binlog Entry objects streamed as you perform inserts/updates/deletes on your MySQL database.
Core Concepts & Architecture
Canal captures MySQL binlog changes, parses them into a uniform message model, and delivers them to downstream systems. Its architecture splits into four core modules:
1. Connector
Handles TCP connection to MySQL and streams raw binlog bytes.
- Supports GTID and position-based failover
- Manages heartbeat and reconnection
2. Parser
Transforms raw bytes into Packet → Entry → RowChange via Protobuf.
- Decompresses (ZLIB, GZIP) if the packet is compressed
- Builds Entry headers (schema, table, timestamp)
- Extracts DDL/DML rows as RowChange objects
3. Instance & Store
Each Canal “instance” maintains:
- Position store (memory, file, ZK, RDBMS)
- Snapshot and incremental switch
- Filter rules (include/exclude schemas, tables)
4. Sink (Adapters)
Delivers parsed messages to messaging systems or applications:
- Kafka, RocketMQ, Pulsar adapters
- HTTP/JDBC delivery
- Custom plugin support
Data Flow
- MySQL writes a binlog event (INSERT/UPDATE/DELETE/DDL).
- Connector pulls binlog bytes into CanalPacket.Packet.
- Parser
  - Decompresses the packet body
  - Entry.parseFrom(body) → retrieves the header plus storeValue
  - RowChange.parseFrom(entry.getStoreValue()) → extracts the row images
- Instance applies filter rules (schema/table/event type).
- Sink serializes Entry or FlatMessage and pushes to the target.
- Client acks the processed binlog position to enable fail-safe restart.
Example: Consuming with SimpleCanalClient
This Java example shows subscribing, fetching, and acknowledging messages:
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import java.net.InetSocketAddress;
import java.util.List;

public class CanalSimpleConsumer {
    public static void main(String[] args) {
        // 1. Create connector (single-node example)
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111),
                "example", // instance name in instance.properties
                "",        // username
                ""         // password
        );
        connector.connect();
        connector.subscribe("db_test\\.user"); // regex: schema.table
        connector.rollback();                  // start from the last acknowledged position
        final int BATCH_SIZE = 1000;
        while (true) {
            // 2. Fetch batch of entries without auto-ack
            Message batch = connector.getWithoutAck(BATCH_SIZE);
            long batchId = batch.getId();
            List<Entry> entries = batch.getEntries();
            try {
                for (Entry entry : entries) {
                    if (entry.getEntryType() == EntryType.ROWDATA) {
                        RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
                        // 3. Process rowChange (e.g., map to FlatMessage)
                        // processRowChange(entry.getHeader(), rowChange);
                    }
                }
                // 4. Acknowledge processed batch
                connector.ack(batchId);
            } catch (Exception e) {
                // On error, rollback to retry the batch
                connector.rollback(batchId);
            }
        }
    }
}
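The processRowChange hook referenced in the comment above is left to the application. One possible sketch (illustrative, not the project's implementation) turns each row's before/after column images into plain maps for downstream handling:

import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.alibaba.otter.canal.protocol.CanalEntry.Header;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowChangePrinter {

    static void processRowChange(Header header, RowChange rowChange) {
        if (rowChange.getIsDdl()) {
            System.out.println("DDL on " + header.getSchemaName() + ": " + rowChange.getSql());
            return;
        }
        EventType type = rowChange.getEventType();
        for (RowData rowData : rowChange.getRowDatasList()) {
            // INSERT carries only after images, DELETE only before images, UPDATE both
            Map<String, String> before = toMap(rowData.getBeforeColumnsList());
            Map<String, String> after = toMap(rowData.getAfterColumnsList());
            System.out.printf("%s %s.%s before=%s after=%s%n",
                    type, header.getSchemaName(), header.getTableName(), before, after);
        }
    }

    private static Map<String, String> toMap(List<Column> columns) {
        Map<String, String> map = new LinkedHashMap<>();
        for (Column column : columns) {
            map.put(column.getName(), column.getValue());
        }
        return map;
    }
}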
Common Configurations
instance.properties (per-canal-instance):
# MySQL connection
canal.instance.master.address=127.0.0.1:3306
canal.instance.dbUsername=canal_user
canal.instance.dbPassword=secret
# Binlog filter
canal.instance.filter.regex=db_test\\.user,db_sales\\.orders
# Position store (file/zookeeper/redis)
canal.instance.store.mode=file
canal.instance.store.file.dir=/var/canal/positions
# Batch & timeout
canal.instance.fetch.batchSize=500
canal.instance.fetch.intervalMillis=1000
Practical Tips
- Use GTID mode in MySQL to simplify failover.
- Tune batchSize vs. intervalMillis for throughput vs. latency.
- Enable compression on high-volume streams to reduce network I/O.
- Monitor ack gaps to detect consumer lag.
- Implement idempotent downstream processing (retries may re-deliver); see the sketch after this list.
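One way to get idempotent processing, sketched below under the assumption that the target system can durably record the last applied binlog position (the PositionStore interface here is hypothetical), is to skip any entry at or before that position:

import com.alibaba.otter.canal.protocol.CanalEntry.Entry;

public class IdempotentApplier {

    // Hypothetical durable store for the last position applied to the target,
    // e.g. a row in the target database updated in the same transaction as the data write.
    interface PositionStore {
        String lastFile();                    // e.g. "mysql-bin.000003", or null if none
        long lastOffset();
        void save(String file, long offset);  // persisted atomically with the applied change
    }

    private final PositionStore positions;

    IdempotentApplier(PositionStore positions) {
        this.positions = positions;
    }

    // Returns true only for entries strictly after the last applied position,
    // so a re-delivered batch (after rollback or restart) is not applied twice.
    boolean shouldApply(Entry entry) {
        String file = entry.getHeader().getLogfileName();
        long offset = entry.getHeader().getLogfileOffset();
        String lastFile = positions.lastFile();
        if (lastFile == null) {
            return true; // nothing applied yet
        }
        int cmp = file.compareTo(lastFile); // binlog file names are fixed-width numbered
        return cmp > 0 || (cmp == 0 && offset > positions.lastOffset());
    }
}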
Configuration Guide
This guide lists all configuration parameters across Canal modules, with default values and usage examples.
1. Admin Web Application
File: admin/admin-web/src/main/resources/application.yml
1.1 Server & Jackson
server:
port: 8089 # HTTP port for Admin UI
spring:
jackson:
date-format: yyyy-MM-dd HH:mm:ss # Default date format
time-zone: GMT+8 # Timezone for JSON serialization
1.2 Datasource (MySQL)
spring:
datasource:
url: jdbc:mysql://localhost:3306/canal_admin # JDBC URL
username: canal # Default user
password: canal # Default password
driver-class-name: com.mysql.cj.jdbc.Driver # JDBC driver
hikari:
maximum-pool-size: 10 # Connection pool size
1.3 Canal Admin Credentials
canal:
admin:
username: admin # Default login user
password: admin # Default login password
auth-enabled: true # Enable authentication
2. canal-admin Helm Chart
File: charts/canal-admin/values.yaml
replicaCount: 1
image:
repository: canal/canal-admin # Docker image repo
tag: latest # Image tag
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 8010 # Service port
ingress:
enabled: false
annotations: {}
hosts: []
tls: []
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: 80
volumeMounts: []
volumes: []
canalAdmin:
server:
port: 8010 # Canal Admin internal port
mysql:
host: localhost
port: 3306
username: canal
password: canal
database: canal_admin
2.1 Example: Overriding MySQL Connection
helm install canal-admin charts/canal-admin \
--set canalAdmin.mysql.host=mysql.prod.svc.cluster.local \
--set canalAdmin.mysql.username=prod_user \
--set canalAdmin.mysql.password=secret
3. canal-server Helm Chart
File: charts/canal-server/values.yaml
replicaCount: 2
image:
repository: canal/canal-server
tag: latest
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 11111 # Canal server port
ingress:
enabled: false
annotations: {}
hosts: []
tls: []
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
autoscaling:
enabled: false
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 75
config:
canalServer:
port: 11111 # Matches service.port
# Additional server-specific YAML entries can be loaded here
3.1 Example: Enabling Horizontal Pod Autoscaling
helm upgrade canal-server charts/canal-server \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=3 \
--set autoscaling.maxReplicas=10
4. Client Adapter Launcher
File: client-adapter/launcher/src/main/resources/application.yml
4.1 Server & Jackson
server:
port: 9090
spring:
jackson:
date-format: yyyy-MM-dd HH:mm:ss
time-zone: UTC
4.2 Canal Mode & Connection
canal:
mode: tcp # Options: tcp, kafka, rocketmq, rabbitmq
tcp:
host: 127.0.0.1
port: 11111
destination: example
username:
password:
filter.regex: .*\\..* # DB.Table regex filter
kafka:
bootstrap-servers: kafka:9092
group-id: canal-client # Consumer group
topic: canal_topic
rocketmq:
namesrv-addr: rmq:9876
producer-group: canal-producer
topic: canal_topic
rabbitmq:
addresses: amqp://guest:guest@rmq:5672
queue: canal_queue
4.3 Adapter Definitions
adapters:
- group: log # Nested adapter groups
outer-adapters:
- type: logging # Built-in logging adapter
- group: db-sync
outer-adapters:
- type: jdbc
url: jdbc:mysql://db:3306/target_db
username: root
password: root
5. Deployer: canal.properties
File: deployer/src/main/resources/canal.properties
5.1 Server & Network
canal.server.ip=0.0.0.0 # Bind address
canal.server.port=11110 # TCP port for admin commands
canal.server.detectingEnable=true
5.2 Instance Defaults
canal.instance.global.spring.xml=classpath:spring/*.xml
canal.instance.memory.buffer.size=16384
canal.instance.memory.buffer.memunit=KB
5.3 Binlog & Filtering
canal.instance.filter.regex=.*\\..*           # Allow all schemas/tables
canal.instance.filter.black.regex=.*\\.system\\..*
canal.instance.gtidon=false
5.4 Storage
canal.instance.tsdb.enable=false # Disable time-series DB
canal.instance.store.mode=MEMORY
5.5 MQ Integrations
# Kafka
kafka.bootstrap.servers=localhost:9092
kafka.topic=canal_binlog
# RocketMQ
rocketmq.namesrv.addr=localhost:9876
rocketmq.producer.group=canal_producer
rocketmq.topic=canal_binlog
# RabbitMQ
rabbitmq.addresses=amqp://guest:guest@localhost:5672
rabbitmq.exchange=canal_exchange
# Pulsar
pulsar.serviceUrl=pulsar://localhost:6650
pulsar.topic=canal_binlog
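When one of these MQ integrations is enabled, downstream services consume Canal's change messages (typically serialized as FlatMessage JSON) straight from the configured topic. A minimal consumer sketch using the standard Kafka client, assuming the kafka.topic above and string (JSON) serialization on the server side:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CanalKafkaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // matches kafka.bootstrap.servers
        props.put("group.id", "canal-binlog-consumer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");         // commit only after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("canal_binlog")); // kafka.topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a JSON change message (database, table, type, data, old, ...)
                    System.out.println(record.value());
                }
                consumer.commitSync();
            }
        }
    }
}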
6. Example Instance Properties
File: deployer/src/main/resources/example/instance.properties
# MySQL Master Connection
canal.instance.master.address=127.0.0.1:3306
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.enableDruid=false
# GTID & RDS
canal.instance.gtidon=false
canal.instance.rdsAccessKey=
canal.instance.rdsSecretKey=
# Filtering
canal.instance.filter.regex=.*\\.user,.*\\.orders
canal.instance.filter.black.regex=internal\\..*
# Message Queue
canal.mq.topic=example_topic
canal.mq.partition=3
Use this reference to tune Canal deployment, adapter behavior, and Helm-based Kubernetes setups. Adjust each parameter to match your infrastructure and security requirements.
Deployment & Operations
Operational playbooks for production deployments, scaling, and monitoring in the alibaba/canal ecosystem.
Admin Maintenance Scripts: Log Cleanup and Health Checks
Provide automated log rotation/cleanup and container health signaling for Canal Docker images. Place these under /home/admin/bin
and wire into cron, Docker HEALTHCHECK, or Kubernetes livenessProbe.
clean_log.sh – Disk‐aware Log Cleanup
Location: /home/admin/bin/clean_log.sh
What it does
- Checks overall disk usage (df -h), then:
  - Deletes /tmp/hsperfdata_admin files older than 15 days.
  - Prunes Canal logs in /home/admin/canal-server/logs or /home/admin/canal-admin/logs:
    - If usage ≥ 90% → remove logs older than 7 days
    - If usage ≥ 80% → remove logs older than 3 days
  - Removes empty directories (> 3 days) and *.tmp files.
- Re-checks disk and repeats base cleanup if still high.
Key parameters
- CUTOFF (export to override thresholds)
- Auto-detects the Canal logs directory.
Example: schedule hourly with cron inside the container
# as root inside container
cat <<EOF >/etc/cron.d/canal-cleanup
0 * * * * admin /home/admin/bin/clean_log.sh >> /var/log/clean_log.log 2>&1
EOF
chmod 0644 /etc/cron.d/canal-cleanup
crond
health.sh – HTTP‐based Health Check
Location: /home/admin/bin/health.sh
What it does
- Detects UI container (/home/admin/canal-admin) or metrics-only.
- Builds the URL to probe:
  - UI: http://127.0.0.1:${server.port:-8089}/index.html, looking for “Canal”
  - Metrics: http://127.0.0.1:${canal.metrics.pull.port:-11112}/metrics, looking for “canal”
- Prints [ OK ] (exit 0) or [FAILED] (exit 1).
Environment overrides
- server.port for UI
- canal.metrics.pull.port for metrics
Docker HEALTHCHECK example
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD ["sh","/home/admin/bin/health.sh"]
Kubernetes livenessProbe example
livenessProbe:
exec:
command: ["/home/admin/bin/health.sh"]
initialDelaySeconds: 15
periodSeconds: 20
Practical Tips
- chmod +x /home/admin/bin/*.sh to ensure executability.
- Run the cleanup cron under the admin user to preserve file ownership.
- Tune the -mtime values in clean_log.sh to match your log growth.
Horizontal Pod Autoscaler (HPA) Configuration
Enable and tune Kubernetes HPAs for canal-admin and canal-server via Helm chart values.
Configuration Keys
- .Values.autoscaling.enabled – true to render the HPA.
- .Values.autoscaling.minReplicas / .Values.autoscaling.maxReplicas – replica bounds.
- .Values.autoscaling.targetCPUUtilizationPercentage – CPU-based scaling.
- .Values.autoscaling.targetMemoryUtilizationPercentage – memory-based scaling.
Example values.yaml
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 75
targetMemoryUtilizationPercentage: 80
Rendered HPA (canal-admin)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-release-canal-admin
labels:
app.kubernetes.io/name: canal-admin
app.kubernetes.io/instance: my-release
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-release-canal-admin
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 75
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
Practical Usage
- Deploy the Kubernetes Metrics Server for CPU/memory metrics.
- Specify resources.requests in your Deployment; the HPA uses it as a baseline.
- Tune minReplicas/maxReplicas based on load and capacity.
- To scale on CPU only, omit targetMemoryUtilizationPercentage.
- Apply via Helm:
helm upgrade --install my-release charts/canal-admin \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=2 \
--set autoscaling.maxReplicas=10 \
--set autoscaling.targetCPUUtilizationPercentage=75
Canal Instances Grafana Dashboard Template
Ship a preconfigured Grafana dashboard JSON under deployer/src/main/resources/metrics. The deployer provisions it automatically.
Key Elements
- Inputs:
  - DS_PROMETHEUS – Prometheus datasource variable.
- Variables:
  - datasource resolves to $DS_PROMETHEUS.
  - destination uses label_values(canal_instance, destination).
- Panel Groups:
  - Instance status
  - Throughput (TPS)
  - Client metrics (QPS, latency, etc.)
  - Store metrics (pending events, memory usage)
- Defaults: dark theme, 6h time window, 15s refresh, PromQL 2m windows.
Code Snippets
- Inputs and Requirements
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "prometheus",
"type": "datasource",
"pluginId": "prometheus"
}
],
"__requires": [
{ "type": "grafana", "version": "5.2.2" },
{ "type": "panel", "id": "graph", "version": "5.0.0" },
{ "type": "datasource","id":"prometheus","version":"5.0.0" }
],
- Templating Variables
"templating": {
"list": [
{
"name": "datasource",
"type": "datasource",
"current": { "text": "prometheus", "value": "prometheus" }
},
{
"name": "destination",
"type": "query",
"datasource": "$datasource",
"label": "destination",
"query": "label_values(canal_instance, destination)"
}
]
},
- Example Panel – Basic Info
{
"title": "Basic",
"type": "graph",
"datasource": "$datasource",
"description": "Canal instance 基本信息。",
"targets": [
{ "refId": "A", "expr": "canal_instance{destination=~\"$destination\"}", "legendFormat": "Destination: {{destination}}", "instant": true },
{ "refId": "B", "expr": "canal_instance_parser_mode{destination=~\"$destination\"}", "legendFormat": "Parallel parser: {{parallel}}" },
{ "refId": "C", "expr": "canal_instance_store{destination=~\"$destination\"}", "legendFormat": "Batch mode: {{batchMode}}" }
]
}
Practical Usage
- Place the JSON in deployer/src/main/resources/metrics.
- Ensure your Grafana provisioning YAML includes this directory.
- On startup, Grafana imports “Canal instances” (UID 8vh8NGpiz).
- In Grafana UI:
  - Select the Prometheus datasource.
  - Use the “destination” dropdown to filter clusters.
- To add panels: copy an existing panel block, adjust expr and legendFormat, then bump the dashboard version.
Contribution & Development Guide
This guide shows how to set up your environment, build and test Canal, integrate with our CI pipelines, and adhere to coding standards before submitting a pull request.
1. Build and Test Locally
- Clone the repository and switch to the desired branch:
git clone https://github.com/alibaba/canal.git
cd canal
git checkout -b feature/your-feature
- Use the Maven wrapper to build and run tests:
# Full clean build, including module compilation and tests
./mvnw clean install
# Run only unit tests
./mvnw test
# Skip tests (e.g. for quick packaging)
./mvnw clean package -DskipTests
- Inspect test reports and coverage under target/surefire-reports and target/jacoco.
2. Continuous Integration
GitHub Actions (.github/workflows/maven.yml)
- Runs on pushes and pull requests to master.
- Builds with JDK 8, 11, 17, 21 on Ubuntu.
- Caches Maven dependencies for faster runs.
To reproduce locally using Docker (Ubuntu + JDK 11):
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app maven:3.9-eclipse-temurin-11 \
  mvn clean install
Travis CI (.travis.yml)
- Executes builds on the same JDK matrix.
- Caches ~/.m2/repository between jobs.
- Uploads coverage reports to Codecov on success.
- On failure, prints recent logs to help diagnose issues.
To view a failing build’s logs:
- Re-run the job on Travis with debug enabled.
- Examine the bottom of the build output for stack traces.
3. Code Style Enforcement
Eclipse Formatter (codeformat.xml)
Import the provided profile to ensure consistent Java formatting:
- In Eclipse: Window → Preferences → Java → Code Style → Formatter → Import… → codeformat.xml, then select “canal-format” and make it active.
- In IntelliJ IDEA (via the Eclipse Code Formatter plugin): Settings → Other Settings → Eclipse Code Formatter → point to codeformat.xml → activate “canal-format”.
Maven Formatting Plugin
Add this to your pom.xml to auto-apply formatting during verify:
<build>
<plugins>
<plugin>
<groupId>net.revelc.code.formatter</groupId>
<artifactId>formatter-maven-plugin</artifactId>
<version>2.17.0</version>
<executions>
<execution>
<goals><goal>format</goal></goals>
<phase>verify</phase>
</execution>
</executions>
<configuration>
<configFile>${basedir}/codeformat.xml</configFile>
<encoding>UTF-8</encoding>
<lineEnding>LF</lineEnding>
</configuration>
</plugin>
</plugins>
</build>
Run formatting locally before committing:
./mvnw formatter:format
4. Code Templates (codetemplates.xml)
Import these templates to speed up common Java patterns in Eclipse:
- Eclipse: Window → Preferences → Java → Editor → Templates → Import… → codetemplates.xml.
Use the provided templates for getters, setters, constructors, and Javadoc stubs.
5. License and Contribution Process
- Canal is licensed under Apache License 2.0 (LICENSE.txt). You may use, modify, and distribute it under its terms.
- Before submitting a PR:
- Ensure all tests pass locally and on CI.
- Format your code with the project’s formatter.
- Follow the existing commit message style.
- Reference the issue number in your PR description.
- We review pull requests on GitHub. A successful build on CI and adherence to code style are required for merge.
Thank you for contributing to Canal!