MirrorMaker 2
Cross-cluster Kafka replication — runs on DR, pulls from DC.
Quick facts
What it is
MM2 runs only on DR by design — it pulls from DC's bootstrap and writes into the local DR Kafka. This is the carrier for the Redis WAL pattern (ADR-0018): app writes go to DC Redis, are journaled to DC Kafka, replicated by MM2 to DR Kafka, then applied to DR Redis by redis-applier.
Architecture
```
DC Kafka                                      DR Kafka
────────                                      ────────
broker-{0,1,2}                                broker-{0,1,2}
topic redis-writes (12 parts, RF 3)           topic redis-writes (auto-created by MM2)
      │
      │ source connector pulls
      ▼
MirrorMaker 2 (Kafka Connect, 2 replicas)
  KafkaMirrorMaker2 CR `mm2` in DR's kafka ns
  ├─ MirrorSourceConnector     (DC → DR data plane)
  ├─ MirrorCheckpointConnector (offset/group state)
  └─ MirrorHeartbeatConnector  (liveness probe)
      │
      └─ writes to DR ──▶ topic redis-writes (mirrored from DC)
```
MM2 runs as a Kafka Connect cluster (2 pods) managed by a KafkaMirrorMaker2 CR in DR. It reads from DC over the external Kafka listener (SCRAM-SHA-512 over TLS) using a dedicated mm2 KafkaUser; topic-level replication is one-directional (DC → DR) by design. Heartbeat + checkpoint connectors ride alongside so consumer-group offsets translate cleanly across clusters.
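The three connectors are declared per mirror pair inside the CR. A minimal sketch of the `spec.mirrors` section — the version, tuning values, and group pattern here are illustrative assumptions, not our exact production settings:

```yaml
# Sketch of the mirrors section of the KafkaMirrorMaker2 CR `mm2`.
# Values marked "assumed" are illustrative, not copied from production.
spec:
  version: 3.7.0            # assumed Kafka version
  replicas: 2
  connectCluster: dr        # Connect's internal state lives in the local DR cluster
  mirrors:
    - sourceCluster: dc
      targetCluster: dr
      sourceConnector:
        config:
          replication.factor: 3              # RF for mirrored topics on DR
          sync.topic.configs.enabled: true   # carry topic configs across
      checkpointConnector:
        config:
          sync.group.offsets.enabled: true   # translate consumer-group offsets
      heartbeatConnector: {}
      topicsPattern: "redis-writes"
      groupsPattern: ".*"                    # assumed: mirror all group state
```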
Configuration
Source: clusters/dr/manifests/kafka/kafka-mirrormaker2.yaml (CR), clusters/dr/manifests/kafka/users/mm2.yaml (DR-side KafkaUser for connecting back to DR Kafka), and clusters/dc/manifests/kafka/users/mm2.yaml (DC-side KafkaUser for the Connect cluster's source endpoint).
Source bootstrap: bootstrap.kafka.apps.sub.comptech-lab.com:443 (edge SNI passthrough). TLS via the DC cluster CA (kafka-cluster-ca-cert). Auth: SCRAM-SHA-512 with DC's mm2 KafkaUser credentials, copied into DR as a Secret.
Topics replicated: redis-writes (and any other future user topics). Connect's internal topics live in DR.
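The connection details above map onto the CR's `clusters` list roughly as follows. The Secret and service names on the DR side are assumptions except where already named in this page:

```yaml
# Sketch of the clusters section: DC is the SCRAM-over-TLS source, DR the local target.
spec:
  clusters:
    - alias: dc
      bootstrapServers: bootstrap.kafka.apps.sub.comptech-lab.com:443
      tls:
        trustedCertificates:
          - secretName: kafka-cluster-ca-cert   # DC cluster CA, copied into DR
            certificate: ca.crt
      authentication:
        type: scram-sha-512
        username: mm2
        passwordSecret:
          secretName: mm2-dc-credentials        # assumed name of the copied Secret
          password: password
    - alias: dr
      bootstrapServers: kafka-bootstrap.kafka.svc:9093   # assumed internal DR bootstrap
      # tls + authentication via the DR-side mm2 KafkaUser, analogous to the above
```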
Operations
- Status: `kubectl -n kafka get kafkamirrormaker2 mm2 -o yaml | yq .status` shows connector states.
- Lag: `kubectl -n kafka exec kafka-broker-0 -- bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group mm2-...`
- Restart connectors: `kubectl -n kafka annotate kafkamirrormaker2 mm2 strimzi.io/restart=true`
- Pause replication (e.g. while DC is going through a maintenance window with non-replicatable changes): set `spec.connectors..pause: true` on the CR.
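Pausing is declarative: edit the CR and re-apply rather than touching the Connect REST API. A hedged sketch of pausing only the data-plane connector — the exact field placement depends on our Strimzi version and is an assumption here:

```yaml
# Sketch: pause the data plane while heartbeat/checkpoint keep running.
# Unpause by setting the flag back to false and re-applying the CR.
spec:
  mirrors:
    - sourceCluster: dc
      targetCluster: dr
      sourceConnector:
        pause: true   # assumed field placement; verify against our Strimzi API version
```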
Failover
If DC dies, MM2 stops replicating (source unreachable). DR Kafka keeps serving its already-replicated data. After DC recovery, MM2 resumes from the last successful offset — no manual intervention if the lag is within retention (default 7d).
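The 7-day window comes from the source topic's retention: if MM2 is down longer than that, DC deletes segments MM2 never copied and the gap cannot be recovered from Kafka alone. A sketch of pinning that budget explicitly on the DC topic (cluster label and values are illustrative):

```yaml
# Sketch of a KafkaTopic CR pinning retention on redis-writes in DC.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: redis-writes
  namespace: kafka
  labels:
    strimzi.io/cluster: kafka   # assumed Strimzi cluster name
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: 604800000     # 7 days: the MM2 catch-up budget after a DC outage
```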
If DR is the new primary (and DC is being rebuilt), the direction reverses conceptually, but we don't currently flip MM2 to run in the reverse (DR → DC) direction dynamically; rebuilding DC means pulling from DR via a one-shot connector or replaying from MinIO snapshots (per ADR-0017's recovery flow).
References
- Kafka — both source and destination data plane
- redis-applier — the consumer of the replicated `redis-writes` topic on DR
- ADR-0017 (Kafka DC/DR via MirrorMaker 2), ADR-0018 (Redis via Kafka WAL)
- Kafka geo-replication design · Strimzi MM2 docs