These days, you don't need ZooKeeper to run a Kafka cluster. Instead, when correctly configured, Kafka coordinates itself using the Raft consensus algorithm (where "the nodes trust the elected leader" [Wikipedia]).
I started by following Gunnar Morling's blog, but it seems his Kafka container images have not been updated, so I used Bitnami's instead. However, configuring them to run a Raft cluster proved difficult.
I wanted to create the cluster programmatically rather than with docker-compose, to have greater control over it. So I wrote code that talks to Docker via its API using a Java library.
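As a rough illustration of the approach, here is a stdlib-only sketch of how the per-node settings might be assembled before being handed to a Docker client library such as docker-java. All names, ports and variables here are my own hypothetical choices, not the actual code:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class KafkaClusterSpec {

    // A hypothetical description of one Kafka container; a Docker client
    // library (e.g. docker-java) would turn each of these into a
    // create-container call.
    record NodeSpec(String name, int hostPort, List<String> env) {}

    static List<NodeSpec> cluster(int size) {
        // Every node lists every controller: "0@kafka0:9093,1@kafka1:9093,..."
        String voters = IntStream.range(0, size)
                .mapToObj(i -> i + "@kafka" + i + ":9093")
                .collect(Collectors.joining(","));
        return IntStream.range(0, size)
                .mapToObj(i -> new NodeSpec(
                        "kafka" + i,
                        9092 + i * 100, // host port for Docker to publish
                        List.of(
                            "KAFKA_CFG_NODE_ID=" + i,
                            "KAFKA_CFG_PROCESS_ROLES=broker,controller",
                            "KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=" + voters,
                            "KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093")))
                .toList();
    }

    public static void main(String[] args) {
        cluster(3).forEach(System.out::println);
    }
}
```

The point of modelling the nodes as plain data first is that the same spec can be used to create, inspect and tear down the containers.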
Firstly, the Kafka instances couldn't see each other.
Diagnosing the containers proved difficult as I could not install my favourite Linux tools. When I tried, I was told that the directory /var/lib/apt/lists/partial was missing. This seems to be deliberate: the Dockerfile explicitly deletes it to keep images slim. So, I took out that line and added:
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y net-tools iputils-ping procps lsof && \
    apt-get clean
then rebuilt the image. [Aside: use ps -ax to see all the processes in these containers. I was stumped for a while, not seeing the Java process that I knew was running.]
Using these Linux tools, I could see that the containers could not even ping each other. Oops: I needed to create a Docker network [SO] and attach the containers to it. Now, their logs show that the Kafka containers are at least starting and talking to each other.
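Through the API this is just a create-network call followed by connecting each container to it; the command-line equivalent (the network and container names here are hypothetical) would be:

```
docker network create kafka-net
docker network connect kafka-net kafka0
docker network connect kafka-net kafka1
docker network connect kafka-net kafka2
```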
However, the client running on the host machine was puking lots of messages like "
Cancelled in-flight API_VERSIONS request with correlation id 1 due to node -1 being disconnected". First, I
checked [SO] that the Kafka client library and container were both version 3. But the consensus on the internet appears to be that this error is due to a connection failure.
Using netstat on the host showed that the host port was indeed open. But this seemed to be because Docker had opened the port to map it to the container, while the container itself was not LISTENing on it. It appears you can tell Kafka which ports to listen on with an environment variable that looks like:
KAFKA_CFG_LISTENERS=PLAINTEXT://:$hostPort,CONTROLLER://:$controllerPort
where hostPort is the port you want Docker to map and controllerPort corresponds to the port given in the KAFKA_CFG_CONTROLLER_QUORUM_VOTERS environment variable.
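Putting those pieces together, the environment for one node (here node 0 of a three-node cluster) ended up looking roughly like this — the variable names are Bitnami's KRaft settings as I understand them, and the host names and ports are illustrative:

```
KAFKA_CFG_NODE_ID=0
KAFKA_CFG_PROCESS_ROLES=broker,controller
KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka0:9093,1@kafka1:9093,2@kafka2:9093
ALLOW_PLAINTEXT_LISTENER=yes
```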
The next problem was that when my client connected, it could not see a machine called kafka2. What's happening here is that, having connected to the bootstrap server, the client is told to contact another broker, in this case one called kafka2.
Now, the JVM running on the host knows nothing about a network that is internal to Docker. To solve this, you could have Docker use the
host network (which means that everything running on the machine can see each other - fine for testing but a security nightmare). You could subvert the JVM's DNS mappings (rather than faffing around with a
DNS proxy) using
BurningWave or Java 18's
InetAddressResolverProvider. But perhaps the simplest way is to configure Kafka to advertise itself as
localhost [Confluent] using the
KAFKA_CFG_ADVERTISED_LISTENERS environment variable.
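With the broker advertising localhost, the client on the host needs nothing special. A minimal consumer configuration (the group id is a hypothetical choice; the port is whatever Docker maps) is just:

```java
import java.util.Properties;

public class ClientConfig {
    // Build the properties a KafkaConsumer would be given. Only
    // bootstrap.servers matters for the connection problems above; the
    // deserializers are the stock string ones from kafka-clients.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // the advertised listener
        props.put("group.id", "test-group");              // hypothetical group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        consumerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Because the broker now tells the client to come back to localhost:9092 rather than kafka2, no DNS trickery on the host is needed.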
And that was it: a Kafka cluster running on my laptop, reading and writing messages and coordinating itself with the Raft algorithm. There are still a few loose ends: on some runs, a node non-deterministically drops out of the cluster, even though the functionality was correct as far as the client was concerned. I'll solve that another day.