If you ask Strimzi to set up a Kafka cluster out of the box, you'll see this when connecting from the host:
$ kafka-topics.sh --bootstrap-server 10.152.183.163:9092 --list
...
[2025-04-04 16:35:59,464] WARN [AdminClient clientId=adminclient-1] Error connecting to node my-cluster-dual-role-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: my-cluster-dual-role-0.my-cluster-kafka-brokers.kafka.svc
...
That IP address comes from the Kafka bootstrap service:
$ kubectl get service -A
NAMESPACE   NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
...
kafka       my-cluster-kafka-bootstrap   ClusterIP   10.152.183.163   <none>        9091/TCP,9092/TCP,9093/TCP   21d
To expose Kafka outside the cluster, I added an external listener. First, export the current Kafka resource:
$ kubectl get kafka my-cluster -n kafka -o yaml > /tmp/kafka.yaml
then edit /tmp/kafka.yaml, adding:
- name: external
  port: 32092
  tls: false
  type: nodeport
in the spec.kafka.listeners block, then apply it with:
$ kubectl apply -n kafka -f /tmp/kafka.yaml
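For orientation, the listeners block ends up looking something like this (the two internal listeners are the usual Strimzi quickstart defaults and may differ in your cluster; only the external entry is new):

spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 32092
        type: nodeport
        tls: false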
Now I can see:
$ kubectl get svc -n kafka
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
...
my-cluster-kafka-bootstrap            ClusterIP   10.152.183.163   <none>        9091/TCP,9092/TCP,9093/TCP   21d
my-cluster-kafka-external-bootstrap   NodePort    10.152.183.27    <none>        32092:32039/TCP              11d
It appears that Strimzi has created a new service for us - hurrah!
However, making a call to Kafka still fails, and this is down to how Kafka and Kubernetes networking interact. I am indeed communicating with a Kafka broker within Kubernetes, but it then refers me on to another broker by its internal domain name, my-cluster-dual-role-0.my-cluster-kafka-brokers.kafka.svc, and the host knows nothing about this Kubernetes domain name. Incidentally, the same thing happens with Kafka in a plain Docker setup.
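One way to see this for yourself is to ask the bootstrap address for cluster metadata, for example with kcat (formerly kafkacat) if you have it installed; the broker names it returns are the internal Kubernetes ones from the error above:

$ kcat -b 10.152.183.163:9092 -L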
Kubernetes pods resolve their domain names using the internal DNS.
$ kubectl exec -it my-cluster-dual-role-1 -n kafka -- cat /etc/resolv.conf
Defaulted container "kafka" out of: kafka, kafka-init (init)
search kafka.svc.cluster.local svc.cluster.local cluster.local home
nameserver 10.152.183.10
options ndots:5
This nameserver is kube-dns (I'm using MicroK8s):
$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   21d
and we can query it from the host:
$ nslookup my-cluster-dual-role-external-0.kafka.svc.cluster.local 10.152.183.10
Server: 10.152.183.10
Address: 10.152.183.10#53
Name: my-cluster-dual-role-external-0.kafka.svc.cluster.local
Address: 10.152.183.17
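The broker name from the original error resolves the same way once the cluster.local suffix is appended; it should return the pod IP of that broker (yours will differ):

$ nslookup my-cluster-dual-role-0.my-cluster-kafka-brokers.kafka.svc.cluster.local 10.152.183.10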
Now, to get the host to use the Kubernetes DNS for K8s domain names, I had to:
$ sudo apt update
$ sudo apt install dnsmasq
$ sudo vi /etc/dnsmasq.d/k8s.conf
This was a new file and needed:
# Don't clash with systemd-resolved, which listens on the loopback address 127.0.0.53:
listen-address=127.0.0.1
bind-interfaces
# Answer bare .svc names with the kube-dns service address
address=/.svc/10.152.183.10
# Forward full svc.cluster.local names to kube-dns
server=/svc.cluster.local/10.152.183.10
That listen-address line was needed because sudo ss -ulpn | grep :53 showed dnsmasq and systemd-resolved fighting over the same port.
I also had to add:
[Resolve]
DNS=127.0.0.1
FallbackDNS=8.8.8.8
Domains=~svc.cluster.local
to /etc/systemd/resolved.conf to tell it to send queries for domains ending in svc.cluster.local to dnsmasq first. Finally, restart everything:
$ sudo systemctl restart systemd-resolved
$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
$ sudo systemctl restart dnsmasq
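Before going back to Kafka, it's worth a quick sanity check that the whole chain works from the host; the address reported should be the bootstrap service's ClusterIP we saw earlier, 10.152.183.163:

$ resolvectl query my-cluster-kafka-bootstrap.kafka.svc.cluster.local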
Now let's use that external port we configured at the top of the post:
$ kubectl get svc -n kafka
NAME                                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
...
my-cluster-kafka-external-bootstrap   NodePort   10.152.183.27   <none>        32092:32039/TCP
$ ./kafka-topics.sh --bootstrap-server 10.152.183.27:32092 --create --topic my-new-topic --partitions 3 --replication-factor 2
Created topic my-new-topic.
$ ./kafka-topics.sh --bootstrap-server 10.152.183.27:32092 --list
my-new-topic
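For a fuller smoke test, you could also push a message through the same external bootstrap and read it back with the standard console tools:

$ echo "hello" | ./kafka-console-producer.sh --bootstrap-server 10.152.183.27:32092 --topic my-new-topic
$ ./kafka-console-consumer.sh --bootstrap-server 10.152.183.27:32092 --topic my-new-topic --from-beginning --max-messages 1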
Banzai!