Thursday, March 5, 2026
Gradle cheat sheet
Thursday, February 19, 2026
An unruly Terraform
PhillHenry: I'm using Terraform to manage my AWS stack that (amongst other things) creates a load balancer using an aws-load-balancer-controller. I'm finding that destroying the stack just hangs, then times out after 20 minutes. I've had to introduce bash scripts that patch finalizers in services and installations, plus force-delete CRDs. After that, tofu destroy cleans everything up, but I can't help feeling I'm doing it all wrong by having to add hacks. Is this normal? If not, can somebody point me in the right direction about what I'm doing wrong?
snuufix: It is normal with buggy providers; it's just sad that even AWS is one.
The_Ketchup, CJO: This is mainly for my homelab, to tear down when I'm done for the day. When the AWS ingress controller makes an LB via K8s, Terraform doesn't know about it, so I have to manually go into the AWS console and delete it. It's not very clean. So I was thinking that maybe if it's managed under ArgoCD it will know about it and delete it? I don't know, it's kind of confusing. Maybe I just do kubectl delete ingress --all or something and THEN do terraform destroy? Because right now it just won't delete my subnets, since there's an LB in there when I do terraform destroy.
Darkwind The Dark Duck: You could use AWS Nuke to clean up anything remaining 😄
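The thread above boils down to an ordering problem: delete the Kubernetes objects that own AWS resources first, wait for the controller to release the load balancer, and only then let Terraform destroy the networking. A minimal sketch of that teardown order (it defaults to a dry run that only prints commands; the 60-second grace period and the exact kubectl selectors are my assumptions, not from the thread):

```shell
#!/usr/bin/env bash
# Sketch of a teardown wrapper. DRY_RUN defaults to 1, so running this
# only prints the commands; set DRY_RUN=0 to actually execute them.
set -euo pipefail

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Delete the objects the aws-load-balancer-controller reconciles, so
#    it deregisters the AWS load balancer while it is still running.
run kubectl delete ingress --all --all-namespaces --wait=true
run kubectl delete service --all-namespaces --field-selector spec.type=LoadBalancer

# 2. Crude grace period for the controller to finish the AWS-side cleanup.
run sleep 60

# 3. With no dangling LB left in the subnets, destroy should not hang.
run terraform destroy -auto-approve
```

The key design choice is that the LB is removed by the same controller that created it, rather than by patching finalizers after the fact.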
More Kubernetes notes
Networking
This was an interesting online conversation about how network packets sent to a cluster IP address are redirected to a pod in the Kubernetes cluster.
Each pod has its own IP, which is managed by the Container Network Interface (CNI). Every node runs a kube-proxy, which manages how cluster IPs map to pod IPs. The pod IPs behind a cluster IP are updated dynamically and only include pods passing their health checks.
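A toy model of the mapping kube-proxy maintains may help (this is not real kube-proxy code, and all the addresses are invented). In reality kube-proxy expresses this as iptables or IPVS DNAT rules, but the effect is roughly:

```shell
#!/usr/bin/env bash
# Toy model of kube-proxy's ClusterIP -> pod-IP mapping. Illustrative
# only: real kube-proxy writes iptables/IPVS rules, not a bash table.
declare -A endpoints=(
  ["10.96.0.10"]="10.244.1.5 10.244.2.7 10.244.3.9"  # only healthy pods
)

route_to_pod() {
  local cluster_ip=$1
  local pods
  read -ra pods <<< "${endpoints[$cluster_ip]}"
  # Each new connection is sent to one backend at random, like the
  # "statistic --mode random" iptables rules kube-proxy generates.
  echo "${pods[RANDOM % ${#pods[@]}]}"
}

route_to_pod "10.96.0.10"
```

When a pod fails its health check, it simply drops out of the endpoint list, so new connections never reach it.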
The node receiving the TCP request forwards it to the destination pod, but the mechanism depends on the CNI. In cloud environments like AWS and GCP, the CNI just sends the packet directly out onto the network, and the network itself knows the pod IPs and takes care of delivery. This is so-called VPC-native networking.
Other CNIs have no knowledge of the existing network; they run an overlay inside the cluster that manages the transport. Typically that's done with IP encapsulation, sending the encapsulated packet to the destination node.
In VPC-native networking, your node just sends packets to the destination pod like any regular packet. The pods are fully understood and routable by the network itself.
It works differently on-prem, and it depends on your CNI. In an on-prem network, using most other CNIs (including microk8s, which uses Calico), the network doesn't know anything about pod IPs. Calico sets up an overlay network, which mimics a separate network, to handle pod-to-pod communication.
In VPC-native networking, things outside your Kubernetes cluster can communicate directly with pods. GCP actually supports this, while AWS uses security groups to block it by default (though you can enable it). In overlay CNIs like Calico or Flannel, you have to be inside the cluster to talk to pods in the cluster.
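One practical consequence of the distinction above: in VPC-native mode the pod CIDR is carved out of the VPC itself, so a pod IP falls inside a VPC subnet, whereas overlay pod IPs do not. A small bash helper makes the check concrete (all the addresses are made-up examples: 10.0.0.0/16 plays the VPC, 10.0.1.5 a VPC-native pod IP, 192.168.3.7 an overlay pod IP):

```shell
#!/usr/bin/env bash
# Check whether an IP address falls inside a CIDR block, in pure bash.

ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

ip_in_cidr() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( ($(ip_to_int "$ip") & mask) == ($(ip_to_int "$net") & mask) ))
}

for pod_ip in 10.0.1.5 192.168.3.7; do
  if ip_in_cidr "$pod_ip" "10.0.0.0/16"; then
    echo "$pod_ip: routable by the VPC (VPC-native)"
  else
    echo "$pod_ip: only reachable via the overlay"
  fi
done
```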
Saturday, February 14, 2026
GPU Programming pt 1.
Wednesday, February 11, 2026
My Polaris PR
git pull https://github.com/apache/polaris.git main --rebase
The --rebase at the end says: "make my branch exactly the same as the original repo, then add my deltas onto the end of its history."
git rebase -i HASH_OF_LAST_COMMIT_THAT_IS_NOT_YOURS
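A self-contained demo of what that pull --rebase does, using two throwaway local repos in a temp directory (repo names, file names, and commit messages are invented for illustration):

```shell
#!/usr/bin/env bash
# Demo of pull --rebase: "upstream" gains a commit while the fork has
# its own local commit; the rebase replays the fork's delta on top of
# upstream's new history. All repos live in a throwaway temp dir.
set -euo pipefail
tmp=$(mktemp -d)
g() { git -c user.email=me@example.com -c user.name=me "$@"; }

g init -q "$tmp/upstream"
echo a > "$tmp/upstream/a.txt"
g -C "$tmp/upstream" add a.txt
g -C "$tmp/upstream" commit -q -m "upstream: initial"

g clone -q "$tmp/upstream" "$tmp/fork"
echo b > "$tmp/fork/b.txt"
g -C "$tmp/fork" add b.txt
g -C "$tmp/fork" commit -q -m "fork: my delta"

# Upstream moves on while we work...
echo c > "$tmp/upstream/c.txt"
g -C "$tmp/upstream" add c.txt
g -C "$tmp/upstream" commit -q -m "upstream: newer work"

# ...so fetch it and replay our delta on top of its history.
g -C "$tmp/fork" pull -q --rebase

g -C "$tmp/fork" log --format=%s   # newest first: our delta sits on top
```

After the pull, the fork's history is "upstream: initial", "upstream: newer work", and then "fork: my delta" at the tip, exactly as the sentence above describes.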
Tuesday, February 3, 2026
Polaris Federation notes
- Iceberg can talk to Google no problem using org.apache.iceberg.gcp.auth.GoogleAuthManager.
- However, there is currently no Polaris code to use GoogleAuthManager in an external catalog.
- Instead, the only way to do it currently is to use the standard OAuth2 code.
- However, Google does not completely follow the OAuth2 spec, hence the Iceberg ticket that led to the writing of GoogleAuthManager, and this StackOverflow post that says GCP does not support the grant_type that Iceberg's OAuth2Util uses.
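On the client side, Iceberg's pluggable REST auth mechanism (the rest.auth.type catalog property) can load GoogleAuthManager. A sketch of what that might look like from Spark; the catalog name ("gcp"), the URI, and whether rest.auth.type accepts the fully-qualified class name in this setup are my assumptions, and per the notes above this does not help when the catalog is federated through Polaris:

```shell
# Hedged sketch, not a verified configuration: property names follow
# Iceberg's pluggable REST auth; the URI is a placeholder.
spark-sql \
  --conf spark.sql.catalog.gcp=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.gcp.type=rest \
  --conf spark.sql.catalog.gcp.uri=https://example.com/iceberg/rest \
  --conf spark.sql.catalog.gcp.rest.auth.type=org.apache.iceberg.gcp.auth.GoogleAuthManager
```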