Friday, March 20, 2026
Multi-cloud Devops tips
Saturday, March 14, 2026
What happens in an LLM? (part 1)
A nice overview that's detailed but not too intricate is here [blog of SteelPh0enix AKA Wojciech Olech]
Note that when using a fully trained LLM, things are conceptually much simpler because inference is more or less just a feedforward pass. That is, the weights are immutable. State lives outside of the ANN and is updated by the output after each token runs through the feedforward network.
Each output of an attention layer is a weighted sum of the input token vectors:

z_i = Σ_{j=1}^{T} α_{ij} x_j

where T is the sequence length and the α_{ij} are the attention weights.
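The weights α_ij are typically a softmax over similarity scores, so each z_i is a convex combination of the inputs. A minimal sketch in pure Python, assuming standard scaled dot-product scores (the function names are mine):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_output(q, keys, values):
    """z = sum_j alpha_j * x_j, with alpha = softmax(q . k_j / sqrt(d))."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    alphas = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(a * v[i] for a, v in zip(alphas, values))
            for i in range(len(values[0]))]
```

Because the α's sum to 1, the output always lies inside the convex hull of the value vectors.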
Friday, March 6, 2026
Permissions and Lakes
"It allows clients to verify the identity of the end user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end user in an interoperable and REST-like manner." Zero Trust Networks (O'Reilly)
"Uploading and managing TLS secrets can be difficult. In addition, certificates can often come at a significant cost. To help solve this problem, there is a nonprofit called “Let’s Encrypt” running a free Certificate Authority that is API-driven. Since it is API-driven, it is possible to set up a Kubernetes cluster that automatically fetches and installs TLS certificates for you. It can be tricky to set up, but when working, it’s very simple to use. The missing piece is an open source project called cert-manager created by Jetstack, a UK startup, onboarded to the CNCF." - Kubernetes Up & Running 3rd Ed., O'Reilly
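A cert-manager issuer pointing at Let's Encrypt looks roughly like this (a sketch; the email address and ingress class are assumptions about your setup):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt's production ACME endpoint.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # assumption: your contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # where cert-manager stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # assumption: an nginx ingress controller
```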
Cloud and K8s
- we bought a domain name from AWS via Route53.
- we delegated the nameservers of this domain to Microsoft or Google.
- a Kubernetes sidecar starts up and contacts Let's Encrypt's API.
- Let's Encrypt returns a token.
- the sidecar derives a Key Authorization from the token and its ACME account key, and serves it over HTTP on port 80.
- Let's Encrypt fetches that file and verifies it against the account's public key. Having proved we control the domain, it can grant a certificate.
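For the curious: the HTTP-01 Key Authorization in ACME (RFC 8555) is not an encryption step. It is the token joined to a digest (thumbprint, RFC 7638) of the account's public key, which Let's Encrypt can recompute for itself. A sketch with a made-up token and a toy (not real) JWK:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, jwk: dict) -> str:
    """token + '.' + base64url(SHA-256 of the canonical JWK)."""
    # RFC 7638: the thumbprint is over the JWK serialised with
    # sorted keys and no whitespace.
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    return f"{token}.{thumbprint}"

# Hypothetical challenge token and a toy elliptic-curve JWK.
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
jwk = {"crv": "P-256", "kty": "EC", "x": "...", "y": "..."}
print(key_authorization(token, jwk))
```

The challenge response is served at http://&lt;domain&gt;/.well-known/acme-challenge/&lt;token&gt;, which is why port 80 must be reachable.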
Common causes of connectivity problems:
- network security groups (or lack of them)
- misconfigured ports
- selectors not pointing at the correct pods
The chain to check, from the outside in:
- ingress (optional - see above)
- service
- endpoint
- pod
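The "selectors not pointing at the correct pods" failure is easy to reproduce, because a Service targets pods purely by label subset matching. A toy model of that rule (the pod names and labels are invented):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A Service targets a pod iff every selector label is present on the pod."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "webb", "tier": "frontend"}},  # typo in label!
]

selector = {"app": "web"}
matched = [p["name"] for p in pods if selector_matches(selector, p["labels"])]
print(matched)  # web-2's typo means it silently gets no traffic
```

A Service whose selector matches nothing has an empty Endpoints object, which is the first thing to check in the chain above.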
Thursday, March 5, 2026
Gradle cheat sheet
Thursday, February 19, 2026
An unruly Terraform
PhillHenry: I'm using Terraform to manage my AWS stack that (amongst other things) creates a load balancer using an aws-load-balancer-controller. I'm finding destroying the stack just hangs then times out after 20 minutes. I've had to introduce bash scripts that patch finalizers in services and installations plus force-delete CRDs. Finally, tofu destroy cleans everything up, but I can't help feeling I'm doing it all wrong by having to add hacks. Is this normal? If not, can somebody point me in the right direction over what I'm doing wrong?
snuufix: It is normal with buggy providers; it's just sad that even AWS is one.
The_Ketchup, CJO: This is mainly for my homelab, to tear down when I'm done for the day. So when the AWS ingress controller makes an LB via K8s, Terraform doesn't know about it, so I have to manually go in and delete it in the AWS console. It's not very clean. So I was thinking maybe if it's managed under ArgoCD it will know about it and delete it? Idk, it's kinda confusing. Maybe I just do kubectl delete ingress --all or something and THEN do terraform destroy? Cuz right now it just won't delete my subnets since there's an LB in there when I do terraform destroy.
Darkwind The Dark Duck: U could use AWS Nuke to clean anything remaining 😄
More Kubernetes notes
Networking
This was an interesting online conversation about how network packets sent to the cluster's IP address are redirected to a pod in a Kubernetes cluster.
Each pod has its own IP, which is managed by the Container Networking Interface (CNI) plugin. Every node runs a kube-proxy, which manages how cluster IPs map to pod IPs. The mappings are updated dynamically and only include pods passing their health checks.
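A crude model of the table kube-proxy maintains (the IPs here are made up): a cluster IP fans out only to pod IPs whose pods are currently healthy.

```python
import random

# Hypothetical endpoint table: cluster IP -> (pod IP, healthy?) pairs.
endpoints = {
    "10.96.0.10": [
        ("10.244.1.5", True),
        ("10.244.2.7", False),  # failing its health check: excluded
        ("10.244.3.9", True),
    ],
}

def pick_backend(cluster_ip: str) -> str:
    """Route to a random healthy pod, as kube-proxy's rules effectively do."""
    healthy = [ip for ip, ok in endpoints[cluster_ip] if ok]
    if not healthy:
        raise RuntimeError(f"no healthy endpoints behind {cluster_ip}")
    return random.choice(healthy)
```

In reality this logic lives in iptables or IPVS rules programmed by kube-proxy, not in userspace code, but the effect is the same.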
The node receiving the TCP request forwards it to the destination pod, but the mechanism depends on the CNI. In cloud environments like AWS and GCP, the CNI just sends it directly out onto the network, and the network itself knows the pod IPs and takes care of it. This is so-called VPC-native networking.
Some CNIs have no knowledge of the underlying network; they run an overlay inside the cluster that manages the transport, typically by encapsulating each packet and sending the encapsulated packet to the destination node.
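The encapsulation idea can be sketched abstractly: the overlay wraps the pod-to-pod packet in an outer packet addressed node-to-node, and the receiving node unwraps it. The addresses below are invented for illustration.

```python
def encapsulate(inner: dict, src_node: str, dst_node: str) -> dict:
    """Wrap a pod-level packet in a node-level one (IP-in-IP, conceptually)."""
    return {"src": src_node, "dst": dst_node, "payload": inner}

def decapsulate(outer: dict) -> dict:
    """The destination node strips the outer header and delivers the inner packet."""
    return outer["payload"]

# A pod on node A talks to a pod on node B over the overlay.
inner = {"src": "10.244.1.5", "dst": "10.244.2.7", "data": "hello"}
outer = encapsulate(inner, src_node="192.168.0.1", dst_node="192.168.0.2")
assert decapsulate(outer) == inner  # the pod-level packet arrives unchanged
```

The underlying network only ever sees node addresses, which is exactly why it needs no knowledge of pod IPs.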
In VPC-native networking, your node just sends packets to the destination pod like any regular packet; the pod IPs are fully understood and routable by the network itself.
It works differently on-prem, and again it depends on your CNI. In an on-prem network, with most CNIs (including microk8s, which uses Calico), the network doesn't know anything about pod IPs. Calico sets up an overlay network which mimics a separate network to handle pod-to-pod communication.

In VPC-native networking, things that are outside your Kubernetes cluster can communicate directly with K8s pods. GCP actually supports this, while AWS uses security groups to block it by default (but you can enable it). In overlay CNIs like Calico or Flannel, you have to be inside the cluster to talk to pods in the cluster.