Cancellation is a thorny issue in software, especially when we run in containers, which are constantly being killed.
First, let's look at how some Scala frameworks handle it. This Discord chat discusses where the cancellation boundaries lie in Cats Effect. In short, if a chain of flatMaps is cancelled, the body of the next flatMap may still be executed (you can see that with a println), but the IO it creates is not run.
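A minimal sketch of that boundary, assuming Cats Effect 3 (the object name and printed strings are mine, not from the chat):

```scala
import cats.effect.{IO, IOApp}

object CancelBoundary extends IOApp.Simple {

  // Once this fiber is cancelled, the IOs later in its flatMap chain never run.
  val task: IO[Unit] = for {
    _ <- IO.println("first step")
    _ <- IO.canceled                 // cancels the current fiber
    _ <- IO.println("never printed") // skipped: the fiber is already cancelled
  } yield ()

  def run: IO[Unit] = for {
    fiber   <- task.start
    outcome <- fiber.join            // joins with an Outcome.Canceled
    _       <- IO.println(s"outcome: $outcome")
  } yield ()
}
```

Joining the fiber yields a Canceled outcome rather than an error, which is how Cats Effect distinguishes cancellation from failure.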
val resource = Resource.make(IO.println("acquire"))(_ => IO.println("release"))

def run: IO[Unit] = for {
  _ <- resource.use(_ => IO.readLine)
} yield ()

Kill the running process with
kill -SIGTERM $(jps | grep CatsCleanup | awk '{print $1}')
In Cats Effect 3.5.4 the Resource is released and the output looks like:
acquire
release
Process finished with exit code 143 (interrupted by signal 15:SIGTERM)
(The exit code of a process killed by a signal is 128 + the signal number; SIGTERM is 15, hence 143. You can see the last exit code in a Unix-like shell with echo $?.)
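You can check this from a shell (a sketch using plain POSIX tools):

```shell
# Background a long-running process, terminate it, and inspect the exit code.
sleep 30 &
pid=$!

kill -TERM "$pid"   # send SIGTERM (signal 15)
wait "$pid"         # wait reports the child's exit status
echo $?             # prints 143 (128 + 15)
```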
The equivalent code in ZIO (2.0.21):
val release = ZIO.logInfo("release")

val resource = ZIO.acquireRelease(ZIO.logInfo("acquire"))(_ => release)

override def run: ZIO[Any & ZIOAppArgs & Scope, Any, Any] = for {
  _ <- resource
  _ <- zio.Console.readLine("press a key")
} yield ()
does not run the release action when the process is killed with SIGTERM: "acquire" is logged, but "release" never is.
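For contrast, here is a sketch (assuming ZIO 2; the object name is mine) showing that interruption from inside the runtime does run the finalizer; it is only the external SIGTERM that bypasses it here:

```scala
import zio._

object InternalInterrupt extends ZIOAppDefault {

  val resource =
    ZIO.acquireRelease(Console.printLine("acquire").orDie)(_ =>
      Console.printLine("release").orDie)

  def run = for {
    fiber <- ZIO.scoped(resource *> ZIO.never).fork
    _     <- ZIO.sleep(1.second) // give the fiber time to acquire
    _     <- fiber.interrupt     // closes the scope, running the finalizer
  } yield ()
}
```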
Why it's important
With many workloads moving to, say, Kubernetes, cancellation comes with the territory.
"What happens when a pod starts up, and what happens when a pod shuts down?"
"When a pod starts in a rolling deployment without the readiness probe configured ... the pod starts receiving traffic even though the pod is not ready. The absence of a readiness probe makes the application unstable. ..."
[HashNode]
The problem is that "it takes more time to update the iptables rules than for the containers to be terminated by the Kubelet... The Kubelet immediately sends a SIGTERM signal to the container, and the endpoints controller sends a request back to the API server for the pod endpoints to be removed from all service objects... Due to the difference in task completion time, Services still route traffic to the endpoints of the terminating pods."
[The solution involves] "adding a preStop hook to the deployment configuration. Before the container shuts down completely, we will configure the container to wait for 20 seconds. It is a synchronous action, which means the container will only shut down when this wait time is complete".
This gives your application time to clean itself up.
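A sketch of such a preStop hook on a Deployment (assuming the standard Kubernetes Pod spec fields; the container name and image are placeholders, and the 20-second wait follows the quoted article):

```yaml
spec:
  template:
    spec:
      # The grace period must exceed the preStop sleep, or the pod is force-killed.
      terminationGracePeriodSeconds: 30
      containers:
        - name: app               # placeholder name
          image: my-app:latest    # placeholder image
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 20"]
```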