Env variables you will (likely) find set in my Kubernetes deployments
Kubernetes allows us to pass the values declared in a Pod’s manifest to its containers via environment variables (docs). A typical situation where I find this handy is when I run a Go application in a Pod.
As discussed in the previous note, out of the box the Go runtime isn’t aware that it runs inside a container. This can lead to confusing situations where the runtime adjusts its behaviour based on the resources (CPU and memory) available on the cluster’s node, instead of the resources a developer or an operator restricted the deployment with.
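As a quick illustration, here is a minimal Go sketch of the mismatch (the 16-core node is a hypothetical example; the exact numbers depend on your cluster):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Inside a Pod limited to 1 CPU on a (hypothetical) 16-core node,
	// both calls still report 16: the runtime sizes itself from the node,
	// not from the Pod's resource limits.
	fmt.Println("NumCPU:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 means "report, don't change"
}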
We can help the Go runtime align its expectations with reality by passing the restrictions specified in the manifest down to the application as env variables (note the env part in the YAML below):
apiVersion: apps/v1
kind: Deployment
···
spec:
  template:
    spec:
      containers:
        - name: app-server
          resources:
            requests:
              cpu: 1
              memory: 500Mi
            limits:
              cpu: 1
              memory: 500Mi
          env:
            # Defines the available CPU cores from the limits.
            # This helps to avoid excessive throttling when running on a multi-core node.
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
                  divisor: 1
            # Defines a soft memory limit from the limits.
            # This helps the runtime respect the limit and adjust the frequency of the GC.
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1
Here, we define GOMAXPROCS and GOMEMLIMIT (the latter is available since Go 1.19), passing the values from the manifest’s resource limits using a combination of valueFrom, resourceFieldRef, and divisor.
From Kubernetes docs:
The divisor of 1 means cores for cpu resources, or bytes for memory resources.
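To double-check that the settings took effect, the application can log what the runtime actually picked up at startup. A minimal sketch, assuming Go 1.19+ (for runtime/debug.SetMemoryLimit) and the manifest above:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// The Go runtime reads GOMAXPROCS and GOMEMLIMIT from the environment
	// at startup; here we only log the values it picked up.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // expected: 1 (from limits.cpu)

	// SetMemoryLimit(-1) reports the current soft memory limit without changing it.
	fmt.Println("GOMEMLIMIT:", debug.SetMemoryLimit(-1)) // expected: 524288000 bytes (500Mi)
}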
It’s worth highlighting that whether or not to specify CPU limits is still debated in the Kubernetes community. As outlined in the “Kubernetes resources under the hood — Part 3” article, both options have their pros and cons. But if you aren’t sure, it’s very likely that your applications don’t need CPU limits: keep the CPU requests, and set only the memory limits.
With the example deployment above, the Go application inside the container will observe only 1 CPU core and 500 MiB of memory.
Share your thoughts about the topic with me on Twitter.