# general
a
I'm looking for a suggestion on how to approach a scenario like this: we're building a couple of Python apps and packaging them as Docker containers, to be deployed later on a k8s cluster. We build the apps as `pex_binary` targets and then simply copy them into the container. I figure that, for this to work properly, it's best to actually build the apps on the target platform (so the dependencies are correct, even when prebuilt wheels aren't available). To achieve all this, we use a build/publish step in GitLab CI that involves docker-in-docker and a custom builder image based on the same distro as the target images. (We also use dind this way because our dev environments are all containerized, which limits our use of tools such as docker, buildah, or podman for building the images.) The steps are then: build all the artifacts (pexes, containers) and publish them. A rough sketch of the setup is at the end of this message.

Lately, I've run into a problem when dealing with big third-party dependencies:
```
ProcessExecutionFailure: Process 'Building...failed with exit code 1.
stdout:
stderr:
pid 536 -> /root/.cache/pants/named_caches/pex_root/venvs/.../pex --retries [...] --retries 5 --timeout 15 exited with -9 and STDERR:
None
```
Building the same `pex_binary` works locally, so I assume the problem is the GitLab shared runner running out of memory (I think it has less than 4 GB available). So I guess my questions are:

1. Am I correct that this is an OOM error?
2. Do you maybe see a simpler alternative to using Pants in the scenario I just described?
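For reference, the Pants side of the setup boils down to roughly the following. This is only a minimal sketch under assumed names: the directory `my_app/`, the targets `:lib`, `:app`, `:img`, and the entry point `my_app.main` are placeholders, and the Dockerfile/build-context details depend on the actual repo layout.

```python
# my_app/BUILD -- hypothetical layout; adjust paths, names, and entry point
python_sources(name="lib")

# The PEX that the adjacent Dockerfile COPYs into the image.
pex_binary(
    name="app",
    entry_point="my_app.main",  # assumed entry module
    dependencies=[":lib"],
)

# Requires the pants.backend.docker backend; building :img also builds :app
# so the Dockerfile can copy the resulting app.pex into the final image.
docker_image(
    name="img",
    dependencies=[":app"],
)
```

The CI job then runs the `package`/`publish` goals over these targets (via dind), which is where the failure above shows up.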
e
No comments on the bigger scenario, but yes, a -9 is the Linux OOM killer, assuming you have no reason to expect some human on the machine issuing a `kill -9 ...`.
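(The -9 is the usual "killed by signal 9 (SIGKILL)" convention for reporting a child's exit status. Purely as an illustration, Python's `subprocess` module reports a SIGKILLed child the same way:)

```python
import signal
import subprocess

# Start a long-running child, then kill it with SIGKILL (signal 9),
# which is what the kernel's OOM killer sends.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()

# A negative return code means "terminated by that signal" -- the same
# convention behind the "exited with -9" in the error above.
print(proc.returncode)  # -> -9
```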
a
I'll have to figure out a way around it then. Thank you.