# general
b
What would be the cause for pants 1.30 suddenly failing with this stacktrace? We suddenly encountered this last week, and after clearing out the cache we still run into this issue on our TeamCity hosts, but can't replicate it locally. Mainly trying to understand why pants would fail this way.
e
`failed with -9` - that's SIGKILL. Since no human ran `kill -9 <pid>`, you can be assured it was the Linux OOMKiller - the host was running low on memory and Linux picked a victim to free some up.
Since you blew away caches, the explanation is probably that there's a bunch more work to do than normal, and that fills up memory.
Likely the only way out is to add memory, if you can, to the TeamCity nodes running CI.
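If you want to confirm it was the OOM killer on the TeamCity host, the kernel log records every OOM kill; a rough sketch of what to look for (exact messages and available tooling vary by distro and kernel, so treat the grep patterns as assumptions):
```
# Look for OOM-killer activity in the kernel ring buffer (usually needs root).
dmesg -T | grep -i -E 'out of memory|oom-killer|killed process'

# On systemd-based hosts, the journal keeps the same kernel messages with timestamps.
journalctl -k --since "2 hours ago" | grep -i oom
```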
b
Ah, that would make sense then. I'll try bumping that memory up to see if that solves the issue
I gave pants more room to work, increasing the container it runs in from 2 GB of RAM to 4 GB; however, that still didn't fix the problem. Some of the time the build for a specific service works, other times it doesn't, with the same error above.
e
Yeah, the nature of the OOMKiller is somewhat obscure. It uses a not-so-straightforward algorithm to decide which processes to pick for killing. The short of it, though, is that you're still memory constrained, just closer to the edge. Before, you were apparently way over the edge.
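If you're curious which process the kernel currently considers the likeliest victim, the badness score it uses is exposed under /proc; a minimal sketch, assuming a pantsd process is running on the host (the process name here is an assumption about your setup):
```
# Higher oom_score means more likely to be chosen by the OOM killer;
# oom_score_adj lets you bias the selection per process.
PID=$(pgrep -f pantsd | head -n1)
cat /proc/$PID/oom_score
cat /proc/$PID/oom_score_adj
```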
e
This is Pants 1.30
You might be able to play with `--pantsd-max-memory-usage` though: https://v1.pantsbuild.org/options_reference.html#heading_optionreference_107 and it looks like that option spelling has amazingly gone unchanged.
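A minimal sketch of trying that, assuming the flag takes a byte count and that your CI runs a goal like `test` (check the linked docs for the exact value format; the config-file spelling under `[GLOBAL]` is the usual translation of the flag name, which I haven't verified for 1.30):
```
# Cap pantsd at ~2 GiB so the daemon restarts itself well before the container's
# 4 GiB limit is hit and the OOM killer steps in. Adjust the goal to whatever CI runs.
./pants --pantsd-max-memory-usage=2147483648 test ::

# Roughly equivalent config-file form (pants.toml):
# [GLOBAL]
# pantsd_max_memory_usage = 2147483648
```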
b
I'll try the max memory usage option, that may be what we need to prevent pants from growing without bounds. Although the docs say it defaults to 1 GB, so it's curious why we're still failing. Either way, I'll try it out
e
The v1 docs link I gave above is not anchor-friendly, but if you expand the global options you can find it's actually 4GB in v1.
b
Oh, that makes sense then, thanks!