# general
c
I’m trying to add Go code to a polyglot repo, and I’m finding that Pants downloads Go modules into its sandbox much more frequently than I would expect. Even a `pants run some.py` that has no relationship to any Go code results in a lengthy Go module download. Am I doing something obviously wrong, or is this the expected behaviour?
h
Which Pants version is this?
I wouldn't expect this
w
this is expected currently, because the Go plugin generates targets for third-party packages, and so needs to have run `go list` for many operations
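For a sense of why that is slow on a cold cache: module-graph listing has to resolve (and potentially download) third-party module metadata before it can print anything. A representative invocation (illustrative only, not necessarily the exact command Pants runs) is:
go list -m -json all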
c
Is there a way to speed this up? For example, it doesn't seem to be using the modules already cached in `$GOPATH`.
w
it should be cached in Pants’ cache: are you using a local or remote cache of builds?
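For context, the local cache is on by default, while a remote cache is opt-in via pants.toml. A minimal sketch, with a placeholder address:
[GLOBAL]
remote_cache_read = true
remote_cache_write = true
remote_store_address = "grpc://cache.example.com:443"  # placeholder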
c
I'm using a local cache. Now that I think about it, the initial download of modules took 40 mins, while most subsequent Go module downloads take 5-10 mins. If it's cached, should I be getting the Go module download messages, or should it jump straight to the Go module metadata analysis?
w
you might see `Scheduling: $process` during the cache lookup, but only for a few seconds at most. You shouldn't see any processes actually running unless their inputs have changed.
c
Hmmm I might need to have a closer look then
h
I can imagine having to do some one-time Go work but then it should be completely quiescent if you only do Python stuff
So something weird may be going on
But I acknowledge that this is still a broken window. In practice this is a lot of work that is obviously, to a human, unnecessary.
n
Seeing this as well. “Download Go module” and “Analyze metadata for Go third-party module” take a significant amount of time from an uncached state, which is expected, but even running something like `pants lint path/to/python/file.py` multiple times in a row still adds about 10s. Running `pants -ldebug lint path/to/python/file.py` gives some timestamps:
# first log output 
14:32:09.54 [DEBUG] acquiring lock: <pants.pantsd.lock.OwnerPrintingInterProcessFileLock object at 0x108769880>
# first reference to Go
14:32:15.39 [DEBUG] Running Searching for `go` on PATH=….
# last references to Go
14:32:25.90 [DEBUG] Completed: Download and analyze all third-party Go packages
14:32:26.01 [DEBUG] Completed: Generate `go_third_party_package` targets from `go_mod` target
# done
14:32:27.66 [DEBUG] Completed: `lint` goal
The same command without Go enabled takes 6s instead of 18s.
h
Yeah, I'm not totally sure how to deal with this. In theory, dep inference on Go code could generate Python-relevant dependencies. Even if there is no pathway for that today.
The "multiple times after another" part is concerning though. That should all be memoized. Is pantsd restarting between runs?
n
It logs `[DEBUG] Launching pantsd` each time, and `[DEBUG] pantsd is running at pid 49264, pailgun port is 64668` shows a different pid and port on each run. So it seems so?
Alright, found the issue. pantsd was killed for exceeding the default `pantsd_max_memory_usage` value. I bumped it to 2GiB and then again to 4GiB, and now it seems to stay alive between executions. The previous command that took 18s now goes way faster:
real	0m0.746s
user	0m0.308s
sys	0m0.050s
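For reference, a minimal pants.toml sketch of that bump; the suffixed value is an assumption, and on older Pants versions it may need to be a plain integer number of bytes:
[GLOBAL]
pantsd_max_memory_usage = "4GiB"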
c
Damn it, the answer was so simple. We have a slightly bigger codebase and needed to bump it to 5GB, and now things are flying again. Before, we were just having our Python devs set the ignore directive for the Go directories in pants.toml:
pants_ignore.add = ["src/golang", "tests/golang"]
That also explains why the lmdb_store was filling up and not getting garbage collected: pantsd wasn't running in the background to do the periodic cleanup.
h
We really need to bump the default; 1GiB is way too low, and people keep bumping into this.
c
I had to increase the max again today. We are now running with a max of 12GB. I know it’s probably a complicated question, but do we have some kind of rough guideline for the expected usage based on codebase size?
this particular repo isn’t all that big: 129k LOC, mostly Go. So I guess maybe it’s all the third-party Go packages?
w
would you mind filing an issue containing the output of `--stats-memory-usage` for a relevant run? See https://github.com/pantsbuild/pants/pull/18389#issuecomment-1458610778
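A sketch of such a run (the flag spelling here is taken from this thread and may differ by Pants version; `lint ::` is just a stand-in for whatever goal reproduces the memory growth):
pants --stats-memory-usage lint ::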
c
What’s interesting is that the runs on my machine don’t use more than 447MB. I’ve now read the Using Pants in CI article and will add stats and memory-usage collection to our CI to make it easier to identify jobs with issues.
Is it possible to increase the logging in `.pants.d/pants.log` to capture more information that will assist in troubleshooting this memory consumption? I’m looking through the pants.log on my own machine and seeing plenty of instances where even 12G is insufficient:
[ERROR] The scheduler was invalidated: Exception('pantsd process 19704 was using 12291.17 MiB of memory (above the --pantsd-max-memory-usage limit of 12288.00 MiB).')
What’s interesting is that this is an hour after the last job run. So I’m worried that the stats returned at the end of a run would not capture the relevant information.
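As a sketch of getting more detail into pants.log: the global `level` option controls log verbosity, so setting it in pants.toml (or passing `-ldebug`, as in the earlier command) should capture more, assuming pantsd picks the level up for its own log:
[GLOBAL]
level = "debug"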