# development
f
Are there any known issues with pantsd leaking memory leading to spurious shutdowns? In this particular scenario, I'm seeing RSS increase by several gigabytes with no additional pants commands being issued. Eventually it terminates itself:
```
The scheduler was invalidated: Exception('pantsd process 7391 was using 10315.20 MiB of memory (above the `--pantsd-max-memory-usage` limit of 10240.00 MiB).')
```
The log lines immediately preceding that appear to be file watcher related:
```
14:15:11.29 [INFO] Extending leases
14:15:15.45 [INFO] Done extending leases
14:15:51.87 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git", ".git/.watchman-cookie-csmith1-mb.local-18662-6676"}
14:15:51.88 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git", ".git/.watchman-cookie-csmith1-mb.local-18662-6676"}
14:15:51.89 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git", ".git/.watchman-cookie-csmith1-mb.local-18662-6676"}
14:15:51.89 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git", ".git/.watchman-cookie-csmith1-mb.local-18662-6676"}
14:15:51.90 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git/.watchman-cookie-csmith1-mb.local-18662-6676", ".git"}
14:15:51.90 [INFO] notify invalidation: cleared 0 and dirtied 0 nodes for: {".git", ".git/.watchman-cookie-csmith1-mb.local-18662-6676"}
14:16:35.48 [INFO] Extending leases
```
I looked through some GitHub issues but didn't see anything describing this particular behavior.
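For anyone hitting the same thing: a quick way to confirm the growth is to sample the daemon's RSS while no commands are running. This is just a rough sketch using psutil, which is not part of pantsd; the pid is whatever the error message (or ps) reports for the running daemon.
```python
# Rough sketch (assumes psutil is installed): sample pantsd's RSS once a minute
# to confirm memory grows even with no pants commands being issued.
import time

import psutil

PANTSD_PID = 7391  # example pid taken from the error message above; substitute your own

proc = psutil.Process(PANTSD_PID)
while True:
    rss_mib = proc.memory_info().rss / (1024 * 1024)
    print(f"{time.strftime('%H:%M:%S')} pantsd rss: {rss_mib:.1f} MiB")
    time.sleep(60)
```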
h
I don't think that's a known issue. What reproduces it?
f
After observing this a few more times, I'm beginning to think this isn't a leak, but a large increase in memory pressure from StoreGCService running _maybe_extend_lease. The several GiB required for that pass were enough to push it over the max-memory-usage limit, which terminates the process.
👀 1
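A toy illustration of the distinction being drawn here; this is not Pants code and does not reflect how the store is actually implemented. The point is that steady-state RSS can be flat while a periodic pass whose working set scales with the number of tracked entries briefly spikes memory, which is enough to trip a hard ceiling like --pantsd-max-memory-usage.
```python
# Toy illustration only (not Pants code): a transient spike from a periodic
# maintenance pass, as opposed to a leak. Steady-state RSS is flat; each pass
# briefly allocates memory proportional to the number of tracked entries.
import time

ENTRIES = 5_000_000  # hypothetical count of tracked digests


def periodic_pass() -> int:
    # Builds a working set proportional to ENTRIES for the duration of the pass,
    # then releases it when the function returns.
    working_set = [i.to_bytes(8, "big") for i in range(ENTRIES)]
    return len(working_set)


while True:
    periodic_pass()   # RSS spikes here, then most of it is released
    time.sleep(3600)  # flat in between passes
```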
h
Sounds like this is consistent?
f
Yeah, it's pretty reproducible in our JS monorepo, but I don't have a great reproduction of it outside of that yet. Looking at that code, it's probably related to having a huge number of files in node_modules (as that'll drive the number of Digests up).
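A rough way to gauge that scale, purely as a proxy (this is not how Pants counts Digests, and the real number depends on what actually gets captured): tally the files under node_modules trees in the repo.
```python
# Rough proxy: count files under node_modules directories to get a feel for how
# many entries a store of captured files would have to track.
import os

total = 0
for root, dirs, files in os.walk("."):
    # Only count paths that sit somewhere under a node_modules directory.
    if "node_modules" in root.split(os.sep):
        total += len(files)
print(f"files under node_modules: {total}")
```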