#17486 Coarsened target calculation for `pylint` pulls in more files than it should on v2.15.0a0

Issue created by danxmoran

**Describe the bug**

While testing v2.15.0a0, I saw that `./pants lint` would consistently freeze or be OOM-killed. Through `py-spy` and logging I found that:

- Much time was being spent merging together source digests when setting up `pylint` runs
- The number of source digests in each of those setups was much larger than expected (tens of thousands of files for an input batch size of a few hundred)
- `pylint` batches of different sizes always ended up with the same number of source digests

While looking through the `pylint` changes in v2.15, I found that coarsened target calculation currently runs in the "partitioner" rule (see here). This results in too many targets being associated with each `pylint` batch, because the partitions returned by that rule are re-batched into smaller chunks according to `[lint].batch_size`, and there's no support in the re-batching logic for subsetting the partition metadata. We should push the calculation of coarsened targets into the "runner" rule for `pylint`, so we only compute & hydrate the transitive dependencies that are relevant to the specific inputs used in each batch.

**Pants version** v2.15.0a0

**OS** Both

pantsbuild/pants