# development
b
(I asked this before but can't find it) What are the "test cases" for the "batched dep inference" PoC?
• Cold cache (`./pants --no-pantsd --no-local-cache peek ::`)
• "warm cache": `./pants peek ::` to start -> touch N files (of varying sizes) -> `./pants peek ::`
• what else?
Edit: instead of `peek`, probably a `lint` and/or `check` which doesn't run the process
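A minimal sketch of those two cases as shell commands (the goal and flags come from the message above; the `time` wrapper, the `*.py` glob, and the file count of 50 are illustrative assumptions):

```sh
# Cold cache: skip the daemon and the local cache so everything is recomputed.
time ./pants --no-pantsd --no-local-cache peek ::

# Warm cache: prime the caches, invalidate N files, then measure the re-run.
./pants peek ::                                    # warm-up run
git ls-files '*.py' | head -n 50 | xargs touch     # touch ~N files (N is arbitrary here)
time ./pants peek ::                               # measure the incremental run
```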
w
those are the easiest to define, and most important. beyond that, “edited one file” is also useful, but generally harder to notice a difference in.
to be clear about what i’m suggesting though: i’m suggesting profiling those cases, not necessarily benchmarking them.
and to that end, `peek` is not really a realistic case to test… most of the time it will be `test` or `lint`/`check` etc
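For concreteness, the same cold-cache measurement with those more realistic goals just swaps the goal name (a sketch reusing the flags from the earlier message):

```sh
# Same cold-cache setup as above, but with goals closer to real CI usage.
./pants --no-pantsd --no-local-cache lint ::
./pants --no-pantsd --no-local-cache check ::
```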
to bury the lede a bit: i’m suggesting that a lot of the time, the reason X or Y is slow in ci right now is https://github.com/pantsbuild/pants/issues/11270
b
When you say profiling, you mean py-spy profiling or workunit profiling?
w
it depends… whichever helps identify bottlenecks
b
😐
😅 1
w
my strategy is usually `py-spy` first to see if anything is hot, then workunits to see where time is being spent
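For that first step, a rough sketch of attaching py-spy to a run (py-spy has to be installed separately; the output filename and the `<PANTS_PID>` placeholder are illustrative, and the workunit pass would come afterwards):

```sh
# Terminal 1: start the run to profile, without the daemon so the work
# happens in this client process rather than in pantsd.
./pants --no-pantsd peek ::

# Terminal 2: attach py-spy to that process and record a flame graph.
# <PANTS_PID> is a placeholder; find it with e.g. `ps aux | grep pants`.
py-spy record -o dep-inference-profile.svg --pid <PANTS_PID>
```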
b
Mmmkay. I think I'll make sure my branch is logically working, then put the baton on a shelf if anyone wants to pick it up
CC @happy-kitchen-89482