# development
@witty-crayon-22786 @happy-kitchen-89482 re: https://github.com/pantsbuild/pants/discussions/17477 Would y'all be amenable to the new dep-inference-only opt-in process type? I basically have half the code written already, and most of the batched inference code, so it'd only be a few days of effort to try out
Why dep-inference-only at this point?
There will be other cases where we want to run in a batch but cache individually
So when I did that experiment, it showed that the only way to feasibly get it to work is to receive several files/digests and then merge them (since splitting them is impossible). That means the caller has to compute a digest that represents what would have happened if we had run the process on just one file (think config files), which at that point becomes very error-prone and very difficult to diagnose/debug.
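A toy sketch of the fragility described above, under invented names (`per_file_cache_key`, `file_digest` are not real Pants APIs): to cache a batched run per file, the caller must hash *every* input that could affect that file's result, not just the file itself.

```python
# Hypothetical sketch: computing a per-file cache key for a batched process.
# All names here are invented for illustration, not Pants APIs.
import hashlib


def file_digest(content: bytes) -> str:
    """Digest of a single file's content."""
    return hashlib.sha256(content).hexdigest()


def per_file_cache_key(file_content: bytes, config_digests: list[str]) -> str:
    # The key must pretend the process ran on just this one file, so it has
    # to fold in the file's digest *and* the digest of every config file the
    # batched run consulted. Forgetting even one input yields stale cache
    # hits that are very hard to diagnose/debug.
    h = hashlib.sha256()
    h.update(file_digest(file_content).encode())
    for d in sorted(config_digests):  # sorted, so the key is order-independent
        h.update(d.encode())
    return h.hexdigest()


config = file_digest(b"[tool]\nflag = true\n")
key = per_file_cache_key(b"import foo\n", [config])
```

The point of the sketch: changing the config bytes silently changes `key`, so any input the caller fails to enumerate silently poisons the cache.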
This sounds similar to an idea I had mentioned a while back while at Toolchain: use the `UpdateActionResult` REAPI call to set "synthetic" `ActionResult`s into the remote cache. I.e., batch some computation, then split the result into the instances that would have been produced had the process run on each file individually, and upload those into the remote cache using that call. Then later, if Pants reconstructs the action for an individual file in the same way, it should get a cache hit.
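The batch-then-split idea above can be sketched in miniature. This is an in-memory toy, not REAPI: the dict stands in for the remote action cache, and `action_key`/`batched_infer` are invented names standing in for a single-file action digest and a batched dep-inference run.

```python
# Toy sketch of "synthetic" per-file cache entries from one batched run.
# The dict stands in for a remote action cache; all names are invented.
import hashlib


def action_key(path: str, content: bytes) -> str:
    """Stand-in for the digest of the single-file action."""
    return hashlib.sha256(path.encode() + b"\0" + content).hexdigest()


def batched_infer(files: dict[str, bytes]) -> dict[str, list[str]]:
    """Stand-in for dep inference over a whole batch: one process run."""
    return {
        path: [line.split()[1]
               for line in content.decode().splitlines()
               if line.startswith("import ")]
        for path, content in files.items()
    }


cache: dict[str, list[str]] = {}

files = {"a.py": b"import os\n", "b.py": b"import sys\nimport a\n"}
# One batched run, then upload each piece under the key a
# single-file run would have used.
for path, deps in batched_infer(files).items():
    cache[action_key(path, files[path])] = deps

# Later, reconstructing the single-file key the same way hits the cache.
hit = cache.get(action_key("b.py", files["b.py"]))
```

Mentally: `batched_infer` returns `{"a.py": ["os"], "b.py": ["sys", "a"]}`, so the later single-file lookup for `b.py` hits and returns `["sys", "a"]`, while any content change misses.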
The above explanation is more of an aside than anything, probably not that relevant to dep-inference-only stuff.