# development
e
I'm unaware of a real problem being reported in the wild, but it struck me that our means of doing resolves - Python and JVM - is broken when using constraint ranges, due to `Process` use and the resulting caching. Once resolved, there is no reasonable way to ask for a re-resolve to get newer versions except by mutating the requirement strings: https://github.com/pantsbuild/pants/issues/12199 Lockfiles can solve this, since re-generating a lockfile will bump to newer versions, but then a lockfile becomes a required thing.
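A minimal sketch of the staleness mechanism being described, assuming the resolver's only cache-relevant input is the requirement strings; the hashing here is illustrative, not Pants' actual cache-key scheme:

```python
import hashlib

# A constraint range, not an exact pin: the "right" answer changes over time.
requirements = ("requests>=2.20,<3",)

# The process cache keys on the process inputs, and the requirement strings
# are the only resolver input here, so the key never changes even as new
# releases ship. Day 1 resolves (say) requests==2.25.1 and caches it; after
# 2.26.0 is released the key below is identical, and the stale cached result
# is returned without the resolver ever re-running.
cache_key = hashlib.sha256("\n".join(requirements).encode()).hexdigest()
print(cache_key)
```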
h
Yep, I think this has bitten people in the past
It would be hard, I think, to automatically detect whether a resolve might be affected by the world changing. We could say "if you don't have a lockfile, we re-resolve every time and rely on the resolver cache for performance"? Or this could be an option, for people willing to rely on our cache and therefore occasionally resolve a newer version at some arbitrary moment? I agree that down that path lies madness, but if people choose performance over sanity...
w
in the absence of lockfiles, ranges are almost always time bombs though, regardless of caching
…oh, but you mean when someone is trying to force a re-resolve
e
Yes. They are time bombs, but we don't currently disallow them. And, in fact, we actively thwart them via the caching described.
h
Yeah, there seems to be no right or good way to handle ranges, so our current behavior is probably no worse than any other?
Maybe more unexpected though
e
I guess what this highlights is that we have no intrinsic to mutate a counter. The only way to store state right now is to mutate a workspace file. This is why using a lockfile to solve this works: it gets checked in and thus saves the state of the resolve out of band. To enable something similar without requiring a lockfile, we'd need a `--re-resolve` option that used an intrinsic to say "bump the counter for this resolve by one", where the counter name was maybe the stable hash of the requirements. Without getting super complex, it seems the only way to do this would be to hijack the git repo, and the only friendly way to do that might be to use git notes.
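A rough sketch of what that could look like, keeping the counter in git notes; every name here (the notes ref, the functions) is hypothetical, not an existing Pants API:

```python
import hashlib
import subprocess

NOTES_REF = "pants-resolves"  # hypothetical notes namespace


def _blob_for(requirements: tuple[str, ...]) -> str:
    # Write the requirement set into git's object DB so a note can hang off it;
    # the blob's sha doubles as the stable hash of the requirements.
    out = subprocess.run(
        ["git", "hash-object", "-w", "--stdin"],
        input="\n".join(sorted(requirements)).encode(),
        capture_output=True,
        check=True,
    )
    return out.stdout.decode().strip()


def bump_resolve_counter(requirements: tuple[str, ...]) -> int:
    """Increment the re-resolve counter for this requirement set."""
    blob = _blob_for(requirements)
    try:
        # Read the current counter from the note attached to the blob.
        shown = subprocess.run(
            ["git", "notes", "--ref", NOTES_REF, "show", blob],
            capture_output=True,
            check=True,
        )
        counter = int(shown.stdout.decode().strip())
    except subprocess.CalledProcessError:
        counter = 0  # no note yet: this is the first forced re-resolve
    counter += 1
    subprocess.run(
        ["git", "notes", "--ref", NOTES_REF, "add", "-f", "-m", str(counter), blob],
        capture_output=True,
        check=True,
    )
    return counter
```

Feeding the returned counter into the resolve process's inputs would then change its cache key and force a genuine re-resolve, without mutating the requirement strings.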
w
the cache key version still exists, but unlike in the past it is not scoped. either way, it was always a bit too large a hammer.
for the purposes of `test` we have the `--force` flag, which sets `ProcessCacheScope.NEVER`… which would almost work, but which would also skip writing to the cache, and so wouldn't wipe anything out.
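A hedged sketch of why `ProcessCacheScope.NEVER` almost but not quite works; the `Process` fields and enum members here follow the engine API of that era and may differ by version, and the argv is purely illustrative:

```python
from pants.engine.process import Process, ProcessCacheScope


def make_resolve_process(requirements: tuple[str, ...], force: bool) -> Process:
    return Process(
        # Illustrative argv; the real resolve invocation is more involved.
        argv=("pex", *requirements),
        description=f"Resolve {len(requirements)} requirements",
        # NEVER makes the engine skip the cache *read*, so a forced run does
        # re-resolve; but it also skips the cache *write*, so the stale cached
        # entry survives and the next un-forced run hits it again.
        cache_scope=ProcessCacheScope.NEVER if force else ProcessCacheScope.SUCCESSFUL,
    )
```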
e
Yeah, afaict the cache key version is equivalent to nuking the lmdb store / remote cache as described in the ticket's items 1 and 2, and the `--force` analogy is similar to item 3. We simply have no way to support this currently. Luckily it's a gedanken bug.