Is there a way to always rerun an `adhoc_tool` target (aka. not cache it)?
# general
n
Is there a way to always rerun an `adhoc_tool` target (aka. not cache it)? I have an `adhoc_tool` that runs a Python script that transforms some remote data into a list that I then want to pass to some `experimental_test_shell_command`, but the caching of the JSON makes the tests quite flaky. I figured I could set `extra_env_vars` to some random value and bust the cache that way, but even that's tricky to do with no imports allowed in BUILD files. 😅
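For reference, a minimal BUILD-file sketch of the setup being described, in roughly Pants 2.16 syntax; the script, file, and target names are made up, and the commented-out `extra_env_vars` line is the cache-busting idea being floated, not a recommendation:

```python
# BUILD — hypothetical names throughout (fetch_services.py, services.json,
# check_service.sh); verify field names against the docs for your Pants version.

python_source(
    name="fetch-script",
    source="fetch_services.py",
)

# Runs the script and captures its output. Pants caches this process by
# default, so the remote data is not re-fetched on every run.
adhoc_tool(
    name="service-list",
    runnable=":fetch-script",
    output_files=["services.json"],
    # The cache-busting idea: vary an env var to force re-execution.
    # extra_env_vars=["CACHE_BUSTER=<some changing value>"],
)

experimental_test_shell_command(
    name="check-service",
    command="./check_service.sh services.json",
    execution_dependencies=[":service-list"],
)
```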
b
I'm sure it's doable, but taking a step back: you've poked a hole in the model of reproducible builds by reading from a remote and just accepting what's been downloaded. Maybe that's OK, but Pants surely assumes reproducibility by its very nature. Can you instead pin the version+digest of the data somehow? That way you control when it gets updated downstream, and you also invalidate the cache?
n
Yeah, I’m aware. The problem we’re facing is that our CI/CD pipeline during deploy expects service X to be known in another system because it’s used to track deploys through our environments, but the developers always forget to add it, making our pipeline cry until it’s fixed - so I figured I could add a test that simply checks that the service is known, and that test needs to pass in order for the change to be merged. Maybe I can solve it in some reproducible way instead. I’ll have another think :)
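A sketch of what that proposed check might look like as a plain Python script; the tracker URL, service name, and the assumption that a 200 response means "registered" are all hypothetical placeholders:

```python
# check_service.py — fail until the service is registered in the external
# deploy-tracking system. URL and service name are made up for illustration.
import sys
import urllib.error
import urllib.request

TRACKING_URL = "https://deploy-tracker.example.com/api/services/"  # hypothetical
SERVICE = "service-x"  # hypothetical

# Ask the tracking system whether the service is known; treat any HTTP
# error (e.g. a 404) as "not registered".
try:
    with urllib.request.urlopen(TRACKING_URL + SERVICE) as resp:
        known = resp.status == 200
except urllib.error.HTTPError:
    known = False

if not known:
    sys.exit(f"{SERVICE} is not registered in the deploy tracker; add it before merging.")
```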
b
What version of Pants are you on?
n
2.16
b
I think certain `runnable_dependencies` values are able to say "don't cache me". Although grepping the codebase, I don't see any use of `RUN_REQUEST_NOT_HERMETIC` 😕
Theoretically, the FieldSet for any of those targets could include a `hermetic_runs: bool` alongside `restartable: bool`. But AFAIK nothing out of the box is gonna help ya.
n
I guess I could always port the Python code to bash and run that in `experimental_test_shell_command`; then the test would fail (and not be cached, afaik?) until it passes, at which point I don't really care about running it again (even though someone could in theory remove the Python service again once it's been added). It's not optimal, but it feels less dirty than some homebrewed cache busting. Though thinking a bit more about it, I think I might as well make the step of our pipeline that's crashing optional instead. It's not the end of the world if someone configures it in the external system after the fact. Anyway, thanks for the help!