# development
e
Any idea why `load_backends_integration_test.py` is so slow? I am running all tests in the suite from `main` and it’s still going after >2700 seconds (max timeout set to 10000 seconds):
```
⠒ 2702.05s Run Pytest for src/python/pants/init/load_backends_integration_test.py:tests
```
Looking at the code, my guess is that it’s the very last test, which is parameterized and loads each discovered backend one at a time. My local machine is notably underpowered, though, so I wanted to check whether this is expected.
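A load-each-backend test of that shape might look roughly like the following. This is a hypothetical sketch, not the actual code in `load_backends_integration_test.py`; the backend list, test name, and assertion are all illustrative:

```python
import pytest

# Illustrative stand-in for the discovered backend list; the real test
# discovers backends dynamically, so the parameter count (and hence the
# total runtime) grows with the number of backends in the repo.
DISCOVERED_BACKENDS = [
    "pants.backend.python",
    "pants.backend.go",
    "pants.backend.docker",
]


@pytest.mark.parametrize("backend", DISCOVERED_BACKENDS)
def test_load_backend(backend: str) -> None:
    # Each parameterization would spawn a fresh Pants run loading just this
    # one backend, which is why runtime scales linearly with backend count.
    assert backend.startswith("pants.backend.")
```

With dozens of backends, each parameterization paying full startup cost, a multi-thousand-second wall time on a slow machine is plausible.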
h
Yeah, that test is a bugger, and we should probably break it up.
You can safely skip it on desktop, though.
e
Cool. Is there a standard flag for skipping non-desktop tests? For different reasons I want to skip the cherry-pick test under `.github`, for instance, since it only works inside CI.
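No standard flag is mentioned here, but a common pytest pattern for CI-only tests is a `skipif` marker keyed off an environment variable that CI sets. A sketch, assuming GitHub Actions (which does set `GITHUB_ACTIONS=true` in its runners); the test name is hypothetical:

```python
import os

import pytest

# GitHub Actions sets GITHUB_ACTIONS=true in its runners, so this marker
# skips the test everywhere else, e.g. on a developer desktop.
requires_ci = pytest.mark.skipif(
    os.environ.get("GITHUB_ACTIONS") != "true",
    reason="only works inside CI",
)


@requires_ci
def test_cherry_pick_smoke() -> None:  # hypothetical test name
    ...
```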
w
Can’t recall if I mentioned it in the other channel, but I think a nice workflow would be for a desktop `pants test ::` to skip tests where certain dependencies are missing (e.g. docker or go or terraform), while CI can’t skip any. I think this backends test would fall into that category.
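One way to implement that desktop behavior with plain pytest is a skip marker keyed off whether the tool is on `PATH`; CI would then enforce "can't skip any" simply by treating skips of this kind as failures, or by guaranteeing the tools are installed. A sketch with hypothetical names:

```python
import shutil

import pytest


def requires_tool(name: str):
    """Skip a test when the named binary (e.g. docker, go, terraform)
    is not installed on the local machine."""
    return pytest.mark.skipif(
        shutil.which(name) is None,
        reason=f"{name} not found on PATH",
    )


@requires_tool("docker")
def test_build_image() -> None:  # hypothetical test name
    ...
```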
e
Thanks @wide-midnight-78598, yes you did. It doesn’t belong in the current PR, but that seems like a good improvement. I’ve got it down to just 8 failing tests and suspect they’re all related to missing dependencies, as you suggest.
```
✕ .github/workflows/tests/auto_cherry_picker_smoke_test.py failed after 3 attempts in 2.26s.
✕ src/python/pants/backend/docker/goals/run_image_integration_test.py:tests failed after 3 attempts in 43.41s.
✕ src/python/pants/backend/docker/util_rules/docker_build_context_test.py:tests failed after 3 attempts in 128.18s.
✕ src/python/pants/backend/go/util_rules/cgo_test.py:tests failed after 3 attempts in 105.63s.
✕ src/python/pants/backend/helm/resolve/fetch_test.py:tests failed after 3 attempts in 9.68s.
✕ src/python/pants/backend/python/goals/package_pex_binary_integration_test.py:tests failed after 3 attempts in 103.72s.
✕ src/python/pants/backend/rust/lint/rustfmt/rules_integration_test.py:tests failed after 3 attempts in 7.80s.
✕ src/python/pants/core/util_rules/archive_test.py:tests failed after 3 attempts in 23.44s.
```
I will go through them one by one to see what’s going on. The cgo error might be ARM64-related, for instance:
```
E       ValueError: cgo binary link failed:
E       stdout:
E       
E       stderr:
E       /usr/bin/ld: gopath/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.2/kafka/librdkafka_vendor/librdkafka_glibc_linux.a(rdkafka_error.o): Relocations in generic ELF (EM: 62)
E       /usr/bin/ld: gopath/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.2/kafka/librdkafka_vendor/librdkafka_glibc_linux.a(rdkafka_error.o): Relocations in generic ELF (EM: 62)
E       /usr/bin/ld: gopath/pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.2/kafka/librdkafka_vendor/librdkafka_glibc_linux.a: error adding symbols: file in wrong format
```
b
For local work, I basically never use `pants test ::`, and instead do a combination of:
• `pants test path/to/dir::` (if there's a whole directory with potentially relevant tests)
• `pants test path/to/file_test.py` (if there's a specific file of interest)
• get things "probably working" via the above, and then push to a draft PR and let it catch any unexpectedly-relevant test failures
h
Same, although it’s embarrassing that we can’t use our own tool to improve this. The trouble is that some of Pants’s own integration tests depend on “basically all of Pants”, so any change causes all of those to re-run, and we haven’t yet untangled that.
b
(Acknowledged, but I think even if we solved that, running `pants test ::` locally the "first" time, to seed the cache, is quite a burden, and one would have to pay it again any time one pulled sufficiently large changes from `main`.)
Btw, thanks for contributing @elegant-family-19982!
e
I’ve already fixed three of the eight so that they work in the dev container. 🙂 The cgo ARM64 issue, for instance, was fixed in a newer rdkafka. I’m not suggesting that `pants test ::` is something you’d want to run regularly, and I’ve made good use of the subsetting as part of this exercise; just that when you do run it, it would be nice if it didn’t throw up red herrings.
Plus, slow tests are part of my business case for getting a better ARM64 machine at home, don’t mess with that! 🤣
w
I generally also don’t use `pants test ::` the overwhelming majority of the time, but with call-by-name entering core territory soon, I’ll be running those kinds of tests more than I’d like. It would be nice if the ones I ran had a hope and a prayer of actually working locally 😆