# general
n
Proper import/target separation for tests in v2 is absolutely 🔥
❤️ 2
h
Glad you like it! That’s what gives you the high parallelism and fine-grained caching. You can theoretically run every test target in total parallelism. On your local machine you’ll be limited by the number of cores, but if you have remote execution with enough workers, you can have every test running at the same time.
For fine-grained caching: if you run
./pants test ::
and then change only one test target, every other target will immediately give you its result back from the cache.
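A minimal sketch of what that looks like in practice (the changed file path is illustrative, not from a real repo):
# First run: every test target actually executes.
./pants test ::
# Edit a single test file, e.g. src/python/app/util_test.py (hypothetical path).
# Second run: only the affected target re-runs; all other results come straight from the cache.
./pants test ::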
n
That's quite nice. And Jenkins seems to sort of remember the last coverage file it receives as well, so we can run tests only when things change and still get roughly the actual coverage.
Also, v2 doesn't seem to leave weird ghost files around for coverage to trip up on, so no more flakiness on coverage either. All in all, great stuff.
💯 1
h
I’m personally not super familiar with how Coverage works. @polite-garden-50641, does that sound right? @numerous-fall-96475, check out https://pants.readme.io/docs/using-pants-in-ci if you haven’t yet for some suggestions on how to set up CI. https://pants.readme.io/docs/python-test-goal has some tips on the
test
goal too, like saving your results to JUnit XML files.
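In case it helps, a rough sketch of how those two tips might look on the command line (the option names here are assumptions based on the docs pages above, so double-check them there):
# Run tests with coverage enabled (the --use-coverage flag is described on the python-test-goal page).
./pants test --use-coverage ::
# Also write JUnit XML result files (assumes a junit_xml_dir option under the pytest scope; dist/test-results is a made-up path).
./pants test --pytest-junit-xml-dir=dist/test-results ::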
n
It was some super subtle strangeness with calling it in a worker pool one way vs. another. Some files were being cleaned up a fraction of a second too late, and in some cases that made the coverage module break something with little to no trace; the error was essentially a null pointer exception. It's gone now, which is super.
😶 1
p
As for coverage, I haven't used the Jenkins coverage plugin in a while, but what @numerous-fall-96475 describes sounds about right. For us, we're using CircleCI + codecov.io; they have an orb (i.e. a CircleCI plugin) which uploads coverage results from CI to codecov.io.
FYI, we have also added the ability to create JUnit XML results files, which Jenkins can read. That makes it much easier to see which test fails (much better than scrolling through endless logs).
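For CI setups that don't use the orb, a hedged sketch of equivalent shell steps (the coverage report option name and the uploader command are assumptions, not a verified config):
# Run the tests with coverage and ask for an XML report that external tooling can consume
# (assumes a coverage-py report option that accepts xml).
./pants test --use-coverage --coverage-py-report=xml ::
# Upload the resulting report to codecov.io with their generic bash uploader.
bash <(curl -s https://codecov.io/bash)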