<@U021C96KUGJ>: the previous junit-xml parser code...
# development
w
as mentioned, entirely possible that a library will be better. one thing to watch out for is probably libraries that can only consume junit.xml produced by pytest, for example
a
That makes sense
w
i know that there was at least one property that was optional in there, because one producer of junit.xml emitted it while another didn’t
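A minimal sketch (not Pants code) of that kind of lenient handling, treating junit.xml attributes as optional since producers differ in what they emit:

```python
# Illustrative only: different producers (pytest, Maven Surefire, Gradle, ...)
# emit different subsets of attributes, so read everything defensively.
import xml.etree.ElementTree as ET


def summarize_junit_xml(path: str) -> dict:
    root = ET.parse(path).getroot()
    # Some tools wrap suites in <testsuites>, others emit a bare <testsuite>.
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for suite in suites:
        for key in totals:
            # Attributes such as `skipped` are optional in some outputs,
            # so fall back to 0 instead of assuming they are present.
            totals[key] += int(suite.get(key) or 0)
    return totals
```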
a
OK.
w
(which is very FUDy, heh. sorry. i just don’t remember which)
a
😄
Looking through our current Pants 2 code, I see that we store xml_results, but it doesn’t look like we do anything to parse it
is that a fair understanding?
w
correct
it’s attached to workunits as an artifact, so there might be “other consumers” who are parsing it.
a
right
so I take it in the PyTest case, we’re able to determine the results of individual test suites/cases without needing to defer to the XML file?
w
because of file level deps, we always invoke at the file level, and we use the exit code of pytest for pass/fail
(…works regardless of file-level deps, but yea)
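A rough illustration of that flow (not the actual Pants rule): one pytest invocation per file, with pass/fail taken straight from the exit code:

```python
# Sketch only: Pants runs pytest via its process execution API, but the
# pass/fail decision reduces to the runner's exit code, as described above.
import subprocess


def file_passed(test_file: str) -> bool:
    proc = subprocess.run(["pytest", test_file], capture_output=True, text=True)
    # pytest exits non-zero on test failures and on collection/usage errors,
    # which also covers runs that never get far enough to write junit.xml.
    return proc.returncode == 0
```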
a
OK, staring at the output of a pytest run, I see that, indeed, failures are reported on the file level, and it’s just the console output of pytest that’s showing me the details of the failure
w
yep. and that’s for the best, because there are lots of fatal errors that result in the test runner not making it far enough to render anything as xml
a
Not having dug into the specifics of JUnit, I’m assuming that we can’t get that useful high-level information out of JUnit, and that’s why we need to consider the XML?
w
what do you mean?
sorry, re: file level deps: we invoke pytest once per file
a
I know
w
so we don’t need any high level information from it
but i’m probably not understanding what you are asking
a
The ticket I’m working on says that I should parse the JUnit output in order to determine the pass/fail status of tests; I’m assuming that ticket exists because we can’t get high-level (i.e. “there was a failure”) information without parsing the XML
whereas we can get that from the pytest exit code in the Python case
w
hm. we … probably don’t need to do that. cc @fast-nail-55400
i could imagine it being runner dependent, but it’s hard to imagine a runner that didn’t exit with non-zero on failure
f
well I already fixed one flaky bug in the existing junit test because it was checking the runner’s output (in addition to the exit code)
that PR is linked in the junit.xml issue
a
(Wow, the GitHub project view is useless)
f
and I modified the junit rule to turn off the fancy output
but in actual use, users might want to see the fancy output
but that makes the test fail
a
So is there a reason why the runner’s exit code is insufficient?
w
yea, why are we parsing it at all?
f
that is what @bored-art-40741 had for a test so I kept it
it parses the output to check that the failure that occurred was the expected one
and not some other failure
review what the test is doing and see
w
i could imagine adding a test that just validates that XML has been captured, but Pants itself doesn’t need to consume it
other consumers might (and it should be attached to the workunit)
(sorry… i hadn’t grokked the underlying motivation for all this)
a
(nor I, hence the questions :D)
f
basically two issues: (1) what to do for Pants users re junit.xml and (2) how to more properly fix the hacky fix in https://github.com/pantsbuild/pants/pull/12909
w
so assuming we’re capturing it, you could validate that the field is set (and maybe peek inside)
a
My understanding is that junit.xml just needs to be propagated beyond the test for downstream consumers to deal with, like pytest does (see @witty-crayon-22786’s comments above), and the hacky fix is probably fine?
f
https://github.com/pantsbuild/pants/issues/12812 can probably be rephrased from its current formulation
w
i would remove all parsing of stdout/stderr output from junit
f
the hacky fix is not fine, because it hard-codes the test runner to use ASCII output in all cases, even ordinary usage by a Pants user
a
OH.
w
since that’s only in a test, the test could either check the exit code, or peek inside the xml
a
Checking the exit code won’t suffice, because it’s looking for a specific failure, aiui
f
yes exactly
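A sketch of the “peek inside the xml” option, assuming the expected failure can be recognized by its message; the element names follow the common junit.xml shape, and the exact assertion would depend on the test:

```python
# Hypothetical test helper: assert that the captured junit.xml records exactly
# the failure the test expects, instead of grepping the runner's console output.
import xml.etree.ElementTree as ET


def assert_expected_failure(xml_text: str, expected_fragment: str) -> None:
    root = ET.fromstring(xml_text)
    failures = [
        failure
        for testcase in root.iter("testcase")
        for failure in testcase.findall("failure")
    ]
    assert len(failures) == 1, f"expected exactly one failure, got {len(failures)}"
    message = failures[0].get("message") or failures[0].text or ""
    assert expected_fragment in message, f"unexpected failure: {message!r}"
```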
w
mm.
f
adding some pass-through configuration to the test runner would let the test of the junit rules pass in the specific config it needs to get ASCII mode
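A hypothetical sketch of that pass-through idea; the function name is illustrative, not an existing Pants API, and the JUnit Platform Console Launcher flag shown should be verified against the launcher version in use:

```python
# Hypothetical illustration of a pass-through: the rule builds its own launcher
# argv and appends whatever extra args the caller (e.g. a Pants test) supplies.
# `--disable-ansi-colors` is a JUnit Platform Console Launcher flag for plain
# output; verify exact flag names against the version being used.
from typing import Sequence


def junit_launcher_argv(classpath: str, extra_args: Sequence[str] = ()) -> list:
    base = [
        "java",
        "-cp",
        classpath,
        "org.junit.platform.console.ConsoleLauncher",
        "--scan-classpath",
    ]
    return [*base, *extra_args]


# A test of the junit rules could then request plain, ASCII-friendly output:
argv = junit_launcher_argv("lib.jar:tests.jar", ["--disable-ansi-colors"])
```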
w
ok! full circle. but non test code doesn’t need to parse. just capture.
f
correct, non-test code does not need to parse
but my hacky flakiness fix is not a good long-term solution, so it needs something more than the “5 minute” fix I did
w
iirc, we do something related to --color for pytest too. not sure what it is.
a
Oh, I see. So we pass in that extra flag using PYTEST_ADDOPTS, which lets us handle things in a test… so we could probably do the same thing for JUnit to enable output formatting in certain cases?
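For reference, a sketch of the pytest side: PYTEST_ADDOPTS is an environment variable that pytest itself reads as extra command-line options, so a test can inject flags without changing the command line the rule constructs (the test path and --color=no flag here are just examples):

```python
# Sketch: pytest reads PYTEST_ADDOPTS from the environment and treats its
# contents as extra command-line options, so a test can force plain output
# without touching the rule's own invocation.
import os
import subprocess

env = {**os.environ, "PYTEST_ADDOPTS": "--color=no"}
subprocess.run(["pytest", "tests/test_example.py"], env=env, check=False)
```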