# general
r
Hi pants, I am trying to capture summary output from a test run and having some trouble. Is there a convenient way to write the result summary at the end of a test run to a file or env var, like `RESULTS=$(./pants test ::)`? I also tried using pytest plugins to generate summary reports (like `pytest-html`) and they weren’t working, possibly because the full results would have to be aggregated from the parallel test runs. I noticed that the summary (i.e. the list of `𐄂 test:test failed in 1.32s.` lines at the end of the output) goes to stderr, so I added a `2>&1`, but then it captures other errors too. Is there an easy way to just capture the summary?
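A minimal sketch of one way to pull just those lines out of the combined output, assuming a run captured with `./pants test :: > output.log 2>&1`. The `failed in 1.32s.` format comes from the example above; the `succeeded` wording is an assumption:

```python
# Hypothetical sketch: keep only the per-target summary lines from
# mixed Pants test output. The "failed in 1.32s." shape is taken from
# the example above; the "succeeded" variant is a guess.
import re

SUMMARY_RE = re.compile(r"(succeeded|failed) in \d+\.\d+s\.$")

def extract_summary(lines):
    """Return only the per-target result lines from mixed test output."""
    return [line for line in lines if SUMMARY_RE.search(line)]

if __name__ == "__main__":
    sample = [
        "WARN: some unrelated warning",
        "✓ src/foo:tests succeeded in 0.91s.",
        "𐄂 src/bar:tests failed in 1.32s.",
    ]
    print("\n".join(extract_summary(sample)))
```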
w
is the test XML useful for you, or is this exclusively about a text summary? https://www.pantsbuild.org/docs/reference-test#section-xml-dir
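(For reference, that option can also be set in pants.toml — a sketch; the directory name here is just an example:)

```toml
# Write one JUnit-style XML file per test module into dist/test-results.
[test]
xml_dir = "dist/test-results"
```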
r
Thanks for your response! I was hoping for an aggregated summary that’s either human-readable or HTML/JSON. The XML output could potentially be useful, but I’d have to aggregate it somehow since it writes one file per module. Does my reasoning for why pytest plugins like `pytest-html` don’t work as intended when used with `pants test` make sense? Ideally I would just be able to use that, or another pytest results-aggregator plugin.
w
yea, that’s correct: you would get a report per process. our support for test coverage uses a separate aggregation step to merge coverage… not sure whether plugins like `pytest-html` would support that
r
Gotcha. I also wasn’t able to access each process’s test report (XML, or the HTML that I was having it generate via `pytest-html`) after the run completed. The logs say that it generates files like `/private/var/folders/kx/_whrt0l54d9_350g0xwn9zp40000gn/T/process-executionOdmirF/foo.html`, but the files aren’t there. Should they be, or is it expected for them to get cleaned up?
w
it’s expected for them to be cleaned up: test runs occur in sandboxes, and only outputs that Pants is aware of are actually captured (currently the XML and any coverage information)
r
Ok cool. If I wanted to grab those per-process html reports is there something I could do? (Also don’t want to take up too much of your time on this rabbit hole, I appreciate your help!)
w
no worries.
i think that the big picture of “i’d like less noisy test output” is worth tackling… whether it’s pytest-specific, or just about being able to turn down the noise in CI, is a question though.
is reducing the log level to `-lwarn` an option? iirc @bitter-ability-32190 does that, and if that is helpful to a few folks we should consider whether we need to get quieter by default
r
I’m not too worried about the noise as long as I can grab the summary to present to a human. The basic idea I’m trying to accomplish is to post a sort of “advisory” test summary to Slack for daily 3P integration test runs.
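One way to build that Slack-friendly summary from the per-module XML files — a sketch assuming the reports land in `dist/test-results` via the `[test].xml_dir` option and use standard JUnit attributes (`tests`, `failures`, `errors`); the output wording is made up:

```python
# Hypothetical sketch: roll the per-module JUnit XML files written by
# Pants (with [test].xml_dir = "dist/test-results") up into one short,
# human-readable line to post to Slack. The path and message format
# are assumptions, not Pants behavior.
import glob
import xml.etree.ElementTree as ET

def summarize(xml_paths):
    tests = failures = errors = 0
    for path in xml_paths:
        root = ET.parse(path).getroot()
        # The root may be a single <testsuite> or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            tests += int(suite.get("tests", 0))
            failures += int(suite.get("failures", 0))
            errors += int(suite.get("errors", 0))
    passed = tests - failures - errors
    return f"{passed}/{tests} passed, {failures} failed, {errors} errors"

if __name__ == "__main__":
    print(summarize(glob.glob("dist/test-results/*.xml")))
```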
b
🙈 I actually had to bump it back to `info` because some options (like no-process-cleanup and stats) log at info, and I was tired of trying to track them down for the log-level-by-target whitelist. It's still a goal of mine to make pants less noisy in CI tho
w
@ripe-vase-85561: i can’t really vouch for it, but it looks like https://github.com/inorton/junit2html will turn multiple XML files into an html report if given a directory?