# general
h
We're interested in scraping some metrics about test failure rates. Since we have remote caching enabled, we'd like to avoid attributing cached results as a "successful" test run. Are there any mechanisms in `pants test` we can use to alter the reporting so we can be informed about this? Or are we in plugin territory?
b
Have you turned on the stats log?
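(That's the `[stats]` subsystem, if memory serves; a minimal sketch for `pants.toml`, assuming the `log` option is what you're after:)
```toml
[stats]
# Print aggregated counters (cache hits, process executions, etc.)
# at the end of each run.
log = true
```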
h
We did have it turned on at one point, but I don't think anyone was getting value out of it, so it was disabled.
b
That's all that comes to mind: the stats log, plus parsing the log output to see whether each result was memoized / local cache / remote cache / ran
h
Yeah. It seems a little difficult to attribute with just raw counters, though?
b
I don't understand the OP precisely, but yeah probably. And probably plugin territory
h
Here's a user story:
1. Call `pants test ::`
2. Filter the tests down to ones that actually ran and weren't pulled from the cache
3. Update some centralized store with an official success/failure rate per test

I think we could get around this somewhat by using `--changed-since`
b
The test result lines tell you what ran locally or not
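(A rough sketch of that kind of log parsing; the line format and the `(memoized)` / `(cached)` suffixes here are assumptions about what pants prints, not a documented contract:)
```python
import re
import sys

# Assumed shape of pants test result lines, e.g.:
#   ✓ src/py/proj/tests:tests succeeded in 1.32s (memoized).
# A trailing parenthesized marker (if any) means the result came from a cache.
RESULT_LINE = re.compile(
    r"^[✓✕] (?P<target>\S+) (?P<status>succeeded|failed)"
    r".*?(?:\((?P<source>memoized|cached)\))?\.?$"
)

for line in sys.stdin:
    m = RESULT_LINE.match(line.strip())
    if not m:
        continue
    source = m.group("source") or "ran"  # no cache marker: it actually executed
    print(m.group("target"), m.group("status"), source)
```
(fed with something like `pants test :: 2>&1 | python parse_results.py`)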
h
Sure, but I'd hope to not have to parse the pants output when we already have the pytest report that can be exported.
b
Yeah that's another option
As is a work unit handler plugin
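(Something along these lines, as an untested sketch against the streaming workunit API; the wiring mirrors how such callbacks get registered, but the filtering inside the callback is an assumption:)
```python
# register.py of an in-repo plugin: a streaming workunit callback sketch.
from pants.engine.rules import collect_rules, rule
from pants.engine.streaming_workunit_handler import (
    WorkunitsCallback,
    WorkunitsCallbackFactory,
    WorkunitsCallbackFactoryRequest,
)
from pants.engine.unions import UnionRule


class TestMetricsCallback(WorkunitsCallback):
    @property
    def can_finish_async(self) -> bool:
        # Don't block the end of the pants run on shipping metrics.
        return True

    def __call__(self, *, started_workunits, completed_workunits, finished, context):
        for wu in completed_workunits:
            # Hypothetical filter: workunits are dicts; which names/metadata
            # identify "a test that actually ran" needs checking against real output.
            if "test" in wu.get("name", ""):
                print(wu.get("name"), wu.get("metadata"))


class TestMetricsCallbackFactoryRequest(WorkunitsCallbackFactoryRequest):
    pass


@rule
def metrics_callback_factory(
    _: TestMetricsCallbackFactoryRequest,
) -> WorkunitsCallbackFactory:
    return WorkunitsCallbackFactory(lambda: TestMetricsCallback())


def rules():
    return [
        UnionRule(WorkunitsCallbackFactoryRequest, TestMetricsCallbackFactoryRequest),
        *collect_rules(),
    ]
```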
b
As a middle ground between "pants does this nicely automatically" and "write a full plugin", might this be possible with just the `pants.backend.experimental.tools.workunit_logger`'s ability to write JSON files to a directory? E.g. in `pants.ci.toml`, something like:
```toml
...
backend_packages.add = ["pants.backend.experimental.tools.workunit_logger"]
...

[workunit-logger]
enabled = true
logdir = "..."
```
This is of course experimental, but likely less fragile than a full plugin
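(A post-run script could then slurp those files; a sketch, assuming each file in the log dir is a JSON list of workunit dicts, where the exact keys would need checking against real output:)
```python
import json
from pathlib import Path

# Hypothetical post-processing of workunit-logger output. The directory and
# the workunit dict keys ("name", "metadata") are assumptions to verify.
logdir = Path("dist/workunit-logs")  # whatever `logdir` points at
for f in sorted(logdir.glob("*.json")):
    for wu in json.loads(f.read_text()):
        name = wu.get("name", "")
        if "test" in name:
            print(f.name, name, wu.get("metadata", {}))
```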
👀 1
other wild hack idea: do the test reports have timestamps in them? If yes, one could filter out any reports whose timestamp isn't within the window covered by the particular `pants test ::` invocation (although this can still give some false "yes, this test ran" answers, if a concurrent run fills in the cache just before the current run tries to run the test / checks the cache)
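(Concretely, something like this, assuming pytest-style JUnit XML where each `<testsuite>` element carries an ISO `timestamp` attribute:)
```python
import xml.etree.ElementTree as ET
from datetime import datetime
from pathlib import Path

# Keep only reports whose testsuite timestamp falls inside the window of
# this particular `pants test ::` invocation. run_started/run_finished
# would be recorded around the invocation itself.
run_started = datetime.fromisoformat("2024-01-01T12:00:00")
run_finished = datetime.fromisoformat("2024-01-01T12:10:00")

for report in Path("dist/test/reports").rglob("*.xml"):  # assumed report location
    for suite in ET.parse(report).getroot().iter("testsuite"):
        ts = suite.get("timestamp")
        if ts and run_started <= datetime.fromisoformat(ts) <= run_finished:
            print(report, suite.get("name"), "ran in this invocation")
```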
h
Great ideas. Thanks for sharing. I’ll play around with them some time.
b
I forgot I added that backend 🤦‍♂️
p
@broad-processor-92400 or perhaps someone else — would you know why the JSON files generated by the workunit-logger are empty? I have the exact same config as above in `pants.toml`. Running pants 2.22. Files are created in `.pants.d` but they’re all `[]`:
```toml
backend_packages = [
  ...
  "pants.backend.experimental.tools.workunit_logger",
  ...
]

[workunit-logger]
enabled = true
```
Bigger-picture: I’m trying to log everything pants outputs to the screen into a file so I can set up “log shipping” to a central place for analysis. Not sure if the workunit-logger accomplishes that, but I’m trying it out
b
It looks like that invocation has an error very early in its processing, while setting up/before running any processes; so I wonder if no work units were executed. Can you post a separate thread about this and we can debug there? Thanks
👍 1
p
Sure thing! We can talk here if you’d like