# general
f
I am not sure whether we have this, but is there a way to obtain information on what test modules were run and what were memoized / cached after a `test` goal was executed? 🧵
👀 1
```
$ pants --no-pantsd test --report ::
23:13:22.00 [INFO] Completed: Run Pytest - helloworld/translator/translator_test.py:tests - succeeded.
23:13:22.20 [INFO] Completed: Run Pytest - helloworld/greet/greeting_test.py:tests - succeeded.

✓ helloworld/greet/greeting_test.py:tests succeeded in 0.20s.
✓ helloworld/translator/translator_test.py:tests succeeded in 0.35s (cached locally).

Wrote test reports to dist/test/reports
```
it would be helpful to read from somewhere that `helloworld/greet/greeting_test.py` was executed and `helloworld/translator/translator_test.py` was cached locally. I'd love to avoid parsing plain text 😛
I've added support for JSON Lines output for the `stats` subsystem (see https://github.com/pantsbuild/pants/pull/20579), but it has no information about the test modules themselves. Unless I've missed it somewhere, we'll probably need to add this extra reporting to Pants. I'll take a look at the source code in the meanwhile
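For context, JSON Lines is simply one JSON object per line, so consuming such a file downstream is trivial. A minimal sketch of a reader (the helper name `parse_jsonl` is mine for illustration, not Pants code):

```python
import json
from typing import Iterable


def parse_jsonl(lines: Iterable[str]) -> list[dict]:
    """Parse JSON Lines input: one JSON object per non-empty line."""
    records = []
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines between records
            records.append(json.loads(line))
    return records
```

In CI you could feed this the stats output file line by line and filter on whatever fields the record exposes.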
b
f
let me take a closer look. Getting this info out of the test results doesn't seem to be that hard:
```diff
diff --git a/src/python/pants/core/goals/test.py b/src/python/pants/core/goals/test.py
index b21c37e459..b7861381ff 100644
--- a/src/python/pants/core/goals/test.py
+++ b/src/python/pants/core/goals/test.py
@@ -940,6 +940,7 @@ async def run_tests(
     exit_code = 0
     if results:
         console.print_stderr("")
+    json_results = {}
     for result in sorted(results):
         if result.exit_code is None:
             # We end up here, e.g., if we implemented test discovery and found no tests.
@@ -949,7 +950,7 @@ async def run_tests(
         if result.result_metadata is None:
             # We end up here, e.g., if compilation failed during self-implemented test discovery.
             continue
-
+        json_results[result.addresses[0]] = result.result_metadata._source
        console.print_stderr(_format_test_summary(result, run_id, console))

         if result.extra_output and result.extra_output.files:
@@ -962,7 +963,7 @@ async def run_tests(
                 logger.info(
                     f"Wrote extra output from test `{result.addresses[0]}` to `{path_prefix}`."
                 )
-
+    print(json_results)
     rerun_command = _format_test_rerun_command(results)
     if rerun_command and test_subsystem.show_rerun_command:
         console.print_stderr(f"\n{rerun_command}")
```
```
./pants_from_sources --no-pantsd test ::
Pantsd has been turned off via Flag.
23:35:27.67 [INFO] Completed: Run Pytest - helloworld/translator/translator_test.py:tests - succeeded.
23:35:27.67 [INFO] Completed: Run Pytest - helloworld/greet/greeting_test.py:tests - succeeded.

✓ helloworld/greet/greeting_test.py:tests succeeded in 0.36s (cached locally).
✓ helloworld/translator/translator_test.py:tests succeeded in 0.28s (cached locally).
{Address(helloworld/greet/greeting_test.py:tests): 'hit_locally', Address(helloworld/translator/translator_test.py:tests): 'hit_locally'}
```
Does the https://www.pantsbuild.org/2.19/reference/subsystems/workunit-logger backend provide this info?
it does, but only partially. It told me that the test module `helloworld/greet/greeting_test.py` was run, but it doesn't tell me anything about the other modules (were they memoized? cached locally? cached remotely?)
```
pants test ::
23:41:58.73 [INFO] Completed: Run Pytest - helloworld/translator/translator_test.py:tests - succeeded.
23:41:58.98 [INFO] Completed: Run Pytest - helloworld/greet/greeting_test.py:tests - succeeded.

✓ helloworld/greet/greeting_test.py:tests succeeded in 0.24s.
✓ helloworld/translator/translator_test.py:tests succeeded in 0.35s (memoized).
23:41:58.98 [INFO] Wrote log to artifacts/pants_run_2024_05_11_23_41_58_701_5b78f507156843cfaab9ff8b3d8f1261.json
```
so the answer is we kind of have some of the information available. However, we can't run this command on `test ::` in a CI build, as it would have an enormous performance hit.
I think it may be reasonable to add a flag to emit some of the test result metadata as JSON that would be saved into a JSON Lines file, similarly to what we do with the `stats` subsystem
w
I'd be suuuuper interested in any pass/fail/memoized/cached introspection we could offer - as I could also plug that into an IDE Test Explorer (in VSCode for example). Right now, the extent of it is mostly "here are the test files" and "here's how we run them" and then parsing text
Emitting pass/fail, cache status, and run duration would be 💰
f
thanks, @wide-midnight-78598, that's helpful to know. For the run duration, the JUnit report contains the `time` attribute with the duration, but, yes, cache status and execution status for individual test modules, at least, would be helpful. I'll file an issue and start working on a feature.
🎉 1
it took a while, but better late than never 😄 https://github.com/pantsbuild/pants/pull/21171 @wide-midnight-78598
🎉 1