# development
a
One other thought: a lot of the time the heuristic I really want isn't quite as static as levels... "Show me this if it failed, or took a non-trivial time". That felt like one of the things people found very annoying about v1 logging - logging trivial workunits which spent <100ms no-oping
👍 2
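(For illustration only: a minimal sketch of the heuristic described above, using hypothetical names like `WorkunitResult` and `should_render` rather than any actual Pants API.)

```python
from dataclasses import dataclass

@dataclass
class WorkunitResult:
    name: str
    succeeded: bool
    duration_ms: float

def should_render(result: WorkunitResult, slow_threshold_ms: float = 100.0) -> bool:
    # Show the workunit if it failed, or if it took a non-trivial amount of time.
    return (not result.succeeded) or result.duration_ms >= slow_threshold_ms

# A trivial, fast, successful run stays quiet; a slow or failed one is shown.
print(should_render(WorkunitResult("lint", succeeded=True, duration_ms=12.0)))   # False
print(should_render(WorkunitResult("test", succeeded=False, duration_ms=12.0)))  # True
```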
w
no-oping is eliminated by pantsd and caching though.
a
Eric's example isn't actually no-op, it's "this particular run wasn't interesting, but if it was slow or had anything other than the default expected output it would've been"
💯 1
w
do you want to see no output for your test because it took less than 100 ms…?
h
I want output, but only from the underlying tool. I don’t want the noise of Pants adding all this extra information that’s irrelevant
w
but how do you identify which tool output corresponds to which target?
needs a header… which is the completing workunit, i think.
h
Yes, I agree it needs a header. What I want is for it to look like 1.29.x. I thought 1.29 had a good balance of noise to signal. Honestly, I don’t think any of the new log statements add value.
And when you’re running a simple case, like just one test or `test --debug`, we should not add any headers or summary. 1.29 does this atm
w
i mostly agree. but i think that that is addressed by (1)
h
Yeah, (1) certainly would help. But that needs to apply to `Process` too. I don’t think it’s valuable to log when running every single `Process`. The output of the `Process` is what I care about
w
with a header to identify what it is, yea.
h
Yes
w
iff it is info or above.
there are processes that should probably be at debug.
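(Again purely illustrative: a sketch of the behavior being discussed, with invented names like `render_process_output` and `LogLevel`; this is not the real Pants `Process` handling.)

```python
from enum import IntEnum

class LogLevel(IntEnum):
    DEBUG = 10
    INFO = 20
    WARN = 30
    ERROR = 40

def render_process_output(description: str, stdout: str, level: LogLevel,
                          threshold: LogLevel = LogLevel.INFO) -> None:
    # Processes below the threshold (e.g. internal ones kept at debug) stay quiet.
    if level < threshold:
        return
    # A short header identifies which target/process the tool output belongs to,
    # followed by only the underlying tool's output.
    print(f"== {description} ==")
    print(stdout)

render_process_output("pytest src/python/app:tests", "1 passed in 0.5s", LogLevel.INFO)
render_process_output("resolve requirements", "...", LogLevel.DEBUG)  # suppressed
```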
a
And even the output… Look at https://travis-ci.com/github/pantsbuild/pants/jobs/341659189 - All of those blocks that say:
```
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /b/f/w
plugins: cov-2.8.1, icdiff-0.5, timeout-1.3.4
collected 1 item
pants_test/contrib/awslambda/python/test_python_awslambda_integration.py . [100%]
============================== 1 passed in 25.90s ==============================
```
are noise to me after the run finishes. They would be useful if a test was hanging. But now, they get in the way of me finding the failure.
w
@average-vr-56795: agreed… implicit in (3) perhaps is that i don’t think we should be rendering stdio for successful runs (ie things not at `warn`/`error`)
a
Cool 🙂 I look forward to seeing how it looks!
Oh, and one more while I’m on my soap box… Can we sort the summary at the end by test status, rather than target name?
💯 1
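(A sketch of what sorting the end-of-run summary by test status rather than target name could look like; the `results` data and `status_order` mapping are made up for the example.)

```python
results = [
    ("PASSED", "src/python/app:tests"),
    ("FAILED", "src/python/lib:tests"),
    ("PASSED", "src/python/util:tests"),
]
# Order by status first (failures last, so they end up next to the prompt),
# then by target name within each status.
status_order = {"PASSED": 0, "FAILED": 1}
for status, target in sorted(results, key=lambda r: (status_order.get(r[0], 0), r[1])):
    print(f"{status:7} {target}")
```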
w
. @hundreds-father-404: ^ thoughts on that aspect of (3)? i know that you had said that you wanted all tool output regardless
h
I do, but ideally it’s controllable. For example, Pytest may succeed but have deprecation warnings. I want to see those
w
that should be addressed by (2) i think
so yea. cool.
👍 1