# general
b
What's the canonical Pants way of declaring pytest tests are "manual"?
• skip_tests
• a Pants tag plus remembering to set the CLI arg?
• A pytest mark, and an arg in our pytest config to skip?
• Something I'm missing?
h
If the entire test file needs to be skipped, I unfortunately don't think that using a pytest mark will work, because pytest will complain that no tests ran, which I think is exit code 5
My recommendation is probably to use skip_tests, so that you can still run pants test ::. It may be helpful to use the overrides field so that things are more precise
If you only need to skip some tests in a file, then you can use pytest marks. Alternatively, you can always use pytest marks and add a trivial helper test that always passes so that pytest doesn't complain there are no tests
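For illustration, a minimal BUILD sketch of that recommendation, assuming a recent Pants version with the overrides field (the file name is made up):
Copy code
python_tests(
    name="tests",
    overrides={
        "test_manual_stuff.py": {"skip_tests": True},
    },
)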
h
What we did was adopt a convention to name our files something like my_test_manual.py, which avoids them being caught by the filters of python_tests. Then, we had a section like
Copy code
if __name__ == '__main__':
   unittest.main()
and used the pex_binary that tailor adds to run the tests.
👍 1
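For illustration, the kind of target and invocation being described might look roughly like this (the path and target name are hypothetical):
Copy code
pex_binary(
    name="my_test_manual",
    entry_point="my_test_manual.py",
)
and then the tests are run manually with something like ./pants run path/to:my_test_manual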
h
Thanks for sharing that. So then did you have those files included in the python_sources target?
h
yup. but the glob filters already did that for us
👍 1
b
Oh VERY interesting. I need to sit on that one
@hundreds-father-404 is it worth making a feature request to allow a manual option for python_test(s)? Along with the machinery to:
• Run the test if specified exactly
• Run if a flag is given to run manual tests?
h
What do you mean a manual option?
b
oops typo. Just option 🙂
h
I'm interested in hearing more about the flag idea. How would that be different than tags?
b
It's like an inverse tag. Tags are opt-out. This would be opt-in
Alternatively, a mechanism to invert Pants' choosing a target based on a tag. E.g. I set a config setting that the manual tag is always skipped unless otherwise stated
Maybe I can swing it with a config with [GLOBAL].tag="-manual"?
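As a sketch, that config might look like this in pants.toml (assuming the global tag option accepts the same +/- syntax as the --tag flag):
Copy code
# pants.toml
[GLOBAL]
tag = ["-manual"]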
It'd be nice if goals themselves participated in tag filtering. Although I struggle to think of a pex_binary or other target which might be tagged manual
h
Oh, I like that idea of tags being opt-in. That's my biggest complaint with tags right now: you have to remember to do the right filtering. Generally I would bias towards a more generic solution here like using tags, because I suspect this would be useful for other goals like the package goal. Feel free to open a ticket with this idea
b
Hmmmm....
Copy code
python_tests(
    name="tests",
    tags=["manual_test"],
)
Copy code
./pants --tag='-manual_test' test path/to/tests/:
Seems to run the tests
As does
./pants --tag='-manual_test' test ::
Oh but list does do the filtering. Weeeeeird
Copy code
[DEBUG] Completed: Scheduling: Run Pytest for ...
h
Maybe I'll try to slip it in after I finish some docs work and a presentation I'm preparing this week. No promises though
I think this is now the third or fourth time that this issue has come up this month
b
I might look into going the mark route + trying to silence the error 5 for all-manual tests
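One possible way to silence it, sketched as a conftest.py hook that rewrites the no-tests-collected exit status (this is an assumption on my part, not something the thread confirmed works through Pants):
Copy code
# conftest.py
import pytest

def pytest_sessionfinish(session, exitstatus):
    # Treat "no tests collected" (exit code 5) as success.
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK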
h
use this
Copy code
def test_hacky():
    pass
h
Yeah that's another thing we did
h
We've also gone back and forth on whether to special-case exit code 5 so that it no longer causes Pants to fail. There are pros and cons to both approaches, so thus far we have stuck with keeping normal pytest semantics
h
That came up in the marked case because Pants collects things that we normally wouldn't have, so we added a dummy unit test in the same file.
I'd be very interested in seeing a solution that does not require that
b
I'm gonna try and whip one up
🙌 1
Oh huh it doesn't report exit 5 if it skips all tests
👀 1
Copy code
import pytest

@pytest.mark.skip
def test_nothing():
    pass
Copy code
joshuacannon@CEPHANDRIUS:~/work/techlabs$ ./pants test --force --output=all path/to/test_foo.py  -- -vv
16:01:25.17 [INFO] Completed: Run Pytest - path/to/test_foo.py:tests succeeded.                                                                     
============================= test session starts ==============================                                                                                            
platform linux -- Python 3.8.12, pytest-6.1.1, py-1.11.0, pluggy-0.13.1 -- /home/joshuacannon/.cache/pants/named_caches/pex_root/venvs/s/cf02046f/venv/bin/python3.8
cachedir: .pytest_cache
rootdir: /tmp/process-executionUMiF6w, configfile: pytest.ini
plugins: profiling-1.7.0, kafka-0.5.0, forked-1.4.0, cov-3.0.0, mock-3.7.0, xdist-2.4.0, timeout-2.1.0, mockito-0.0.4
collecting ... collected 1 item

path/to/test_foo.py::test_nothing SKIPPED        [100%]

- generated xml file: /tmp/process-executionUMiF6w/path.to.test_foo.py.tests.xml -
============================== 1 skipped in 0.02s ==============================



✓ path/to/test_foo.py:tests succeeded in 0.24s
Gonna try this out Thursday and report back: https://pypi.org/project/pytest-manual-marker/
👀 1
that plugin deselects instead of skips 😞
Therefore error code 5
👎 1
Ronny is a beast:
Good point, will add matching cases hopefully by the end of the week
h
Will that only work for manual tests? We have some custom markers for integration tests, so it sounds like we'd be right back in the same "make a dummy test to circumvent the deselecting error code" issue.
b
I think if you skip instead of deselect you're OK
h
Well, we're just using the pytest.mark.<my_custom_marker> after specifying custom markers in pytest.ini. I'm not sure how to configure the skip/deselect behavior. Is that something supported out of the box?
seems like there's some stuff here
we currently have something similar that controlled test collection but pants ignored that because it does the collection on its own
b
If all you're doing is marking integration tests for the sake of marking, then those tests should be automatically run, no? Are you filtering tests based on that tag?
h
No, they're only run if we provide the custom marker in the pytest call.
b
There has to be some machinery in the middle then. Just marking them doesn't deselect them.
h
Yes, we have some tooling in conftest.py that used pytest_collection_modifyitems to control what was collected for running. My theory is that since we currently have modules with one marked test in them, Pants collects them, conftest deselects it, and then the Pants process thinks that no tests were run.
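For context, one common shape of that kind of conftest.py machinery, assuming a hypothetical --integration flag and the integration marker (deselecting every test in a module is what produces exit code 5):
Copy code
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--integration", action="store_true", default=False,
                     help="also run tests marked 'integration'")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--integration"):
        return
    selected, deselected = [], []
    for item in items:
        (deselected if item.get_closest_marker("integration") else selected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        # A module whose tests are all deselected ends up with nothing to run -> exit code 5.
        items[:] = selected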
r
Just been directed to this thread, my context is...
Copy code
# pytest.ini

[pytest]
markers =
    integration: mark test as integration test
and various test modules that are marked as integration tests, and others that are just unit tests...
Copy code
# test_some_integration_tests.py

import pytest

pytestmark = pytest.mark.integration

def test_something():
  # some integration tests that rely on a db resource
  ...
Copy code
# test_some_unit_tests.py

import pytest

# no marks

def test_something_else():
  # some unit tests that don't rely on any resources
  ...
and getting exit code 5 for both of these attempts to run tests...
./pants test :: -- -m "not integration"
-- i.e. just running "unit" tests All of the modules that contain
pytestmark = pytest.mark.integration
return exit code 5 due to
collecting ... collected x item / x deselected / 0 selected
if the unit tests pass, create db and other "expensive" resources in the CI and run integration tests...
./pants test :: -- -m "integration"
-- i.e. just running integration tests. All of the modules that don't contain pytestmark = pytest.mark.integration also return exit code 5 for the same reason.
I might have to create multiple python_tests targets with name=test_unit and name=test_integration plus tags, and explicitly add the test modules to each target, as the solution for the moment?
Copy code
python_tests(
  name="tests",
  sources=["test_*.py", "!test_an_integration_test.py"]
)

python_tests(
  name="integration_tests",
  sources=["test_an_integration_test.py"],
  tags=["integration_tests"]
)
And then running unit tests/integration tests explicitly becomes...
Copy code
# run unit tests
./pants --tag=-integration_tests test :: -- -m "not integration"

# run integration tests
./pants --tag=integration_tests test :: -- -m "integration"
Although this isn't working for me 😭 still getting integration_tests running even when specifying --tag=-integration_tests 🤷
b
@high-yak-85899 try skipping the tests instead of deselecting them? Not sure of the pytest incantation, but the experiment above shows you won't get exit code 5
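A minimal sketch of that incantation, reusing the hypothetical --integration flag and integration marker from the earlier sketch, but adding a skip marker instead of deselecting:
Copy code
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    if config.getoption("--integration"):
        return
    skip = pytest.mark.skip(reason="integration test; pass --integration to run")
    for item in items:
        if item.get_closest_marker("integration"):
            # Skipped tests still count as collected, so pytest doesn't exit with code 5.
            item.add_marker(skip)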
@rapid-exabyte-76685 you're hitting the issue I was hitting earlier. See Eric's link to 2 relevant issues: https://pantsbuild.slack.com/archives/C046T6T9U/p1649191574386879?thread_ts=1649189336.248079&cid=C046T6T9U
r
Yes. And unfortunately ./pants --tag='-integration_test' test :: doesn't work. And I forget if there is a way to recursively match all test targets with a given name, e.g. ./pants test integration_tests/**:: or something
b
I've adapted the linked plugin to have less weird behavior. Now tests are skipped if manual, unless --run-manual is specified:
Copy code
import pytest


def pytest_configure(config):
    # Register the "manual" marker so pytest doesn't warn about an unknown mark.
    config.addinivalue_line("markers", "manual: mark tests which need a person to execute them")


def pytest_addoption(parser):
    # Opt-in flag: manual tests only run when --run-manual is passed.
    group = parser.getgroup("manual", "configuration of manual tests")
    group.addoption(
        "--run-manual",
        action="store_true",
        default=False,
        help="run manual tests",
    )


@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # Relabel skips whose reason is "manual" with their own outcome.
    outcome = yield
    rep = outcome.get_result()
    if call.excinfo and isinstance(call.excinfo.value, pytest.skip.Exception):
        if call.excinfo.value.msg == "manual":
            rep.outcome = "manual"


@pytest.hookimpl(tryfirst=True)
def pytest_report_teststatus(report):
    # Report manual tests as "M" / "MANUAL" in the terminal output.
    if report.outcome == "manual":
        return "manual", "M", "MANUAL"


@pytest.hookimpl(tryfirst=True)
def pytest_collection_modifyitems(config, items):
    # Skip (rather than deselect) manual tests unless --run-manual was given,
    # so a module containing only manual tests still counts as collected.
    run_manual = config.getoption("run_manual")

    for item in items:
        if item.get_closest_marker("manual") is not None and not run_manual:
            item.add_marker(pytest.mark.skip("manual"))
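For reference, a hypothetical test using the marker, and how the flag could be passed through to pytest via Pants (the path is made up):
Copy code
import pytest

@pytest.mark.manual
def test_needs_a_human():
    ...
Copy code
./pants test path/to/tests: -- --run-manual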