# general
I have a collect_ignore_glob callout in my conftest.py to keep certain modules from being run. I know my conftest.py is getting picked up properly because I can see it in ./pants dependencies calls. Are there any special things to get conftest to apply other than making sure it's a dependency like that? The docs make it seem like it should just work ™️ after that.
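For reference, a minimal sketch of the kind of conftest.py callout being described. The glob here is hypothetical; the actual pattern used in the repo isn't shown above.

```python
# conftest.py (sketch; "*_manual_test.py" is a hypothetical glob)
# Tell pytest's collector to skip any module matching these patterns.
collect_ignore_glob = ["*_manual_test.py"]
```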
It should just work after that. I have less confidence about how pytest hooks declared in conftest.py like you have interact with the way we sandbox and call pytest though. Does your plugin hook not appear to fire?
Yeah, it's collecting those tests anyway, and they fail, which makes sense because they're not intended to work in this environment. I can obviously get around this by just renaming those files, but was wondering if I was close.
OK. Speak up if you need more help. The combination of: 1. "... and fails which makes sense because it's not intended to work." 2. "I can obviously get around this by just renaming those files ..." doesn't make sense to me. The first sounds like everything is a-ok as expected; the second sounds like something is not ok and needs a workaround. If you're good to go, though, I'm happy to be left confused.
The conftest.py file got picked up happily, but maybe Pants is collecting tests before conftest.py gets a chance to say what should be collected?
Pants definitely collects files before running anything. It then invokes pytest against all those files: https://github.com/pantsbuild/pants/blob/main/src/python/pants/backend/python/goals/pytest_runner.py#L309
The manual tests usually need some files or something special on one person's machine in order to run. With those files missing, the tests fail. We normally prevent them from being collected by pytest with that conftest flag. Since pytest's collection glob happens to match those files, I can just rename the few offending files to something that doesn't match, so pytest never collects them.
It's probably the latter,
pytest .... all your files explicitly listed here as positional args
that defeats your hook?
Yeah, I would think so. Maybe that's a bug? Idk. This is actually the first I am learning of this behavior in our repo and I already didn't like it. Felt like a lot of indirection.
Would @pytest.mark.skipif(not function_to_determine_file_presence(), reason="You don't have the required files.")
work? Presumably that's a lot of churn to add, though, and your conftest hook approach is more transparent to end-user tests.
Yup, that probably also works, though maybe you see a little more clutter if you have a lot of skipped tests.
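A runnable sketch of the skipif approach, assuming pytest is available. The helper name and the fixture path are hypothetical, not taken from the repo. Note that skipif evaluates the truthiness of its first argument, so you pass the result of the check, not the function object (which would always be truthy).

```python
import os

import pytest


# Hypothetical helper standing in for the repo's real file-presence check.
def required_files_present():
    return os.path.exists("fixtures/manual_data")


@pytest.mark.skipif(not required_files_present(),
                    reason="You don't have the required files.")
def test_needs_local_files():
    # Only runs on machines where the fixture data exists.
    assert True
```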
Actually, seems like the pants output is a little misleading. It shows output as "succeeded" when the test is skipped.
In general, Pants is dumb about the nature of the subprocesses it executes at that layer: exit code 0 means success, non-zero means failure.
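As a sketch of that contract (nothing Pants-specific here, just the generic exit-code rule applied to a subprocess):

```python
import subprocess
import sys

# Run a child process that exits with a non-zero status.
proc = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"])

# The only signal inspected at this layer is the exit code.
status = "success" if proc.returncode == 0 else "failure"
print(proc.returncode, status)  # 3 failure
```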
Got it. Just has me wondering about maintaining insights on false positives for our CI chain.
Usually CI setups can ingest those and turn them into reports or take other actions on them.
@high-yak-85899 there is also the option described here: https://www.pantsbuild.org/docs/existing-repositories#3-set-up-tests
That feels a little overreaching. They are colocated with other things that work fine. pytest skipif markers (which we have already been using) seems like a good path. But that's a good option to keep in mind if I run into a real blocker.
Maybe this could be worth a bug report. I just found a different instance of this. I had
python_files = *_test.py
in my pytest config, but because Pants already has a default glob pattern for test files, some files I didn't intend to have picked up were being executed. So, while maybe not a bug, it does seem important/restricting that collection happens without regard for what is in conftest.py or pytest.ini.
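One way to line the two up is to narrow the target's sources so Pants's own file collection matches the python_files setting. This is a hypothetical BUILD sketch; the target name and glob are assumptions, not taken from the repo.

```python
# BUILD (sketch): override the default test-file glob so Pants only
# collects files ending in _test.py, matching python_files = *_test.py.
python_tests(
    name="tests",
    sources=["*_test.py"],
)
```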
I think I found another case where "exit code 0 - success, non-zero - failure" is a problem. We have custom fixtures set up in our conftest.py and properly marked in the test file. As expected, those tests shouldn't run without the proper marker applied via a runtime flag. But because Pants collects the files first and pytest then deselects every test in them, pytest exits non-zero because no tests are run. A silly way to fix this is to put something like the following in the file that has a marked test, so that something always runs and the error isn't thrown.
import unittest

class DontLetPantsFail(unittest.TestCase):
    def test_nothing(self):
        # Always passes, so at least one test runs and pytest
        # doesn't exit non-zero for "no tests collected".
        pass
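For what it's worth, pytest exposes the exit code involved here as a symbolic constant (assuming a reasonably recent pytest, 5.0+):

```python
import pytest

# When pytest collects zero tests (e.g. everything is deselected by a
# marker filter), it exits with NO_TESTS_COLLECTED, which is non-zero,
# so a layer that only checks "zero vs non-zero" reports a failure.
print(int(pytest.ExitCode.NO_TESTS_COLLECTED))  # 5
print(int(pytest.ExitCode.OK))                  # 0
```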