# general
f
I need some help with transient test failures in a plugin I wrote (this is actually nearly the same code that’s in PR https://github.com/pantsbuild/pants/pull/16932). I created a sample repo that reproduces the problem I’m seeing here: https://github.com/bryanwweber/pants-dependency-tracking What I find is that if I run
./pants test --force ::
repeatedly, it fails about 3/4 of the time. I have no idea why this is occurring… Adding as many output and debugging options as I can find in the docs (
--print-stacktrace -ldebug --pex-verbosity=9 --keep-sandboxes=always --no-pantsd --no-local-cache
) and comparing logs between successful and failing runs doesn’t turn up anything obvious. I created this test repo in the first place because, in our actual code, I can’t even run the tests due to a dependency inference error:
./pants test ::
15:48:35.48 [ERROR] 1 Exception encountered:

  UnownedDependencyError: Pants cannot infer owners for the following imports in the target pants-plugins/pep621/pep621_requirements_test.py:tests:

  * .pep621_requirements.PEP621RequirementsTargetGenerator (line: 16)
  * .pep621_requirements.rules (line: 16)

If you do not expect an import to be inferrable, add `# pants: no-infer-dep` to the import line. Otherwise, see <https://www.pantsbuild.org/v2.14/docs/troubleshooting#import-errors-and-missing-dependencies> for common problems.
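(Per the error’s own suggestion, silencing an import would look like this; just a sketch reconstructing the import at line 16 above:)
from .pep621_requirements import PEP621RequirementsTargetGenerator, rules  # pants: no-infer-dep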
As near as I can tell, the configuration between the repos is the same.
Right, so I found the configuration difference causing the dependency inference error 🎉 The question is still: why are the tests failing intermittently?
h
Which platform are you running this on to get the errors?
f
macOS 12.6, M1
h
In your demo repo (and thanks for that! it makes things so much easier to debug), on my macOS 12.6 M1 I get a slightly different outcome: it failed on the first run, then succeeded every time after that.
What error do you see when it does fail?
f
Here’s the error. I find the failures are intermittent: if I run a bunch in a row, I’ll get 5 failures, then 3 successes, then a few more failures, etc. I also found that running with
--debug-adapter
and connecting via VSCode, the tests passed every time I tried (maybe 5 times). Obviously, those runs take much longer since I’m stepping through… I wonder if the timing is related?
h
Hmm, yes, I was seeing a similar error, looks like the issue is
tags=('another-tag', 'jupyter', 'jupyter')
vs
tags=('another-tag', 'jupyter')
Presumably that doubled jupyter tag is wrong
thinking how that can happen, and intermittently at that
f
Yep, that’s the error I noticed as well. I’m going to try removing the code from use as a plugin and see if that helps
OK, so I’m finding that if the code under test is loaded as a backend_package, then it has this failure mode. If I comment the plugin out of backend_packages (and adjust BUILD to use python_requirements instead of pep621_requirements), I can’t reproduce any failures
I just pushed a commit that effects this change
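For reference, the change looks roughly like this (a sketch; the exact backend and target names are assumed):
# In pants.toml, drop the plugin from backend_packages:
# [GLOBAL]
# backend_packages = ["pants.backend.python"]  # "pep621_requirements" entry removed

# In BUILD, swap the target generator:
python_requirements(name="reqs")  # was: pep621_requirements(name="reqs")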
Also, even if the plugin is loaded but not used, I still see the failure
h
Oh, interesting, so a conflict between code-as-plugin and the-same-code-under-test
f
That’s what seems to be the case…
h
So the suspect code would be pep621_requirements.py:67-76
What happens if you don’t do this tagging?
race condition there somehow?
f
If I remove the tagging, the tests seem to consistently pass. I wonder if it’s something to do with using a mutable data structure here and overwriting parts of it, rather than creating a new object from scratch
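To illustrate the kind of thing I mean (a hypothetical sketch, not the actual plugin code):
# Hypothetical: a shared mutable list that gets appended to in place.
# Under repeated or concurrent rule runs, a second call can observe the
# first call's mutation, producing a duplicate tag like the one above.
base_tags = ["another-tag"]

def tags_buggy(extra):
    base_tags.append(extra)     # mutates the shared list
    return tuple(base_tags)     # 2nd call: ('another-tag', 'jupyter', 'jupyter')

def tags_fixed(extra):
    return (*base_tags, extra)  # fresh tuple each call; shared list untouched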
Pushed up a change for this bit of testing too
FYI, this is no longer blocking me (I marked the test in question as XFAIL in our code), but I’m happy to help debug as needed
h
Hmmm, looks like a subtle concurrency issue in your test