# development
But when running it directly with the same interpreter it doesn’t fail
That sounds like an environment variable issue, perhaps LANG or something similar? The difference between a Pants run and a direct run is that Pants always runs processes fully locked down and only allows environment variables that you opt in to.
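As a sketch of that opt-in mechanism: in Pants 2.x the pass-through is configured in pants.toml via the [subprocess-environment] and [test] options (option names per the Pants docs; the variable names here are illustrative):

```toml
# pants.toml -- opt specific variables into otherwise-hermetic subprocesses
[subprocess-environment]
env_vars = ["LANG", "LC_ALL"]

# or, scoped to test runs only
[test]
extra_env_vars = ["PYTHONWARNINGS"]
```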
@enough-analyst-54434 how can I see which env my command is running with?
You can add -ldebug to the command line:
$ ./pants -ldebug test src/python/pants/util/strutil_test.py
08:56:14.34 [DEBUG] spawned local process as Some(16645) for Process { argv: ["./pytest_runner.pex_pex_shim.sh", "--no-header", "src/python/pants/util/strutil_test.py"], env: {"GPG_TTY": "/dev/pts/15", "PEX_EXTRA_SYS_PATH": "src/python", "PYTEST_ADDOPTS": "--color=yes --junitxml=src.python.pants.util.strutil_test.py.tests.xml -o junit_family=xunit2"}, working_directory: None, input_files: Digest { hash: Fingerprint<88a121e60364dcb235c1cf224eb676d90cb764838823024e57c970a80277c215>, size_bytes: 914 }, output_files: {RelativePath("src.python.pants.util.strutil_test.py.tests.xml")}, output_directories: {RelativePath("extra-output")}, timeout: Some(60s), execution_slot_variable: None, description: "Run Pytest for src/python/pants/util/strutil_test.py:tests", level: Debug, append_only_caches: {CacheName("pex_root"): CacheDest(".cache/pex_root")}, jdk_home: None, platform_constraint: None, use_nailgun: Digest { hash: Fingerprint<e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855>, size_bytes: 0 }, cache_scope: Successful }
08:56:14.46 [DEBUG] handle_workunits total=5355 completed=5352 started=5342 finished=False calls=2
08:56:14.54 [DEBUG] Completed: Run Pytest for src/python/pants/util/strutil_test.py:tests
08:56:14.54 [DEBUG] Completed: Scheduling: Run Pytest for src/python/pants/util/strutil_test.py:tests
08:56:14.55 [INFO] Completed: Run Pytest - src/python/pants/util/strutil_test.py:tests succeeded.

✓ src/python/pants/util/strutil_test.py:tests succeeded.
08:56:14.55 [DEBUG] Completed: `test` goal
08:56:14.55 [DEBUG] computed 1 nodes in 2.161858 seconds. there are 6637 total nodes.
08:56:14.55 [DEBUG] Async completion is enabled: workunit callbacks will complete in the background.
Notice the
spawned local process as Some(16645) for Process { argv: ["./pytest_runner.pex_pex_shim.sh" ...
line. That has the env passed in it.
If you're using Toolchain BuildSense this information is all also in the web UI for any particular Pants run.
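To mimic that locked-down environment outside of Pants, you can spawn the interpreter yourself with only the variables the -ldebug output showed (a sketch; the env dict below is illustrative and should be whatever your own log line contains):

```python
import subprocess
import sys

# Reproduce Pants' hermetic environment: pass ONLY the variables that the
# -ldebug output showed for the pytest process (values here are examples).
env = {"PEX_EXTRA_SYS_PATH": "src/python"}

# Ask the child interpreter what it actually sees in its environment.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout)
```

If the failure reproduces under this stripped env but not in your normal shell, the difference really is an environment variable.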
I tried to unset all env variables and it doesn’t seem to be the issue
$ curl -sSL https://raw.githubusercontent.com/boto/boto/develop/boto/__init__.py -O
$ LANG= LC_CTYPE= LC_ALL= python3.8 -mcompileall __init__.py
Compiling '__init__.py'...
$ python3.8 -W all -c 'print("""\c""")'
<string>:1: DeprecationWarning: invalid escape sequence \c
$ python3.8 -W error -c 'print("""\c""")'
  File "<string>", line 1
SyntaxError: invalid escape sequence \c
So you must have warnings set up to be treated as errors? Perhaps via PYTHONWARNINGS or ... I'm not sure.
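The effect above can be reproduced in-process with the stdlib warnings module (a sketch: simplefilter("error") stands in for -W error here, and the source string is the same invalid escape as the boto file):

```python
import warnings

# Source containing the invalid escape sequence \c, like boto's __init__.py.
src = 'print("""\\c""")'

# Under permissive filters, compiling only emits a warning
# (DeprecationWarning on 3.8, SyntaxWarning on newer Pythons).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(src, "<string>", "exec")
assert any("invalid escape sequence" in str(w.message) for w in caught)

# With warnings escalated to errors (what -W error does), the very same
# compile raises SyntaxError instead -- matching the traceback above.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        compile(src, "<string>", "exec")
        raised = None
    except SyntaxError as exc:
        raised = exc
assert raised is not None
print("compile failed under warnings-as-errors:", raised)
```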
The thing that's strange is that to get this happening in Pants you'd have to deliberately tweak pants.toml to set that environment variable (PYTHONWARNINGS) or else allow it to pass through.
I did a quick search of CPython configuration options and there do not appear to be any that control the default warning level baked into the compiled python binary. That was my only other idea: 1. You had such a binary on your system with a default of
-W error
baked into the binary somehow. 2. Pants was choosing that binary while your runs from the command line were using a different Python without that baked in.
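Both ideas are straightforward to check from inside the failing environment with a small stdlib-only diagnostic (nothing Pants-specific; run it wherever the test runs):

```python
import sys
import warnings

# Idea 2: which Python binary is actually running?
print("executable:", sys.executable)

# Idea 1: anything injected via -W flags or PYTHONWARNINGS shows up here.
print("warnoptions:", sys.warnoptions)

# The first few effective warning filters; an ("error", ...) entry means
# some warning category is being escalated to an exception.
print("filters:", warnings.filters[:5])
```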
I'm pretty much out of ideas beyond those.
Can I get a virtual environment for my pants lockfile?
I’m unable to reproduce the issue outside the pytest pex
Not directly. If the lock file covers the whole repo, you can always use
./pants repl ::
which will create a venv and drop you into that venv's repl. That said, the best way to debug this is probably by adding
--no-process-cleanup
to the command line. That will print out the sandbox directories each process runs in. You can then cd into the sandbox dir corresponding to the failing test and execute
./__run.sh
to reproduce, and then, after that, experiment with what is happening inside that script.