ripe-cpu-85141  09/28/2022, 2:52 PM
When I run `./pants package ::`, the build fails saying it can't find the executable 'newuidmap' on $PATH. I tried to add the following to the pants.toml file:
[docker]
tools = ["newuidmap", "newgidmap"]
and sometimes it works, sometimes it doesn't. When it starts to work, I can remove the section and it continues to work. Sometimes the problem comes back. I'm not sure how to reproduce that easily and I can't guarantee that nothing changed in my environment. I tried removing the caches, but it doesn't seem to have an effect. Any idea on what's going on or what I should do to dig deeper?
clever-hamburger-59716  09/28/2022, 3:12 PM
I am trying to package everything as a pex file. test/sub_dir_2/BUILD has this:
resources(
    name="test-data",
    sources=["./examples/data/*"],
)
pex_binary(
    name="service",
    entry_point="service.py",
    platforms=[
        "current",
        "manylinux2014-x86_64-cp-39-cp39",
    ],
    dependencies=[":test-data"],
)
pex_binary(
    name="test_service",
    entry_point="test_service.py",
    dependencies=["service:test-data"],
)
A test in test_service.py uses the files as such:
path = str(pathlib.Path(__file__, "../../../examples/data/data.csv").resolve())
do_something_with(path)
I am trying to run the test as such:
./pants --keep-sandboxes=on_failure test service/test/sub_dir_2/test_service.py -- -k test_csv
Checking the sandbox, indeed, the files don't exist (actually no resources exist). However, I do see the files in the packaged .pex archives. I am using pants 2.13.0. I would really appreciate any pointers on how to go about including files in tests. TIA, CSN
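[Editor's note] A sketch of one common fix, under the assumption that the test is run through a `python_tests` target (target and file names below are guesses from the snippets above, not from the thread): `./pants test` only materializes files that the test target itself depends on, so the `resources` dependency must be declared on the test target, not only on the `pex_binary`:

```python
# Hypothetical service/test/sub_dir_2/BUILD: the python_tests target
# declares the resources dependency so the data files land in the
# test sandbox.
python_tests(
    name="tests",
    sources=["test_service.py"],
    dependencies=["service:test-data"],
)
```

A `pex_binary` dependency only affects what is packaged into the .pex, which matches the symptom that the files show up in the archive but not in the sandbox.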
careful-address-89803  09/28/2022, 3:51 PM
hundreds-father-404  09/28/2022, 4:51 PM
careful-address-89803  09/28/2022, 5:15 PM
hundreds-father-404  09/28/2022, 5:20 PM
high-yak-85899  09/28/2022, 5:27 PM
high-yak-85899  09/28/2022, 6:05 PM
…lets you supply a custom command to appear in the lockfile header. I'd like developers to use that. However, if they see an invalid lockfile error, they are presented with a command that I don't want them to run.
plain-carpet-73994  09/28/2022, 7:53 PM
I'd like to control which package index each resolve uses (because pytorch has CUDA and non-CUDA libs which are differentiated only by the index url). But it appears that the index list is a global property and can't be set per-resolve. Is there a way to do this? (See https://pantsbuild.slack.com/archives/C046T6T9U/p1664233267501069 for some more context if necessary.)
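[Editor's note] For context, a sketch of what this configuration looks like in the 2.13 era (the lockfile paths and index URLs are illustrative assumptions): resolves are declared per lockfile under `[python]`, while the index list lives in the global `[python-repos]` scope, which is why it applies to every resolve at once.

```toml
# pants.toml sketch: multiple resolves, but a single global index list.
[python]
enable_resolves = true

[python.resolves]
cpu = "3rdparty/python/cpu.lock"
cuda = "3rdparty/python/cuda.lock"

[python-repos]
# Applies to *both* resolves above; there is no per-resolve form here.
indexes = [
    "https://pypi.org/simple/",
    "https://download.pytorch.org/whl/cu113",
]
```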
brash-student-40401  09/29/2022, 1:38 PM
…? The use case here is moving code to a monorepo, and trying to use code that was previously installed with a … . From some old GitHub conversations, I believe this should be possible, but I couldn't find documentation for how to set things up. Details in 🧵.
refined-addition-53644  09/29/2022, 3:59 PM
In this case I get:
15:49:36.77 [WARN] Pants cannot infer owners for the following imports in the target src/package-local/tests/utils/test_nlp.py: * tests.resources.fake (line: 5)
The repo has structure like:
src/package-local
- package_local
- tests
We have multiple of such local packages, each with their own tests.
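[Editor's note] One plausible reading of that warning (an assumption from the paths alone, not confirmed in the thread): `src/package-local` is not itself a source root, so the `tests.resources.fake` import can't be mapped back to a file. A sketch of marking every package directory under `src/` as its own root:

```toml
# pants.toml sketch: each src/<package> dir becomes a source root, making
# both its package_local/ and tests/ subtrees importable from that root.
[source]
root_patterns = ["/src/*"]
```

Note that with multiple packages, identically-named `tests` roots can still collide; giving each package's test tree a unique top-level module name avoids ambiguous ownership.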
high-yak-85899  09/29/2022, 5:17 PM
use to make a virtual environment?
cold-vr-15232  09/29/2022, 6:31 PM
bumpy-noon-80834  09/29/2022, 10:38 PM
What am I getting wrong?
$ ./pants test ::
00:31:26.68 [ERROR] Completed: Run Pytest - packages/libhello/tests/test_hello.py failed (exit code 2).
============================= test session starts ==============================
platform linux -- Python 3.9.14, pytest-7.0.1, pluggy-1.0.0
rootdir: /tmp/pants-sandbox-8eRczK
plugins: cov-3.0.0
collected 0 items / 1 error
==================================== ERRORS ====================================
____________ ERROR collecting packages/libhello/tests/test_hello.py ____________
ImportError while importing test module '/tmp/pants-sandbox-8eRczK/packages/libhello/tests/test_hello.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib/python3.9/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
packages/libhello/tests/test_hello.py:1: in <module>
    from libhello import hello
E   ModuleNotFoundError: No module named 'libhello'
- generated xml file: /tmp/pants-sandbox-8eRczK/packages.libhello.tests.test_hello.py.xml -
=========================== short test summary info ============================
ERROR packages/libhello/tests/test_hello.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.06s ===============================
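[Editor's note] A `ModuleNotFoundError` in an otherwise-collected sandbox usually points at source roots rather than at pytest. A hedged sketch, assuming the importable `libhello/` package dir sits directly under `packages/libhello/` (the real layout may differ):

```toml
# pants.toml sketch: make each packages/<project> dir a source root so
# `from libhello import hello` resolves inside the test sandbox.
[source]
root_patterns = ["/packages/*"]
```

The test target also needs a (normally inferred) dependency on the `libhello` sources; running `./pants dependencies` on the test file shows whether inference picked it up.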
high-yak-85899  09/29/2022, 11:56 PM
https://www.pantsbuild.org/docs/using-pants-in-ci recommends a simple script for nuking cache directories when too large. It recommends ~/.cache/pants/setup as one of the directories to monitor. When we do that for our builds when they kick off, we start seeing errors like this:
/home/buildbot/.cache/pants/setup/bootstrap-Linux-x86_64/2.13.0_py38/bin/python: No such file or directory
Any recommendations on what to do?
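[Editor's note] That error is consistent with the `setup` directory being deleted while a build still needs its bootstrapped interpreter. One adjustment, offered as an assumption rather than a confirmed fix: keep the docs' pruning helper but leave `~/.cache/pants/setup` alone (the size limits below are illustrative).

```shell
# Sketch of the cache-pruning helper from the CI docs; the directories and
# limits chosen here are illustrative assumptions.
function nuke_if_too_big() {
  path=$1
  limit_mb=$2
  size_mb=$(du -m -d0 "${path}" 2>/dev/null | cut -f 1)
  if [ -n "${size_mb}" ] && [ "${size_mb}" -gt "${limit_mb}" ]; then
    rm -rf "${path}"
  fi
}

# Prune the big caches, but skip ~/.cache/pants/setup so a running build's
# bootstrapped interpreter is not deleted mid-flight.
nuke_if_too_big "${HOME}/.cache/pants/lmdb_store" 2048
nuke_if_too_big "${HOME}/.cache/pants/named_caches" 1024
```
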
rough-vase-83553  09/30/2022, 2:16 PM
I'm aware that you can create an arbitrarily large lockfile without having to worry about everything in the lockfile getting dragged in to each Python target. But do you have to also create separate resolves for each binary / library if you want to restrict the number of dependencies they pull in? Or can Pants still resolve which ones to pull in via import statements?
high-energy-55500  09/30/2022, 4:13 PM
This doesn't seem to include the amazon extra:
python_requirement(
    name="apache-airflow",
    requirements=[
        "apache-airflow[amazon]==2.2.2",
    ],
    modules=["airflow"],
)
(If I run pylint against code that uses this as a dependency, it fails to import the provider modules.) However, including the extra as a separate package seems to work, e.g.:
python_requirement(
    name="apache-airflow",
    requirements=[
        "apache-airflow==2.2.2",
        "apache-airflow-providers-amazon==2.4.0",
    ],
    modules=["airflow"],
)
The downside is that I have to manually specify the compatible version of the provider package, which I'd like to avoid.
wide-midnight-78598  09/30/2022, 8:18 PM
…however, to deploy them from my ansible role, I need them in specific locations. I haven't been able to resolve symlinks (it just uploads the symlink to the target). Is there a Pantsian way to manipulate the host filesystem?
bumpy-noon-80834  09/30/2022, 9:27 PM
bumpy-noon-80834  10/01/2022, 12:01 AM
When I run
./pants lint ::
everything is fine. But when I run pylint through VSCode, I get pylint's import errors: it apparently doesn't understand my imports, despite them being defined in my configuration. BTW, the configuration looks fine, as VSCode itself has no issue resolving those imports. Any idea how I could troubleshoot this further?
worried-painter-31382  10/01/2022, 2:52 PM
bored-energy-25252  10/02/2022, 4:33 PM
high-magician-46188  10/02/2022, 4:50 PM
Hi, I'm trying to set up Pantsbuild (2.13) for the first time in an existing Python monorepo. At the root of the repo, I've added an empty BUILD file and a pants.toml:
[GLOBAL]
backend_packages = ["pants.backend.python"]

[source]
root_patterns = ["/X_*/"]  # "X" is the name of the company and "X_" is the prefix of each project
There are about 50 projects in the repo. I've run
pants tailor ::
then
git status | wc -l
and got 644. Do I really need to commit 640 BUILD files? Is there any way around it? (Maybe creating one per project?) Thanks in advance 🙂
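[Editor's note] `tailor` writes one BUILD file per directory by default, but nothing requires that granularity: recursive globs allow one BUILD per project. A sketch (`X_example` is a made-up project name, and the source/test split is an assumption):

```python
# Hypothetical X_example/BUILD: a single BUILD file covering the whole
# project tree via recursive globs.
python_sources(
    name="lib",
    sources=["**/*.py", "!**/*_test.py"],
)
python_tests(
    name="tests",
    sources=["**/*_test.py"],
)
```

The trade-off is coarser-grained targets, which can reduce the precision of caching and of change detection such as `--changed-since`.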
gentle-sugar-52379  10/02/2022, 7:01 PM
Is there a way to make … usable too? At the moment it only works on …, but that's slowing down the dev iterations massively.
gentle-sugar-52379  10/03/2022, 7:10 AM
I wrote a deployment script and made it invokable with
./pants run deployments/project_name
I'm able to run it with and without the sandboxing feature. Pretty nice, but ansible-runner wants to call ansible via subprocess. I tried to include it as a requirement in my BUILD file, but without any luck. The next try was: make a fake executable inside my sandbox. Result: my script is able to find and execute it. Is it the right solution to globally install ansible so it's findable by my script, or is there something I could do to include it in the PATH, to be able to rely on the automatic fetching of ansible?
fresh-cat-90827  10/03/2022, 11:58 AM
resources(
    name="testdata",
    sources=["tests/testdata/**/*", "!tests/testdata/**/*.py"],
)
worried-painter-31382  10/03/2022, 7:45 PM
These are small services, with 4 requirements (largest being boto3). The time spent is mostly focused on building "requirements" pexes, meaning pex targets in the container with only requirements included. We use a lockfile, with_tools=True and layout="packed". We use only wheels.
19:41:23.94 [INFO] Long running tasks: 643.75s Building docker image amramedical.com/client-session-tagger:latest +2 additional tags.
#13 [source 4/4] RUN PEX_TOOLS=1 /usr/local/bin/python client-session-tagger-handler.pex venv --compile /app
#13 sha256:3dcefd9a9645feb44367da0c45b3370685d8cc30427c1a07e8d99fe9111ab559
#13 DONE 3.7s
#9 [dependencies 4/4] RUN PEX_TOOLS=1 /usr/local/bin/python client-session-tagger-requirements.pex venv --scope=deps --compile /app
#9 sha256:ef99e535e560ee6d3decd7805cb3c81c0d17b05123d0f2b0505b157e6efb1697
#9 DONE 637.8s
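[Editor's note] One knob worth ruling out, offered as an assumption rather than a diagnosis: `--compile` byte-compiles every file in the venv at build time, and for a large dependency tree that step alone can dominate the stage. A sketch of the same dependency stage without it:

```dockerfile
# Sketch: identical dependency stage minus eager byte-compilation, trading
# slower first import in the container for a faster image build.
RUN PEX_TOOLS=1 /usr/local/bin/python client-session-tagger-requirements.pex \
    venv --scope=deps /app
```

Timing this variant against the original isolates how much of the 637s is compilation versus wheel installation.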
rhythmic-glass-66959  10/03/2022, 8:09 PM
I'm using a flake8 plugin. Since my requirements.txt is not in the project's root directory, I need to pass the --requirements-file option pointing to the requirements in 3rdparty/python. So I added this to `[flake8].args`:
[flake8]
args = [
    ...,
    "--requirements-file=3rdparty/python/requirements.txt",
]
However, the plugin doesn't seem to be able to read the text file. I suspect an issue with the sandbox. Any ideas?