ambitious-actor-36781
07/13/2022, 9:33 PM
fmt on?
We've had some issues where our pre-commit hook, which uses --changed-since=HEAD lint: isort will complain about needing to make changes, but then when you manually run fmt ::, nothing changes.
strong-refrigerator-32393
07/13/2022, 9:54 PM
rhythmic-morning-87313
07/14/2022, 8:47 AM
nice-florist-55958
07/14/2022, 12:29 PM
When a python_distribution A depends on a first-party python_distribution B (based on the sources resolve algorithm), Pants uses the pinned version of B (as expressed in its python_artifact in its BUILD file) for the install_requires entry of B in A's setup.py. How can this behavior be overridden so that looser semantic versions of B are used instead, and people don't get upset about overly restrictive distributions in their envs? :)
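For concreteness, a minimal sketch of the setup being described (paths, names, and versions are all hypothetical):
# b/BUILD -- B pins its own version via python_artifact.
python_distribution(
    name="B",
    dependencies=["b/src"],
    provides=python_artifact(name="lib-b", version="1.2.3"),
)
# a/BUILD -- because A's sources use code owned by B, A's generated setup.py
# ends up with install_requires=["lib-b==1.2.3"], i.e. B's exact pin.
python_distribution(
    name="A",
    dependencies=["a/src"],
    provides=python_artifact(name="lib-a", version="0.1.0"),
)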
loud-laptop-17949
07/14/2022, 4:28 PM
nice-florist-55958
07/14/2022, 4:52 PM
The export goal is confusing me a bit. It accepts an (undocumented?) python_requirement(s) target as its first argument, and then prints a message suggesting it is building a venv from only the requirements specifically owned by that target, with versions obtained from the resolve it references. But then it proceeds to export the entire resolve in dist/export/python/virtualenvs. Exporting the entire resolve is what I'm after, but it means that, as the command works now, you can get that behavior by exporting any python_requirement target that references the resolve. By chance, do resolves themselves have implicitly defined targets that I can refer to instead (for clarity in shell scripts)?
The reason I am bringing this up at all is that if I run ./pants export without any target, it only exports virtualenvs for the toolset (mypy, black, etc.), so it seems I need to do it this way (as an aside: it proceeds to export all the tool venvs anyway, but at least I get the resolve's venv I wanted 🙂 it just takes a little extra time 😞).
flat-queen-95161
07/14/2022, 9:22 PM
% ls dist/cmd.hello-world/bin
dist/cmd.hello-world/bin
% pants filter --target-type=go_binary ::
cmd/hello-world:bin
# From the Dockerfile
...
COPY cmd.hello-world/bin /go/bin/hello-world
# Build docker image
pants package cmd/hello-world:docker
...
> [2/4] COPY cmd.hello-world/bin /go/bin/hello-world:
------
failed to compute cache key: "/cmd.hello-world/bin" not found: not found
nice-florist-55958
07/14/2022, 11:20 PM
Imagine python_requirements(name="A") and python_requirements(name="B") both consuming a top-level requirements.in, and then making a few adjustments to a couple of requirements, say A needs Flask~=2.0 and B needs Flask~=1.0 (that is, there are really requirements_A.in and requirements_B.in, but they're both derivatives of a common requirements.in). Let them both be placed in the same BUILD file in ./3rdparty/python, with A belonging to resolve python-A and B belonging to python-B.
Now an application pex_binary(name="Z") only works with resolve python-B, because it has a requirement that restricts Flask~=1.0, say on apache-airflow>2.0, which is reflected in the changes to requirements.in in B. It also imports some first-party module associated with python_sources(name="M") that previously only worked with python-A. No big deal: set that dependency's resolve to resolve=parametrize("python-A", "python-B").
Now running Z puts the python-B resolve in play, and everyone is happy. In particular, imports in both the app and the dependency get mapped back to requirements in B unambiguously.
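A sketch of that layout as BUILD files (file names, paths, and the entry point are assumptions from the description above):
# 3rdparty/python/BUILD
python_requirements(
    name="A",
    source="requirements_A.in",  # Flask~=2.0
    resolve="python-A",
)
python_requirements(
    name="B",
    source="requirements_B.in",  # Flask~=1.0, apache-airflow>2.0
    resolve="python-B",
)
# path/to/M/BUILD -- M is needed from both resolves.
python_sources(
    name="M",
    resolve=parametrize("python-A", "python-B"),
)
# app BUILD -- Z only works with python-B.
pex_binary(
    name="Z",
    entry_point="main.py",  # hypothetical
    resolve="python-B",
)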
But now suppose M has an explicit dependency on a common requirement in A and B that Pants cannot infer for some reason, say 3rdparty/python:A#Jinja2 (for example, it depends on pandas, but needs Jinja2 for HTML styling that Pants did not infer). Then when python-B is in play, Pants rightfully complains about the dependency. So then we parametrize it: dependencies=parametrize(python_A=["3rdparty/python:A#Jinja2@resolve=python-A"], python_B=["3rdparty/python:B#Jinja2@resolve=python-B"]) (the kwargs in parametrize are needed when the args are not simple strings).
OK, this gets us past the error, but now Pants cannot do implicit inference, because two targets own the module represented by M when the python-B resolve is in play: the one we care about and want Pants to infer (path/to/M/__init__.py@resolve=python-B,dependencies=python_B) and another that is impossible but nonetheless has a target generated because of the cartesian-product methodology Pants uses with parametrizations (path/to/M/__init__.py@resolve=python-B,dependencies=python_A).
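Spelled out, the cartesian product of the two parametrized fields generates four address permutations (addresses illustrative), two of which are incoherent:
# path/to/M/__init__.py@resolve=python-A,dependencies=python_A  -> valid
# path/to/M/__init__.py@resolve=python-A,dependencies=python_B  -> impossible
# path/to/M/__init__.py@resolve=python-B,dependencies=python_A  -> impossible
# path/to/M/__init__.py@resolve=python-B,dependencies=python_B  -> valid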
But now apparently everyone has to disambiguate their dependence on M, because on the flip side, when python-A is in play, there is a similar issue. In our repo, python-default is python-A in this toy example, and python-B is analogous to another resolve that was made for a service that uses a lot of first-party code (and all of its third-party requirements) but has conflicts with a few packages that are not consequential to the library/utility codebase. So it's a real use case, but maybe our approach is wrong.
Wondering if there has been any thought about hierarchical resolves, where you could factor out the common requirements in A and B, build a common resolve, and then refer to targets in that common set when either python-A or python-B is in play (Pants would know it's compatible with both). In the above example, Jinja2 could be referred to, say, by 3rdparty/python:A+B#Jinja2, and this would work when either resolve python-A or python-B is in play; no disambiguation would be necessary (and no useless target permutations would be created). The idea is to imagine a base python-default which generalizes as many of the repo's requirements as possible, and then small deltas to this resolve/requirements, all compatible with each other on their common set of requirements.
ambitious-actor-36781
07/15/2022, 1:12 AM
We added loading of a .env file to the pants bootstrapper:
if [[ -f ".env" ]]; then
  warn "Loading .env file" 1>&2
  # Export every KEY=VALUE pair in .env into the environment.
  export $(cat .env | xargs)
fi
It feels like a cheap way to load the contents of a .env file every time it's needed.
broad-processor-92400
07/15/2022, 2:34 AM
Our code transitively depends on future (via aws_xray_sdk), so it is kinda relevant to be including in a lambda. This gives an error:
Dependency on future not satisfied, 1 incompatible candidate found:
1.) future 0.18.2 (via: aws-lambda-powertools==1.25.10; python_full_version >= "3.6.2" and python_full_version < "4.0.0" -> aws-xray-sdk<3.0.0,>=2.8.0 -> future) does not have any compatible artifacts:
https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz
I suspect our particular app doesn't actually need this package at runtime:
1. Is it possible to tell pex/`python_awslambda` that this specific sdist is okay? I'm assuming that it is okay, since it's pure Python...
2. Alternatively, can we break the dependency? I tried adding various overrides, like python_requirements(overrides={"aws_xray_sdk": {"dependencies": ["backend#setuptools", "!backend#future"]}, ... and python_awslambda(dependencies=["!backend#future"], ...), but these had no effect, presumably because they're operating outside pex.
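For reference, a complete form of that overrides attempt might look like the sketch below (the target name "backend" and the requirement names are carried over from the snippet above; this is only a guess at the intended shape, not a confirmed fix):
python_requirements(
    name="backend",
    overrides={
        "aws_xray_sdk": {
            # Keep the dep on setuptools, but ignore the dep on future.
            "dependencies": ["backend#setuptools", "!backend#future"],
        },
    },
)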
nice-florist-55958
07/15/2022, 11:53 AM
If A is the set of requirements input into the resolve python-default, and B is a derivative set of requirements in the sense that the resolve python-default-B is compatible with the sub-resolve of the closure of ./pants dependencies proj/lib:: (filtered A), then can python_sources in that glob be assumed compatible with python-default-B without having to declare it and maintain a list?
The idea is that as more and more people add to proj/app while proj/lib stays relatively stable with a minimal set of third-party dependencies, people can start defining deltas to the baseline requirements/resolve (B, C, D, ...) and not have to worry about maintaining the BUILD files of proj/lib/** - it would just automatically work, because Pants would detect that python-default-B is compatible with python-default as far as proj/lib/** is concerned (or in reality, the closure of whatever the app is depending on in first-party code).
Basically, wondering if Pants does, can, or even should have the ability to infer resolve compatibility (similar to how it does dependency inference).
Furthermore, can we imagine a project in proj/app pinning its requirements in a pyproject.toml and asking Pants to take care of the rest by creating an implied python_requirements(name="someapp") and a corresponding resolve python-default-someapp, perhaps by just adding a directive in its BUILD files (or in pyproject.toml, with Pants inferring it applies to all BUILD file targets below it), such as auto_resolve_with_base_requirements = "3rdparty/python:A"?
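To make the proposal concrete, the imagined directive might be spelled like this (purely hypothetical; no such option exists today):
# proj/app/BUILD -- proposed syntax only.
auto_resolve_with_base_requirements = "3rdparty/python:A"
# Pants would then synthesize python_requirements(name="someapp") from the
# app's pyproject.toml pins and create the resolve python-default-someapp.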
Maintaining requirements/resolves in the top-level 3rdparty package, and then updating heavily-depended-on targets throughout the repo (lib/util code) for the highly individual requirement deltas of apps/services, feels wrong (but see the philosophical point below). I think as of now you can place the new python_requirements in the project's subdir, declare the new resolve, etc., but it still seems you would have to go through all the first-party deps and add your resolve to their list, which feels a bit unsure to me. Also, the author of the app would have to make sure their python_requirements includes all the stuff their first-party dependencies care about (not just their app). It's easy enough to write a script that forms the right input requirements from just their own requirements, but it could still be error-prone or a frustrating experience for someone not used to working with Pants or a monorepo.
At least initially, Pants could error if the requirements delta does not result in a compatible resolve with the default resolve of the closure of its dependencies. But even this might be too restrictive. For example, if python-default has pandas==1.1.0 but the new app changes the requirements to pandas==1.2.0 and depends on some first-party python_sources depending on 3rdparty/python:A#pandas, then this is a conflict. But probably the individual app does not care, and the author does not have time to initiate a repo-wide upgrade of pandas (and all the governance/testing/signoff that would be needed for that), so given some option like override_autoresolve_conflicts = true, Pants could assume that the dependency is also compatible with python-default-B and use 3rdparty/python:B#pandas instead.
The philosophical point is that app/service builders should be able to (if they want to take the risk) override the pinned requirements more easily, without having to maintain compatible-resolve lists in the shared first-party code. Most of that code has very flexible requirements anyway, but they are nonetheless pinned down from Pants's perspective (unless we make the inputs A have more open constraints, but then apps have loose constraints and their dependencies can all of a sudden change if someone runs pants generate-lockfiles again). On the other hand, with rampant project-specific version pinning, the repo gets harder to maintain, people are less likely to test their apps against upgrades, and apps will likely stop working completely in the future when one of the overrides actually results in a crash in one of their dependencies. Some warning, or a configurable limit on the number of builds of such a project, could be an idea to manage this, though?
wide-midnight-78598
07/15/2022, 1:56 PM
quaint-forest-8735
07/15/2022, 3:47 PM
We're hitting this error with our requirements.txt. Any thoughts as to what might be causing this?
File "/Users/rahulm/.cache/pants/named_caches/pex_root/installed_wheels/6242b902db69f59e1092b406655c0fb1634486c47ce563f5fd27277cf4561822/pex-2.1.90-py2.py3-none-any.whl/pex/dist_metadata.py", line 294, in from_filename
project_name, version = fname.rsplit("-", 1)
ValueError: not enough values to unpack (expected 2, got 1)
nice-florist-55958
07/15/2022, 5:10 PM
What happens when you package a python_distribution owning python_sources with a parametrized resolve field? Which requirement will it use in install_requires? Is there any way to take advantage of automatic setup.py generation but override install_requires with looser constraints than what's expressed in the resolve's input requirements? Right now we have pinned constraints for the input requirements to the python-default resolve, but that is causing our packages to be published with overly strict constraints, substantially increasing the likelihood of consumers running into resolve errors in their own environments/apps.
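A sketch of the situation being asked about (target and artifact names are illustrative):
python_sources(
    name="lib",
    resolve=parametrize("python-default-pinned", "python-default-loose"),
)
python_distribution(
    name="dist",
    dependencies=[":lib"],  # which parametrization feeds install_requires?
    provides=python_artifact(name="mylib", version="1.0.0"),
)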
Is there any guidance on how to handle the general trade-off between pex_binary wanting pinned constraints for the top-level input requirements and python_distribution wanting loose constraints? If our input requirements.txt to python_requirements for python-default has pinned constraints, then python_distributions will have pinned install_requires in their setup.py. But if the constraints are loose, the resolved dependencies could change any time generate-lockfiles is re-run, thus potentially breaking an app. We'd like to say the pex_binary is participating in python-default-pinned and the python_distribution's python_sources are participating in python-default-loose, and then have Pants infer the two are compatible when the stricter one is in play. Is there a general idea of resolve compatibility?
freezing-vegetable-92896
07/15/2022, 5:33 PM
I added this to pants.toml:
[pyupgrade]
args = "[--py39-plus]"
but pants doesn't seem to like the [pyupgrade] scope:
13:32:54.73 [ERROR] Invalid scope [pyupgrade] in /Users/tdimiduk/eopt-core/pants.toml
I'm on 2.10, but I've also tried 2.11 and get the same error.
freezing-vegetable-92896
07/15/2022, 6:17 PM
It seems to run with a --remove-all flag that I'm not ready for. I've tried passing my own args, but that doesn't seem to replace it. How do I find out and/or change what command pants is actually running?
high-yak-85899
07/15/2022, 8:20 PM
Do I need to pass --pytest-args='-s' to show test output when a test passes? Seems like I get output when I run pytest regularly, but not when running ./pants test.
flat-queen-95161
07/16/2022, 9:44 AM
flat-queen-95161
07/16/2022, 9:46 AM
go_binary(
    name="bin",
)
go_binary(
    name="bin-linux",
    env="GOOS=linux GOARCH=amd64",
)
white-jordan-40851
07/17/2022, 8:26 PM
My tests use python_tests, but some of my unit tests require assets from S3. I'm seeking advice on how this should be implemented. For instance, could I use a target such as experimental_shell_command to run aws s3 sync s3://bucket/ ./test_assets/ prior to my tests? Then how can I make the content of ./test_assets available to my unit tests? Would ./test_assets be cached somewhere, or entirely re-fetched for every execution?
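A minimal sketch of that idea, assuming an aws CLI is available on the PATH (the bucket name is illustrative):
experimental_shell_command(
    name="fetch_test_assets",
    command="aws s3 sync s3://bucket/ test_assets/",
    tools=["aws"],
    outputs=["test_assets/"],  # captured as a directory output
)
python_tests(
    name="tests",
    # Depending on the shell command materializes test_assets/ in the
    # tests' sandbox; Pants caches the command result like any process.
    dependencies=[":fetch_test_assets"],
)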
mysterious-waiter-14207
07/18/2022, 6:42 AM
rhythmic-glass-66959
07/18/2022, 5:44 PM
Is there a way to read a .env file and set its values as environment variables when executing goals?
busy-vase-39202
07/18/2022, 6:10 PM
happy-kitchen-89482
07/18/2022, 6:29 PM
rhythmic-glass-66959
07/18/2022, 6:58 PM
We can do that with the dependencies field in the python_test target, but can we do the same with arguments?
nutritious-minister-3808
07/18/2022, 9:01 PM
Is it expected that environment variables can't override explicit build args when running ./pants package? I.e., if I have explicit values set for a build_arg through pants.toml or the extra_build_args field on my docker_image target, simply setting an environment variable in the pants environment does not overwrite the explicit value; however, if there is no explicit value set, setting the environment variable does get passed through to the build. This seems a little odd, but I can understand why it isn't seen as an issue, because it only really becomes a problem when I try to use a build_arg to dynamically tag my images. I can provide additional details if need be.
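A sketch of the two behaviors described (the arg name and value are illustrative):
docker_image(
    name="docker",
    # Explicit value: setting the TAG environment variable has no effect.
    extra_build_args=["TAG=1.2.3"],
    # No value given: the build arg would be picked up from the environment.
    # extra_build_args=["TAG"],
)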
lemon-finland-60184
07/18/2022, 10:37 PM
When we run all tests, we use:
• pants --pants-config-files=pants.ci.toml test :: --test-shard=${TEST_SHARD}
When I try to run tests on just the changed files, Pants seems to take 5-10 minutes building the dependencies for all the tests
• pants --pants-config-files=pants.ci.toml --changed-since=origin/master --changed-dependees=transitive test --test-shard=${TEST_SHARD}
When running in CI, we have pantsd
and watch_filesystem
set to false from guidance in this thread, so I know we're taking some performance hit, but it seems like a really significant hit compared to using just xdist.
I'm not sure what we're doing wrong, but if you have any guidance on how to determine what's causing the long run times, it would be appreciated. Thanks! (cc @cold-soccer-63228)
incalculable-grass-76623
07/19/2022, 2:33 AM
A question about experimental_shell_command. I am new to pants, so something may be obvious, but I can't seem to get it. I am using experimental_shell_command to "run a docker image", built in a previous step, that has tooling inside of it (think code generation). How do I "run" the experimental_shell_command task, as a goal or otherwise? E.g.:
experimental_shell_command(
    # Runs the schema-tooling image against the workspace to generate output.
    command="docker run --rm -t -v `pwd`:/v --workdir /v schema-tooling sh -c 'echo 1 > results.log'",
    tools=["docker"],
    dependencies=["datamodel/schema-tooling:schema-tooling"],
    outputs=["results.log"],
)
incalculable-grass-76623
07/19/2022, 2:34 AM
aloof-tent-90836
07/19/2022, 11:06 AM
I'd like to run coverage combine && coverage xml (to join coverage data files from different GitHub Actions workers) within the venv created by Pants (I saw no `test`/`pytest`/`coverage-py` names in the cache folder, only hashes).
Is there any way to invoke this command in a Pants way?