
modern-manchester-33562

09/15/2022, 2:54 PM
Hi! Is it possible to combine the default version of a package with a local one? It has been discussed before, but I couldn't find a solution that suits my needs. It's about our friend `pytorch`. What we would like is to use the regular upstream version, e.g. `1.12.1`, for all OSes except Linux, where we want to use `1.12.1+cpu`. Our issue is that the upstream Linux wheel is very large and we don't use any of the capabilities included in it; the CPU one would suit us just fine. However, as soon as we add an index that contains any local versions, it only selects a suitable package for the current platform (OS), ignoring all the other ones. This results in an incompatible-dependency error on platforms other than the one that generated the lockfile. The only solution that seems to work is to use the upstream (PyPI) version, but, as I said, then we unfortunately use the very large wheels for Linux. Any recommendations on how to solve or circumvent this issue?

enough-analyst-54434

09/15/2022, 3:11 PM
Are you familiar with environment markers? https://peps.python.org/pep-0508/#environment-markers I imagine that's what you need: two dependencies where only one is selected based on the environment.
šŸ‘ 1

curved-television-6568

09/15/2022, 4:02 PM
Hi @modern-manchester-33562 That sounds very similar to what I needed just the other day, and this was pretty close to a working solution for that: https://pantsbuild.slack.com/archives/C046T6T9U/p1663004064934019?thread_ts=1662995442.966099&cid=C046T6T9U. I just needed to add some version constraints rather than leave them floating.
Oh, and you may of course preferably provide the requirements as an `override` on the `python_requirements` target rather than singling one out like there; that was only during my testing session.

modern-manchester-33562

09/16/2022, 6:56 AM
First of all, thank you for your quick response! Yes, I've looked into the environment markers. First I tried something like this:
```
torch==1.11.0; sys_platform!="linux"
torch==1.11.0+cpu; sys_platform=="linux"
```
But it seems to ignore the first requirement completely and only lock `1.11.0+cpu`. Both Linux and Mac produce the same result. So I thought maybe I have to always use local versions, as they seem to take precedence. Since the upstream version is `+cu102` I tried:
```
torch==1.11.0+cu102; sys_platform!="linux"
torch==1.11.0+cpu; sys_platform=="linux"
```
but that gives me the following error:
```
ERROR: Cannot install torch==1.11.0+cpu and torch==1.11.0+cu102 because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

The conflict is caused by:
    The user requested torch==1.11.0+cpu
    The user requested torch==1.11.0+cu102

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
```
which doesn't make sense to me, as they should never be used together. I should also say I'm using Pants version 2.13.0.

enough-analyst-54434

09/16/2022, 2:53 PM
Ah, yeah. The first issue actually makes sense: the lock must pick a single version, you presented two, so one wins. As for your second attempt and the conflicts, there is not much to do about that: there are conflicts and you can't fake your way out of them.
I think you need to use Pants' multiple resolves feature here.
In case you haven't looked into that yet, there is some documentation here: https://www.pantsbuild.org/docs/python-third-party-dependencies#multiple-lockfiles
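For concreteness, a multiple-resolves setup looks roughly like this in `pants.toml` (the resolve names and lockfile paths here are made up; check the docs linked above for your Pants version):

```toml
[python]
enable_resolves = true

[python.resolves]
# Hypothetical resolve names and paths:
default = "3rdparty/python/default.lock"
torch-cpu = "3rdparty/python/torch_cpu.lock"
```

Each `python_requirement` / `python_requirements` target then opts into a resolve via its `resolve=` field, so different parts of the repo can lock different torch variants.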

modern-manchester-33562

12/16/2022, 2:58 PM
It is time to revisit this again. With the new 1.13 version of PyTorch it starts pulling in even more CUDA packages. We seem to have found a manual solution for this where we just bluntly replace `torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl` in the PEX lockfile with the URL for `torch-1.13.1%2Bcpu-cp310-cp310-linux_x86_64.whl`. This is obviously not your typical solution for dependency resolution, but it fits our use case perfectly:
• If Mac, install PyPI torch 1.13.1
• If Linux, install torch 1.13.1+cpu from https://download.pytorch.org/whl/cpu
The question is just how to best integrate this "hack" into Pants 🤔 Is there an easy way to intercept the `generate-lockfiles` goal to override this?

enough-analyst-54434

12/16/2022, 3:52 PM
So, did you try using the multiple resolves avenue mentioned above?
I understand you got a hack working and would probably like to continue with that, but, to explore the whys and the avenues: it should be the case that no hacks are needed if you use multiple resolves. It may be more inconvenient? Unsure, but worth discussing the options...

modern-manchester-33562

12/16/2022, 4:04 PM
Yes, and that doesn't work, since the issue occurs when adding the `torch+cpu` index. As soon as that version is present in one of the indexes, it throws out all other torch versions and keeps only the `+cpu` one. AFAIK it is not possible to parametrize the indexes?
Local versions also seem to take precedence. It seems you can never combine regular versions with local versions, e.g. `+cpu`.

enough-analyst-54434

12/16/2022, 4:11 PM
Yeah, the `--find-links` and `--indexes` options are global. Your use case is different from Jacob's, but you need the same thing: https://github.com/pantsbuild/pants/issues/17565
Ok, that's unfortunate. This damn PyTorch distribution model is painful.
So, one option: disable lockfile generation by Pants. I do this here for different reasons (see comment): https://github.com/pantsbuild/scie-pants/blob/83160ea1f69d92c55315d7d21b960cc90565d05f/pants.toml#L41-L46 This would mean you'd run `pex3 lock create ...` yourself whenever you needed to update your lock.
Pants would just read the lock, never generate it. That may be too messy / not viable though; not sure of your case / setup.
Do you get requirements from a requirements.txt? Or is it some pyproject.toml?
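If I recall the option name correctly, the knob for this is a single `pants.toml` setting (verify against the linked scie-pants config and your Pants version's `[python]` options):

```toml
[python]
# Tell Pants to treat the lockfile as externally managed:
# read it, but never regenerate it.
resolves_generate_lockfiles = false
```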

modern-manchester-33562

12/16/2022, 4:16 PM
Thanks for your input @enough-analyst-54434! I'll make sure to watch that issue and also see if that other option is viable for us.
requirements.txt

enough-analyst-54434

12/16/2022, 4:17 PM
Ah, then the `pex3 lock create` route is pretty straightforward. Do you want the command line?

modern-manchester-33562

12/16/2022, 4:17 PM
Yes please 🙂
How about integrating it into the `generate-lockfiles` goal and doing some custom merging of lockfiles? Would that be possible?

enough-analyst-54434

12/16/2022, 4:18 PM
```
pex3 lock create -r requirements.txt --style universal --pip-version XXX --resolver-version pip-2020-resolver --target-system linux --target-system mac --interpreter-constraint YYY --indent 2 --output lock.json
```
šŸ‘ 1
I don't know what XXX and YYY you currently have, so you fill those in.
So, the problem with the custom goal is that, technically, the Pex lock file format is opaque. It's not supported as read-write; it's intended to be read-only. IOW, Pex reserves the right to change the format, which would break your goal.
You could certainly do that, though; buyer beware.

modern-manchester-33562

12/16/2022, 4:30 PM
I assumed as much. I'll have a look at the command you sent me and try to make something out of that. It all boils down to swapping out one artifact. E.g., this
```
{
  "algorithm": "sha256",
  "hash": "fd12043868a34a8da7d490bf6db66991108b00ffbeecb034228bfcbbd4197143",
  "url": "https://files.pythonhosted.org/packages/81/58/431fd405855553af1a98091848cf97741302416b01462bbf9909d3c422b3/torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl"
}
```
to this
```
{
  "algorithm": "sha256",
  "hash": "11692523b87c45b79ddfb5148b12a713d85235d399915490d94e079521f7e014",
  "url": "https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp310-cp310-linux_x86_64.whl"
}
```
while keeping the version as `1.13.1`. Then everything works "perfectly".
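The swap above could be scripted. A minimal sketch; the helper name is made up, and the nesting it walks (`locked_resolves` → `locked_requirements` → `artifacts`) is an assumption about the PEX lock layout, which is officially opaque and may change between Pex versions:

```python
def patch_artifact(lock: dict, old_filename: str, new_url: str, new_sha256: str) -> int:
    """Walk a PEX lock dict and swap any artifact whose URL ends with
    old_filename for the given URL and hash. Returns replacement count."""
    count = 0
    for resolve in lock.get("locked_resolves", []):
        for req in resolve.get("locked_requirements", []):
            for artifact in req.get("artifacts", []):
                if artifact.get("url", "").endswith(old_filename):
                    artifact["url"] = new_url
                    artifact["hash"] = new_sha256
                    artifact["algorithm"] = "sha256"
                    count += 1
    return count

# Tiny synthetic example (not a real lock file):
lock = {
    "locked_resolves": [
        {"locked_requirements": [
            {"artifacts": [{
                "algorithm": "sha256",
                "hash": "aaa",
                "url": "https://example.invalid/torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl",
            }]}
        ]}
    ]
}
n = patch_artifact(
    lock,
    "torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl",
    "https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp310-cp310-linux_x86_64.whl",
    "bbb",
)
```

In practice you would `json.load` the lock file, apply the patch, and `json.dump` it back, accepting that a Pex format change could break the script.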

enough-analyst-54434

12/16/2022, 4:32 PM
I don't fully believe you: that new hash looks too short for sha256, but gotcha.
! 🙂

modern-manchester-33562

12/16/2022, 4:33 PM
Fixed it 🙂

enough-analyst-54434

12/16/2022, 4:33 PM
Immutable artifacts, mutable messages
😂 1

modern-manchester-33562

01/02/2023, 3:32 PM
FYI we chose to solve this by setting up our own virtual PyPI repository which:
• contains upstream PyPI
• contains https://download.pytorch.org/whl/cpu
• contains re-packaged versions of the macOS PyTorch wheels with `+cpu`-suffixed version numbers
Now this pins perfectly!

enough-analyst-54434

01/02/2023, 4:10 PM
That's good to hear, but it's also unfortunate. A lot of folks will need to do something similar. It would be good if there were a less labor-intensive way to solve this.
šŸ‘ 4

lemon-oxygen-72498

02/10/2023, 9:15 AM
@modern-manchester-33562, @enough-analyst-54434: any update on this solution? I'm in exactly your situation @modern-manchester-33562 and I'm not so fond of putting that solution into place 😅
👀 1

enough-analyst-54434

02/10/2023, 2:25 PM
@lemon-oxygen-72498 I'm almost positive no-one contributing to Pants development has done any work here yet.

lemon-oxygen-72498

02/10/2023, 3:24 PM
Thanks for the update, John 👍

modern-manchester-33562

02/13/2023, 10:22 AM
I'm more of the opinion that this is something that torch should solve. All "solutions" I've seen discussed here in the channel just feel so wrong.

lemon-oxygen-72498

02/17/2023, 7:55 AM
Yeah, agreed. I feel bad putting the solution you described above into place 😞