# general
c
I'm getting a sha256 mismatch when upgrading my requirements. How do I start resolving this? I have run
pants --no-pantsd generate-lockfiles
Expected sha256 hash of 7b70f5e6a41e52e48cfc087436c8a28c17ff98db369447bcaff3b887a3ab4467 when downloading triton but hashed to e2b0afe420d202d96f50b847d744a487b780567975455e56f64b061152ee9554.
g
I'm guessing you have torch as an extra index?
c
You are entirely correct. I have had issues after removing the extra index as well, but I'm going to try once again now (after having updated pants and cleaned the cache etc)
g
Root issue is that torch has duplicated entries from PyPI to their index but modified the contents. You can't really work around this; it's just random as far as I can tell. Just their regular non-PEP-compliant behavior. Fixed here: https://github.com/pex-tool/pex/issues/2631, so you could update Pex to the latest release in your pants.toml (by setting
[pex-cli].version
and the matching [pex-cli].known_versions) and it'd work. https://github.com/pex-tool/pex/releases/tag/v2.55.1
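In pants.toml that pin would look roughly like this (a sketch; the known_versions entries are per-platform and must be copied from the Pants docs or the Pex release, not hand-written):

```toml
[pex-cli]
# Pin Pex to the release that fixes pex-tool/pex#2631.
version = "v2.55.1"
# A non-default version needs matching known_versions entries (one per
# platform); copy them verbatim from the release metadata.
# known_versions = ["..."]
```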
If upgrading is problematic for other reasons, you could use my mirror of their index, which only contains the wheels with local version identifiers: https://tgolsson.github.io/torch-index/. It contains the following variants of the torch index:
var URLs = []string{
	"https://download.pytorch.org/whl/",
	"https://download.pytorch.org/whl/nightly/",
	"https://download.pytorch.org/whl/cu129/",
	"https://download.pytorch.org/whl/cu126/",
	"https://download.pytorch.org/whl/cu121/",
	"https://download.pytorch.org/whl/cu118/",
	"https://download.pytorch.org/whl/cu117/",
	"https://download.pytorch.org/whl/cu116/",
	"https://download.pytorch.org/whl/cu110/",
	"https://download.pytorch.org/whl/cu111/",
	"https://download.pytorch.org/whl/cpu/",
	"https://download.pytorch.org/whl/nightly/cpu/",
}
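Pointing Pants at such a mirror instead of the upstream torch index would look something like this in pants.toml (cu118 chosen only as an example variant, and the mirror's exact URL layout is an assumption; check its pages for the real paths):

```toml
[python-repos]
# PyPI plus the mirrored torch index. The mirror only serves wheels with
# local version identifiers (e.g. +cu118), so plain PyPI names still
# resolve from PyPI without hash collisions.
indexes = [
    "https://pypi.org/simple/",
    "https://tgolsson.github.io/torch-index/cu118/",  # layout is an assumption
]
```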
c
Removing the index fixed the issue. It didn't work before I upgraded Pants with the index removed, though. I haven't specified pex in my toml. Is it possible that Pants triggered a pex update? I have no idea how these things are connected.
g
If you removed the index you only see the PyPI version, so if you don't need the index, that's great. If you haven't specified pex in your toml, it follows whatever was set as the default in the Pants version you use -- outside of that there are no upgrades.
c
Installing pantsbuild.pants==2.26.0 into a virtual environment at /home/user/.cache/nce/60b513559c7b53eb2acecbd7b8aaaeb942686f3997d07fa77377b51324f58fda/bindings/venvs/2.26.0
Failed to fetch https://github.com/pantsbuild/pants/releases/download/release_2.26.0/pants.2.26.0-cp39-linux_x86_64.pex: [22] HTTP response code said error (The requested URL returned error: 404)
Wasn't able to fetch the Pants PEX at https://github.com/pantsbuild/pants/releases/download/release_2.26.0/pants.2.26.0-cp39-linux_x86_64.pex.

Check to see if the URL is reachable (i.e. GitHub isn't down) and if the pants.2.26.0-cp39-linux_x86_64.pex asset exists within the release. If the asset doesn't exist it may be that this platform isn't yet supported. If that's the case, please reach out on Slack: https://www.pantsbuild.org/docs/getting-help#slack or file an issue on GitHub: https://github.com/pantsbuild/pants/issues/new/choose
Ok, this is somewhat unrelated, but after updating Pants and my lockfile I get this error when I'm trying to run my script on the remote. It works on my machine. I have no idea why it's trying to fetch cp39; the Python interpreter in use is 3.11 and interpreter_constraints is set to 3.11.x?
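For context on the asset name: the cp39 there is a CPython ABI tag chosen by the launcher for the Pants distribution itself, not derived from your project's interpreter_constraints (that's my reading of the error, not documented behavior). A quick way to print the tag of whatever interpreter runs a script:

```python
import sys

# "cp39" means CPython 3.9; build the same style of tag for the current
# interpreter so it can be compared against the asset name in the error.
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(tag)  # e.g. "cp311" under Python 3.11
```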
c
Re the latest error: Could you run
PANTS_BOOTSTRAP_VERSION=report pants
?
c
0.11.0
and on the computer where it works it's 0.12.2
c
ah, you might have jumped multiple versions at once and missed the warning period about updating scie-pants: https://www.pantsbuild.org/blog/2025/01/28/pants-2-24
SCIE_BOOT=update pants
https://www.pantsbuild.org/stable/docs/getting-started/installing-pants#upgrading-pants
c
I did! But I did upgrade one version at a time, as recommended by your last link. Unless this happened in version 2.25, which I jumped over by accident.
This fixed it! Thank you both for the prompt and accurate help! Looking forward to the speedups when working with torch!^^