few-arm-93065 — 11/10/2022, 11:01 PM
Yet, I still see these errors. The three directories I cache are ~/.cache/pants/lmdb_store, ~/.cache/pants/named_caches, and ~/.cache/pip.
Any advice on how I could go about fixing this?
ProcessExecutionFailure: Process 'Building src.stages.fragment_counts/exe.pex with 9 requirements: boto3==1.21.45, confluent-kafka[avro]==1.8.2, cryptography==3.4.8, overrides==6.1.0, redis[hiredis]==4.3.4, toml==0.10.2, typeguard==2.13.3, types-redis==4.3.20, types-toml==0.10.1' failed with exit code 1.
stdout:
stderr:
ERROR: Could not find a version that satisfies the requirement boto3==1.21.45
ERROR: No matching distribution found for boto3==1.21.45
pid 1910 -> /home/runner/.cache/pants/named_caches/pex_root/venvs/97c49a45855c8337ed6c7437a8f9bce7e5f12e07/15ba0a31f835e32f40e40a07c519d52e41260cdc/pex --disable-pip-version-check --no-python-version-warning --exists-action a --isolated -q --cache-dir /home/runner/.cache/pants/named_caches/pex_root --log /tmp/process-execution74ifxC/.tmp/tmpa2vp83vp/pip.log download --dest /tmp/process-execution74ifxC/.tmp/tmpddxwdbv9/opt.hostedtoolcache.Python.3.9.15.x64.bin.python3.9 boto3==1.21.45 confluent-kafka[avro]==1.8.2 cryptography==3.4.8 overrides==6.1.0 redis[hiredis]==4.3.4 toml==0.10.2 typeguard==2.13.3 types-redis==4.3.20 types-toml==0.10.1 --index-url ***((our-artifactory-redacted))/artifactory/api/pypi/sw-pypi/simple --retries 5 --timeout 15 exited with 1 and STDERR: None
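The failing step in that traceback is a plain `pip download` run by pex. A minimal sketch of the same invocation, rebuilt from the flags visible in the log, is below; the function name is made up for illustration, and the index URL used in the example is a stand-in for the redacted Artifactory one:

```python
import sys


def pip_download_cmd(requirement, dest, index_url):
    """Rebuild the `pip download` command line pex runs (flags copied from the log above)."""
    return [
        sys.executable, "-m", "pip",
        "--disable-pip-version-check", "--no-python-version-warning",
        "--isolated", "-q",
        "download", "--dest", dest,
        "--index-url", index_url,
        "--retries", "5", "--timeout", "15",
        requirement,
    ]


# Example: the requirement that keeps failing in CI, against a placeholder index.
cmd = pip_download_cmd(
    "boto3==1.21.45", "/tmp/wheels",
    "https://example.invalid/artifactory/api/pypi/sw-pypi/simple",
)
```

Running that command outside of Pants (via `subprocess.run(cmd)`) reproduces the resolution against the same index, which helps separate an index/proxy problem from a Pants/pex caching problem.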
happy-kitchen-89482 — 11/10/2022, 11:09 PM
enough-analyst-54434 — 11/10/2022, 11:24 PM
few-arm-93065 — 11/11/2022, 12:01 AM
on a single executable target in our build, which forces pex/pip to download a bunch of commonly used wheels. After that step, GitHub copies the contents of those three directories (in my original message) into its cache. Later, inside the ~40 workflows that follow, those folders are copied from the GitHub cache back into the filesystem before Pants is called.
• What's new, and where it gets interesting: we've been using this approach for a long time and seeing this error sporadically (once every 10 or 20 builds), but it was always rare enough that rerunning the failed part of the build was sufficient. Over the last few months it has gotten much worse, to the point where every build hits 2 or 3 instances of this failure, and sometimes still fails when re-run.
• We've also seen a few cases where individual users hit the same error while building a single pex on their laptops, but that's still rare.
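A quick sanity check for a setup like this is to confirm the restored cache actually contains the wheel pip later fails to find. A small sketch follows; the helper name is made up, and note that cache layouts differ (pip's HTTP cache stores raw responses rather than `.whl` files, while pex keeps real wheel files under its `pex_root`), so point it at whichever tree holds actual wheels:

```python
from pathlib import Path


def find_cached_wheels(cache_root, project, version):
    """Recursively look for wheel files matching project==version under cache_root."""
    # Per PEP 427, '-' in a distribution name is normalized to '_' in wheel filenames.
    pattern = f"{project.replace('-', '_')}-{version}-*.whl"
    return sorted(Path(cache_root).rglob(pattern))


# e.g. find_cached_wheels(Path.home() / ".cache/pants/named_caches", "boto3", "1.21.45")
```

If this finds nothing after the cache restore step, the failure is a cache-population problem rather than a resolution problem.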
enough-analyst-54434 — 11/11/2022, 12:04 AM
few-arm-93065 — 11/11/2022, 12:04 AM
enough-analyst-54434 — 11/11/2022, 12:05 AM
few-arm-93065 — 11/11/2022, 12:18 AM
enough-analyst-54434 — 11/11/2022, 12:48 AM
few-arm-93065 — 11/11/2022, 1:07 AM
folders already have the wheels for these packages that keep failing, and I'm trying to understand why pex keeps going out to Artifactory despite the cached wheels.
enough-analyst-54434 — 11/11/2022, 1:14 AM
few-arm-93065 — 11/11/2022, 1:15 AM
enough-analyst-54434 — 11/11/2022, 1:16 AM
few-arm-93065 — 11/11/2022, 1:17 AM
enough-analyst-54434 — 11/11/2022, 1:18 AM
few-arm-93065 — 11/11/2022, 1:21 AM
enough-analyst-54434 — 11/11/2022, 1:21 AM
few-arm-93065 — 11/11/2022, 1:23 AM
enough-analyst-54434 — 11/11/2022, 1:23 AM
few-arm-93065 — 11/11/2022, 1:24 AM
enough-analyst-54434 — 11/11/2022, 1:27 AM