I have two questions about the lockfile generation...
# general
g
I have two questions about lockfile generation:
1. Is there something I can do to speed up the process?
2. Is there no way to limit upgrades, other than specifying the limits in the source requirement files? Nothing similar to pip-compile's `--no-upgrade` in pip-tools?
h
Re 2: There is now support in PEX for this, but it hasn't been implemented on the Pants side yet.
Re 1: There is now support in PEX for incrementally updating lockfiles, but again, not implemented on the Pants side yet.
I hope to introduce all of this in the new python backend
If not sooner in the existing backend
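For reference, the PEX-side support mentioned above is exposed through the `pex3 lock` CLI. A hedged sketch of what it looks like, assuming `pex3` is on your PATH (flag spellings are from the PEX docs and may vary by version):

```shell
# Create a lockfile from an input requirement set.
pex3 lock create --style universal --output lock.json "requests>=2.28" "flask"

# Incrementally update only the named project in an existing lockfile,
# leaving every other pin untouched -- this covers both the "limit
# upgrades" and the "don't re-resolve everything" use cases.
pex3 lock update -p requests lock.json
```

Pants would need to drive these operations itself for them to benefit `pants generate-lockfiles`, which is the part that hasn't been implemented yet.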
What kind of lockfile generation times are you seeing? What is the size of your input requirement set?
g
Hi @happy-kitchen-89482, sorry for the late response. Right now, for the 21 resolves I have, it takes around 13 minutes on a ~2020 MacBook Pro (2.3 GHz quad-core Intel Core i7), with all cores at ~100% usage.

The current use case is not ideal at all (the idea is to start from where we're at and gradually improve the situation with the inspection tools that Pants provides). I'm adapting Pants to a repo with ~30 projects/services, each with at least two independent requirements files, one for the app and another for testing (which I got working using ambiguity_resolution). In the same repo we also have ~20 shared projects/libraries that are used by some of those 30 projects. The Python version varies by service (up to this point, each one lives in its own container with an independent build process), from 3.9 to 3.11. I'm using `parametrize`, combining interpreter constraints and resolves, so that library X can be used by services Y via resolves R, each of them considering different combinations of requirements (for each library discovered using dependency inference from the same source root in the repo, plus the service-specific requirements and development requirements).

Currently most of the requirements are not well thought out: they are either too restrictive (`==`), too broad (`>=`), or just the plain project name. I had to manually adapt some of those to make it work (but hey, I could run all of the unit tests successfully!). To give you more context, listing `python_requirements` targets yields:
```shell
pants --filter-target-type=python_requirements list :: | wc -l
     374

pants --filter-target-type=python_requirement list :: | wc -l
    1745
```
Not sure how often this type of use case is presented, but my approach is to adapt pants to what we have, and then improve the situation.
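While incremental lockfile updates aren't wired up in Pants yet, one small mitigation is to regenerate only the resolves you actually touched rather than all 21. A sketch using the `generate-lockfiles` goal, with hypothetical resolve names (`svc-a`, `svc-b`):

```shell
# Regenerate a single resolve's lockfile instead of all of them.
pants generate-lockfiles --resolve=svc-a

# --resolve can be repeated; resolves not listed keep their existing lockfiles.
pants generate-lockfiles --resolve=svc-a --resolve=svc-b
```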
h
That is a good approach, I think. But yeah, that is a lot of resolving.
I guess the biggest win will be had from combining as many of those lockfiles as possible (this would fall under "improve the situation")
So having multiple projects (in an ideal world, eventually, all of them) share a single set of input requirements and thus a single lockfile
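Concretely, that consolidation happens in `pants.toml` under `[python.resolves]`. A hedged sketch with hypothetical resolve names and lockfile paths:

```toml
[python]
enable_resolves = true

[python.resolves]
# One shared resolve replacing many per-service ones (hypothetical path).
shared = "3rdparty/python/shared.lock"
# Keep a separate resolve only where requirements genuinely conflict.
legacy-svc = "3rdparty/python/legacy-svc.lock"
```

Each resolve merged into `shared` is one fewer full resolution pass during `pants generate-lockfiles`.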