Hey, just wanna give a big thanks to all of the Pants devs for all their help. We’ve hit a few roadblocks over the past several weeks, especially with the 2020-resolver (I even submitted a bug report about it not working for py2). There was a moment of despair last week when requirement-building times were very slow, like unacceptably slow. After further investigation, it turned out the slowness was with the py2 2020-resolver. Curiously enough, with the same set of requirements, the py3 2020-resolver was just as fast as py2 + legacy-resolver.
To give a few numbers for cold-cache startup times:
• Pants v1.25 Baseline 1m47.436s
• Pants v2.2.0 PY2 + 2020-resolver 5m14.671s
• Pants v2.2.0 PY2 + legacy-resolver 0m56.745s
• Pants v2.2.0 PY3 + 2020-resolver 0m58.757s
• Pants v2.2.0 PY3 + legacy-resolver 0m59.575s
While 5m might not seem too bad, this ^^ was for a relatively simple package; some of our bigger pydata-type packages were taking 30m+ to resolve!
After much hand-wringing, the solution was to use the legacy-resolver plus a global constraints file. Using the constraints file meant bumping incompatible dependencies in various project code and doing a bunch of botocore version manipulations (aioboto3 + aiobotocore + boto3 + botocore have very awkward pin ranges amongst themselves).
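For anyone hitting the same wall, the switch is just a couple of options in pants.toml. This is a sketch based on the Pants 2.x option names as I understand them (and a hypothetical constraints path) — double-check the names against the docs for your exact version:

```toml
[python-setup]
# Fall back to pip's legacy resolver instead of the 2020-resolver.
resolver_version = "pip-legacy-resolver"
# Pin the whole universe with a single global constraints file
# (path is ours/hypothetical; point it at your own lockfile-style pins).
requirement_constraints = "3rdparty/python/constraints.txt"
```

The constraints file itself is just standard pip `-c` format, one `package==version` pin per line, which is what makes the legacy resolver fast again: it never has to backtrack, it just takes the pinned versions.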
We view legacy-resolver + constraints as a bridge to get us onto Pants v2 immediately; from there we’ll update our code fully to py3 so we can jump onto the 2020-resolver. (I haven’t done further py3 + 2020-resolver testing on larger packages, so there’s definitely an assumption I’m making here — it could be that the dependencies in that test target just required less backtracking under py3.)
In any case, test-the-world on a cold start, which took 31m on a 16-core build server with the 2020-resolver, now takes 6m with legacy-resolver + constraints. And with a warm start we can test-the-world in under 2m. This is huge: we could literally test everything on every commit instead of doing incremental tests. And I imagine we can reduce that time further on beefier machines, given v2’s parallelism.
So, big thanks to you all for all the work you put into v2!