# general
Agreed that there's been a lot of churn recently. Dependency inference + "file targets" hugely changed the paradigms for a lot of things, which kept leading to yet more new insights. For example, in a v1 world it was sensible for `provides=setup_py()` to live on a `python_library()`. That became illegal to do once we added "file targets" -- it would break most of the time. Which led to `python_distribution`, which led to the insight that there's a distinction between "metadata describing first-party code" vs. "metadata describing something you want to build" (rough sketch below). FYI, the major remaining churn we anticipate is wanting to solve the problem of it being hard to add explicit metadata to a granular part of your first-party code, without needing lots of granular targets.
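To make that split concrete, here's a rough BUILD-file sketch of the `python_distribution` arrangement. Field names and addresses are illustrative approximations, not verbatim Pants API:

```python
# BUILD (illustrative sketch -- field names approximate)

# Metadata describing first-party code: what the sources are.
python_library(
    name="lib",
    sources=["**/*.py"],
)

# Metadata describing something you want to build: an sdist/wheel artifact.
# In v1 this `provides=` lived directly on the python_library, which stopped
# making sense once targets started mapping to individual files.
python_distribution(
    name="dist",
    dependencies=[":lib"],
    provides=setup_py(
        name="mypackage",
        version="0.0.1",
    ),
)
```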
A common pattern (now) is to have one `python_library` target describing 20+ files thanks to `**` recursive globs. When you add an explicit dep, like a database driver that Pants can't infer, all 20 files get it even if only 1 file actually needs it. 111 mitigates this problem, but it's extra boilerplate. We're envisioning a way where you can somehow merge metadata -- say "these 200 files use these interpreter constraints; these 5 files have a database dep" -- without needing distinct targets for everything. The trick is how to do that in a way that isn't majorly disruptive... It's another big paradigm change we couldn't envision before dep inference, because we were blinded by the way every monorepo Build Tool™️ has done things since the start.
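A sketch of that granularity problem, again with approximate field names and a made-up third-party address:

```python
# BUILD (illustrative sketch)

# One target owns 20+ files via a recursive glob. Dep inference handles most
# imports, but a dep Pants can't infer has to be declared explicitly...
python_library(
    name="lib",
    sources=["**/*.py"],
    dependencies=[
        # ...and this explicit dep now applies to every file the glob matches,
        # even though only one module actually talks to the database.
        "3rdparty/python:psycopg2",
    ],
)
```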