# development
@witty-crayon-22786 @average-vr-56795 @early-needle-54791 wrt platforms in v2 -- i was under the impression that "target" vs "host" platforms only need to be differentiated if you are running a cross compiler (otherwise, you just need the "target" platform). this is something i believe bazel supports, but i think it would be a great simplification to avoid cross compiling, both in concept and in implementation. my concern is that cross compilation would be a significant amount of effort to support, on top of the existing difficulty (which we've now managed) of compiling for the same platform, especially if people want to choose which compiler to use (see the comment i left on the doc at https://docs.google.com/document/d/1AWIvRRGzmE3q5IXiRCqvLPCo0BkqX_9y4F9B0KV_9h8/edit?disco=AAAADE1EQ5U).

if the native backend tasks were converted to v2 right now, it would immediately work to compile things on a linux remote execution platform when pants is invoked from an osx machine, because:
(1) pants is provided `Platform.linux` as a root input
(2) pants resolves the appropriate binary tools for the linux platform (this already works)
(3) pants constructs the snapshot for linux from those tools (this also already works; it's just not made into a snapshot yet because the tasks are still v1 -- i was waiting for local process execution caching)
(4) pants sends the snapshot over to the remote execution backend and receives the output (remote execution is known to work)

this could be done without introducing a `PlatformSpecificExecuteProcessRequest`, and something like speculation could be implemented by instead just injecting the appropriate `Platform` parameter into the rule subgraph that ends up triggering the execution (we already have a rule subgraph that does platform-specific logic in native_toolchain.py). it's true that we then need a way to communicate to rust whether to execute the process locally or remotely -- maybe a field on `ExecuteProcessRequest` or a separate object would work for that -- but the identity of the injected `Platform` param is exactly what distinguishes the appropriate subgraphs in the native backend right now.

i'm concerned when i see `yield Get(PlatformSpecificBinary, ...)` in the doc, because the native backend already does exactly this, without having to introduce new syntax or classes, and without having to write code that knows anything about platforms except where they differ (which the native backend has already figured out). when building the self-bootstrapping native backend, i spent some time representing the intricacies of what's actually used when people run a compile with the compilers their package manager installs, so that it could work immediately on all supported platforms. that time was spent first building on @happy-kitchen-89482's work on `BinaryToolBase` and `BinaryTool` to make them more injectable and extensible, and then adding lots of very basic descriptions of things like `Compiler`, `CCompiler`, `CppCompiler`, and `CppToolchain` in https://github.com/pantsbuild/pants/blob/847dbe46f4d64635c23d2146d36f2642f0bd7918/src/python/pants/backend/native/config/environment.py#L22 (there's also some moderately-complex but really-useful logic added in that file, with `_ExtensibleAlgebraic` and decorators, to make this all really composable). i think that structure offers quite a lot to build off of.

in the case of speculation, i don't think we need separate "target" and "host" platforms, because we can simply make sure we inject the appropriate `Platform` parameter at the start, as described above. i would really like the work we're doing on speculation to align with the existing logic. if we want a concrete example of platform-specific logic (since the jvm backend doesn't provide one), the native backend is very small and highly decoupled from other subsystems by design, and was written precisely to be injectable in this fashion. i can't edit the doc, and comments there can be ignored, hence the slack message.
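to make the (1)-(4) flow above concrete: here is a rough stand-in sketch (plain python, not the real pants rule API -- the function and dict names are made up for illustration) of how the identity of a `Platform` param injected at the root can select a platform-specific subgraph, with no wrapper request type involved:

```python
from enum import Enum


class Platform(Enum):
    LINUX = "linux"
    DARWIN = "darwin"


# stand-ins for the platform-specific rule subgraphs: each one resolves a
# toolchain snapshot for exactly one platform, mirroring how the subgraphs
# in native_toolchain.py are distinguished by the Platform param they consume.
def linux_toolchain():
    return "snapshot-of-linux-tools"


def darwin_toolchain():
    return "snapshot-of-osx-tools"


TOOLCHAIN_RULES = {
    Platform.LINUX: linux_toolchain,
    Platform.DARWIN: darwin_toolchain,
}


def resolve_toolchain(platform: Platform) -> str:
    # the injected Platform's identity picks the subgraph; no
    # PlatformSpecificExecuteProcessRequest is needed.
    return TOOLCHAIN_RULES[platform]()


# invoking pants from an osx machine but injecting Platform.LINUX at the
# root resolves the linux tools, ready to ship to remote execution.
print(resolve_toolchain(Platform.LINUX))
```

the point of the sketch is just that the dispatch lives in the graph shape, not in a new request class.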
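for the "communicate to rust whether to execute locally or remotely" part, the field-on-`ExecuteProcessRequest` option could look roughly like this (the `location` field and `ExecutionLocation` enum are hypothetical names, not existing pants API):

```python
from dataclasses import dataclass
from enum import Enum


class ExecutionLocation(Enum):
    LOCAL = "local"
    REMOTE = "remote"


@dataclass(frozen=True)
class ExecuteProcessRequest:
    argv: tuple
    input_snapshot: str
    # hypothetical field: tells the engine where to run the process; the
    # alternative discussed above is a separate wrapper object instead.
    location: ExecutionLocation = ExecutionLocation.LOCAL


req = ExecuteProcessRequest(
    argv=("gcc", "-c", "hello.c"),
    input_snapshot="snapshot-of-linux-tools",
    location=ExecutionLocation.REMOTE,
)
print(req.location.value)
```

a field keeps one request type; a separate object keeps the request itself platform/location-agnostic -- either carries the same bit of information to the engine.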
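and a very loose stand-in for the `_ExtensibleAlgebraic` idea in environment.py (the class and method names here are invented for illustration): toolchain components declare the same sequence-typed fields, so composing a full toolchain out of pieces is mechanical field-by-field concatenation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolchainComponent:
    path_entries: tuple = ()
    library_dirs: tuple = ()

    def compose(self, other: "ToolchainComponent") -> "ToolchainComponent":
        # field-by-field concatenation; the real file generates this kind of
        # merging with decorators rather than writing it out by hand.
        return ToolchainComponent(
            path_entries=self.path_entries + other.path_entries,
            library_dirs=self.library_dirs + other.library_dirs,
        )


c_compiler = ToolchainComponent(path_entries=("/usr/bin",), library_dirs=("/usr/lib",))
linker = ToolchainComponent(path_entries=("/usr/local/bin",))
toolchain = c_compiler.compose(linker)
print(toolchain.path_entries)
```

that composability is what makes the structure easy to build off of for new platform-specific pieces.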