# development
h
in response to the feedback on the design doc about how to implement running a subprocess from pants, I've updated it, assuming that what I'll implement is something fairly close to what @witty-crayon-22786 had in mind: https://docs.google.com/document/d/1Hn73YlhTPROlULTMa_3A-Fdv7hAPiHFII8rvXri5l7E/edit#
I've been poking around the v2 engine code around process execution today, and I'm not entirely sure where I should add the flag to ensure no-caching
or how these options on the python-side `ExecuteProcessRequest` should interact with the existing multi-platform logic in the intrinsic
@average-vr-56795 it looks like you also wrote a bunch of the rust engine code around process execution - curious for your feedback as well
w
caching and the option itself are probably secondary to getting stdout/stderr hooked up
have you confirmed with your team though that the "long term" `./pants run` is the right goal for now?
in the short term, i sort of agree with daniel that externalizing it is a reasonable workaround
i commented on the doc, though.
a
It would probably be reasonable to only accept single-platform EPRs if exclusivity isn’t none, because it’s only going to run locally
Definitely happy to talk/help around implementation, it’s a corner of the code I know well 🙂
h
@witty-crayon-22786 short-term toolchain can work around not having a v2 `./pants run`, as long as `./pants binary` works (and that will once CI passes)
but having a properly-working v2 `./pants run` (as well as `./pants repl`) is something that we want sooner rather than later
anyway, saw your comment about what order to work on the several bullet points in
I'm not sure what streaming results for `pants run` should look like
if I understand what the v1 version https://github.com/pantsbuild/pants/blob/master/src/python/pants/backend/python/tasks/python_run.py#L26 is doing correctly, we're not currently trying to proxy the stdout/stderr of the pex through any kind of console abstraction
so that's new behavior we'd have to add for v2
and we also don't get stdout/stderr output from `ExecuteProcessResult` in a streaming fashion - as I understand it, the engine won't yield an `ExecuteProcessResult` until the program is completely done running
and doing some kind of streaming output would entail redesigning that interface anyway
w
yes:
> and we also don't get stdout/stderr output from `ExecuteProcessResult` in a streaming fashion - as I understand it, the engine won't yield an `ExecuteProcessResult` until the program is completely done running
this is the actual work i'm talking about.
but, no... it doesn't entail redesigning the interface, as far as i know. just adding the `foreground/exclusivity` flag.
and then, when it's time to invoke that process, acquiring whatever locks you need on the console, running it, and then returning control to other code
h
several questions:
1. what does locking the console actually entail? run would be implemented as a `console_rule`, so it has access to a `Console` instance, and the engine already makes sure that only one `console_rule` runs at a time
2. I don't understand how adding the `exclusivity` flag to `ExecuteProcessRequest` in and of itself would let us handle an interactive subprocess. no matter what the v2 engine does internally in response to that flag, as long as the run `console_rule` yields to the engine to wait for an `ExecuteProcessResponse`, I don't see how we can run an interactive process (like a repl), or capture keyboard interrupts from python code (like the v1 run task is doing)
w
@hundreds-breakfast-49010: at any given point in time, there is exactly one "stdout/stderr" for the process, even under pantsd
when a bunch of `@rules` are running, they might be logging on stdout/stderr
and executing a process is an intrinsic rule
meaning that it will run concurrently with other rules and potentially other logging
acquiring a lock on stdout/stderr might mean ... literally that, in the rust case. because i think rust has such a lock
once you've acquired that lock, you can then give those file handles to the process you are forking
which would then run to completion, and when it exited, you'd unlock stdout/stderr, and continue on your way
so i'm really not talking about anything other than "forking a subprocess and giving it our stdout/stderr handles to execute on, and then blocking for it to complete"
there are other semantics (where do you get the handles for stdout/stderr? maybe from the Console object, or from the Logger? or from a global?)
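in plain std rust, a minimal sketch of that "lock the console, hand our handles to the child, block until it exits" flow might look something like this (illustrative only, synchronous, and not the engine's actual code; `run_in_foreground` is a made-up name):
```rust
use std::io::Write;
use std::process::{Command, ExitStatus, Stdio};

// Sketch: take the Rust-side stdout/stderr locks so no other code in this
// process interleaves output, hand the real handles to the child, and block.
fn run_in_foreground(program: &str, args: &[&str]) -> std::io::Result<ExitStatus> {
    let stdout = std::io::stdout();
    let stderr = std::io::stderr();
    let mut out = stdout.lock();
    let _err = stderr.lock();
    out.flush()?;

    // The child inherits our stdin/stdout/stderr and runs to completion;
    // when status() returns, the locks drop and normal logging can resume.
    Command::new(program)
        .args(args)
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .status()
}

fn main() -> std::io::Result<()> {
    let status = run_in_foreground("python3", &[])?;
    println!("child exited with {}", status);
    Ok(())
}
```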
regarding: https://pantsbuild.slack.com/archives/C0D7TNJHL/p1570656386009700?thread_ts=1570581183.469500&cid=C0D7TNJHL ... ideally this won't be the case forever. so actually using the handles in the `Console` would be ideal
h
I think the code in local.rs is actually doing the executing for any `ExecuteProcessRequest` that the engine determines can be run locally
so it looks like this is using the rust std lib `Command` abstraction (although I'm not sure where `spawn_async` comes from)
it's also using rust stdlib `Stdio` to provide the output handles
I'm not sure what surgery would be necessary in the engine to make these stdout handles be something controlled by a python `Console` instance
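For reference, a simplified sketch of the capture-everything style with plain std `Command`/`Stdio` - piped output handles that the parent reads and only surfaces once the child is done (just an illustration, not the actual local.rs code):
```rust
use std::io::Read;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Batch/capture style: stdin is closed, stdout/stderr are piped back to
    // the parent as handles, and the bytes only surface once the child exits.
    let mut child = Command::new("python3")
        .arg("-c")
        .arg("print('hello from the child')")
        .stdin(Stdio::null())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;

    let mut captured = String::new();
    child
        .stdout
        .take()
        .expect("stdout was piped above")
        .read_to_string(&mut captured)?;
    let status = child.wait()?;
    println!("status: {}, captured: {}", status, captured);
    Ok(())
}
```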
w
@hundreds-breakfast-49010: so, remember that we also have the rust-side UI: https://github.com/pantsbuild/pants/blob/master/src/rust/engine/ui/src/display.rs
it looks like that is just reaching out and grabbing `stdout/stderr`
this is ... not an answer to your question.
yes, there might be some surgery.
but basically, we are interacting with stdout/stderr in two significant places on the rust side.
and it currently looks like those are the globals
(not necessarily the handles from the python side Console, BUT: in practice, in production those will be the same handles)
h
ok, so let's say we're locally executing a pex containing a python repl. the python code will yield to the engine, the engine will eventually get to here: https://github.com/pantsbuild/pants/blob/master/src/rust/engine/process_execution/src/local.rs#L242 , and then will execute that pex in the context of a Future
I'm still not sure what `spawn_async` is doing, but that seems like the important method there
and the stdout/stderr handles it has are currently generated within that process_execution code, but we could modify that code path to get them from logger.rs or display.rs or something
hm, I just realized - we're explicitly setting `stdin(Stdio::null())`, so does that mean we don't expect any subprocess we run to have any kind of input? that's not gonna work with a repl
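To make that concrete, a rough sketch of the difference (illustrative only, not the engine's code):
```rust
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // With Stdio::null() the child sees immediate EOF on stdin: fine for a
    // batch process, useless for anything interactive like a repl.
    let batch = Command::new("python3").stdin(Stdio::null()).status()?;
    println!("null stdin: {}", batch);

    // A foreground run would need the child to inherit the parent's stdin
    // (and stdout/stderr) so the user can actually type at it.
    let interactive = Command::new("python3")
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .status()?;
    println!("inherited stdin: {}", interactive);
    Ok(())
}
```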
looks like @enough-analyst-54434 wrote the bulk of this code, pinging him so he's aware of this discussion next time he's online
w
spawn_async is just provided by `tokio-process`. it's roughly equivalent to spawn.
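e.g. roughly this shape, from memory of the tokio-process 0.2-era API (a sketch, not our actual code):
```rust
use futures::Future;
use std::process::Command;
use tokio_process::CommandExt; // extension trait that adds spawn_async()

fn main() {
    // spawn_async() gives back a Child that is itself a Future resolving to
    // the child's ExitStatus once it exits; it needs a tokio runtime to be
    // driven to completion.
    let child = Command::new("echo")
        .arg("hello")
        .spawn_async()
        .expect("failed to spawn");

    tokio::run(
        child
            .map(|status| println!("child exited with {}", status))
            .map_err(|e| panic!("failed to wait on child: {}", e)),
    );
}
```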
h
this whole line of argumentation is still assuming that the right way for pants to execute an interactive, console-blocking process is to modify the existing `ExecuteProcessRequest` code path, which I'm increasingly less certain is the right way to go.
w
@hundreds-breakfast-49010: yes. the stdout/stdin/stderr will all be different in the case of `foreground/exclusivity`
h
even if we wanted to be able to support `--loop`, that doesn't necessarily imply that the way we need to support `--loop` is by re-using `ExecuteProcessRequest`
w
https://pantsbuild.slack.com/archives/C0D7TNJHL/p1570657906023500?thread_ts=1570581183.469500&cid=C0D7TNJHL this investigation is ~exactly what i was expecting here... i think that you're on the right track.
if you'd like to re-open design with more of the constraints listed up-front, that would be fine.
but it's 1) `--loop`, 2) ability to conditionally run a test process in the foreground, 3) streaming
2 is challenging. because i think that in order to not totally turn your test running infrastructure inside out, you need this ability to toggle being in the foreground on a process-by-process basis.
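i.e. something shaped roughly like the following at the request level - names are invented purely for illustration, this isn't a real engine type:
```rust
// Invented names, purely to show the shape: a per-request toggle that a rule
// can set process-by-process when it builds the request.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Exclusivity {
    None,       // normal: cacheable, output captured, can run anywhere
    Foreground, // take over the console, run locally, don't cache
}

#[derive(Debug)]
struct ProcessRequestSketch {
    argv: Vec<String>,
    exclusivity: Exclusivity,
    // ... env, input files, timeout, etc. as today
}

fn main() {
    // e.g. a test rule could flip the flag only when the user asked to debug
    // a single test interactively, leaving every other process as-is.
    let debug_this_one = true;
    let req = ProcessRequestSketch {
        argv: vec!["./pytest.pex".to_string()],
        exclusivity: if debug_this_one {
            Exclusivity::Foreground
        } else {
            Exclusivity::None
        },
    };
    println!("{:?}", req);
}
```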
h
I'm not sure 2 would be that hard to implement
let's assume that we do create a new type `LocalForegroundExecuteProcess` with a similar API to `ExecuteProcessRequest`
if we had that infrastructure, then in `python_test_runner.py` we could have a new option on `test` that either runs this code as-is, or runs a pex according to the new, local API for local debugging
w
i'm not sure how that's fundamentally different from what i'm suggesting, with the exception that one is a boolean flag on `ExecuteProcessRequest`
from a rule-author perspective, it feels more complicated
but otherwise it's roughly the same thing... just implemented in python rather than rust.
(maybe that's a benefit? but it feels like it duplicates codepaths both for rule authors and for us)
h
my thinking is that the boolean flag actually changes the semantics of `ExecuteProcessRequest` fairly substantially, and that implies having to add more incidental complexity in the existing code path around `process_execution`
I also personally don't understand that code path super-well (which is why I'm asking questions about it)
if we did create a new python type for spawning interactive console-hogging processes, it would probably be an intrinsic, so the code would eventually go into `nodes.rs` anyway, and could in principle re-use some of the existing code in `process_execution`, but without having to manage details around `MultiPlatformExecuteRequest` or how exactly we'd handle streaming output
I think we also don't want an `ExecuteProcessResponse` - a type with `stdout` and `stderr` bytestrings - as an output in an interactive context; we want a different type of `*Response` that has handles to stdout/stderr streams
if we did this with a flag on `ExecuteProcessRequest`, then the presence or absence of that flag would make the output `ExecuteProcessResponse` have a different shape depending on whether its input had the flag or not
and I think that also justifies implementing this functionality as a different type-pair
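To make the "different shape" argument concrete, a hypothetical pair might look like this (names invented; neither of these exists in the engine):
```rust
use std::process::{Child, ExitStatus};

// Hypothetical pair, names invented to make the two shapes concrete; neither
// of these is an existing engine type.

/// Batch result: stdout/stderr come back as fully captured bytestrings.
#[allow(dead_code)]
struct CapturedProcessResult {
    stdout: Vec<u8>,
    stderr: Vec<u8>,
    exit_code: i32,
}

/// Interactive result: no captured bytes; the caller holds a live child whose
/// stdout/stderr handles (or the inherited console) stream while it runs, and
/// only gets an exit status once it finishes.
struct InteractiveProcessHandle {
    child: Child, // child.stdout / child.stderr are stream handles
}

fn wait_for(mut handle: InteractiveProcessHandle) -> std::io::Result<ExitStatus> {
    handle.child.wait()
}
```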
w
maybe
i would say that conditional logic that could run inside of https://github.com/pantsbuild/pants/blob/master/src/python/pants/backend/python/rules/python_test_runner.py#L97-L106 is a different shape of proposal than was on the doc initially
and that might be more agreeable.
could you sketch out that design vs a flag?
h
okay, yeah, I'll sketch out a design in light of those three desiderata
w
feel free to overhaul that bottom section however you see fit.
h