# development
w
there is exactly one call that represents "the running of `@console_rule`s", but there are lots of calls for other reasons
h
the context for this question is that I'm trying to figure out where the best place is to add code that can asynchronously poll `WorkUnitStore`, so that we can report build statistics while the build is still going on
w
rather than polling, i'd suggest changing the recording to eagerly send things somewhere
h
and I was thinking that spawning a new thread and/or creating a new Future in `Scheduler::execute` is a reasonable place to do that, mostly because that's where the loop that writes to the display already exists
w
ie, push rather than poll/pull
a middle ground would be putting events in a queue next to where we currently add them
h
the implementation I have right now makes `WorkUnitStore` have a queue of new workunits
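A minimal Python sketch of that queue idea (the real `WorkUnitStore` is in Rust, and these method names are hypothetical; this just shows a store that records workunits and also enqueues them for async consumers):

```python
import threading
from collections import deque

class WorkUnitStore:
    """Hypothetical sketch only: a store that keeps a queue of new workunits
    so an async consumer can drain them while the build is still running."""

    def __init__(self):
        self._lock = threading.Lock()
        self._queue = deque()

    def add_workunit(self, workunit):
        # Enqueue the workunit for whatever consumer drains the queue later.
        with self._lock:
            self._queue.append(workunit)

    def drain(self):
        # Atomically pop everything currently queued and return it.
        with self._lock:
            items = list(self._queue)
            self._queue.clear()
        return items
```

The lock makes `add_workunit` (called by the engine) and `drain` (called by a polling thread) safe to run concurrently.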
w
nice
h
and then I want to have code that just periodically checks for new items in the queue and pops them off
and sends them off
so I need code that 1) can mutate the `WorkUnitStore` of a `Session` and 2) runs asynchronously from the engine
and if I can do that then I can introduce some interface in python maybe that will hook into that async thread/future and make HTTP requests or do whatever else someone wants to do with that data live
w
well
if it's in a queue, you can consume the queue from anywhere
if you want the consuming code to be python, you could start a thread on startup of a RunTracker, and join it on shutdown
afaik, (and Benjy/Alex might know more) the RunTracker lives for the length of one pants run
a rough equivalent on the rust side is the Session
h
hm, `RunTracker` looks interesting
I wonder how many v1 assumptions it makes
a `Session` has an inner `WorkUnitStore`, so maybe a session could spawn a thread that polls its own `WorkUnitStore`, and then gets killed in the session's drop method?
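The lifecycle being discussed (a polling thread owned by the session and shut down when the session ends) could be sketched in Python roughly like this. Names, the interval, and the stop mechanism are all illustrative, not pants API; in Rust the equivalent join would live in `Session`'s `Drop` impl:

```python
import threading

class WorkunitPoller:
    """Hypothetical sketch: a background thread that periodically drains a
    workunit queue and hands each item to a callback, until asked to stop.
    Analogous to a thread owned by a Session and joined when it is dropped."""

    def __init__(self, drain_fn, handle_fn, interval=0.05):
        self._drain_fn = drain_fn    # e.g. something like WorkUnitStore.drain
        self._handle_fn = handle_fn  # plugin callback (HTTP POST, logging, ...)
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        # Event.wait doubles as the sleep: it returns early (True) when stop()
        # is called, otherwise times out (False) and we poll again.
        while not self._stop.wait(self._interval):
            for item in self._drain_fn():
                self._handle_fn(item)
        # Final drain so nothing enqueued just before shutdown is lost.
        for item in self._drain_fn():
            self._handle_fn(item)

    def stop(self):
        # The "drop" path: signal the loop, then join the thread.
        self._stop.set()
        self._thread.join()
```

The final drain after the loop is the important detail: without it, events recorded between the last poll and shutdown would be silently dropped.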
a
Sounds like a reasonable plan 🙂
h
so one thing that I'm not sure about, because I'm not as familiar with Rust futures as I'd like to be, is what the tradeoffs are between spawning a thread (with `std::thread::spawn`) and spawning a future on the runtime
I'm not 100% sure it's possible to spawn a future in one method, keep around a handle to it, and kill it at drop time, where that future contains an infinite loop
and if you can do that, why pick a future instead of a thread?
w
so
what are you planning on doing with the events? some sort of pluggable publish logic?
hitting an http api?
i ask, because that "what" probably influences the "where"
h
toolchain wants to use it to hit an http api, but the details of that are toolchain-specific, so we probably want to implement that functionality as a plugin
that can generically get a stream of events from the engine
w
an alternative would be to make the API public, and have the implementation be specific.
h
(and maybe in the future zipkin can also be implemented this way, instead of being a special case in the engine)
w
yea, definitely in favor of that.
rust-based plugins are "probably" not happening anytime soon, so i'm guessing that the extension logic would be implemented in python
h
so I think that implies that it's better not to try to create a public "pants" http API (at least right now) and instead create a generic events stream within pants that arbitrary code can use, including code that hits an API
I think we're fine with doing the http in python
it just needs to be capable of interacting with the engine asynchronously
w
...ish?
h
I'm sorry I don't follow?
w
so, if this is "we need to add a plugin"
...then the harness for plugins needs to be able to do various things
but the plugin code may just be a callback for each event.
anywho.
h
a callback per-event is probably fine, although I wonder if it's better to allow python to ask for batched events every 500 ms (or whatever time), and have rust do that batching
we don't want to make one http request for every event so we have to buffer events somewhere
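The batching idea (buffer events and flush them to the callback on a timer, so there is never one HTTP request per event) could look something like this. This is a hedged sketch, not pants code; the class name, callback shape, and the interval default are all invented for illustration:

```python
import threading

class EventBatcher:
    """Hypothetical sketch: buffer events and hand them to a callback in
    batches on a fixed interval (e.g. every 500 ms), flushing once more on
    shutdown so nothing is lost."""

    def __init__(self, flush_fn, interval=0.5):
        self._flush_fn = flush_fn    # receives a list of events per flush
        self._interval = interval
        self._lock = threading.Lock()
        self._buffer = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def add(self, event):
        # Called by the producer side (e.g. per workunit completion).
        with self._lock:
            self._buffer.append(event)

    def _flush(self):
        # Swap the buffer out under the lock, then call out without holding it,
        # so a slow HTTP request never blocks producers.
        with self._lock:
            batch, self._buffer = self._buffer, []
        if batch:
            self._flush_fn(batch)

    def _run(self):
        while not self._stop.wait(self._interval):
            self._flush()
        self._flush()  # final flush on shutdown

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Swapping the buffer out before invoking the callback is the design choice that matters here: the flush callback can be arbitrarily slow (an HTTP round trip) without ever blocking the engine-side `add` calls.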