# plugins
f
I have a custom test rule where I’m going to start a docker-compose with some services, then run the tests. There are several of these targets that could run concurrently with separate compose files and I need to try and avoid port collisions. I saw
execution_slot_variable
and I guess I could calculate an “offset” in the exposed ports. This may work but isn’t particularly obvious to other people, and doesn’t guarantee the port will be available. Is there any API available to act as a mutex? (so I could take/wait on a lock on the requested port) Alternatively, is there a way to limit Pants concurrency at the rule level? I saw
Process.concurrency_available
but it’s missing a docstring explaining how it should be used or what effect it has. I don’t really want to set the global concurrency to 1 using the
process_execution_local_parallelism
flag
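A minimal sketch of the offset idea (the base port, block size, and the env var name `PANTS_EXECUTION_SLOT` are my assumptions, not anything Pants guarantees): give each execution slot its own block of host ports, derived from the env var whose name you pass as `Process.execution_slot_variable`.

```python
import os

# Hypothetical values: pick a base/range you believe is free on your CI hosts.
BASE_PORT = 15000
PORTS_PER_SLOT = 10  # max host ports a single compose file needs


def ports_for_slot(slot: int) -> list[int]:
    """Map a Pants execution-slot index (0, 1, 2, ...) to a block of host ports."""
    start = BASE_PORT + slot * PORTS_PER_SLOT
    return list(range(start, start + PORTS_PER_SLOT))


# Pants exports the slot index under whatever env var name you set as
# Process.execution_slot_variable; "PANTS_EXECUTION_SLOT" is an assumed name.
slot = int(os.environ.get("PANTS_EXECUTION_SLOT", "0"))
print(ports_for_slot(slot))
```

As the thread notes below, this only avoids collisions between Pants-managed processes; nothing stops an unrelated process from already holding one of these ports.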
c
Not a Pants solution, but if the ports don't have to be static, and the tests only need to know the assigned host port, you can have docker compose pick a random host port. All you need to do is publish the port instead of mapping it, by listing only the container port, something like
version: "3"
services:
  web:
    ports:
      - "8000"
you can then get the host port with
docker-compose port web 8000
or the API. docker-compose delegates to the OS to pick an available port.
also I think the randomisation only applies to the host port; inside the docker-compose network the service is still reachable on its normal port
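`docker-compose port web 8000` prints the binding as `ADDR:PORT` (e.g. `0.0.0.0:49153`). A hedged sketch of fetching and parsing that from a test harness (the service/port names are just the example above; error handling is minimal):

```python
import subprocess


def compose_host_port(service: str, container_port: int) -> int:
    """Ask docker-compose which host port was assigned to a published port."""
    out = subprocess.run(
        ["docker-compose", "port", service, str(container_port)],
        check=True, capture_output=True, text=True,
    ).stdout
    return parse_host_port(out)


def parse_host_port(port_line: str) -> int:
    """Extract the host port from docker-compose's 'ADDR:PORT' output line."""
    return int(port_line.strip().rsplit(":", 1)[1])
```

`rsplit(":", 1)` splits on the last colon, so an IPv6 address on the left would also parse correctly.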
f
That’s an interesting option, thanks. Still curious about the Pants-specific APIs if anyone else has context
h
This is what
execution_slot_variable
is for, but yeah, you'd have to have a range of ports that you know are available
How would this proposed mutex work? Presumably you wouldn't want tests to block waiting for a port held by another test?
f
I’m not proposing Pants should add this if it’s not a common request, but that’s exactly what I was thinking. My use-case is a little different from how, say, Python works: I’m dealing with a large batch of tests at once, not single files, so blocking execution while waiting for a shared resource is fine-ish. The parallel execution is definitely better, though I’m still struggling to understand how to tell Pants how “big” my test run is expected to be. Does
concurrency_available
directly correspond to the number of threads I’m asking pants to acquire for this process run?
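For what the blocking-mutex idea could look like outside of Pants: an advisory file lock per port (POSIX `fcntl.flock`; the lock-file naming is my invention). As noted earlier in the thread, this only coordinates processes that use the same lock files; it cannot stop an unrelated process from grabbing the port.

```python
import contextlib
import fcntl
import os


@contextlib.contextmanager
def port_lock(port: int, lock_dir: str = "/tmp"):
    """Block until an advisory file lock for `port` is acquired, then hold it.

    Cooperative only: it serializes processes that opt in to the same lock
    files, and does not guarantee the port itself is actually free.
    """
    path = os.path.join(lock_dir, f"pants-port-{port}.lock")
    with open(path, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks while another process holds it
        try:
            yield port
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)


# usage sketch:
# with port_lock(8000):
#     subprocess.run(["docker-compose", "up", "-d"], check=True)
#     ...run tests, then tear down...
```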
h
Gotcha
Yeah, I think this is another aspect of the general "how to set up and tear down processes for tests" question that still needs a robust solution.
concurrency_available
is a way to tell pants how many cores the process itself (which is otherwise opaque to Pants) is capable of utilizing. It influences the number of cores available to the process, but it doesn't directly set it, because there may be other competing consumers of cores.
So it indirectly corresponds to the number of threads pants will give the process
But it is not an end-user thing. Pants backend code can set it based on what it knows about the tool it's invoking.
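An illustrative model of that "influences but doesn't directly set" behavior (this is my simplification, not Pants source code): the grant is capped both by what the process said it could use and by what is currently free.

```python
def granted_concurrency(concurrency_available: int, free_cores: int) -> int:
    """Toy model of the semantics described above (not the real scheduler):
    a process is granted at most the cores currently free, capped by the
    concurrency it advertised, and always at least 1 so it can run."""
    return max(1, min(concurrency_available, free_cores))
```

So advertising a large `concurrency_available` raises the ceiling but never reserves cores, which is why (as discussed below) it can't be used as a reliable mutex.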
f
Right, I’m trying to figure out what I should expect if I set it from my plugin. If I set it equal to the same value as
process_execution_local_parallelism
does that mean no other process will be allowed to execute at that time, effectively making my rule a “singleton”? Or is it more of a “suggestion” rather than an enforced value?
h
I wouldn't rely on that working; it's more of a suggestion in some sense
So you don't want to set
process_execution_local_parallelism=1
so you can get parallelism on other processes, but you want just pytest to not parallelize at all?
Do I have that right?
f
not pytest in this case, it’s a custom test goal I’m writing for dotnet
but otherwise yes, it’s another potential solution, similar to the mutex mentioned before
h
ah, ok, got it. So
process_execution_local_parallelism
would work but is too strict because it would apply to all processes, not just the test processes. Yeah, makes sense.
That seems like a sensible feature to have
g
Just catching up on this thread: couldn't this be solved with partitions? Since each partition maps to roughly one process (iirc), you could use something like a run script and execute in sequence.
f
@gorgeous-winter-99296 I haven’t used partitions much so I’m not quite following. I think one problem I have here is that my test goal is “big” and runs many tests as part of a single process execution. I don’t yet have a way to split this up into smaller segments for pants to work with
g
No, I was thinking of the rules-level partitioner/batch API, since there was a mention of a custom test goal.