# general
l
Hello, am I able to use:
```toml
[docker]
run_args = [
  "-p 0.0.0.0:13000:13000"
]
```
with a `docker_environment` that is used for my tests? As you can see, I want the docker container to expose a port.
To give a little more context, if I use a default environment to run all my tests:
```python
__defaults__({(python_test, python_tests): {"environment": "my_default_environment"}})
```
how can I pass docker run args to the `docker run` command when that environment starts up? I have tried `[docker].run_args`, `[test].env_args`, and `docker_environment.docker_env_vars="PANTS_DOCKER_RUN_ARGS='--name my_default_environment'"`, and I’m sure a few other ways too.
This has other practical uses, such as passing a network to that docker container so it can communicate with a database container instead of being part of the default bridge network, etc.
I know environments are a new concept, but does what I’m asking make sense?
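(For reference, a `docker_environment` target along these lines is assumed throughout the thread; the target name and image below are placeholders, not the actual project setup, and the name used in `environment=` also has to be mapped to the target's address via the environments `names` option in pants.toml.)
```python
# BUILD — rough sketch of the assumed environment target (name/image are hypothetical).
docker_environment(
    name="my_default_environment",
    image="python:3.9-slim-bullseye",  # placeholder image that would carry the test deps
)
```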
e
What you're asking makes sense, but the concept is new enough that you'd probably spend less time actually just trying the experiment than waiting for someone from the limited pool of folks in the know at this point to answer.
I know I, for example, could only guess and lie at this point without looking at code - which is another option.
l
I was trying to find the code that starts the anonymous container to see how I can pass it run args
I also did try many ways to get that container to accept `run_args`, without luck
e
Aha - Ok. The OP was phrased as a question, but it appears you have experimented to find the answer. I think you need to move on to filing a bug report / feature request then! Thanks for experimenting.
l
yeah I fiddled for a while before coming here for sure. I will file an issue. Of course, I was just hoping someone has run across this, as we’re trying to completely isolate our testing environment so the project has better cross-platform support
e
One detail I do know is that `docker run` is not what is done. A container is started and is long-lived, and individual actions are `docker exec`'d; so this is likely a problem for ports, now that I think about that statement for a second.
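(A minimal sketch of why that matters, using the docker-py SDK rather than anything Pants actually does internally: port publishing has to happen when the long-lived container is created, and `exec` has no equivalent of `-p` or `--network`. Image and port here are arbitrary examples.)
```python
# Illustration only (docker-py SDK), not Pants code.
import docker

client = docker.from_env()

# Ports can only be bound when the container is created/started...
container = client.containers.run(
    "python:3.9-slim",
    command="sleep infinity",
    detach=True,
    ports={"13000/tcp": 13000},
)

# ...while each individual action is roughly an exec into that container,
# and exec cannot add port or network bindings after the fact.
exit_code, output = container.exec_run("python -V")
print(exit_code, output.decode())

container.remove(force=True)
```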
l
yeah, I can see that the docker container is run in the background, which is certainly why it doesn’t accept `run_args` via the common methods
e
Well, your OP mentions ports, but your follow-up does not; it talks about passing a name that points to other ports.
l
yeah, I simplified it to any run_arg for clarity
my use case isn’t specific to this functionality, which is why I mentioned the name and network args
e
Ok, well port exposing / container start-up-only options will be a problem as things stand today. Other options are fine.
The crux is that container start-up is dog slow (on a Mac).
l
I could get around exposing ports with `--network host`
e
So, to support container start-up options like `-p` there would need to be new code that fell back to doing things the slow way and actually doing a `docker run` behind the scenes instead of a `docker exec`.
Or else more complex code that split the options in two and used the container-only options as a key into an appropriate long-lived container to exec into.
l
or even a `--name` option tbh, because the containers in the bridge network can communicate with each other using network DNS
but I see what you’re saying regarding supporting those options
e
I think issues & feature requests would definitely be great. My take on this feature - although it technically allows for more - is just to fix the situation of folks who develop on Mac but use Linux for production. As such, the development effort was pretty narrowly scoped and no one with a ~real-world use case hammered on it. You're providing that, though, which is useful.
l
yeah, the documentation points to building images on Macs as the motivation for the `docker_environment` functionality
for most of my tests it does everything I need, but we have integration tests that communicate with AWS SAM containers, and we need to do a little additional networking
which is the reason for my initial `-p` option, and my use case is certainly an edge case
e
l
I tried using those as well; the only environment variable I could think of putting in there was `"PANTS_DOCKER_RUN_ARGS='-p port:port'"`, but as you said, it doesn’t run the container using `pants run`
e
Well, the idea there is not to influence Docker or Pants, but to plumb a port straight down to the test, for reading in test code - if present - to alter test behavior.
It would be a hack to bridge the gap until the issues you'll be filing are implemented.
The details may not actually line up to allow this, but you'll know that if so.
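(A sketch of what that bridge could look like on the test side; `TEST_HTTP_PORT` is a hypothetical variable name, not an existing Pants option.)
```python
# Test-side sketch: read an (assumed) environment variable and fall back to a default.
import os

LAMBDA_ENDPOINT_PORT = int(os.environ.get("TEST_HTTP_PORT", "13000"))
LAMBDA_ENDPOINT = f"http://localhost:{LAMBDA_ENDPOINT_PORT}"
```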
l
so for more context, the `docker_environment` container will be running a simple HTTP server so that it can accept requests from an AWS SAM container in order to execute code
basically, we have an AWS Step Functions container running (external) and it makes requests to our test container, which acts as an AWS Lambda endpoint
which allows us to run Step Functions locally, which is pretty cool imo 🙂 but I need the Pants docker container to expose port 13000 to the host
or I need to give the container a static name so I can give the Step Functions container a static endpoint to call Lambda functions, e.g. http://host.docker.internal:13000
or http://test_environment:13000 (if using the container name on the same bridge network)
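(For illustration, a minimal stand-in for the endpoint described above: an HTTP server on port 13000 acting as a fake Lambda endpoint. The handler behavior is an assumption, not the actual test code.)
```python
# Sketch of a fake Lambda endpoint the Step Functions container could call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakeLambdaHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"ok": True, "received": event}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Bind on all interfaces so it is reachable when the port is published/shared.
    HTTPServer(("0.0.0.0", 13000), FakeLambdaHandler).serve_forever()
```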
e
Gotcha.
Lots of great details for an issue. The motivation for the feature / bug work is invaluable.
l
tbh I can get it all working if I just run the tests in a `local_environment`
but then every developer needs to install all the project dependencies, instead of just having all our tests run in a docker container based off an image we can create that installs all the deps and that a `docker_environment` uses as its image
I will try to compile a very detailed issue report this week. @enough-analyst-54434 thank you for going through it with me
your insights were helpful
h
FWIW I'm pretty sure the `[docker]` subsystem is for "Pants building docker images for you" and not "Pants running other stuff in a container". But maybe there is reuse I'm not aware of. Thanks for digging and experimenting! Issues very welcome!
l
@happy-kitchen-89482 I do like the idea of all tests being run in a docker container, but this excerpt from the documentation is what started me down this path:
```
Consuming environments

To declare which environment they should build with, many target types (but particularly "root" targets like tests or binaries) have an environment= field: for example, python_tests(environment=..).
```
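(The per-target form of the same field, as opposed to the `__defaults__` macro earlier in the thread; the target name here is made up.)
```python
# BUILD — setting the environment on a single target.
python_tests(
    name="integration_tests",
    environment="my_default_environment",
)
```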
h
Oh yes, you're definitely on the right track there. But `[docker].run_args` is for when you `pants run path/to/docker/image:target`.
It's a Pants plugin for building images (and running them for debugging); it is not related to the environments feature, which is what you're looking into.
l
yeah, after all my digging and talking here I’m understanding that much better now. Initially those args were all I could find in order to experiment
I’m relatively new to Pants, so I don’t know the codebase yet, but I wasn’t able to find the code that creates the environment’s anonymous container so I could see what configuration is possible
h
This will all be very helpful in shaping this feature as we stabilize it!
l
I really like 👖 thus far, so I’m happy to help as best I can. Thanks again to you both for your insights
🙏 1
f
The code invokes the Docker API directly and does not invoke `docker run`, although all of the functionality of `docker run` should be available in the Docker API.
Please feel free to make a PR with additional config to set there (for example, ports). It should be a straightforward code change: 1. Define the option in Python. 2. Modify certain dataclasses to get the new option through to the Rust code. 3. Use the option in the Rust code.
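(A very rough sketch of what step 1 could look like; the field name, help text, and placement are assumptions, not actual Pants code, and steps 2 and 3 would still be needed to plumb the value through to the Rust side.)
```python
# Hypothetical new field for the docker_environment target type (illustrative only).
from pants.engine.target import StringSequenceField


class DockerEnvironmentPortsField(StringSequenceField):
    alias = "ports"
    help = "Ports to publish when the long-lived environment container is created."
```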
e
@fast-nail-55400 IIUC the non-straightforward thing is at a higher layer: docker run vs exec, for reasons of perf.
Basically you can’t bind a port or a network in an exec, and that’s the use case.
f
but that code is what spawns the cached container
the code that does the exec is elsewhere
putting aside that a port shared among all of the execs may not be semantically correct
e
Yeah - I was not putting that bit aside.
That case could be harmless; it’s easy to imagine others where you get a conflict and need two partitioned "pools" of container execs, etc.