# general
g
We’ve got a step in our build process (for docker artifacts) that copies files down from S3 and into the docker container. We’ve been looking at the `experimental_run_command` target as a solution for this but have been struggling. Does `experimental_run_command` support a workflow that performs the following steps?
1. `aws s3 sync s3://bucket/dir/ ignored_path_in_monorepo/` (or a shell script that runs this)
2. Make those files available to the `docker_image` sandbox
3. A plain old `COPY ignored_path_in_monorepo/ /path_in_docker_image/` in the Dockerfile
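For concreteness, this is roughly the kind of BUILD glue we’re imagining (a sketch only; the target and field names below are guesses based on what we’ve seen of `experimental_shell_command`, and may not match the real API):
```
# BUILD — hypothetical sketch, not verified against the actual Pants API.
# Step 1: sync the files out of S3 into the sandbox.
experimental_shell_command(
    name="fetch_s3_files",
    command="aws s3 sync s3://bucket/dir/ ignored_path_in_monorepo/",
    tools=["aws"],  # assumed: tool resolved from the environment's PATH
    outputs=["ignored_path_in_monorepo/"],  # assumed field name for captured outputs
)

# Step 2: depend on it so the synced files land in the docker build sandbox.
docker_image(
    name="my_image",
    dependencies=[":fetch_s3_files"],
)
```
Step 3 would then just be the plain `COPY ignored_path_in_monorepo/ /path_in_docker_image/` in the Dockerfile.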
b
Pants 2.16 allows you to install arbitrary URL handlers which can act as middleware for requests. The one implementation we ship with this feature is AWS S3 file downloading support. So once enabled, you'd be able to use an S3 URI as the source of an `http_source` object in a `file` target.
(You'll have to grep the pants repo for it. I'm on my phone about to board a plane)
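From memory it looks roughly like this; double-check the field names against the repo. The bucket/key, `len`, and `sha256` values are placeholders:
```
# pants.toml — enable the S3 URL handler backend (Pants 2.16+):
# [GLOBAL]
# backend_packages.add = ["pants.backend.url_handlers.s3"]

# BUILD
file(
    name="s3_asset",
    source=http_source(
        url="s3://my-bucket/dir/asset.bin",  # placeholder S3 URI
        len=123456,             # exact byte length of the object
        sha256="deadbeef...",   # sha256 digest of the object's contents
    ),
)
```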
g
Thank you! I’ll dig around.
b
Let me know if that works for you!
g
uhm, might hit this snag, tho, unless it’s been fixed and this issue should be closed: https://github.com/pantsbuild/pants/issues/18574 ?
b
Oh yeah, whoops. 😅
Not fixed AFAIK
g
Cool, looks promising. We're getting along fine for now with some `resource` targets that are synced manually before pants runs. Excited for this new feature!
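For reference, the workaround looks roughly like this (names illustrative; a script runs `aws s3 sync` into `ignored_path_in_monorepo/` before any pants invocation):
```
# BUILD — files are synced out-of-band, then exposed to pants as resources.
resources(
    name="s3_synced",
    sources=["ignored_path_in_monorepo/**/*"],
)

# The docker image depends on them so the COPY in the Dockerfile can see them.
docker_image(
    name="my_image",
    dependencies=[":s3_synced"],
)
```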
b
Pants should be able to handle any out-of-band pre-steps, so if you have more let us know! We can figure out how to make it better 🙂