# general
I'm curious about Python PEX in Docker. Is there a way to use `docker compose` in conjunction with PEX? For example, I have a common "logger" package which I want all of my services to use. The best way to deploy "logger" as a dependency of these services is in a PEX file. (Or maybe there's another way?) I want to use `docker compose` to run my entire codebase (front end on port 8080, various microservices, PostgreSQL on port 5432, etc.). Any suggestions, ideas, or otherwise?
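(For context on the Pants side: a shared package like this can be declared once and pulled into each service's PEX as an ordinary dependency. A minimal sketch, assuming a hypothetical repo layout with the package at the repo root:

```python
# logger/BUILD -- hypothetical sketch of the shared package
python_sources(name="logger")
```

Each service's `pex_binary` can then depend on `//logger`, which Pants usually infers from imports, so the PEX that ends up in the prod image already bundles the logger code.)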
The solution was pretty simple once I started thinking about it.

1. I need to define targets for `base`, `dev`, and `prod`:
   a. `dev` will be used for `docker compose` and will copy the application files to a new directory in the container.
   b. `prod` is for Pants/PEX, and the binary is copied to the bin directory.
2. I need to mount the `./logger` directory to a place where my application files will live.
3. Specify which target `docker compose` should use.

`docker-compose.yaml`:
```yaml
version: "3.8"
services:
  postgres:
    image: postgres:14.2-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  process:
    container_name: process
    build:
      context: ./process
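      # build only the dev stage of process/Dockerfile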
      target: dev
    image: process
    depends_on:
      - postgres
    links:
      - postgres
    volumes:
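      # mount the shared logger package next to the app code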
      - ./logger:/app/logger
volumes:
  postgres:
    driver: local
```
`process/Dockerfile`:
```dockerfile
# syntax=docker/dockerfile:1

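# shared base image for the dev and prod stages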
FROM python:3.9-slim AS base

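# dev: stage used by docker compose; copies the application files into /app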
FROM base AS dev

WORKDIR /app

COPY . .

ENV LOG_LEVEL=verbose

CMD [ "python3", "main.py" ]

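# prod: stage for the Pants-built PEX; the binary is copied into /bin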
FROM base AS prod

WORKDIR /bin

COPY ./main.pex app

ENV LOG_LEVEL=

CMD [ "app" ]
```
With this configuration I can use both `docker compose` and `./pants run process:docker`.
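(A sketch of how the `process:docker` target referenced above might be declared; the target names `lib` and `main`, the `entry_point`, and the `output_path` are assumptions to line up with the Dockerfile's `COPY ./main.pex app`, not copied from the thread:

```python
# process/BUILD -- hypothetical sketch
python_sources(name="lib")

pex_binary(
    name="main",
    entry_point="main.py",
    # place the packaged PEX at main.pex so the Dockerfile's
    # `COPY ./main.pex app` line can find it (an assumption about
    # how the build context is laid out)
    output_path="main.pex",
)

docker_image(
    name="docker",
    # Pants packages main.pex and adds it to the Docker build context
    dependencies=[":main"],
)
```

With something like this, `./pants package process:main` produces the PEX and `./pants run process:docker` builds and runs the image.)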
@incalculable-yacht-75851 In your example, is the pex file being built first by Pants before you run docker-compose? I had gone down the path of having my Dockerfile use Pants to build the pex file, but I got bad performance with that because the caching wasn't working well: for local dev, if I changed a single Python file, the entire pex had to be rebuilt, which was taking well over a minute because all of our dependencies were resolved again.
I hadn't considered having the Dockerfile build the pex. Since the `dev` target is only for local development and will never be published, I could easily just mount the repo root, but I'd rather not do that and instead be intentional about what is exposed to the container.