# general
p
Hi folks! I have an application that I am building as a Docker image with Pants, packaged as a PEX binary. When I run the application locally with Docker Desktop, I am able to access the exposed port, but inside our Kubernetes cluster the app cannot be reached on the same port over an Ingress, even though port-forwarding it to my local machine works. Any help on this would be appreciated. More information on the Pants BUILD file is in the thread.
Identification of the issue: it seems to be something with Pants, because a plain local docker build works fine with no application issues, so it looks like it's related to how Pants packages the PEX into the Docker image.
BUILD
python_sources(
    name="src_files",
    sources=["src/pages/**"],
)

resources(
    name="assets",
    sources=["src/resources/**", "src/assets/**"],
)

files(
    name="asset_files",
    sources=["src/resources/**", "src/assets/**"],
)

pex_binary(
    name="main",
    environment=parametrize("osx", "linux_docker"),
    entry_point="src/index.py",
    dependencies=[":src_files", ":assets"],
)

docker_image(
    name="midas-gui",
    instructions=[
        "FROM python:3.11-slim-buster",
        "EXPOSE 8050",
        "COPY src/quant/src/services/gui/src/resources /bin/resources",
        "COPY src.quant.src.services.gui/main@environment=linux_docker.pex /bin",
        # 'ENTRYPOINT ["sh", "-c", "while true; do echo Running; sleep 1; done"]',
        'ENTRYPOINT ["/bin/main@environment=linux_docker.pex"]',
    ],
    registries=[
        "@ecr-registry",
    ],
    image_tags=["{build_args.GIT_COMMIT}"],
    dependencies=[":asset_files"],
)

python_sources()
c
can you share the kubernetes manifest for whatever results in the pod (e.g. the deployment's template)?
EXPOSE doesn't actually publish the port (although Docker Desktop might parse it and be helpful), and if the port isn't in ...spec.containers[*].ports it won't be exposed and an ingress won't be able to connect to it
p
apiVersion: apps/v1
kind: Deployment
metadata:
  name: midas-gui-deployment
  labels:
    app: midas-gui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: midas-gui
  template:
    metadata:
      labels:
        app: midas-gui
    spec:
      serviceAccountName: dev-midas-gui
      containers:
        - name: midas-gui
          image: xxx
          imagePullPolicy: Always
          ports:
            - containerPort: 8050
          env:
            - name: DATA_DIR
              value: /app/src/data
            - name: AG_GRID_LICENSE
              valueFrom:
                secretKeyRef:
                  name: ag-grid-credentials
                  key: ag-grid-license
            - name: REDIS_MD_URI
              valueFrom:
                secretKeyRef:
                  name: mdrediscredentials
                  key: MDREDISCREDENTIALS
            - name: PROTON_URL
              valueFrom:
                secretKeyRef:
                  name: proton-credentials
                  key: proton-url
            - name: CLICKHOUSE_URL
              valueFrom:
                secretKeyRef:
                  name: clickhouse-credentials
                  key: clickhouse-url
            - name: GOOGLE_SHEETS_CREDS
              valueFrom:
                secretKeyRef:
                  name: google-sheets-credentials
                  key: google-sheets-password
            - name: GOOGLE_SHEETS_ID
              valueFrom:
                secretKeyRef:
                  name: google-sheets-credentials
                  key: google-sheets-id
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_DEFAULT_REGION
              value: us-east-1
            - name: HERMES_DATA_PATH
              value: /app/src/data
          resources:
            requests:
              cpu: 2
              memory: 7Gi
            limits:
              cpu: 2
              memory: 7Gi
      tolerations:
        - key: "<http://monoceros.io/large-memory-optimized|monoceros.io/large-memory-optimized>"
          operator: "Equal"
          value: "yes"
          effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  name: midas-gui-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    app: midas-gui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8050

---
apiVersion: v1
kind: Service
metadata:
  name: midas-gui-load-balancer-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: midas-gui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8050
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: midas-gui
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: midas.dev.orionintelligence.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: midas-gui-load-balancer-service
            port:
              number: 80
That is the deployment.yaml, and I do have the container port there.
Hey folks! I posted this message a few days ago. Could anyone look into why this issue is happening and help? I'm not exactly sure why there would be any issues, other than the part that @careful-address-89803 mentioned above. If that is the case, what do you think the fix would be?
b
Based on everything you said, it's almost certainly NOT a bug in Pants but something elsewhere in your application. Do you have anything in your code or config that causes the application to be served on port 8050?
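One pattern worth checking when port-forwarding works but a Service or Ingress can't connect is the bind address: kubectl port-forward tunnels to localhost inside the pod, so an app listening only on 127.0.0.1 still answers the port-forward while traffic arriving via the pod IP never reaches it. Port 8050 is Dash's default, so purely as an illustration, assuming this is a Dash app (an assumption, not confirmed in the thread), a minimal sketch of an explicit bind would be:
# src/index.py -- hypothetical entry point; names are illustrative
from dash import Dash, html

app = Dash(__name__)
app.layout = html.Div("hello")

if __name__ == "__main__":
    # Listen on all interfaces, not the 127.0.0.1 default, so that
    # traffic routed through the Service/Ingress can reach the server.
    app.run(host="0.0.0.0", port=8050)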
c
If a kubectl port-forward works, then the container is listening on that port and the container is built properly. You can also port-forward to the svc to verify that the svc is forwarding correctly. The next thing to check is whether your ingress can connect: look for any NetworkPolicies that prevent the nginx pods from contacting your svc, and check the logs of the ingress controller to see whether it thinks it can connect. It would also help if you could tell us the error you're getting beyond "cannot connect"; that's not much to go on. Is it an HTTP 502? An HTTP 504? A timeout?