# general
r
I'm hoping that someone will be able to solve the final part of my Pants puzzle... successfully publishing a Docker image to a Google Artifact Registry Docker registry. 🧵
My current conundrum... my CI build currently does the following (at a high level, with obvious steps like checking out the code omitted):

1. Authenticates a service account via `gcloud auth activate-service-account`, with environment variables containing the service account key.
2. Runs `gcloud auth configure-docker europe-docker.pkg.dev` so that Docker is configured to use the appropriate credential helper. After running this we have `~/.docker/config.json` with the following content:

```json
{
  "credHelpers": {
    "europe-docker.pkg.dev": "gcloud"
  }
}
```

where `gcloud` is actually a suffix that gets translated into `docker-credential-gcloud`.
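(As a quick sanity check of that setup, the credential helper can be exercised by hand: Docker's credential-helper protocol has `get` read the registry host on stdin and print a JSON credential. A sketch, assuming `docker-credential-gcloud` is on the PATH of the shell you test from:)

```sh
# Ask the gcloud credential helper for credentials for the registry directly;
# if auth is working it should print JSON containing "Username" and "Secret".
echo "europe-docker.pkg.dev" | docker-credential-gcloud get
```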
Then we successfully run `./pants package ::` to build the Docker container images... if I ran `docker image ls` at this point I would see the new container images with the expected tags.
However, the `./pants publish ::` step is failing due to a permissions error:

```
denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<gcp-project>/locations/europe/repositories/<repository>"
```

where `<gcp-project>` is the name of my GCP project and `<repository>` is the name of my repository (located in the GCP europe region, hence `europe-docker.pkg.dev` as the registry in the credHelpers key). Let's say `<gcp-project>` is `banana` and `<repository>` is `dev` from here on...
However, if I just use the docker client directly to do the publish of my images, e.g. `docker push europe-docker.pkg.dev/banana/dev/my-docker-image:my-tag`, this works just fine.
BUT, if I first run `docker login https://europe-docker.pkg.dev` (although I get the following warning...)

```
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/circleci/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
```

then `./pants publish ::` works just fine.
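(For reference, a sketch of the CI sequence described above with that `docker login` workaround in place. The key-file flag and variable name are assumptions about how the service-account key is supplied in this pipeline:)

```sh
# 1. Authenticate the service account (key location via an assumed env var).
gcloud auth activate-service-account --key-file="$GCP_SERVICE_ACCOUNT_KEY_FILE"

# 2. Configure Docker to route the registry through the gcloud credential helper.
gcloud auth configure-docker europe-docker.pkg.dev

# Workaround: an explicit docker login stores a credential directly in
# ~/.docker/config.json, after which `./pants publish ::` succeeds.
docker login https://europe-docker.pkg.dev

./pants package ::
./pants publish ::
```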
Of interest... my pants.toml contains:

```toml
[docker]
build_args = ["GCP_ARTIFACTS_REGISTRY"]
tools = ["gcloud"] # docker-credential-gcloud and gcloud are in the same directory

[docker.registries.banana]
address = "europe-docker.pkg.dev/banana"
default = true
```

My `docker_image` targets have `repository="{build_args.GCP_ARTIFACTS_REGISTRY}"`, where `GCP_ARTIFACTS_REGISTRY` has the same value as `<repository>` (basically `dev`) and `[docker].build_args` contains `GCP_ARTIFACTS_REGISTRY`.
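(Putting those pieces together, a sketch of how the final image ref is composed; the target and tag names here are placeholders:)

```python
# With [docker.registries.banana].address = "europe-docker.pkg.dev/banana" as the
# default registry and GCP_ARTIFACTS_REGISTRY=dev, a target roughly like this
docker_image(
    name="my-docker-image",
    repository="{build_args.GCP_ARTIFACTS_REGISTRY}/my-docker-image",
    image_tags=["my-tag"],
)
# gets tagged as europe-docker.pkg.dev/banana/dev/my-docker-image:my-tag,
# which matches what `docker image ls` shows after `./pants package ::`.
```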
f
Have you tried setting `[subprocess-environment].env_vars` to set the `PATH` to include the location of `docker-credential-gcloud`?
r
I haven't, and I will try that, but I note that I'm not seeing any errors about not being able to find `docker-credential-gcloud`.
f
Hmm then my suggestion is probably irrelevant.
Maybe the GCP-related env vars need to be passed through?
There is also a `[docker].env_vars` option, which may only apply to Docker invocations. https://www.pantsbuild.org/docs/docker#docker-configuration
c
Is gcloud a script that in turn uses other scripts or binaries? Those would need to be listed as well, if so. The subprocess env Tom mentioned does not apply to the docker backend; use `[docker].env_vars` for that.
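(As a sketch of what passing the GCP-related variables through might look like; exactly which variables matter depends on how gcloud is authenticated in this pipeline, so the names below are assumptions:)

```toml
[docker]
# Hypothetical: pass selected GCP-related variables through to docker invocations.
env_vars = [
  "GOOGLE_APPLICATION_CREDENTIALS",
  "CLOUDSDK_CONFIG",
]
```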
r
Both `gcloud` and `docker-credential-gcloud` are just shell scripts, both located in `~/google-cloud-sdk/bin`, and I get the same result whether pants.toml says `tools = ["gcloud"]` or `tools = ["docker-credential-gcloud"]` - not sure what other binaries I might have to reference... should I be expecting some sort of 'could not find x' message in that case, though?
The PATH of the shell that I run Pants from contains `~/google-cloud-sdk/bin`.
If I run with `-ldebug` I observe...

```
[Warning] One or more build-args [BRANCH GCP_ARTIFACTS_REGISTRY SHORT_GIT_HASH] were not consumed
Successfully built b3b3a866dce5
Successfully tagged europe-docker.pkg.dev/banana/dev/my-image:my-branch
Successfully tagged europe-docker.pkg.dev/banana/dev/my-image:a-git-hash
```
but I have in a BUILD file...

```python
docker_image(
    image_tags=[
        "{build_args.BRANCH}",
        "{build_args.SHORT_GIT_HASH}",
    ],
    repository="{build_args.GCP_ARTIFACTS_REGISTRY}/my-image",
)
```
c
That warning is from Docker, so it wouldn't know about your use of them in the Pants BUILD file.
👍 1
As an attempt to find out if this issue is indeed tools-related, try adding PATH to the docker env_vars.
It doesn't help that you have multiple scripts in the same folder; Pants will single out the ones you point to, so it doesn't use the containing dir as PATH right off.
r
adding PATH to env_vars doesn't resolve this issue
> It doesn't help that you have multiple scripts in the same folder...
Not sure I understand here; unfortunately that's how the GCP SDK installs itself. At any rate, I have just a single entry in `tools`, and it doesn't appear to matter whether it is `"gcloud"` or `"docker-credential-gcloud"`.
OK, hang on - the combination of having `PATH` in env_vars and `tools = ["gcloud"]` might have gotten me moving 🤔 (I had been experimenting with `tools = ["docker-credential-gcloud"]`)
👀 1
YES. Can confirm that fixes it. So is the `[docker] tools = ["gcloud"]` entry still required, or is `[docker] env_vars = ["PATH"]` sufficient?
I can test this myself, of course.
Yeah, it appears that I can get away with just `[docker] env_vars = [ ..., "PATH", ]`
👍 1
c
It is not recommended to have PATH be part of your env vars, however, as that makes your cache keys brittle. It would be good to find out which other tools are being used, and list them in the `[docker].tools` section of your config instead.
Good progress!
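(A rough sketch of that recommendation: `docker-credential-gcloud` is a wrapper script that shells out to other utilities and to a Python interpreter, so each of those would need to appear in the tools list. The exact set below is an assumption; check what the gcloud wrapper scripts actually invoke on your machine.)

```toml
[docker]
# Hypothetical tools list: the credential helper itself plus the helpers the
# gcloud wrapper scripts are assumed to call -- verify against your SDK install.
tools = [
  "docker-credential-gcloud",
  "gcloud",
  "dirname",
  "readlink",
  "python3",
]
```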
> It doesn't help that you have multiple scripts in the same folder; Pants will single out the ones you point to, so it doesn't use the containing dir as PATH right off.
What I was trying to say was this: if you list `gcloud` in the `tools` list, the Pants execution sandbox will be set up so that PATH has `gcloud` on it, but any other files from the same directory as gcloud will not be on the PATH. This is to keep the sandbox as hermetically small as possible, and not leak unspecified tools.
It may help to run with `--no-process-cleanup` and then execute the `__run.sh` script from the sandbox invoking docker, as you may see output about missing binaries that way that you otherwise don't.
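(A sketch of that debugging workflow, using only the flag and script names mentioned above; the `<hash>` placeholder is whatever Pants actually prints:)

```sh
# Keep execution sandboxes around instead of deleting them after each run.
./pants --no-process-cleanup publish ::

# Look in the log for lines like:
#   Preserving local process execution dir /tmp/process-execution<hash> for ...
# then re-run the captured invocation from inside that directory:
cd /tmp/process-execution<hash>   # substitute the real path from the log
./__run.sh
```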
r
OK, I've tried this... unfortunately I'm not seeing a `Preserving local process execution dir /tmp/process-execution<hash> for ...` log line for the publish step... I see ones immediately prior to the log output tied to building the Docker images, and if I change into that directory and run `__run.sh` then I get the image built again, but it doesn't attempt to publish.
c
Right. I forgot that the publish process is run as an interactive process in the foreground, i.e. no sandbox (but still an isolated env etc.).
@witty-crayon-22786 any hints for debugging interactive processes further..?
w
sorry i missed this!
the `run` goal has something custom for this: https://www.pantsbuild.org/docs/reference-run#section-cleanup … but other consumers don't currently.
generalizing it (…or having it use the global setting…? i’m not sure why it doesn’t) would be good… but in the meantime, attaching a debugger and having it pause in the right spot might be necessary =/