# general
h
Hi community, I have a dist owner question. I have a mono repo: one folder is for the config of different libs, and the other folders are apps. The structure is like this:
project/config/redis/
project/config/cassandra/
project/app1
project/app2
app1 and app2 both need these shared configs for redis and cassandra. When I distribute app1 or app2, I want to claim these configs as resources. How can I do it? I tried putting a BUILD file under project/config, defining a resource target in it, and then including it in the dist sections of app1's and app2's BUILD files. But Pants complains there is no owner for project/config.
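Roughly, my attempt looks like this (simplified; the exact globs and target names are approximations):

# project/config/BUILD
resources(
    name="resource",
    sources=["**/*"],
)

# project/app1/BUILD
python_distribution(
    name="app1",
    dependencies=["project/config:resource"],  # plus app1's own sources
    provides=python_artifact(name="app1", version="0.1.0"),
)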
c
What do you mean by complains? Is there an error message you could paste?
h
Here is the message. Basically it asks me to put a python_distribution in the file project/config/BUILD. Obviously, I do not want to publish the config files on their own as a Python package.
10:32:19.30 [ERROR] 1 Exception encountered:

Engine traceback:
  in `package` goal

NoOwnerError: No python_distribution target found to own project/config/redis/integration/config.json:../../resource. Note that the owner must be in or above the owned target's directory, and must depend on it (directly or indirectly). See <https://www.pantsbuild.org/2.27/docs/python/overview/building-distributions> for how python_sources targets are mapped to distributions.
c
The python_distribution target is meant to build Python wheels for use outside of the monorepo. In that use case each file needs to belong to exactly one wheel, just like one would expect from a package on PyPI. If the resources in question are covered by a python_distribution, then the generated metadata for app1 should depend on it. The more typical way to package up an entire application and deploy it would be to use pex and/or docker.
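If you really do want a wheel per app, one way to satisfy that ownership rule is to define the distribution in a BUILD file at or above the directory containing both the app and the configs, e.g. project/BUILD. A sketch (the target and artifact names here are made up, not from your repo):

# project/BUILD
python_distribution(
    name="app1-wheel",
    dependencies=[
        "project/app1",            # the app's python_sources
        "project/config:configs",  # the shared resources target
    ],
    provides=python_artifact(
        name="app1",
        version="0.1.0",
    ),
)

Since this target sits above both project/app1 and project/config and depends on them, it is allowed to own the resources, and the built wheel will include them.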
h
So in a mono repo, it is common to have some common config files shared by multiple apps. Is there any way to do it and package these config files into the apps' wheel or pex? I do not think we should distribute these config files alone (they are not Python files), right? I just want multiple apps to include them as resources.
c
If you build a pex for each app, all of the transitively depended-on resources will be included. Multiple PEXes can include the same resources -- or Python sources -- without issue.
h
I see. Are you saying pex can do this, but python_distribution cannot include the shared resources?
b
I know for a fact that the files in resources targets that are dependencies of the pex_binary target will get packaged into the .pex output. Example:
# project/BUILD
resources(
    name="configs",
    sources=["config/**/*.json"],  # update to match your config files
)

pex_binary(
    name="app1",
    entry_point="app1/main.py",
    dependencies=[":configs"],
)

pex_binary(
    name="app2",
    entry_point="app2/main.py",
    dependencies=[":configs"],
)
something like this, although usually you'd have multiple BUILD files. If you do
pants package ::
and then
unzip -l dist/project.app1/app1.pex
you'll see the config files in there
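At runtime the apps can read those packaged files with importlib.resources. A minimal sketch, assuming project/ is a source root so the configs are importable under a config.redis package (adjust to your actual source roots, and note you may need __init__.py files under config/; the file name and function here are just illustrative):

# somewhere in app1's code
import importlib.resources
import json

def load_redis_config() -> dict:
    # "config.redis" mirrors project/config/redis/ relative to the source root
    text = importlib.resources.files("config.redis").joinpath("config.json").read_text()
    return json.loads(text)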
h
Ah, cool. You defined each app's pex in the root BUILD file. I had not thought about that. Currently I just put the pex or python_distribution in each app folder's BUILD file. Good to know!
b
Yeah, I would actually make separate BUILD files in the app and config folders; I was just keeping the example simple.
h
But if I separate the BUILD files into each app's folder, it does not work.
I mean for python_distribution; I have not tried pex.
BTW, a pex does not come with a version, right? How do you usually version your apps?
b
Yeah, I can only speak for the pex_binary target. Is that not an option? For portable Python apps, a .pex file usually makes a lot of sense.
h
It is good. But what about versioning?
b
A pex file is like a .jar or .exe. It's up to you how you want to version it. And that depends on how you're storing/distributing it. For my use cases, I have a CI job that runs pants package to build the pex files, and then that CI job adds the git commit sha (as a sort of version) before uploading them. I'm sure it's simpler if you're packaging it into a docker image or uploading to something like Artifactory.
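For example, a small post-package step along these lines can do the stamping (simplified and illustrative, not my actual script):

# ci/stamp_version.py -- hypothetical helper, run after `pants package`
import shutil
import subprocess
from pathlib import Path

# Use the short git commit sha as a pseudo-version.
sha = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    check=True, capture_output=True, text=True,
).stdout.strip()

# Copy each built pex to a sha-stamped filename, ready to upload.
for pex in Path("dist").rglob("*.pex"):
    shutil.copy2(pex, pex.with_name(f"{pex.stem}-{sha}.pex"))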
Some people use git tags with semantic versions
Pex files also have their own hash (it's in the PEX-INFO file, but can be extracted with pex tools)
h
Got it. Thank you!
One concern with the version in this approach is that the published artifacts are mutable. If we use semantic versioning and upload a docker image with the tag 1.2.3, that released docker image can be overwritten, right? The benefit of a PyPI package is that once you upload a version, it cannot be overwritten. Any solution for this?
c
This would be specific to wherever you are storing the artifacts. Some PyPI-style package registries let you mutate releases, some docker registries support immutable tags, etc.