# general
k
Hi All. I'm trying to think of the best way to implement a workflow that builds and deploys a new image, but only if the tests passed and the run was not memoized. For me that means it's new code that passed real test runs, and that's the only time I want to deploy it (failed tests obviously won't deploy, and neither will memoized successes, since those already exist in the deployments). What is the best way to go about doing this using Pants? I'm assuming there's a cleaner way than parsing Pants' and pytest's stdout? Thanks!
f
You can have Pants write a junit.xml for the test run with the --test-xml-dir option.
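A rough sketch of consuming those reports in CI instead of parsing stdout (the dist/test-results path is just a placeholder, and the option spelling is worth double-checking with ./pants help test):
```
# Write one junit XML report per test target, then inspect the reports
# rather than parsing Pants/pytest stdout. The `|| true` lets us keep
# going and read the reports even when some tests fail.
./pants test --xml-dir=dist/test-results :: || true   # the [test].xml_dir / --test-xml-dir option

# A report with a nonzero failure or error count means that suite failed.
if grep -rlE '(failures|errors)="[1-9]' dist/test-results; then
  echo "some test suites failed" >&2
  exit 1
fi
echo "all reported suites passed"
```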
h
I have some ideas here, but I have some questions first, to be sure I understand the setup: 1) Are you already using Pants, or are you asking more generally? 2) What is your unit of deployment? Sounds like it's a Docker image? Does that contain Python code? How many different ones do you have in your repo? 3) Is there a consistent mapping from tests to units of deployment?
I ask because typically you might want to separate the two different concerns of A) figuring out what might need to be deployed based on changes and B) testing those things
You can achieve A) using Pants's project introspection goals, for example
So let's say you want to compare the current git state to some deploy branch's state. You can do something like:
```
./pants --changed-since=deploy_branch list | xargs ./pants filter --target-type=docker_image
```
That will list all the docker images affected by changes between those two git states (with the default settings, that means images whose own files changed)
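And if you also want images that are only affected through shared library code they depend on, something like this might work (treat the flag name as an assumption and check ./pants help changed, since it varies by Pants version):
```
# Assumption: your Pants version supports --changed-dependees
# (newer releases rename it to --changed-dependents).
./pants --changed-since=deploy_branch --changed-dependees=transitive list \
  | xargs ./pants filter --target-type=docker_image
```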
Selectively figuring out which images are good to deploy based on test outcomes is trickier to do in a single Pants run. If any test fails then the Pants run fails, so you'd never get to the packaging goal.
But this can probably be composed out of multiple Pants runs
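For example, a rough shape of that composition as one CI script (deploy_branch and the final package/deploy step are placeholders for whatever you actually use):
```
#!/usr/bin/env bash
set -euo pipefail

# Run 1: list the docker_image targets the diff touches.
IMAGES=$(./pants --changed-since=deploy_branch list \
  | xargs ./pants filter --target-type=docker_image)

if [ -z "$IMAGES" ]; then
  echo "No images affected, nothing to deploy."
  exit 0
fi

# Run 2: test everything the diff touches. A failing test aborts here,
# so we never reach the packaging step.
./pants --changed-since=deploy_branch test

# Run 3: only reached when the tests passed - build the affected images,
# then hand the same list to whatever does the actual deploy.
echo "$IMAGES" | xargs ./pants package
```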
Or you can write a plugin that consumes test results that are allowed to fail, and analyzes them
There are a few ways to go here
k
@happy-kitchen-89482 thanks for the reply benjy 🙂 so, to answer your questions:
1. We are using Pants, or at least starting to, and trying to figure out the best way to go about it.
2. It is indeed a Docker image, containing an unpacked zip archive created from the python_awslambda Pants target.
3. Well, our monorepo has a bunch of Python projects, each containing a Dockerfile. So every directory is a deployable unit.
Our thinking was: we want to run one big Pants run on all the projects, rely on Pants caching to skip testing unchanged projects, and if something both changed and passed the tests, we want to redeploy it.
Also, we're looking into Argo Workflows as a CI engine. So I'm trying to think how to fuse these technologies together in a sensible way. That is, if I have an Argo step that runs tests and is successful, I want to somehow pass to the next Argo step the list of containers that need to be rebuilt and deployed.
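Something like this is what I had in mind for the test step; the file path and the wiring into an Argo output parameter are just placeholders:
```
# Inside the Argo test step, after the tests have passed: write the
# affected images to a file and expose that file as an Argo output
# parameter (outputs.parameters[].valueFrom.path), so the next step
# can read it and run the build/deploy per image.
./pants --changed-since=deploy_branch list \
  | xargs ./pants filter --target-type=docker_image \
  > /tmp/images_to_deploy.txt
```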
e
Just to flesh this out a bit more - I have no clue how to solve this myself - @kind-hydrogen-975 do you have any non-deployable projects that others depend on for shared common code? If so, do you want a green tested change in that kind of project to cause a deploy of all projects that depend on its production code?