high-yak-85899
01/20/2022, 5:33 PM
I have a `files` target so that the software distribution is available in the sandbox. I've checked this by running the `dependencies` goal and by running `diff -r` on what was passed to the sandbox versus what I have on disk, and only saw differences in `__pycache__` folders. So I think my question is more just looking for general guidance on what might be different about the two execution environments. It's a prebuilt distribution, so it should have everything it needs, but it does rely on some system libraries (e.g. `libpng12.so`). Is it possible that these aren't discoverable when running in the sandbox environment?
hundreds-father-404
01/20/2022, 5:39 PM--no-process-cleanup
flag: https://www.pantsbuild.org/docs/troubleshooting#debug-tip-inspect-the-sandbox-with---no-process-cleanup. As mentioned there, there is a __run.sh
script that emulates what Pants is doing under-the-hood, including stripping env vars
for general guidance on what might be different about the two execution environmentThe most obvious way Pants is different is that it tries to be hermetic when running things, such as stripping env vars. It might be helpful to compare something like the output of
env
in bash to the __run.sh
script
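A rough way to make that comparison, assuming the real `__run.sh` sets its variables via `export` lines at the top (the script below is a mock stand-in, and all paths here are illustrative):

```shell
# Mock stand-in for the __run.sh Pants writes into the sandbox.
cat > /tmp/mock__run.sh <<'EOF'
#!/bin/sh
export PATH=/usr/bin LANG=en_US.UTF-8
exec /usr/bin/python3.8 app.py
EOF

# The vars the sandboxed process would see, one per line, sorted:
grep '^export' /tmp/mock__run.sh | tr ' ' '\n' | grep '=' | sort > /tmp/sandbox-env.txt

# The vars your interactive shell sees:
env | sort > /tmp/interactive-env.txt

# Lines only in the interactive list are vars your code may silently rely on.
comm -23 /tmp/interactive-env.txt /tmp/sandbox-env.txt | head
```

Anything `comm` prints is set in your interactive shell but absent from the sandboxed process, which makes a good shortlist of suspects.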
--
Is the segfault deterministic?
enough-analyst-54434
01/20/2022, 5:40 PM
09:37:50.55 [INFO] Preserving local process execution dir /tmp/process-executionD1Nekz for "[some description of the action ...]"
You can then cd into the sandbox, here /tmp/process-executionD1Nekz, and use the `__run.sh` script to emulate how Pants runs the process.
In general you should find the issue is missing files - as you were getting towards - or missing env vars.
high-yak-85899
01/20/2022, 5:43 PM
I knew about `--no-process-cleanup` but wasn't familiar with `__run.sh`.
enough-analyst-54434
01/20/2022, 5:49 PM
One caveat about `__run.sh` in its emulation of how the Rust core engine actually invokes subprocesses: the script only shows environment variables set / passed through in the positive sense - it does not actively unset all others. To simulate failure you'd need to replace the `export ...` line at the top with `env -i ...`, I think.
Or you could just invoke `__run.sh` that way: `env -i ... ./__run.sh`
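For reference, a minimal illustration of what `env -i` does (generic shell behavior, nothing Pants-specific):

```shell
# env -i starts the child with an empty environment; only variables passed
# on the env command line survive, which mimics a stripped hermetic env.
export SECRET=hunter2
env -i PATH=/usr/bin /bin/sh -c 'echo "SECRET=[$SECRET] PATH=[$PATH]"'
# prints: SECRET=[] PATH=[/usr/bin]
```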
high-yak-85899
01/20/2022, 5:56 PM
/usr/bin/python3.8
7ec9e5c95ffcb4f4bbc26579e9c026e4f342da8ef17cfe49d5237bc1361d7335
enough-analyst-54434
01/20/2022, 6:08 PM
high-yak-85899
01/20/2022, 6:52 PM
• Searching for `bash` on `PATH` and testing it
• Searching for `python` and `python3` and testing them
• Finding an interpreter for CPython
• Determining imports for my script
and that's it. Is there more I should be seeing when running a `pex_binary`, or is that it?
hundreds-father-404
01/20/2022, 6:54 PM
high-yak-85899
01/20/2022, 6:56 PM
I'm using `run` to execute my module. There's a little bit of path stuff before this happens, but here's the snippet that causes things to break:
sys.path.insert(1, str(_GMAT_BIN))
import gmatpy as gmat
gmat.Setup(str(_GMAT_STARTUP))
script = path_util.get_package_root() / 'astranis/utils/hifi_propagator.script'
print('CHECKING SCRIPT')
# SEGFAULT happens at this call
print(gmat.LoadScript(str(script)))
print('LOADED')
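One generic way to get more signal out of a crash like the one above (a standard-library Python facility, not GMAT- or Pants-specific) is to enable `faulthandler`, which prints the Python traceback when the process takes a fatal signal such as SIGSEGV:

```shell
# -X faulthandler enables the handler for the whole run; on a segfault the
# interpreter dumps the Python stack, showing which call entered native code.
python3 -X faulthandler -c 'import faulthandler; print(faulthandler.is_enabled())'
# prints: True
```

Running the failing module with `python3 -X faulthandler ...` (or setting `PYTHONFAULTHANDLER=1` in the environment) would show exactly where execution crossed into the native extension.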
`gmat.Setup` works happily and then trying to load a script fails. I don't want to get too into the weeds on GMAT-specific debugging, so as not to burden y'all with that.
hundreds-father-404
01/20/2022, 7:02 PM
`--no-process-cleanup` was a red herring because the `run` goal runs interactively in your repository, rather than in a temporary directory. So you won't ever see the process to inspect. Instead, you can use `./pants run --no-cleanup path/to/file.py`: https://www.pantsbuild.org/docs/reference-run#section-cleanup. The PEX will be saved to the `.pants.d` folder iirc, like `.pants.d/tmplpd86t9k/`
One thing you could try is `execution_mode='venv'` on the `pex_binary` target. See https://www.pantsbuild.org/docs/reference-pex_binary#codeexecution_modecode
enough-analyst-54434
01/20/2022, 7:05 PM
Note that applies to `./pants package` and not `./pants run` - we do torturous things in `./pants run` IIRC. So that leads to another test: @high-yak-85899 does `./pants package ...` and then running the PEX produced in `dist/...` work? If so, that isolates it to our `run` chicanery.
high-yak-85899
01/20/2022, 7:25 PM
Since these are `files`, they aren't bundled with the pex, and that causes other file discovery issues when executing the built pex.
I was depending on the `files` target and then finding it within the sandbox. When I point to it with an absolute path where it actually lives on disk, things run without errors happily.
I'll probably just keep it in /home/<user>/tools and point to it that way with an absolute path.
Unless something is missing from the `files` target, which would be surprising.
enough-analyst-54434
01/20/2022, 7:47 PM
Have you tried `resources` instead?
high-yak-85899
01/20/2022, 7:52 PM
enough-analyst-54434
01/20/2022, 8:07 PM
> The above strategy works whether it's packaged first and run or just run directly with the run goal.
@high-yak-85899 does that mean switching to `resources` solved your issue?
high-yak-85899
01/20/2022, 8:10 PM
It works whether I `run` it or `package` it and execute the pex. Swapping between files and resources doesn't seem to change things.
enough-analyst-54434
01/20/2022, 8:13 PM
high-yak-85899
01/20/2022, 8:15 PM
For now, I'll probably just keep it in ~/tools (we use ~/tools similarly for some other purposes) and not attempt to package it up.
Unless `files` isn't grabbing everything even though it seems like it is.
enough-analyst-54434
01/20/2022, 8:16 PM
$ find . -name "*.py"
./userfunctions/python/AttitudeTypes.py
./userfunctions/python/SimpleSockets.py
./userfunctions/python/AttitudeInterface.py
./userfunctions/python/StringFunctions.py
./userfunctions/python/socket-test-drivers/AttitudeTypes.py
./userfunctions/python/socket-test-drivers/SimpleSockets.py
./userfunctions/python/socket-test-drivers/gmat-sync-mquat.py
./userfunctions/python/socket-test-drivers/AttitudeInterface.py
./userfunctions/python/socket-test-drivers/Test-mjd.py
./userfunctions/python/socket-test-drivers/gmat-sync-mjd.py
./userfunctions/python/socket-test-drivers/Cosmos180-mjd.py
./userfunctions/python/MathFunctions.py
./userfunctions/python/ArrayFunctions.py
./bin/gmatpy/gmat_py.py
./bin/gmatpy/navigation_py.py
./bin/gmatpy/__init__.py
./bin/gmatpy/station_py.py
./api/Ex_R2020a_CompleteForceModel.py
./api/Ex_R2020a_RangeMeasurement.py
./api/BuildApiStartupFile.py
./api/Ex_R2020a_FindTheMoon.py
./api/Ex_R2020a_BasicFM.py
./api/Ex_R2020a_BasicForceModel.py
./api/load_gmat.py
./api/Ex_R2020a_PropagationLoop.py
./api/Ex_R2020a_PropagationStep.py
./utilities/python/GMATDataFileManager.py
./utilities/python/ochReader.py
./utilities/python/missionInterface.py
./utilities/python/testDriver.py
./utilities/python/segment.py
./utilities/python/ochWriter.py
high-yak-85899
01/20/2022, 8:22 PM
bin/gmatpy/gmat_py.py
enough-analyst-54434
01/20/2022, 8:37 PM
If the import you need is `from gmat_py import gmat_py`, then this should work:
Relevant repo subtree:
---
3rdparty/GMAT/R2020a/bin
    BUILD
    gmat_py/__init__.py
    gmat_py/gmat_py.py
    gmat_py/_gmat_py.so

3rdparty/GMAT/R2020a/BUILD:
---
resources(
    name="so",
    sources=["**/*.so"],
)

python_sources(
    sources=["**/*.py"],
    dependencies=[
        ":so",
    ],
)

pants.toml:
---
[source]
root_patterns.add = ["/3rdparty/GMAT/R2020a/bin"]
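For context on why the `root_patterns` entry matters: making `.../bin` a source root effectively puts that directory on the import path, so `gmat_py` imports as a top-level package. A plain-Python sketch of the same mechanism, using a throwaway temp directory rather than the real repo layout:

```shell
# Emulate a "source root": a directory on the import path whose subpackages
# import by name relative to that root.
root=$(mktemp -d)
mkdir -p "$root/gmat_py"
printf 'VALUE = 42\n' > "$root/gmat_py/__init__.py"
PYTHONPATH="$root" python3 -c 'import gmat_py; print(gmat_py.VALUE)'
# prints: 42
```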
Alternatively, PEX supports `PEX_EXTRA_SYS_PATH=dir1:dir2` for this sort of thing. Is that useful?
There is also `PEX_INHERIT_PATH=prefer|fallback` if the provided deps are expected to be found in the site-packages of the system interpreter running the PEX. See `pex --help-variables` for docs on this or see: https://pex.readthedocs.io/en/v2.1.63/api/vars.html
high-yak-85899
01/20/2022, 9:08 PM
There are also `.so` files (and `.so.R2020a` files) elsewhere in that directory that need to be loaded in. What I ended up with was:
Relevant repo subtree:
---
3rdparty/GMAT
    BUILD
3rdparty/GMAT/R2020a/bin
    BUILD
    gmat_py/__init__.py
    gmat_py/gmat_py.py
    gmat_py/_gmat_py.so

3rdparty/GMAT/BUILD:
---
resources(
    name="so",
    sources=[
        "**/*.so.*",
        "**/*.so",
    ],
)

3rdparty/GMAT/R2020a/BUILD:
---
python_sources(
    sources=["**/*.py"],
    dependencies=[
        "//3rdparty/GMAT:so",
    ],
)

pants.toml:
---
[source]
root_patterns.add = ["/3rdparty/GMAT/R2020a/bin"]
This imports `from gmatpy import gmat_py as gmat` just fine, but when I get to the `LoadScript` call, I'm back to a segfault. So, for now, I think I'll stick with storing it on the system for the few cases where this is needed and move on to other migration issues we've had.
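A closing note for readers debugging similar native-extension segfaults: it is worth checking whether the extension's own shared-library dependencies resolve at all. `ldd path/to/_gmat_py.so` lists them, and attempting a `dlopen` via `ctypes` surfaces the loader error directly. Illustrated here with a deliberately bogus library name rather than the real extension:

```shell
# dlopen-ing a shared object directly surfaces missing-dependency errors
# that can otherwise appear only as import failures or crashes.
# The name below is intentionally nonexistent to show the failure mode;
# point this at the real extension (e.g. _gmat_py.so) in practice.
python3 - <<'EOF'
import ctypes
try:
    ctypes.CDLL("libno-such-library-xyz.so")
    print("resolved")
except OSError:
    print("missing shared library")
EOF
```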