# development
b
Let's say someone added `RustPython` to our engine's deps. Now let's say someone wanted to rewrite our Python dep parser script in Rust. What do we think would yield the best perf in the average case?
1. An intrinsic which leverages the `local_cache` to cache Digest -> results (doesn't support remote cache)
2. Build a full executable and use that
3. An intrinsic without caching
I'm guessing 1 is enough, because the code is likely faster than the latency of the network + the overhead of the remote download code.
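For context, a minimal sketch of the parsing work under discussion, using the stdlib `ast` module. The names are illustrative, and the real Pants script also records weak imports and asset paths; the digest-keyed wrapper mirrors option 1's local `Digest -> results` cache:
```python
import ast
import hashlib

def parse_imports(source: str) -> dict[str, int]:
    """Map each imported symbol to the line it first appears on (simplified)."""
    imports: dict[str, int] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imports.setdefault(alias.name, node.lineno)
        elif isinstance(node, ast.ImportFrom) and node.module is not None:
            for alias in node.names:
                imports.setdefault(f"{node.module}.{alias.name}", node.lineno)
    return imports

_cache: dict[str, dict[str, int]] = {}

def parse_imports_cached(source: str) -> dict[str, int]:
    """Memoize on a content digest, like a local-only Digest -> results cache."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = parse_imports(source)
    return _cache[key]
```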
CC @witty-crayon-22786
(and FYI @happy-kitchen-89482)
w
If it is fast enough, which I suspect that it could be, then I don't think you have to worry about caching it at all.
b
Oh right, I remember there being 3 options. I forgot that one
w
I also suspect that's the easiest distribution model
b
Yeah, building an exe seems time-consuming and space-wasteful
w
Well, it's 1 but without caching unless it looks necessary
w
How hypothetical is this… hmmmm…?
b
👀
w
I was inspired by Charlie Marsh’s new company post - all Rust tooling all the time
b
Hehe I'll be seeing him this weekend 🙂
w
Ask him if he’ll lend 5% of his brainpower to fixing the incredibly slow JS ecosystem. Prettier is great, but man - I could hand format faster sometimes
I'll see y'all after PyCon
h
Keep in mind that dep inference supports custom python code (e.g., for extracting django deps) so that would need to be considered.
I think there are other ways to speed up dep inference, such as batching, that might be better in this case
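To make the batching idea concrete, a rough sketch (reusing `parse_imports` from the sketch above): one invocation handles a whole batch of files, so process startup is amortized across the batch instead of paid per file:
```python
from pathlib import Path

def parse_batch(paths: list[str]) -> dict[str, dict[str, int]]:
    # One parser invocation covers many files; startup cost is paid once
    # per batch rather than once per file.
    return {path: parse_imports(Path(path).read_text()) for path in paths}
```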
b
Yeah that's why I CCd you
I think that can remain. But the generic inferrer would now be in Rust, outside of the process
There are certainly other ways to speed it up, too
This also gets us using RustPython, which we could then use to speed up rule visiting 😊
CC @curved-television-6568 who had the same idea
h
I don't think it's ideal to have two mechanisms for dep inference
Or dep extraction, to be precise
b
No not ideal. But really really fast. 😈
w
There are already multiple implementations of inference: it's generic in that regard.
b
To be fair, the existing customization points to the dep inference script could've just been unions to start with
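A minimal sketch of what that could look like; `union` and `UnionRule` are the real Pants plugin APIs, but the class names here are hypothetical:
```python
from pants.engine.unions import UnionRule, union

@union
class PythonDepExtractorRequest:
    """Hypothetical union base: plugins register custom dep extractors."""

class DjangoDepExtractorRequest(PythonDepExtractorRequest):
    """Hypothetical member covering the Django-specific extraction."""

def rules():
    return [UnionRule(PythonDepExtractorRequest, DjangoDepExtractorRequest)]
```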
h
I think rewriting the custom stuff in rust is not that prohibitive either
w
One thing to note though is that it would put a cap on supported python versions, unless you kept the old implementation as a fallback
h
Oh hmmm, true
I still think we want to invest in smarter batching and splitting
we happen to have a rust hack for dep inference, but that won't help with test running
b
Certainly!
And good point, the old stuff needs to support Python 2. But we can conditionally use that
Until we drop Py2 support
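A sketch of what that conditional fallback could look like; every name here is hypothetical, and the real version floor depends on what RustPython's parser actually supports:
```python
# Hypothetical dispatch between the fast Rust-backed parser and the
# legacy CPython script that still understands Python 2 syntax.
RUST_PARSER_MIN_VERSION = (3, 7)  # assumption, not a confirmed floor

def pick_parser(target_version: tuple[int, int]) -> str:
    if target_version >= RUST_PARSER_MIN_VERSION:
        return "rust_intrinsic"
    return "legacy_python_script"
```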
I'd also feel less bad if our plugins weren't baked into Pants itself. Then a Python-specific binary wouldn't be baked in if you didn't need it
OK, I'll see about taking this to a PoC at the airport today
```
08:34:56.08 [WARN] ParsedPythonDependencies(imports=ParsedPythonImports({'__future__.annotations': ParsedPythonImportInfo(lineno=4, weak=False), 'pants.util.frozendict.FrozenDict': ParsedPythonImportInfo(lineno=9, weak=False), 'dataclasses.dataclass': ParsedPythonImportInfo(lineno=6, weak=False), 'pants.engine.collection.DeduplicatedCollection': ParsedPythonImportInfo(lineno=8, weak=False)}), assets=ParsedPythonAssetPaths([]))
```