# development
s
Does anyone have tips for debugging rule-graph errors coming from a `RuleRunner` in a test? I’m trying to revamp the tests for `test` as part of the batching impl, and I think I’ve reached the limit of my guess-and-check strategy 🤯
c
Are you using the `@logging` decorator from the rule_runner.py testutil? (or `@logger`, never remember)
s
I am not 👀
c
decorate your test method with it, and set level to debug or something.. that’s usually very helpful 🙂
s
Will give it a try, thank you!!
c
np 🙂
s
sadly not very helpful 😕 debug shows basically nothing, trace shows effectively the same error that gets printed to console
some of the errors are about intrinsics that I’d expect to Just Work, e.g.:

```
No installed rules return the type Console, and it was not provided by potential callers of @goal_rule(pants.core.goals.test:759:run_tests(Console, TestSubsystem, DebugAdapterSubsystem, Workspace, UnionMembership, DistDir, RunId, ChosenLocalEnvironmentName) -> Test
```
I see in the rule-runner code that it should be building and injecting a `Console` instance, so I’m not sure what’s going wrong
f
how are you invoking the rules under test? `.request` or `.run_goal_rule`?
s
`run_goal_rule`
I’m also not passing any `QueryRule`s into the rule-runner - pretty sure that’s a part of the problem, but I’m not sure how to know which `QueryRule`s are needed
f
From my experience, `QueryRule`s should only be needed to match the types passed to `.request` calls.
Rule graph debugging is still a problem. I have usually just added rules to the `RuleRunner` until the error goes away. (To the point that you will see some of the Go backend tests add the entire Go backend to the test. 😞 )
w
@sparse-lifeguard-95737: if you’re still having trouble with it in a little bit, feel free to push the branch and ping me and I can take a look. sorry about that.
s
I gave up on the more ambitious thing and got something working with mocks