# development
a
i would also potentially try `RUST_BACKTRACE=full`. will try to repro now
e
they are all failing with
thread 'remote::tests::remote_workunits_are_stored' panicked at 'called `Result::unwrap()` on an `Err` value: Fatal("Error fetching stdout digest (Digest(Fingerprint<693d8db7b05e99c6b7a7c0616456039d89c555029026936248085193559a0b5d>, 16)): \"Error making env for store at \\\"/var/folders/cf/z3ktw6dn467gvm24fgx8ft6c0000gp/T/.tmpBlydey/files/0\\\": No space left on device\"")
a
got it
thank you
i'm updating git repositories right now, might be a while
e
thanks
FYI i blew the /var/folders/cf/blahblah directory away but the issue persists.
before I blew it away the size was 3.4GB, which should be below the 5GB that the caching command runner creates itself with
👍 1
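(A minimal sketch of how that size could be checked before deleting, using the parent temp dir from the error above; the exact `/var/folders/...` path varies per machine and per boot:)

```sh
# report the total size of the per-user temp tree the store env was created under
du -sh /var/folders/cf/z3ktw6dn467gvm24fgx8ft6c0000gp/T
```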
a
got it
off of a master i just pulled, i am not reproducing that error when running with the same command except with `../../../` at the front, from within the `src/rust/engine` dir, with `MODE=debug` and `RUST_BACKTRACE=full` (none of which should matter, i think)
i.e. all the tests are passing
and if you blew away that dir then ugh
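(For reference, a sketch of the invocation being described, assuming the tests are run with `cargo test` from within `src/rust/engine`; the test-name filter is taken from the failing test above:)

```sh
# run the failing remote store test in a debug build with full backtraces
MODE=debug RUST_BACKTRACE=full cargo test remote_workunits_are_stored
```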
e
yeah sorry, I do also run from the engine directory
a
yeah exactly
i have been stuck in this exact situation for a while before when developing on rust code, especially the caching command runner specifically, and haven't gotten out of it, which is not good, but i don't remember what i did last time
e
there wasn't anything in /private/var/folders that looked related at all
a
i might also try blowing away `~/.cache/pants` (or just the lmdb store, maybe) along with `git clean -xfd`
yeah that's why i'm thinking maybe clearing the lmdb store might work
but not sure
as in, maybe it has pointers to bad filesystem entries
(guessing)
trying very hard to remember what i did last time, it was extremely frustrating. sorry
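(A sketch of the cleanup steps suggested above; the `lmdb_store` subdirectory name is an assumption about where the store lives under `~/.cache/pants`:)

```sh
# clear just the lmdb store (assumption about its location), or all of ~/.cache/pants
rm -rf ~/.cache/pants/lmdb_store
# from the repo root: remove all untracked and ignored files, including build outputs
git clean -xfd
```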
e
no worries for now. I'm cleaning out pants
and if that doesn't work i'll remove the pants cache dir
👍 1
a
ok. if we figure it out we should document it this time -- my fault for not following up previously. if this continues to block you later today or tomorrow morning we could pair on it or someone else can pair on it, whoever has context/time
a
@fancy-motherboard-24956 has run into this in the past and rebooting his actual OS has generally been his fix; can’t remember whether we worked out anything more useful
It didn’t seem to actually relate to disk space, but may have been related to the number of free inodes on a tmp mount or something?
f
Yes, I can confirm I’ve hit this a couple of times, and Hagai also hit this once and asked me for advice. I don’t have a fix at this point, unfortunately. Rebooting works as a workaround.
While I would have expected the general OS limits to have something to do with this, nothing I tried setting to unlimited (max files, max process handles…) did anything.
e
same, unlimited open files didn't change anything, and the disk was obviously not full. inodes could have been an issue. Either way a restart fixed it 🤧
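(For future reference, a sketch of the checks discussed above, on macOS; none of them identified the cause here, but they cover the disk-space, inode, and open-file-limit theories:)

```sh
# free space and free inodes on the volume backing /var/folders
df -h /var/folders
df -i /var/folders
# open-file limits: per-shell and system-wide
ulimit -n
launchctl limit maxfiles
```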