# general
c
Seeing `Error storing versioned key "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20": MDB_MAP_FULL: Environment mapsize limit reached` when running a full build in Pants. Is there a way to increase the size of this map? The node I have this running in has 48Gi of memory, which should be enough.
w
sorry for the trouble! the map-size is actually a disk usage limit on the content in `~/.cache/pants/lmdb_store`. those directories are garbage collected by `pantsd`, which should prevent them from growing too large
how large are they in this case, and is `pantsd` enabled?
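For context on the `MDB_MAP_FULL` error above: LMDB pre-declares a maximum map size, and any write that would grow the database past it fails regardless of how much RAM or free disk the machine has. A minimal sketch of that behaviour, not Pants code (Pants' store is implemented in Rust); it uses the Python `lmdb` binding, and the path and sizes here are made up for illustration:

```python
# Sketch of LMDB's map_size acting as a hard on-disk cap (py-lmdb binding).
import lmdb

# 1 GiB cap -- the same default value mentioned below (1073741824 bytes).
env = lmdb.open("/tmp/lmdb-mapsize-demo", map_size=1_073_741_824)

try:
    # Write ~2 GiB of 1 MiB values, one transaction each. Available RAM and
    # free disk space are irrelevant: writes fail once the 1 GiB map fills.
    for i in range(2048):
        with env.begin(write=True) as txn:
            txn.put(f"key-{i}".encode(), b"x" * (1024 * 1024))
except lmdb.MapFullError:
    print("MDB_MAP_FULL: Environment mapsize limit reached")
```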
c
It's the default, so `1073741824` bytes.
I can check the disk limit on the pod, but they are usually very large, like 1T
w
i’m saying that Pants has a disk size limit on the content of those directories, which `pantsd` usually prevents you from hitting
c
Oh, so `pantsd` should be cleaning up that disk usage?
w
right
c
I'm running `./pants package ::` over my project to build all the packages, and this is where it fails, probably since some of them are very large.
Will increasing the memory usage of pantsd potentially help?
w
no, this is not related to memory usage: only disk usage
@clean-night-52582: are you able to check the disk usage of `~/.cache/pants/lmdb_store`?
also, do you know if you have any individual packages that are larger than 4 GB?
c
Ya, I can re-run and find out the disk usage.
w
thanks: sorry for the trouble. we clearly need to fix something here in Pants, but i want to fix the right thing =D
c
I thought I had pex files bigger than 4GB, since we have a lot of them that depend on TensorFlow, but looking at the output now they seem to have gotten a lot smaller. I haven't checked the sizes for a couple of pex updates and they seem smaller now.
Is the 4GB the input limit? e.g. reqs and code
w
it’s the limit of a single file in the store currently
BUT, you may not be hitting that aspect of the limit. need to see the disk usage to know what we should fix here.
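A quick way to answer the "any individual packages larger than 4 GB?" question from above is to walk the build outputs and flag anything over that threshold. A minimal sketch, assuming the packages land in `dist/` (Pants' default output directory; adjust the path if yours go elsewhere):

```python
# Sketch: flag any built artifact over a size threshold.
# "dist/" is assumed because it is Pants' default output directory.
import os

THRESHOLD = 4 * 1024**3  # 4 GiB, the single-file store limit discussed above

def oversized_files(root: str, threshold: int = THRESHOLD):
    """Yield (path, size) for every regular file under `root` at or above `threshold`."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size >= threshold:
                yield path, size

for path, size in oversized_files("dist"):
    print(f"{size / 1024**3:.1f} GiB  {path}")
```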
c
I'm watching the directory. It's still in the testing phase, about `2.8G` currently.
Any given file or subdir you want me to get the size of?
w
the total set of files in there is pretty small, so the entire output of `du -h ~/.cache/pants/lmdb_store` would be great
c
484M    /home/coconor/.cache/pants/lmdb_store/files/0
2.0G    /home/coconor/.cache/pants/lmdb_store/files/1
1.2G    /home/coconor/.cache/pants/lmdb_store/files/2
1.7G    /home/coconor/.cache/pants/lmdb_store/files/3
988M    /home/coconor/.cache/pants/lmdb_store/files/4
1.3G    /home/coconor/.cache/pants/lmdb_store/files/5
2.1G    /home/coconor/.cache/pants/lmdb_store/files/6
951M    /home/coconor/.cache/pants/lmdb_store/files/7
2.0G    /home/coconor/.cache/pants/lmdb_store/files/8
2.4G    /home/coconor/.cache/pants/lmdb_store/files/9
1.5G    /home/coconor/.cache/pants/lmdb_store/files/a
2.6G    /home/coconor/.cache/pants/lmdb_store/files/b
4.5G    /home/coconor/.cache/pants/lmdb_store/files/c
216M    /home/coconor/.cache/pants/lmdb_store/files/d
1.5G    /home/coconor/.cache/pants/lmdb_store/files/e
2.1G    /home/coconor/.cache/pants/lmdb_store/files/f
27G     /home/coconor/.cache/pants/lmdb_store/files
340K    /home/coconor/.cache/pants/lmdb_store/directories/0
308K    /home/coconor/.cache/pants/lmdb_store/directories/1
300K    /home/coconor/.cache/pants/lmdb_store/directories/2
280K    /home/coconor/.cache/pants/lmdb_store/directories/3
312K    /home/coconor/.cache/pants/lmdb_store/directories/4
264K    /home/coconor/.cache/pants/lmdb_store/directories/5
300K    /home/coconor/.cache/pants/lmdb_store/directories/6
252K    /home/coconor/.cache/pants/lmdb_store/directories/7
332K    /home/coconor/.cache/pants/lmdb_store/directories/8
356K    /home/coconor/.cache/pants/lmdb_store/directories/9
280K    /home/coconor/.cache/pants/lmdb_store/directories/a
288K    /home/coconor/.cache/pants/lmdb_store/directories/b
300K    /home/coconor/.cache/pants/lmdb_store/directories/c
300K    /home/coconor/.cache/pants/lmdb_store/directories/d
300K    /home/coconor/.cache/pants/lmdb_store/directories/e
316K    /home/coconor/.cache/pants/lmdb_store/directories/f
4.8M    /home/coconor/.cache/pants/lmdb_store/directories
84K     /home/coconor/.cache/pants/lmdb_store/processes/0
84K     /home/coconor/.cache/pants/lmdb_store/processes/1
76K     /home/coconor/.cache/pants/lmdb_store/processes/2
72K     /home/coconor/.cache/pants/lmdb_store/processes/3
80K     /home/coconor/.cache/pants/lmdb_store/processes/4
76K     /home/coconor/.cache/pants/lmdb_store/processes/5
76K     /home/coconor/.cache/pants/lmdb_store/processes/6
76K     /home/coconor/.cache/pants/lmdb_store/processes/7
80K     /home/coconor/.cache/pants/lmdb_store/processes/8
76K     /home/coconor/.cache/pants/lmdb_store/processes/9
72K     /home/coconor/.cache/pants/lmdb_store/processes/a
76K     /home/coconor/.cache/pants/lmdb_store/processes/b
76K     /home/coconor/.cache/pants/lmdb_store/processes/c
72K     /home/coconor/.cache/pants/lmdb_store/processes/d
72K     /home/coconor/.cache/pants/lmdb_store/processes/e
84K     /home/coconor/.cache/pants/lmdb_store/processes/f
1.3M    /home/coconor/.cache/pants/lmdb_store/processes
27G     /home/coconor/.cache/pants/lmdb_store
Looks like `/home/coconor/.cache/pants/lmdb_store/files/c` is going over 4 GB
w
4.5G    /home/coconor/.cache/pants/lmdb_store/files/c
… the fact that this is so uneven probably implies a fairly large file
yea. maybe not 4 GB, but probably at least 1.5 GB.
@clean-night-52582: and this was all from one run? i.e., these directories were empty before the run?
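On the "uneven shards imply a large file" point: the `0`–`f` directories appear to be shards keyed by the leading hex digit of each blob's content hash, so many small blobs spread out roughly evenly, while a single multi-GB blob shows up as one outsized shard. A sketch of measuring that skew directly; the sharding-by-first-nibble interpretation is an assumption based on the directory names, not confirmed from Pants internals:

```python
# Sketch: measure per-shard disk usage under ~/.cache/pants/lmdb_store/files
# and flag shards much larger than the median -- the same signal read off the
# `du` output above. Note: this reports apparent sizes; `du` reports allocated
# blocks, which can differ for sparse LMDB data files.
import os
import statistics

STORE = os.path.expanduser("~/.cache/pants/lmdb_store/files")

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for dirpath, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

sizes = {
    shard: dir_size(os.path.join(STORE, shard))
    for shard in sorted(os.listdir(STORE))
    if os.path.isdir(os.path.join(STORE, shard))
}

median = statistics.median(sizes.values())
for shard, size in sizes.items():
    flag = "  <-- skewed, likely holds one very large blob" if size > 2 * median else ""
    print(f"{shard}: {size / 1024**3:.1f} GiB{flag}")
```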
c
Yes, this was run in a fresh Docker container
w
ok. yea, very sorry for the trouble. i’ll get a fix out for this today. the limit is too low, but i also should have made it configurable last time i was in here.
@clean-night-52582: which release are you on?
c
Pants `2.3.0`
w
ok, thanks: yea, i’ll get this in a `2.3.0rc2` this afternoon
c
Thanks
w
opened https://github.com/pantsbuild/pants/pull/11777 for this… sorry for the trouble. will try to get it out in a release tomorrow.
c
Thanks @witty-crayon-22786, I got around it by disabling some of our bigger binaries for the moment. Will upgrade to this when available.
w
oof… this is quite a cherrypick unfortunately.
if we were to get `2.4.0rc0` out, would that be an option for you?
there is another option if you need this fix on `2.3.0`: i can backport a much smaller fix to increase the limit. just let me know… otherwise, i think that the plan will be to wait for `2.4.0rc0` today/tomorrow
c
How big of a change is it to upgrade to `2.4.0rc0`? I'll give it a shot
w
it’s not quite out, although we’re hoping that we can cut it tonight…
to get a sense of how large it is though, you could try out `2.4.0.dev3` from last week…
let me know though: if it’s too much of a pain to get on `2.4.x` and you need the fix sooner, i am happy to backport the workaround.