Re: "Cannot allocate memory" error when memory is enough

On Tue, 3 Jul 2018, Philip Guenther wrote:
<nothing of interest>

Flakey button on my mouse; time to clean it again and throw it out if it
keeps glitching. Sorry about that.


> On Tue, Jul 3, 2018 at 4:53 PM Nan Xiao <xiaonan830818@gmail.com> wrote:
> > Thanks for your reply! The "ulimit -a" outputs the following:
> >
> > $ ulimit -a
> > time(cpu-seconds) unlimited
> > file(blocks) unlimited
> > coredump(blocks) unlimited
> > data(kbytes) 33554432
> > stack(kbytes) 8192
> > lockedmem(kbytes) 1332328
> > memory(kbytes) 3978716
> > nofiles(descriptors) 128
> > processes 1310
> >
> > It seems it should be enough to launch cmake or egdb.

But it wasn't, and the kernel can only indicate that with a single error
code, so now you have to actually dig into what's going on. There are
many possibilities, as a search for ENOMEM in /usr/src/sys/kern/*exec*.c
will show.
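(Something like

$ grep -n ENOMEM /usr/src/sys/kern/*exec*.c

will list the candidate return points, assuming you have the source
tree checked out under /usr/src.)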
1) the ELF interpreter (normally ld.so) could be too large
2) the PT_OPENBSD_RANDOMIZE segment could be larger than permitted by the
kernel
3) the program's text segment could exceed the maximum for the arch,
MAXTSIZ
4) the program's vnode couldn't be mmapped for some reason
5) the argument list and environment were together too big for the stack
6) the signal trampoline couldn't be mapped into the process VM
7) other random memory allocation problems
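Several of those you can eyeball straight from the program headers.
Assuming the readelf on your system knows the OpenBSD-specific segment
types, something like

$ readelf -lW /usr/local/bin/cmake

shows the INTERP path for (1), the MemSiz of the OPENBSD_RANDOMIZE
segment for (2), and the size of the executable LOAD segment for (3).
(The path is a guess; point it at whichever binary is failing.)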

Of those, (1), (4), and (6) are *really* unlikely. (3) is possible if
you're building a debug binary that ends up *huge* as a result. (5) would
result in _all_ programs failing in that shell. I think (7) would show up
in a close examination of the "vmstat -m" output.
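For what it's worth, a few quick ways to put rough numbers on those:
size(1) for the text segment in (3), the environment size for (5), and
the kernel malloc stats for (7):

$ size /usr/local/bin/egdb
$ env | wc -c
$ vmstat -m

(Again, the egdb path is a guess; /usr/local/bin is where the ports
version usually lands.)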

(2) is perhaps the most likely: recent compiler changes have increased
the expected size of the PT_OPENBSD_RANDOMIZE segment, and while the
kernel limit on that was also raised recently, you didn't provide any
information about your setup. Are your kernel, userland, and ports all
in sync?
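If you want to test the (2) theory directly, compare the segment in the
failing binary against the limit in the kernel you're actually running
(the check lives in /usr/src/sys/kern/exec_elf.c):

$ readelf -lW /usr/local/bin/cmake | grep OPENBSD_RANDOMIZE
$ sysctl kern.version

The kern.version output should also tell you whether your kernel's
build date lines up with your userland and packages.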


Philip Guenther
