In Orange we've been fighting a lot with memory usage over the past weeks...
at the moment incredibly huge scenes are being rendered, with multiple
layers and full compositing, stressing memory limits a lot.
I had hoped that less frequently used blocks would get swapped out
nicely, so fragmented memory could survive. Unfortunately (on OSX) the
malloc range is limited to only 2 GB (the upper half of the address space).
Other OSes have a limit too, but typically a larger one afaik.
Now here's mmap to the rescue! It has a very nice feature to map to
a virtual (non-existing) file, allowing you to allocate disk-mapped memory
on the fly. For as long as there's real memory it works nearly as fast as
a regular malloc, and when you hit the swap boundary, it knows nicely
what to swap out first.
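Roughly, such an anonymous-mapping allocation looks like this in plain C
(just a sketch to illustrate the idea, not the actual guardedalloc code;
map_alloc/map_free are made-up names):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Allocate a block backed by an anonymous mapping: not tied to a
     * real file, paged in/out by the kernel on demand. */
    static void *map_alloc(size_t len)
    {
        void *ptr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANON, -1, 0);
        if (ptr == MAP_FAILED)
            return NULL;    /* out of address space or swap */
        return ptr;         /* already zero-filled by the kernel */
    }

    static void map_free(void *ptr, size_t len)
    {
        /* unlike free(), munmap() needs the block length back */
        munmap(ptr, len);
    }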
The upcoming commit will use mmap for all large memory blocks, like
the compositing stack, render layers, lamp buffers and images. Tested here
on my 1 GB system; compositing huge images with a total of 2.5 gig
still works acceptably here. :)
http://www.blender.org/bf/memory.jpg
This is a silly compositing test, using 64 MB images with a load of nodes.
Check the header print... the (2323.33M) is the mmap disk-cache in use.
BTW: note that this is still limited to the virtual address space of 4 GB.
The new call is:
MEM_mapalloc()
By definition, mmap() returns zeroed memory, so a calloc isn't required.
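For callers it would look something like this (the exact signature isn't
shown above, so I'm assuming it mirrors the MEM_mallocN(size, tag) convention
and that MEM_freeN handles map-allocated blocks too; check the commit):

    /* assumed signature, analogous to MEM_mallocN(len, str) */
    float *rectf = MEM_mapalloc(xsize * ysize * 4 * sizeof(float),
                                "renderlayer rect");

    /* no memset or calloc needed: the mapping is already zero-filled */

    MEM_freeN(rectf);   /* assumed to also release mmap-backed blocks */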
For Windows there's no mmap() available, but I'm pretty sure there's an
equivalent. Windows gurus here are invited to add it to the code! At
the moment it's nicely ifdeffed, so on Windows the mmap defaults to a
regular alloc.
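One possible shape for that ifdef (illustrative only, the real commit may
differ; a native Windows port could use VirtualAlloc, which also returns
zeroed pages):

    #ifdef WIN32
        /* no mmap() here; use a zeroed allocation so behaviour matches
         * the mmap path */
        memblock = calloc(1, len);
    #else
        memblock = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANON, -1, 0);
        if (memblock == MAP_FAILED)
            memblock = NULL;
    #endif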
For errors, switched MEM_set_error_stream to MEM_set_error_callback,
which calls a function to print the result instead of just taking
a FILE *.
Note: requires intern recompile
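For callers the change looks roughly like this (the exact prototype is my
guess from the description, not copied from the commit):

    /* assumed callback prototype: receives the formatted error string */
    static void mem_error_cb(const char *errorStr)
    {
        fprintf(stderr, "%s", errorStr);
    }

    /* before: MEM_set_error_stream(stderr);   (passing a FILE *) */
    /* now: */
    MEM_set_error_callback(mem_error_cb);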
I took out the following from the includes in the intern dir that still had
it:
-#ifdef HAVE_CONFIG_H
-#include <config.h>
-#endif
Kent
--
mein@cs.umn.edu