Windows 7 - Limits on size of bitmap from CreateCompatibleBitmap()?

Asked By Chris Shearer Cooper on 18-Oct-07 10:10 AM
I'm trying to create a large (5734x10460) bitmap using
CreateCompatibleBitmap() and getting ERROR_NOT_ENOUGH_MEMORY errors.  The DC
is set to 32 bits per pixel, so that's 4 bytes per pixel, for a total of
approximately 229M of memory needed (right?).  I've got 2G of RAM, and 1G of
it free, so I'm betting it's not that I'm running out of available RAM (XP
Pro, by the way).

Does CreateCompatibleBitmap() have some other restrictions on its memory
usage?  For example, is it using graphics card memory somehow, and that's
the limitation?


/dev/nul replied on 18-Oct-07 10:46 AM
If I'm not wrong, CreateCompatibleBitmap() will try to create the bitmap
in video memory (as CreateDIBitmap() does, for example). Even if your
card has 256MB of video RAM, the call will fail if there isn't 229M
available. Use CreateDIBSection() instead.
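Something along these lines - a quick sketch using the size from your post, with error handling kept minimal:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Describe a 5734x10460 32bpp top-down DIB. */
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = 5734;
    bmi.bmiHeader.biHeight      = -10460;   /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void *bits = NULL;
    /* hdc can be NULL when the color format is DIB_RGB_COLORS */
    HBITMAP hbm = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    if (hbm == NULL) {
        printf("CreateDIBSection failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... select hbm into a memory DC and draw with GDI as usual ... */

    DeleteObject(hbm);
    return 0;
}
```

The bits pointer lets you read and write the pixels directly, which is another advantage over a DDB.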

Grzegorz Wróbel
Chris Becke replied on 18-Oct-07 11:23 AM
DDBs are allocated in video memory. Apparently the driver can choose where
and how to allocate; I'm not a kernel driver developer so I don't know. What I
do know is that the nVidia cards I've tested won't allocate a byte's worth of DDB
more than I have video memory - and the video memory is effectively halved
from what the box says when I have two screens connected to one card.

DIBSections, on the other hand, are limited by kernel mode address space - a
hard upper limit of 2GB there.

Bigger than that? Target 64-bit Vista.
Norman Diamond replied on 19-Oct-07 04:30 AM
Kernel mode?  Not user mode?

Even in Vista x64, some libraries limit some individual objects to 2GB.  I
assumed this was in user mode address space and that the limits were due to
libraries not yet ready for year 2004.  Some libraries released in year 2007
aren't yet ready for year 2004.
Chris Becke replied on 19-Oct-07 06:09 AM
You know, I hate it when I clearly remember reading something, but I can
never find a cross reference to back it up.

DIBSections certainly are kind of strange in that - in order for GDI to deal
with them, the bits must be accessible to kernel mode. The memory must also
be available in user mode.

As far as my limited understanding of kernel mode goes, this means (on
32-bit platforms) that they have to be allocated in the kernel-mode
accessible memory range - as a result, large DIB sections can starve the
system of kernel mode address space.

It is my naive understanding that kernel section objects have this dual
access property - hence DIB Section.
/dev/nul replied on 19-Oct-07 08:39 AM
You probably confused DIBs with DDBs that are created in video memory,
which is indeed addressed in kernel-mode space (in order to access video
memory a driver needs to address it).

What is so strange about that? The kernel mode services can access any
memory location, including all of the applications' memory spaces.

That is not correct; CreateDIBSection() allocates the bitmap in
user-mode paged memory. The bitmap size is not affected by any
kernel-mode address space limitation.

The only related limitation imposed by the available kernel-mode address
space I can think of here is connected with the mentioned fact that the
video memory needs to be addressed in the kernel-mode space. For
example, if you have a video adapter with a lot of onboard RAM and use
the /3GB switch (dedicating 3GB of address space to user mode and leaving
only 1GB for the kernel), the driver will most likely fail to load.

Grzegorz Wróbel
Chris Becke replied on 19-Oct-07 10:00 AM
More a misunderstanding of section objects on my part. Reading up on it,
it doesn't imply any problems for anything other than the current process
when allocating large DIB sections.
Jerry Coffin replied on 21-Oct-07 03:25 PM
DDBs are allocated from a system-wide pool that's typically limited to
about 200 Megabytes on 32-bit versions of Windows. In some cases (e.g.
Terminal Server sessions) the limit is lower still. On 64-bit versions
of Windows, the limit has apparently been raised considerably. I tried
to test for the limit, but got sick of watching the disk thrash when I'd
allocated somewhere around 6 gigabytes worth of bitmaps...
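For the curious, the test was nothing fancy - roughly the following (a sketch; the bitmap size and array bound are arbitrary, and it assumes a 32bpp screen mode for the byte count):

```c
#include <windows.h>
#include <stdio.h>

/* Keep allocating 2048x2048 screen-compatible DDBs until
   CreateCompatibleBitmap() fails, then report the rough total. */
int main(void)
{
    HDC screen = GetDC(NULL);
    HBITMAP bitmaps[4096];
    int n = 0;
    unsigned long long total = 0;

    while (n < 4096) {
        HBITMAP b = CreateCompatibleBitmap(screen, 2048, 2048);
        if (b == NULL)
            break;
        bitmaps[n++] = b;
        total += 2048ULL * 2048ULL * 4ULL;  /* assumes 32bpp screen */
    }
    printf("Allocated %d DDBs (~%llu MB) before failure\n",
           n, total / (1024 * 1024));

    while (n > 0)
        DeleteObject(bitmaps[--n]);
    ReleaseDC(NULL, screen);
    return 0;
}
```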


The universe is a figment of its own imagination.
Norman Diamond replied on 21-Oct-07 08:51 PM
Oh neat.  This provides a way for a single non-privileged client to
accomplish a denial-of-service attack on a server, right?
Jerry Coffin replied on 21-Oct-07 09:12 PM
I suppose you could use it that way. I'm not sure it accomplishes
tremendously more than other memory allocation though...

Norman Diamond replied on 22-Oct-07 12:33 AM
Surely other kinds of memory allocation that can be performed by user
processes can only DoS that user's own processes?  (Of course, different
users' processes can make each other thrash the pagefile, but that's not
the same as completely starving them.)
Jerry Coffin replied on 22-Oct-07 01:30 AM

It's not the same, but for quite a few practical purposes, it tends to
be worse. In particular, while the system is expanding the paging file,
it can/will fail other attempts at allocating memory, so it can not only
thrash, but starve them as well.

By contrast, the limit on the kernel paged memory pool means that if
nothing else, it generally either succeeds or fails _quickly_. On a
reasonably recent machine, it's unlikely to have to fiddle much with the
paging file before it fails when/if you allocate too much space. If you
were really concerned about this, it would be pretty easy to set up a
small monitoring program to shut down any of a group of semi-untrusted
programs if they used up too much paged kernel space.
GetProcessMemoryInfo will get you the required data quite easily, so
it's just a matter of running that periodically, and calling
TerminateProcess when/if you decide to do so.
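Roughly like this (a sketch - the 64MB threshold is just an example, and error handling is trimmed):

```c
#include <windows.h>
#include <psapi.h>   /* link with Psapi.lib */
#include <stdio.h>

/* Example threshold only - pick whatever budget fits your programs. */
#define PAGED_POOL_LIMIT (64 * 1024 * 1024)

/* Poll one process's paged-pool quota; kill it if it's over budget.
   hProcess needs PROCESS_QUERY_INFORMATION | PROCESS_TERMINATE access. */
BOOL check_and_maybe_kill(HANDLE hProcess)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (!GetProcessMemoryInfo(hProcess, &pmc, sizeof pmc))
        return FALSE;

    printf("paged pool usage: %Iu bytes\n", pmc.QuotaPagedPoolUsage);
    if (pmc.QuotaPagedPoolUsage > PAGED_POOL_LIMIT)
        return TerminateProcess(hProcess, 1);
    return TRUE;
}
```

Call that from a timer or a sleep loop over your group of semi-untrusted processes.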

A more direct way of dealing with this would be to run the process(es)
inside of a job object. A job object allows you to specify limits on
committed memory as well as things like process scheduling, processor
usage, working set size, etc. Just for example, most programs could be
limited to 100 Meg. (total), which would prevent them from denying much
to anybody else, and still allow most legitimate code to work just fine.
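A sketch of that, using the 100 Meg. figure from above (launching the child process and closing the handles is left out):

```c
#include <windows.h>

/* Cap a process's committed memory at 100 MB with a job object.
   hChild is assumed to be a handle to an already-created process. */
BOOL cap_process_memory(HANDLE hChild)
{
    HANDLE job = CreateJobObject(NULL, NULL);
    if (job == NULL)
        return FALSE;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {0};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 100 * 1024 * 1024;   /* 100 MB commit cap */

    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &limits, sizeof limits))
        return FALSE;

    return AssignProcessToJobObject(job, hChild);
}
```

Once the process is in the job, commits beyond the cap simply fail inside that process instead of denying memory to everyone else.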

Chris Becke replied on 22-Oct-07 02:36 AM
What card & drivers did you test with? I ran a DDB allocation test on a
dual-head nVidia Quadro card with the latest ForceWare drivers, and the limit
was the video memory on the card. (Upon reaching that limit, the rest of the
system started to fail to draw things correctly.)

I'm pretty sure memory allocation strategies are up to the driver - if they're
not meant to be, they are anyway.
Jerry Coffin replied on 23-Oct-07 09:14 PM

A GeForce 8800 GTS/640, driver version (yeah, I should
probably update it, now that I look at things...)

Until I ran my test, that was what I thought. I'm pretty sure there's a
driver entry point to allow it to create a DDB if it chooses to do so,
and the GDI does so only when/if the driver chooses not to. I suppose
it's possible that (for whatever reason) my driver is either always
using main memory, or attempting to be intelligent and allocate from
main memory when/if I ask for too big of a bitmap to fit on-board. I
suppose I could do some more testing to see if there's an abrupt
increase in use of main memory around the time I ask for more memory
than the card has available.
