[ ... ]
It's not the same, but for quite a few practical purposes, it tends to
be worse. In particular, while the system is expanding the paging file,
it can (and will) fail other attempts at allocating memory, so it can
not only thrash, but starve those other allocations as well.
By contrast, the limit on the kernel paged memory pool means that, if
nothing else, an allocation generally either succeeds or fails
_quickly_. On a reasonably recent machine, the system is unlikely to
have to fiddle much with the paging file before failing when/if you
allocate too much space. If you were really concerned about this, it
would be pretty easy to set up a small monitoring program to shut down
any of a group of semi-untrusted programs if they used up too much
paged kernel space.
GetProcessMemoryInfo will get you the required data quite easily, so
it's just a matter of calling that periodically, and calling
TerminateProcess when/if a process exceeds whatever threshold you pick.
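The core of such a monitor might look something like this (just a
sketch: it assumes you already know the PIDs to watch, and the 10 MB
threshold is an arbitrary placeholder):

// Link with psapi.lib. PIDs and the threshold are assumptions here.
#include <windows.h>
#include <psapi.h>
#include <vector>

const SIZE_T kPagedPoolLimit = 10 * 1024 * 1024;  // placeholder limit

void check_processes(const std::vector<DWORD> &pids) {
    for (DWORD pid : pids) {
        HANDLE proc = OpenProcess(
            PROCESS_QUERY_INFORMATION | PROCESS_TERMINATE, FALSE, pid);
        if (proc == NULL)
            continue;

        PROCESS_MEMORY_COUNTERS pmc = {};
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(proc, &pmc, sizeof(pmc)) &&
            pmc.QuotaPagedPoolUsage > kPagedPoolLimit)
        {
            // Used too much paged kernel space, so shut it down.
            TerminateProcess(proc, 1);
        }
        CloseHandle(proc);
    }
}

Call that from a loop with a Sleep in between, and you have essentially
the whole monitor.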
A more direct way of dealing with this would be to run the process(es)
inside a job object. A job object allows you to specify limits on
committed memory, as well as things like process scheduling, processor
usage, working set size, etc. Just for example, most programs could be
limited to 100 MB (total), which would prevent them from denying much
to anybody else, and still allow most legitimate code to work just fine.
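Again just a sketch of that idea (error checking omitted, and
"untrusted.exe" is a placeholder for whatever program you want to
confine):

#include <windows.h>

int main() {
    HANDLE job = CreateJobObjectA(NULL, NULL);

    // Limit each process in the job to 100 MB of committed memory.
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 100 * 1024 * 1024;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    // Start the program suspended, put it in the job, then let it run,
    // so the limit applies from its first instruction onward.
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[] = "untrusted.exe";  // placeholder
    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                       CREATE_SUSPENDED, NULL, NULL, &si, &pi))
    {
        AssignProcessToJobObject(job, pi.hProcess);
        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(job);
    return 0;
}

Once the limit is in place, an allocation that would push the process
past it simply fails (e.g., VirtualAlloc returns NULL), rather than
dragging the rest of the system down with it.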
The universe is a figment of its own imagination.