So 64-bit Windows allows a 64-bit process to allocate much more than 4GB -- I've done this before.
Given your challenge, I of course tried it on my work PC with VS 2015.
Lo and behold, I couldn't get a 64-bit program to allocate more than about 0xE0000000 bytes (roughly 3.5GB), even when allocating many smallish blocks.
Weird -- but I figured it must be some compiler option. (I dug around a bit, but it wasn't obvious to me which option it might be.)
I've now tried at home using the VS2017 Community edition -- and as I expected, I have no problem allocating a single 8GB block, or many smaller blocks, from the same source code.
On my machine with 16GB of RAM, I can successfully do a HeapAlloc of 8GB, and I can allocate a total of 25GB in smaller blocks.
With regard to memory behavior:
On Windows, if a memory allocation succeeds, the system has guaranteed that you can access the full amount requested: the allocation is *committed*.
It backs this promise with some combination of physical RAM and pagefile space.
If your PC has 2GB of RAM and a 4GB pagefile, then your 64-bit process can allocate somewhere near 6GB (less whatever the system and other processes have already committed).
On Linux, by default, malloc() only fails once your process runs out of *address* space.
If your PC has 2GB of RAM and 4GB of swap, your 64-bit process is not limited to allocating only 6GB.
It can allocate 32GB without complaint.
The problem comes when it tries to actually access that memory: pages are only backed by physical storage on first touch, and if the system runs out of physical memory plus swap, the kernel's OOM killer terminates a process outright (SIGKILL, so no chance for error recovery).
This default can be changed -- for example, setting the vm.overcommit_memory sysctl to 2 makes allocations fail up front instead -- but overcommit is the out-of-the-box behavior.