Author Topic: Bad crash with search  (Read 612 times)

jporkkahtc

  • Senior Community Member
  • Posts: 1828
  • Hero Points: 177
  • Text
Bad crash with search
« on: October 16, 2018, 09:32:15 pm »
Doing a Find in Files (FiF) over a set of 3 log files (each > 8 GB), I sometimes get SlickEdit to crash.

I cannot get a minidump, because the Visual Studio debugger also crashes when it tries to JIT attach, and even when I attach it to vs.exe before the crash.

See the attached screenshot.
NOTE: All 3 dialogs in this screenshot are open at the same time -- not consecutively.

For the crash I have:
Tools > Options > Application Options > Virtual Memory. Buffer cache size (MB) = 8000
And I do a FiF for a string which occurs a few dozen times in the 3 log files; one of the log files is already open as a buffer in SlickEdit.

I have reproduced this about 4-5 times.



Clark

  • SlickEdit Team Member
  • Senior Community Member
  • Posts: 4896
  • Hero Points: 399
Re: Bad crash with search
« Reply #1 on: October 16, 2018, 09:56:27 pm »
What is the encoding of the log files? Is the extension just .log?

Clark

  • SlickEdit Team Member
  • Senior Community Member
  • Posts: 4896
  • Hero Points: 399
Re: Bad crash with search
« Reply #2 on: October 16, 2018, 10:06:25 pm »
I think the problem is that your cache is too large, so SlickEdit is running out of memory. That would also explain why Visual Studio is crashing.

How much memory do you have? In order for 8000 to work, you would need at least a 32gig system. Even that's cutting it close.

Try a much smaller cache size (like 2000).

jporkkahtc

  • Senior Community Member
  • Posts: 1828
  • Hero Points: 177
  • Text
Re: Bad crash with search
« Reply #3 on: October 16, 2018, 10:39:20 pm »
Encoding: LF ACP
Extension: .log
I have 16 GB of actual RAM.

When I load the files in SlickEdit, its memory usage tops out at around 8 GB -- just as I would expect.

Why would I need 32 GB?
If I were running out of memory, the system should start swapping, so performance would get really bad, but it shouldn't crash.


I wasn't running into problems with system memory -- well, some apps were getting slow, but nothing like the crawl you get when the system is swapping like crazy.

Clark

  • SlickEdit Team Member
  • Senior Community Member
  • Posts: 4896
  • Hero Points: 399
Re: Bad crash with search
« Reply #4 on: October 16, 2018, 11:07:24 pm »
Write a small 64-bit C++ app which calls malloc and attempts to allocate 8gig. It will fail. I think 4gig will work. It’s a very interesting exercise.

Let’s say the max memory the OS will give SlickEdit is 4gig. You can’t give all 4gig to edit buffers. Hope this all makes sense.
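
A minimal sketch of that experiment, assuming a 64-bit MSVC build (the file name and messages are just illustrative, not SlickEdit code):

// alloc_test.cpp -- minimal sketch of the malloc experiment described above.
// Build it as a 64-bit binary (e.g. the x64 configuration in Visual Studio).
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t eight_gb = 8ull * 1024 * 1024 * 1024;
    void* p = std::malloc(eight_gb);
    if (p == nullptr) {
        std::printf("malloc of 8 GB failed\n");
        return 1;
    }
    // Whether this succeeds depends on how much memory the OS is willing to
    // commit to the process, which is the point of the exercise.
    std::printf("malloc of 8 GB succeeded\n");
    std::free(p);
    return 0;
}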

Clark

  • SlickEdit Team Member
  • Senior Community Member
  • Posts: 4896
  • Hero Points: 399
Re: Bad crash with search
« Reply #5 on: October 17, 2018, 12:47:04 am »
I just tried this on a couple of my Windows systems. It looks like whatever algorithm Windows used previously has changed. I was able to allocate 16gig on my 16gig Windows 7 system. On my 8gig Windows 10 system, I was able to allocate 27gig. On older Windows OSes, the OS would never let you allocate more than 1/3 of the total memory available.

In any case, it would be a good idea to try a much smaller buffer cache size to see if that fixes the problem. I'm still very suspicious.

jporkkahtc

  • Senior Community Member
  • Posts: 1828
  • Hero Points: 177
  • Text
Re: Bad crash with search
« Reply #6 on: October 17, 2018, 04:42:14 pm »
So 64-bit Windows allows a 64-bit process to allocate much more than 4 GB -- I've done this before.
Given your challenge, I of course tried it on my work PC with VS 2015.
Lo and behold, I couldn't get a 64-bit program to allocate more than about 0xe0000000 bytes (even when allocating many smallish blocks).

Weird -- but I figure it must be some compiler option. (I dug around a bit, but it wasn't obvious to me which option it might be.)

I've now tried it at home using VS2017 Community edition -- and as I expected, I have no problem allocating a single 8 GB block, or many smaller blocks -- same source code.

On my machine with 16 GB of RAM, I can successfully do a HeapAlloc of 8 GB, and I can allocate a total of 25 GB in smaller blocks.
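
For reference, a hedged sketch of the Win32 variant of that test, assuming HeapAlloc from the default process heap (the 256 MB chunk size in the loop is just an illustrative choice):

// heap_test.cpp -- sketch of the HeapAlloc variant of the experiment (Windows, 64-bit build).
#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T eight_gb = 8ull * 1024 * 1024 * 1024;
    // One large block from the default process heap.
    void* big = HeapAlloc(GetProcessHeap(), 0, eight_gb);
    std::printf("8 GB HeapAlloc %s\n", big ? "succeeded" : "failed");
    if (big) HeapFree(GetProcessHeap(), 0, big);

    // Many smaller blocks until allocation fails, to see the total the OS will hand out.
    // The blocks are intentionally not freed; the process exits right after.
    const SIZE_T chunk = 256ull * 1024 * 1024;  // 256 MB
    SIZE_T total = 0;
    while (HeapAlloc(GetProcessHeap(), 0, chunk) != nullptr)
        total += chunk;
    std::printf("allocated about %llu GB in 256 MB chunks before failing\n",
                (unsigned long long)(total >> 30));
    return 0;
}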


WRT memory behaviors:
On Windows, if a memory allocation succeeds, the system has guaranteed that you can access the amount of memory requested.
It backs this promise up with some combination of physical memory and the pagefile.
If your PC has 2 GB of RAM and a 4 GB pagefile, then your 64-bit process can allocate somewhere near 6 GB (less whatever system overhead).


On Linux, malloc() only fails once your process runs out of *address* space.
If your PC has 2 GB of RAM and 4 GB of swap, then your 64-bit process is not limited to allocating only 6 GB.
It can allocate 32 GB.
The problem comes when it tries to actually access it -- if you touch more memory than the system can back with physical RAM + swap, your process is terminated immediately (no chance for error recovery).
There are probably ways to modify this behavior, but this is the default.
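
A hedged sketch illustrating that Linux behavior (the outcome depends on the kernel's overcommit settings; the 32 GB figure and the page-touch loop are just for illustration):

// overcommit_test.cpp -- sketch of the Linux overcommit behavior described above.
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t thirty_two_gb = 32ull * 1024 * 1024 * 1024;
    // With the default overcommit policy, this typically succeeds even on a
    // machine with far less than 32 GB of RAM + swap: only address space is reserved.
    char* p = static_cast<char*>(std::malloc(thirty_two_gb));
    if (p == nullptr) {
        std::printf("malloc of 32 GB failed\n");
        return 1;
    }
    std::printf("malloc of 32 GB succeeded; now touching pages...\n");
    // Touching the pages forces the kernel to actually back them. Once
    // RAM + swap is exhausted the process is killed outright -- there is
    // no error return to recover from.
    for (size_t i = 0; i < thirty_two_gb; i += 4096)
        p[i] = 1;
    std::printf("touched all pages (only reachable if the system could back them)\n");
    std::free(p);
    return 0;
}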