Archived Beta Discussions > SlickEdit 2016 v21 Beta Discussion

Safe saves for large files


Slick doesn't seem to do safe-saves for large files.
Once a file is larger than the backup size limit, Slick seems to simply overwrite the existing file in place.
Potential disaster lurks.

Shouldn't it at least write the contents to a temp file, then rename the temp file into place over the original?
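That pattern is straightforward to sketch. A minimal Python illustration of temp-file-then-rename (an assumption of how it could work, not SlickEdit's actual code):

```python
import os
import tempfile

def safe_save(path, data):
    """Write data to a temp file in the same directory, then atomically
    replace the original. The original is never left half-written."""
    dir_name = os.path.dirname(os.path.abspath(path)) or "."
    # Create the temp file on the same filesystem so the rename is atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, prefix=".safesave-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes reach the disk
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)         # clean up the temp file on failure
        raise
```

If the write fails for any reason, the original file is untouched; the cost is temporarily needing disk space for a second copy.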

Yes and no. There needs to be an option, maybe multiple options. It's easy to run out of disk space, and other customers are not happy when duplicate data causes them to run out of disk space.

I could argue that running out of disk space on save is a predictable and exceedingly rare event, given today's multi-terabyte disks.
Slick can predict with good reliability, before saving begins, whether it will run out of disk space or not.
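The kind of pre-flight check meant here is cheap; a hedged sketch (the 64MB slack margin is my own assumption, not anything Slick does):

```python
import os
import shutil

def can_safe_save(path, new_size, slack=64 * 1024 * 1024):
    """Return True if the filesystem holding `path` has room for a full
    temporary copy of the new contents, plus some slack."""
    dir_name = os.path.dirname(os.path.abspath(path)) or "."
    free = shutil.disk_usage(dir_name).free
    return free >= new_size + slack
```

An editor could run this once before the save and fall back to an in-place overwrite (with a warning) only when the check fails.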

OTOH, an error that occurs while writing over a large file is unpredictable and generally unrecoverable - resulting in data loss.

My only option in Slick for protecting large files with safe saves is using VSDELTA backups.
But delta backups are inherently slow and apparently have some inefficiencies.

For comparison, I saved a 500MB file from Slick, first without backup, then with backup.
Without backup: 3 seconds.
With backup: 1 minute, 49 seconds -- wow, that's really slow.
   During the save, slick:
      Reads the original file 4 times, writes the original file once.
Creates and writes the VSDelta file twice, and reads it back in once.
Clearly, for large files using VSDelta really isn't an option.

Here is the experiment I did, using PROCMON for monitoring.

Starting with BufferCacheSize: 250MB.
Load partially larger than: 8000KB.
Max size to backup: 5000KB.

Open a large file (500MB)
    Slick reads the first 8KB of the file - good.

Change the first character.
    Slick reads 8KB, then writes 8KB, from the beginning to the end of the file.
    9:47:53 start
    9:47:56 end
    Elapsed time: About 3 seconds.

Now, change the backup size limit to 1GB.
Change the first character again.
    Slick looks for, but doesn't find the VSDelta file at 9:49:37.1894626 AM

   Reads the original file, all 500MB in a single read!

    Slick creates the VSDelta file at 9:49:37.9289904 AM
    Slick writes 256KB at a time into the vsdelta file.
    Notably: Slick doesn't read the original file here - it must be in memory now, despite being too large for the buffer cache size.
    Finished writing 500MB and closed the vsdelta file: 9:49:42.5931828 AM
    Elapsed: about 5 seconds

    Slick opens the original file and begins writing to it at 9:49:42.6562641 AM
    Slick reads 8KB, then writes 8KB, from the beginning to the end of the file.
    (Hm...clearly Slick must have the file in memory, so why is it reading the file now???)
    Finished reading: 9:49:45.5202586 AM
    Elapsed: 3 seconds

   Read the entire VSDelta file in a single 500MB read.

   Slick gets a SHARING_VIOLATION error on the original file - yet Slick is the only accessor of this file.

    Slick again opens the original file.
    Slick reads the entire file again, 8KB at a time.
    Slick reads the entire file once more, 8KB at a time.
    Slick begins reading the original file yet again, but this time starting at offset 469,688,320 and proceeding towards the beginning of the file.

    Slick writes the VSDelta file again, in 256KB chunks - starting at offset 233.

    9:51:27.2380178 AM closes the files.

Total elapsed time for save:

9:49:37.9288709 to 9:51:27.2380178: About 1 minute 49 seconds!

Safe save in a nutshell:
   Read the *entire* original file in a single read.
   Write the VSDelta in 256KB chunks.
   Read/write the original file, in 8KB chunks.
   Read the entire VSDelta file in a single 500MB read.
   Read the original file 2 times in 8KB chunks.
   Read the original file a third time in 8KB chunks, backwards this time.
   Write the VSDelta file again, in 256KB chunks.
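To put the nutshell in numbers, here is a rough tally of the I/O traffic, assuming each pass covers the full 500MB (the backwards pass starts at about 470MB, per the offset above):

```python
# All figures in megabytes, taken from the PROCMON observations above.
reads = [
    500,  # original file, one giant 500MB read
    500,  # original re-read in 8KB chunks while rewriting it
    500,  # VSDelta file read back in a single 500MB read
    500,  # original, another full pass in 8KB chunks
    500,  # original, yet another full pass in 8KB chunks
    470,  # original, partial backwards pass from offset ~470MB
]
writes = [
    500,  # VSDelta written the first time, 256KB chunks
    500,  # original file rewritten, 8KB chunks
    500,  # VSDelta written a second time, 256KB chunks
]
total = sum(reads) + sum(writes)
print(total)  # 4470 -- roughly 4.5GB of I/O to save one 500MB file
```

That is about nine times the file size in disk traffic for a single save, which matches the 36x slowdown feeling of the 3-second vs. 109-second comparison once the 8KB chunking is factored in.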

