I haven't tested how well checking tag files into git helps.
I haven't tried this either.
I'm more concerned about how well background tagging works; you should be able to keep working while it runs. I simulated this by touching the dates of 40,000 files and then switching back to SlickEdit to force a massive background tag. On my quad-core Windows and Linux machines I was able to continue working with acceptable performance.
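A small-scale sketch of that experiment, using a throwaway tree rather than a real 40,000-file project: touch every source file's date so the editor sees the whole tree as modified.

```shell
# Build a scratch tree of source files (a stand-in for a real project).
tree=$(mktemp -d)
for i in 1 2 3; do echo "int x$i;" > "$tree/file$i.c"; done

# Bump every file's mtime in one pass, which forces a full background re-tag.
find "$tree" -name '*.c' -exec touch {} +
touched=$(find "$tree" -name '*.c' | wc -l)
echo "$touched files would be queued for background re-tagging"
rm -rf "$tree"
```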
I should start by saying that my desktop is a few years old but still respectable: a quad-core 2 GHz Xeon, a FireGL graphics card (I think), two reasonably quick 7200 RPM drives, and a third 10,000 RPM drive.
I can do some work with background tagging running, but symbol def/ref lookups are pretty slow.
I suspect if you have a really slow graphics card for Linux, that would be a big problem. A notebook computer might be a problem as well (I've got an old notebook which I will test).
Haven't noticed any graphics speed related issues.
One thing I didn't like: if I pressed Ctrl+Period to look for a tag that did not exist, it switched to synchronous tagging. That's a good default, but maybe we need an option. Note that if you press Ctrl+Period to look for a tag and SlickEdit starts tagging synchronously, you can press the Cancel button and SlickEdit will continue tagging in the background.
I'm kind of agnostic about this. Not finding a symbol disrupts my thought process, especially when I'm hunting a bug, so if synchronous tagging expedites the search, that's helpful. On the other hand, it's critical to see all the references for a symbol, or at least an indication that the list is potentially incomplete, because when you're wading through 50,000+ files looking for a function you plan to change, missing an instance can lead to great unhappiness.
One thing I was wondering: when git switches to a different branch, how does your build still work? It seems to me you would need to check in the binaries, delete the binaries, or touch all the source files. If git just changed links, I would expect no dates to change.
We use a custom BSD-style build where the objects are placed in a separate output directory tree. This helps because one build often contains object code for multiple CPU targets; each set of output objects gets stored in its own subtree.
Of course none of this gets checked into git.
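A sketch of that layout (all names here are hypothetical): objects for each CPU target land in their own subtree under an output directory, mirroring the source tree, so one build can carry object code for several targets side by side.

```shell
# Place a per-target object for one source file under out/<target>/...,
# mirroring the source path, inside a throwaway root directory.
root=$(mktemp -d)
src="src/net/driver.c"
for target in x86_64 arm; do
    obj="$root/out/$target/${src%.c}.o"
    mkdir -p "$(dirname "$obj")"
    : > "$obj"                          # stand-in for the compiled object
done
objects=$(find "$root/out" -name '*.o' | wc -l)
echo "$objects objects across per-target subtrees"
rm -rf "$root"
```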
I've found that the only safe thing to do when significantly shifting the baseline of one's sandbox is to blow away the whole output tree and start afresh. For me, build correctness is the most important thing; the extra time it takes to rebuild in these situations is insignificant compared with the time it takes to track down subtle dependency-related build problems.
Now if I'm advancing my sandbox while staying on the same branch, say rebasing to a new head on the current branch, then I'll leave the object tree in place and trust the dependency engine to figure out what needs to be rebuilt.
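The "start afresh" case can be simulated with a throwaway sandbox (`out/` is a hypothetical output-tree name): after a big baseline shift the whole output tree goes, whereas after a same-branch rebase it stays in place for an incremental build.

```shell
# Set up a fake sandbox with a populated output tree.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/out/x86_64"
: > "$sandbox/out/x86_64/foo.o"

# Baseline shifted significantly: remove the whole output tree and rebuild.
rm -rf "$sandbox/out"
cleared=$([ -e "$sandbox/out" ] && echo no || echo yes)
echo "output tree cleared: $cleared"
rm -rf "$sandbox"
```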
Git doesn't just change links, per se: when a checkout moves the working tree to a different point on a branch, git rewrites any file whose content differs between the two points, and rewriting a file updates its modification date. Whether you're moving backwards (changes being unwound) or forwards (changes being applied), the affected files pick up fresh timestamps. I think of git as treating changes like patches: it has a list of patches to apply as you move forward and to revert as you move back, and every file a patch touches gets restamped.
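A scratch repo makes it easy to watch what a checkout does to file timestamps (the file name a.c and the identity settings below are hypothetical):

```shell
# Create a tiny repo with two commits touching the same file.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo v1 > a.c; git add a.c; git commit -qm v1
echo v2 > a.c; git commit -qam v2

before=$(date -r a.c +%s)               # mtime before the checkout
sleep 1
git checkout -q HEAD~1                  # rewind one commit; a.c is rewritten
after=$(date -r a.c +%s)                # mtime after the checkout
content=$(cat a.c)
echo "content is now '$content'; mtime went from $before to $after"
cd / && rm -rf "$repo"
```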
Cheers,
--Andrew