Dealing with large Snapshots

If you have poor network links (replicating to ships, like I used to?), or you have large snapshots, or, unluckily, both together (been there!), you’ll need to tweak things a bit. Here are some options I’ve used:

(1) Local Initialization

This is one of the biggest time-saving options. Chris Skorlinski explains the methodology here. To paraphrase: you run the Snapshot Agent on the publisher, which creates a whole load of files in a snapshot folder. Transfer that folder to the subscriber using Robocopy or FTP, then set the Distribution Agent to use the -AltSnapshotFolder parameter. Strangely, this can be done even if you have a push subscription: you can run distrib.exe / replmerg.exe locally on the subscriber and sync up there.
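As a rough sketch of what this looks like (server names, database names, and paths below are all made up, and you’d add your own security parameters), the transfer and local sync might run something like:

    REM Copy the snapshot folder to the subscriber; /E includes subfolders, /Z is restartable mode
    robocopy "\\PUBSRV\repldata\unc\PUBSRV_SalesDB_SalesPub" "D:\LocalSnapshot" /E /Z

    REM Run the Distribution Agent locally at the subscriber, pointing it at the copied folder
    distrib.exe -Publisher PUBSRV -PublisherDB SalesDB -Publication SalesPub ^
        -Distributor PUBSRV -Subscriber SUBSRV -SubscriberDB SalesDB ^
        -AltSnapshotFolder "D:\LocalSnapshot"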

(2) Use Compression

As part of the above process, zip up the snapshot folder (use 7-Zip or similar if it is going to be >2GB), then unzip it at the subscriber. There is a compression option on the snapshot tab, but this is really slow for large snapshots; a third-party tool performs much better.
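For example (archive name and paths here are illustrative), the 7-Zip command lines at each end could be:

    REM At the publisher: pack the snapshot folder into a 7z archive (handles >2GB fine)
    7z a -t7z snapshot.7z "\\PUBSRV\repldata\unc\PUBSRV_SalesDB_SalesPub\*"

    REM At the subscriber: extract into the folder the agent reads via -AltSnapshotFolder
    7z x snapshot.7z -o"D:\LocalSnapshot"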

You’ll be amazed how much time this saves. I had one publication that used to take a week to complete initialization; after these changes it completed in a single day.

Things that didn’t work…

As a final note, I tested the -UseInprocLoader parameter of the Distribution Agent and the Merge Agent. This parameter causes the agent to use the BULK INSERT command when the snapshot is applied. I found no improvement in agent synchronization times using it. I also tried the -MaxBCPThreads parameter of the Snapshot Agent, Merge Agent, and Distribution Agent, which specifies the number of bulk copy operations that can be performed in parallel when the snapshot is created and applied. Once again I saw no improvement for transactional replication.
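If you want to test these in your own environment, both parameters are simply appended to the agent command line in the same way as -AltSnapshotFolder above (names and thread count here are illustrative):

    REM Appended to the Distribution Agent command line from the earlier example;
    REM no measurable gain in my tests, but worth checking on your own hardware
    distrib.exe -Publisher PUBSRV -PublisherDB SalesDB -Publication SalesPub ^
        -Distributor PUBSRV -Subscriber SUBSRV -SubscriberDB SalesDB ^
        -UseInprocLoader -MaxBCPThreads 4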
