Missing merge data! Why? Bulk Inserts!

We need to know why some data is missing at the subscriber, even though synchronization completes and no errors have been reported. One thing to check is whether someone has run a BULK INSERT statement.

For example, consider the statement below. It inserts data into a merge-replicated table and looks innocent enough:

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt');

However, if I run the following to see what is waiting to go to the subscriber, I see that there are no rows ready!

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output showing no pending rows]

By default the BULK INSERT statement doesn't fire triggers. Remember that merge replication adds insert/update/delete triggers to replicated tables in order to log all changes to them, so if the triggers are not fired, merge doesn't know about the change. There is an additional option we need to make sure the developers use: FIRE_TRIGGERS, as below.

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt', FIRE_TRIGGERS);

Now when we check the pending changes we see the row listed, and it will go to the subscriber on the next synchronization.

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output showing the pending insert]
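As a quick sanity check that the tracking triggers exist on the article in the first place, something like this lists them (a sketch, run in the publication database, assuming merge's default MSmerge_ trigger naming):

SELECT tr.name
FROM sys.triggers AS tr
WHERE tr.parent_id = OBJECT_ID(N'dbo.tCity')
  AND tr.name LIKE N'MSmerge[_]%';   -- the insert/update/delete tracking triggers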

I’ll do a separate post to explain how to fix this type of issue if it has already happened!


Why I don’t use a continuous schedule!

In replication there will always be a delay (post SQL 2000 it never uses a distributed transaction), so the choice for the agent job is a continuous schedule or a repeating one. I always use a repeating schedule for my jobs; if I need them to run quickly this could be every 5 minutes. One reason is the problematic accumulation of history records that are never removed.

Have a look below. I'm checking for the oldest record in the MSmerge_history table and it is 1st Jan 2017. However, manually running the cleanup procedure to remove all history older than 2 days shows 0 records deleted!
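For reference, this is roughly what I'm running (in the distribution database; sp_MShistory_cleanup, with its hour-based @history_retention parameter, is the proc the "Agent history clean up: distribution" job itself calls):

-- oldest merge agent history record
SELECT MIN([time]) AS oldest_history
FROM dbo.MSmerge_history;

-- manually remove everything older than 2 days (48 hours)
EXEC dbo.sp_MShistory_cleanup @history_retention = 48;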

So – the history records keep accumulating in the distribution database.

This is because the filter in the cleanup proc looks at the datetime in the session history. As the merge agent runs in one session, it continually updates the session time, so to the cleanup query the history records all appear to have just been created. Really this is a logic bug in the cleanup proc, but since it has always been there, either you let the history table get huge or you restart the merge agent. Remembering to do this is a pain, so that's one reason why I use a repeating schedule!
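For completeness, here's a minimal sketch of giving the agent job a 5-minute repeating schedule with msdb's sp_add_jobschedule (the job name below is a placeholder; yours will be the generated merge agent job name, and you would also remove the existing continuous/start-when-agent-starts schedule):

EXEC msdb.dbo.sp_add_jobschedule
    @job_name             = N'MyPublisher-MyDB-MyPublication-MySubscriber-1',
    @name                 = N'Every 5 minutes',
    @freq_type            = 4,    -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,    -- repeat in units of minutes
    @freq_subday_interval = 5,    -- every 5 minutes
    @active_start_time    = 0;    -- starting at midnight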

Missing merge data! Why? Fast Loads!

When looking at the subscriber we may find rows missing even though synchronization completes without errors. I mentioned BULK INSERTs as one possible cause. Here we look at another: "fast loads".

If the developers are using SSIS to insert data into a merge-replicated table, open their SSIS packages and have a look at the data flow task:

[Screenshot: SSIS data flow with an OLE DB Destination]

In particular, look at the properties of the OLE DB Destination:

[Screenshot: OLE DB Destination properties showing the Data access mode]

The default Data access mode is "Table or view - fast load". This works incredibly quickly, but it doesn't fire table triggers, and merge needs triggers to know what data to replicate. So tell the devs to use "Table or view" and data will get replicated just fine :). I'll do another post to explain what to do if the data is already messed up and you need to repair it.
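If you suspect a fast load has already slipped rows past the triggers, a query along these lines will show rows that merge has no tracking metadata for (a sketch, run in the publication database, assuming the default rowguid column name):

SELECT c.rowguid
FROM dbo.tCity AS c
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MSmerge_contents AS mc   -- merge change-tracking metadata
                  WHERE mc.rowguid = c.rowguid);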

Replication Backups

As DBAs we're always considering disaster recovery, business continuity, backups and so on. Replication is no different: we need a copy of the whole setup. Manually this can be done from SSMS by right-clicking the publication and choosing "Generate Scripts...".

This is OK if you remember to always do it when making changes.

In practice people forget or haven’t got the time.

So, the PowerShell script below will create a text file of the whole setup, and you can run it as a job. Easy!

########################################################################################################################
# Script created by:
# Date created:
# Changes:
# Purpose: Script creates the replication file for DR purposes
# Then it copies the file to the DR destination
# Example Usage: powershell -Command "C:\DBA\scripts\BackupReplicationScripts.ps1" -SourceReplicationFilename "\\servername\replication\Publications.sql" -DestinationReplicationFilename "\\servername2\replication\Publications.sql" -SQLServerWithPublications "mainserver,port" -ComputerWithPublications "mainserver"
########################################################################################################################

param(
    [Parameter(Position=0,Mandatory=$true)]
    [string]$SourceReplicationFilename,
    [Parameter(Position=1,Mandatory=$true)]
    [string]$DestinationReplicationFilename,
    [Parameter(Position=2,Mandatory=$true)]
    [string]$SQLServerWithPublications,
    [Parameter(Position=3,Mandatory=$true)]
    [string]$ComputerWithPublications
)

Try
{
    Write-Host "$SourceReplicationFilename"
    Write-Host "$DestinationReplicationFilename"
    Write-Host "$SQLServerWithPublications"
    Write-Host "$ComputerWithPublications"

    $ScriptBlockContent = {

        param ($SourceReplicationFilename, $SQLServerWithPublications, $ComputerWithPublications)

        # Reference the RMO assemblies
        [Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Rmo")
        [Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Replication")

        # Log an error both to the output file and to the console
        function errorhandler([string]$errormsg)
        {
            writetofile ("Replication Script Generator run at: " + (date)) $SourceReplicationFilename 1
            writetofile ("[Replication Script ERROR] " + $errormsg) $SourceReplicationFilename 0
            Write-Host ("[Replication Script ERROR] " + $errormsg) -ForegroundColor Red
        }

        # Append a line of text to the file, optionally preceded by a blank line
        function writetofile([string]$text, [string]$myfilename, [int]$cr_prefix)
        {
            if ($cr_prefix -eq 1) { "" >> $myfilename }
            $text >> $myfilename
        }

        # Create (or overwrite) the output file
        function initializefile([string]$myfilename)
        {
            "" > $myfilename
        }

        $repsvr = New-Object "Microsoft.SqlServer.Replication.ReplicationServer" $SQLServerWithPublications

        Clear-Host

        initializefile $SourceReplicationFilename

        # if we don't have any replicated databases then there's no point in carrying on
        if ($repsvr.ReplicationDatabases.Count -eq 0)
        {
            writetofile ("Replication Script Generator run at: " + (date)) $SourceReplicationFilename 0
            writetofile "ZERO replicated databases on $SQLServerWithPublications!!!" $SourceReplicationFilename 1
            EXIT
        }

        # similarly, if we don't have any publications then there's no point in carrying on
        [int] $Count_Tran_Pub = 0

        foreach ($replicateddatabase in $repsvr.ReplicationDatabases)
        {
            $Count_Tran_Pub = $Count_Tran_Pub + $replicateddatabase.TransPublications.Count
        }

        if ($Count_Tran_Pub -eq 0)
        {
            writetofile ("Replication Script Generator run at: " + (date)) $SourceReplicationFilename 0
            writetofile "ZERO Publications on $SQLServerWithPublications!!!" $SourceReplicationFilename 1
            EXIT
        }

        # if we got this far we know that there are some publications so we'll script them out
        # the $scriptargs controls exactly what the script contains
        # for a full list of the $scriptargs see the end of this script
        $scriptargs =
            [Microsoft.SqlServer.Replication.scriptoptions]::Creation `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludePublications `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludePublicationAccesses `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludeGo `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludeArticles `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludePublisherSideSubscriptions `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludeCreateSnapshotAgent `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludeSubscriberSideSubscriptions `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::EnableReplicationDB `
            -bor [Microsoft.SqlServer.Replication.scriptoptions]::IncludeCreateLogreaderAgent

        writetofile ("-- Replication Script Generator run at: " + (date)) $SourceReplicationFilename 0
        writetofile "-- PUBLICATIONS ON $SQLServerWithPublications" $SourceReplicationFilename 1
        writetofile "-- TRANSACTIONAL PUBLICATIONS ($Count_Tran_Pub)" $SourceReplicationFilename 1

        foreach ($replicateddatabase in $repsvr.ReplicationDatabases)
        {
            if ($replicateddatabase.TransPublications.Count -gt 0)
            {
                foreach ($tranpub in $replicateddatabase.TransPublications)
                {
                    Write-Host "--********************************************************************************" -ForegroundColor Blue
                    Write-Host ("--***** Writing to file script for publication: " + $tranpub.Name) -ForegroundColor Blue
                    Write-Host "--********************************************************************************" -ForegroundColor Blue
                    writetofile "--********************************************************************************" $SourceReplicationFilename 0
                    writetofile ("--***** Writing to file script for publication: " + $tranpub.Name) $SourceReplicationFilename 0
                    writetofile "--********************************************************************************" $SourceReplicationFilename 0
                    [string] $myscript = $tranpub.script($scriptargs)
                    writetofile $myscript $SourceReplicationFilename 0
                }
            }
        }
    }

    # generate the script on the publisher, then copy it to the DR destination
    Invoke-Command -ComputerName "$ComputerWithPublications" -ScriptBlock $ScriptBlockContent -ArgumentList $SourceReplicationFilename, $SQLServerWithPublications, $ComputerWithPublications

    Copy-Item $SourceReplicationFilename $DestinationReplicationFilename
}
Catch [Exception]
{
    Write-Host $_.Exception.Message -ForegroundColor Red
}
Finally
{
}
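To schedule it, a CmdExec job step along these lines does the trick (a sketch; the job name, paths and server names are placeholders to adjust for your environment):

USE msdb;

EXEC dbo.sp_add_job @job_name = N'DBA - Backup replication scripts';

EXEC dbo.sp_add_jobstep
    @job_name  = N'DBA - Backup replication scripts',
    @step_name = N'Script out publications',
    @subsystem = N'CmdExec',
    @command   = N'powershell -Command "C:\DBA\scripts\BackupReplicationScripts.ps1" -SourceReplicationFilename "\\servername\replication\Publications.sql" -DestinationReplicationFilename "\\servername2\replication\Publications.sql" -SQLServerWithPublications "mainserver,port" -ComputerWithPublications "mainserver"';

-- attach the job to this server; add a schedule with sp_add_jobschedule as shown earlier
EXEC dbo.sp_add_jobserver @job_name = N'DBA - Backup replication scripts';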


Dealing with large Snapshots

If you have poor network links (replicating to ships, like I used to), or you have large snapshots, or unluckily you have both together (been there!), you'll need to tweak things a bit. Here are some options I've used:

(1) Local Initialization

This is one of the biggest time-saving options. Chris Skorlinski explains the methodology here. If I can paraphrase: you run the Snapshot Agent on the publisher and this creates a whole load of files in a snapshot folder. Transfer the folder to the subscriber using Robocopy or FTP, then set the Distribution Agent to use the -AltSnapshotFolder parameter. Strangely, this can be done even if you have a push subscriber: you can run distrib.exe / replmerg.exe on the subscriber and sync up locally.
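If the agent runs as a push job on the distributor, one way to add the parameter is to append it to the "Run agent." job step (a hedged sketch; the job name, step number and snapshot path are placeholders, so check your own job before running anything like this):

DECLARE @cmd nvarchar(max);

-- step 2 is conventionally the "Run agent." step of a replication agent job
SELECT @cmd = s.command
FROM msdb.dbo.sysjobsteps AS s
JOIN msdb.dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE j.name = N'MyPublisher-MyDB-MyPublication-MySubscriber-1'
  AND s.step_id = 2;

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'MyPublisher-MyDB-MyPublication-MySubscriber-1',
    @step_id  = 2,
    @command  = @cmd + N' -AltSnapshotFolder "D:\LocalSnapshot"';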

(2) Use Compression

As a part of the above process, zip up the snapshot folder (use 7-Zip or similar if it is going to be >2GB), then unzip at the subscriber. There is an option on the publication's Snapshot tab to do compression, but this is really slow for large snapshots and a third-party tool performs much better.

You'll be amazed how much time this saves. I had one publication that took a week to complete initialization, and with this approach it completed in a single day.

Things that didn't work...

As a final note, I tested the UseInprocLoader parameter of the Distribution Agent and the Merge Agent. This parameter causes the agent to use the BULK INSERT command when the snapshot is applied. I found no improvement in agent synchronization times using it. I also tried the -MaxBCPThreads parameter of the Snapshot Agent, Merge Agent, and Distribution Agent, which specifies the number of bulk copy operations that can be performed in parallel when the snapshot is created and applied. Once again I saw no improvement for transactional replication.