Peer-to-Peer Update Conflicts – Be careful with the OriginatorID!!!

In Peer-to-Peer Transactional Replication we can enable conflict detection and, optionally, allow replication to continue after a conflict rather than stopping the Distribution Agent.
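Both options can be switched on with sp_configure_peerconflictdetection. The call below is only a sketch – the publication name is made up and I'd double-check the parameter list in Books Online before relying on it:

-- Enable conflict detection for the peer-to-peer publication and let replication
-- carry on after logging a conflict (publication name is made up).
exec sp_configure_peerconflictdetection
    @publication = N'P2P_Cities',
    @action = N'enable',
    @originator_id = 100,              -- this node's OriginatorID, unique across the topology
    @continue_onconflict = N'true';    -- log the conflict and continue rather than stop the Distribution Agent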

Really, changes to the data should be partitioned between the nodes so that conflicts are not possible in the first place. Not everyone sets it up this way, though, so a rudimentary conflict resolution mechanism is provided for us to use. (A simple way of fencing off each node's inserts is sketched below.)
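One common way of doing that partitioning is to give each node its own key range and enforce it with a check constraint marked NOT FOR REPLICATION, so it only applies to local user activity and not to rows arriving from the other peers. The CityID column and the ranges here are purely illustrative:

-- At the first node: local inserts may only use CityID values 1 to 99999.
-- NOT FOR REPLICATION means the constraint is ignored when the replication
-- agents apply rows that originated at other nodes.
ALTER TABLE dbo.tCity
    ADD CONSTRAINT CK_tCity_LocalRange
    CHECK NOT FOR REPLICATION (CityID BETWEEN 1 AND 99999);

-- At the second node the equivalent constraint would cover, say, 100000 to 199999.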

For an update-update conflict we’ll see an entry in the conflict viewer in a format like the one below. One node skips the incoming change, i.e. preserves its own value, while the other applies the update and so has its value overwritten.

[Screenshot: conflict viewer entry for the update-update conflict between peer 100 and peer 1]

In the case above, both records end up as “Madrid” and the data is back in sync. But how was this decided? The thing to notice is the peer numbers in the message above: “peer 100” and “peer 1”.

We can see which node holds which value by looking at the topology:

…and the number 1 was allocated when we set up the subscriptions:

When a conflict is detected, the row that originated at the node with the highest OriginatorID wins. A value of 100 is assigned to the Publisher by default, and, as above, we left the default of 1 at the first subscriber.
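If you're not sure which ID each node ended up with, sp_help_peerconflictdetection reports the conflict detection settings for a peer-to-peer publication, including the originator IDs (the publication name below is made up):

-- Show the conflict detection configuration, including each node's OriginatorID.
exec sp_help_peerconflictdetection @publication = N'P2P_Cities';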

Now – here’s the point – the Publisher in this scenario will always beat the first subscriber. And with the default, incrementing IDs, the second subscriber will always win against the first, the third against the second, and so on. This is not something we can change afterwards.

So, we need to decide which nodes are the most important before we set this up!

Missing merge data! Why? Bulk Inserts!

We need to know why some data is missing at the subscriber, even though synchronization has completed and no errors have been reported. One thing to check is whether someone has run a BULK INSERT statement.

For example, consider the statement below. It inserts data into a merge-replicated table and looks innocent enough:

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt');

However, if I run the following to see what is waiting to go to the subscriber, I see that there are no rows ready!

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output – no rows pending]
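With no parameters sp_showpendingchanges reports everything pending from this node. If I've remembered the optional parameters correctly it can also be narrowed down to a particular subscriber and article – the server, publication and article names below are just examples:

-- Narrow the check down to one subscriber and one article (names are examples only).
exec sp_showpendingchanges
    @destination_server = N'SUBSCRIBER1',
    @publication = N'MergeCities',
    @article = N'tCity';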

 

By default the BULK INSERT statement doesn’t fire triggers. Remember that merge replication adds insert/update/delete triggers to replicated tables in order to log all changes to them, so if those triggers don’t fire, merge replication never knows about the change. There is an additional option we need to make sure the developers use: FIRE_TRIGGERS, as below (and we can confirm the merge triggers really are on the table – see the query after the corrected statement).

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt', FIRE_TRIGGERS);
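As an aside, it's easy to confirm that the merge tracking triggers really are on the table – merge replication names them MSmerge_ins_/MSmerge_upd_/MSmerge_del_ followed by a GUID:

-- List the tracking triggers merge replication has added to the article table.
SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.tCity')
  AND t.name LIKE 'MSmerge[_]%';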

Now when we check the pending changes we can see the insert listed, and it will go to the subscriber at the next synchronization.

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output – the bulk-inserted rows now pending]
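If you prefer to look under the covers rather than use the stored procedure, the same information is in the merge tracking metadata. A rough sketch, assuming the standard MSmerge_contents and sysmergearticles tables and the tCity article:

-- Count the inserts/updates merge replication has logged for the tCity article
-- (deletes are tracked separately in MSmerge_tombstone).
SELECT a.name AS article, COUNT(*) AS tracked_rows
FROM dbo.MSmerge_contents AS c
JOIN dbo.sysmergearticles AS a
    ON a.nickname = c.tablenick
WHERE a.name = 'tCity'
GROUP BY a.name;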

I’ll do a separate post to explain how to fix this type of issue if it has already happened!