“Keep existing table unchanged” – why would I ever want to use this?

There is an option on the article properties to “keep existing object unchanged”.

This seems a little odd at first. Surely we always drop all subscriber objects when we reinitialize? Well, for a particular table we might have something different in mind. It is a real edge case, but there is a scenario this is designed for: multiple publishers publishing to a single subscriber. The table is created on the subscriber by the first publisher during initialization, and that is when the replication objects are created at the subscriber. Subsequent publishers send their commands and records to that table but leave it in place. This can be used for centralised reporting, for example from several municipal offices to a head office. Any DDL changes, such as adding a column, are fine – we make them on the first publisher (the one which did the initial drop and recreate at the subscriber) so the DDL change flows to the subscriber.
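If you script the publication rather than use the GUI, this option corresponds (as far as I can tell) to the @pre_creation_cmd parameter of sp_addarticle, whose default is 'drop'. A minimal sketch for the second and subsequent publishers, using made-up publication and table names:

-- On the second (and any later) publisher: don't drop the table the first
-- publisher already created at the subscriber, just start sending commands to it.
EXEC sp_addarticle
    @publication = 'pubOffice2',        -- hypothetical publication name
    @article = 'tSales',                -- hypothetical table
    @source_object = 'tSales',
    @pre_creation_cmd = 'none';         -- the "keep existing object unchanged" option
GO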

When initializing do I need to drop FKs at the subscriber?

This is an interesting question. In the publication properties there is the option on the snapshot tab to run extra scripts before and after the snapshot is applied. This is the same for both merge and transactional publications (below).

Many DBAs have scripts which drop all the FKs on the subscriber and re-add them after the snapshot is applied, so the initialization runs smoothly and we don’t get the following sort of error:

Could not drop object 'dbo.tCity' because it is referenced by a FOREIGN KEY constraint.

However, snapshot generation differs between transactional and merge. In transactional, all the FKs at the subscriber are dropped for you and re-added later on. This doesn’t happen for merge. There’s probably a good reason for it, but I can’t see why they should behave differently. Anyway, the message is that you don’t need to roll your own logic to deal with subscriber FKs in transactional, but you still do in merge!
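For merge, the usual approach is a "pre-snapshot" script that drops the subscriber FKs and a "post-snapshot" script that re-creates them. As a rough sketch only (it drops every FK in the subscriber database, so script out the definitions beforehand so the post-snapshot script can put them back):

-- Pre-snapshot sketch: build and run DROP statements for all foreign keys
-- in the subscriber database. Capture the definitions first so the
-- post-snapshot script can re-create them.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(fk.schema_id))
    + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
    + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(13)
FROM sys.foreign_keys AS fk;

EXEC sys.sp_executesql @sql;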

Adding an article – do I really want a complete snapshot?

Have you ever noticed that when you add an article to a transactional publication and run the snapshot agent, it does a complete snapshot of all tables? This can be a real pain for big publications. Fear not – it is configurable. The setting is visible in the Replication Monitor – actually two settings, shown below. We need both to be False, but the observant DBA will notice that the main one of these is greyed out!

 

A little scripting solves it though. We just run the script below before adding a table and all is well – the snapshot agent then only generates a snapshot for the new article.

EXEC sp_changepublication
    @publication = 'pubTestTransactional',
    @property = 'allow_anonymous',
    @value = 'false'
GO

EXEC sp_changepublication
    @publication = 'pubTestTransactional',
    @property = 'immediate_sync',
    @value = 'false'
GO
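Once those two properties are false, adding the article is just the usual call followed by a run of the snapshot agent. The sketch below uses a made-up table name; depending on whether your subscriptions are push or pull you may also need sp_addsubscription or sp_refreshsubscriptions for the new article before snapshotting.

-- Add the new article; with immediate_sync off, the next snapshot only
-- contains this article rather than the whole publication.
EXEC sp_addarticle
    @publication = 'pubTestTransactional',
    @article = 'tNewTable',             -- hypothetical new table
    @source_object = 'tNewTable';
GO

-- Kick off the snapshot agent for the publication.
EXEC sp_startpublication_snapshot @publication = 'pubTestTransactional';
GO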

How to see all pending merge changes

There is a handy system stored procedure, sp_showpendingchanges, that provides an “approximation” of the pending changes waiting to be replicated at that database. The proc lists inserts/updates/deletes per article and can also filter by subscriber server. It has been around since SQL 2008 and is a welcome addition. It would be nicer if it were surfaced in the Replication Monitor like the equivalent for transactional replication, though.

You can run the procedure without any arguments in which case it provides a summary.
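For example:

-- Run in the database whose pending changes you want to see.
EXEC sp_showpendingchanges;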

You can also provide all arguments as below to look at a specific subscriber and table and see the rows waiting to be sent.

exec sp_showpendingchanges @destination_server = 'DESKTOP-NSONKMO'
    , @publication = 'Testuse_partition_groups'
    , @article = 'tCompany'
    , @show_rows = 1

However, if the subscriber database name is the same as the publisher’s, nothing gets returned. Not great, as this is a common setup!

In this case you’ll need to roll your own script using something like the code I created below:

SELECT tCompany.*
FROM tCompany
INNER JOIN MSmerge_contents
    ON tCompany.rowguid = MSmerge_contents.rowguid
INNER JOIN MSmerge_genhistory
    ON MSmerge_contents.generation = MSmerge_genhistory.generation
WHERE MSmerge_genhistory.genstatus = 0
    AND MSmerge_genhistory.generation >
        (SELECT recgen FROM sysmergesubscriptions
         WHERE subscriber_server = 'DESKTOP-NSONKMO\paulsinst')

PreComputePartitions merge option should come with a health warning!

In merge replication we often replicate several related tables. Imagine if we were replicating the following 3 tables:

[Diagram: the three tables tCompany, tDepartment and tStaff and their foreign key relationships]

Now it might be that we declare a filter and join filters as follows in the publication:

[Screenshot: the parameterized row filter and join filters defined on these tables in the publication]

Notice that the relationship of the filters is opposite to the direction of the foreign keys. This makes business sense in some cases (another post on this later). Anyway, if this is the sort of scenario you have, beware that some data changes won’t propagate. If, for example, I delete 3 related records on the subscriber:

delete tStaff where staffname = 'PaulSmith'
delete tDepartment where departmentname = 'Finance'
delete tCompany where CompanyName = 'IBM'

When synchronizing back to the publisher, the tCompany record will not be deleted. There is no error in the replication monitor, but there is mention of a “retry”. Further syncs don’t mention the retry, and still the tCompany record remains. This is a bug/issue that has existed from SQL 2005 through to SQL 2016. What we need to do is reset the Precompute Partitions option: by default it is set to True and we reset it to False. This will cause a new snapshot to be created, so if you are going to have this type of filter setup, remember to do this at the beginning, before initialization.
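If you prefer to do it by script, the GUI’s Precompute Partitions setting maps (as I understand it) to the use_partition_groups publication property, so something like the sketch below; the force flags are there because the change invalidates the snapshot:

-- Turn off precomputed partitions; do this before initialization if possible
-- as it invalidates the snapshot and forces subscriptions to be reinitialized.
EXEC sp_changemergepublication
    @publication = 'Testuse_partition_groups',
    @property = 'use_partition_groups',
    @value = 'false',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;
GO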

Fixing missing merge rows

Suppose there are some rows at the publisher which are not at the subscriber. Or vice versa. This is after a successful synchronization….

Other posts have dealt with how this can occur, and how we can stop it happening, but suppose it has happened – what to do?

Well, there is a nice stored procedure which comes to our rescue: sp_mergedummyupdate. We just need the name of the table and the rowguid of the missing row.

exec sp_mergedummyupdate @source_object = 'tCity', @rowguid = '724FEE04-F8DB-E611-9BE5-080027A1E9BF'

It is very robust – if we give it the rowguid of a non-missing row there will just be an update propagated and we won’t end up with duplicates. Likewise if we try running it more than once there won’t be an issue.

Just run the proc at whichever node (publisher or subscriber) has the rows that are missing at the other, and then synchronize – job done!
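If you first need to find the rowguids of the missing rows, a quick sketch is to compare the rowguid columns of the two copies (here via a hypothetical linked server called SUBSCRIBER1 pointing at the subscriber) and then feed each result into sp_mergedummyupdate:

-- Rows present at the publisher but missing at the subscriber.
-- SUBSCRIBER1 is a hypothetical linked server pointing at the subscriber.
SELECT rowguid
FROM testmergedb.dbo.tCity
EXCEPT
SELECT rowguid
FROM SUBSCRIBER1.testmergedb.dbo.tCity;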

 

Peer-to-Peer Update Conflicts – Be careful with the OriginatorID!!!

In Peer-to-Peer transactional replication we can enable conflict detection and also choose to continue replicating after a conflict.

Really, changes to the data at different nodes should be partitioned so conflicts are not possible, but not everyone sets it up this way, so there is a rudimentary conflict resolution mechanism in place for us to use.
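If you are scripting rather than using the topology wizard, conflict detection (and the originator ID we’ll come to in a moment) is configured per node with sp_configurepeerconflictdetection. A sketch with a made-up publication name:

-- Enable conflict detection on this node and carry on replicating when a
-- conflict is detected; the row from the node with the higher originator ID wins.
EXEC sp_configurepeerconflictdetection
    @publication = 'pubP2P',            -- hypothetical publication name
    @action = 'enable',
    @originator_id = 100,               -- pick this deliberately for each node
    @continue_onconflict = 'true';
GO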

For an update-update conflict we’ll see mention of it in the conflict viewer, in a format like the one below. One node skips the conflict, i.e. preserves its value, while the other applies the update and so overwrites its value.

 

In the case above, both records end up as “Madrid” and the data is in sync. But how was this decided? The point to notice is the two peer numbers, “peer 100” and “peer 1”.

We can see which node has which value when looking at the topology:

…and the number 1 was allocated when we set up the subscriptions:

In a conflict, the row that originated at the node with the highest OriginatorID wins. The value of 100 is assigned to the publisher by default, and as above we left the default of 1 at the first subscriber.

Now – here’s the point – the publisher in this scenario will always beat the first subscriber. The second subscriber always wins against the first, the third always beats the second, and so on. This is not something we can change afterwards.

So, we need to decide which nodes are the most important before we set this up!

Missing merge data! Why? Bulk Inserts!

We need to know why some data is missing at the subscriber, after a synchronization in which no errors were reported. One thing to check is whether someone has run a BULK INSERT statement.

For example, consider the statement below. It inserts data into a merge-replicated table and looks innocent enough:

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt');

However, if I run the following to see what is waiting to go to the subscriber, I see that there are no rows ready!

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output showing no pending inserts]

 

By default the BULK INSERT statement doesn’t fire triggers, and remember that merge replication adds insert/update/delete triggers to replicated tables in order to log all changes to them, so if the triggers aren’t fired merge doesn’t know about the change. There is an additional option we need to make sure the developers use: FIRE_TRIGGERS, as below.

BULK INSERT testmergedb..tCity
FROM 'C:\Snapshots\SSISMerge\Cities\Cities.dat'
WITH (FORMATFILE = 'C:\Snapshots\SSISMerge\Cities\cities.fmt', FIRE_TRIGGERS);

Now when we check the pending changes we see the insert there, and it will make its way to the subscriber.

exec sp_showpendingchanges

[Screenshot: sp_showpendingchanges output showing the pending insert]
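If you’d rather check at the table level than via the proc, remember that the merge triggers record each change in MSmerge_contents, so a quick sketch like this shows any rows in the table that merge never captured:

-- tCity rows that merge replication has not tracked; bulk-inserted rows
-- that bypassed the triggers will show up here.
SELECT c.*
FROM testmergedb.dbo.tCity AS c
LEFT JOIN testmergedb.dbo.MSmerge_contents AS mc
    ON mc.rowguid = c.rowguid
WHERE mc.rowguid IS NULL;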

I’ll do a separate post to explain how to fix this type of issue if it has already happened!