We executed the migration of stand-alone database servers to an active/active cluster for our enterprise-wide reporting solution, which includes a scaled-out deployment of Reporting Services behind an F5 BIG-IP load balancer.
I/O challenges:
We migrated the Distributor instance over and needed to re-snapshot several tables; we had planned for 36 hours of downtime. Small tables were copied over quickly, but we struggled with the large ones. IO_COMPLETION and ASYNC_NETWORK_IO were the top waits during the bulk copy. In this scenario, data was brought over to the Distributor instance and then bulk copied onto the Subscriber. Both instances reside on the same node, each with its own properly assigned storage on a Dell EqualLogic SAN.
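The top waits can be confirmed with a quick look at `sys.dm_os_wait_stats`; a generic sketch (not necessarily the exact query we ran) looks like this:

```sql
-- Top waits accumulated since the last instance restart (or since the
-- stats were cleared). IO_COMPLETION and ASYNC_NETWORK_IO near the top
-- point at disk and network/client pressure respectively.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```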
The first thing we attempted was to change the link negotiation from auto to 1000/full. The change was made and we saw no improvement. All instances bounced immediately after the change, which was a surprise, as we never thought that changing the configuration of one NIC would affect the database instances. Neither the heartbeat nor the iSCSI NICs were the ones affected.
We continued with the bulk copy until we found that a 110 GB table was taking extremely long to complete. I decided to exclude the largest tables from replication and take a different approach for them (backup, restore, batch insert and partition switch-in; I might write about this in another blog post).
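Dropping an article from a transactional publication can be sketched with the standard replication stored procedures; the publication and article names below are hypothetical, not the ones from our environment:

```sql
-- Drop the subscriptions on the article first, then the article itself.
-- @force_invalidate_snapshot acknowledges that the existing snapshot
-- is no longer valid for this publication.
EXEC sp_dropsubscription
    @publication = N'MyPublication',   -- assumed name
    @article     = N'MyLargeTable',    -- assumed name
    @subscriber  = N'all';

EXEC sp_droparticle
    @publication = N'MyPublication',
    @article     = N'MyLargeTable',
    @force_invalidate_snapshot = 1;
```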
We continued with the remaining objects and declared success on a Sunday at 10 PM. Everything looked good except for four of the very large tables that were not included due to the time required to copy them over.
[Chart: I/O pending requests (ms)]
Re-Snapshot and subsequent Rollback:
Something interesting happened: a bulk copy was rolling back on Monday morning, and the rollback took around 8 hours to complete. The log file grew to 125 GB and hit the storage limit. We were not sure why, as the table had been bulk copied successfully the night before. We removed the table from replication and decided to continue troubleshooting the I/O problems.
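Log growth like this is easy to keep an eye on with the documented DBCC command:

```sql
-- Reports log size and percent used for every database on the instance;
-- useful for watching a long rollback chew through the log.
DBCC SQLPERF (LOGSPACE);
```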
I/O resolution:
I was able to reproduce the I/O symptoms at the OS level by copying files larger than 4.4 GB. I would see rates of 15 MB/s instead of the 500 MB/s I was seeing with smaller files. We failed over, expecting to isolate the issue to one node, but the issue reoccurred. We then upgraded the NIC drivers to the latest version, published a few weeks earlier, but saw no improvement. The SAN admin then found an article about disabling Windows Scaling heuristics and decided to give it a try.
```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters
EnableWsd = 0 (DWORD)
```
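For reference, the same registry change can be scripted with `reg.exe` (a Windows-only command; the setting takes effect after the node is rebooted):

```
reg add "HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Parameters" /v EnableWsd /t REG_DWORD /d 0 /f
```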
The registry was updated, the nodes were bounced (one at a time) and the tests were re-run. No more I/O contention, and replication was flowing with just a few ms of I/O pending requests instead of the 200 to 400 ms we were hitting before.
```sql
SELECT
    t1.database_id,
    t1.file_id,
    t1.io_stall,
    t2.io_pending_ms_ticks,
    t2.scheduler_address
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS t1
INNER JOIN sys.dm_io_pending_io_requests AS t2
    ON t1.file_handle = t2.io_handle;
```
Re-Snapshot and subsequent Rollback, again:
I/O was fixed, so we now needed to re-snapshot the missing table that had rolled back. We did so on Tuesday night; everything went well, and then on Wednesday morning a rollback was pending. We had again hit the 125 GB storage limit for the log. I asked the SAN admin to extend it to 200 GB, asked one of the DBAs to exclude the table from replication, and let the undelivered commands flow to the Subscriber.
Undelivered Transactions:
The rollback took another 6 hours to complete, and at that point I was just waiting for replication to catch up. 30 minutes later, nothing had happened. We waited 30 more minutes, and still nothing. The jobs were bounced and SQL Server Agent was restarted on both the Distributor and the Subscriber, to no effect. I asked a question on Twitter and it was suggested that I call Microsoft. Before I did, I called a friend of mine, Anthony Sammartino (Twitter | Blog), and left a message. While I was calling Microsoft, Anthony called me back and explained that there might be errors with DML statements attempted on the Subscriber. He discussed it with one of the DBAs, and we were able to get replication going by changing the Distribution Agent profile. We had over 5 million undelivered commands pending.
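The undelivered-command backlog can be watched from the distribution database; a minimal sketch, assuming the default database name `distribution`:

```sql
-- MSdistribution_status is a view in the distribution database that
-- tracks delivered vs. undelivered commands per Distribution Agent.
USE distribution;

SELECT
    agent_id,
    SUM(UndelivCmdsInDistDB) AS undelivered_commands,
    SUM(DelivCmdsInDistDB)   AS delivered_commands
FROM dbo.MSdistribution_status
GROUP BY agent_id
ORDER BY undelivered_commands DESC;
```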
Stabilization of the Environment:
By 8 PM we were up to date with all tables except the one whose bulk copy had rolled back. I decided to stop making changes to the environment for the long Thanksgiving weekend and made sure it was stable enough to continue working on the following Monday.
Re-Snapshot and subsequent Rollback resolution:
We figured out that a row filter was raising an exception, which caused the Publisher to reinitialize replication for the object and issue a re-snapshot. The article had been bulk copied and then rolled back several times during the night. The filter was defined as column NOT LIKE '%value%'.
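For illustration, a row filter of that shape would be attached to an article with `sp_articlefilter`; every name below is hypothetical:

```sql
-- Hypothetical sketch of the kind of row filter involved. A NOT LIKE
-- predicate like this is evaluated per row when the snapshot is built.
EXEC sp_articlefilter
    @publication   = N'MyPublication',     -- assumed name
    @article       = N'MyLargeTable',      -- assumed name
    @filter_name   = N'FLTR_MyLargeTable', -- assumed name
    @filter_clause = N'SomeColumn NOT LIKE ''%value%''',
    @force_invalidate_snapshot = 1;
```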
We decided to remove the filter and re-snapshot the table today. 15 minutes later, 60 million rows had been inserted into the Subscriber. No issues so far.
Things are now looking much better.