2010 came by pretty quickly. It has been a very busy year with all kinds of expansions, migrations, and consolidations. We installed HP blade servers, VMware, SQL Server 2008 R2, Windows Server 2008 R2, CentOS 4.2 x64, F5s, Dell R710s/R610s, Dell EqualLogic storage, TFS 2010, Windows 7 x64 for all DBAs and BI developers, etc. The list continues.
So what is happening next year? Many exciting things, with a core focus on HA, DR, analytics, SharePoint 2010, and VMware on production databases.
On the Database side:
Homogeneous environment. All instances running under SQL Server 2008 R2 (some edition downgrades in the mix)
Isolated Development, QA, and UAT environments with the proper editions (Developer, Standard, Enterprise). VMware and MSDN come into play.
Management Data Warehouse deployment to collect health information from all production instances. A bird's-eye view of all instances in one location.
Central Management Servers Implementation
Policy Based Management Implementation
Data-Tier Application Implementation
RML utilities for pre-production load testing
On the Analytics side:
All legacy cubes migrated to SQL Server Analysis Services 2008 R2
Deployment of Star Schemas for the remaining business units
Dedicated instance for transformations (SSIS)
Fire up the 3rd and 4th VMs for our scaled-out deployment of Reporting Services
Mastering DAX. PowerPivot for Excel 2010
GUI implementation for browsing our cubes. We are in the process of evaluating products.
High Availability & Disaster Recovery:
All instances to be moved to HA (2-, 3-, and 4-node clusters). No more single points of failure.
Zero-Downtime database migration with Mirroring
Block Level Backup and restore offsite
Full, log, and daily differential backups across the board.
Backup file testing by restoring to our UAT environments
Footprint Reduction by archiving historical data that can be rolled up
Storage Expansion to benefit from the spindle count
SSD for the most demanding applications (over iSCSI)
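To make the backup scheme above concrete, here is a minimal T-SQL sketch of the full/log/daily-differential rotation, using a hypothetical database name and backup share (the real jobs would be scheduled through SQL Server Agent):

```sql
-- Periodic full backup (database name and paths are made up for illustration)
BACKUP DATABASE SalesDB
TO DISK = N'\\backupshare\SalesDB_full.bak'
WITH CHECKSUM, INIT;

-- Daily differential, capturing changes since the last full
BACKUP DATABASE SalesDB
TO DISK = N'\\backupshare\SalesDB_diff.bak'
WITH DIFFERENTIAL, CHECKSUM, INIT;

-- Frequent log backups to keep the log chain (and point-in-time recovery)
BACKUP LOG SalesDB
TO DISK = N'\\backupshare\SalesDB_log.trn'
WITH CHECKSUM, INIT;

-- Quick sanity check of a backup file; a restore to UAT is the real test
RESTORE VERIFYONLY
FROM DISK = N'\\backupshare\SalesDB_full.bak'
WITH CHECKSUM;
```

RESTORE VERIFYONLY only validates the backup media; as noted above, actually restoring into UAT is what proves the backup is usable.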
I am encouraging my team members to use Twitter, to follow knowledgeable folks, and to read tweets from the #sqlserver and #sqlhelp hashtags.
We executed the migration of stand-alone database servers to an active/active cluster. This was for our enterprise-wide reporting solution, which includes a scaled-out deployment of Reporting Services with an F5 BIG-IP as the load balancer.
We migrated over the Distributor instance and needed to re-snapshot several tables; we planned for a 36-hour downtime. Small tables were copied over quickly, but we struggled with the large ones. IO_COMPLETION and ASYNC_NETWORK_IO were the top wait types during the bulk copy. In this scenario, data was brought over to the Distributor instance and then bulk copied onto the subscriber. Both instances reside on the same node, each with its own storage assigned on the Dell EqualLogic SAN.
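Checking the top wait types comes down to querying the sys.dm_os_wait_stats DMV; a sketch along these lines (the exclusion list of benign waits here is illustrative, not exhaustive):

```sql
-- Top wait types accumulated since the last stats clear or restart
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

In our case IO_COMPLETION and ASYNC_NETWORK_IO dominating this list pointed at the I/O and network paths rather than the database engine itself.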
The first thing we attempted was to change the link negotiation from auto to 1000/full. The change made no difference, but all instances bounced immediately after it was applied, which surprised us: we never thought that changing the configuration of one NIC would affect the database instances. The NIC we changed was neither the heartbeat nor the iSCSI NIC.
We continued with the bulk copy until we found that a 110 GB table was taking extremely long to complete. I decided to exclude the largest tables from replication and handle them with a different approach (backup, restore, batch insert, and partition switch-in; I might write about this in another blog post).
We continued with the remaining objects and declared success on a Sunday at 10 PM. Everything looked good except for the four very large tables that had been excluded due to the time required to copy them over.
[Chart: I/O pending requests (ms)]
Re-Snapshot and subsequent Rollback:
Something interesting happened: a bulk copy was rolling back on Monday morning, and it took around 8 hours to complete. The log file grew to 125 GB and hit the storage limit. We were not sure why, as the table had been bulk copied successfully the night before. We removed the table from replication and decided to continue troubleshooting the I/O problems.
I was able to reproduce the I/O symptoms at the OS level by copying files larger than 4.4 GB: I would see rates of 15 MB/s instead of the 500 MB/s I was seeing with smaller files. We failed over expecting to isolate the issue to one node, but the issue reoccurred. We upgraded to the latest NIC drivers, published a few weeks earlier, but saw no improvement. The SAN admin then found an article about disabling Windows TCP window scaling heuristics and decided to give it a try.
The registry was updated, the nodes were bounced (one at a time), and the tests were re-run. No more I/O contention, and replication was flowing with a few ms of I/O pending requests instead of the 200 to 400 ms we were hitting before.
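For reference, on Windows Server 2008/2008 R2 the same setting can be toggled from an elevated command prompt with netsh, which is equivalent to the registry change we applied:

```
netsh interface tcp set heuristics disabled
netsh interface tcp show heuristics
```

The second command confirms the current state. As with our change, expect connectivity-dependent services to be affected, so schedule it per node.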
With I/O fixed, we needed to re-snapshot the missing table that had rolled back. We did so on a Tuesday night; everything went well, and then on Wednesday morning a rollback was pending. We had again hit the 125 GB storage limit for the log. I asked the SAN admin to extend it to 200 GB, and asked one of the DBAs to exclude the table from replication and let the undelivered commands flow to the subscriber.
The rollback took another 6 hours to complete, and at that point I was just waiting for replication to catch up. 30 minutes later, nothing had happened. We waited 30 more minutes: still nothing. Jobs were bounced and SQL Server Agent was restarted on the Distributor and the Subscriber, with no effect. I asked a question on Twitter and it was suggested that I call Microsoft. Before I did, I called a friend of mine, Anthony Sammartino (Twitter | Blog), and left a message. While I was calling Microsoft, Anthony called me back and explained that there might be errors with DMLs attempted on the subscriber. He discussed it with one of the DBAs, and we were able to get replication going by changing the Distribution Agent profile. We had over 5 million undelivered commands pending.
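Watching that backlog of undelivered commands drain can be done with a query against the distribution database, along these lines (a sketch; the columns come from the MSdistribution_status view):

```sql
USE distribution;

-- Undelivered vs. delivered commands per Distribution Agent and article
SELECT agent_id,
       article_id,
       UndelivCmdsInDistDB,
       DelivCmdsInDistDB
FROM dbo.MSdistribution_status
ORDER BY UndelivCmdsInDistDB DESC;
```

The agent profile change itself was made through Replication Monitor; SQL Server ships a built-in "Continue on data consistency errors" profile for exactly the kind of subscriber-side DML errors we were hitting.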
Stabilization of the Environment:
By 8 PM we were up to date on all tables except the one whose bulk copy had rolled back. I decided to stop making changes to the environment over the long Thanksgiving weekend and made sure it was stable enough to continue working on the following Monday.
Re-Snapshot and subsequent Rollback resolution:
We figured out that a filter on the article was raising an exception, which caused the publisher to reinitialize replication for that object and issue a re-snapshot. It had been performing bulk copies and then rolling them back several times during the night. The filter was defined as column NOT LIKE '%value%'.
We decided to remove the filter and re-snapshot the table today. 15 minutes later, 60 million rows were inserted into the subscriber. No issues so far.
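Removing a horizontal filter from a transactional article can be sketched with the replication stored procedures below (publication and article names here are made up; @force_invalidate_snapshot acknowledges that the existing snapshot becomes invalid):

```sql
-- Drop the article's row filter (names are hypothetical)
EXEC sp_articlefilter
    @publication = N'MyPublication',
    @article = N'MyLargeTable',
    @filter_name = NULL,
    @filter_clause = NULL,
    @force_invalidate_snapshot = 1;

-- Regenerate the article's synchronization view without the filter
EXEC sp_articleview
    @publication = N'MyPublication',
    @article = N'MyLargeTable',
    @filter_clause = NULL,
    @force_invalidate_snapshot = 1;
```

After that, re-running the Snapshot Agent for the publication generates the unfiltered snapshot, which is what got our 60 million rows across cleanly.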