I/O Issues on a newly deployed SQL Server cluster attached to a SAN [Windows 2008 R2]

We executed the migration of our stand-alone database servers to an active/active cluster. This was for our enterprise-wide reporting solution, which includes a scaled-out deployment of Reporting Services load balanced by an F5 BIG-IP.

I/O challenges:

We migrated over the Distributor instance and needed to re-snapshot several tables; we planned for a 36-hour downtime. Small tables were copied over quickly, but we struggled with the large ones. IO_COMPLETION and ASYNC_NETWORK_IO were the top wait types during the bulk copy. In this scenario, data was brought over to the Distributor instance and then bulk copied onto the Subscriber. Both instances reside on the same node, each with its own assigned storage on a Dell EqualLogic SAN.
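
To see which wait types dominated while the bulk copy was running, we looked at the server-wide wait statistics. The query below is only a minimal sketch of that kind of check; the exclusion list is illustrative, not the exact filter we ran:

-- Top waits accumulated since the last restart (or since the stats were cleared).
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;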

The first thing we attempted was to change the link negotiation from auto to 1000/full. The change was made and we saw no difference. All instances bounced immediately after the change, which was a surprise, as we never thought that changing the configuration of one NIC would affect the database instances. The affected NIC was neither the heartbeat nor the iSCSI NIC.

We continued with the bulk copy until we found that a 110 GB table was taking extremely long to complete. I decided to exclude the largest tables from replication and take a different approach for them (backup, restore, batch insert and partition switch-in; I might write about this in another blog post).

We continued with the remaining objects and declared success on a Sunday at 10 PM. Everything looked good except for the four very large tables that were not included due to the time required to copy them over.

[Charts: I/O pending requests (ms)]

Re-Snapshot and subsequent Rollback:

Something interesting happened. A bulk copy was rolling back on Monday morning, and the rollback took around 8 hours to complete. The log file grew to 125 GB and hit its storage limit. We were not sure why, as the table had been bulk copied successfully the night before. We removed the table from replication and decided to continue troubleshooting the I/O problems.
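
For reference, this is roughly how log growth can be watched while a large bulk copy or rollback is in flight; a simple sketch, not the exact monitoring we had in place (the database name below is illustrative):

-- Log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Why the log of a given database cannot be truncated right now.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'distribution';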

I/O resolution:

I was able to reproduce the I/O symptoms at the OS level by copying files larger than 4.4 GB. I would see rates of 15 MB/s instead of the 500 MB/s I was seeing with smaller files. We failed over, expecting to isolate the issue to one node, but the issue reoccurred. We upgraded the NIC drivers to the latest version, published a few weeks earlier, but saw no improvement. The SAN admin then found an article about disabling the Windows TCP window scaling heuristics and decided to give it a try.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters
EnableWsd = 0

The registry was updated, the nodes were bounced (one at a time) and the tests were re-run. No more I/O contention, and replication was flowing with a few ms of I/O pending requests instead of the 200 to 400 ms we were hitting before. The query we used to look at pending I/O requests per database file is below.

-- Pending I/O requests per database file, with the cumulative stall time for that file.
select
t1.database_id,
t1.file_id,
t1.io_stall,
t2.io_pending_ms_ticks,
t2.scheduler_address
from sys.dm_io_virtual_file_stats(NULL, NULL) AS t1
INNER JOIN sys.dm_io_pending_io_requests AS t2 ON (t1.file_handle = t2.io_handle);

Re-Snapshot and subsequent Rollback, again:

I/O was fixed, so now we needed to re-snapshot the missing table that had rolled back. We did so on a Tuesday night; everything went well, and then on Wednesday morning a rollback was pending. We again hit the 125 GB storage limit for the log. I asked the SAN admin to extend it to 200 GB, asked one of the DBAs to exclude the table from replication, and let the undelivered commands flow to the Subscriber.

Undelivered Transactions:

The rollback took another 6 hours to complete, and at that point I was just waiting for replication to catch up. Thirty minutes later, nothing had happened. We waited 30 more minutes and still nothing. Jobs were bounced, SQL Server Agent was restarted on the Distributor and the Subscriber, and still nothing happened. I asked a question on Twitter and it was suggested that I call Microsoft. Before I did, I called a friend of mine, Anthony Sammartino (Twitter | Blog), and left a message. While I was calling Microsoft, Anthony called me back and explained that there might be some errors with DMLs attempted on the subscriber. He discussed it with one of the DBAs and we were able to get replication going by changing the distribution agent profile. We had over 5 million undelivered commands pending.
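
For anyone wanting to check that backlog themselves, the undelivered command counts live in the distribution database. The query below is a sketch against the standard replication metadata, not the exact statement we ran:

USE distribution;
GO
-- Delivered vs. undelivered commands per article across all distribution agents.
SELECT a.publisher_db,
       a.article,
       SUM(s.UndelivCmdsInDistDB) AS undelivered_cmds,
       SUM(s.DelivCmdsInDistDB)   AS delivered_cmds
FROM dbo.MSdistribution_status AS s
INNER JOIN dbo.MSarticles AS a
    ON a.article_id = s.article_id
GROUP BY a.publisher_db, a.article
ORDER BY undelivered_cmds DESC;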

Stabilization of the Environment:

By 8 PM we were up to date with all tables except the one whose bulk copy had rolled back. I decided to stop making changes to the environment over the long Thanksgiving weekend and ensured that it was stable enough to continue working on the following Monday.

Re-Snapshot and subsequent Rollback resolution:

We figured out that a filter was raising an exception, causing the publisher to reinitialize replication for the object and issue a re-snapshot. It had been performing bulk copies and then rolling them back several times during the night. The filter was defined as column NOT LIKE '%value%'.
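
For context, a horizontal filter like that is normally defined on the publisher along these lines; the publication, article and column names below are placeholders, not our actual objects:

EXEC sp_articlefilter
    @publication   = N'MyPublication',
    @article       = N'MyLargeTable',
    @filter_name   = N'FLTR_MyLargeTable',
    @filter_clause = N'column6 NOT LIKE ''%value%''',
    @force_invalidate_snapshot = 1;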

We decided to exclude the filter and re-snapshot the table today. Fifteen minutes later, 60 million rows had been inserted into the subscriber. No issues so far.

Things are now looking much better.

Denali, the next release [SQL Server]

After a week at the PASS Summit 2010 and after catching up on several tasks at work, I am now able to update my blog with some exciting information. SQL Server codename "Denali" CTP1 was formally released on 11/09/2010 (version 11.0). I was able to obtain a DVD copy while at the summit, and now it can be downloaded too.

The key takeaways:

  1. AlwaysOn. This is a full-blown HADR solution that allows multiple asynchronous and synchronous mirrors of a database, with a transparent failover option to a local or remote datacenter. This is the feature I am really looking forward to.

  2. VertiPaq embedded in the relational engine. The SQL Server engineers ported the same code used for PowerPivot into the RDBMS. The feature will allow column-level, ultra-high compression of repetitive data, which is expected to increase query efficiency by at least ten times for those workloads.
  3. BISM with DAX support. Business Intelligence Semantic Model in Analysis Services which will allow retrieval of data using DAX queries; the same ones used in PowerPivot.
  4. Sequence generator. It is never too late for a feature like this (see the sketch after this list).
  5. Enhanced Windowed Functions.
  6. FileTable support. I have been waiting for this feature since the Oracle 9i days. It will allow us to manage files directly within SQL Server.
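
As a quick illustration of item 4, the sequence syntax in CTP1 looks roughly like this (a sketch based on the Denali documentation; the object name is made up):

-- Create a sequence and request values from it.
CREATE SEQUENCE dbo.OrderNumbers
    AS INT
    START WITH 1
    INCREMENT BY 1;

SELECT NEXT VALUE FOR dbo.OrderNumbers AS next_order_number;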

There are certainly more enhancements, which will continue to pop up as the product matures.

Now, the challenge for us is to learn and prepare for the new features right after upgrading to 2008 R2. Well, Microsoft is doing a pretty good job of keeping us busy, and I am OK with that. They were kind enough to publish Books Online for Denali, though.

Split a Partition and Expect Locks [SQL Server]

Not in every scenario. But as a rule of thumb, test before you perform a split operation in a production environment.

Every year around this time we plan, test and deploy partitions for tables that will hold the next year's historical data. We typically create one partition per month, and in one case a partition per day (which has its own sliding window schedule every month).

This has been a very busy year, and between the migrations and consolidations one historical table did not have partitions added for October and November. I decided to keep the data as-is and just add new partitions starting on 11/12, one for each day. This table holds around 30 million rows per day.

Now, we usually partition tables using RIGHT partition functions so that the last boundary stays empty and can be split ahead of time without any locking. The table mentioned above uses a LEFT partition function, and all the current data is sitting in the rightmost partition.

Issuing a split of the rightmost partition makes the engine scan and move the data of the current partition. That can be fine in systems with a few million rows, but not with a partition holding 500+ million rows. We tried it in a test environment and, as expected, it locked the table for a very long time. No DML operations could be performed against it until the split succeeded.

-- Using new filegroup
ALTER PARTITION SCHEME [ps_Left_day_sliding] NEXT USED [OEN_Lists_2010_11];
ALTER PARTITION FUNCTION [pf_Left_day_sliding] () SPLIT RANGE ('2010-11-12 00:00:00.000');

This is the table which has a partition per day.

We are not going to pursue that route. We will create a brand new table with proper RIGHT partitions and swap it with the current one. We have that liberty as the data is purely historical and the DBAs retrieve from it only when required. They will know where to look after reading the documentation they will write when performing the change.

We went ahead and also tested with RIGHT just to make doubly sure. It worked just fine.

CREATE PARTITION FUNCTION [pf_Left_day_sliding1](datetime) AS RANGE RIGHT FOR VALUES (N'2010-09-03T00:00:00.000', N'2010-09-04T00:00:00.000', N'2010-09-05T00:00:00.000', N'2010-11-12T00:00:00.000');

CREATE PARTITION SCHEME [ps_Left_day_sliding1] AS PARTITION [pf_Left_day_sliding1] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [OEN_Lists_2010_11]);

CREATE TABLE [dbo].[batchresults_buffer1](
[column1] [bigint] NOT NULL,
[column2] [datetime] NOT NULL,
[column3] [int] NOT NULL,
[column4] [int] NOT NULL,
[column5] [int] NOT NULL,
[column6] [varchar](200) NOT NULL
) ON [ps_Left_day_sliding1]([column2]);

-- Unique clustered index partitioned on [column2], the table's partitioning column.
CREATE UNIQUE CLUSTERED INDEX [PK__batchresults_buf__2077C861] ON [dbo].[batchresults_buffer1]
(
[column1] ASC,
[column2] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [ps_Left_day_sliding1]([column2]);

BEGIN TRAN;
INSERT INTO [batchresults_buffer1] (column1, column2, column3, column4, column5, column6)
SELECT
column1, column2, column3, column4, column5, column6
FROM batchresults_buffer; -- This table contains sample data, 50 million rows.
COMMIT;

-- Split the empty rightmost partition by adding a new boundary beyond the existing ones.
ALTER PARTITION SCHEME [ps_Left_day_sliding1] NEXT USED [OEN_Lists_2010_11];
ALTER PARTITION FUNCTION [pf_Left_day_sliding1] () SPLIT RANGE ('2010-11-13 00:00:00.000');

I do not see the need for us to use LEFT. All future deployments will use RIGHT instead.

MediaWhiz’s BI Solution Goes Live [Analytics]

Over a year ago we realized that the DBA team was spending a very long time digging through data we had everywhere, counting billions and billions of rows across various systems, and that it all needed to be beautified in Excel with charts and the whole nine yards before being delivered to business owners.

That prompted us to take the decision to move to the next level. At the beginning we called it our future Data Warehouse, then a Data Mart (as we were going to start with just one subject area), then a Reporting Solution, and finally we decided to just call it "Business Intelligence".

To speed up the process we hired two consultants: a Data Analyst and a BI Architect. Everything started pretty well in the first week, with decisions on what needed to be done first so we could start gathering requirements. Unfortunately, other high-priority tasks took over and the Data Analyst was switched to something else. That left us with just the BI Architect.

Anthony Sammartino (Twitter | Blog), of TekPartners Business Intelligence Solutions, assumed the roles of Data Analyst, Project Manager and BI Architect. This was not easy, as he needed to spend time creating proper questionnaires and meeting with different business owners to start gathering requirements. Then he needed to come up with proper schedules and plan the work.

As not everything worked according to plan (business owners needed to prioritize other work), we could not gather enough requirements from all of the different businesses. Plus, there was still a fuzzy understanding of what BI meant.

We took a different approach and started pinpointing the types of reports that were generated mostly from the transactional data. Anthony came up with the great idea of replicating that data to a new database instance and letting our users connect and query it all day long. We set up dedicated Distribution and Subscriber servers under SQL Server 2008 Evaluation Edition. We named the Subscriber our Operational Data Store (ODS).

As we continued moving data from different Publishers, we realized that the ODS had become a rather important system as more users started using it, and even the DBA team realized it was easier to join data from multiple businesses in one instance. We were still in a beta phase, as we needed to prove this was a viable purchase.

Then PASS Summit 2009 came along; I went to Seattle and learned a ton of new stuff about performance tuning and BI. I came back to Florida and planned a migration of our Distribution and ODS instances to the SQL Server 2008 R2 November CTP. I had just two things in mind: PowerPivot and the new features in Reporting Services.

We executed the plan, rebuilt both servers with the latest CTP release, re-installed the SSL certificate for SSRS and we were done. This was performed on a weekend and was completely transparent to the end users.

We continued working with this environment and faced some challenges: multi-million-row partitioned tables with no primary keys that could not be replicated, the lack of redundancy, and of course the CTP edition we were running on. But again, this was still a beta phase.

Months went by; we continued deploying more subscribers, enhancing transactional reports, and enabling filtered indexes and compression on our ODS to reduce the amount of I/O.
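
As an example of the kind of change we applied (the table, column and index names here are made up for illustration), a filtered index combined with page compression looks like this:

-- Page-compress an existing table's indexes...
ALTER INDEX ALL ON dbo.Transactions REBUILD
    WITH (DATA_COMPRESSION = PAGE);

-- ...and add a filtered, compressed index covering only the rows the reports touch.
CREATE NONCLUSTERED INDEX IX_Transactions_Active
    ON dbo.Transactions (clientid, createddate)
    WHERE status = 1
    WITH (DATA_COMPRESSION = PAGE);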

We decided that we wanted this system to go live on the RTM version of SQL Server 2008 R2. Licensing was purchased, and new hardware was queued up to be purchased later.

While this happened, we focused our efforts again on BI. We brought in Robin Haropulos from TekPartners. She eventually assumed the Project Manager role and concentrated on one of our largest business units, the one that brings in 100 million records a day.

The team sat down with the business owners, end users, analysts and pretty much everyone who accessed that data, which is crunched from legacy transactional systems.

[Image: pivoted data]

I decided that we needed a good ETL resource who knew SSIS inside and out. Luis Figueroa (Twitter | Blog), also from TekPartners, came on board, and while he was not yet an expert, he proved to us that he could master it in a matter of months. I bet on that and thankfully I was right.

I also wanted an additional resource on board with good knowledge of replication and a willingness to learn SSAS. Rich Bartho (Twitter | Blog) switched his main duties and started helping us out on this new project.

We came up with 28 dimensions and 4 facts for the first release. We made sure SSIS performed all its transformations as an isolated entity and only touched the source to extract the data. We also took advantage of Change Data Capture and replication so that the system pulls from a replicated site instead of the main source, minimizing production overhead.
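
For reference, enabling Change Data Capture on a source database and table uses the standard system procedures shown below; the database, schema and table names are placeholders:

USE SourceDB;
GO
-- Enable CDC at the database level.
EXEC sys.sp_cdc_enable_db;
GO
-- Track changes on the table we extract from (SQL Server Agent must be running;
-- net changes require a primary key or unique index on the table).
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'Transactions',
    @role_name            = NULL,
    @supports_net_changes = 1;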

After the BI architecture design, project plan, and first fact and dimension requirements were complete, Anthony trained the entire team in SSAS cube design and development, replication design, and BI project management, ensuring we were on the right path to complete the project successfully before he finished his assignment with us at MediaWhiz. I quickly moved another resource, James Yagielo (also from TekPartners), in to help us with the ETL packages. The project ran on track; the Infrastructure team finished building 4 new instances and a high-availability active/active cluster with plenty of RAM and I/O to spare. We were hooked up to 48 spindles and 3 GB/s of teamed, dedicated network connections to the SAN. They also took care of the Reporting Services instances and created 4 VMs.

We soft launched just over 10 days ago, firing up a new Distribution instance, an ODS/Historical instance, a Dimensional instance and of course the dedicated SSIS/SSAS box. This solution can scale to multiple nodes if additional CPU cycles are required, one of the advantages of clustering. Reporting Services was launched as a scaled-out deployment across multiple VMs, load balanced by F5.

We ensured that all the data was consistent, checked that the packages were efficient, performed health checks by looking at the wait stats and tuned the instances based on what we found.

Two days ago we presented the solution to the business owners, with multiple sample reports generated with SSMS, regular pivot tables and PowerPivot for Excel. We sliced and diced at the client level and the server level, and the results came back pretty fast. They were extremely happy, as they had just found a way to slice "everything by everything". I even heard the statement "This will be the company's biggest asset".

We will continue adding more facts over the next 3 months and be done with the bulk of it for this business unit. Then we will extend the current solution to include additional business units.

[Image: stacked chart]

We will also finalize the migration of the remaining objects: reports running on the old instance, the remaining ODS tables, new replication deployments and, of course, a legacy SQL Server 2005 SSAS cube.

Did we have a cube already? Yes, but we never called it BI; we just called it enhanced reporting. The cube itself is fine, but the problem is the legacy ETL. That code will be wiped out and redeployed, following best practices, onto our SSIS instance.

All our solutions are running under SQL Server 2008 R2 RTM, Enterprise Edition.

Well, that’s it for now. I am lucky to be surrounded by a talented team that has allowed us to get to the next level. And this is just the beginning.

Data Compression before or after loading operations? [SQL Server]

We have a process that collects and archives anywhere from 30 to 100 million rows a day. I asked one of my team members to research and come up with a suggestion for loading the data and keeping it partitioned by day, then applying a sliding window to the oldest partitions.
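
The sliding-window piece is conceptually what the sketch below shows; the table, partition function and boundary value are placeholders, and the staging table is assumed to be empty, identically structured and on the same filegroup as the oldest partition:

-- 1. Switch the oldest daily partition out to a staging table (a metadata-only operation).
ALTER TABLE dbo.Buffer_History
    SWITCH PARTITION 1 TO dbo.Buffer_History_Purge;

-- 2. Remove the now-empty boundary from the partition function.
ALTER PARTITION FUNCTION pf_history()
    MERGE RANGE ('2010-10-01T00:00:00.000');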

Following BOL, we went ahead and tested by inserting into a partitioned heap and then creating a compressed clustered index on that table:

-- Insert into Partitioned Heap
BEGIN TRAN
INSERT INTO [History_201010].[dbo].[Buffer_History_Stage] ([column1], [column2], [column3], [column4])
SELECT [column1], [column2], [column3], [column4]
FROM [History].[dbo].[Buffer_History_Archive]
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Results
-- (28034332 row(s) affected)
-- Time to Complete: 00:09:38

-- Create compressed, partitioned clustered index (sort in tempdb)
BEGIN TRAN
ALTER TABLE [History_201010].[dbo].[Buffer_History_Stage] ADD CONSTRAINT [PK_Buffer_History_Stage] PRIMARY KEY CLUSTERED
(
[column1] ASC,
[column2] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION=PAGE) ON [ps_history] ([datecreated])
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Time to Complete: 00:07:19

-- Switch Partition to Permanent Table (should be instantaneous)
BEGIN TRAN
ALTER TABLE [History_201010].[dbo].[Buffer_History_Stage] SWITCH PARTITION 1 TO [History_201010].dbo.[Buffer_History] PARTITION 1;
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Time to Complete: 00:00:00
-- Total Time to Complete: 00:16:57

In comparison, we attempted to just directly insert into a permanent table:

-- Insert into Compressed, Partitioned Clustered Index directly
BEGIN TRAN
INSERT INTO [History_201010].[dbo].[Buffer_History] ([column1], [column2], [column3], [column4])
SELECT [column1], [column2], [column3], [column4]
FROM [History].[dbo].[Buffer_History_Archive]
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Results
-- (28034332 row(s) affected)
-- Total Time to Complete: 00:12:43

We tested this several times on an idle test box, looking just at elapsed time rather than the I/O statistics.

Based on what we experienced, it appears more efficient to insert directly into a compressed table with a partitioned clustered index.

One difference between our test and what Microsoft suggested is that they performed a bulk insert, while we used data already in another database on the same instance. Our tests were run with 28 million rows.