Allow me to sleep and focus on number crunching [#TSQL2sday]

T-SQL Tuesday #014

It’s 2011 already and the first T-SQL Tuesday for the year has arrived.

This time the event is hosted by Jen McCown (Blog|Twitter), who just received an MVP nomination (congratulations Jen).

I have come up with a list of items I want my team to cover this year, but my main goals are two: allowing me to sleep properly and investing time in number crunching.

What does “allow me to sleep” mean?

Last year we started our migration project of all production database servers to High Availability solutions. We shall continue this year and finalize it. By the same token, we will homogenize our environment so every single business unit runs on SQL Server 2008 R2. We wasted real time last year discovering that processes we needed cannot run on SQL Server 2005 (e.g., change data capture), and I need to make sure that does not happen again.

On the storage side, there are 2 projects running in parallel. One is to allow growth on a scalable solution that is already in place (and stop worrying about hitting volume limits); the second is to reduce our data file footprint. The first project is trivial, as we can hook up additional trays and expand volumes. The second one is more complex, as we need to identify tables that hold historical data, partition them (if applicable), phase out old data, and archive it without impacting production applications.

On the monitoring side, we are creating a new project to decide whether we shall stick with Management Data Warehouse or get a 3rd party tool to give us a bird's-eye view of our systems at the DB level, define KPIs, and report based on them. We already have proper hardware and service-level monitoring in place.

What does “focus on number crunching” mean?

This has 2 phases.

The first one is to work on a Master Metadata repository project and figure out relationships between different business entities, then work with Business Owners and Data Analysts to figure out what we should report on, how far back we should go, and what KPIs should be defined. This will allow us to continue building our Data Warehouse, extend our ETLs, and let business unit A correlate its data with business unit B.

This project is also tied to the data file footprint reduction, as we will be able to roll up information to be available in our DSS without needing all the transactional data.

The second phase is analytics. This can be tied to data mining and statistics and will definitely involve some of the information we have stored in our cubes. We are still figuring this one out.

That is it for me on the technical side. As for the managerial and personal sides? I will write about those in future blog posts.

What are your resolutions for 2011?

Table Structure, Trading Maintenance with Performance [SQL Server]

We recently migrated a database from SQL Server 2005 x86 to 2008 R2 x64. Since we run an Enterprise Edition license, we were able to take advantage of partitioning. The database had several tables with similar structures, each with row counts north of 2 billion.

Old Structure:

  1. 3 fields
    1. TypeId INT
    2. ContentData VARCHAR(100)
    3. insertionDate DATETIME
  2. Unique constraint: TypeId, ContentData; InsertionDate was added to the key to comply with partitioning alignment requirements.
  3. Table had a partition per month (a sketch of this layout follows this list).
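Roughly, the old layout reduces to a partition function and scheme like the sketch below. This is a minimal sketch; the object names, filegroups, and boundary dates are illustrative, and I am assuming a RANGE RIGHT function here:

-- Hypothetical monthly layout: one boundary and one filegroup per month.
CREATE PARTITION FUNCTION pf_Monthly (DATETIME)
AS RANGE RIGHT FOR VALUES (N'2010-11-01T00:00:00.000', N'2010-12-01T00:00:00.000', N'2011-01-01T00:00:00.000');

CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly TO (FG_2010_10, FG_2010_11, FG_2010_12, FG_2011_01);

-- InsertionDate has to be part of the unique key so the index can be
-- aligned with the partition scheme.
CREATE TABLE dbo.sampletable (
TypeId INT NOT NULL,
ContentData VARCHAR(100) NOT NULL,
InsertionDate DATETIME NOT NULL,
CONSTRAINT UQ_sampletable UNIQUE (TypeId, ContentData, InsertionDate)
) ON ps_Monthly (InsertionDate);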

Pros

  1. In the first week of each new month we marked the previous month's partition as read-only (see the sketch after this list).
  2. A one-time file backup was performed for the previous month's partition.
  3. Deltas (new data) would always land in the last partition.
  4. The current month's partition was the only piece left in read-write mode.
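The monthly routine in items 1 and 2 comes down to two commands; a minimal sketch, assuming the filegroup-per-month layout sketched earlier (database and filegroup names are illustrative):

-- Mark the closed month's filegroup read-only (this needs exclusive
-- access to the database), then take its one-time filegroup backup.
ALTER DATABASE SampleDB MODIFY FILEGROUP FG_2010_11 READ_ONLY;

BACKUP DATABASE SampleDB
FILEGROUP = 'FG_2010_11'
TO DISK = 'X:\Backups\SampleDB_FG_2010_11.bak';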

Cons

  1. Searching for ContentData for a particular TypeId scanned as many times as there were partitions.
  2. Bulk data comparison between new and old TypeId + ContentData performed an index seek on each partition.
SELECT ContentData
INTO #temp
FROM sampletable
WHERE TypeId = 41;

Table 'sampletable'. Scan count 92, logical reads 7613, physical reads 223, read-ahead reads 7808, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Original Plan

New Structure:

  1. 3 fields
    1. TypeId INT
    2. ContentData VARCHAR(100)
    3. insertionDate DATETIME
  2. Unique constraint: TypeId, ContentData.
  3. Table has a partition per TypeId range (50 values each, e.g. 1–50, 51–100, …); a sketch follows this list.
  4. Index on insertionDate with TypeId as an included column.
  5. Page Compression enabled.
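For comparison, the new layout reduces to something like this (again a sketch with illustrative names; the boundary values follow the 50-per-range rule from item 3):

-- RANGE RIGHT boundaries at 51, 101, ... put TypeIds 1-50 in
-- partition 1, 51-100 in partition 2, and so on.
CREATE PARTITION FUNCTION pf_ByTypeId (INT)
AS RANGE RIGHT FOR VALUES (51, 101, 151);

CREATE PARTITION SCHEME ps_ByTypeId
AS PARTITION pf_ByTypeId ALL TO ([PRIMARY]);

-- TypeId now leads the unique key and is the partitioning column, so
-- InsertionDate is no longer needed in the constraint.
CREATE TABLE dbo.sampletable_new (
TypeId INT NOT NULL,
ContentData VARCHAR(100) NOT NULL,
InsertionDate DATETIME NOT NULL,
CONSTRAINT UQ_sampletable_new UNIQUE (TypeId, ContentData)
) ON ps_ByTypeId (TypeId)
WITH (DATA_COMPRESSION = PAGE);

-- Supporting index for delta retrieval by date (item 4).
CREATE NONCLUSTERED INDEX IX_sampletable_new_InsertionDate
ON dbo.sampletable_new (InsertionDate)
INCLUDE (TypeId)
WITH (DATA_COMPRESSION = PAGE)
ON ps_ByTypeId (TypeId);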

Pros

  1. Searching for a particular TypeId + ContentData generates a single index seek plan, with a scan count of 1 (1 partition).
  2. Scanning for ContentData for a particular TypeId generates an index scan on only 1 partition.
  3. Bulk data comparison between new and old TypeId + ContentData performs an index seek in one partition only.
  4. Compression reduced I/O reads by 50%.

Cons

  1. All data files are in read-write mode; every file needs to be considered in the backup maintenance plan.
  2. Any partition can still receive updates, even one holding data from 4 years ago (it cannot be tagged as historical).
  3. Retrieving deltas performs an index seek on every partition.
SELECT ContentData
INTO #temp
FROM sampletable
WHERE TypeId = 41;

Table 'sampletable'. Scan count 1, logical reads 5449, physical reads 66, read-ahead reads 5445, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
New Plan

Conclusion:

The table is mostly used for retrieving and comparing data. The retrieval call will always pass in TypeId as a parameter. The system now performs an index scan on 1 partition (a scan count of 1). For data comparison, an index seek is performed for a unique value within 1 partition only. Data compression reduced I/O reads by 50%.

In the past, data extraction and comparison scanned through all the partitions, generating excessive I/O, but we did not need to worry about continuously backing up older partitions, as they were already flagged as read-only and backed up.

With this exercise we have effectively increased performance at the expense of maintenance. Well worth the change.

Denali, the next release [SQL Server]

After a week at the PASS Summit 2010 and after catching up with several tasks at work, I am now able to update my blog with some exciting information. SQL Server codename “Denali” CTP1 was formally released on 11/09/2010 (version 11.0). I was able to obtain a DVD copy while at the summit, and now it can be downloaded too.

The key takeaways:

  1. AlwaysOn. This is a full-blown HADR solution that allows multiple asynchronous and synchronous mirrors of a database, with a transparent failover option to a local or remote datacenter. This is the feature I am really looking forward to.

  2. VertiPaq embedded in the relational engine. The SQL Server engineers ported the same code used for PowerPivot into the RDBMS. The feature will allow column-level, ultra-high compression of repetitive data, which is expected to increase query efficiency by at least ten times.
  3. BISM with DAX support. The Business Intelligence Semantic Model in Analysis Services will allow retrieval of data using DAX queries, the same ones used in PowerPivot.
  4. Sequence generator. It is never too late for a feature like this (a sketch of the syntax follows this list).
  5. Enhanced Windowed Functions.
  6. FileTable support. I have been waiting for this feature since the Oracle 9i days. It will allow us to manage files directly within SQL Server.
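For item 4, the syntax is worth a quick look; a minimal sketch based on the CTP documentation (the object name is illustrative):

-- A sequence is a standalone counter, independent of any table,
-- unlike IDENTITY columns.
CREATE SEQUENCE dbo.OrderNumbers
AS INT
START WITH 1
INCREMENT BY 1;

-- Returns 1 on the first call, 2 on the next, and so on.
SELECT NEXT VALUE FOR dbo.OrderNumbers;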

There are certainly more enhancements, which will continue to pop up as the product matures.

Now, it is a challenge for us to learn and prepare for the new features after having just upgraded to 2008 R2. Well, Microsoft is doing a pretty good job of keeping us busy, and I am OK with that. They were kind enough to publish Books Online for Denali, though.

Split a Partition and Expect Locks [SQL Server]

Not in every scenario, but as a rule of thumb: test before you perform a split operation in a production environment.

Every year around this time we plan, test, and deploy partitions for tables that will hold the next year's historical data. We typically create one partition per month and, in one case, a partition per day (which has its own sliding window schedule every month).

This has been a very busy year, and between the migrations and consolidations one historical table did not have partitions added for October and November. I decided to keep the data as-is and just add new partitions starting on 11/12, one for each day. This table holds around 30 million rows per day.

Now, we usually partition tables using RANGE RIGHT functions in order to keep the last boundary empty, so it can be split ahead of time without any locking. The table mentioned above uses a LEFT partition function, and all the current data is sitting in the rightmost partition.

Splitting the rightmost partition makes the engine scan and move the data of the current partition. This can be fine in systems with a few million rows, but not with a partition holding 500+ million rows. We tried it in a testing environment and, as expected, it locked the table for a very long time; DML operations could not be performed against it until the split succeeded.
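Before attempting a split like the one below, it is worth confirming how many rows sit in the rightmost partition; a quick check against sys.partitions (illustrative table name):

-- Rows per partition for the heap or clustered index; a non-empty
-- rightmost partition means the split will move data and hold locks.
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.batchresults_buffer')
AND p.index_id IN (0, 1)
ORDER BY p.partition_number;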

-- Using new filegroup
ALTER PARTITION SCHEME [ps_Left_day_sliding] NEXT USED [OEN_Lists_2010_11];
ALTER PARTITION FUNCTION [pf_Left_day_sliding] () SPLIT RANGE ('2010-11-12 00:00:00.000');

This is the table which has a partition per day.

We are not going to pursue that route. Instead, we will create a brand new table with proper RANGE RIGHT partitions and swap it with the current one (a sketch of the swap follows). We have that liberty because the data is purely historical and only DBAs retrieve from it when required; they will know where to look after reading the documentation written when such a change is performed.
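The swap itself can be as simple as a pair of renames in a maintenance window; a sketch, assuming the replacement table is built as batchresults_buffer1 per the script below:

-- Swap the tables by renaming; this assumes no active sessions and
-- that permissions and dependent objects are handled separately.
BEGIN TRAN;
EXEC sp_rename 'dbo.batchresults_buffer', 'batchresults_buffer_old';
EXEC sp_rename 'dbo.batchresults_buffer1', 'batchresults_buffer';
COMMIT;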

We also went ahead and tested with RANGE RIGHT, just to make doubly sure. It worked just fine.

CREATE PARTITION FUNCTION [pf_Left_day_sliding1](datetime) AS RANGE RIGHT FOR VALUES (N'2010-09-03T00:00:00.000', N'2010-09-04T00:00:00.000', N'2010-09-05T00:00:00.000', N'2010-11-12T00:00:00.000');

CREATE PARTITION SCHEME [ps_Left_day_sliding1] AS PARTITION [pf_Left_day_sliding1] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [OEN_Lists_2010_11]) ;
CREATE TABLE [dbo].[batchresults_buffer1](
[column1] [bigint] NOT NULL,
[column2] [datetime] NOT NULL,
[column3] [int] NOT NULL,
[column4] [int] NOT NULL,
[column5] [int] NOT NULL,
[column6] [varchar](200) NOT NULL
) ON [ps_Left_day_sliding1]([column2]);
CREATE UNIQUE CLUSTERED INDEX [PK__batchresults_buf__2077C861] ON [dbo].[batchresults_buffer1]
(
[column1] ASC,
[column2] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [ps_Left_day_sliding1]([column2]);
BEGIN TRAN;
INSERT INTO [batchresults_buffer1] (column1, column2, column3, column4, column5, column6)
SELECT
column1, column2, column3, column4, column5, column6
FROM batchresults_buffer; -- This table contains sample data, 50 million rows.
COMMIT;
ALTER PARTITION SCHEME [ps_Left_day_sliding1] NEXT USED [OEN_Lists_2010_11];
ALTER PARTITION FUNCTION [pf_Left_day_sliding1] () SPLIT RANGE ('2010-11-12 00:00:00.000');

I do not see a need for us to use LEFT. All future deployments will use RANGE RIGHT instead.

Data Compression before or after loading operations? [SQL Server]

We have a process that collects and archives anywhere from 30 to 100 million rows a day. I asked one of my team members to research and come up with a suggestion for how to load the data and keep it partitioned by day, then apply a sliding window to the oldest partitions (a sketch of the sliding window follows).
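For context, the sliding window piece is the usual switch-and-merge pattern; a minimal sketch with illustrative names (the staging table must match the source structure and sit on the same filegroup):

-- Age out the oldest day: switch its partition out to a staging
-- table (a metadata-only operation), then merge the empty boundary.
ALTER TABLE dbo.Buffer_History SWITCH PARTITION 1 TO dbo.Buffer_History_Old;
ALTER PARTITION FUNCTION pf_history() MERGE RANGE (N'2010-10-01T00:00:00.000');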

As per BOL, we went ahead and tested by first inserting into a partitioned heap and then creating a compressed clustered index on that table:

-- Insert into Partitioned Heap
BEGIN TRAN
INSERT INTO [History_201010].[dbo].[Buffer_History_Stage] ([column1], [column2], [column3], [column4])
SELECT [column1], [column2], [column3], [column4]
FROM [History].[dbo].[Buffer_History_Archive]
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Results
-- (28034332 row(s) affected)
-- Time to Complete: 00:09:38

-- Create compressed, partitioned clustered index (sort in tempdb)
BEGIN TRAN
ALTER TABLE [History_201010].[dbo].[Buffer_History_Stage] ADD CONSTRAINT [PK_Buffer_History_Stage] PRIMARY KEY CLUSTERED
(
[column1] ASC,
[column2] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE) ON [ps_history] ([column2])
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Time to Complete: 00:07:19

-- Switch partition to permanent table (should be instantaneous)
BEGIN TRAN
ALTER TABLE [History_201010].[dbo].[Buffer_History_Stage] SWITCH PARTITION 1 TO [History_201010].dbo.[Buffer_History] PARTITION 1;
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Time to Complete: 00:00:00
-- Total Time to Complete: 00:16:57

In comparison, we attempted to just directly insert into a permanent table:

--Insert into Compressed, Partitioned Clustered Index directly
BEGIN TRAN
INSERT INTO [History_201010].[dbo].[Buffer_History] ([column1], [column2], [column3], [column4])
SELECT [column1], [column2], [column3], [column4]
FROM [History].[dbo].[Buffer_History_Archive]
IF @@ERROR=0
COMMIT TRAN
ELSE
ROLLBACK TRAN

-- Results
-- (28034332 row(s) affected)
-- Total Time to Complete: 00:12:43

We tested this several times on an idle test box and looked only at elapsed time, not at the I/O statistics.

Based on what we experienced, it appears more efficient to insert directly into a compressed table with a partitioned clustered index.

One difference between our test and what Microsoft was suggesting: they performed a bulk insert, while we used data already in another database on the same instance. Our tests were run with 28 million rows.
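For anyone repeating this comparison, it may also be worth estimating the compression payoff up front with sp_estimate_data_compression_savings, available since SQL Server 2008; a quick sketch against the table from this post:

-- Estimate PAGE compression savings; NULL index and partition
-- parameters mean all indexes and all partitions.
EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo',
@object_name = 'Buffer_History',
@index_id = NULL,
@partition_number = NULL,
@data_compression = 'PAGE';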