Split a Partition and Expect Locks [SQL Server]

Not in every scenario, but as a rule of thumb: test before you perform a split operation in a production environment.

Every year around this time we plan, test and deploy partitions for tables that will hold the next year's historical data. We typically create one partition per month, and in one case a partition per day (which has its own sliding-window schedule every month).

This has been a very busy year, and between the migrations and consolidations one historical table did not get partitions added for October and November. I decided to keep the data as-is and just add new partitions starting on 11/12, one for each day. This table holds around 30 million rows per day.

Now, we usually partition tables using RIGHT partition functions so that the last boundary stays empty and can be split ahead of time without any locking. The table mentioned above uses a LEFT partition function, and all the current data is sitting in the rightmost partition.

Issuing a split of the rightmost partition makes the engine scan and move the data in the current partition. That can be fine on systems with a few million rows, but not with a partition holding 500+ million rows. We tried it in a test environment and, as expected, it locked the table for a very long time. No DML operations could be performed against it until the split completed.

-- Using new filegroup
ALTER PARTITION SCHEME [ps_Left_day_sliding] NEXT USED [OEN_Lists_2010_11];
ALTER PARTITION FUNCTION [pf_Left_day_sliding] () SPLIT RANGE ('2010-11-12 00:00:00.000');

This is the table which has a partition per day.
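
Before splitting, it is worth confirming how many rows actually sit in each partition, since splitting a populated partition is exactly what triggers the scan-and-move. A quick check against the catalog views (a minimal sketch; swap in the actual table name that sits on the scheme):

-- Rows per partition, with the boundary value each partition maps to.
SELECT p.partition_number,
       prv.value AS boundary_value,
       p.rows
FROM sys.partitions AS p
JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id = p.index_id
LEFT JOIN sys.partition_schemes AS ps
    ON ps.data_space_id = i.data_space_id
LEFT JOIN sys.partition_range_values AS prv
    ON prv.function_id = ps.function_id
   AND prv.boundary_id = p.partition_number
WHERE p.object_id = OBJECT_ID(N'dbo.batchresults_buffer') -- your partitioned table
  AND i.index_id IN (0, 1) -- heap or clustered index only
ORDER BY p.partition_number;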

We are not going to pursue that route. Instead, we will create a brand new table with proper RIGHT partitions and swap it with the current one, as sketched below. We have that liberty because the data is purely historical and the DBAs retrieve from it only when required. They will know where to look after reading the documentation they write when performing such a change.
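
Since the data is purely historical and nothing writes to it during the maintenance window, the swap itself can be a metadata-only rename. A rough sketch, assuming the replacement table (here called batchresults_buffer_new, a hypothetical name) has already been built on a RIGHT-partitioned scheme and backfilled:

BEGIN TRAN;
    -- Keep the old LEFT-partitioned table around until verified.
    EXEC sp_rename 'dbo.batchresults_buffer', 'batchresults_buffer_old';
    -- Promote the rebuilt RIGHT-partitioned copy.
    EXEC sp_rename 'dbo.batchresults_buffer_new', 'batchresults_buffer';
COMMIT;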

We went ahead and also tested with RIGHT just to make doubly sure. It worked just fine.

CREATE PARTITION FUNCTION [pf_Left_day_sliding1](datetime) AS RANGE RIGHT FOR VALUES (N'2010-09-03T00:00:00.000', N'2010-09-04T00:00:00.000', N'2010-09-05T00:00:00.000', N'2010-11-12T00:00:00.000');

CREATE PARTITION SCHEME [ps_Left_day_sliding1] AS PARTITION [pf_Left_day_sliding1] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [OEN_Lists_2010_11]) ;
CREATE TABLE [dbo].[batchresults_buffer1](
[column1] [bigint] NOT NULL,
[column2] [datetime] NOT NULL,
[column3] [int] NOT NULL,
[column4] [int] NOT NULL,
[column5] [int] NOT NULL,
[column6] [varchar](200) NOT NULL
) ON [ps_Left_day_sliding1]([column2]);
CREATE UNIQUE CLUSTERED INDEX [PK__batchresults_buf__2077C861] ON [dbo].[batchresults_buffer1]
(
[column1] ASC,
[column2] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [ps_Left_day_sliding1]([column2]);
BEGIN TRAN;
INSERT INTO [batchresults_buffer1] (column1, column2, column3, column4, column5, column6)
SELECT
column1, column2, column3, column4, column5, column6
FROM batchresults_buffer; -- This table contains sample data, 50 million rows.
COMMIT;
ALTER PARTITION SCHEME [ps_Left_day_sliding1] NEXT USED [OEN_Lists_2010_11];
-- The boundary must be new; '2010-11-12' already exists in the function above.
ALTER PARTITION FUNCTION [pf_Left_day_sliding1] () SPLIT RANGE ('2010-11-13 00:00:00.000');

I do not see the need for us to use LEFT. All future deployments will use RIGHT instead.

Wi-Fi Hotspot [HTC G2]

After sending out the OTA upgrades, T-Mobile enabled three nice features: WiFi Calling, USB Tethering and WiFi Hotspot. The HTC G2 can now be used as a wireless router, accepting connections from any device that supports Open or WPA2-PSK Wi-Fi networks.

Portable WiFi Hotspot

Portable WiFi Hotspot Configuration

Just follow the steps:

  1. On the Android phone, press Home, press Menu, and touch Settings to open the Settings application.
  2. Touch Wireless & networks > Tethering & portable hotspot.
  3. Check Portable Wi-Fi hotspot.
  4. Touch Portable Wi-Fi hotspot settings.
  5. Configure the Network SSID and Security, then save.
  6. Your phone is now a wireless router using the available HSPA+ network.

It seems that T-Mobile isn’t charging extra for this feature yet, but will throttle if bandwidth usage goes over 5 GB within a month.

USB Tethering [HTC G2]

After ensuring that WiFi Calling worked properly, I went ahead and researched USB tethering for my G2. I have tried PDA Net before, but I wanted to use the “native” solution provided by T-Mobile. It is really easy.

USB Tethering

  1. You are likely to need drivers, so download them from here and decompress them into a folder.
  2. Use the USB cable that came with your phone to connect your phone to your computer.
  3. On the Android phone, press Home, press Menu, and touch Settings to open the Settings application.
  4. Touch Wireless & networks > Tethering & portable hotspot.
  5. Check USB tethering.
  6. When asked for drivers, point to the ones you just downloaded.
  7. You should be able to connect to the internet at this point.

If your phone is connected to the Internet through WiFi, the tethered computer will use your phone as a gateway, going through your wireless internet instead of the available HSPA+ network. This is good for computers that do not have a wireless adapter and are otherwise unable to connect to the internet.

Speed Tests:

Tethering using 3G Network (No HSPA+ available)

Tethering using Wi-Fi and local Internet Service Provider

WiFi Calling Feature Does Not Work Properly With Bluetooth Enabled And No Headset [HTC G2]

T-Mobile started to roll out OTA radio upgrades and WiFi Calling in the past couple of days. Mine was updated yesterday and I was really excited to try it. At first it did not work, and I left for work. When I came back I tried it again and it did; I even got a couple of calls at night with no issues. Today calls dropped to the point that it was almost unusable. I researched online and found two reports of users having the exact same issue and pointing to Bluetooth as the culprit. They were right. As soon as I disabled Bluetooth, WiFi Calling just worked beautifully. It is a bug that T-Mobile will need to patch quickly.

Now, what does WiFi Calling mean for your plan and phone?

  1. It will use the allotted minutes for your current plan, in whichever bucket you are currently on: Peak, Nights, Weekends or MyFavs. I double-checked today and ensured that all calls made were recorded as “Weekend”.
  2. You might be able to use it overseas and avoid ridiculous roaming charges. You just need a decent WiFi connection.
  3. Your phone is likely to use more battery as WiFi will be on all the time, so keep a spare charger or Micro-USB cable handy.

I have also read that tethering was going to be part of this OTA update. I have not found the option yet, but will comment on it as soon as I do. Right now I am focused on ensuring that WiFi is flawless throughout my house, which has really bad reception spots, including my home office.

Enjoy your G2.

MediaWhiz’s BI Solution Goes Live [Analytics]

Over a year ago we realized that the DBA team spent a very long time digging through data we had everywhere, counting billions and billions of rows across various systems, results that then needed to be beautified in Excel with charts and the whole nine yards before being delivered to business owners.

That prompted us to make the decision to move to the next level. At the beginning we called it our future Data Warehouse, then a DataMart (as we were going to start with just one subject area), then a Reporting Solution, and finally decided to just call it “Business Intelligence”.

To speed up the process, we hired two consultants: a Data Analyst and a BI Architect. Everything started pretty well in the first week as we decided what needed to be done first in order to start gathering requirements. Unfortunately, other high-priority tasks took over and the Data Analyst was switched to something else. That left us with just the BI Architect.

Anthony Sammartino (Twitter | Blog), of TekPartners Business Intelligence Solutions, assumed the roles of Data Analyst, Project Manager and BI Architect. This was not easy, as he needed to spend time creating proper questionnaires and meeting with different business owners to start gathering requirements. Then he needed to come up with proper schedules and plan the work.

As not everything worked according to plan (business owners needed to prioritize other work), we could not gather enough requirements from all of the different businesses. Plus, there was still a fuzzy understanding of what BI meant.

We took a different approach and started pinpointing the types of reports that were generated mostly from transactional data. Anthony came up with the great idea of replicating that data to a new DB instance and letting our users connect and query it all day long. We set up dedicated Distribution and Subscriber servers under SQL Server 2008 Evaluation Edition. We named the Subscriber our Operational Data Store (ODS).
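
For reference, the publication side of that kind of setup comes down to a handful of system procedures. A minimal transactional-replication sketch (the database, publication and article names here are hypothetical, and the snapshot/distribution agent configuration is left out):

-- Enable the source database for publishing.
EXEC sp_replicationdboption
    @dbname  = N'SourceDB',
    @optname = N'publish',
    @value   = N'true';

-- Create a continuously replicating transactional publication.
EXEC sp_addpublication
    @publication = N'ODS_Feed',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Add one table as an article; repeat for every table feeding the ODS.
EXEC sp_addarticle
    @publication   = N'ODS_Feed',
    @article       = N'batchresults',
    @source_owner  = N'dbo',
    @source_object = N'batchresults';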

As we continued moving data from different Publishers, we realized that the ODS had become a rather important system as more users started using it; even the DBA team realized it was easier to join data from multiple businesses in one instance. We were still in a beta phase, as we needed to prove that this was a viable purchase.

Then SQL PASS Summit 2009 came by. I went to Seattle and learned a ton of new stuff about performance tuning and BI. I came back to Florida and planned a migration of our Distribution and ODS servers to the SQL Server 2008 R2 November CTP. I had just two things in mind: PowerPivot and the new features provided by Reporting Services.

We executed the plan, rebuilt both servers with the latest CTP release, re-installed the SSL certificate for SSRS and we were done. This was performed on a weekend and was completely transparent to the end users.

We continued working with this environment and faced some challenges: multi-million-row partitioned tables with no primary keys, which could not be replicated; a lack of redundancy; and of course the CTP edition we were running on. But again, this was still a beta phase.

Months went by. We continued deploying more subscribers, enhancing transactional reports, and enabling filtered indexes and compression on our ODS to reduce the amount of I/O.
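
Both of those are plain DDL changes. Something along these lines, with illustrative table and column names rather than our actual schema:

-- Page compression on a large historical table to cut I/O.
ALTER TABLE dbo.batchresults_buffer
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Filtered index that only covers the recent, frequently queried rows.
CREATE NONCLUSTERED INDEX IX_batchresults_recent
    ON dbo.batchresults_buffer (column2)
    WHERE column2 >= '2010-11-01';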

A decision was made that we wanted this system to go live on the RTM version of SQL Server 2008 R2. Licensing was purchased, and new hardware was queued up to be purchased later.

While this happened, we focused our efforts on BI again. We brought in Robin Haropulos from TekPartners. She eventually assumed the Project Manager role and concentrated on one of our largest business units, the one that brought in 100 million records a day.

The team sat down with the business owners, end users, analysts and pretty much everyone who accessed that data, which was crunched from legacy transactional systems.

Pivoted Data

I decided that we needed a good ETL resource who knew SSIS inside and out. Luis Figueroa (Twitter | Blog), also from TekPartners, came on board, and while he was not yet an expert, he proved to us that he could master it in a matter of months. I bet on that and thankfully I was right.

I also wanted an additional resource on board who had good knowledge of replication and was ready to learn SSAS. Rich Bartho (Twitter | Blog) switched his main duties and started helping us out on this new project.

We came up with 28 dimensions and 4 facts for the first release. We made sure SSIS performed all its transformations as an isolated entity and only touched the source to extract the data. We also took advantage of Change Data Capture and Replication so the systems pull from a replicated site instead of the main source, minimizing production overhead.
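
Change Data Capture is enabled once per database and then per table, after which the ETL reads deltas from the generated capture functions instead of rescanning the source. A minimal sketch (the table name is illustrative):

-- Enable CDC on the replicated source database (run once per database).
EXEC sys.sp_cdc_enable_db;

-- Track changes on one source table; SSIS then pulls deltas
-- from the generated cdc change-table functions.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'batchresults',
    @role_name     = NULL; -- no gating role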

After the BI architecture design, project plan, and first fact and dimension requirements were complete, Anthony trained the entire team in SSAS cube design and development, replication design, and BI project management to ensure we were on the correct path to complete the project successfully before he finished his assignment with us at MediaWhiz. I quickly moved another resource, James Yagielo (also from TekPartners), to help us out with the ETL packages. The project ran on track; the Infrastructure team finished building 4 new instances and a High Availability Active/Active cluster with plenty of RAM and I/O to spare. We were hooked up to 48 spindles and 3 GB/s teamed, dedicated network connections to the SAN. They also took care of the Reporting Services instances and created 4 VMs.

We went ahead and soft-launched over 10 days ago: we fired up a new Distribution instance, an ODS/Historical instance, a Dimensional instance and of course the dedicated SSIS/SSAS box. This solution can scale to multiple nodes if additional CPU cycles are required, which is one of the advantages of clustering. Reporting Services was launched as a scale-out deployment across multiple VMs, load balanced by F5.

We ensured that all the data was consistent, checked that the packages were efficient, performed health checks by looking at the wait stats, and tuned the instances based on what we found.
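
For the wait-stats portion, a query against sys.dm_os_wait_stats is the usual starting point, something like this (which benign idle waits to filter out is a judgment call):

-- Top waits since the last restart or stats clear.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
                        N'LOGMGR_QUEUE', N'CHECKPOINT_QUEUE',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT',
                        N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH',
                        N'BROKER_EVENTHANDLER', N'CLR_MANUAL_EVENT',
                        N'CLR_AUTO_EVENT', N'DISPATCHER_QUEUE_SEMAPHORE')
ORDER BY wait_time_ms DESC;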

Two days ago we presented the solution to the business owners, with multiple sample reports generated in SSMS, regular Pivot Tables and PowerPivot for Excel. We sliced and diced at the client level and the server level, and the results were coming back pretty fast. They were extremely happy, as they had just found a way to slice “everything by everything”. I even heard the statement “This will be the company's biggest asset”.

We will continue adding more facts over the next 3 months and be done with the bulk of this business unit. Then we will extend the current solution to include additional business units.

Stacked Chart

We will also finalize the migration of the remaining objects: reports running on the old instance, remaining ODS tables, new replication deployments and of course the migration of a legacy SQL Server 2005 SSAS cube.

We had a cube already? Yes, but we never called it BI; we just called it enhanced reporting. The cube itself is fine, but the problem is the legacy ETL. That code will be wiped out and redeployed with best practices onto our SSIS instance.

All our solutions are running under SQL Server 2008 R2 RTM, Enterprise Edition.

Well, that’s it for now. I am lucky to be surrounded by a talented team that has allowed us to get to the next level. And this is just the beginning.