What’s happening in 2011

2010 went by pretty quickly. It has been a very busy year, with all kinds of expansions, migrations and consolidations. We installed HP blade servers, VMware, SQL Server 2008 R2, Windows Server 2008 R2, CentOS 4.2 x64, F5s, Dell R710s/R610s, Dell EqualLogic storage, TFS 2010, Windows 7 x64 for all DBAs and BI developers, etc. The list goes on.

So what is happening next year? Many exciting things, with a core focus on HA, DR, Analytics, SharePoint 2010, and VMware on production databases.

On the Database side:

  • Homogeneous environment. All instances running under SQL Server 2008 R2 (some edition downgrades in the mix)
  • Isolated Development, QA and UAT environments with the proper editions (Developer, Standard, Enterprise). VMware and MSDN come into play.
  • Management Data Warehouse deployment to collect health information from all production instances. A bird's-eye view of all instances in one location (see the sketch after this list).
  • Central Management Servers Implementation
  • Policy Based Management Implementation
  • Data-Tier Application Implementation
  • RML utilities for pre-production load testing
  • PowerShell
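
Since the Management Data Warehouse item above relies on the SQL Server data collector, here is a minimal sketch of checking on it from T-SQL, assuming the collector has already been configured on the instance (the collection set id is what a default install typically assigns to Disk Usage):

-- List the data collector sets and whether each one is currently running.
SELECT name, is_running, collection_mode
FROM msdb.dbo.syscollector_collection_sets;

-- Start a collection set by id (1 is typically Disk Usage on a default install).
EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 1;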

On the Analytics side:

  • All legacy cubes migrated to SQL Server Analysis Services 2008 R2
  • Deployment of Star Schemas for the remaining business units
  • Dedicated instance for transformations (SSIS)
  • Fire up the 3rd and 4th VMs for our scaled-out deployment of Reporting Services
  • Mastering DAX and PowerPivot for Excel 2010
  • GUI implementation for browsing our cubes. We are in the process of evaluating products.

High Availability & Disaster Recovery:

  • All instances to be moved to HA (2-, 3-, and 4-node clusters). No more single points of failure.
  • Zero-Downtime database migration with Mirroring
  • Block Level Backup and restore offsite
  • Full, log, and daily differential backups across the board.
  • Backup file testing by restoring to our UAT environments (see the sketch after this list)
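
A minimal sketch of that backup cadence and the UAT restore test; the database name (SalesDB), logical file names and paths are illustrative, not our actual environment:

-- Full, differential and log backups, all compressed and checksummed.
BACKUP DATABASE SalesDB TO DISK = N'X:\Backups\SalesDB_full.bak'
    WITH COMPRESSION, CHECKSUM, INIT;
BACKUP DATABASE SalesDB TO DISK = N'X:\Backups\SalesDB_diff.bak'
    WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, INIT;
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log.trn'
    WITH COMPRESSION, CHECKSUM, INIT;

-- On the UAT side: verify the file, then restore it under a different name.
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\SalesDB_full.bak' WITH CHECKSUM;
RESTORE DATABASE SalesDB_UAT FROM DISK = N'X:\Backups\SalesDB_full.bak'
    WITH MOVE N'SalesDB' TO N'U:\Data\SalesDB_UAT.mdf',
         MOVE N'SalesDB_log' TO N'U:\Log\SalesDB_UAT.ldf',
         REPLACE, RECOVERY;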

Storage:

  • Footprint reduction by archiving historical data that can be rolled up (see the sketch after this list)
  • Storage Expansion to benefit from the spindle count
  • SSD for the most demanding applications (over iSCSI)
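
A minimal sketch of the roll-up idea behind the footprint reduction; every object name here (SalesDetail, SalesMonthly and their columns) is hypothetical, since the real archiving jobs depend on each schema:

-- Summarize detail rows older than two years into a monthly roll-up table,
-- then delete the detail to reclaim space.
DECLARE @cutoff DATETIME = DATEADD(YEAR, -2, GETDATE());

INSERT INTO dbo.SalesMonthly (TypeId, MonthStart, TotalAmount, DetailRows)
SELECT TypeId,
       DATEADD(MONTH, DATEDIFF(MONTH, 0, SaleDate), 0),
       SUM(Amount),
       COUNT(*)
FROM dbo.SalesDetail
WHERE SaleDate < @cutoff
GROUP BY TypeId, DATEADD(MONTH, DATEDIFF(MONTH, 0, SaleDate), 0);

DELETE FROM dbo.SalesDetail
WHERE SaleDate < @cutoff;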

Training:

  • I am encouraging my team members to use Twitter, to follow knowledgeable folks and to read tweets under the #sqlserver and #sqlhelp hashtags.
  • To read blog posts and subscribe to RSS feeds
  • To attend SQL Saturdays
  • To buy books (and e-books)
  • To get trained online and, if necessary, bring in someone to train us.
  • To attend SQL Rally 2011 or the PASS Summit 2011
  • To read about SQL Server Denali and install it on a VM for testing.

I see 2011 being an excellent year for us.

Happy Holidays everyone.

Maintaining Passwords. What do I do?

It comes as no surprise to me that online password databases are being hacked. Last week Gawker Media was compromised and its password database was stolen. The passwords were stored in an encrypted format but are still prone to dictionary attacks.

So how do I do it?

I use a different password for every account I create online. I combine uppercase, lowercase, numbers and non-alphanumeric characters when possible.

How do I keep up with different passwords?

Well, with so many passwords it is almost impossible for me to remember them all. I use an excellent piece of software: KeePass.

Do I create my own Passwords?

I do mostly. But lately I have let KeePass generate them for me.

How do I access my own password database from different places?

The easy way? Storing the password database on a USB thumb drive. Just make sure the database is protected with a strong master password.

The convenient way? Synchronizing to the cloud. I use Dropbox for that. I can download the database from the website or, if feasible, install the Dropbox and KeePass applications on the local machine I am working on.

KeePass Password Safe

What If I need to access the site using my SmartPhone?

No problem. I installed Dropbox and KeePass for Android. Both applications are available for pretty much all mobile platforms.

Is my smartphone saving passwords for websites?

Yes, for convenience, although it is a security concern.

What if my phone gets lost/stolen?

I installed Android Protector, so the most sensitive applications require a PIN to access them. I have also installed Lookout for Android, which allows me to back up sensitive data to the cloud, remote lock, remote locate and, best of all, remote wipe.

Do I change my passwords?

Yes. Every 3 months at the most.

This is what I do for my passwords. How do you do it?

Business Requirements. What are those? [#TSQL2sday]

This quick post is in reference to the T-SQL Tuesday for the month of December 2010, hosted by Steve Jones. The subject is “What issues have you had in interacting with the business to get your job done?”

T-SQL Tuesday, December of 2010

Have you ever been in a situation where you were told to develop and deliver a certain application because the business folks had already sold it to a particular buyer, complete with a preset delivery date? You are not alone. I have experienced similar situations without even knowing “what” we were supposed to deliver, let alone “how” to do it.

Business and Technology need to create a partnership, and communication needs to flow back and forth. This is where a very strong business analyst with technical knowledge comes into play. If the business would like to have a portfolio of products or services they can sell, it needs to be communicated to the business analyst, who in turn will ask the technical team questions to get an idea of the feasibility and viability of the product. If there are technical questions, they can be directed to the business analyst or, if required, even directly to the business owners.

Technical teams cannot develop without a solid business requirement, or at least a good understanding of what the business folks want. I agree that most of the time, especially with new products or services, complete business requirements are unfeasible, but the business analyst needs to capture what the business wants conceptually in a technical form, pointing out relevance and priority.

Iterative development and deployment is a trend more companies are following nowadays. As releases get to production much more quickly, business folks get the opportunity to analyze the original requirements/concepts and recommend changes. It also builds confidence with the technical team, as results are delivered in small but progressive releases. This is key to continuously building a partnership.

My opinion is that technical teams need to understand what they are going to develop and for what purpose, analyze and suggest the technology to be used, and work with the project manager and/or business analyst to come up with a timeline. Interact with the business folks to clarify any doubts, and prototype the solution before engaging in a full-blown project. Favor short releases if viable and build confidence with the business team. Never be afraid to ask questions; if you do not know “what” needs to be developed, rest assured that the “how”, the way you will do it, will be wrong.

Table Structure, Trading Maintenance with Performance [SQL Server]

We recently migrated a database from SQL Server 2005 x86 to 2008 R2 x64. Running under an Enterprise Edition license, we were able to take advantage of partitioning. The database had several tables with similar structures and row counts north of 2 billion.

Old Structure:

  1. 3 fields
    1. TypeId INT
    2. ContentData VARCHAR(100)
    3. insertionDate DATETIME
  2. Unique constraint: TypeId, ContentData. InsertionDate was added to comply with partitioning requirements (a partition-aligned unique index must include the partitioning column).
  3. Table had a partition per month (see the sketch after this list).
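
A sketch of the old layout; the boundary values and filegroup mapping are illustrative (the real table had one boundary and one filegroup per month):

-- Monthly partitioning on insertionDate. A partition-aligned unique index
-- must include the partitioning column, hence insertionDate in the key.
CREATE PARTITION FUNCTION pfMonthly (DATETIME)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.sampletable
(
    TypeId        INT          NOT NULL,
    ContentData   VARCHAR(100) NOT NULL,
    insertionDate DATETIME     NOT NULL,
    CONSTRAINT UQ_sampletable UNIQUE (TypeId, ContentData, insertionDate)
) ON psMonthly (insertionDate);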

Pros

  1. On the first week of each new month we marked the previous month's partition as read-only.
  2. A one-time file backup was performed for the previous month's partition (see the sketch after this list).
  3. Deltas (new data) would always be located on the last partition.
  4. The current month's partition would be the only piece left in read-write mode.
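
A sketch of that monthly routine, with an illustrative database and filegroup name (marking a filegroup read-only requires exclusive access to the database):

-- Freeze last month's filegroup, then take its one-time filegroup backup.
ALTER DATABASE SampleDB MODIFY FILEGROUP FG_2010_11 READ_ONLY;

BACKUP DATABASE SampleDB
    FILEGROUP = 'FG_2010_11'
    TO DISK = N'X:\Backups\SampleDB_FG_2010_11.bak'
    WITH CHECKSUM, INIT;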

Cons

  1. Searching for ContentData for a particular TypeId scanned the index once per partition
  2. Bulk data comparison between new and old TypeId + ContentData performed an index seek on each partition

For example, retrieving ContentData for a single TypeId touched every partition:
SELECT ContentData
INTO #temp
FROM sampletable
WHERE TypeId  = 41;
Table 'sampletable'. Scan count 92, logical reads 7613, physical reads 223, read-ahead reads 7808, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Original Plan

New Structure:

  1. 3 fields
    1. TypeId INT
    2. ContentData VARCHAR(100)
    3. insertionDate DATETIME
  2. Unique constraint: TypeId, ContentData.
  3. Table has a partition per TypeId range (50 values each, e.g. 1, 2, …, 50).
  4. Index on insertionDate with TypeId as an included column
  5. Page compression enabled (see the sketch after this list).
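
A sketch of the new layout; the boundary values are illustrative (one partition per range of 50 TypeId values):

-- Partitioning by TypeId means seeks on TypeId touch a single partition,
-- and the unique key no longer needs insertionDate.
CREATE PARTITION FUNCTION pfTypeId (INT)
    AS RANGE RIGHT FOR VALUES (51, 101, 151);

CREATE PARTITION SCHEME psTypeId
    AS PARTITION pfTypeId ALL TO ([PRIMARY]);

CREATE TABLE dbo.sampletable
(
    TypeId        INT          NOT NULL,
    ContentData   VARCHAR(100) NOT NULL,
    insertionDate DATETIME     NOT NULL,
    CONSTRAINT UQ_sampletable UNIQUE (TypeId, ContentData)
) ON psTypeId (TypeId)
WITH (DATA_COMPRESSION = PAGE);

-- Supporting index for delta retrieval by date, aligned with the scheme.
CREATE NONCLUSTERED INDEX IX_sampletable_insertionDate
    ON dbo.sampletable (insertionDate)
    INCLUDE (TypeId)
    ON psTypeId (TypeId);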

Pros

  1. Searching for a particular TypeId + ContentData generates a single index seek with a scan count of 1 (1 partition).
  2. Scanning for ContentData for a particular TypeId generates an index scan on only 1 partition
  3. Bulk data comparison between new and old TypeId + ContentData performs an index seek on one partition only
  4. Compression reduced I/O reads by 50%

Cons

  1. All data files are in read-write mode. Every file needs to be considered in the backup maintenance plan
  2. Any partition can suffer block updates even if it holds data from 4 years ago (it cannot be tagged as historical)
  3. Retrieving deltas performs an index seek on every partition

The same query now touches a single partition:
SELECT ContentData
INTO #temp
FROM sampletable
WHERE TypeId  = 41;
Table 'sampletable'. Scan count 1, logical reads 5449, physical reads 66, read-ahead reads 5445, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
New Plan

Conclusion:

The table is mostly used for retrieving and comparing data, and the retrieval call always passes in TypeId as a parameter. The system now performs an index scan on a single partition (scan count of 1). For data comparison, an index seek is performed for a unique value under 1 partition only. Data compression reduced I/O reads by 50%.

In the past, data extraction and comparison scanned through all the partitions, generating excessive I/O, but we did not need to worry about continuously backing up older partitions, as they were already flagged as read-only and backed up.

With this exercise we have effectively increased performance at the expense of maintenance. Well worth the change.