Database Server Provisioning: What’s Best Practice, What’s Real

Many months ago, I was tasked to recommend a hardware upgrade for a medium-sized, mission-critical, dedicated OLTP database server running SQL Server 2005. The client had gone through its regular 5-year maintenance/upgrade cycle and had added applications that were not previously foreseen, putting added stress on the database server. The combined size of the databases is rather small, roughly 50GB accumulated over almost a decade, around 15GB of which is highly active. Furthermore, the server caters to roughly 6,000 active users, about 10% of whom perform complex, highly relational computation and aggregation. Most of the processing consists of complex queries over large data sets and is largely read-intensive.

Though the database server is holding up, it is now considered near end-of-life. Server utilization has increased to 60%-80% during regular working hours, and on occasional peaks performance degrades noticeably.

The client's directive is for me to find the best possible configuration for the least amount of money. In the segment I work in, it is almost impossible to get a good idea of what works best based on actual production setups and experience. In the absence of such information, and lacking experience (I do NOT get to do upgrades on a monthly or even regular basis; who does, anyway?), the most logical thing to do is consult the best-practice articles and guides vastly available on the Internet.

What is Best Practice?

What's interesting about most of these articles and guides from Microsoft and various third-party experts is that they are anchored on the premise of an ideal environment under the best possible circumstances. Commonly, these best practices are presented not in terms of working around limitations but in terms of taking advantage of the best solution available out there. If your circumstances are far from ideal, you really have to work your way through these articles to arrive at a balanced solution that is acceptable to a certain level. But there is nothing wrong with that; this is exactly why they are called 'best practices'. Nobody can foresee all the variables and factors that will shape what your actual setup becomes.

What is Real?

Out there, however, reality bites. We all have to work within limits: budgets are tight, hardware vendors carry only certain server models (some needed devices have to be imported from another country), and we rarely have the luxury of playing with the real hardware until it is paid for and delivered. There are lots of other unique and strange variables to hurdle as well.

One has to do some heavy sleuthing to present the client with options that are neither extravagant nor unreasonably idealistic, in the hope that one of those options proves acceptable while still adhering to most, if not all, of the ideals that are staple in these best practices.

The Recommendations

So I came up with the following:

  • Recommendation #1: Upgrade from SQL Server 2005 to SQL Server 2012. Though SQL Server 2005 can provide what the client requires today, the client's thirst for technological solutions is picking up the pace; they seem bent on evolving their systems rapidly. This upgrade needs to be justified, though: for one, SQL Server 2012 for a 12-core server doesn't come cheap. The justification is that SQL Server 2005 SP2 already reached end-of-life in 2010, while SP3's life is on lease. Furthermore, the boatload of new features and performance enhancements in 2012 can surely justify the upgrade. The initial goal is simply to move to 2012; once done, the direction is to take advantage of the bevy of new features available. FILETABLE and the new full-text search capabilities come to mind rather quickly. I'd be able to make a case for upgrading on these two features alone.
  • Recommendation #2: Stick with Windows Server 2008 R2, mainly because the client is not ready to upgrade its Active Directory to 2012 across all servers. Two years ago we upgraded the OS and Active Directory from Windows 2003, and every server moved to 2008 R2. Running on 2008 R2 has been flawless ever since, so I believe Windows 2012 is not necessary at this point. 2008 R2 is very much potent and very stable. I would probably recommend they skip Windows 2012 entirely and wait for a future release of the server OS.
  • Recommendation #3: A mid-level server with direct-attached storage (DAS) that can handle more than 8 drive bays without special kits. The old servers had only 8 bays for DAS HDDs, and SANs or servers with more bays were beyond this client's reach, money-wise, until IBM introduced certain very affordable server models; IBM was able to offer me a server that takes up to 24 bays. These new servers will allow me to configure more RAID arrays.
  • Recommendation #4: From a 2008-generation 2 x 4-core setup, the client is moving to a two-socket, 6-core-per-socket server: 12 cores of current-generation technology with faster clocks, hyper-threading, and double the cache. 8 cores per socket is beyond reach, as it would also increase spending on SQL Server 2012, which can now only be licensed on a per-core basis. Besides, there aren't many CPU issues with the existing setup, and this new processing power should provide ample room for growth in the coming years.
  • Recommendation #5: From 16GB of RAM to 32GB. A fairly good portion of the database would fit into this space. As RAM gets cheaper in the coming years, we could probably take this server to 64GB and make RAM a non-issue; the current database is still small. Doubling RAM for now will help tremendously and should cover some unforeseen growth, as there is a big chance newer apps will utilize this server.
  • Recommendation #6: From the existing 6 x 7krpm SATA spindles on a single RAID controller (configured as RAID 1 and RAID 5), I recommended 14 x 7krpm SATA drives on 2 RAID controllers. This is where I did the most thinking, weighing the client's directive to spend the least amount of money against still achieving some of the ideals of what a best practice should be; after all, storage is the most critical and the slowest of all the system's components. Going for the considerably faster 15krpm SAS drives would have quadrupled the storage cost. My notion was that, with Recommendation #3 in hand, I would still gain substantial disk I/O performance over the existing setup if I could spread at least three RAID 1+0 arrays across 2 RAID controllers sharing the load. This way I would be able to separate DATA, LOG, and TEMPDB. The best practice out there is to have at least 3 arrays so that SQL Server's data, log, and tempdb can each be placed on a separate array; that is what people are evangelizing, or so it seems.

What Came Out of My Recommendations

Now that the client has approved all my recommendations and the components for upgrading were delivered by various vendors, I would like to show what came out of it.

All recommendations turned out as expected except Recommendation #6, which proved interesting and became another example of best practice getting tweaked down to an acceptable level when harsh reality bites. The 2-RAID-controller setup is not possible (so the vendor claimed): the first RAID controller (built-in) gets disabled by the second (add-on). I was fed wrong information from the start by another vendor and only confirmed it would not work after the servers were delivered and configured. Unfortunately, the client ordered the server from a different vendor, so it cannot hold the first vendor responsible for the wrong information. The client cannot afford to wait for another 30-day delivery, so it has to live with a single RAID controller setup.

So why 14? Why not 24?

First, given an extra push, the client could have gone for 24. What concerned me at the time was that I couldn't be certain of the performance gain from having the 24, not until the client had paid and the server was delivered. This is one of the realities I have to face: server hardware vendors around here cannot give you figures on how their machines would fare under the load you expect. So I recommended a safe setup. I came up with 14, figuring that if the performance gain over the existing setup comes up short in one of the unforeseen rounds of expansion, I could still tell the client to fill the remaining bays with the faster 15krpm SAS drives without decommissioning any disks. And now, after playing with the actual machine, my caution seems to have hit the mark.

Testing with SQLIO, I consistently get optimal performance with an array of 8 spindles; increasing the spindle count does not improve things much further. I am not sure whether this holds for a different server brand, spec, or configuration. So I ended up with the following:

Array 1, RAID 1, 2xDisks, Single Partition (OS, MSSQL)
Array 2, RAID 1+0, 8xDisks, Single Partition (DATA)
Array 3, RAID 1+0, 4xDisks, Two Partitions (LOG @ Partition 1, TEMPDB @ Partition 2)

The above setup, far from ideal, gained me more than double the I/O performance of the existing setup just by running SQLIO. Testing further, being able to spread SQL Server's data, log, and tempdb across arrays improved the system's performance way beyond what I expected. I'll probably write another post discussing at length how I arrived at this decision; some SQLIO numbers would definitely shed light on it.
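As a sketch of how the three arrays map to SQL Server file placement (the drive letters, database name, and paths below are my assumptions, not the client's actual layout):

```sql
-- Hypothetical mapping: D: = Array 2 (DATA), E: = Array 3 partition 1 (LOG),
-- F: = Array 3 partition 2 (TEMPDB). All names are illustrative only.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf')
LOG ON
    (NAME = SalesDB_log, FILENAME = 'E:\SQLLog\SalesDB.ldf');
GO

-- Relocate tempdb to its own partition; takes effect on the next service restart.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');
```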

In the end, I believe I was able to satisfy what was required of me without being unreasonably idealistic.

**************************************************

Bataan, Philippines

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

SQL Server Data Tools (SSDT) Lengthens Life of SQL Server 2005!

The arrival of SQL Server 2012 last week sparked a semblance of positivity among people who have been using version 2005, who held off on upgrading and skipped the two SQL Server 2008 releases. For a lot of folks like me, SQL Server 2012 gave a big boost that now lengthens the lifespan of 2005 databases. Not that 2005 is dying, but the new tooling lends more than a helping hand to many of 2005's inherent goodies, enough that some might consider sticking with the version a little bit longer. Enter SQL Server Data Tools (SSDT), formerly code-named "Juneau", the newest tool Microsoft has provided for its legion of SQL Server professionals.

Developing for SQL Server 2005

Though SQL Server 2012 has a lot to offer, SQL Server 2005 is very much a formidable data platform that will not go away anytime soon. My company still supports a lot of clients running SQL Server 2005 who may not consider an upgrade anytime soon. And as a matter of strategy, we still very much believe in targeting the most common denominator in our target space. So, just in time, SSDT gets initiated with a new project that must support at least SQL Server 2005.

Working Right Away

The last time I used SSDT was with SQL Server 2012 CTP3, for some presentations and demos. I downloaded the new release earlier today, wasted no time, and had it installed smoothly. No hiccups. It is exciting to play with a new tool on something that won't just be laid aside and forgotten after some tinkering; this time, SSDT gets to do real work right away on a small database.

Here are some of the goodies SQL Server 2005 users would get from a production release of SQL Server Data Tools:

  • Stand-Alone SSDT (No SQL Server 2012, No Visual Studio 2010 needed). I was glad Microsoft made a stand-alone distribution of this tool. The last time I tried it, it was bundled with SQL Server 2012 CTP3. Now you don't need an installation of SQL Server 2012 or Visual Studio 2010 to use SSDT: the web installer installs the necessary SQL Server 2012 and Visual Studio 2010 SP1 components so it can run on its own. I installed it on a bare Windows 7 machine with no prior SQL Server or Visual Studio components, and it went in with no hassles.
  • SSDT Supports SQL Server 2005 as Target Platform. As I have tried before, SSDT fully supports development for SQL Server 2005 (the lowest version allowed). You get all the bells and whistles of SSDT even if your target deployment platform is only SQL Server 2005: SSDT tunes itself so you are assured that every object in your project is fully supported by 2005's engine and syntax.
  • TSQL Code Analysis On Build. The most common coding malpractices are easily detected. There are around 14 coding rules checked for violations. It is up to you to weed out the problems as they appear as warnings (they can be set to appear as errors if you want to tidy up your TSQL code).
  • Catches TSQL Design/Compile-Time Errors. For users who were content with just SQL Server Management Studio, catching SQL compile-time errors isn't routine; often, code errors surface only when executed against a database. For example, the statement "SELECT col1, col2 FROM table1 ORDER BY col1, col3", when parsed or included in a stored procedure, produces no error upon creation of the stored procedure. With SSDT, such errors can be resolved at design/compile time.
  • Import SQL Server 2005 Databases. This comes in very handy as I attempt to clean up some old 2005 databases and probably copy some old stuff into the new project I am doing. I was able to import 2005 databases into SSDT projects with the goal of running some code analysis. As expected, it flagged a lot of areas the old platform never flagged as potential problems (e.g. unresolved references, data typing inconsistencies, possible points of data loss, etc.). I was also able to easily copy old structures and stored procedures into a new SSDT project I am working on.
  • Offline Development. Unless I need to get some data in and out of a database while developing my SQL Server objects, I can completely do my development without needing to connect to a development or production database.
  • It is FREE! Thank you Microsoft!
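The ORDER BY example mentioned above can be reproduced in SSMS: because of SQL Server's deferred name resolution, the procedure below creates without error even though nothing guarantees col3 exists (table and column names here are hypothetical).

```sql
-- Creates successfully in SSMS; the invalid ORDER BY column only
-- fails when the procedure is executed. SSDT flags it at build time.
CREATE PROCEDURE dbo.GetRowsOrdered
AS
    SELECT col1, col2
    FROM table1
    ORDER BY col1, col3;  -- col3 is not selected and may not even exist
GO
```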

My initial feedback on SSDT is that it is a very helpful and very promising tool. For a first release, there are so many goodies that alleviate tasks that used to be very tedious. In just a few hours of tinkering, I was able to fashion a database with ease.

There is a lot more to discover in SSDT. So far only a few hours of playing time have been spent, and I already feel my productivity has been tremendously boosted.

Indeed, SQL Server 2012 is now the flagship product of Microsoft when it comes to handling data … but with SSDT’s help, SQL Server 2005 is here to stay for a bit more.

For more information about SSDT, check out the links below:

 

The Philippine Eagle-Owl. The largest of all owls endemic to the Philippines. It stands almost 20 inches tall and preys on small animals such as rodents and snakes.

SQL Server: Real Time Data Processing? When Not To …

In these times, when the likes of Google, Yahoo, eBay, and CNN run their massive Internet applications and services in mammoth data centers filled with thousands of powerful servers, ready to process millions of user requests and data streaming through high-bandwidth networks, it is almost automatic for us developers and database enthusiasts to think of processing data in real time whenever somebody asks how we would handle a massive amount of data in a very constrained, highly distributed, and unreliable environment. Is there such an environment? Let us see.

Let us imagine the rural banks in the country. These rural banks are small and can be found in the most remote of barangays. They are not so technology-savvy, and they still use traditional means of communication. Perhaps only a few have huge servers, while the rest use only personal computers. To make it more constrained and limited, let us imagine that these rural banks use varying forms of networking technology: perhaps some use unreliable dial-up lines and low-bandwidth leased lines. To make it even more pitiful, let us imagine that some can only afford decades-old floppy diskettes to transport data from one branch to another.

Here is the big question: can we come up with an affordable solution that would let them experience secure, nationwide bank-to-bank interoperation?

Our edge in technology might convince us the exercise would be futile: how can we interconnect them when not all of them are connected to a WAN and we have no assurance of a reliable network connection? We might also wonder how to integrate those that still use diskettes.

Yes, absolutely, we can cook up some system for them ... but there is one crucial question we need to ask: Do We Always Have To Process Data in Real Time?

To some, yes. For me? NO.

Anyway, let us not delve too much into what that solution would look like. Instead, let us take a look at an old, reliable technology, one that has resurfaced in some of the latest software offerings, that can help us solve our rural bank challenge.

I am referring to SQL Server 2005's Service Broker. Those familiar with MSMQ will easily comprehend this new feature of SQL Server. Service Broker is designed around the basic functions of sending and receiving messages (just like email). In its advanced form, messages can be secured, queued, and routed to specific destinations regardless of whether the communication channel is online or offline (again, just like email).

Imagine one rural branch (Quiapo Branch) sends a money-transfer message to another (Sulo Branch) to notify it that X amount of money can be released to an authorized recipient. The Quiapo Branch teller just clicks "Send Message" without knowing that their DSL line went down. Using Service Broker, the teller won't even notice: Service Broker simply queues and keeps the message until the DSL line resumes, then sends it to the intended destination. If the DSL interruption lasts about an hour, the transaction delay would be just a bit more than an hour.

With this, we give the rural banks the means to operate as though they were always online. They need not be bothered when they are offline; the system just resumes normal operations automatically. So unreliable and very slow communication/network lines won't be that much of an issue.
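A minimal single-database sketch of the Service Broker primitives behind this scenario (all object names and the XML payload are made up; a real branch-to-branch deployment would add endpoints and routes for cross-server delivery):

```sql
-- Define what a valid message looks like and who may send it.
CREATE MESSAGE TYPE [//Bank/TransferNotice] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Bank/TransferContract]
    ([//Bank/TransferNotice] SENT BY INITIATOR);

-- Each branch gets a queue and a service; the target binds to the contract.
CREATE QUEUE QuiapoQueue;
CREATE QUEUE SuloQueue;
CREATE SERVICE [//Bank/QuiapoBranch] ON QUEUE QuiapoQueue;
CREATE SERVICE [//Bank/SuloBranch] ON QUEUE SuloQueue ([//Bank/TransferContract]);
GO

-- The teller's "Send Message": the message is queued locally and delivered
-- whenever the target becomes reachable.
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Bank/QuiapoBranch]
    TO SERVICE '//Bank/SuloBranch'
    ON CONTRACT [//Bank/TransferContract];
SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Bank/TransferNotice]
    (N'<transfer amount="5000" recipient="authorized-person" />');
```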

So who says that we always need to process data in real time?

I'll try to find time to come up with something that tells more about Service Broker. Perhaps somebody out there would want that 🙂

——————————
Additional Information: Service Broker can be used in a lot of scenarios. It can even be used to customize complex data replication requirements. It can split data paths into multiple channels so messages can be distributed across as many queues as possible (or queued to a file and transferred on floppy disks), thereby increasing capacity and the chances of transporting data. This also means Service Broker can be used to scale applications. It is also an excellent solution for integrating loosely coupled applications.


Thesaurus Based Searching in SQL Server 2005

It's been a while since I posted something here. I was able to get some time to check out again some features of SQL Server I have been eager to explore. I have been waiting for this ever since I experimented on it in version 2000.

Ever heard of thesaurus-based searching in SQL Server? This feature has been present since version 2000, but as an undocumented feature, which I learned about from Hilary Cotter. Google this guy; he has a lot to say about SQL Server, and he is the go-to guy when it comes to full-text searching. He comes to mind every time I do full-text stuff with SQL Server. I hope he still uses the bibliographic data I gave him a few years ago.

So what is this feature? For example, if you search for the word "color", records containing synonyms of "color" such as "hue" and "tint" can be returned as well. The feature also allows you to expand your search keys: searching for "100" can return records containing "100", "one hundred", or "isang daan" (100 in Filipino).

How can this be done?

First, check out the files in \Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData. The folder contains XML-based thesaurus definition files. SQL Server uses one of these files depending on your language setting; for English, check out tsENU.xml. In the file, you may add the following entries:

<replacement>
<pat>color</pat>
<sub>hue</sub>
</replacement>

<replacement>
<pat>hue</pat>
<sub>color</sub>
</replacement>

<expansion>
<sub>100</sub>
<sub>one hundred</sub>
<sub>isang daan</sub>
</expansion>

Then, enable your database for fulltext. I'll leave this for you to explore; you just need a few mouse clicks to enable your DB for fulltext searching.
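For those who prefer script over mouse clicks, here is a sketch of the 2005-style setup (the catalog name, table, and key-index name are assumptions for illustration):

```sql
-- The indexed table needs a single-column unique index (here, the primary
-- key PK_book) to serve as the fulltext key.
CREATE FULLTEXT CATALOG BookCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.book (title)
    KEY INDEX PK_book
    ON BookCatalog;
```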

The following queries will somehow give you an idea on how to use this feature. You may try and modify the scripts for you to use in your applications.

SELECT title FROM book
WHERE CONTAINS(title, ' FORMSOF (THESAURUS, "hue") ')

This query will return records having the words "color" and "hue".

SELECT title FROM book
WHERE CONTAINS(title, ' FORMSOF (THESAURUS, "one hundred") ')

This query will return records having the words "100", "one hundred", and "isang daan".

SELECT title FROM book
WHERE CONTAINS(title, ' FORMSOF (THESAURUS, "isang daan") ')

This query will return records having the words "100", "one hundred", and "isang daan".

However, SQL Server's search is limited to what is defined in your XML-based thesaurus definition files. As far as I know, Microsoft has not provided a fully functional thesaurus definition file; MS left us only templates (don't forget to remove the comments in the files). It is up to us to define the words we want included in searching. Yeah, yeah ... quite a lot of linguistic work for us techies. Let us hope MS is kind enough to include a fully functional thesaurus definition file next time.

That's all, folks. I hope this is enough to give you an idea and get you started implementing the feature in the apps you develop.


Better Than Before

This is so cool. I haven’t expected that Microsoft would give this for free. Since the first iteration of MSDE, I had always wished that the software giant would someday give more features for free.

With the CTP release of SQL Server Express Edition with Advanced Services, two of the features I wished would become free actually did, and are now FREE. I am referring to the following:

  • Full Text Search
  • Reporting Capabilities

I have a product that used to require at least SQL Server Standard Edition simply because of full-text search. I also tended to have difficulty giving the product a decent reporting capability.

Because of these missing pieces, I had to require our clients to buy at least SQL Server Standard Edition.

Now I only have to sell our product, and no longer worry about selling SQL Server to clients that do not require databases greater than 4GB.

This will be a boon to our efforts, and I am expecting to sell more of our product.

Now I can't ask for more ... except maybe a few more GBs from Microsoft. Make it 10GB 🙂

SQL Server 2005 Workgroup Edition

People may have missed this great offering from Microsoft. If one closely checks the feature set of this edition, one can say it is even more powerful than SQL Server 2000 Standard Edition, and way cheaper. Here is some quick info:

  • 2 CPU Max Supported
  • 3GB RAM
  • 64-bit support via WOW64
  • No Database Size Limit

In my opinion, this is a lot!!! These used to be the numbers for SQL Server 2000 Standard Edition.

Microsoft made the right move in coming up with an edition targeting the small business sector.

How Often Do We Change Data Types

A colleague of mine raised some intriguing points on why SQL Server's user-defined datatypes (UDDs) can't be edited; you do the editing by dropping the datatype and recreating a new one. His position was that he avoided using UDDs simply because of this limited mechanism. This led me to another question.

The question really is: how often do we need to change the definition of our data? In its lifetime, I would say it should not change at all once our understanding of its nature is final. If a data definition changes from time to time, there is probably a problem in our foresight or in our understanding of the nature of that data entity, and in that case it is not a good candidate for a UDD. Worse, if we are not sure of a data entity's definition, we have to get back to the drawing board and try to understand its nature once again. This problem reflects a lack of resolve in determining the true nature of certain data entities.

However, I do recognize that by the time we really need to change the definition of a data entity, the reason for doing so is usually significant and justifiable, as it will have a great impact on both our database and our application. The issue comes down to how clear the pictures in our minds are.

Here is my case with a few examples to show how I deal with the situations when understanding and defining data types.

Scenario #1: How stable can we define a data type?

BirthDate as DATETIME
EmailAddress as NVARCHAR(320)

Scenario #2: Questionable Candidates for UDD Creation

CustomerID as CHAR(5)

Scenario #3: The Lack of Understanding Example

Age as INTEGER – (assuming in Years)

Scenario #4: Things Change Over Time

PhoneNumber as CHAR(7)

The scenarios above are the usual things I encounter when one defines the type of a certain data.

In scenario #1, how often do we need to change these datatypes? If we have these kinds of data and a clear understanding of their very nature, defining UDDs for them is sound. Thus, EXEC sp_addtype BirthDateType, datetime, 'NULL' would almost guarantee that we will not encounter problems in the future or need to edit the definition.

Scenario #2 is a classic example of lack of foresight or anticipation. A good DB designer could easily spot this: it is just a matter of time before CHAR(5) becomes CHAR(10), CHAR(15), or better yet CHAR(25). I personally declare IDs as VARCHAR(255), as I don't have a habit of making ID types one of my system keys. I'd rather have IDs as VARCHAR(255), indexed with a UNIQUE constraint, making them candidates for user-consumable keys. As for me, I would not be inclined to create a UDD for this situation.
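A sketch of that arrangement (table and constraint names are mine): the wide VARCHAR ID is the user-facing, uniquely indexed identifier, while joins ride on a separate internal key.

```sql
CREATE TABLE dbo.Customer (
    CustomerKey INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,  -- internal system key
    CustomerID  VARCHAR(255) NOT NULL,                    -- user-consumable ID
    CONSTRAINT UQ_Customer_CustomerID UNIQUE (CustomerID)
);
```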

Scenario #3 is also a classic example of how we fail to understand the very nature of certain data. Most of us, without batting an eyelash, almost automatically declare Age (in years) as INTEGER, only to find out later that we need to store values like 10.25 for 10 years and 3 months.

Scenario #4 forces us to change. For decades the Philippines had 6- or 7-digit local phone numbers, so we could have saved a few bytes and declared the column CHAR(7). Now we are seeing 8-digit phone numbers and we need to adapt.

If we had defined UDDs for all the samples above, scenarios #3 and #4 would probably force us to edit them. Using EXEC sp_addtype PhoneNumberType, 'CHAR(7)', 'NULL', we defined a UDD for #4. Dropping and recreating this UDD to support CHAR(8) could create a major headache in our app: we would need to revisit our stored procedures and character-validation code, the formats and pics we defined, the CONSTRAINTs we declared, and other stuff in both the DB and the apps, checking everything related to PhoneNumberType. How do I avoid this? Should I need to support CHAR(8), I would definitely NOT "edit" by dropping and recreating my UDD. Instead, I'll create a new one with

EXEC sp_addtype PhoneNumberType2, 'CHAR(8)', 'NULL'

and subsequently create new 'phone' columns defined as PhoneNumberType2. Eventually I will have to deprecate PhoneNumberType in favor of PhoneNumberType2, but that would be another story.

This way we have zero impact on our apps and databases and have avoided possible headaches.
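Put together, the side-by-side migration described above might look like this (the table and column names are hypothetical):

```sql
-- Register the new type alongside the old one; parenthesized system types
-- must be quoted when passed to sp_addtype.
EXEC sp_addtype PhoneNumberType2, 'CHAR(8)', 'NULL';
GO

-- Add a new column of the new type, backfill it from the old column, and
-- deprecate the old column and PhoneNumberType on our own schedule.
ALTER TABLE dbo.Contact ADD Phone2 PhoneNumberType2 NULL;
```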

What Hinders SQL Server BI’s Adoption?

There was a thread in one of the forums I subscribe to on why adoption of SQL Server's Business Intelligence features is slow to gain steam in our region. Where I am, SMEs are already the biggest of corporations. Businesses smaller than SMEs are numerous and make up most of the market; companies with fewer than 15 employees are plenty.

So what hinders adoption?

BI is expensive to deploy. For companies smaller than SMEs, BI is a luxury; only the big ones can afford to deploy it and use it to the hilt. BI is best separated from the OLTP server, so it requires setting up a new box, buying a license, and taking on new things to manage and a new box to take care of. BI is also resource-hungry: one needs a lot of disk space and memory, plus the muscle to crunch and analyze data and turn it into information. One can have OLTP and BI functionality in one box, but each significantly affects the other.

With this in mind, what is left are the big players. Only DB developers from big companies get the chance to get their hands dirty on BI. I myself have only touched it at an experimental level, not in production. I have proposed this stuff to a few clients who could afford it, but after learning what it entails to stand up a data-warehousing environment, they would gladly say they will consider it in the future, especially if they can live with the reports we built together with the apps ... only for it to be buried in oblivion.

This situation also prevents us developers from being exposed to more BI boxes in the workplace, since not every customer thinks it is necessary. Now, it is a challenge for us to make the justifications for them to take on BI, even a meager setup. It is also a challenge for us to become involved in the "convincing" phase instead of letting marketing pitches drown their already overwhelmed minds. Instead of positioning ourselves only toward learning and self-benefit, let us feed the minds of those who pay our salaries, buy our servers and licenses, and finance our playground. I say, let us give them the taste test. We reap what we sow later.

My SQL Server 2005 Top 10

This was supposed to be my tech briefing topic to the PHSSUG core during its 2nd meeting (the UG's name wasn't PHSSUG at that time), but unfortunately there were more pressing matters to tackle, so PHSSUG decided to have this posted in the forums.

MS SQL Server has had a tremendous impact on me, and with each new version since 6.0, I am more convinced that this product is the best Microsoft has ever had and the product to beat.

So here is my Top 10!

At #10: New Data Types

  • New LVTs (Large Value dataTypes)
  • XML

In SQL 2000, we had text, ntext and image. Have any of you tried using these datatypes with data going beyond 8K? I am sure you have your own stories of how you processed things by chunk. MS will be deprecating the old BLOB-handling datatypes (text, ntext and image), so you should migrate your old BLOB columns to the new LVTs. The new LVTs allow you to store up to 2GB of data. Under the unified large object programming model, LVTs can also be used much like ordinary shorter strings. No more very unusual ways of handling BLOBs.

  • MSSQL2000
    (text, ntext, image)
  • MSSQL2005
    varchar(max), nvarchar(max), varbinary(max)
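Here is a minimal sketch of an LVT in action (the table name is hypothetical); notice that the new types work with plain string functions, with no chunked READTEXT/WRITETEXT gymnastics:

```sql
CREATE TABLE Articles (ArticleID INT, Body VARCHAR(MAX))

-- Store a value well past the old 8K limit
INSERT Articles VALUES (1, REPLICATE(CAST('x' AS VARCHAR(MAX)), 10000))

-- Ordinary string functions apply directly to the LVT column
SELECT ArticleID, LEN(Body) AS BodyLength FROM Articles
```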

At long last, we now have native XML as a data type in SQL Server 2005. IMHO, XML support in SQL Server 2000 was anemic. I haven't played with XML in 2005 yet, so I'll remain skeptical. But the fact is that XML now sits on the same level as our CHAR, VARCHAR, INT and MONEY datatypes. I can now imagine passing XML to one of our stored procedures as a native XML parameter, and not having to worry about unstructured data. As I have read, there is a wide range of XML processors/functions built into the Transact-SQL language for manipulating XML .. so this is gonna be exciting. Here is a small snippet to give you an idea of what we can do with an XML datatype:

Sample:

CREATE PROC ProcessMyXML @XMLParam XML
AS
-- you can do whatever you want with your XML
SELECT @XMLParam AS ReceivedXML

At #9: CLR Integration

The .NET Framework will now be hosted inside SQL Server 2005. With that, our SPs, triggers, UDFs and UDTs can now be written in managed code. We can also take advantage of the vast functionality found in the .NET Framework library and ADO.NET. We now have ways to handle arrays, collections, bit shifting and classes, plus great string handling and regular expressions.

In SQL Server 2005, we now have 3 ways to write “Hello World”: via T-SQL, VB.NET and C#.
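The T-SQL side of wiring up a CLR object looks roughly like this minimal sketch (the assembly path, class and method names are hypothetical, and CLR must first be enabled on the instance):

```sql
-- CLR integration is off by default
EXEC sp_configure 'clr enabled', 1
RECONFIGURE

-- Register a compiled .NET assembly with the database
CREATE ASSEMBLY MyClrLib FROM 'C:\libs\MyClrLib.dll'

-- Expose one of its static methods as a T-SQL scalar UDF
CREATE FUNCTION dbo.RegexIsMatch(@input NVARCHAR(MAX), @pattern NVARCHAR(200))
RETURNS BIT
AS EXTERNAL NAME MyClrLib.[MyClrLib.StringUtils].RegexIsMatch
```

After this, dbo.RegexIsMatch can be called from any query just like a native T-SQL function.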

At #8 –> SQL Server Express

This has got to be one of the best moves ever by Microsoft: giving SQL Server away for free. I have posted this somewhere already, but here is some info once again:

  • SQL Server 2005 Express
  • Supports 1 CPU, 1GB RAM and 64-bit (WOW)
  • Goodbye to query governor (very very nice MS)
  • Up to 4GB of database
  • Express Manager (Easy-to-use Management Tool)
  • Packaged with Limited Business Intelligence features
  • Seamless migration to a fuller, much more powerful version of SQL Server
  • Not a learning tool anymore but primed for production
  • It is still FREE. Download it here (SQL Server 2005 Express)

Almost everything that very small businesses (and much smaller ones) need is in Express. There are a lot of applications that can fit in a 4GB database. I'd say all FoxPro (and other xBase) and Access installations out there should now consider switching to SQL Server via Express.

Why go for LAMP when you can have Windows, IIS, Express and ASP.NET?

At #7 –> SQL Server Report Builder

With all the high-end and developer/administrator-oriented features shipped with 2005, it is good that Microsoft has also included something for end users, the not-so-techie users and analysts. SQL Server Report Builder is the new report authoring tool designed for less technically sophisticated business users with zero development experience. This promotes self-service reporting, which is very cool. And this is GOOD NEWS to developers. How many of us try to distance ourselves when given a programming task related to reporting?

I have yet to explore this stuff for our projects. So far, I have just built a couple of simple reports, and yeah, it was easy to use. Hopefully it lives up to what I have read.

At #6 –> Error Handling (TRY … CATCH)

Ho ho ho! MS is really like Santa. At last, we have a decent horror handler in version 2005. Oops, I mean error handler. I can't explain much more, but by looking at the sample below, you will have an idea of how errors are handled in T-SQL.

BEGIN TRANSACTION
GO
BEGIN TRY
    -- note: a missing-table error is raised at compile time and would NOT be
    -- caught by a TRY at this same scope, so we raise a runtime error instead
    SELECT 1 / 0 AS Result
END TRY
BEGIN CATCH
    SELECT Error_Message() AS ErrorMessage
END CATCH
GO
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
GO
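Inside the CATCH block we get a whole family of error functions beyond Error_Message(); a quick sketch:

```sql
BEGIN TRY
    SELECT 1 / 0 AS Result
END TRY
BEGIN CATCH
    SELECT
        ERROR_NUMBER()   AS ErrNumber,    -- e.g. 8134 for divide by zero
        ERROR_SEVERITY() AS ErrSeverity,
        ERROR_LINE()     AS ErrLine,      -- line where the error occurred
        ERROR_MESSAGE()  AS ErrMessage
END CATCH
```

These functions return NULL when called outside a CATCH block, so they belong inside the handler.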

At #5 –> DDL Triggers

Here is once again something great and new from the MS SQL Server team. I have always wished that every time I update my tables, views, SPs or triggers (btw, I seldom use DML triggers), SQL Server would log the updates and maintain version information on them. Now my wish has been granted in Yukon.

Here it is for us --> DDL Triggers.

These are triggers that fire in response to Data Definition Language statements such as CREATE, ALTER and DROP. They are ideal for regulating DDL operations, and can now be used to record audit logs of schema changes and events, and even for version control (thanks MS!!!).

Here is some code that would demonstrate this new SQL Server capability:

CREATE TRIGGER TableProtect
ON DATABASE FOR DROP_TABLE, ALTER_TABLE
AS
PRINT 'Warning! Dropping or altering tables is not allowed'
ROLLBACK
GO

CREATE TRIGGER RecordSPChanges
ON ALL SERVER FOR ALTER_PROCEDURE
AS
DECLARE @DDLEvent XML
SET @DDLEvent = EVENTDATA()  -- EVENTDATA() returns the event details as XML
INSERT MyEventLog (DDLEvent) VALUES (@DDLEvent)


Now I can have my own change management and version control inside SQL Server. This is COOOOL!!!

At #4 –> Data Paging

Who among us has ever developed a web-based application without data paging functionality? None, I guess. With exotic T-SQL workarounds and temporary tables to return a page of data and mimic paging, we all survived the data-paging epidemic.

Data paging is still a challenge to most of us, especially those who code ASP.NET-based apps a lot. Traditionally, there is no native way to do paging in SQL Server 2000 and older versions; we have relied on workaround schemes. With client-side paging in ADO.NET, the entire result set is sent from the server to the client.

At last, we got something new in SQL Server 2005. At #4, I am picking the new data paging capabilities. Data paging can now be done on the server side rather than depending on client-side paging techniques as provided by ADO.NET and other technologies, resulting in improved application performance.

We now have the Row_Number() function at the center of SQL Server 2005's data paging features. Let us take a brief look at some T-SQL snippets:

SELECT
    ROW_NUMBER() OVER (ORDER BY PHSSUGID) AS MyRowNumber,
    PHSSUGID,
    MemberAlias
FROM
    Members

MyRowNumber  PHSSUGID  MemberAlias
1            10248     Bobbit
2            10249     DotnetProgram
3            10250     Dens16
4            10251     ggsubscribe
5            10252     Romeo

WITH OrderedMembers AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY PHSSUGID) AS MyRowNumber, PHSSUGID, MemberAlias
    FROM Members
)
SELECT *
FROM OrderedMembers
WHERE MyRowNumber BETWEEN 6 AND 10

MyRowNumber  PHSSUGID  MemberAlias
6            10253     CoyCoy
7            10254     DreamLordzWolf
8            10255     Femee
9            10257     Lito
10           10258     Margarita

Looks like data paging will be a piece of cake from now on 🙂

At #3 –> Horizontal Data Partitioning

Weeks prior to the Nov 25 launch of SQL Server 2005 in the Philippines, I was fortunate to attend a Microsoft-sponsored one-week boot camp for Yukon. One of the things that caught my interest was the feature on Horizontal Data Partitioning (HDP). At that time, this was something new for me; I had only heard of this feature in the Oracle camp for so many years. Prior to this, there was nothing similar to the concept of HDP in SQL Server. There was, of course, the practice of taking advantage of what Filegroups can offer in terms of performance, and the occasional UNION across servers, but nothing like HDP in SQL Server (or perhaps I was just not aware of it).

So what is new with HDP? Before anything else, do take note that I am not writing from experience on this subject, as I still have very limited exposure to this feature and haven't played with it enough to become highly competent. Nevertheless, this is my #3.

HDP provides new ways to horizontally split large numbers of rows in Very Large Databases (VLDBs). Why would one want to do that? There are applications that only deal with predictable chunks of data, for example, applications that work with data on a per-year or per-month basis. With HDP, you can enhance performance by isolating seldom-used rows from current ones, which speeds up data access significantly. I have seen that Yukon provides ways to manage partitions with commands like SPLIT, MERGE and SWITCH.
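A minimal sketch of setting up a table partitioned by year (the object names and boundary dates here are hypothetical):

```sql
-- Split rows into three ranges by order date: before 2004, 2004, and 2005 onward
CREATE PARTITION FUNCTION pfOrderYear (DATETIME)
AS RANGE RIGHT FOR VALUES ('2004-01-01', '2005-01-01')

-- Map all partitions to the PRIMARY filegroup for simplicity;
-- in practice each partition could go to its own filegroup/disk
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY])

-- The table is created ON the scheme, keyed by the partitioning column
CREATE TABLE Orders
(
    OrderID INT,
    OrderDate DATETIME
) ON psOrderYear (OrderDate)
```

Rows inserted into Orders are then routed to the right partition automatically based on OrderDate.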

HDP also helps in spreading load and processing across multiple disks (much like Filegroups), CPUs and even servers (I still have to check on this), and operations can be executed in parallel. I am assuming this will be a big hit for really large databases.

So far, my comments are mostly taken out of BOL and I have yet to explore how to fully utilize HDP.

At #2 –> Database Mirroring

Too bad PHSSUG failed to have this showcased during the launch due to a glitch/setup problem that was not solved during the early hours of the event. Though this feature is still not supported by Microsoft in RTM, I am quite excited about the prospects of DB mirroring.

In version 2000, by contrast, we were left to cobble together mirroring-like capabilities via replication and custom solutions.

So what is with DB mirroring?

  • Now we can increase database availability
  • We can mirror databases across different instances of SQL Server on standby
  • We can also code client applications to connect to a fail-over or standby mirror database
  • We also get transaction safety (when SAFETY is set to FULL and mirroring operates in synchronous mode)
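Once the mirroring endpoints are created and the mirror holds a restored copy of the database, turning mirroring on is roughly this (the database and server names are hypothetical, and endpoint setup is omitted):

```sql
-- On the mirror server: point at the principal first
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.example.com:5022'

-- On the principal server: point at the mirror
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.example.com:5023'

-- FULL safety = synchronous mode, giving the transaction safety mentioned above
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL
```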

Let us hope MS releases this officially on Q1-2006.

At #1 –> Painless Migration To 2005

Why start on something completely new when you can start using SQL Server 2005 with something you already know? It is plain and simple: detach the SQL Server 7 or 2000 database, then attach the DB in SQL Server 2005. This is the simplest and most painless way to get started on SQL Server 2005. It is also the best way to test whether your old databases work in SQL Server 2005. I wanted to test a database from SQL Server 6.5, but I can't get hold of a copy of that version.
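The detach/attach round trip is roughly this (the database name and file paths are hypothetical):

```sql
-- On the old SQL Server 2000 instance
EXEC sp_detach_db 'SalesDB'

-- Copy SalesDB.mdf and SalesDB_log.ldf to the 2005 server, then attach there
EXEC sp_attach_db 'SalesDB',
    'C:\Data\SalesDB.mdf',
    'C:\Data\SalesDB_log.ldf'
```

Remember that server-level objects such as logins do not travel with the database files, which is why they have to be recreated afterward.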

Looks like Microsoft took the extra effort to make things easier for users that are ready to jump to 2005.

I have already tried this approach on a number of production databases created in SQL Server 2000 and a couple of old databases still in SQL Server 7. Except for security (logins, roles and permissions), everything worked as if the apps were still running on SQL Server 2000 or 7. All I did was recreate those logins, roles and permissions. Since I already have the scripts … it was peanuts.

This first month of 2006, I'll be deploying SQL Server 2005 to a couple of clients already on SQL Server 2000. I have already done some testing with this process on their databases. So far, everything is going well.

However, I caution you to take my word with some degree of skepticism, as I have not read anywhere that Microsoft recommends this approach. Don't forget to back up your databases when doing this.