Friday, 06 January 2017 21:37

Keeping your SQL Server healthy


These are the tools I depend on, whether I am setting up a new SQL Server or encountering one in the wild.

Any significant application needs to store data and, although the cloud is becoming more prevalent, the old-school on-premises SQL Server was the winner of DB-Engines' 2016 database of the year.

DB-Engines tracks database popularity through Google Trends, job listings, professional profiles and other signals that, combined, indicate how widely a database is used in practice.

No doubt a contributing factor is Microsoft's work on releasing SQL Server for Linux, but it certainly must be recognised that SQL Server is a high-performance, high-quality database management system in its own right.

Yet, like any database, you cannot simply leave it to its own devices. You must manage disk storage, disk performance, index fragmentation and other factors to ensure performance remains excellent.
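Index fragmentation, for one, can be checked directly from the standard DMVs. A minimal sketch (the 30% threshold is a commonly cited rule of thumb, not a mandate):

```sql
-- List indexes in the current database with their average fragmentation,
-- worst first, using sys.dm_db_index_physical_stats in LIMITED mode.
SELECT
    OBJECT_NAME(ips.object_id)       AS table_name,
    i.name                           AS index_name,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0                              -- skip heaps
  AND ips.avg_fragmentation_in_percent > 30         -- common rebuild threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

The tools below automate exactly this kind of check, and far more, on a schedule.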

Here are the three free and publicly available tools I use whenever I set up SQL Server, or whenever I am assessing the health of somebody else's SQL Server.


dbWarden

dbWarden is a monitoring package that delivers a nightly report, along with alerting, covering database health, schema changes, file growth, disk I/O and a good many other items.

The script is free. It was first published on SQL Server Central by Michael Rounds in April 2013 and can be downloaded from SourceForge. At the time of writing it was last updated in May 2016, and it is a mature and stable product.

I can testify to the value of dbWarden. At one company where I consulted, the development team assured me their SQL Server storage was configured as RAID 10. Yet I observed application timeouts and long-running queries, and I sensed something was not right.

dbWarden confirmed disk I/O was a problem; disk reads, writes and waits were all excessive.

I further tested this with sp_Blitz (discussed below) and was confident disk activity was severely degraded on this system.
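The kind of I/O evidence these tools surfaced can also be gathered by hand. A sketch using the standard sys.dm_io_virtual_file_stats DMV, which reports cumulative I/O counts and stall times per database file since the instance last started:

```sql
-- Rank database files by total I/O stall time; sustained high stalls
-- relative to read/write counts point to a storage bottleneck.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```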

I asked the IT manager to show me the disk configuration, and it is something I will never forget. The server had hardware RAID with two RAID controllers. One was not being used. The other had two RAID 5 volumes on it. Within Windows, one of these RAID volumes was allocated exclusively to drive E:. "What's that?" I asked, and was shown it was the page file. The other RAID volume was partitioned within Windows to two drive volumes, C: and D:.

C:, of course, contained Windows and the SQL Server software. D: held database files and log files, together on the same disk.

Oh, and D: also held a file share for one of the busiest departments in the company.

It also held a file share where all the company applications ran from. They weren't installed locally. This very same IT manager insisted that you got the best database performance when apps ran from a file share on the same server as the database. I couldn't believe my ears when I was told that, but I digress.

For now, seeing this disk configuration, I couldn't help but wonder how anyone had tolerated such a setup for so long. I knew why disk I/O was so poor. I could only look at the IT manager in surprise.

He, in turn, said "The sysadmin must have changed this! These were RAID 10!" and went out and asked the systems administrator. He said, "No, that's how it has always been." The IT manager responded, with speed that was in deft contrast to his infrastructure, "Oh yes, that's right; we ordered this from the hardware vendor and said it would be a database server so this is their best practice."

Needless to say, after some discussion with management, a Nimble SAN was soon purchased.

Your experiences may not be as extreme, but this is a real-world example of a company labouring under suboptimal database performance, believing it was the best they could get. Every operation was performing poorly. Every minute of every hour of every working day, staff were waiting for database operations to complete. The damaging effect on productivity was vast and, in my opinion, an unacceptable situation for any IT department that considers itself competent.

Brent Ozar First Responder Kit

Consulting firm Brent Ozar Unlimited makes available a free set of scripts, the First Responder Kit, along with other resources, to analyse the health of a SQL Server environment.

I also used Ozar's sp_Blitz script in the situation described above, in conjunction with dbWarden. Both gave me vital information.

They differ largely in purpose: dbWarden is designed as a continuing daily and nightly report on the ongoing health of a system, while the Ozar toolkit is designed to be run once, or as needed, with the issues it surfaces then dealt with.

What I particularly like about this toolkit is the extensive detail it provides on indexing, and the script sp_BlitzIndex can be run over specific databases as well as the server as a whole.
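A sketch of typical invocations, per the First Responder Kit documentation (the database name is illustrative):

```sql
EXEC dbo.sp_Blitz;                                    -- server-wide health check
EXEC dbo.sp_BlitzIndex @DatabaseName = 'MyDatabase';  -- index analysis for one database
EXEC dbo.sp_BlitzIndex @GetAllDatabases = 1;          -- or across all databases
```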

Ola Hallengren's management and indexing scripts

Lastly, the third essential tool I believe any SQL Server needs is Ola Hallengren's scripts for ongoing maintenance and backups.

It's easy enough to set up a maintenance plan in SQL Server, but is it truly effective? You can tweak it yourself, but Hallengren's scripts have been used and tested by many large organisations, including banks, universities, energy providers and governments. When you deploy these scripts for your ongoing database management and backups, you are leveraging the practices and testing of many very large SQL Server environments.

Unlike dbWarden and Brent Ozar's scripts, Ola's require some configuration and thought to set up as you would like, but once in place they will reduce your maintenance windows, ensure solid ongoing performance and, in turn, reduce your own management burden.
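An illustrative sketch of that configuration, using procedures from Hallengren's maintenance solution (the thresholds and backup path are examples only, not recommendations):

```sql
-- Reorganise or rebuild indexes based on fragmentation, then update statistics.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,      -- % above which "medium" actions apply
    @FragmentationLevel2 = 30,     -- % above which "high" actions apply
    @UpdateStatistics = 'ALL';

-- Full backups of user databases, verified, with old backups cleaned up.
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'X:\Backups',     -- hypothetical backup location
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CleanupTime = 72;             -- hours to retain backup files
```

These are typically scheduled as SQL Server Agent jobs rather than run by hand.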





David M Williams

David has been computing since 1984 where he instantly gravitated to the family Commodore 64. He completed a Bachelor of Computer Science degree from 1990 to 1992, commencing full-time employment as a systems analyst at the end of that year. David subsequently worked as a UNIX Systems Manager, Asia-Pacific technical specialist for an international software company, Business Analyst, IT Manager, and other roles. David has been the Chief Information Officer for national public companies since 2007, delivering IT knowledge and business acumen, seeking to transform the industries within which he works. David is also involved in the user group community, the Australian Computer Society technical advisory boards, and education.
