Monitoring


How to do this properly: requirements, goals, software tools, alternatives, processes, and caution points.

Why is SQL Server monitoring important? For me as a DBA, there are a couple of reasons:

  1. Some servers are simply important. I just can’t have anything happen to them and not know. Why? For starters, I don’t want to look like an idiot when someone asks me what is going on with the server. I want to be the first person to find out (not the other way around). And I strive to be a good DBA.
  2. Proactive monitoring catches issues before they become problems at 3AM on a Saturday night while I am holding a beer.
  3. I want to avoid after hours work as much as possible. Good monitoring lets me “migrate” issues from sleeping hours into business hours.

What questions does SQL monitoring need to answer?

Monitoring is a wide topic. It ranges from “is my server up?” to “how is query X performing now versus a week ago?” Therefore, it helps to break this subject down. Here is how I do it:

  1. Is stuff down or about to go down? In other words, are the main SQL Server resources online and working?
  2. Has anything significant changed? Or, has the VMware admin stolen a couple of CPU cores or changed the RAM size?
  3. How is performance? Only when I have the first two handled will I deal with performance. Performance doesn’t matter if SQL is down, right?
  4. The next goal is to help with trending & capacity planning. This is a product of the first three: you (almost) don’t need to do anything specific here, because if you set up the first three properly, you will already be logging data. Then it becomes easy to look into the past and predict the future. It is simple to look at the table that had 5 billion rows last year and now has 7, and predict how much storage will be needed (see the sketch after this list). You will look like a DBA hero when you can tell your manager, “We will run out of disk space next October”.
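
To make point 4 concrete, here is a minimal sketch of the kind of logging that makes capacity planning almost free. It collects per-table row counts and reserved space into a log table once a day (scheduled via a SQLAgent job); the table name dbo.TableGrowthLog and the schedule are my own placeholders, not a finished solution:

    -- Hypothetical logging table for daily per-table row counts and sizes.
    CREATE TABLE dbo.TableGrowthLog
    (
        CollectedAt  datetime      NOT NULL DEFAULT (GETDATE()),
        DatabaseName sysname       NOT NULL,
        SchemaName   sysname       NOT NULL,
        TableName    sysname       NOT NULL,
        RowsCount    bigint        NOT NULL,
        ReservedMB   decimal(18,2) NOT NULL
    );
    GO

    -- Collect current numbers; schedule this in a SQLAgent job (once a day is plenty).
    INSERT INTO dbo.TableGrowthLog (DatabaseName, SchemaName, TableName, RowsCount, ReservedMB)
    SELECT
        DB_NAME(),
        s.name,
        t.name,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END),
        SUM(ps.reserved_page_count) * 8 / 1024.0
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id  = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id  = t.schema_id
    GROUP BY s.name, t.name;

Once a year of that data exists, the “we will run out of disk space next October” conversation is a simple query comparing this month’s numbers to last year’s.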

What does monitoring need to achieve?

  1. Catch the issue – first, I want to detect that something is wrong.
  2. Self-heal – I don’t want notifications; I want scripts to run automatically and fix the issue once it is detected.
  3. Alert – notify (and eliminate junk alerts that don’t warrant a notification) only if self-healing is not possible or we just haven’t built it yet.
  4. Log – log to SQL tables when possible. I will be using this data later.
  5. Agentless – I want to deploy code in the form of stored procedures, SQLAgent jobs, and PowerShell scripts (a sketch of that catch / self-heal / alert / log flow follows this list). This solves a couple of important problems: on most servers I can probably sneak this code in (believe me, I test the crap out of it before I do that on PROD), and I can probably avoid creating a hundred JIRA tickets and all the bureaucratic red tape (when it makes sense). I would not do that on PROD servers, but in DEV and QA environments it most likely can be done safely.
  6. The alerting solution should work on all SQL Server versions (2005 and up).
  7. If all of the above is done right, there is no limit to how many SQL Servers can be monitored. I should be able to use the same monitoring on an environment with one SQL Server or 500.
  8. It should have minimal or no impact on the performance of the SQL Server being monitored.
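
Here is a minimal sketch of what one check looks like when it is wired this way: a stored procedure, deployed as a SQLAgent job step, that logs first, tries to self-heal, and only emails the emergency inbox when the fix fails. Everything in it is a placeholder to adapt – the hypothetical dbo.MonitorLog table, the backup path, the threshold – and it assumes Database Mail is already configured:

    -- Sketch only: one check (transaction log nearly full) that logs, self-heals, then alerts.
    CREATE PROCEDURE dbo.usp_Check_LogSpace
        @WarnPercent tinyint = 85
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @db sysname, @usedPct float, @msg nvarchar(500);

        -- DBCC SQLPERF(LOGSPACE) works all the way back to SQL 2005.
        DECLARE @log TABLE (DatabaseName sysname, LogSizeMB float, LogSpaceUsedPct float, Status int);
        INSERT INTO @log EXEC ('DBCC SQLPERF(LOGSPACE)');

        SELECT TOP (1) @db = DatabaseName, @usedPct = LogSpaceUsedPct
        FROM @log
        WHERE LogSpaceUsedPct >= @WarnPercent
        ORDER BY LogSpaceUsedPct DESC;

        IF @db IS NULL RETURN;  -- nothing to do

        -- 1. Log first; this data feeds trending later (dbo.MonitorLog is a placeholder table).
        INSERT INTO dbo.MonitorLog (CheckName, DatabaseName, Detail)
        VALUES (N'LogSpace', @db, N'Log ' + CONVERT(nvarchar(20), @usedPct) + N'% used');

        -- 2. Self-heal attempt: take a log backup (placeholder path; FULL recovery assumed).
        BEGIN TRY
            DECLARE @file nvarchar(260);
            SET @file = N'\\backupshare\log\' + @db + N'_'
                      + CONVERT(nvarchar(8), GETDATE(), 112) + N'.trn';
            EXEC (N'BACKUP LOG [' + @db + N'] TO DISK = ''' + @file + N'''');
            RETURN;             -- healed, no alert needed
        END TRY
        BEGIN CATCH
            -- 3. Self-heal failed: short, actionable alert to the emergency inbox.
            DECLARE @subj nvarchar(255);
            SET @subj = @@SERVERNAME + N'/' + @db + N': transaction log nearly full, log backup failed';
            SET @msg  = @subj + N' Error: ' + ERROR_MESSAGE();
            EXEC msdb.dbo.sp_send_dbmail
                 @recipients = N'DBA_Emergency@yourcompany.com',
                 @subject    = @subj,
                 @body       = @msg;
        END CATCH
    END;

The same skeleton (check, log, self-heal, alert) repeats for every other check; only the detection query and the healing step change.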

SQL Monitoring Software, Tools, Alternatives and Other Options

  1. Don’t do monitoring. Wait… What? Well, that won’t work for me. Moving on.
  2. 3rd party tools:

    Red-Gate – I have heard good things. Never used it in a real world situation, so I can’t tell you much.

    Idera SQL Diagnostics Manager – this is a crappy tool for monitoring. I’ve run into three DBAs who say “it’s great”. Every time, I thought I was missing something, but sadly, in all cases those DBAs were just using the defaults and had not taken monitoring to the level I am describing here. And that’s just not good enough (sloppy, in my book). After countless calls with Idera support, Idera account managers, and submitting bugs – I just gave up on it. Oh, and Idera thought the stuff I was asking for was so good, they added those items to their future roadmap. Yeah… I was just getting started.

    Dell FogLight – tried using this too, about 5 years ago. It was so complex that two DBAs attempted to set it up and we never got past monitoring free disk space. Maybe it’s good, but any software these days that requires 2 weeks of consultants on site is just not going to work for me. So yeah, moving on.

  3. Solarwinds – probably has some tools that do this, but I am not aware of them. What I am describing here is not DPA (Database Performance Analyzer).
  4. Splunk – this has a lot of promising features. I dabbled in it, but it seems too complex for what I need.
  5. SQL Profiler – you can probably achieve a lot with it, but not 100%. We are also talking about extra load on the monitored server, and that is not acceptable.
  6. SQL Extended Events – you can probably do more with it than with SQL Profiler, but since I need to monitor all the way down to SQL 2005 or SQL 2008, that won’t work. Maybe in a few years, when old SQL versions are extinct.
  7. Perfmon – It’s good, but not for my purpose here.
  8. SSMS Activity Monitor – nope, won’t work at all for this.
  9. PAL tool – This is a great tool, but not for on-going monitoring purposes.
  10. Wait Stats – Not a good tool for monitoring. Not enough data, and the data is too general.
  11. Plan cache – well, now we’re getting too far off course. It’s really good for performance monitoring, but can’t get me everything.
  12. Build your own SQL/PowerShell scripts – I am not a big fan of doing this. But, since SQL software vendors suck at this, I have no choice but to roll my own.

Here are a couple of other miscellaneous SQL monitoring requirements and overall process-related items (that don’t fit anywhere else):

  1. The goal is to never wake the DBA up.
  2. Do not alert on anything that is not actionable.
  3. Non-DBA alerts should not come to DBAs (same as #1, just in different words). You need some of these points because you want to create a written policy document. So, when some whacko manager tries to point a bunch of Cisco alerts at the DBAs, thinking everyone should know, you can point to the policy and say, “Nope, no way, my friend”.
  4. Each alert that triggers during non-business hours needs to be addressed the next day. The main question we are trying to answer is: what can be done so this never happens again? Can we add some code that will self-heal the issue? If not, can we write a Standard Operating Procedure so the 24×7 level 1 support can deal with it and DBAs won’t have to be woken up? (I have declared war on waking up DBAs!)
  5. You will need a SQL Server list first. My list contains a bunch of columns, but the most important piece of data is the server name. The second most important item is how important the server is. In smaller environments (under 20-50 servers), DBAs tend to keep server lists in their heads. In larger environments, you need a written list. I build server lists regardless of how many servers there are. Just do the same.
  6. Alert messages should be as succinct as possible. They need to be worded correctly, so there is no question about what happened and how to fix it. Every second matters. Each character matters (yes, each character). Why? Because when the issue is critical, I can’t waste time. At 4 AM, I want to read less, not more. And I want to be able to tell from the subject line how critical the issue is and if I should be getting up and looking for my cold slippers. If I must interrupt someone’s dinner, I want to be as short as possible and communicate the problem quickly.
  7. Each alert should become a ticket. I hate documentation just like you do. However, this time, I can use it for doing good. Often, it’s hard to see the forest because I am looking at a tree from 5 inches away. When I know that a recurring problem takes X hours per week to fix, it becomes easier to say to my manager, “This month we wasted 18 hours of DBA time on this, and 314 hours this year. Therefore, dedicating a few days of DBA time to write a kickass PowerShell script is worth the investment”.
  8. When a script cannot be written to fix the triggering issue, the next level of defense is a Standard Operating Procedure (SOP). I want an SOP so I can give it to level 1 support and have them fix the problem. Many companies have 24×7 support staff anyway, so if we educate them on how an issue can be fixed, we avoid waking DBAs! So many companies don’t use level 1 support correctly – if each problem comes with an SOP for how it should be fixed, we can actually have those people do some valuable work instead of sleeping through the night or playing games. And in my experience with those level 1 support guys, their goal in life, 90% of the time, is to get the hell out of doing support. So they actually appreciate a well-written SOP and learning a bit about SQL.
  9. I will need two DBA email addresses for this (a routing sketch follows this list):
  • DBA_Emergency – only important stuff is sent here. Alerts that land in this inbox mean “wake someone up at 4 AM. It’s worth it.”
  • DBA_InfoOnly – this inbox is for info only: “I’d like to know, but it’s not important enough to interrupt my dinner. I will check it out tomorrow, when I get in.”
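
To show what I mean by short, actionable alerts routed to the right inbox, here is a hedged sketch of a tiny helper. The two addresses and the PRODSQL01 server name are placeholders, and Database Mail is assumed to be configured; the point is that the subject line alone says what broke, where, and whether it is worth getting out of bed:

    -- Sketch of an alert helper: one short line of text, routed by urgency.
    CREATE PROCEDURE dbo.usp_SendDbaAlert
        @IsEmergency bit,
        @Subject     nvarchar(200),      -- e.g. 'PRODSQL01: tempdb drive 95% full'
        @Body        nvarchar(1000) = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @to nvarchar(200);
        SET @to = CASE WHEN @IsEmergency = 1
                       THEN N'DBA_Emergency@yourcompany.com'  -- wake someone up
                       ELSE N'DBA_InfoOnly@yourcompany.com'   -- read it tomorrow
                  END;

        EXEC msdb.dbo.sp_send_dbmail
             @recipients = @to,
             @subject    = @Subject,
             @body       = @Body;
    END;
    GO

    -- Usage: the subject alone tells the on-call person what broke and where.
    EXEC dbo.usp_SendDbaAlert
         @IsEmergency = 1,
         @Subject = N'PRODSQL01: SQL Server service down';

Every check procedure then calls this one helper, so the “is this worth waking someone up for” decision lives in exactly one place.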

A couple of caution points

The more people you involve in deciding what needs to be monitored, the worse it will get. Managers are scared to “miss something”, and that’s complete BS. The method that has worked for me is to ask my manager, “Um, would you like me to handle SQL monitoring?” He will say, “Yes, of course”. And I will take it from there.

Don’t ask for anything else. If you do, your manager will want every alert under the sun and will gladly volunteer your time off-hours. Just don’t provide that option.

One simple caveat: you have to do a superb job with SQL monitoring. You now own it, buddy. If you do that, there will be no issues. The manager won’t care that some alert triggered and you didn’t do anything until the next day. If the manager is not in trouble, you won’t be either. But it had better be an alert that can safely be ignored.

It’s when you mess up and start having meetings about alerting that stuff gets out of hand.

Now that I know what I need to pull off, let’s see where that data lives.

  1. Server stats (like CPU & RAM usage)
  2. Event viewer logs
  3. SQL Error log
  4. SQLAgent failed jobs (from msdb)
  5. SQL DMVs

This means I will need to use a combination of SQL and PowerShell scripts.
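
For example, item 4 on that list lives in msdb and needs nothing more than plain T-SQL. Here is a hedged sketch of the kind of query those scripts are built around (the one-day window is just an example):

    -- Failed SQLAgent job steps since yesterday, straight out of msdb.
    -- run_date is stored as an integer in yyyymmdd form, so the cutoff is built the same way.
    DECLARE @since int;
    SET @since = CONVERT(int, CONVERT(char(8), DATEADD(DAY, -1, GETDATE()), 112));

    SELECT
        j.name      AS JobName,
        h.step_id,
        h.step_name,
        h.run_date,             -- yyyymmdd
        h.run_time,             -- hhmmss
        h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
    WHERE h.run_status = 0      -- 0 = failed
      AND h.run_date  >= @since
    ORDER BY h.run_date DESC, h.run_time DESC;

The server stats and Event Viewer pieces are typically where the PowerShell half comes in, since they have to be collected from outside SQL Server.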

I/we have a bunch of these written and in use on many SQL Servers already. The next step is to clean that code up and write some more, so I will not have to turn red when you find something embarrassing in it. We will see how that works out.

I hope this gives you a sense of the approach we will take if we monitor your SQL Server.

If you have questions, please reach out via the “Contact Us” form or by calling us (phone number at the top).

