SQL Server Performance Investigator – Let Me Put This Into Context!

If you are a developer or DBA using stored procedures in SQL Server, one task you may face is working out how changes to a stored procedure have impacted performance. Or you might have been tasked with improving a procedure and need to know whether your efforts have been successful.

Foglight Performance Investigator includes a feature that lets you compare the performance profile of one or more stored procedures with the same (or a different) stored procedure over a different time range. For example, let's compare overall performance for the last hour with the hour before, assuming a similar number of calls was made in each period.

For the same workload, average SQL response time has dropped from 1.3 seconds to 1.0 and the batch rate has increased. So improvements have been made.

But if you work in a DevOps environment, you might find it difficult to track performance changes because the underlying code changes constantly. A stored procedure might keep the same name yet show different performance characteristics during development. As DevOps promotes continuous monitoring at every step, what can Foglight Performance Investigator (PI) offer?

We can make use of SQL Server's session CONTEXT_INFO. As this is a dimension within Foglight PI (along with user, database, program, etc.), we can monitor particular stored procedures in the "context" they are running in. This could relate to a particular process flow or batch, but in my example I'm using it to see which version of the procedure is running.
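As a quick sketch of the mechanism itself: any SQL Server session can tag itself with up to 128 bytes of binary data, and that tag is visible to monitoring tools through the session DMVs. The version string below is purely illustrative.

```sql
-- Tag the current session with an arbitrary 128-byte binary value.
-- (The string here is just an example tag.)
DECLARE @ctx VARBINARY(128) = CAST('Version 6.6' AS VARBINARY(128));
SET CONTEXT_INFO @ctx;

-- Read the tag back for the current session:
SELECT CAST(CONTEXT_INFO() AS VARCHAR(128)) AS my_context;

-- Monitoring tools (with VIEW SERVER STATE) can see every session's tag:
SELECT session_id,
       CAST(context_info AS VARCHAR(128)) AS context_info
FROM   sys.dm_exec_sessions
WHERE  context_info IS NOT NULL;
```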

You can see above that I have used the database dimension to review T-SQL batches in the BMF database. I'm using Quest Benchmark Factory to repeat the same load test against this procedure.

What we can see is that during the tests the procedure is recompiled and we have new plan handles.
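If you want to see those recompiles outside of Foglight, you can query the plan cache directly. A hedged sketch, with a hypothetical procedure name, that lists each cached plan for one procedure:

```sql
-- Each recompile of the procedure produces a new plan_handle
-- for the same object in the plan cache.
SELECT qs.plan_handle,
       qs.creation_time,
       qs.execution_count,
       qs.total_worker_time  -- CPU time, in microseconds
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE  st.objectid = OBJECT_ID('dbo.usp_Demo');  -- hypothetical name
```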

The second iteration of the same procedure used more CPU and was executed more times, but clocked up less active time. So this version was more efficient.

So we can continue to drill down and use our CONTEXT_INFO.

This was version 6.6.

We can compare wait metrics over this time range to understand the differences, but it is hard to tell the different versions apart that way. So let's view by Context:


I can compare v6.6 with v6.8 to see why active time dropped for the same workload. V6.8 had less active time but used more CPU, incurring more CPU WAIT yet less PARALLEL COORDINATION wait for a similar number of executions. As development continues we can reach this level of detail to understand whether any regression has been introduced.

So how did I do this? While I was developing the stored procedure, I added three lines of code:
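The exact code was shown in a screenshot; a sketch of what those three lines look like at the top of a procedure body (the procedure name and version string here are illustrative):

```sql
CREATE OR ALTER PROCEDURE dbo.usp_Demo  -- hypothetical name
AS
BEGIN
    -- The three added lines: tag the session with the current version
    DECLARE @version VARBINARY(128);
    SET @version = CAST('Version 6.6' AS VARBINARY(128));
    SET CONTEXT_INFO @version;

    -- ... rest of the procedure body ...
END;
```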

As I updated and recompiled, I updated the version number.

Those of you who also run Oracle will be familiar with Module and Action, which are used for similar purposes.
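For comparison, the Oracle equivalent is set from PL/SQL via the DBMS_APPLICATION_INFO package (the module and action values here are illustrative):

```sql
BEGIN
  -- Tag the Oracle session so monitoring tools can group work
  -- by module and action, much like CONTEXT_INFO in SQL Server.
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'usp_Demo',
    action_name => 'Version 6.6');
END;
/
```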

Download an evaluation here and try adding a session CONTEXT_INFO string to help diagnose performance problems.

About the Author
Martin Wild
Martin Wild is a systems consultant. He specialises in helping clients with database performance monitoring solutions focusing on SQL Server, Oracle, SAP Sybase, DB2, VMware and HyperV. He is a member...