Robbing Peter to Pay Paul on the Database Performance Teeter-Totter: Part 2 of 2
In the previous post, Robbing Peter to Pay Paul, a DBA named "Alex" described what happened when he introduced a new index to speed up one user's report. In short, that one little change on the database caused slowdowns for other jobs that Alex then had to address. One teeter hour of change caused over 90 totter hours of follow-up fixes.
Ideal Solution and Challenges. If you're considering indexes to performance-tune your database access patterns, Quest recommends considering many SQL statements, not just one or two. Why? Because there are many-to-many relationships between those SQL statements and the database objects they access. One change to a table could affect many of the queries and updates touching it.
The best index tuning strategies start with a good set of SQL, a set that represents the concept of a “workload.” As examples, that workload could be the SQL that touches a specific table, or the INSERT/UPDATE/DELETE object access activity from an OLTP app. But finding those SQL workload sets can be a challenge. Consider:
- It may be difficult to find, or even see, the SQL. SQL resides in stored procedural code, sure, but what about external code logic residing off the database? What about SQL built dynamically at runtime, or SQL submitted by users directly to the database on an ad hoc basis?
- How many SQL statements—and which types of statements—are feasible to include?
- The more SQL statements in your workload, the (exponentially) harder it is to define possible indexes and test them for feasibility.
- Once index additions and changes are made on the database, do you have the right tools to measure the performance impact of those changes?
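That third bullet deserves a moment. A back-of-the-envelope count (a toy sketch, not SQL Optimizer's actual algorithm) shows how quickly the candidate-index space grows on even one table, before you multiply by the number of statements in the workload:

```python
from math import perm

def candidate_indexes(n_columns: int, max_key_columns: int) -> int:
    """Count the possible composite indexes on one table: ordered
    selections of 1..max_key_columns distinct columns (column order
    matters in a composite index, so these are permutations)."""
    return sum(perm(n_columns, k) for k in range(1, max_key_columns + 1))

# Even a modest table allows hundreds or thousands of index definitions:
for cols in (5, 10, 15):
    print(cols, "columns ->", candidate_indexes(cols, 3), "possible indexes")
```

And that is per table: an index *set* for a whole workload is a combination of choices across every table the workload touches, which is why brute-force testing is off the menu.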
SQL Optimizer has several capabilities that mitigate the above challenges. For starters, SQL Optimizer can search for a viable SQL workload, and it can find that SQL in:
- Code logic on the Database (SQL embedded in stored procedures, packages, view definitions, etc.)
- Source code off the database (by parsing through your code, like Java source code, or shell scripts, for example)
- Database SQL memory (to capture SQL built dynamically at runtime or submitted by an ad hoc user).
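To make that last bullet concrete: on Oracle, for example, the shared SQL area can be inspected through the V$SQL view. Something like the following (a simplified illustration, not the product's internal query) surfaces recently executed statements along with the schema that parsed them:

```sql
-- Simplified example: peek at SQL captured in Oracle's shared pool.
-- V$SQL holds statements built dynamically or submitted ad hoc --
-- exactly the SQL that never appears in any source file.
SELECT sql_id,
       parsing_schema_name,
       executions,
       sql_text
  FROM v$sql
 WHERE parsing_schema_name NOT IN ('SYS', 'SYSTEM')
 ORDER BY last_active_time DESC
 FETCH FIRST 50 ROWS ONLY;
```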
Once you have a set of SQL statements, you can feed that SQL workload to the product's Index Analyzer, where a sophisticated, state-of-the-art algorithm can quickly find sets of candidate indexes. Way faster than the best consultants your money can buy.
Screen captures below show an example where Quest’s SQL Optimizer finds its *second* set of six indexes less than ten seconds after the search starts. That particular index set reduces the CPU cost of the workload by over 70%. Other index sets are found before and after that point in time. Notice that you get not only a sense of the workload’s overall performance improvement, but also a feel for how each individual SQL statement’s performance is affected (second screen capture).
There you have it: Quest’s solution can find various SQL-based workloads, and then analyze the object access patterns to recommend candidate indexes. So let’s talk about how you might use this product’s capabilities with a few index tuning strategies.
Strategy #1: 80/20 Tuning. One obvious tuning strategy is to perform object-specific index tuning. That is, collect any and all SQL that touches a known “hot” table, and have SQL Optimizer find the optimal index or indexes that improve the overall performance of the SQL workload touching that table. But let's expand the scope of this idea. Why stop at just one object? Chances are that you’ll want to do index-tuning for other hot objects you know about.
Economist Vilfredo Pareto inspired the 80/20 principle: in a complex system, roughly 80% of the effects come from 20% of the causes. Applied here, the 80/20 rule says it is almost certain that a small percentage of SQL creates most of your database load. Find the 20% of the SQL that consumes most of your database resources and you’ve got a huge clue as to which objects are getting pounded and could use index attention.
I'll dub this an 80/20 tuning strategy, and yes, it can be done. Quest’s SQL Optimizer has an efficient way to inspect database memory that allows it to identify the most “expensive” SQL executed on the database over a time period of your choosing. The result of that inspection is a list of top resource consuming statements (whether SELECTs, UPDATEs, INSERTs, etc.) that can be passed to our Index Tuning algorithm. If there is a combination of new indexes that maximizes the performance of the SQL-based working set, SQL Optimizer will find it.
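The core of the 80/20 cut is simple enough to sketch in a few lines. Given per-statement resource figures (here, hypothetical CPU-seconds keyed by SQL id), find the smallest set of statements that accounts for 80% of the total load:

```python
def pareto_cut(load_by_sql: dict[str, float], threshold: float = 0.80) -> list[str]:
    """Return the smallest set of statements (heaviest first) whose
    combined load reaches `threshold` of the total workload."""
    total = sum(load_by_sql.values())
    picked, running = [], 0.0
    for sql_id, load in sorted(load_by_sql.items(), key=lambda kv: -kv[1]):
        picked.append(sql_id)
        running += load
        if running >= threshold * total:
            break
    return picked

# Hypothetical CPU-seconds per statement over a sampling window:
workload = {"q1": 400.0, "q2": 250.0, "q3": 90.0, "q4": 40.0, "q5": 20.0}
print(pareto_cut(workload))  # the few statements worth index-tuning first
```

The hard part, of course, is getting trustworthy per-statement figures in the first place, which is exactly the memory inspection SQL Optimizer performs for you.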
As a side note, Quest has other solutions that can help you capture other types of SQL workloads. In Jason Hall's blog post regarding Performance Investigator he outlines not only how Foglight’s Performance Investigator can capture various SQL workloads, but how it can help you validate the before-and-after effects of your new indexes, or other types of database change. Trent Mera's blog regarding Spotlight performance monitoring touches upon the interesting idea of how that product can capture SQL workloads based on database wait event profiles.
Strategy #2: Application-centric Tuning. Let’s say that you have a very important, strategic application that your organization relies on (and who doesn’t these days?). How valuable would it be to capture and tune the database workload created from that application? Turning on database tracing could be one way to capture its workload. But that method may not capture all possible SQL from the application stack, even with multiple traces.
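For context, the tracing approach mentioned above might look like this on Oracle, using the DBMS_MONITOR package (the service name here is hypothetical). Note that it only sees sessions that actually run under that service while tracing is enabled:

```sql
-- One conventional way to trace an application's SQL on Oracle:
-- enable tracing for every session tagged with the app's service name.
-- 'ORDERS_APP' is a hypothetical service name for illustration.
BEGIN
  DBMS_MONITOR.serv_mod_act_trace_enable(
    service_name => 'ORDERS_APP',
    module_name  => DBMS_MONITOR.all_modules,
    waits        => TRUE,
    binds        => TRUE);
END;
/
```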
SQL Optimizer, again, to the rescue. SQL Optimizer might be able to identify your application’s SQL workload more completely than tracing. Remember that it can parse through the app stack’s readable code logic, and/or capture the SQL that your application modules submit at runtime dynamically. Feed that workload to SQL Optimizer and you’re on your way to optimizing your application’s database performance!
Bonus Round. I had a chance to speak to Alex again late last year. He bought Toad for Oracle DBA Suite (which includes Optimizer) shortly after I did the initial demo with him, and he had some advanced questions on usability. As we discussed his questions, we commiserated with each other that performance tuning is not a “one-time-done” activity. You all know why. Object sizes grow. Data models change. Users get added to the system. DBAs modify configuration parameters. Server hardware and databases get upgraded, etc.
Alex had just received a new release package from his application vendor. Yes, the package contained changes to code and changes to database objects. Yes, there were new objects added, and recommended database config changes. Alex was concerned, mostly about the effect those changes would have on the feeder and extract systems his team had developed in-house to and from the vendor's app.
Like many of our customers, Alex didn't have a lot of time to do much off-roading in his SQL Optimizer product. (OK, we'll give him a break since he did just buy the software, and there's lots to explore.) So I did some off-roading for him and drove to that part of the product where Optimizer can do some What-If analysis. There's a part of Optimizer that can estimate how his performance workloads could be affected by proposed changes like the ones mentioned above, before they go live. His reply after seeing a What-If example: "Bonus!"
Toad Turnpike is a blog series about real accounts of real people in real situations. Only names have been changed to protect identities.