3 Easy Ways to Improve Your Database Performance

Data is at the heart of every modern enterprise, so maintaining database performance and ensuring data is quickly and easily accessible for those who rely on it is a key responsibility for database professionals.

Overcoming common database performance issues

Despite all of our best efforts, database performance is going to lag at times. Understanding common causes of performance issues and knowing how to diagnose and fix them will help keep your databases in peak shape.

Here are three common sources of database performance issues and suggestions for how to resolve them quickly before they impact users.

Avoid deadlocks

A deadlock occurs when two processes compete for exclusive access to the same resource while each holds a lock the other needs. Only one process can use a resource at a time, so database performance slows until the deadlock is resolved.

There are two types of deadlocks you need to watch out for: cycle locks and conversion locks.

Cycle locks occur when each of two processes vies for an exclusive lock on a resource that is already locked by the other process.

Conversion locks are caused when a thread tries to convert a lock from a less restrictive type (such as shared) to a more exclusive type, but is unable to because a different thread already holds a shared lock on the resource the first thread is trying to convert.

Conversion locks come in three flavors:

  • Shared with intent exclusive (SIX): Occurs when a transaction that holds a shared lock also has an exclusive lock on some pages or rows.
  • Shared with intent update (SIU): Occurs when a transaction that holds a shared lock also has some pages or rows locked with an update lock.
  • Update with intent exclusive (UIX): Occurs when a transaction that holds an update lock also has an exclusive lock on some pages or rows.

There are a few ways to diagnose deadlocks, some more straightforward than others.

If you want to cut to the chase, look for a 1205 error message that literally tells you there is a deadlock:

Transaction (Process ID %d) was deadlocked on %.*ls resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Another option, if guesswork isn’t your thing, is to extract a SQL Server deadlock graph from Extended Events.
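
For example, assuming the default system_health Extended Events session is running with its ring_buffer target (the default configuration on modern SQL Server versions), a query along these lines pulls out any captured deadlock graphs; treat it as a starting-point sketch rather than a finished monitoring script:

    -- A sketch, assuming the default system_health session and its ring_buffer target.
    SELECT
        xed.value('@timestamp', 'datetime2') AS deadlock_time,
        xed.query('.')                       AS deadlock_graph
    FROM (
        SELECT CAST(st.target_data AS XML) AS target_data
        FROM sys.dm_xe_session_targets AS st
        INNER JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
        WHERE s.name = 'system_health'
          AND st.target_name = 'ring_buffer'
    ) AS src
    CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);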

Those who like a bit of mystery in their lives can look for queries that suddenly run more slowly than normal. Sure, slow queries could be caused by a million things, but deadlocks are one of them.

The best way to fix a deadlock is to terminate one of the processes and free up the locked resource. To prevent them from happening in the first place, take preventative measures such as:

  • Create better indexes
  • Adjust transaction priorities (see the sketch after this list)
  • Hold locks for as short a time as possible
  • Don’t submit a transaction until you have all the information you need
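
As a sketch of adjusting transaction priorities, SET DEADLOCK_PRIORITY lets you nominate a low-value batch as the preferred deadlock victim so that user-facing transactions survive; the table name below is purely illustrative:

    -- A sketch only: dbo.ArchiveQueue is an illustrative table name.
    SET DEADLOCK_PRIORITY LOW;   -- accepts LOW, NORMAL, HIGH, or a value from -10 to 10

    BEGIN TRANSACTION;
        -- Gather everything you need before taking locks, and keep the transaction short.
        UPDATE dbo.ArchiveQueue
        SET Processed = 1
        WHERE ProcessedDate < DATEADD(DAY, -90, GETDATE());
    COMMIT TRANSACTION;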

Defrag indexes

Over time, repeated insertions and deletions cause indexes to become fragmented. This fragmentation may be internal (pages contain too much free space) or external (the logical order of the pages no longer matches their physical order).

Whichever type you’re dealing with, the first symptom you are likely to notice is (again) slow queries. To definitively diagnose index fragmentation, use sys.dm_db_index_physical_stats to detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases.
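
For instance, a query along these lines reports fragmentation for every index in the current database; the filter values below are illustrative starting points, not hard rules:

    -- Report fragmentation for every index in the current database
    -- ('LIMITED' keeps the scan lightweight).
    SELECT
        OBJECT_NAME(ips.object_id)        AS table_name,
        i.name                            AS index_name,
        ips.index_type_desc,
        ips.avg_fragmentation_in_percent,
        ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND ips.page_count > 1000             -- very small indexes rarely matter
    ORDER BY ips.avg_fragmentation_in_percent DESC;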

Unless your indexes never change, there is no way to prevent some level of fragmentation. However, there are a few dos and don’ts that can help minimize the impact fragmentation has on database performance:

  • DO: Be aware of features that can cause page splits
  • DON’T: Update records to make them longer
  • DO: Implement appropriate index fill factors (see the sketch after this list)
  • DON’T: Insert records with random key values
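
As a sketch of setting a fill factor, the statement below rebuilds an index leaving roughly 10 percent free space on each leaf page, which gives inserts room before pages split; the table and index names are illustrative:

    -- Illustrative names: dbo.Orders and IX_Orders_CustomerID.
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);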

The first step in fixing an index fragmentation performance problem is to determine how big the problem is. Run the sys.dm_db_index_physical_stats DMF (as shown above) to analyze the level of fragmentation, then decide whether it’s best to rebuild the index, reorganize the index, or do nothing.

To help you decide the appropriate course of action, here are some guidelines (a script that applies them follows the list):

  • Rebuild: Rebuild indexes when fragmentation exceeds 30 percent
  • Reorganize: Reorganize indexes with 11 to 30 percent fragmentation
  • Ignore: Fragmentation levels of 10 percent or less should not pose a performance problem, so you don’t need to do anything
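
One way to apply these guidelines is to generate the appropriate ALTER INDEX statement for each fragmented index and review the output before running it; this is a sketch built on the standard DMVs, not a complete maintenance solution:

    -- Generate a REBUILD or REORGANIZE statement per index based on the
    -- thresholds above; review the output before executing it.
    SELECT
        'ALTER INDEX ' + QUOTENAME(i.name)
            + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
            + CASE
                  WHEN ips.avg_fragmentation_in_percent > 30 THEN ' REBUILD;'
                  ELSE ' REORGANIZE;'
              END AS maintenance_command
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
    INNER JOIN sys.objects AS o
        ON o.object_id = ips.object_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND i.name IS NOT NULL;               -- skip heaps, which have no index name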


Optimize queries

One of the most reliable tells that some of your queries are in need of optimization is—you guessed it—slow-running queries.

Before you head down this path, it’s important to define what “optimal” means in relation to queries, so you know when you have achieved your goal. For our purposes, let’s call optimal the point at which a query performs acceptably and will continue to do so for a reasonable amount of time in the future.

With this definition in hand, set up optimization checkpoints to track progress toward your goal and let you know when you may need to re-evaluate. Signs that re-evaluation is needed include:

  • The query now performs adequately
  • The resources needed to optimize further are cost prohibitive
  • You have reached a point of diminishing returns for any further optimization
  • You discover a different solution that renders further optimization unnecessary

Whether you are retroactively tweaking your queries for better database performance or proactively baking it in from the start, there are a few query optimization best practices that will help ensure your queries return quickly and accurately (a short sketch follows the list):

  • Identify your business requirements
  • Avoid SELECT *
  • Create joins with INNER JOIN clauses, not WHERE clauses
  • Define filters using WHERE
  • Say no to leading wildcards
  • Use stored procedures
  • DELETE and UPDATE in batches
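
The sketch below illustrates several of these practices together; the table and column names (dbo.Customers, dbo.Orders and their columns) are purely illustrative:

    -- Name the columns you need instead of SELECT *, join with INNER JOIN rather
    -- than a WHERE-clause join, filter with WHERE, and avoid a leading wildcard.
    SELECT c.CustomerID, c.CustomerName, o.OrderID, o.OrderDate
    FROM dbo.Customers AS c
    INNER JOIN dbo.Orders AS o
        ON o.CustomerID = c.CustomerID
    WHERE o.OrderDate >= '2024-01-01'
      AND c.CustomerName LIKE 'Acme%';      -- 'Acme%' can use an index; '%Acme' cannot

    -- Delete in batches so one huge transaction doesn't hold locks for minutes.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (5000) FROM dbo.Orders
        WHERE OrderDate < DATEADD(YEAR, -7, GETDATE());

        IF @@ROWCOUNT = 0 BREAK;
    END;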

Good database performance is important because it helps to maintain high availability (and meet SLAs), it keeps business-critical systems running smoothly both for customers and for internal business users, and it saves money and increases ROI, which is always a bonus.

About the Author

Janis Griffin

Janis Griffin is a Senior Software Engineer at Quest Software with over 30 years of Oracle DBA experience, including the design, development and implementation of many critical database applications. Before coming to Quest, Janis held DBA positions primarily in the telecom industry, working with both real-time network routing databases and OLTP business-to-business applications. She was responsible for mentoring other DBAs on best practices in database performance tuning and for managing all aspects of the voice network databases. Janis holds several patents on telephony routing database designs and algorithms.
