10 Steps to Optimize SQL Database Performance

By Richard Douglas on September 18, 2020

People are often curious how presenters and bloggers come up with topics for their content. For this particular blog post, I looked at my smart watch. My watch suggests that I take 10,000 steps a day, which got me thinking about the steps required to optimize database application performance. When it comes to some of the applications I have seen and worked with in my career, I think 10,000 steps isn’t too far off!

These days, budgets and timelines are being continuously squeezed—everyone is asked to achieve more with less. These 10 steps will help ensure that your business-critical applications are as zippy as possible while being on time and within budget.

How to Optimize Database Performance:

  1. Deploy a Performance Monitoring Solution
  2. Use the Latest and Greatest Version of SQL Server and the OS
  3. Design Your Server from the Ground Up
  4. Ensure Your Server Can Handle the Expected Workload
  5. Assign the Optimal Data Types
  6. Resolve Blocking and Deadlocks
  7. Optimize Indexes
  8. Embrace a Set-Based Mentality
  9. Don’t Make Changes in Production
  10. Leverage the Knowledge of Your Peers

Deploy a Performance Monitoring Solution

It should come as no surprise that a company renowned for its award-winning database performance monitoring software is suggesting that you monitor your servers. There is obviously a very good reason to do so. When I was a production DBA, it was far easier for me to deliver extra value to the users of applications on servers that were being monitored properly than those monitored using homegrown scripts, or, worse, with no monitoring at all.

The purpose of this step isn’t to cover the build versus buy debate. It is simply to help you build a case for implementing some type of monitoring strategy. One of the ways I like to help people build their business cases is by talking about the various levels of business or process maturity. Monitoring solutions free up time to understand and implement practices that boost business productivity by moving you up the maturity levels, and improving process maturity helps ensure positive, repeatable business outcomes.

Depending on the framework you choose to implement, there are several levels of process maturity. Most frameworks share the following characteristics:

  • Level 0: Initial state, highly reactive and unpredictable service levels
  • Level 1: Just enough alerting to get in front of the most obvious and critical problems in reactive fashion
  • Level 2: Being proactive; knowing where fires have occurred in the past and setting up early warning signals to intercept them and hopefully stop them from happening again
  • Level 3: Providing service level objectives and/or agreements for incident response time and remediation
  • Level 4: Adding value; being a well-oiled machine where nearly everything is expected and those things that aren’t can be handled by highly skilled teams who have a wealth of experience troubleshooting similar issues.

We all want to be at level 4. Deploying a high-quality monitoring solution not only alerts you to issues but also helps you diagnose them and act on your findings, which will help you achieve that goal.

It also means you will have more time to read and implement the other 9 recommendations in this blog post. 

Use the Latest and Greatest Version of SQL Server and the OS

It’s not always possible to use the latest and greatest version of SQL Server. However, I would urge you to try to deploy the latest version that you possibly can. In each release, Microsoft makes significant strides in improving the quality and efficiency of its components, while innovating with new features.

Below is a list of some of the lesser-known SQL Server features introduced across releases that you can take advantage of by migrating to a newer version.

  • SQL Server 2012: New window functions and error handling functions were introduced, allowing developers to create more elegant, robust, and faster code (see the sketch after this list).
  • SQL Server 2014: This version changed the algorithm that creates Virtual Log Files (VLFs). Too many VLFs slow down the writing of transactions to the log. The new default behavior provides better performance out of the box.
  • SQL Server 2016: This version included a reduction in the number of latches required to create temporary objects in tempdb. This greatly helps all applications that have been traditionally curtailed by this shared system resource.
  • SQL Server 2017: Adaptive Query Processing—Microsoft began helping your workload tune itself over time. For example, if a plan was granted too little memory and data had to spill to disk, the plan is granted more memory for the next run.
  • SQL Server 2019: Table Variable Deferred Compilation—This feature defers compilation of statements that reference a table variable until the first actual run, so the optimizer sees real row counts instead of a fixed guess. That information is then used to optimize downstream plan operations.
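
To make the 2012 item above concrete, here is a minimal sketch of the window and error handling functions it introduced (dbo.Sales and its columns are hypothetical):

```sql
-- LAG (SQL Server 2012+) compares each row with the previous one,
-- avoiding a self-join; TRY_CONVERT returns NULL instead of raising
-- an error when a conversion fails.
SELECT  OrderDate,
        Amount,
        Amount - LAG(Amount) OVER (ORDER BY OrderDate) AS ChangeFromPrior,
        TRY_CONVERT(date, ImportedDateText) AS CleanedDate
FROM    dbo.Sales;
```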

These might sound like small updates, but they will help to increase your transactional throughput with no additional hardware. We’ll talk more about increasing throughput in Step 7. The important takeaway is to use the latest version any time you can.

Design Your Server from the Ground Up

Ferrari doesn’t put a new engine into an old Fiat and expect it to be a supercar; likewise, you should not expect stellar performance without putting in effort when it comes to building, deploying, and configuring the SQL Server hosting your database applications.

If a new server is going to be deployed, you need to know that it is fit for its purpose. This provides you with a chance to flex those soft skills and ask the business exactly what the requirements are. Without this information, you are failing to prepare and subsequently preparing to fail.

Understanding the business requirements will help you choose the most suitable SQL Server edition (Developer, Standard, Enterprise, etc.). Sometimes, the only way to support the projected scalability and high availability objectives is to use Enterprise Edition. If Enterprise features are not required, you might be able to save a bundle on licensing costs. The choice of edition should be made by carefully weighing your current and future needs.
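
If you are not sure what an existing instance is running today, the documented SERVERPROPERTY function reports it directly:

```sql
-- Report the edition, version, and patch level of the current instance
SELECT  SERVERPROPERTY('Edition')        AS Edition,
        SERVERPROPERTY('ProductVersion') AS ProductVersion,
        SERVERPROPERTY('ProductLevel')   AS ProductLevel;
```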

Your desired server configuration will vary from system to system based on the requirements of the application in question. There are, however, a few settings that I will regularly change. You should make sure that you fully understand these settings prior to making any changes and that those changes have been tested. Commonly adjusted settings include:

  • Max server memory, to leave headroom for the operating system and anything else running on the box
  • Max degree of parallelism (MAXDOP) and cost threshold for parallelism, which control when and how widely queries go parallel
  • Optimize for ad hoc workloads, which reduces plan cache bloat from single-use queries

Prior to SQL Server 2016, when many of the defaults were improved and tempdb behaviors were adjusted, I would have also changed several of the configurations related to how tempdb worked.
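
As an illustration only (the values below are placeholders; size them for your own hardware and test first), these settings are changed via the documented sp_configure procedure:

```sql
-- Make the advanced settings below visible
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Example values only; these depend entirely on your server and workload
EXEC sys.sp_configure N'max server memory (MB)', 28672;      -- leave headroom for the OS
EXEC sys.sp_configure N'max degree of parallelism', 8;       -- cap threads per parallel query
EXEC sys.sp_configure N'cost threshold for parallelism', 50; -- raise the bar for going parallel
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;   -- stub out single-use plans
RECONFIGURE;
```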

Ensure Your Server Can Handle the Expected Workload

Soak testing and stress testing the hardware you are planning to use for your business-critical environments is of great importance. One testing method is to perform TPC benchmark tests on the servers. Doing so allows you to gauge hardware potential and check for any issues while the hardware is being pushed to its limits. Those same tests help you determine how close real-world performance comes to the theoretical limits; if the two are significantly different, you know you have more work to do to close the gap.

Suffice it to say, having monitoring deployed is an important prerequisite for this step. Ideally, you want to create baselines to ensure the changes you are making have a positive impact on performance.

The following are two popular free tools you can use to ensure things are working correctly:

  • The Diskspd utility—This tool will help you test the underlying storage subsystem you are using. Note that some SAN admins won’t be happy if you run this tool while other systems share the same storage subsystem, so it’s best to check first. If they won’t allow you to use this type of tool, try backing up and restoring large database files to see what throughput can be achieved. A sample command line follows this list.
  • HammerDB—This open-source, load-testing application enables you to test multiple users against a schema so that you can more easily test configuration options.
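
As a rough illustration (the sizes, durations, and mix are placeholders to adapt, and the target path is hypothetical), a Diskspd run simulating an 8KB random workload with a 25% write mix might look like this:

```
diskspd -c20G -d60 -b8K -r -w25 -t8 -o8 -h -L D:\diskspdtest\testfile.dat
```

Here, -c20G creates a 20GB test file, -d60 runs for 60 seconds, -b8K matches SQL Server’s 8KB pages, -r randomizes access, -w25 makes a quarter of the operations writes, -t8 and -o8 set threads and outstanding I/Os, -h bypasses caching, and -L captures latency statistics.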

Assign the Optimal Data Types

At its heart, you can think of SQL Server as a storage and retrieval system. The way data is stored can have a huge impact on how fast it can be retrieved. So, how is data stored in SQL Server?

Our data has to be stored somewhere, and our database records are actually stored on what we call pages. Each of these pages is 8KB in size. There are several different data types, and each takes up a different amount of space on a data page. If you know how many attributes are in each record and the scope of what those attributes should contain, then you can assign the optimal data type for that kind of data.

The higher the density of data per page, the fewer resources SQL Server needs to use to read that data. This focus on operational efficiency helps us do more with less, which is the essence of performance tuning.
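
As a small sketch of the arithmetic (the table and columns are hypothetical): a page offers roughly 8KB for row data, so halving the row size roughly doubles the rows per page and halves the I/O needed to read them.

```sql
-- Loose choices: every column is wider than the data requires
CREATE TABLE dbo.Readings_Loose (
    ReadingID   bigint,    -- 8 bytes, even though 2 billion rows will never arrive
    ReadingDate datetime,  -- 8 bytes, though only the date is needed
    Region      nchar(20), -- 40 bytes, fixed-width Unicode for a short ASCII code
    Amount      float      -- 8 bytes, approximate where exact values are wanted
);

-- Tight choices for the same data, at less than half the width
CREATE TABLE dbo.Readings_Tight (
    ReadingID   int,         -- 4 bytes covers ~2.1 billion rows
    ReadingDate date,        -- 3 bytes
    Region      varchar(20), -- actual length + 2 bytes of overhead
    Amount      decimal(9,2) -- 5 bytes, exact
);
```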

Resolve Blocking and Deadlocks

Locking is a natural consequence of any multi-user environment and ensures consistency of data. In its default concurrency mode, read committed, SQL Server will place a shared lock (e.g., row, page, partition, object) when performing a SELECT operation. This lock will stop write operations from occurring on that same object (depending on the level of the lock and where the lock needs to be placed). The same is true in reverse—an exclusive lock taken for writing will block a shared lock requested by a SELECT operation.

There is a well-known saying that summarizes this behavior: “Readers block writers, and writers block readers.” When this occurs, it is generally known as “blocking” (less frequently, “live locking”). The performance impact of blocking is the negative consequence of ensuring data consistency. Ideally, you want to reduce the amount of locking as much as possible—the result is a higher level of concurrency (i.e., more simultaneous users of the application).
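
When you suspect blocking, a quick query against the documented sys.dm_exec_requests DMV shows who is waiting on whom (a minimal sketch):

```sql
-- Sessions currently blocked, and the session blocking each of them
SELECT  session_id,
        blocking_session_id,
        wait_type,
        wait_time,   -- milliseconds spent waiting so far
        command
FROM    sys.dm_exec_requests
WHERE   blocking_session_id <> 0;
```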

The default concurrency method, read committed, is also referred to as pessimistic concurrency. SQL Server also offers an optimistic concurrency method (optimistic is the default for Oracle); in this mode, readers don’t block writers and writers don’t block readers. Optimistic concurrency works because a copy of the data is made when a change is requested. This copy is stored in a special area of tempdb called the version store, and readers access this temporary copy of the data. As you might expect, there are some trade-offs to using this method, so any changes should be thoroughly tested.
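
If testing shows optimistic concurrency fits your workload, switching the default read committed behavior to its optimistic variant is a single database-level change (the database name is hypothetical, and tempdb must have room for the version store):

```sql
-- Turn on Read Committed Snapshot Isolation for one database.
-- WITH ROLLBACK IMMEDIATE terminates in-flight transactions so the
-- change can proceed; run this during a quiet period.
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```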

One of the best ways to reduce the impact of locking is to have a good indexing strategy. We will be investigating what makes a good indexing strategy in the next section.

Another potential side effect of locking, one that has reached infamous levels, is a deadlock. As you might have noticed, it doesn’t sound pleasant, and the reality is no better. A deadlock occurs when two or more queries try to acquire locks on two or more objects in different orders, leaving each session waiting for a lock the other holds.

Figure 1 shows a contrived example, where two SQL Server jobs have been designed to create a deadlock scenario.

Figure 1: Deadlock example in SolarWinds SQL Sentry

If we follow the numbers in the deadlock example, we can see that the following occurred:

  1. An exclusive lock is placed on a row on the table Rainfall_Rates by session 88.
  2. An exclusive lock is placed on a row on the table Consumption_Rates by session 94.
  3. An update lock is attempted on a row on the table Rainfall_Rates by session 94.
  4. An update lock is attempted on a row on the table Consumption_Rates by session 88.

In this particular scenario, Session ID 88 was rolled back. The application should be able to handle a deadlock scenario and either roll back previous transactions or issue a retry to make sure the data ends up in the desired state.
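
One common pattern for handling the retry, sketched here against a table from Figure 1 with invented update logic, is to catch deadlock error 1205 and retry a bounded number of times:

```sql
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Consumption_Rates SET Rate = Rate * 1.1 WHERE RegionID = 4;
        COMMIT TRANSACTION;
        SET @retries = 0;       -- success, stop looping
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;  -- we were the deadlock victim: try again
        ELSE
        BEGIN
            SET @retries = 0;
            THROW;              -- a different error, or out of retries: surface it
        END
    END CATCH
END;
```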

Deadlocks can be resolved in many ways. Often, code is refactored so that objects are always accessed in the same order, which removes the circular wait. There are also several other approaches that can reduce the risk of collision through more traditional query tuning methods.

Optimize Indexes

Indexing is a core performance tuning concept that every Microsoft Data Platform professional should be extremely comfortable with. If you’re interested in learning more about indexing, I recommend reading Erin Stellato’s “An approach to index tuning,” part 1 and part 2.

I’ve written and spoken a lot about indexes (or indices) over the years. Many experts believe indexing strategy is more of an art than a science. I agree, to some degree, but you must find a careful balance between read and write performance and resource management. Indexes don’t provide free performance; there is an associated overhead. You must be careful not to create so many indexes that Insert, Update, and Delete performance degrades or that storage capacity is wasted.
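
For example, a covering index keys on the search predicate and carries the remaining selected columns in the INCLUDE list, so the query never has to visit the base table (the table and columns here are hypothetical):

```sql
-- Covers: SELECT OrderDate, Amount FROM dbo.Orders WHERE CustomerID = @CustomerID
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, Amount);
```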

Using SQL Sentry’s built-in query analysis and optimization tool, Plan Explorer, it is easy to see which attributes are being used by a query and need covering via an index. Figure 2 shows why a particular index was being used and how to amend an existing index to fix the scanning issue.

Figure 2: An execution plan displayed in SQL Sentry’s Plan Explorer

Embrace a Set-Based Mentality

About a year ago, I was working with a potential customer whose company was suffering from severe issues with some of their data loads. I was soon able to identify the long-running SQL queries and drill down into the query plan using SQL Sentry’s Top SQL functionality.

When most people talk about Set Based versus Row by Agonizing Row (RBAR), they talk about cursors and looping through one record at a time. In this case, it was a single query, but that query had a scalar user-defined function (UDF) within the SELECT clause, so the function had to fire against every single row in the result set that met the WHERE clause criteria, at a significant cost. Those of you who are used to looking at query plans will know that the true cost of this scalar UDF is not actually shown inside an execution plan.

In this example, it took me about 10 minutes to rewrite this UDF’s logic into the SELECT statement so that it ran once against the entire result set—a much more efficient use of SQL Server’s resources. In fact, most of that time was spent making sure all the REPLACE statements and parentheses lined up correctly. Everyone has their idiosyncrasies.
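
The shape of that rewrite looked something like this (the names are invented, and the real function chained many more REPLACE calls):

```sql
-- Before: a scalar UDF runs once per row and hides its cost in the plan
SELECT  OrderID,
        dbo.CleanProductCode(ProductCode) AS ProductCode
FROM    dbo.OrderLines;

-- After: the same expression inlined, evaluated as part of set-based processing
SELECT  OrderID,
        REPLACE(REPLACE(ProductCode, '-', ''), ' ', '') AS ProductCode
FROM    dbo.OrderLines;
```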

Although it is not always possible to remove RBAR-style programming, there are ways that you can improve performance. For more information on how you can reduce the overhead of cursors, check out Aaron Bertrand’s blog post on cursor options.

Don’t Make Changes in Production

The best practice of not making changes in production does not mean ban all of your users from entering data through their applications (although in some circumstances, doing so might be a good thing). Instead, it’s all about reducing the financial cost of changes being made in production. It is often said that it is cheaper to fix something in development than it is in production. In my experience, this saying is true.

If possible, all changes should be tested in at least one other multi-user environment, even if it is a synthetic test like the one mentioned earlier in this blog post. It is critical that you can quantify the impact of the change being made and stand behind that figure.

Getting things right the first time helps you save operational budget. Utilizing features such as the index analysis capabilities within Plan Explorer is a great way to ensure that SQL queries are optimized before being launched into the wild, where you would have to hunt through murky waters to find a killer query. Don’t put yourself in that position—make sure you collaborate with your developers or vendors ahead of time to make sure very few of these killer problems ever make it into production.

Leverage the Knowledge of Your Peers

No matter the size of your IT department, at some point, you will need help and advice from others. By reading this resource, you have taken a step into the wider world of Microsoft Data Platform performance tuning! There are, however, many other resources that I recommend you reference.

The SolarWinds team provides several resources to help you understand more about how SQL Sentry can help you identify and address SQL Server pain points, as well as educational resources about SQL Server and its associated technologies.

  • SQLPerformance.com—This site, sponsored by SolarWinds, provides high-quality articles on SQL Server performance tuning and internals. 
  • SQLskills Wait Types Library—A library of wait statistics created by Paul Randal and powered by SolarWinds.

Richard Douglas

Richard (@SQLRich) is a Principal Solutions Engineer at SentryOne, specializing in our SQL Server portfolio offering in EMEA. He has worked with SQL Server since version 7.0 in various developer and DBA roles and holds a number of Microsoft certifications. Richard is a keen member of the SQL Server community; previously he ran a PASS Chapter in the UK and served on the organizing committee for SQLRelay.
