In this post I am going to discuss a couple of key points for a DBA to consider when thinking about storage needs for their SQL Servers. I’ll also show you some SolarWinds SQL Sentry features that will keep you on top of this information to help you make informed decisions—and even see into the future!
What’s important for selecting a disk subsystem?
Often when thinking about disk subsystems, the storage space is the first thing that DBAs consider, and for good reasons:
- Running out of storage space at the worst possible time is fuel for DBA nightmares
- Storage space requirements are known, and related storage costs are simple to calculate
- It’s straightforward to monitor storage space consumption and know when things are becoming critical, and being able to measure something makes it much easier to account for in planning
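As a sketch of that kind of monitoring, the query below reports free space on every volume hosting a SQL Server database file, using the documented `sys.dm_os_volume_stats` function (available since SQL Server 2008 R2); thresholds and alerting are left to whatever tooling you use.

```sql
-- Free space per volume that hosts SQL Server database files.
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1048576 / 1024.0     AS total_gb,
    vs.available_bytes / 1048576 / 1024.0 AS free_gb,
    CAST(vs.available_bytes * 100.0 / vs.total_bytes AS DECIMAL(5, 2)) AS pct_free
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
ORDER BY pct_free;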
The second thing DBAs should consider is speed, and skipping it would be a mistake. Poor storage performance can make things painfully slow and create a terrible experience for end users. Although it’s not the same game-over scenario that running out of storage causes, it is vital that end users are able to use their applications without waiting for the disk subsystem to catch up. Once that starts to happen, expect additional calls as your database maintenance operations take longer and your already frustrated users try to access information.
What should you know for storage performance requirements?
Knowing your storage speed requirements can be a bit more complicated than knowing your space requirements. To get started, you need to understand the I/O load on your system: throughput needs, workload types, and overall performance expectations.
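One way to get a first look at that I/O load is the `sys.dm_io_virtual_file_stats` DMV, which tracks cumulative reads, writes, and bytes transferred per database file since the last SQL Server restart. A minimal sketch:

```sql
-- Cumulative I/O per database file since the last restart.
-- The read/write mix and bytes transferred hint at the workload profile.
SELECT
    DB_NAME(fs.database_id)           AS database_name,
    mf.physical_name,
    fs.num_of_reads,
    fs.num_of_writes,
    fs.num_of_bytes_read    / 1048576 AS mb_read,
    fs.num_of_bytes_written / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
JOIN sys.master_files AS mf
    ON mf.database_id = fs.database_id
   AND mf.file_id     = fs.file_id
ORDER BY fs.num_of_bytes_read + fs.num_of_bytes_written DESC;
```

Because the counters are cumulative, sampling twice and comparing the deltas gives a truer picture of your current load than a single snapshot.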
Online transaction processing (OLTP) vs. online analytical processing (OLAP) workloads (be sure to pick storage specs and configurations that match what your application or overall workload needs):
- OLTP doesn’t tolerate high latency; it needs higher transactions/sec and batch requests/sec
- These workloads are usually write heavy, or at least balanced between reads and writes
- OLAP requires higher throughput, meaning high MB transfers/sec
- These business intelligence workloads are read heavy, often scanning full tables
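To see where your own workload sits, you can pull the relevant counters from `sys.dm_os_performance_counters`. Note these are cumulative counters despite the "/sec" names, so sample twice and divide the delta by the interval to get a real per-second rate:

```sql
-- Point-in-time snapshot of workload-type counters.
-- Sample twice and diff cntr_value over the interval for a true rate.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec', N'Transactions/sec');
```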
SSD vs. HDD
SSDs can deliver significant performance gains, and while they are not new, they are still more expensive than HDDs. If your budget can’t cover moving all your data to SSD, consider using it for tempdb; because tempdb is shared, every database that uses those tempdb files benefits.
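Relocating tempdb is done with `ALTER DATABASE ... MODIFY FILE`; the new location takes effect at the next SQL Server service restart. A sketch, assuming a hypothetical SSD volume `S:\` and the default logical file names:

```sql
-- Repoint tempdb data and log files at an SSD volume (S:\ is an
-- assumed drive letter; adjust paths and logical names to your setup).
-- Takes effect after the next service restart, after which the old
-- files can be deleted.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'S:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'S:\tempdb\templog.ldf');
```

If your tempdb has multiple data files, each one needs its own `MODIFY FILE` statement.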
How does SQL Sentry help DBAs with storage knowledge?
The Disk Space feature in SQL Sentry helps you find storage capacity issues, understand where various database files reside on the disk system, and determine whether available disk space is optimally used. Storage Forecasting is an intelligent feature that uses SQL Server Machine Learning Services to create daily usage forecasts for all your disks, so you’ll know when they are expected to run out of space.
The Disk Activity feature in SQL Sentry breaks down disk activity and latency at the controller, physical disk, and file levels, highlighting bottlenecks at any point in a disk system. It displays graphs showing read and write latency, IOPS, and throughput at the disk and file levels. You can use these metrics to understand your current I/O workload and choose the optimal disk subsystem.
The Dashboard feature shows disk I/O metrics through ms/Read and ms/Write and provides insight into the SQL Server workload type via the SQL Server Activity charts (e.g., Transactions/sec). You can baseline all these metrics to see how your storage performs over time and after a move to new storage.
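If you want to sanity-check those latency numbers yourself, the same `sys.dm_io_virtual_file_stats` DMV can compute average milliseconds per read and per write for each file since startup:

```sql
-- Average I/O latency per database file since the last restart,
-- comparable to ms/Read and ms/Write metrics.
SELECT
    DB_NAME(fs.database_id) AS database_name,
    mf.physical_name,
    fs.io_stall_read_ms  / NULLIF(fs.num_of_reads, 0)  AS avg_read_ms,
    fs.io_stall_write_ms / NULLIF(fs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
JOIN sys.master_files AS mf
    ON mf.database_id = fs.database_id
   AND mf.file_id     = fs.file_id
ORDER BY avg_read_ms DESC;
```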
If you’d like to see firsthand how SQL Sentry can help you stay on top of your storage requirements—among other things—download a free 30-day trial and take it for a spin!