Will SSD Improve Your Database Performance?

The fog around solid state storage is lifting. Most storage and database administrators are convinced that solid state technology is the most practical next step in storage performance. The challenge is that solid state performs so well that there is now concern over whether the databases placed on the platform can take full advantage of that boost in performance. Unlike mechanical storage, an investment in solid state storage cannot afford to go underutilized; the price premium is simply too high. Making sure that the database will take full advantage of solid state storage is the next big step in solid state adoption.

The Impact Of Poor Database Performance

Fixing poor database performance is critical in the modern data center. While unstructured or file-based data is growing faster in capacity, the value of database data is at an all-time high. In many cases the speed at which information can be captured, retrieved or analyzed has a direct correlation to the revenue generated by the company. In other words, better performance equals more profit, and customers are looking for ways to accelerate databases so they can handle more requests from more users more quickly. With this need for performance, though, comes a need for high availability. All the investment in performance will go to waste if the database is brought down by a failed part.

The Database Challenge

Databases are more complex than a single file being read sequentially. In essence they are their own file system, with a variety of files (application code, tables and queries) and metadata (indices and logs). How users and application code interact with these files has a dramatic impact on performance. Performance can be degraded either by thousands of users accessing small bits of information or by a single application instance accessing large chunks of data.

When a database performance problem arises, identifying the root cause can be complex. With IT staffs stretched as thin as they are, it may simply be overwhelming to attempt a root cause analysis. Even if the problem can be identified, the expenditure to “fix” it through more efficient coding may exceed the cost of buying new hardware. In a valiant effort, storage managers and database administrators may try to perform a detailed analysis of the environment to pinpoint the exact cause of the problem. In reality, this analysis requires a dollar investment in software tools to report that information and a time investment to understand what those reports mean. It often leads to an ‘analysis stall’, where the goal of improving performance is paralyzed by analysis. In trying to either avoid or overcome this stall, organizations often embark on the age-old solution of throwing more hardware at the problem. That usually means buying faster servers with more memory, or larger disk arrays with even more mechanical drives to increase spindle ratios. At this point solid state technology can become an attractive strategy for fixing database performance problems.

The SSS Challenge

An alternative to adding faster servers or a greater number of hard drives is solid state storage, which offers the high IOPS and low latency that many of these database environments crave. The challenge when considering solid state is that there can be a staggering number of choices. There are internal solid state disk drives (SSDs) and PCIe solid state cards, as well as appliance-based solid state storage systems. Then, once the format is chosen, a decision has to be made as to whether the solid state storage should act as a cache or serve as the storage platform itself. Once again the user is left wondering, “what is the right decision to make?”

Ideal Database Tuning

Ideal database tuning is done neither by simply throwing hardware at the problem nor by over-examining it, which can cause analysis stall. There are basic statistics that will lead the storage manager and database administrator to a solution for most database performance issues. The first and most basic metric to consider is CPU utilization. If CPU utilization is fairly high then, usually, there is a database coding or structural problem, and a faster, more powerful hardware combination may be needed. The database may need to be parallelized or sharded so that more servers can participate in the workload. This, of course, is the most difficult and expensive problem to fix. The good news is that it is also the least likely to occur. In most cases CPUs and internal memory are underutilized, not over-utilized.

If CPU utilization is relatively low, less than 30% on average, then faster storage may be the ideal solution for that database. While many statistics can be tracked, including queue depth and latency, potentially the simplest is I/O wait time. Oracle, for example, provides a reporting capability called Automatic Workload Repository (AWR), which groups ‘waits’ into 12 categories and lists the top five. If, over the timespan of the report, the bulk (75%) of these wait times are I/O related (usually system and user I/O), then adding a faster storage system may be a more direct way to fix the problem, although the application could also be recoded for greater efficiency. In SQL Server a similar analysis can be done by examining the Dynamic Management Views.
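The triage logic above can be sketched in a few lines of Python. This is a minimal sketch of the article's rules of thumb, not a tool: the 30% CPU and 75% I/O-wait thresholds come from the discussion above, and the function name and return strings are illustrative.

```python
def storage_triage(avg_cpu_util: float, io_wait_fraction: float) -> str:
    """Rough first-pass triage of a database performance problem.

    avg_cpu_util: average CPU utilization over the reporting window (0-1).
    io_wait_fraction: share of total wait time that is I/O related (0-1),
    e.g. taken from an AWR report's top wait events.
    Thresholds (30% CPU, 75% I/O waits) are rules of thumb, not hard limits.
    """
    if avg_cpu_util >= 0.30:
        # High CPU points at a coding or structural problem: recode,
        # parallelize, or shard before buying faster storage.
        return "tune or shard the database"
    if io_wait_fraction >= 0.75:
        # Low CPU and storage-dominated waits: faster storage is the
        # more direct fix.
        return "add faster storage"
    return "investigate further"

print(storage_triage(0.18, 0.80))  # → add faster storage
```

The same check works with SQL Server numbers; only the source of the wait statistics (the Dynamic Management Views rather than AWR) changes.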

With this information an investment in faster storage can be made confidently. The only remaining choices are whether that storage should be mechanical-drive based or solid state, and whether it should be local to the server or shared.

Storage Performance Options

The options in storage performance acceleration are as numerous as the vendors claiming to offer a solution. Adding mechanical drives can continue to improve performance as long as the issue is one of disk queuing rather than latency. Disk queue depth is essentially the number of individual storage requests that the array can handle at any given point in time; latency is the amount of time it takes to handle each request. Both issues show up in the I/O wait time in AWR. While queuing can be addressed by adding drives, at some point this strategy is no longer cost, space or power efficient, and solid state storage should be examined. A single solid state storage system can provide the same performance as hundreds of disk drives, and do so in a nearly latency-free environment. It often represents a significant data center floor space and power savings as well. In most environments, if additional hard drives are being considered to address a performance issue, then solid state should be considered as a viable alternative.
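The link between queue depth and latency can be made concrete with Little's law: the average number of outstanding requests equals the request rate multiplied by the time each request spends in the system. A short sketch, with illustrative IOPS and latency figures (not measurements from any particular array):

```python
def avg_outstanding_requests(iops: float, latency_s: float) -> float:
    # Little's law: average concurrency = arrival rate * time in system.
    return iops * latency_s

# A mechanical-drive array pushing 5,000 IOPS at ~8 ms average latency
# carries roughly 40 requests in flight at any instant.
disk_queue = avg_outstanding_requests(5000, 0.008)

# A solid state system serving the same 5,000 IOPS at ~0.2 ms latency
# carries roughly one request in flight: the queue effectively vanishes.
sss_queue = avg_outstanding_requests(5000, 0.0002)

print(disk_queue, sss_queue)
```

This is why adding spindles helps with queuing (the load spreads across more drives) but never helps with per-request latency, while solid state attacks both at once.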

The next decision to be made is which type of solid state storage should be used: local or shared, DRAM or flash. This decision is largely dependent on the workload mix. The higher the level of write I/O, the more a DRAM-based solution should be considered. As Storage Switzerland discussed in a recent article, “The Advantages of DRAM SSD”, DRAM-based systems like the Kaminario K2 have a distinct advantage in high-write environments and should be considered especially where write-heavy I/O waits, like redo/undo and transaction logs, are responsible for much of the latency. In the past, DRAM-based systems raised concerns around volatility: unlike flash, when power is removed the data in DRAM is lost. The K2 addresses this with UPS-protected DRAM backed by mirrored hard drives. On power failure, batteries keep the system running until the entire contents of memory can be flushed to the hard drives, preventing data loss. When power is restored, the system automatically copies the data from the hard drives back into DRAM storage.
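The workload-mix decision can also be expressed as a simple heuristic. The sketch below is purely illustrative: the 50% write-fraction cutoff is an assumed example value, not a figure from this article or from any vendor.

```python
def suggest_media(write_fraction: float, write_heavy_cutoff: float = 0.5) -> str:
    """Illustrative media choice based on a workload's write mix.

    write_fraction: share of I/O that is writes (0-1), e.g. derived from
    an AWR report. The 50% cutoff is an assumed example, not a spec.
    """
    if write_fraction >= write_heavy_cutoff:
        # Write-heavy mixes (redo/undo, transaction logs) are where
        # DRAM's write-latency advantage matters most.
        return "DRAM-based SSS"
    return "flash-based SSS"

print(suggest_media(0.7))  # → DRAM-based SSS
```

In practice the cutoff would be tuned per environment, and as the article notes, a DRAM-based system handles either answer well.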

The final decision, and one that is often overlooked, is the availability of the storage system. Many solid state storage devices in this class do not have high availability built in and, as a result, count on software to maintain redundancy. This adds potentially greater cost and more complex integration, and may even degrade performance. Systems like the K2 have high availability and data integrity functions built in, which simplifies these processes. High performance and high availability go hand in hand, and it is important that the solid state storage system give both equal priority.

Databases Need Solid State Storage

In the final analysis, most databases can benefit from solid state storage. The CPU utilization test is the simplest and is reasonably accurate. The more difficult task is understanding the workload and selecting the correct solid state storage system to use as the performance platform. Reporting capabilities built right into the database environment, like Oracle's AWR, can clarify the workload's read/write mix. The advantage of DRAM-based systems like the K2 is that they provide ideal flexibility, since read-heavy, write-heavy or mixed workloads will all benefit from being on DRAM-based storage. More importantly, as those workloads change, the K2 is able to handle them without modification to the storage itself.

Kaminario is a client of Storage Switzerland


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.

