One of the key discussion tracks at Powering The Cloud / SNW Europe is “Re-Thinking Your Data Center Architecture”. Storage is, of course, part of the data center, and its architecture also needs to be re-thought. There are two key drivers for this: the demand for high performance, which can be addressed by flash-enabled storage, and the demand to store record amounts of unstructured data, which can be addressed by object storage.
Data centers need to consider how to best move to both of these storage types. In this series of three columns, we will discuss each. For more information, check out the “Re-Thinking Your Data Center Architecture Track” or grab some time with one of the Storage Switzerland Analysts attending the event.
Implementing flash is as much an architecture decision as it is a storage media decision. Solid state storage removes nearly all of the latency from storage I/O, so where you “place” that investment is critical to getting the maximum return on it. It is important to understand that each placement location has certain pros and cons associated with it.
Part I – What Are the Storage Architectures for Flash?
Shared Flash Storage
The advantage of shared flash storage is that it closely resembles the architecture you most likely have now: a shared, hard-drive-based storage system. Shared storage is an ideal way to host virtualized server and desktop environments, as well as clustered applications and databases. It is also ideal if many connected hosts could benefit from the performance boost that flash can deliver. Lastly, most shared storage systems have high availability features built in.
The challenge is that flash-based storage systems expose any latency in your storage architecture design, and that latency may keep you from getting maximum return on your flash investment. As a result, a shared flash acquisition should typically include a storage network upgrade to at least 8Gb Fibre Channel or 10GbE. See how we bore this out in our recent lab test and webinar. Moving to 16Gbps Gen 5 Fibre Channel is ideal and pays dividends even if you're mixing it with 8Gb FC.
Server Side Flash
To get around some of the latency and cost associated with a storage network upgrade, a flood of server-side flash solutions has been introduced to the market. These solutions typically consist of a PCIe SSD or a drive form factor solid state disk (SSD), coupled with either caching software or an operating system configured to use the device as a virtual memory pool.
This type of solution is ideal when a specific number of servers need a performance boost, or when the cost of a network upgrade exceeds the cost of a server-side implementation. These environments also need to be relatively cache friendly (lots of small random I/O with a low change rate).
Deciding between shared flash and server-side flash is relatively simple. Is the cost of implementing server-side flash and caching software lower than the cost of implementing shared flash plus a network upgrade? The more servers/hosts that need the performance boost, the more likely a shared flash solution will be attractive.
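This comparison can be sketched as a simple cost model. The sketch below is purely illustrative: all of the prices, the per-server cost structure, and the function names are hypothetical assumptions, not figures from this column. The point it demonstrates is the crossover: the fixed cost of a shared array is amortized across more hosts as the server count grows.

```python
# Hypothetical cost model for the server-side vs. shared flash decision.
# None of these prices come from the column; they only illustrate the shape
# of the comparison.

def server_side_cost(num_servers, ssd_cost=2500, cache_sw_cost=1500):
    """Per-server cost: a PCIe/drive-form-factor SSD plus a caching software license."""
    return num_servers * (ssd_cost + cache_sw_cost)

def shared_flash_cost(num_servers, array_cost=80000,
                      hba_cost=1200, switch_port_cost=800):
    """Fixed array cost plus a network upgrade (HBA and switch port per host)."""
    return array_cost + num_servers * (hba_cost + switch_port_cost)

def pick_architecture(num_servers):
    if server_side_cost(num_servers) < shared_flash_cost(num_servers):
        return "server-side flash"
    return "shared flash"

# With these assumed prices, a handful of hosts favors server-side flash,
# while dozens of hosts favor a shared array.
for n in (4, 16, 64):
    print(n, pick_architecture(n))
```

With the assumed numbers, 4 or 16 hosts come out cheaper on server-side flash, while 64 hosts tip the decision to a shared array, which matches the column's rule of thumb that more hosts make shared flash more attractive.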
If, after your comparison, the shared flash option has a small cost disadvantage, these systems have three advantages that might make them worth the additional investment. First, shared flash is generally a better complement to the above-mentioned clustered or virtualized environments. Second, it is generally more efficient from a capacity utilization perspective, since it is not siloed to particular servers. Third, all-flash systems are generally simpler to manage: since everything is fast all the time, there is no tuning involved.
Server Side Networking
There is one other storage architecture to consider: a server-side storage network. These implementations are essentially a hybrid of the upgraded storage network and server-side flash. They implement a server network dedicated to storage, typically based on either 10GbE or InfiniBand. In some cases, it utilizes the same network that was implemented for vMotion.
Server-side networks leverage the flash installed in each server and aggregate those cards into a common pool of storage. Data is striped across the various flash cards, similar to how a scale-out storage system stripes its data. This provides the advantages of efficient storage capacity and higher availability while still providing the performance attributes of server-side PCIe flash.
While this server network introduces latency, the only devices participating on the storage network are typically PCIe SSDs. They do not have to work around slower HDD-based storage devices, so relative latency should be better. Also, some of the solutions have data locality awareness: they keep a local copy of each server's data inside that server, so that reads do not have to cross the server network. Writes are distributed, as described above, for redundancy.
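The data locality behavior described above can be sketched in a few lines: writes are replicated across nodes for redundancy, while reads are served from the local copy when one exists, avoiding a hop over the server network. This is a minimal illustration, not any vendor's actual implementation; the class, method names, and replica count are all assumptions.

```python
# Minimal sketch of data locality awareness in a server-side flash pool.
# Class and method names are hypothetical; real products handle striping,
# failure, and consistency far more completely.

class FlashPool:
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes          # node name -> that node's local flash (a dict)
        self.replicas = replicas

    def write(self, origin, key, value):
        # Keep one copy on the originating server for local reads...
        targets = [origin]
        # ...and place the remaining replicas on other nodes for redundancy.
        others = [n for n in self.nodes if n != origin]
        targets += others[:self.replicas - 1]
        for n in targets:
            self.nodes[n][key] = value
        return targets

    def read(self, origin, key):
        # Local read: served from the server's own flash, no network hop.
        if key in self.nodes[origin]:
            return self.nodes[origin][key], "local"
        # Remote read: fetch from whichever node holds a replica.
        for store in self.nodes.values():
            if key in store:
                return store[key], "remote"
        raise KeyError(key)

pool = FlashPool({"esx1": {}, "esx2": {}, "esx3": {}})
pool.write("esx1", "vmdk-42", b"block")
print(pool.read("esx1", "vmdk-42")[1])   # served locally on the writing host
print(pool.read("esx3", "vmdk-42")[1])   # served remotely from a replica
```

The design choice this illustrates is the asymmetry: writes always pay the network cost (for redundancy), but the common case of a server reading its own data stays local.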
Choosing between shared flash storage and a server-side storage network is more difficult. Both require a network upgrade of some kind, but the server network may be less expensive and something the staff has greater familiarity with (IP). If the solution has the described data locality feature, then read performance should be substantially better on a server-side network than on a shared storage system.
That said, a shared storage system may be more economical and have better data efficiency features, like deduplication and compression, than are typically found in a server-side network solution. Also, shared storage solutions typically have more mature feature sets, like snapshots and replication, when compared to server-side solutions.
In our next column, we will walk through the architecture decision making process to help you select the right architecture for your data center.
Click here to register for Powering The Cloud. If you are an IT end user or channel partner, enter the promo code S66M13 for free registration to all three Powering The Cloud events: SNW Europe, Datacenter Technologies and Virtualization World.
