What Storage Product Should You Use in the Cisco UCS Storage Server?

Customers of Cisco’s UCS 3260 Storage Server can run any software-defined storage product that is available as software only. The question is, how does one pick among the many products on the market? Understanding the performance and functionality needs of the applications that will use the storage server is crucial to answering this question.

The Cisco UCS 3260 is a purpose-built x86 system designed specifically for use as a storage server. It is a dual-node server that offers up to 36 cores and 512GB of DDR3/DDR4 RAM per node. It supports NVMe and flash memory, and it can hold up to 600 TB of raw disk drive capacity per system. It also offers dual 40Gb ports for high throughput. Onto this platform you can load any software-only software-defined storage (SDS) product.

Why SDS?

Software-defined products have become quite popular, and their popularity has changed the direction of multibillion-dollar companies. Some have even suggested that the days of traditional compute, storage, and networking are over. If you’ve often wondered why software-defined products, especially software-defined storage, are so popular, read on.

The primary value of SDS is that it enables direct competition between vendors, because it is much easier to swap out a storage product you are not happy with. In the old days, once a large storage OEM got its foot in the door, the deal was essentially done. Due to the typical three- to five-year depreciation cycle of hardware, you were essentially stuck with that hardware until it fully depreciated. Since many companies kept their hardware even longer than the depreciation schedule, the length of time a given hardware product stayed in the data center was quite long.

Compare this to how things are done with a software-defined product. Your hardware purchase, such as the Cisco UCS 3260, is one decision. Your software purchase is another, completely separate decision (as long as the software vendor has certified its product on that hardware). Replacing a storage product that you are unhappy with is much easier if you only have to pay for replacing the software. There are still technical challenges to overcome, of course, but customers now have much more leverage if they know they can keep the hardware.

Similarly, separating the hardware and software allows you to completely change use case and still reuse your hardware. For example, your plan might have started with a dedicated backup appliance that supports on-board deduplication as part of your backup architecture. But what if the latest release of your backup software supports deduplication, and the vendor suggests that the best way to use the new version is to move to object storage as the backup target? If you were using a traditional appliance, you’d have no choice but to keep using it. If you were using a software-defined deduplication system, you could uninstall it and install an object storage system on the same hardware.

Finally, SDS allows you to leverage the different rates of innovation between hardware and software. By their very nature, software vendors can update and enhance their products every day, and companies using modern development methods do just that. In contrast, hardware products tend to change much more slowly. Separating the two allows you to take advantage of this difference. In addition, if there is an advancement in hardware that would benefit your overall system, many SDS systems allow you to intermix old hardware with new hardware, which is not something typically done with legacy storage. Finally, a hardware platform as powerful as Cisco’s UCS 3260 could support multiple storage software options at the same time.

Storage Use Cases

There are three primary use cases for storage: primary storage, copy data, and long-term archive. The reader is probably familiar with primary storage and long-term archive. Copy data is a newer term that encompasses all of the various reasons we keep copies of data, including backup, testing, development, and big data analytics. Technically, long-term archive is also one of those reasons, but some people logically separate that function from a copy data management (CDM) system.

Each of these use cases has unique needs and requirements. The primary storage use case’s main requirement is performance and accessibility for multiple users and applications. High-performance applications are typically served by SAN connectivity or by directly connected storage in a hyper-converged infrastructure (HCI) approach. CDM systems require significant tracking of changes at the sub-file level, which allows for multiple virtual copies of the data over time without requiring multiple physical copies. This is typically accomplished through some type of snapshot mechanism. It may still be important for disaster recovery purposes to create multiple physical copies, which is why CDM applications also need the ability to replicate data to multiple locations.
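To make the idea of virtual copies concrete, here is a minimal, purely illustrative Python sketch of a copy-on-write snapshot scheme, the general mechanism that approaches like this rely on. The class and method names are hypothetical and do not come from any particular product; each snapshot stores only the blocks that change afterward, so many recovery points share the same underlying data.

```python
# Illustrative copy-on-write snapshot model (not any vendor's implementation).
# Each snapshot records only the blocks written after it was taken, so many
# recovery points can share the same underlying data.

class CowVolume:
    def __init__(self):
        self.blocks = {}        # live block map: block number -> data
        self.snapshots = []     # one delta dict per snapshot (block -> old data)

    def write(self, block_no, data):
        """Write a block; preserve the old contents in the newest snapshot first."""
        if self.snapshots and block_no not in self.snapshots[-1]:
            # Copy-on-write: save the current contents only the first time the
            # block changes after the latest snapshot.
            self.snapshots[-1][block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

    def snapshot(self):
        """Create a new virtual copy; it costs almost nothing until blocks change."""
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def read_snapshot(self, snap_id, block_no):
        """Read a block as it existed at snapshot time."""
        # The first delta at or after the requested snapshot that preserved this
        # block holds its contents at that point in time.
        for delta in self.snapshots[snap_id:]:
            if block_no in delta:
                return delta[block_no]
        return self.blocks.get(block_no)   # unchanged since the snapshot

# Example: two snapshots of the same block, no full physical copies made.
vol = CowVolume()
vol.write(0, "v1")
s0 = vol.snapshot()
vol.write(0, "v2")
s1 = vol.snapshot()
vol.write(0, "v3")
print(vol.read_snapshot(s0, 0), vol.read_snapshot(s1, 0), vol.blocks[0])  # v1 v2 v3
```

In a real CDM product the deltas would live on disk and carry the metadata needed for replication to other sites, but the principle of sharing unchanged blocks across many virtual copies is the same.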

Finally, the major purposes of long-term archival storage are cost reduction and data retention. In many companies, inactive data represents the bulk of data stored within the data center. Reducing the monthly cost per gigabyte and making sure you can retrieve the data when you need it are therefore the primary goals of archival storage.

Software Defined Choices

What makes SDS unique is that the same storage hardware can support a wide variety of software and use cases. For example, a product like the Cisco UCS 3260 could be used as a high-performance block storage system by combining flash media with a block-based storage or hyperconverged software solution like Cisco’s own HyperFlex. Populating it with a mixture of flash media and hard disk drives might make it better suited to a secondary storage or copy data management solution similar to Rubrik or Cohesity. Finally, using its available drive bays exclusively for high-capacity, inexpensive hard disk drives might make it ideal for a long-term archive software application like SwiftStack.

One important thing to keep in mind is that each product has an architecture and a development history, and with that architecture and history comes a particular set of strengths. For example, consider two products capable of block, file, and object storage, each of which started as one of those three and eventually added the others. You would expect the product that started out as a block device to have strengths in block storage, and the product that started as an object store to be better at object storage than the others. This is not always the case, of course, but understanding a product’s history does help when evaluating it.

What you are trying to avoid when evaluating a product that started as one thing and added another is the “bolt-on” feature: a capability that was bolted on rather than built into the core design. For many examples of this, consider what happened to the virtual tape library (VTL) market when Data Domain started marketing its deduplication appliance. Suddenly everyone needed deduplication, and many companies responded by bolting it onto an existing VTL. Most products that did this did not survive. Interestingly enough, Data Domain saw that it was missing VTL functionality and sought to add that feature to its existing product. Unfortunately, it too was seen as a bolt-on and consequently represents an extremely small portion of Dell EMC’s revenue stream for that product. This is a perfect example of a perfectly good product adding a feature just to check a box and succumbing to the bolt-on phenomenon.

Some products today didn’t start as one type of product and add on others; they were designed from the start as “jack of all trades” products that do multiple things out of the box. For example, it’s not uncommon to find an HCI product that does server virtualization, block storage, NAS storage, and object storage, all in one product. While it is certainly possible for a product that supports this much functionality to succeed, it is also perfectly reasonable to think that a product that offers just one or two of these interfaces might be better at them than a product that offers all of them. It’s simply a matter of depth.

Another thing to consider, specifically when looking at the Cisco UCS product, is whether the software-defined product is available for purchase as a configuration option or is simply on the approved list. A product available as a configuration option can be selected at the time of purchase, meaning it will be preinstalled on the hardware when you receive it. A product on the approved list is certified to work on UCS, but you will need to install it yourself once you receive the unit. Again, this is not to say that a product that is simply on the approved list won’t work. But it’s also perfectly understandable to assume that a product available as a configuration option has been tested by Cisco more than other products.

Picking a Product

The most important thing you must do before selecting a product for any purpose is to understand why you need that product. Understand and document your critical success factors. For example, you might decide that the product must support both primary data and archival storage, and it must support 90 days of snapshots without impacting the performance of the system. Any product that doesn’t meet these critical success factors should not make it onto the shortlist of products that you will evaluate further.

Once you’ve created a shortlist of a handful of vendors that a cursory examination showed might be successful in your environment, you need to take a closer look. This is where your second list of important, but not critical, success factors comes into play. For example, in addition to the critical requirements above, you might decide that it would be helpful if a product could serve both NFS and SMB. You can use these factors to narrow the shortlist even further.
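As a rough illustration of this two-pass filter (the product names, factors, and feature sets below are entirely hypothetical, not recommendations), the logic can be sketched in a few lines of Python: first drop anything that misses a critical success factor, then rank what remains by how many of the nice-to-have factors it covers.

```python
# Illustrative two-pass shortlist filter; all names and factors are hypothetical.

critical_factors = {"primary_and_archive", "90_day_snapshots_no_perf_impact"}
nice_to_have = {"nfs", "smb", "object_api"}

candidates = {
    "ProductA": {"primary_and_archive", "90_day_snapshots_no_perf_impact", "nfs", "smb"},
    "ProductB": {"primary_and_archive", "nfs", "smb", "object_api"},
    "ProductC": {"primary_and_archive", "90_day_snapshots_no_perf_impact", "object_api"},
}

# Pass 1: drop anything that misses a critical success factor.
shortlist = {name: feats for name, feats in candidates.items()
             if critical_factors <= feats}

# Pass 2: rank the survivors by how many nice-to-have factors they cover.
ranked = sorted(shortlist, key=lambda name: len(nice_to_have & shortlist[name]),
                reverse=True)

print(ranked)  # ['ProductA', 'ProductC'] -- ProductB failed a critical factor
```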

Once you’ve made it through the first two phases of product selection, you should have a very short list of products that might be successful in your environment. This list should contain two or three vendors: you really need at least two, and three is fine, but more than that will limit how much testing you can do of each product before purchasing. The great thing about a software-defined architecture is that installing an additional product to test is simply a matter of downloading it and clicking a button or running a command; it’s much easier than testing multiple hardware products that each require separate hardware.

The proof of concept (POC) phase can be done in one of two ways. Many companies have a product that stands out above the rest at this point, a product they probably think they’re going to buy. One method is to perform a proof of concept on that product alone and purchase it if it meets the critical success factors. Another method is to do a proof of concept on at least two vendors and pick the one that performs better during testing. This method is more expensive, obviously, but it sometimes reveals things about one of the products that simply wouldn’t have been revealed otherwise. It also helps keep both vendors on their toes. If you’re doing a POC on only one product, that company might get a little cocky and think it doesn’t have to try very hard.

Once you’ve identified all of the critical success factors, created a shortlist of vendors, and tested one or more of these vendors to verify that they can do the things that they said they could do, it’s time to buy something. You should definitely not buy something before this point, and even when you do buy at this point, any purchases should still be contingent upon a successful rollout. Too many products have stalled or completely failed once implementation began. So make sure you include that in your negotiations with the vendor.

StorageSwiss Take

Building a software-defined architecture around the Cisco UCS 3260 Storage Server allows you to use a variety of products. It also allows you to test each product in your environment on the hardware you’re going to run it on. If you’ve clearly stated your requirements and then thoroughly tested that the product meets them, the implementation should be very successful. But the beautiful thing about software-defined architectures is that in the unlikely event that you follow this process and still have a problem, you can delete the product, then download and install something else. It’s the best thing about software-defined anything.

Sponsored by SwiftStack

W. Curtis Preston (aka Mr. Backup) is an expert in backup and recovery systems, a space he has been working in since 1993. He has written three books on the subject: Backup & Recovery, Using SANs and NAS, and Unix Backup & Recovery. Mr. Preston is a writer and has spoken at hundreds of seminars and conferences around the world. Preston’s mission is to arm today’s IT managers with truly unbiased information about today’s storage industry and its products.
