Cut Costs, Increase Quality and Accelerate Delivery of New Applications – IBM InfoSphere Virtual Data Pipeline

Application development is quickly shifting from the old waterfall approach, where applications were delivered all at once, perhaps once a year, to a continuous development method where applications are delivered more frequently with smaller, incremental changes between releases. The endgame, known as DevOps, is a continuous cycle where applications move from development to test to production on an almost daily basis.

The DevOps process, even more than the waterfall approach, needs continuous access to the latest production data so that development and testing run against current data and applications transition successfully to production. This need for continuous access creates a storage problem. To meet it, organizations often make dozens and dozens of copies of data, which take administrative time to create while consuming valuable storage capacity.

The manual nature of the process also means that IT administrators must be involved in refreshing data sets, which can cause application downtime and introduce human error. Creating these copies can take days or weeks. Additionally, because multiple testers need access to the same data, each tester must be provided with their own copy, consuming even more capacity.

IBM InfoSphere Virtual Data Pipeline (VDP) promises to solve the provisioning and refresh problem by automating copy creation and refresh while reducing the impact on storage capacity. VDP creates an environment where developers and testers can self-service their data provisioning and refresh needs. Within minutes, VDP creates virtual copies of data that initially consume no additional storage space, and it provides self-service access to those copies as well as automated refreshing of them.

VDP is part of IBM’s complete Test Data Management family. It creates the virtual copies of data that the other tools leverage. For example, after VDP captures production data, and before making it available to testers via virtual copies, InfoSphere Optim Data Privacy can anonymize sensitive information. IBM InfoSphere Optim Test Data Management can tap into VDP’s gold copy to create integrated subsets for leaner testing where the full copy of data isn’t required.

Details of IBM InfoSphere Virtual Data Pipeline

VDP starts by creating a gold copy of the data. The gold copy is continuously and incrementally updated so that it always reflects the most recent version of production data. From that gold copy, VDP creates virtual copies. A virtual copy can also be "held" and not updated, preserving a point in time for historical reference, backup, or disaster recovery.
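VDP's internals aren't public, but copy-on-write cloning, the general technique behind near-instant, space-efficient virtual copies, can be sketched in a few lines of Python. The `GoldCopy` and `VirtualCopy` classes below are hypothetical illustrations of the concept, not VDP's actual implementation or API:

```python
# Conceptual copy-on-write sketch (illustration only; not VDP's real design).

class GoldCopy:
    """The continuously, incrementally updated master copy."""
    def __init__(self):
        self.blocks = {}          # block id -> data

    def ingest(self, block_id, data):
        # Incremental update: only changed blocks arrive from production.
        self.blocks[block_id] = data

class VirtualCopy:
    """A near-instant clone that stores only the blocks its user changes."""
    def __init__(self, gold):
        self.gold = gold          # shared, read-only reference to the gold copy
        self.overlay = {}         # private copy-on-write layer

    def read(self, block_id):
        # A local change wins; otherwise fall through to the gold copy.
        return self.overlay.get(block_id, self.gold.blocks.get(block_id))

    def write(self, block_id, data):
        # Only now is any extra capacity consumed.
        self.overlay[block_id] = data

    def rollback(self):
        # Discard local changes to return to a clean slate.
        self.overlay.clear()
```

Creating a `VirtualCopy` copies nothing, which is why such clones appear in seconds and consume capacity only as they diverge from the gold copy.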

The organization then uses VDP to automatically present real-time copies of production data to testers and developers. These virtual copies can be refreshed automatically as needed and as production data changes. Developers and testers, of course, modify these virtual copies as part of their work, so VDP also provides self-service rollbacks, allowing DevOps teams to start from a clean slate after a round of development or testing is complete.
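Continuing the hypothetical sketch above, a self-service provision, refresh, and rollback cycle might look like this:

```python
gold = GoldCopy()
gold.ingest("blk-0", b"orders v1")        # initial capture from production

dev_copy = VirtualCopy(gold)              # near-instant provision, ~zero space
assert dev_copy.read("blk-0") == b"orders v1"

dev_copy.write("blk-0", b"test data")     # tester mutates the virtual copy
gold.ingest("blk-0", b"orders v2")        # production changes flow into gold

dev_copy.rollback()                       # clean slate after the test round
assert dev_copy.read("blk-0") == b"orders v2"   # refresh is already in place
```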

Savings Abound

VDP delivers ROI in several ways. First, the capacity savings alone are significant. For example, without VDP, a 1TB Oracle database may be needed by multiple consumers, such as the QA team, the integration team, the performance test team, the backup process, and the disaster recovery process, leading to 10TB+ of capacity consumption for just a 1TB database. Multiply this requirement across all the applications in the environment and the situation quickly becomes untenable. VDP changes that by making one copy and virtually delivering it to each of those consumers. Capacity is consumed only as changes are made to the copies, and that consumption is minimal; instead of requiring 10TB+ of capacity, the organization needs only about 1.1TB.
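To make that arithmetic concrete, here is the back-of-envelope math behind those figures; the 1% per-copy change rate is an assumption chosen to match the article's ~1.1TB estimate:

```python
db_tb = 1.0                 # size of the production database
copies = 10                 # QA, integration, performance, backup, DR, ...
change_per_copy = 0.01      # assumed divergence per virtual copy (1%)

full_clones = copies * db_tb                          # traditional: 10.0 TB
virtual = db_tb + copies * (db_tb * change_per_copy)  # gold copy + overlays: 1.1 TB

print(f"Full clones:    {full_clones:.1f} TB")
print(f"Virtual copies: {virtual:.1f} TB "
      f"({virtual / full_clones:.0%} of the full-clone footprint)")
```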

VDP also provides time savings. What used to take days or weeks now takes minutes and happens automatically, and data owners can service their own requests, freeing IT to focus on infrastructure matters.

StorageSwiss Take

Many organizations struggle to fully realize the benefits of DevOps because of storage-related roadblocks. The cost of capacity consumption, as well as the time and effort to provision and refresh these environments, forestalls the best of intentions. IBM's VDP offers organizations the opportunity to break through those storage roadblocks and not only enable workloads like DevOps, artificial intelligence, and analytics, but also take a significant step forward in the development of each.


George Crump is the Chief Marketing Officer at VergeIO, the leader in Ultraconverged Infrastructure. Prior to VergeIO he was Chief Product Strategist at StorONE. Before assuming roles with innovative technology vendors, George spent almost 14 years as the founder and lead analyst at Storage Switzerland. In his spare time, he continues to write blogs on Storage Switzerland to educate IT professionals on all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought-after public speaker. With over 30 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, Virtualization, Cloud, and Enterprise Flash. Before founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.
