The idea behind software-defined solutions is that hardware has become something of a commodity, and that software is now the defining factor in deploying a proper solution. Open source examples include Ceph (developed by Inktank and acquired by Red Hat), Hadoop, OpenStack, and many more. Many software vendors have decided to take these off-the-shelf solutions, customize them, and rebrand them as their own, while others have built their own proprietary products.
While software-defined products promise us the world, they have recently begun to show their shortcomings, almost all of which center on the hardware. What does that mean? Well, we have been misled to believe that all hardware is created equal; that everything from hard drives to network cards is the same. The fact is, it is not. The problems experienced with a specific line of Western Digital drives will differ from those experienced with an equivalent line of Seagate drives. The limitations and issues of an LSI Host Bus Adapter (HBA) will differ from those of an Emulex one, and that is before you even account for network offloading.
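Before deploying on a given node, it helps to know exactly which drives, controllers, and NIC offloads you are dealing with. The following is a minimal sketch using common Linux utilities; the device name eth0 is only an example, and each command falls back to a message if the tool or device is unavailable on your system.

```shell
#!/bin/sh
# Quick hardware inventory for a prospective software-defined deployment.
# All device names (eth0) are examples -- adjust for your environment.

# Block devices with their model strings: drive lines from different
# vendors (e.g. Western Digital vs. Seagate) behave differently.
lsblk -d -o NAME,MODEL,ROTA 2>/dev/null || echo "lsblk not available"

# Storage controllers: LSI and Emulex HBAs have different quirks.
lspci 2>/dev/null | grep -iE 'sas|raid|fibre' || echo "no HBA info found"

# NIC offload settings (TSO, checksum offload, etc.) vary by card and driver.
ethtool -k eth0 2>/dev/null | grep -iE 'segmentation|checksum' \
  || echo "ethtool/eth0 not available"
```

Capturing this output per node gives you a baseline to compare against the hardware your vendor has actually qualified.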
This lack of awareness becomes problematic and in many cases degrades overall performance, even more so when you begin to scale up and/or out. How do you maintain a software-defined solution when you need to cater to different types of hardware and their distinct problems? It is not an easy process, but the industry seems to have begun to catch on. While I am prohibited from naming names, certain software-defined solution vendors are redesigning their software to be more hardware aware.
Now what does that mean for you? Do your due diligence. Research any and all companies to minimize the problems you encounter when deploying a new software-defined solution. It also would not hurt to ask the vendor which hardware configurations they have qualified with their software product(s).
UPDATE: 15 December 2015