
Wednesday, July 16, 2014

Finding the Server Needle in the Infrastructure Haystack – Addressing Infrastructure Complexity



When it comes to managing a large, complex data center and the server infrastructure within it, many CIOs know they face severe challenges.  Some might even characterize their predicament as ordered chaos!  Most know where their business-critical servers are and what they do; these might comprise 10 to 20% of the overall IT infrastructure.  It is the other 80 to 90% where few know where the systems are, what they do, and, more importantly, how they affect factors like capacity management (IT flexibility), response and recovery time objectives (DR), labor efficiency (costs), and more.  Wouldn’t it be nice if there were a metric that helped show where the major areas of complexity reside, how to deal with them, and how they compare with other IT environments of similar size, industry, and makeup?

Good news!  There is now.  Using IT infrastructure data that the IBM Systems and Technology Group’s Lab Services team has amassed from more than 1,000 completed IT optimization assessments (ITOs) over the last 10 years, one can calculate a metric we call the “Infrastructure Complexity Index” (ICI for short).  The ICI can be (and has been!) used to find specific areas of IT complexity and to educate IT personnel on the factors involved in addressing them.

The ICI metric is composed of a number of components, each of which can have a positive or negative impact on the level of IT infrastructure complexity it reflects.  By recognizing the relative impact of each component on the ICI, a governing IT management team can advise on, and make course corrections toward, simplification, standardization, and overall optimization of the IT environment.

The major components affecting the ICI metric include the following:

  • Server hardware vendor variation (count of unique hardware vendors in use)
  • Server hardware model variation (count of unique hardware models in use)
  • Physical servers (count of unique serial-numbered servers in use)
  • Server operating system vendor variation (count of unique O.S. vendors in use)
  • Server operating system version/release variation (count of unique O.S. versions/releases in use)
  • Logical servers (count of unique logical servers, each running an O.S., in use)

Using these components together, one can calculate the ICI(1) for a given IT environment and share it with others in the organization as part of an overall ITO analysis, along with recommendations on how to decrease IT complexity and the benefits of doing so.  In addition, the ICI can be used to benchmark your IT environment against another of similar size and type.
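The actual formula is IBM intellectual property (see the footnote below), so the following is only an illustrative sketch of how the six counts listed above might be combined into a single index.  The weights, the scaling of the raw server counts, and the function name are hypothetical assumptions for illustration, not the real ICI calculation.

```python
# Illustrative only: the real ICI formula is IBM intellectual property.
# The weights and normalization below are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class ServerInventory:
    hw_vendors: int        # count of unique hardware vendors in use
    hw_models: int         # count of unique hardware models in use
    physical_servers: int  # count of unique serial-numbered servers in use
    os_vendors: int        # count of unique O.S. vendors in use
    os_versions: int       # count of unique O.S. versions/releases in use
    logical_servers: int   # count of unique logical servers running an O.S.


def illustrative_complexity_index(inv: ServerInventory) -> float:
    """Combine the six counts into a single index (hypothetical weighting).

    Variation counts (vendors, models, O.S. levels) are weighted more
    heavily than raw server counts, on the assumption that heterogeneity
    drives complexity more than scale does.
    """
    weighted = (
        3.0 * inv.hw_vendors
        + 2.0 * inv.hw_models
        + 1.0 * inv.physical_servers / 100   # scale counts down so variation dominates
        + 3.0 * inv.os_vendors
        + 2.0 * inv.os_versions
        + 1.0 * inv.logical_servers / 100
    )
    return round(weighted, 1)


# Example: a hypothetical mid-sized environment
print(illustrative_complexity_index(
    ServerInventory(hw_vendors=4, hw_models=22, physical_servers=850,
                    os_vendors=3, os_versions=17, logical_servers=2400)))
```

However the real formula weights these inputs, the useful property for benchmarking is that two environments measured the same way can be compared directly, and that reducing any of the variation counts moves the index in the right direction.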

With the proper data in hand, one can even decompose the ICI by operating system (AIX vs Windows vs Linux), vendor (Oracle/Sun vs IBM/AIX vs HP/UX), or platform type (Unix/RISC vs Linux/x86), providing further insight into where deeper areas of infrastructure complexity might be affecting the previously mentioned areas of IT flexibility, recoverability, and cost.
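As a sketch of what such a decomposition might look like in practice, the snippet below groups a (hypothetical) server inventory by platform type and tallies the per-group variation counts that feed the index.  The record fields and sample values are assumptions for illustration; they are not taken from any actual ITO data set.

```python
from collections import defaultdict

# Hypothetical inventory records; fields and values are illustrative assumptions.
servers = [
    {"platform": "Unix/RISC", "hw_vendor": "IBM",        "model": "Power 740",  "os": "AIX 7.1"},
    {"platform": "Unix/RISC", "hw_vendor": "Oracle/Sun", "model": "T4-1",       "os": "Solaris 10"},
    {"platform": "Linux/x86", "hw_vendor": "HP",         "model": "DL380 G7",   "os": "RHEL 6.5"},
    {"platform": "Linux/x86", "hw_vendor": "IBM",        "model": "x3650 M4",   "os": "RHEL 6.5"},
]

# Group by platform type and count unique vendors, models, and O.S. levels per group.
by_platform = defaultdict(lambda: {"hw_vendors": set(), "models": set(),
                                   "os_levels": set(), "servers": 0})
for s in servers:
    group = by_platform[s["platform"]]
    group["hw_vendors"].add(s["hw_vendor"])
    group["models"].add(s["model"])
    group["os_levels"].add(s["os"])
    group["servers"] += 1

for platform, group in by_platform.items():
    print(f'{platform}: {len(group["hw_vendors"])} vendors, '
          f'{len(group["models"])} models, {len(group["os_levels"])} O.S. levels '
          f'across {group["servers"]} servers')
```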

Although the ICI currently focuses on server infrastructure complexity, work is ongoing to expand its use to other areas of IT infrastructure complexity, including storage, network, and even software stacks.  For further clarification on the ICI and how it could apply to your situation, please contact the author (John F. Ryan, jfryan1@us.ibm.com).

Thanks to John Ryan for his guest contribution!


(1)    The actual formula used to calculate the ICI is the intellectual property of IBM and is currently under patent review.  Please contact the author for specifics on how to introduce the ICI to your situation.
