December 17, 2015

The Elephant in the Room: Quantifying the Value of Data Center Flexibility

Among the most important inputs to a data center TCO analysis is a capacity demand forecast, ideally measured in kilowatts. Yet accurately forecasting data center capacity requirements beyond a 12-24 month horizon is virtually impossible.

So what happens when you’re tasked with comparing the 10-year TCO of two data center alternatives and face the virtually impossible task of accurately forecasting capacity demands?

Typically, you substitute a similar task that is possible: inaccurately forecasting demand. This is also known as choosing a best guess and “just going with it,” and it often takes a form like “500 kW initially, growing at 50 kW per year for the next 10 years.”

When you “just go with” a single discrete capacity forecast as an input to a data center TCO comparison, you lose the ability to quantify the value of flexibility, which is a function of probability, not certainty. As a result, single-forecast TCO analysis becomes a useless tool for comparing alternatives that differ significantly in their flexibility features.

Why is this important?

Because flexibility features in a data center solution can very often impact actual, empirical TCO differentials more than every other variable combined. And yes, “every other variable” means rental price, power rate, PUE, operating expenses, capital expenses, WACC, and so on. Flexibility is the elephant in the room.

Let’s look at an example:

  • Your “just go with it” data center capacity forecast is 500 kilowatts at the start of a 10-year lease term, ramping up by 500 kilowatts to 1 megawatt at the beginning of year 6 (an unrealistic growth path, but it keeps the analysis simple).
  • Provider A’s offering comes in inflexible 1 MW blocks. To give Provider A the benefit of the doubt (and not confound the comparison), let’s assume it is willing to entirely forgive rent on the first 500 kW of the 1 MW block for the first 5 years of the 10-year term (obviously unrealistic).
  • Provider B’s offering is flexible. It allows you to deploy and pay for 500 kW initially and then flex power and cooling up on demand in small increments.

Assuming an arbitrary cost of $2,500 per kilowatt of capacity per year, Provider A’s solution costs $6.25mm for the first 5 years and $12.5mm for the second 5 years, for a total of $18.75mm under all load growth scenarios. Through the lens of your “just go with it” forecast alone, Provider B’s solution has an identical cost: $18.75mm over the 10-year term.

But what happens if the second 500 kW block never materializes 6 years from now? Perhaps you outperformed your virtualization targets, outsourced an application to the cloud, or divested a subsidiary. Provider B’s 10-year cost drops to $12.5mm, a savings of 33.3% relative to Provider A!
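To make the arithmetic concrete, here is a minimal Python sketch of the comparison; the $2,500/kW/year rate, the 10-year term, and the year-6 ramp are the illustrative assumptions from the example above, not real pricing:

    # Illustrative sketch of the example above; rate, term, and ramp
    # are the example's assumptions, not real pricing.
    RATE = 2500  # $ per kW of capacity per year (assumed)

    def total_cost(kw_by_year):
        # Sum the annual rent over the term for a given capacity path.
        return sum(kw * RATE for kw in kw_by_year)

    # Provider A: inflexible 1 MW block (first 500 kW forgiven in years 1-5).
    provider_a = [500] * 5 + [1000] * 5

    # Provider B: pays for 500 kW, flexing to 1 MW only if growth materializes.
    provider_b_growth = [500] * 5 + [1000] * 5
    provider_b_flat = [500] * 10

    print(total_cost(provider_a))         # 18750000
    print(total_cost(provider_b_growth))  # 18750000
    print(total_cost(provider_b_flat))    # 12500000 (33.3% below Provider A)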

Technically, you should probability-weight the two models. Say there is a 50% likelihood that the load won’t materialize. In that case, there is a 50% likelihood of Provider B’s solution costing $18.75mm and a 50% likelihood of it costing $12.5mm, yielding a risk-weighted cost of $15.625mm, or a 16.7% savings over Provider A.

All else equal, the obvious choice on TCO is Provider B. And the choice would still be obvious if Provider B’s per-unit rental price were 10% higher than Provider A’s. But it wouldn’t be obvious unless the above approach were taken. Said differently, investing the time to build multiple probability-weighted forecasts for use in comparative TCO analysis is absolutely critical to a sound comparison.
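Here is the same sketch extended to the probability weighting and the 10% price-premium check described above; the 50% likelihood is the example’s assumption:

    # Probability-weight Provider B's two outcomes (50% is the example's assumption).
    P_NO_GROWTH = 0.5

    cost_a = 18_750_000  # Provider A costs this under every scenario
    cost_b_growth = 18_750_000
    cost_b_flat = 12_500_000

    weighted_b = P_NO_GROWTH * cost_b_flat + (1 - P_NO_GROWTH) * cost_b_growth
    print(weighted_b)               # 15625000.0
    print(1 - weighted_b / cost_a)  # 0.1666..., i.e. a ~16.7% savings

    # Sensitivity check: Provider B still wins at a 10% per-unit price premium.
    print(weighted_b * 1.10 < cost_a)  # True (17,187,500 < 18,750,000)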

The More Flexible, Less Rigid Solution

The above example is extremely simple, but in reality the same dynamics present themselves over and over again across the different types of flexibility, or conversely, the different types of rigidity, that characterize competing data center solutions.

Some rigidities are explicit, such as Provider A’s block structure above, but others are implicit: solutions that can’t accommodate physical footprint growth, solutions that can’t accommodate hardware form factor shifts (e.g., many containers), rooms that can’t accommodate changing concentrations of power density, and rooms that can’t accommodate technological change (e.g., water-to-the-rack cooling).

The example above could have just as easily been structured with the following two alternatives:

  • Solution A is to place a modular container in the back parking lot of your office building, sized to hold twenty (20) 18” racks at 15 kW each (300 kW total).
  • Solution B is a wholesale colocation solution: a 2,000-square-foot, 300 kW room that can accommodate any future hardware form factor and up to 60 racks.

Just as illustrated above, assuming identical pricing, these two solutions would look identical with respect to TCO if you simply used a baseline “just go with it” capacity forecast of 20 standard-sized racks at 15 kW each over the relevant period.

But what if, upon refresh, you need to replace these racks with free-standing gear that won’t fit in the container (e.g., large IBM or EMC frames)? Or, if the individual units do fit, what if their watt-to-footprint ratio is lower, requiring more units for the equivalent power capacity than the container can hold?
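A quick sketch of that fit check; the per-unit wattages and the 20-slot container are hypothetical numbers chosen only to illustrate how a lower watt-to-footprint ratio strands capacity:

    import math

    # Hypothetical refresh-fit check; the wattages and 20-slot container are
    # illustrative numbers, not vendor specifications.
    TARGET_KW = 300       # capacity still needed after the refresh
    CONTAINER_SLOTS = 20  # rack positions available in the container

    old_units = math.ceil(TARGET_KW / 15)  # original racks at 15 kW each -> 20
    new_units = math.ceil(TARGET_KW / 10)  # replacement gear at 10 kW each -> 30

    print(old_units <= CONTAINER_SLOTS)  # True: the original layout fits
    print(new_units <= CONTAINER_SLOTS)  # False: 30 units needed, capacity stranded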

All of these scenarios would result in premature abandonment of purchased capacity, leading to the same conclusion as the analysis detailed above: the more flexible, less rigid solution will always have the better risk-weighted TCO.

One of our prime business goals at Sentinel over the last decade has been to remove as many of these rigidities as possible, so that your data center facility enables your business’s evolution rather than prohibiting it.
