If you share your secret sauce recipe with a lot of people, is it still secret sauce?
I recently co-hosted a televised Data Center tour alongside Cisco IT distinguished engineer Jon Woolwine. A camera crew shadowed us as we walked the company’s Data Center in Allen, Texas, and discussed the site’s infrastructure and how various technologies and Fast IT processes are enabling Cisco’s business.
We provided the live tour as part of Cisco IT’s EMEAR Data Center Day. Cisco IT has hosted this event three years in a row, although this was the first time it spanned continents. About 50 customers gathered at Cisco’s Bedfont Lakes office in the UK, meeting several Cisco IT executives and technical experts and ultimately watching and conversing with Jon and me as we strolled the Data Center.
During the tour, I highlighted several of the Data Center’s physical infrastructure elements – a cooling design that makes use of how temperature affects airflow, a rotary UPS system that eliminates the need for batteries, and an RFID system that saves countless hours tracking hardware, to name a few. Jon meanwhile talked about how we’re using the Data Center differently than when it opened just five years ago, with automation, workload mobility, and Cisco’s Application Centric Infrastructure providing a slew of operational benefits including greater speed, more flexibility, and reduced costs.
Those physical infrastructure features are familiar topics. I discuss them with hundreds of people who visit that Allen Data Center each year, many of whom are preparing to design new server environments, and I highlighted them in the book I wrote about innovative Data Centers a few years ago.
As I told the Data Center Day audience, though, the foundation of this successful Data Center – its not-so-secret-anymore sauce, if you will – isn’t any single piece of infrastructure. It’s the holistic design of the site.
When Cisco approached the design of this Data Center’s many technological elements, we made sure they were developed in coordination with one another. Our choice of hardware influenced our structured cabling, which informed our decision to forgo a raised floor, which in turn shaped our cooling delivery, for instance.
Our decision around which type of UPS to deploy is another good example. We knew the tradeoffs: a flywheel-driven rotary UPS avoids the cost and environmental impact of replacing batteries, but can support a major Data Center’s power demand for only seconds. That’s fine when an accompanying generator quickly engages as a long-term source of backup power, but if the generator doesn’t engage as expected, a rotary UPS offers too little time to avoid an outage, or even to gracefully shut down IT hardware before power drops and servers crash.
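To make that “seconds, not minutes” tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (rotor inertia, speed range, critical load, generator start time) is an illustrative assumption rather than a figure from the Allen facility; the point is simply that a flywheel’s kinetic energy only needs to outlast the generator start.

```python
import math

# --- Illustrative assumptions only; none of these are Cisco Allen figures ---
ROTOR_INERTIA_KG_M2 = 200.0     # assumed flywheel rotor moment of inertia
TOP_SPEED_RPM = 3300.0          # assumed rotor speed at full charge
MIN_USABLE_SPEED_RPM = 1980.0   # assumed speed below which output can't be held
CRITICAL_LOAD_KW = 500.0        # assumed critical load on this UPS module
GENERATOR_START_S = 10.0        # assumed time for the standby generator to take load

def rpm_to_rad_per_s(rpm: float) -> float:
    return rpm * 2.0 * math.pi / 60.0

def ride_through_seconds(inertia, top_rpm, min_rpm, load_kw):
    """Backup time from kinetic energy E = 0.5 * I * w^2, counting only the
    energy released while the rotor slows from top speed to its minimum
    usable speed."""
    usable_joules = 0.5 * inertia * (
        rpm_to_rad_per_s(top_rpm) ** 2 - rpm_to_rad_per_s(min_rpm) ** 2
    )
    return usable_joules / (load_kw * 1000.0)

ride = ride_through_seconds(ROTOR_INERTIA_KG_M2, TOP_SPEED_RPM,
                            MIN_USABLE_SPEED_RPM, CRITICAL_LOAD_KW)
print(f"Ride-through at {CRITICAL_LOAD_KW:.0f} kW: {ride:.1f} s")
print(f"Generator start target: {GENERATOR_START_S:.0f} s")
print("Flywheel covers the generator start:", ride > GENERATOR_START_S)
```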
Had we considered the rotary UPS solution on its own, that limited ride-through time would have caused us to dismiss it. Looking at the bigger picture, though, we knew we intended the Data Center to have redundant electrical feeds, an active-active connection to another Data Center in the same metro area, and a high degree of server virtualization by way of Cisco’s Unified Computing System (UCS). Taken together, that’s a lot of redundancy, which made us comfortable employing the rotary UPS and reaping its benefits.
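The redundancy argument can be sketched the same way: treat each layer (second utility feed, flywheel ride-through, standby generator, metro failover of virtualized workloads) as an independent backstop and multiply the assumed chances that each one fails when called upon. The probabilities below are placeholders, not Cisco availability data, but they show why a short ride-through is acceptable inside a layered design.

```python
# Assumed, illustrative probabilities that each layer fails to cover a
# utility power event when it is needed (independence is also an assumption).
backstops = [
    ("second utility feed", 0.05),
    ("rotary UPS ride-through", 0.02),
    ("standby generator", 0.03),
    ("active-active metro failover (UCS / virtualized workloads)", 0.01),
]

residual_risk = 1.0
for name, p_fail in backstops:
    residual_risk *= p_fail
    print(f"after {name:<60s} residual risk = {residual_risk:.1e}")

# The combined chance that every layer fails at once is what drives the
# availability of the service, not the ride-through time of any one box.
```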
If you ever find yourself in a position to contribute to the design of your company’s Data Center, take a holistic approach. You’ll discover new design options and opportunities to get the most out of your infrastructure.
It is a good point, not only for Data Centers but also for enterprise networks… indeed, I have this problem with non-technical decision makers (e.g. sales or top-level managers): explaining that before diving into a project, they should think about a proper design with integrated components.
Thanks for sharing.
You and Jon did a fantastic job! Very well received by the customers 🙂
A great event with good press – and you’ve made some very salient points regarding the design, creation and management of data centers.