Facebook ATN-3

In 2014, Facebook was in the middle of launching the third data center building at its Altoona, Iowa campus when it discovered that its MEP engineering firm could not meet the schedule milestones needed to complete construction on time. Facebook came to our team to ask about performing the electrical scope of work on a design-build basis. At over $70 million, it was the largest design-build project we had ever done, and we completed the design on the fly as we constructed the facility. When we reached the finish line, we had an unbelievable three items on the punch list. That is "3" - yes, a single digit!


QuikTrip IOCC

QuikTrip Corporation is a privately held company headquartered in Tulsa, Oklahoma. Founded in 1958, QuikTrip has grown into a more-than-$10 billion company with 660+ stores in eleven states, revenues that place it high on the Forbes list of largest privately held companies. QuikTrip's strategy is to be the dominant convenience/gasoline retailer in each of its markets, and to reach that level not through sheer numbers of stores but through key, high-volume locations. With nearly 13,000 employees, QuikTrip has ranked high on Fortune magazine's list of Best Companies to Work For in each of the last ten years. Maintaining that reputation requires a relentless focus on customer service and a quality infrastructure support team.

Growth in stores and support personnel required QuikTrip to transform its Infrastructure Operations Center support organization, using the latest technology and creating a new command center that allows team members to collaborate better. The new IOCC was designed to incorporate new video display array technology, acoustic systems that minimize distractions, and ergonomic workstations that promote productivity and reduce stress.


Baidu Cloud Campus


Baidu is the second largest Internet company in China, with over 300 million subscribers. As it grew its infrastructure to support those growth rates, it needed to migrate away from multiple small co-location facilities into a flexible, efficient, and robust central campus. Flexibility was achieved by designing a custom IT warehouse where new servers and storage hardware can be deployed in modular solutions to minimize deployment time, reduce packaging waste, and allow for more frequent technology refresh cycles. The campus consists of four major data center modules, each of which can support up to 50,000 1U servers with a Tier 3+ 10MW power plant; a staging and storage warehouse facility; an office complex; and a residential unit for employee living quarters.
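As a rough sanity check on those figures, the sketch below works out what a 10MW module implies per server and per rack. The 10MW and 50,000-server numbers come from the design above; the 40-servers-per-rack density is purely an assumption for illustration, and the 10MW ceiling covers everything the plant feeds, so the usable IT budget per server is somewhat lower.

```python
# Back-of-the-envelope power budget for one data center module.
# 10 MW and 50,000 servers are from the design; the rack density
# of 40 x 1U servers per rack is an assumption for illustration.
module_power_w = 10_000_000    # 10 MW power plant per module
servers_per_module = 50_000    # up to 50,000 1U servers
servers_per_rack = 40          # assumed density (not from the spec)

watts_per_server = module_power_w / servers_per_module    # 200 W
racks_per_module = servers_per_module / servers_per_rack  # 1,250
kw_per_rack = watts_per_server * servers_per_rack / 1000  # 8.0 kW

print(f"{watts_per_server:.0f} W/server, {racks_per_module:.0f} racks, "
      f"{kw_per_rack:.1f} kW per rack")
```

At roughly 200W gross per server and about 8kW per assumed rack, the design leaves room within each module's budget for cooling and distribution losses.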


GM Development Re-engineering

Our work with GM initially resulted in a global IT platform standard; eventually this was expanded to include a new global engineering platform. New product development was taking 24 months from concept to production, while the competition was developing in almost half that time, so GM embarked on an initiative to reduce this time from 24 months to 12 months. We provided two elements of this effort. The first was the design of a high-performance computing platform that allowed extensive computer modeling of a design prior to the prototype phase. This eliminated almost three months of development time from the schedule and saved millions of dollars in prototype development and redevelopment.

The second element was a global engineering platform. With engineering staff located in the three major regions of the world - Asia-Pacific, North America, and Europe - a common engineering technology platform was created in each region, resulting in a 24 x 7 engineering process that followed the sun around the planet. This shaved a significant amount of additional time off the development cycle.


Icicle Lights - Target's Retail Data Warehouse

Part of Target's business success is what any savvy retailer aspires to: shelves filled with high-margin, high-velocity products. That is, they sell a lot of profitable consumer goods. To stay on top of the constantly changing whims of consumers, Target required a new process and technology platform to analyze store sales and react to changes in buying behavior, keeping more of what people wanted on the shelves. When icicle lights were first introduced in Minnesota one winter, the Target buyer had allocated the end of one shelf for the product and negotiated a supply order commensurate with the expected sales. When Minnesota consumers consistently emptied shelves of the newfangled Christmas lighting, Target was able to re-allocate additional shelf space and renegotiate supply with the manufacturer, making thousands of customers happy and reaping increased revenue and earnings. And since 5% of Target's store profits are invested in the communities where its stores are located, those communities win too. The Target Technology Center in Brooklyn Park, MN makes it all happen with state-of-the-art server, networking, and data storage platforms.
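The kind of signal such a platform surfaces can be illustrated with a toy example (purely illustrative; the data, field names, and threshold below are assumptions, not Target's actual models): compare each SKU's weekly sell-through against its forecast and flag items, like the icicle lights, that are emptying shelves faster than the supply plan anticipated.

```python
# Toy sell-through check (illustrative only; the threshold, field
# names, and data are assumptions, not Target's actual system).
from dataclasses import dataclass

@dataclass
class SkuWeek:
    sku: str
    units_sold: int
    units_forecast: int

def flag_fast_movers(weekly, ratio=1.5):
    """Flag SKUs selling at or above `ratio` times their forecast."""
    return [w.sku for w in weekly
            if w.units_forecast > 0
            and w.units_sold / w.units_forecast >= ratio]

weekly = [
    SkuWeek("icicle-lights", units_sold=900, units_forecast=300),
    SkuWeek("garden-hose",   units_sold=120, units_forecast=150),
]
print(flag_fast_movers(weekly))  # ['icicle-lights']
```

Run across every store and every SKU, checks like this are what demand the server, networking, and storage horsepower housed in Brooklyn Park.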



E*TRADE Banking Platform

When E*TRADE decided to get into consumer lending and online banking, it needed both an East Coast presence that looked like a "bank" and the technology to deliver low-latency financial trading and transaction processing. Within an unbelievably aggressive 10-month schedule, the E-ROC (eastern regional operations center) was created. Inspired by the youthful staff who regularly put in 10-12 hour stints in the office, the E-ROC provided a playful environment while impressing visitors, potential investors, and brokers with a bank-like feel.


Dignity Health Data Center, Phoenix AZ

When this regional healthcare organization was facing capacity limits in its existing 15-year-old, 10,000 sq. ft. data center, they did what anyone would do: they started planning a new data center to replace the old one. As a senior member of the outsourcing service provider team operating the data center, I sat in on two days of meetings to discuss the program criteria for the new facility, including site selection, size, and power capacity. The consensus was to build a 20,000 sq. ft. data center with 1.5MW of power capacity, at a cost of approximately $30M.

What was strange was that none of the customer's internal system architects were participating in these meetings. That was odd, because one of the fundamentals of data center planning is a deep understanding of what the technology inside the critical environment looks like. When I finally asked what architecture we were going to be supporting, everyone in the room looked around at each other, hoping someone knew.

It was apparent that no one had thought to ask what technology the data center was supposed to support. I offered to meet with the system architects separately and conduct an afternoon workshop to create a vision of the technology to be supported by the new data center. During the workshop I was pleased to hear that they were migrating to an integrated virtual environment for compute, storage, and network infrastructure - basically creating a utility computing grid that would support rapid application development and deployment without mapping those functions to specific devices or systems. This meant that the hardware would be optimized and highly utilized, without a lot of stranded capacity or wasted space.

Armed with this information, I proceeded to sketch the existing environment in the now-full 10,000 sq. ft. data center. After mapping the existing technology into the existing data center, it became apparent that there was an entire row of racks, and part of another, that were either empty, marked for decommissioning, or would be within a month. We then sketched what the new architecture would look like from a physical and form-factor perspective. The new systems were much more space-efficient than the ones we would be replacing during the migration to the new data center: by starting with the one row of empty racks, we could fit approximately three rows of old systems into one row of the new technology. It was easy to see that we could migrate step by step, three rows into one, with an end result that we occupied only one-third of the entire 10,000 sq. ft. space and had room for growth for another two technology life cycles.

No, the new data center was not built, and the $30M saved was invested in new and better healthcare equipment rather than an unneeded facility. We did have to invest about $3M in new power capacity to accommodate the higher densities, for a net savings of $27M. Not bad for a few days' work.
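For the record, here is the arithmetic behind that conclusion as a quick sketch (all figures come from the story above; the 3:1 compression ratio is the approximate result of the rack-mapping exercise, not a formal survey):

```python
# The whiteboard math behind the decision (figures from the story
# above; the 3:1 compression ratio is from the mapping exercise).
existing_sq_ft = 10_000      # current, full data center
compression = 3              # ~3 rows of old gear fit in 1 new row

footprint_after = existing_sq_ft / compression  # ~3,333 sq ft used
headroom = existing_sq_ft - footprint_after     # ~6,667 sq ft free

new_dc_cost = 30_000_000     # proposed replacement facility
power_upgrade = 3_000_000    # new capacity for higher densities
net_savings = new_dc_cost - power_upgrade       # $27,000,000

print(f"{footprint_after:,.0f} sq ft occupied, {headroom:,.0f} sq ft "
      f"of headroom, ${net_savings:,} net savings")
```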