Best practices to delay costly data center expansions

By Sandra Gittlen, April 19, 2012

Read about the best ways to postpone acquiring an expensive new data center for your organization.

This year marks the 11th anniversary of the 1,200-square-foot data center at the Franklin W. Olin College of Engineering — that means it’s been in use four years longer than CIO and vice president of operations Joanne Kossuth originally expected. The facility needs more capacity and better connectivity, but Kossuth has been forced to put those needs on the back burner because of the state of the economy.

“Demand has certainly increased over the years, pushing the data center to its limits, but the recession has tabled revamp discussions,” she says.
Like many of her peers, including leaders at Citigroup and Marriott International, Kossuth has had to get creative to squeeze more out of servers, storage, and the facility itself. To do so, she’s had to re-examine the life cycles of data and applications, storage array layouts, rack architectures, server utilization, orphaned devices, and more.

Rakesh Kumar, an analyst at Gartner, says he’s been bombarded by inquiries from large organizations looking for ways to avoid the cost of a data center upgrade, expansion, or relocation. “Any data center investment costs at minimum tens of millions, if not hundreds of millions, of dollars. With a typical data center refresh rate of five to 10 years, that’s a lot of money. So companies are looking for alternatives,” he says.

While that outlook might seem gloomy, Kumar finds that many companies can extract an extra two to five years from their data centers by employing a combination of strategies, including consolidating and rationalizing hardware and software usage, embracing virtualization, and physically moving equipment.

Most companies don’t optimize the components of their data centers, and therefore, bump up against their limitations faster than necessary, he says.
Here are some strategies that IT leaders and other experts suggest using to help push data center utilization further.

Relocate Data

One of the first areas that drew Kossuth’s attention at Olin College was the cost of dealing with data. As one example, alumni, admissions staff, and other groups take multiple CDs worth of high-resolution photos at every event. They use server, storage, and bandwidth resources to edit, share, and retain those large images over long periods of time.

To free the data center from dealing with the nearly 10 terabytes of data that those image files represent, Kossuth opened a corporate account on Flickr and moved all processes surrounding management of those photos to the account. That not only eliminated the need to buy a $40,000 storage array, but also alleviated the pressure on the data center from a resource-intensive activity.

“There is little risk in moving non-core data out of the data center, and now we have storage space for mission-critical projects,” Kossuth says.

Ease High-Value Apps

Early on, Olin College purchased an $80,000 Tandberg videoconferencing system and supporting storage array. Rather than exhausting that investment through overuse, Kossuth now prioritizes video capture and distribution, shifting lower-priority projects to less expensive videoconferencing solutions and to YouTube for storage. For example, most public relations videos are generated outside of the Tandberg system and are posted on the college’s YouTube channel. “The data center no longer has to supply dedicated bandwidth for streaming and dedicated hardware for retention,” she says.

More important, the Tandberg system is kept pristine for high-profile conferences and mission-critical distance learning.

Standardize Equipment

Dan Blanchard, vice president of enterprise operations at Marriott International, boasts that his main data center is 22 years old and that he intends to get 20 more years out of it. He credits its long life to IT’s discipline, particularly to its efforts in standardizing equipment.
Each year, the hotel operator’s IT team settles on a handful of server and storage products to purchase. If a new project starts up or one of the 300 to 400 physical servers fails, machines are ready and waiting. Storage is handled similarly.

Even switches, though on a longer refresh cycle of about five years, are standardized. “Uniformity makes it much simpler to manage resources and predict capacity. If you have lots of unique hardware from numerous vendors, it’s harder to plan [future data center needs],” Blanchard says. He recommends working closely with vendors to understand their road maps and strategize standardized refreshes accordingly. For example, Marriott might delay a planned refresh if feature sets on a vendor’s upcoming release are worth waiting for.

Virtualize

Blanchard also is a fan of virtualization. Marriott’s pool of physical machines supports almost 1,000 virtual servers, freeing up floor space and saving on power and cooling. Though virtualization requires high-powered, high-density servers, Marriott has been able to consolidate the pool to several hundred energy-efficient physical machines, which reduces overall data center power consumption.

Gartner’s Kumar agrees that consolidation is a positive for data centers because low-utilization servers — those dedicated to just one or two applications — consume almost the same amount of energy as high-utilization ones. “Just to keep a [low-utilization] server on consumes 50 percent to 60 percent of the energy as it would if it were running full,” he says.
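
As a rough illustration of Kumar’s point, the sketch below compares the power draw of many lightly loaded servers against a smaller consolidated pool. The wattage, server counts, and idle-power fraction are assumptions chosen to match his 50 to 60 percent figure, not data from Gartner.

```python
# Back-of-the-envelope consolidation math. Kumar's observation: a lightly
# loaded server still draws roughly 50-60% of its full-load power.
# All numbers below are assumptions for illustration.

FULL_LOAD_WATTS = 400          # assumed full-load draw per server
IDLE_FRACTION = 0.55           # ~50-60% of full-load power when barely used

lightly_loaded_servers = 10    # each hosting only one or two small apps
consolidated_hosts = 2         # same workloads virtualized onto fewer hosts

before = lightly_loaded_servers * FULL_LOAD_WATTS * IDLE_FRACTION
after = consolidated_hosts * FULL_LOAD_WATTS      # hosts run near full load

print(f"Before consolidation: {before:.0f} W")    # 2200 W
print(f"After consolidation:  {after:.0f} W")     # 800 W
print(f"Estimated saving:     {1 - after / before:.0%}")  # ~64%
```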

Also, older servers tend to be far less efficient than newer models used in today’s virtualization efforts.

Retire Devices

Emphasizing the need for consolidation, Kumar says, “Organizations should clean house.” Data center managers should conduct audits using asset management software or other tools that offer visibility into application and hardware usage.
Here’s what an audit is likely to turn up: Some 5 percent to 10 percent of hardware devices are either switched off or supporting a single, rarely used application, according to Kumar. Because servers consume energy and other resources at any utilization rate, either trash the application or virtualize it, and retire or reuse the hardware. “You have to make sure that every piece of hardware in your data center is doing productive work,” he says.
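
A minimal sketch of the kind of idle-device audit Kumar describes, assuming utilization data has already been exported from an asset management or monitoring tool to a CSV file. The file name, column names, and thresholds are hypothetical.

```python
# Flag candidate servers for retirement or virtualization from an exported
# utilization report. "inventory.csv", its columns, and the thresholds are
# assumptions for illustration, not any specific vendor's format.
import csv

CPU_THRESHOLD = 5.0   # average CPU % below which a server looks idle
APP_THRESHOLD = 2     # servers carrying this many applications or fewer

with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        avg_cpu = float(row["avg_cpu_percent"])
        apps = int(row["hosted_app_count"])
        if row["power_state"] == "off" or (avg_cpu < CPU_THRESHOLD and apps <= APP_THRESHOLD):
            print(f"Review {row['hostname']}: state={row['power_state']}, "
                  f"cpu={avg_cpu}%, apps={apps}")
```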

At Citigroup, uncovering idle servers is a regular exercise for those who manage the financial services company’s 14 data centers (the oldest has been in use for 20 years).

“Over the years, things get lost. Not only do we use asset management tools, but we also physically walk around the data center to make sure each device has a purpose,” says Jack Glass, Citigroup’s director of data center planning. However, he says you should consult with applications teams before unplugging anything.

Glass agrees that if an infrequently used application is consuming hardware resources, it should be virtualized. “Virtualization is definitely our standard here. If it can’t be decommissioned, then, where possible, it gets virtualized,” he says.

Contain Sprawl

IT leaders concur that most test and development folks must be monitored closely because they will consume whatever resources you offer them. In fact, Citigroup’s Glass says he often finds abandoned or concluded test and development projects during his utilization reviews.
Kossuth uses a proactive strategy to ensure that test and development efforts don’t overtake her data center. She has roped off a section of the data center, complete with server and storage resources, to be used as a sandbox. Usage is carefully monitored, and when projects end, resources are immediately reabsorbed. She calls it a way to protect the data center without stifling innovation.

Remove Duplicate Data

Jason Kutticherry, vice president of data center planning at Citigroup, says the company has made a concerted effort to reduce its storage footprint, saving on data center floor space and on power and cooling. A key technology for this has been data deduplication, which identifies duplicate files and, in some cases, duplicate data within files.

“As a financial institution, we store a lot of data, so we want to make sure we’re not adding to the burden by saving multiple copies,” he says. Using deduplication consistently has helped the company reclaim storage at a fast enough rate that it has avoided unnecessary build-outs of storage arrays.
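
Citigroup relies on commercial deduplication on its storage arrays; purely to illustrate the underlying idea, file-level deduplication can be reduced to content hashing, as in the hypothetical sketch below. The scanned directory is made up, and production systems also deduplicate at the block or sub-file level.

```python
# Illustrative file-level deduplication: find files with identical contents
# by hashing them, so only one copy needs to be kept. The scanned path is
# an assumption; array-based dedup works below the file level as well.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

seen = {}               # content hash -> first path seen with that content
duplicate_bytes = 0
for path in Path("/data/shared").rglob("*"):    # assumed directory
    if not path.is_file():
        continue
    digest = sha256_of(path)
    if digest in seen:
        duplicate_bytes += path.stat().st_size
        print(f"{path} duplicates {seen[digest]}")
    else:
        seen[digest] = path

print(f"Reclaimable space: {duplicate_bytes / 1e9:.2f} GB")
```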

Use Effective Coding

While this advice might not seem relevant to a data center manager, Kutticherry insists that inefficient coding can suck away data center resources. For instance, poorly coded applications force servers to work harder, consume far more processing power, and increase the number of servers needed.

“Ensure that your developers are using the tightest code they can so applications are most effective,” he says.
Data center managers should also require developers to use common databases and not the custom variety. “Again, it makes for a more optimized computing environment and reduces the strain on hardware and software,” he adds.
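
As one common example of the inefficiency Kutticherry warns about, the hypothetical sketch below contrasts a chatty per-row query loop with a single set-based query against a shared database. The schema and data are invented for illustration.

```python
# Contrast a per-row query loop (the "N+1" pattern) with one set-based
# query. Schema and rows are invented; the point is fewer round trips
# and less work for the database server as data volumes grow.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hotels (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE reservations (id INTEGER PRIMARY KEY, hotel_id INTEGER);
    INSERT INTO hotels VALUES (1, 'Downtown'), (2, 'Airport');
    INSERT INTO reservations VALUES (1, 1), (2, 2), (3, 1);
""")

# Inefficient: one extra query per reservation, multiplying server work.
for (hotel_id,) in conn.execute("SELECT hotel_id FROM reservations").fetchall():
    conn.execute("SELECT name FROM hotels WHERE id = ?", (hotel_id,)).fetchone()

# Tighter: let the database join once and return everything in one pass.
rows = conn.execute("""
    SELECT r.id, h.name
    FROM reservations r JOIN hotels h ON h.id = r.hotel_id
""").fetchall()
print(rows)
```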

Rearrange Furniture

If a lack of floor space is your problem, consider moving IT equipment around. “What typically happens in most data centers is that once built and commissioned to a certain design specification, new equipment is added over the ensuing years with consideration to cabling and cooling requirements rather than an optimal floor layout,” Gartner’s Kumar says.

He urges IT teams to review layouts every three to five years and then redraw floor plans using tools such as computational fluid dynamics (CFD) analysis systems, which model proper airflow. “While a CFD can be expensive, at around $30,000, gaining a few more years from your data center makes it worthwhile,” he argues.

He also points to pace-layering, a design technique used to organize data centers in an optimal manner, with different pieces evolving at different speeds, or phases. Such an approach takes into account the fact that Web servers need to be managed differently from, say, Tier 1 storage.

Analyze Efficiency

A CFD analysis, in addition to helping to map floor space, is a useful tool for optimizing energy efficiency. Too often, companies overload certain areas of the data center with equipment, creating hotspots that max out power and cooling, Kumar says.

By analyzing the data center’s temperature, you can potentially delay the need to buy larger air conditioners and add power supplies. A simple repositioning of racks or equipment could buy you a few years, he says. Kossuth engineers her racks so that there is good airflow at the back of the racks and equipment is properly cooled. “We have heat and humidity sensors all over the room and receive alerts if the temperature exits a certain band,” she says. “This has helped us maintain optimal power and cooling levels over the years.”
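
A minimal sketch of the kind of threshold check behind such alerts. The sensor names, readings, and band limits are made up; a real deployment would pull readings from the monitoring system and page someone rather than print.

```python
# Check sensor readings against an acceptable temperature band and report
# anything out of range. All values here are invented for illustration.

LOW_C, HIGH_C = 18.0, 27.0      # assumed acceptable inlet-temperature band

readings = {                     # hypothetical latest readings, in Celsius
    "rack-a1-inlet": 22.4,
    "rack-b3-inlet": 29.1,
    "rack-c2-inlet": 21.0,
}

for sensor, temp_c in readings.items():
    if not LOW_C <= temp_c <= HIGH_C:
        print(f"ALERT: {sensor} at {temp_c:.1f} C is outside the "
              f"{LOW_C}-{HIGH_C} C band")
```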
