
Best practices to delay costly data center expansions

By Sandra Gittlen, Computerworld on Apr 19, 2012

Read about the best ways to postpone acquiring an expensive new data center for your organization.

This year marks the 11th anniversary of the 1,200-square-foot data center at the Franklin W. Olin College of Engineering — that means it’s been in use four years longer than CIO and vice president of operations Joanne Kossuth originally expected. The facility needs more capacity and better connectivity, but Kossuth has been forced to put those needs on the back burner because of the state of the economy.

“Demand has certainly increased over the years, pushing the data center to its limits, but the recession has tabled revamp discussions,” she says.

Like many of her peers, including leaders at Citigroup and Marriott International, Kossuth has had to get creative to squeeze more out of servers, storage, and the facility itself. To do so, she’s had to re-examine the life cycles of data and applications, storage array layouts, rack architectures, server utilization, orphaned devices, and more.

Rakesh Kumar, an analyst at Gartner, says he’s been bombarded by inquiries from large organizations looking for ways to avoid the cost of a data center upgrade, expansion, or relocation. “Any data center investment costs at minimum tens of millions, if not hundreds of millions, of dollars. With a typical data center refresh rate of five to 10 years, that’s a lot of money. So companies are looking for alternatives,” he says.

While that outlook might seem gloomy, Kumar finds that many companies can extract an extra two to five years from their data centers by employing a combination of strategies, including consolidating and rationalizing hardware and software usage, embracing virtualization, and physically moving equipment.

Most companies don’t optimize the components of their data centers and therefore bump up against their limitations faster than necessary, he says.

Here are some strategies that IT leaders and other experts suggest using to help push data center utilization further.

Relocate Data

One of the first areas that drew Kossuth’s attention at Olin College was the cost of dealing with data. As one example, alumni, admissions staff, and other groups take multiple CDs’ worth of high-resolution photos at every event. They use server, storage, and bandwidth resources to edit, share, and retain those large images over long periods of time.

To free the data center from dealing with the nearly 10 terabytes of data that those image files represent, Kossuth opened a corporate account on Flickr and moved all processes surrounding management of those photos to the account. That not only eliminated the need to buy a $40,000 storage array, but also alleviated the pressure on the data center from a resource-intensive activity.

“There is little risk in moving non-core data out of the data center, and now we have storage space for mission-critical projects,” Kossuth says.
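
Identifying which data is non-core can itself be automated. Below is a minimal, illustrative sketch (the share path, file types, and size and age thresholds are assumptions, not Olin’s) that walks a file tree and totals large, long-untouched image files as candidates for moving to a hosted service:

```python
# Illustrative only: inventory large, rarely modified image files that
# could be moved off primary storage to a hosted service.
import os
import time

ROOT = "/srv/shared/photos"       # hypothetical file share to scan
MIN_SIZE = 20 * 1024 * 1024       # flag files larger than 20 MB
MAX_AGE_DAYS = 365                # untouched for a year or more

cutoff = time.time() - MAX_AGE_DAYS * 86400
candidates, total_bytes = [], 0

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if not name.lower().endswith((".tif", ".tiff", ".raw", ".jpg", ".png")):
            continue
        path = os.path.join(dirpath, name)
        st = os.stat(path)
        if st.st_size >= MIN_SIZE and st.st_mtime < cutoff:
            candidates.append(path)
            total_bytes += st.st_size

print(f"{len(candidates)} offload candidates, "
      f"{total_bytes / 1024**4:.2f} TB reclaimable")
```
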
Ease High-Value Apps

Early on, Olin College purchased an $80,000 Tandberg videoconferencing system and supporting storage array. Rather than exhausting that investment through overuse, Kossuth now prioritizes video capture and distribution, shifting lower-priority projects to less expensive videoconferencing solutions and to YouTube for storage. For example, most public relations videos are generated outside of the Tandberg system and are posted on the college’s YouTube channel. “The data center no longer has to supply dedicated bandwidth for streaming and dedicated hardware for retention,” she says.

More important, the Tandberg system is kept pristine for high-profile conferences and mission-critical distance learning.

Standardize Equipment

Dan Blanchard, vice president of enterprise operations at Marriott International, boasts that his main data center is 22 years old and that he intends to get 20 more years out of it. He credits its long life to IT’s discipline, particularly its efforts to standardize equipment.

Each year, the hotel operator’s IT team settles on a handful of server and storage products to purchase. If a new project starts up or one of the 300 to 400 physical servers fails, machines are ready and waiting. Storage is handled similarly.

Even switches, though on a longer refresh cycle of about five years, are standardized. “Uniformity makes it much simpler to manage resources and predict capacity. If you have lots of unique hardware from numerous vendors, it’s harder to plan [future data center needs],” Blanchard says. He recommends working closely with vendors to understand their road maps and strategize standardized refreshes accordingly. For example, Marriott might delay a planned refresh if feature sets on a vendor’s upcoming release are worth waiting for.
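
Blanchard’s spares logic lends itself to simple arithmetic. The back-of-the-envelope model below is hypothetical: the failure rate and vendor lead time are assumptions, and only the 300-to-400-server fleet size comes from the article:

```python
# Rough spare-pool sizing for a standardized server fleet.
# All rates and lead times here are illustrative assumptions.
import math

FLEET_SIZE = 350            # physical servers of one standard model
ANNUAL_FAILURE_RATE = 0.04  # assumed 4% of units fail per year
VENDOR_LEAD_TIME_DAYS = 45  # assumed time to receive replacements

expected_failures_per_year = FLEET_SIZE * ANNUAL_FAILURE_RATE
# Stock enough identical spares to cover failures expected during one
# replenishment cycle, rounded up.
spares_needed = math.ceil(
    expected_failures_per_year * VENDOR_LEAD_TIME_DAYS / 365)
print(f"Expected failures/yr: {expected_failures_per_year:.0f}; "
      f"spares to stock: {spares_needed}")
```

With one standard model, a single small spare pool covers the entire fleet; with unique hardware from numerous vendors, each model would need its own buffer stock.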

Virtualize

Blanchard also is a fan of virtualization. Marriott’s pool of physical machines supports almost 1,000 virtual servers, freeing up floor space and saving on power and cooling. Though virtualization requires high-powered, high-density servers, Marriott has been able to consolidate the pool to several hundred energy-efficient physical machines, which reduces overall data center power consumption.
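
A rough sketch of the arithmetic behind that consolidation may help. In the hypothetical numbers below, only the figure of roughly 1,000 virtual servers on several hundred hosts comes from the article; the wattages and the VM-per-host ratio are assumptions:

```python
# Back-of-the-envelope consolidation estimate. Power draws and the
# VM-per-host ratio are illustrative assumptions, not Marriott's figures.
LEGACY_SERVERS = 1000   # one app per box, pre-virtualization
LEGACY_WATTS = 400      # assumed average draw per legacy server
VMS_PER_HOST = 4        # assumed conservative consolidation ratio
HOST_WATTS = 750        # assumed draw of a high-density host

hosts = -(-LEGACY_SERVERS // VMS_PER_HOST)   # ceiling division -> 250 hosts
legacy_kw = LEGACY_SERVERS * LEGACY_WATTS / 1000
virtual_kw = hosts * HOST_WATTS / 1000
print(f"{hosts} hosts; {legacy_kw:.0f} kW -> {virtual_kw:.0f} kW "
      f"({1 - virtual_kw / legacy_kw:.0%} power reduction)")
```
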

Gartner’s Kumar agrees that consolidation is a positive for data centers because low-utilization servers — those dedicated to just one or two applications — consume almost the same amount of energy as high-utilization ones. “Just to keep a [low-utilization] server on consumes 50 percent to 60 percent of the energy as it would if it were running full,” he says.

Also, older servers tend to be far less efficient than newer models used in today’s virtualization efforts.
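
Kumar’s figure makes the waste easy to quantify. In the hypothetical comparison below, the only sourced number is his estimate that an idle server draws 50 to 60 percent of its full-load power; the wattage and server counts are assumptions:

```python
# Why low-utilization servers waste energy, using Kumar's 50-60% idle figure.
FULL_LOAD_WATTS = 400   # assumed full-load draw of one server
IDLE_FRACTION = 0.55    # midpoint of Kumar's 50-60% estimate

# Ten barely used single-app servers vs. one well-utilized server
# handling the same aggregate workload:
ten_idle = 10 * FULL_LOAD_WATTS * IDLE_FRACTION   # ~2,200 W, mostly idle draw
one_busy = 1 * FULL_LOAD_WATTS                    # 400 W doing real work
print(f"Ten low-utilization servers: {ten_idle:.0f} W; "
      f"one consolidated server: {one_busy} W")
```
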

 

