The term “network” applies to everything from LAN to SAN to WAN. All these variations require a network core, so let’s start there. The size of the organization will determine the size and capacity of the core, and in most infrastructures, the datacenter core is constructed differently from the LAN core. Take a hypothetical network that has to serve a few hundred or a thousand users in a single building, with a datacenter in the middle: it’s not uncommon to find big modular switches in the middle and aggregation switches at the edges.
Ideally, the core is composed of two modular switching platforms that carry data from the edge over gigabit fiber, located in the same room as the server and storage infrastructure. Two gigabit fiber links to a closet of, say, 100 switch ports are sufficient for most business purposes. In the event that they’re not, you’re likely better off bonding multiple 1Gbit links than upgrading to 10G for those closets. As 10G drops in price, this will change, but for now it’s far cheaper to bond several 1Gbit ports than to add 10G capability to both the core and the edge.
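To make that tradeoff concrete, here’s a back-of-the-envelope calculation in Python. The numbers are hypothetical: a 100-port closet with two bonded 1Gbit uplinks.

```python
# Rough uplink math for an edge closet (illustrative numbers only).

EDGE_PORTS = 100   # 1Gbit access ports in the closet
UPLINKS = 2        # bonded 1Gbit fiber links back to the core
LINK_GBPS = 1

max_edge_demand = EDGE_PORTS * LINK_GBPS   # worst case: every port saturated
uplink_capacity = UPLINKS * LINK_GBPS
oversubscription = max_edge_demand / uplink_capacity

print(f"Oversubscription ratio: {oversubscription:.0f}:1")  # 50:1
# Office traffic is bursty, so ratios like this are usually fine; when they
# aren't, a third or fourth bonded 1Gbit link is far cheaper than 10G.
```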
In the likely event that VoIP will be deployed, it may be beneficial to implement small modular switches at the edge as well, allowing PoE (Power over Ethernet) modules to be installed in the same switch as the non-PoE ports. Alternatively, you can deploy trunked PoE ports to each user, which allows a single port to handle both VoIP and desktop access.
In the familiar hub-and-spoke model, the core connects to the edge aggregation switches with at least two links, and it connects to the server infrastructure either through direct copper runs or through server aggregation switches in each rack. That choice must be made site by site, owing to the distance limitations of copper cabling.
Either way, it’s cleaner to deploy server aggregation switches in each rack and run only a few fiber links back to the core than to try to shoehorn everything into a few huge switches. In addition, server aggregation switches allow redundant connections to redundant cores, which eliminates the risk of losing server communications in the event of a core switch failure. If you can afford it and your layout permits it, use server aggregation switches.
Regardless of the physical layout method, the core switches need to be redundant in every possible way: redundant power, redundant interconnections, and redundant routing protocols. Ideally, they should have redundant control modules as well, but you can make do without them if you can’t afford them.
Core switches will be responsible for switching nearly every packet in the infrastructure, so they need to be sized accordingly. It’s a good idea to make ample use of HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). These allow two discrete switches to effectively share a single IP and MAC address, which serves as the default gateway for a VLAN. In the event that one core fails, those VLANs will still be accessible.
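As a rough illustration of the behavior rather than the protocol itself, the toy Python model below captures the idea shared by HSRP and VRRP: the highest-priority surviving switch answers for the virtual gateway address. The switch names, priorities, and address are invented.

```python
# Toy model of first-hop redundancy (HSRP/VRRP-style): two core switches
# share one virtual gateway IP; the highest-priority live switch answers.

from dataclasses import dataclass

@dataclass
class CoreSwitch:
    name: str
    priority: int
    alive: bool = True

VIRTUAL_GATEWAY = "10.10.20.1"  # hosts on the VLAN use this as their default gateway

def active_router(switches):
    live = [s for s in switches if s.alive]
    return max(live, key=lambda s: s.priority) if live else None

core_a = CoreSwitch("core-a", priority=110)
core_b = CoreSwitch("core-b", priority=100)

print(f"{VIRTUAL_GATEWAY} answered by {active_router([core_a, core_b]).name}")  # core-a
core_a.alive = False  # core-a dies...
print(f"{VIRTUAL_GATEWAY} answered by {active_router([core_a, core_b]).name}")  # core-b
# ...and hosts never notice, because the gateway IP (and its virtual MAC)
# never changes.
```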
Finally, proper use of STP (Spanning Tree Protocol) is essential to sound network operation. A full discussion of first-hop redundancy protocols and spanning tree is beyond the scope of this guide, but correct configuration of these two elements will have a significant effect on the resiliency and proper operation of any Layer-3 switched network.
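To give a sense of one piece of what STP does, here’s a toy Python sketch of root-bridge election: the switch with the lowest bridge ID (priority first, then MAC address) becomes the root. The names and values are made up.

```python
# Toy sketch of STP root-bridge election. Real STP then has every non-root
# switch block its redundant paths toward the root, breaking loops.

switches = [
    {"name": "core-a", "priority": 4096,  "mac": "00:1a:00:00:00:01"},
    {"name": "core-b", "priority": 8192,  "mac": "00:1a:00:00:00:02"},
    {"name": "edge-1", "priority": 32768, "mac": "00:1a:00:00:00:03"},
]

# Lowest (priority, MAC) tuple wins the election.
root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print(f"Root bridge: {root['name']}")  # core-a
# In practice you deliberately set a low priority on a core switch so the
# root lands at the center of the network, not in a wiring closet.
```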
The ability of virtualization hosts to migrate virtual servers across a virtualization farm absolutely requires stable and fast central storage. This can be FC, iSCSI, or even NFS in most cases, but the key is that all the host servers can access a reliable central storage network.
Networking virtualization hosts isn’t like networking a normal server, however. While a normal server might have a front-end and a back-end link, a virtualization host might have six or more Ethernet interfaces. One reason is performance: A virtualization host pushes far more traffic than a normal server, for the simple reason that dozens of virtual machines may be running on a single host. The other reason is redundancy: With so many VMs on one physical machine, you don’t want one failed NIC to take a whole bunch of virtual servers offline at once.
To meet those needs, virtualization hosts should be built with at least two dedicated front-end links, two back-end links, and ideally a single management link. If this infrastructure will service hosts that live in semi-secure networks (such as a DMZ), then it may be reasonable to add physical links for those networks as well, unless you’re comfortable passing semi-trusted packets through the core as a VLAN. Physical separation is still the safest bet and less prone to human error. If you can physically separate that traffic by adding interfaces to the virtualization hosts, then do so.
Each pair of interfaces should be bonded using some form of link aggregation, such as LACP (Link Aggregation Control Protocol, part of the IEEE 802.3ad standard) or static aggregation. Either should suffice, though your switch may support only one form or the other. Bonding these links provides load-balancing as well as failover protection at the link level, and it’s an absolute requirement, especially since you’d be hard-pressed to find a switch that doesn’t support it.
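The host side varies by hypervisor, but as a minimal sketch, here’s what building such a bond might look like on a Linux-based host with iproute2. The interface names are examples, and the commands need root.

```python
# Minimal sketch: create an LACP (802.3ad) bond from two front-end NICs on a
# Linux host using iproute2. Interface names are examples; run as root.

import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

SLAVES = ["eth0", "eth1"]  # the two front-end links

ip("link", "add", "bond0", "type", "bond", "mode", "802.3ad", "miimon", "100")
for nic in SLAVES:
    ip("link", "set", nic, "down")             # a NIC must be down to enslave it
    ip("link", "set", nic, "master", "bond0")
ip("link", "set", "bond0", "up")
# The switch ports on the far end must be configured as an LACP aggregation
# group as well, or the bond will never negotiate properly.
```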
In addition to bonding these links, the front-end bundle should be trunked with 802.1q. This allows multiple VLANs to exist on a single logical interface and makes deploying and managing virtualization farms significantly simpler. You can then deploy virtual servers on any VLAN or mix of VLANs on any host without worrying about virtual interface configuration, and you don’t need to add physical interfaces to the hosts just to reach a different VLAN.
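Continuing the same hypothetical Linux host, tagging VLANs onto the bonded trunk might look like this; the VLAN IDs are examples.

```python
# Minimal sketch: add 802.1q subinterfaces to the bonded trunk, one per VLAN.
# A VM can then be bridged onto any of these VLANs without new cabling.

import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

for vlan_id in (10, 20, 30):  # example VLANs carried on the trunk
    ip("link", "add", "link", "bond0", "name", f"bond0.{vlan_id}",
       "type", "vlan", "id", str(vlan_id))
    ip("link", "set", f"bond0.{vlan_id}", "up")
# The matching switch ports must carry these VLANs as an 802.1q trunk.
```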
The virtualization host storage links don’t need to be either bonded or trunked unless your virtual servers will be communicating with a variety of back-end storage arrays. In most cases, a single storage array will be used, and bonding these interfaces will not necessarily result in performance improvements on a per-server basis.
However, if you require significant back-end server-to-server communication, such as traffic between front-end Web servers and back-end database servers, it’s advisable to dedicate that traffic to a specific set of bonded links. These will likely not need to be trunked, but bonding them will again provide load-balancing and redundancy on a host-by-host basis.
While a dedicated management interface isn’t truly a requirement, it can certainly make managing virtualization hosts far simpler, especially when modifying network parameters. Modifying links that also carry the management traffic can easily result in a loss of communication to the virtualization host.
So if you’re keeping count, you can see how a busy virtualization host might have seven or more interfaces. Obviously, this increases the number of switch ports required for a virtualization implementation, so plan accordingly. The increasing popularity of 10G networking, and the dropping cost of 10G interfaces, may let you drastically reduce the cabling requirements: a single pair of trunked and bonded 10G interfaces per host, plus a management interface. If you can afford it, do it.
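As a quick planning aid, here’s that arithmetic in Python for a hypothetical eight-host farm using the layout described above.

```python
# Switch-port budgeting for a hypothetical virtualization farm.

HOSTS = 8
LINKS_PER_HOST = {
    "front-end (bonded, trunked)": 2,
    "back-end / storage":          2,
    "server-to-server (bonded)":   2,
    "management":                  1,
}

per_host = sum(LINKS_PER_HOST.values())
print(f"{per_host} ports per host, {per_host * HOSTS} ports total")  # 7 and 56

# The 10G alternative: two bonded/trunked 10G ports plus 1G management.
print(f"10G design: {3 * HOSTS} ports total")  # 24
```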