Constructing a Physical Network (Underlay Network) on a DCN

This chapter describes the key points in the physical network design of a DCN. A DCN's physical network, also known as the underlay network, usually adopts the spine-leaf architecture, in which leaf nodes are classified into server leaf nodes, service leaf nodes, and border leaf nodes. The chapter will elaborate on the routing protocols used on such a network, such as Open Shortest Path First (OSPF) and External Border Gateway Protocol (EBGP). It will also detail how servers can connect to server leaf nodes in M-LAG, stack, or standalone mode. Finally, the chapter will outline how service leaf nodes and border leaf nodes can be deployed independently or combined, as well as how border leaf nodes can be connected to external provider edge (PE) devices in multiple networking modes and advertise routes to them.



DCNs typically adopt the spine-leaf architecture for their physical network. Table 5.1 describes the roles and their functions in physical networking of the cloud DCN solution, while Figure 5.1 illustrates the recommended networking mode in the industry.

TABLE 5.1 Roles and Their Functions in Physical Networking

Fabric

Network failure domain that is managed by an SDN controller. It contains one or more spine-leaf architectures.

Spine

Core node on a VXLAN fabric network. It provides high-speed IP forwarding and connects to leaf nodes through high-speed interfaces.

Leaf

Access node on a VXLAN fabric network. It connects various network devices to the VXLAN fabric network.

Service leaf

Functional node that connects VAS devices, such as firewalls and LBs, to a VXLAN fabric network.

Server leaf

Functional node that connects virtual and physical servers to a VXLAN fabric network.

Border leaf

Functional node that connects to routers or transmission devices outside a DC to forward traffic from an external network to a VXLAN fabric network in a DC.

FIGURE 5.1 Recommended physical networking.

A well-designed fabric network provides consistent access across access nodes. Fabric networks, which feature high bandwidth, large capacity, and low latency, contain one or more spine-leaf architectures. Each such architecture contains three types of leaf nodes — server leaf nodes, service leaf nodes, and border leaf nodes — which are essentially the same at the forwarding plane and differ only in the devices they connect. Because the spine-leaf architecture flattens the network, the east-west forwarding path across the entire network is short and forwarding efficiency is high.

Another key advantage of the fabric network is that it can be elastically scaled: when the number of servers increases, you only need to add leaf nodes, and if the spine forwarding bandwidth then becomes insufficient for the increased number of leaf nodes, you can add spine nodes.

For the spine-leaf architecture, the recommended configuration of spine and leaf nodes is as follows:

  • Spine node: Spine nodes forward traffic between leaf nodes at high speed. It is recommended that spine nodes be deployed independently, with the specific number of spine nodes depending on the oversubscription ratio of the leaf nodes. While different industries and customers have varying requirements on the oversubscription ratio, a ratio between 1:9 and 1:2 is generally used.
  • Leaf node: A leaf node connects to service servers and VAS servers, and functions as a north-south gateway. While leaf nodes can be deployed flexibly, M-LAG active-active is the recommended deployment mode. If the requirements for reliability and packet loss are not stringent, virtual chassis technologies such as CSS/iStack can also be used. Each leaf node is connected to all spine nodes, forming a full-mesh topology.
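The oversubscription ratio mentioned above can be estimated directly from a leaf node's port plan, as the ratio of server-facing (downlink) bandwidth to spine-facing (uplink) bandwidth. A minimal sketch, in which the port counts and speeds are illustrative assumptions rather than vendor recommendations:

```python
# Estimate a leaf node's oversubscription ratio from its port plan.
# Port counts and speeds below are illustrative assumptions.

def oversubscription(downlinks: int, downlink_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to spine-facing bandwidth."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example: 48 x 25GE server ports, 6 x 100GE uplinks (one per spine).
ratio = oversubscription(48, 25, 6, 100)
print(f"oversubscription = {ratio:.1f}:1")  # 1200/600 -> 2.0:1, i.e. 1:2
```

A result of 2.0:1 here corresponds to the 1:2 figure quoted in the text; a higher ratio would call for more uplinks or additional spine nodes.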

Leaf nodes and spine nodes are connected through Layer 3 routed interfaces, and they communicate at Layer 3 by configuring a dynamic routing protocol. OSPF or BGP is recommended. For details about routing protocol selection, see the following sections in this chapter.
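Because every leaf-spine link is a routed point-to-point interface, each link needs its own small subnet, and /31 prefixes (RFC 3021) are commonly carved from a dedicated interconnect block for this purpose. The following sketch generates such an addressing plan for the full mesh; the 10.0.0.0/24 block and the node counts are hypothetical assumptions for illustration:

```python
import ipaddress

# Generate /31 point-to-point addressing for full-mesh spine-leaf links.
# The 10.0.0.0/24 interconnect block and node names are illustrative.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

p2p = ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=31)

plan = {}
for leaf in leaves:
    for spine in spines:               # full mesh: every leaf links to every spine
        leaf_ip, spine_ip = next(p2p)  # a /31 holds exactly the two endpoints
        plan[(leaf, spine)] = (str(leaf_ip), str(spine_ip))

for (leaf, spine), (lip, sip) in plan.items():
    print(f"{leaf} {lip}/31 <-> {spine} {sip}/31")
```

The dynamic routing protocol (OSPF or BGP) is then enabled on these interfaces so that leaf and spine loopbacks are reachable across the fabric.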

ECMP is recommended for implementing load balancing and link backup, as shown in Figure 5.2. In this case, leaf nodes forward data traffic to spine nodes through multiple ECMP paths, guaranteeing reliability while increasing the available network bandwidth.

FIGURE 5.2 Using ECMP on the fabric network.

It is important to be aware that ECMP links need to use a load balancing algorithm that hashes on the Layer 4 source port number at the transport layer. Because VXLAN uses User Datagram Protocol (UDP) encapsulation, the destination port number of a VXLAN packet is always 4789, while its source port number varies from flow to flow.
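To see why the Layer 4 source port must be in the hash: between a given pair of VTEPs, the outer IP addresses and the destination port (always 4789) are identical for every flow, so the source port — which a VTEP typically derives from a hash of the inner flow — is the only outer-header field that distinguishes flows. A sketch of this behavior, using CRC32 as a stand-in for a hardware hash and illustrative addresses:

```python
import zlib

# Why ECMP hashing must include the UDP source port for VXLAN traffic.
# CRC32 stands in for a hardware hash; addresses and ports are illustrative.
VXLAN_DST_PORT = 4789
N_PATHS = 4  # equal-cost uplinks from a leaf to the spines

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port):
    """Pick an uplink from the outer 5-tuple of an encapsulated packet."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % N_PATHS

# Two inner flows between the same pair of VTEPs: outer IPs and the
# destination port are identical; only the source port differs, so it is
# what allows the two flows to be spread across different uplinks.
flow_a = ecmp_path("10.1.1.1", "10.1.1.2", "udp", 49321, VXLAN_DST_PORT)
flow_b = ecmp_path("10.1.1.1", "10.1.1.2", "udp", 53177, VXLAN_DST_PORT)
print(flow_a, flow_b)
```

If the hash ignored the source port, every flow between the two VTEPs would map to the same uplink, defeating ECMP load balancing.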
