Platform Architecture

The Network Edge architecture includes a fully capable stack of hardware, software, and design principles derived from multiple standards bodies and vendors. In this article we will cover:

  • The guiding principles and standards we apply

  • The network architecture

  • The software stack and major components of the OSS/BSS

  • A “packet walk” from virtual devices to the clouds and other destinations

General Trends And Standards

Equinix has built a full stack platform for Network Edge based on the standards set forth by ETSI, the European Telecommunications Standards Institute. Specifically, ETSI established an NFV Industry Specification Group that has defined much of the landscape for Network Functions Virtualization (NFV). NFV is the result of turning network activities, protocols, traffic flow, and design into a software service or code. NFV takes many forms, from purely new network and traffic-flow methods and operating systems to making an entire hardware device accessible via a GUI. At Equinix, we use the term to refer to designing and configuring network solutions using our platform and software, regardless of the device, object, component, or vendor. Network Edge is the first NFV offer in our interconnection portfolio and represents a significant leap in a customer's ability to deploy and manage networks in a rapid, on-demand way.

The ETSI NFV framework consists of three major components:

  • Network Functions Virtualization Infrastructure (NFVI): A subsystem that consists of all the hardware (servers, storage, and networking) and software components on which Virtual Network Functions (VNFs) are deployed. This includes the compute, storage, and networking resources, and the associated virtualization layer (hypervisor)

  • Management and Orchestration (MANO): A subsystem that includes the Network Functions Virtualization Orchestrator (NFVO), the Virtualized Infrastructure Manager (VIM), and the Virtual Network Functions Manager (VNFM)

  • Virtual Network Functions (VNFs): The software implementation of network functions that are instantiated as one or more virtual machines (VMs) on the NFVI
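
To make these relationships concrete, the three components can be sketched as a minimal Python model. This is purely illustrative: the class and field names below are assumptions made for explanation, not an ETSI-defined or Equinix-specific schema.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class NFVI:
        """Shared pool of compute, storage, and networking, plus the hypervisor layer."""
        compute_nodes: List[str]
        storage_pools: List[str]
        networks: List[str]

    @dataclass
    class VNF:
        """A network function (router, firewall, ...) packaged as one or more VM images."""
        name: str
        vm_images: List[str]

    @dataclass
    class MANO:
        """Management and orchestration: the NFVO, VIM, and VNFM subsystems."""
        nfvo: str
        vim: str
        vnfm: str

    # A VNF is deployed by MANO onto the shared NFVI resources.
    pod = NFVI(compute_nodes=["compute-01"], storage_pools=["ssd-pool"], networks=["underlay"])
    router = VNF(name="virtual-router", vm_images=["router-image-v1"])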

Overlaid on this framework are the legacy, current, and new operational and business support systems (OSS/BSS) that Equinix has procured or built over the years, resulting in a standardized architecture:

Within each component are multiple systems, some of which are described in more detail below.

The core concept behind NFV is to implement these network functions as pure software running over the NFVI. A VNF is a virtualized version of a traditional network function, such as a router or firewall – but it could also be a discrete function such as NAT or BGP (Border Gateway Protocol, a standardized exterior gateway protocol designed to exchange routing and reachability information between autonomous systems on the internet). This concept differs radically from the traditional hardware deployment model in many ways. Decoupling the software from the hardware allows each network function to be developed and to evolve on its own lifecycle, and it enables a model where the hardware/infrastructure resources can be shared across many software network functions.

A VNF implementation (like a virtual router or virtual switch) doesn’t usually change the essential functional behavior or the external operational interfaces of the equivalent Physical Network Function (PNF), such as a traditional hardware router or switch.

A VNF can be implemented as a single virtual machine, as multiple VMs, or as a function sharing a single VM with other functions.
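
As a rough illustration of those implementation options, the same idea can be expressed as hypothetical descriptors; the field names here are invented for the example and do not follow the ETSI VNFD schema.

    # Hypothetical VNF descriptors, for illustration only.
    single_vm_router = {
        "vnf": "virtual-router",
        "vdus": [  # virtual deployment units, i.e. the VMs backing the VNF
            {"name": "router-vm", "vcpus": 4, "ram_gb": 8},
        ],
    }

    multi_vm_firewall = {
        "vnf": "virtual-firewall",
        "vdus": [
            {"name": "control-plane-vm", "vcpus": 2, "ram_gb": 4},
            {"name": "data-plane-vm", "vcpus": 8, "ram_gb": 16},
        ],
    }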

Network Architecture And Equipment

Within the NFVI component of the architecture resides most of the hardware deployment. Equinix deploys a full complement of compute nodes, management devices, top of rack aggregation switches, border routers to other services, storage, and other elements that enable the full suite. The depth and size of each deployment may vary depending on market, projections, capacity, and other factors.

We refer to this full suite as a Point Of Deployment, or POD. If you are familiar with typical hyperscaler cloud providers, think of this as a region or availability zone. Each POD is independent of every other POD, even if more than one POD is deployed in the same metro.

A full POD also includes redundant top of rack aggregation switches and management switches for internal use such as operations/support, monitoring or ongoing orchestration of new assets.

Within the POD, Equinix hosts the virtual machines that run the software image of each VNF. Our VMs are KVM-based, and the infrastructure runs on an OpenStack platform.
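
Because the infrastructure is KVM on OpenStack, instantiating the VM behind a VNF is, at its core, an ordinary OpenStack server create. The sketch below uses the openstacksdk library; the cloud name, image, flavor, and network IDs are placeholders rather than real Network Edge values.

    import openstack

    # Placeholder identifiers; in practice these come from the orchestrator's
    # inventory and image catalog.
    conn = openstack.connect(cloud="nfvi-pod")

    server = conn.compute.create_server(
        name="vnf-router-01",
        image_id="IMAGE_UUID",
        flavor_id="FLAVOR_UUID",
        networks=[{"uuid": "MGMT_NET_UUID"}, {"uuid": "WAN_NET_UUID"}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the KVM guest is running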

Each virtual device is logically connected to the aggregation switches and interconnection platforms above it using VXLAN technology, and a VPP orchestrates the networking between them and in and out of the POD:

The VPP is the vector packet processing software that makes the switching and routing decisions for packets. It passes traffic back and forth to the ECXF and EC (Internet) interconnection platforms and maintains full redundancy in case of failures.

System/Stack Architecture

The NFV Management and Orchestration suite has several key software components that facilitate the platform. This portion of the reference architecture is often referred to as MANO (management and orchestration) for short.

  • VIM – Virtualized Infrastructure Manager – handles the instantiation, configuration, reservation, and other functions of the compute, storage, and other traditional infrastructure elements

  • VNFM – Virtual Network Functions Manager – handles lifecycle, monitoring, and other activities of active virtual devices; runs the workflow of deploying a device, change management, and ultimately teardown/deletion of devices

  • NFVO – Network Functions Virtualization Orchestrator – ensures that the right configs are loaded onto software images, that inventory (such as IP addresses and VXLANs) is fetched and reserved, and handles other functions where coordination with other systems and the OSS/BSS is required
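
In code terms, the division of labor among these components can be sketched roughly as follows. The class and method names are illustrative assumptions, not the actual Equinix implementation.

    # Illustrative sketch of the MANO division of labor; not actual Equinix code.
    class VIM:
        """Virtualized Infrastructure Manager: owns compute/storage/network resources."""
        def reserve_compute(self, vcpus, ram_gb):
            return {"host": "compute-07", "vcpus": vcpus, "ram_gb": ram_gb}

    class VNFM:
        """VNF Manager: owns the lifecycle of running virtual devices."""
        def instantiate(self, image, reservation):
            return {"vnf_id": "vnf-123", "image": image, "host": reservation["host"]}

        def teardown(self, vnf_id):
            pass  # change management and deletion also live here

    class NFVO:
        """NFV Orchestrator: selects images/configs and coordinates VIM, VNFM, and OSS/BSS."""
        def __init__(self, vim, vnfm):
            self.vim, self.vnfm = vim, vnfm

        def deploy(self, device_type):
            reservation = self.vim.reserve_compute(vcpus=4, ram_gb=8)
            return self.vnfm.instantiate(image=f"{device_type}-image", reservation=reservation)

    print(NFVO(VIM(), VNFM()).deploy("virtual-router"))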

Equinix maintains redundant orchestrators in each region. When a request is made through the portal or API, it reaches the relevant orchestrator, which begins reserving assets and inventory and selecting an appropriate configuration and image for the requested device or service.

Here is an example of the flow and interaction between the various systems in a specific region:

When needed, the NE orchestrator interacts with the ECXF orchestrator to coordinate activation of a connection from the interface of a VNF to the cloud or other destination of choice. Each activity regularly checks inventory to see what is available and reserves bandwidth, IP addressing, VLANs, or other logical resources so that they are not taken by any other device on the platform.
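
Conceptually, that reservation behaves like the simplified sketch below; the orchestrator classes, method names, and resources shown are illustrative assumptions, not the real interfaces.

    # Simplified sketch of coordinated resource reservation; names are illustrative.
    class Inventory:
        def __init__(self):
            self.free_vlans = set(range(2, 4095))

        def reserve_vlan(self):
            vlan = min(self.free_vlans)
            self.free_vlans.remove(vlan)  # removed from the pool so no other
            return vlan                   # device on the platform can claim it

    class ECXFOrchestrator:
        def __init__(self, inventory):
            self.inventory = inventory

        def activate_connection(self, vnf_interface, destination):
            vlan = self.inventory.reserve_vlan()
            return {"interface": vnf_interface, "to": destination, "vlan": vlan}

    class NEOrchestrator:
        def __init__(self, ecxf):
            self.ecxf = ecxf

        def connect_to_cloud(self, vnf_interface, csp):
            # Hand off to the ECXF orchestrator to stitch the connection.
            return self.ecxf.activate_connection(vnf_interface, csp)

    ne = NEOrchestrator(ECXFOrchestrator(Inventory()))
    print(ne.connect_to_cloud("GigabitEthernet2", "example-cloud"))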

Equinix also includes a host of internal management and monitoring tools. While they are primarily geared toward our own operations to ensure a consistent experience and performance, customers can gather some of these details today, and over time we will expose more of them via services where appropriate.

Our suite includes:

  • Monitoring

    • Health and performance of physical and logical assets, such as CPU and RAM utilization

    • POD-level views into physical and virtual active components and objects

  • Analysis and Reporting

    • Service impact analysis to determine the relationships between different components and the effect each has on the other when changes or events occur

    • POD capacity forecasting – lets our engineers know in advance when augments to compute, network, or other assets will be needed (see the sketch after this list)

  • Automation

    • Auto-discovery when capacity is augmented, whether added to the POD or to uplinks to other platforms, so that it becomes usable very quickly

    • Reports on and reacts to POD-level health and changes

    • Fully integrated with the VIM
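
As a rough illustration of the capacity-forecasting idea above (a sketch only, not the tooling Equinix actually runs), a simple trend over recent utilization samples can flag when a POD will need augmentation:

    # Rough sketch of capacity forecasting; not the actual Equinix tooling.
    def weeks_until_full(utilization_history, threshold=0.85):
        """Linear trend over weekly POD utilization samples (values 0.0 - 1.0)."""
        growth_per_week = (utilization_history[-1] - utilization_history[0]) / (len(utilization_history) - 1)
        if growth_per_week <= 0:
            return None  # flat or shrinking; no augment needed on this trend
        return (threshold - utilization_history[-1]) / growth_per_week

    # Example: CPU utilization grew from 60% to 72% over five weeks,
    # so engineers get roughly four weeks of advance notice.
    print(weeks_until_full([0.60, 0.63, 0.66, 0.69, 0.72]))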

Stitch It Together: Packet And Traffic Flow

Network Edge uses EVPN/VXLAN for the control plane and data plane functions. The main purpose of the L2 control plane and MAC learning is to establish L2 reachability between the VNF and the respective CSP router. Once L2 connectivity is established, L3 peering can be established between the VNF and its respective peer. As a result, only two MAC addresses are learned across a single VNI, because that is all that is needed for connectivity, while many MAC addresses are learned across the VTEP (two for each VNI). The following builds out the data flow from the middle outward to establish route peering.
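
To make the “two MAC addresses per VNI, many per VTEP” point concrete, here is a toy model of the MAC table a VTEP ends up with; the addresses are placeholders.

    # Toy model of the MAC learning described above; illustrative only.
    from collections import defaultdict

    vtep_mac_table = defaultdict(set)  # VNI -> MAC addresses learned on this VTEP

    def learn(vni, mac):
        vtep_mac_table[vni].add(mac)

    # Each VNI carries a single VNF-to-peer L2 segment, so it only ever
    # learns the two endpoint MACs...
    learn(10, "00:00:00:00:00:1a")  # VNF interface
    learn(10, "00:00:00:00:00:1b")  # CSP router
    learn(20, "00:00:00:00:00:2a")
    learn(20, "00:00:00:00:00:2b")

    # ...while the VTEP as a whole learns two MACs for every VNI it terminates.
    print(sum(len(macs) for macs in vtep_mac_table.values()))  # 4 MACs across 2 VNIs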

The infrastructure control plane consists of EVPN between the compute VTEP and the ECXF VTEP to enable dynamic MAC address learning, while VXLAN is used as the data plane between the compute and ECXF nodes. Additionally, the VNI is mapped to the VPP vSwitch, and the MAC address gets encapsulated on ingress to the VPP and tagged, in the example below, with a VNI of 10.
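
In addition to the diagram, the encapsulation itself can be sketched with the scapy packet library; the MAC and IP addresses below are placeholders, with only the VNI of 10 taken from the example.

    # Illustrative VXLAN encapsulation with placeholder addresses (scapy).
    from scapy.layers.l2 import Ether
    from scapy.layers.inet import IP, UDP
    from scapy.layers.vxlan import VXLAN

    # The original frame from the VNF toward its peer...
    inner = Ether(src="00:00:00:00:00:1a", dst="00:00:00:00:00:1b") / IP()

    # ...rides inside an outer header between the compute VTEP and the ECXF VTEP,
    # tagged with VNI 10.
    pkt = (
        Ether()
        / IP(src="192.0.2.1", dst="192.0.2.2")  # VTEP underlay addresses
        / UDP(dport=4789)                       # VXLAN's well-known UDP port
        / VXLAN(vni=10)
        / inner
    )
    pkt.show()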

Before an overlay control plane session between the private cloud and the VNF can be established, one more leg of the L2 control plane, between the ECXF and the respective private cloud, must exist. During the provisioning process, when connecting to a CSP, a VLAN gets dynamically instantiated and connected to the ECXF switch, typically over a .1q connection. The MAC address of the CSP is then learned over this .1q trunk port: in the example below, MAC:01B from the CSP is learned at the physical port on the ECXF switch via VLAN 462. The last connection needed to complete the L2 control plane is made via routing instances (RI) on the ECXF switch, which form an internal link for the EVPN session. Once the last leg of the L2 control plane has been completed, the overlay L3 control plane for BGP peering can be established.
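
The .1q leg toward the CSP can be sketched the same way; the source MAC below is a placeholder standing in for MAC:01B from the example, and VLAN 462 is the dynamically instantiated VLAN.

    # Illustrative 802.1Q-tagged frame arriving from the CSP (scapy).
    from scapy.layers.l2 import Dot1Q, Ether
    from scapy.layers.inet import IP

    frame = (
        Ether(src="00:00:00:00:00:1b")  # CSP router MAC, learned on the ECXF trunk port
        / Dot1Q(vlan=462)               # VLAN instantiated during provisioning
        / IP()
    )
    print(frame.summary())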

The entire solution looks like this end to end: