#Neutron and @AristaNetworks – Routing and #VXLAN encapsulation at wire speed

Before going further, I strongly suggest reading There’s real magic behind OpenStack neutron to understand in more detail what happens between Nova instances, Open vSwitch and Neutron since Icehouse.

In this post we’ll walk through Neutron (Kilo release) with the Arista Networks ML2 driver and L3 plug-in. Most of the network orchestration changes between Icehouse, Juno and Kilo have taken place in the L2 drivers, stability, performance and routing (VRRP support and DVR; check out OpenStack Kilo/Juno – L3 High Availability VRRP for Neutron). On top of that, Arista provides important improvements to route and forward packets at wire speed for orchestrated networks.

The next picture is the reference for the ones that follow.

[Figure 1: two Nova compute nodes and a Neutron host]

The picture above shows two Nova compute nodes and a Neutron host. Each Nova node runs three instances that we’ll connect through VXLAN, as the next picture shows.

[Figure 2: instances on both compute nodes connected through VXLAN]

Instances that belong to the same compute node can be connected through the same OVS bridge. Instances provisioned in the same tenant on different physical compute nodes require a tunnel, built with GRE or VXLAN at BR-TUN. Some SDN controllers on the market, such as Nuage and VMware NSX, bring advanced features for VTEPs (VXLAN Tunnel End Points) at OVS. This post, however, focuses on hardware-based VXLAN through the Arista drivers.

Arista ML2 (Modular Layer 2) driver vs. what Neutron brings by default

The Arista ML2 driver maps OVS-tagged VLANs to VXLAN and provides the VTEPs in hardware through a leaf/spine architecture. Leaf/spine designs come into play when the cloud stack scales out to many racks.
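Enabling the driver is a matter of ML2 configuration on the Neutron server. The following is only a rough sketch: the section and option names follow the Arista ML2 driver documentation of that era, and the host, credentials and region are placeholder values.

    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch,arista

    [ml2_arista]
    # EAPI endpoint and credentials the driver uses to program the switches (placeholders)
    eapi_host = 192.168.10.5
    eapi_username = neutron
    eapi_password = secret
    region_name = RegionOne

With VLAN tenant networks, OVS keeps tagging locally while the ToR takes care of the VXLAN side, which is exactly the split described in the rest of this post.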

Let’s start by seeing how the VXLAN tunnel type works in Neutron by default with OVS.

[Figure 3: default Neutron VXLAN encapsulation/decapsulation at each compute’s BR-TUN, with two VNIs]

Figure 3 shows how OVS encapsulates and decapsulates VXLAN (two different identifiers, called VNIs, are illustrated in this case) directly at each compute’s tunnel bridge. OpenFlow rules steer the traffic coming from the instances at BR-TUN into VXLAN tunnels. The tunnels are set up between Nova nodes across L2 switches, which don’t need any configuration beyond forwarding packets between the nodes.
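For reference, this default behaviour comes from the ML2/OVS agent settings, roughly along these lines on Kilo (the VNI range and IP address are placeholders; the exact files depend on your deployment):

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    vni_ranges = 1001:2000

    [ovs]
    # this compute node's tunnel endpoint address (placeholder)
    local_ip = 10.0.0.11

    [agent]
    tunnel_types = vxlan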

[Figure 4: step-by-step packet flow inside each Nova node]

I will use Figure 4 to describe step by step what’s going on inside every Nova node (more details at There’s real magic behind OpenStack neutron):

  1. Traffic from an instance leaves through its eth0 virtual interface and passes through the security groups (the Linux network stack, using iptables to control traffic to/from the instance).
  2. After the security groups, traffic heads to the integration bridge (BR-INT), where VLAN IDs are tagged and untagged depending on how the networks were provisioned by the users.
  3. If the destination instance is located on a different physical node, packets head to BR-TUN.
  4. Traffic is encapsulated into VXLAN at BR-TUN and decapsulated at the destination compute’s BR-TUN (the commands after this list show how to inspect these bridges).
  5. Traffic heads to BR-INT and then to the appropriate interface after VLAN untagging.
  6. Packets reach the destination instance’s security groups for filtering.
  7. Packets arrive at the destination instance.
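If you want to see these bridges on a compute node, a few standard Open vSwitch commands are enough (output omitted here, since it varies per deployment):

    # list the bridges created by the OVS agent (expect br-int and br-tun)
    ovs-vsctl list-br
    # ports on the tunnel bridge: patch-int plus one vxlan port per remote peer
    ovs-vsctl list-ports br-tun
    # the OpenFlow rules that map VNIs to local VLANs and back
    ovs-ofctl dump-flows br-tun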

Now, let’s see how it works using the Arista ML2 driver. The next picture shows how VXLAN is encapsulated/decapsulated at the ToR switches. If instances run on nodes located in different racks and connected through a leaf/spine fabric, the tunnels are set across it; the VTEPs start and end at the ToR switches.

[Figure 5: VXLAN encapsulated/decapsulated at the ToR switches with the Arista ML2 driver]

Arista performs the VXLAN encapsulation/decapsulation in hardware at wire speed. Let’s see how it works based on Figure 6.

[Figure 6: step-by-step packet flow with hardware VXLAN at the ToR switches]

Using Figure 4 as the reference for the step-by-step description, Figure 6 changes step 4 and breaks it into three steps:

4.1 BR-TUN sends the packet, with its associated VLAN tag, directly to the ToR switch. Arista maps this VLAN to its associated VNI (the VLAN/VNI mapping has been set previously on the ToR switches; see the sketch after these steps).

4.2 The picture shows an Arista leaf/spine design, so the VNI’s traffic has to cross a VXLAN tunnel between the ToR VTEPs. This tunnel is established between ToRs that can be located in different racks, and the traffic flows through it.

4.3 The destination ToR decapsulates the VXLAN and releases the previously tagged packets, forwarding them to the destination’s BR-TUN. The destination’s BR-TUN checks the tags and forwards them to the destination’s BR-INT according to its OpenFlow rules.
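To give an idea of what the VLAN/VNI mapping looks like on an Arista ToR, here is a minimal EOS sketch; the VLAN, VNI and loopback values are placeholders, and in a real deployment the ML2 driver programs this state rather than the operator typing it by hand:

    interface Loopback1
       ip address 172.16.0.1/32
    !
    interface Vxlan1
       vxlan source-interface Loopback1
       vxlan udp-port 4789
       ! tenant VLAN 100 carried as VNI 10100 between the ToR VTEPs
       vxlan vlan 100 vni 10100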

Among the advantages of using the Arista ML2 driver: reduced CPU utilization on the compute nodes for VXLAN encap/decap; connectivity between instances located at different points of the datacenter, or even at different sites, through VTEPs (VTEPs just need L3 connectivity between the devices at both ends of the tunnel); and the ability to connect physical resources located outside the stack (physical load balancers, physical firewalls, or even databases installed on physical servers) with virtual instances through VTEPs.

A downside is that this configuration is limited to 4K tenant networks: VXLAN encap/decap occurs at the ToR switches after mapping each VLAN to its VNI, so the number of networks is still bounded by the VLAN space. However, if you are planning to deploy private stacks, each with its own Neutron nodes, you can physically segment the ToR switches per pair of Neutron nodes while sharing the same spine infrastructure. Usually a pair of ToR switches belongs to just one private stack and is not shared between different private clouds.

Pushing the routing functionality into hardware through the Arista L3 plug-in

Now suppose your instance needs to reach an external endpoint and the connection has to be routed outside. The Neutron L3 service plug-in takes packets out through virtual routers previously provisioned on the network/Neutron nodes.
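Those virtual routers are the ones tenants create through the usual Neutron workflow; as a quick reminder (the router, subnet and external network names are placeholders):

    # create a tenant router, attach a tenant subnet and set the external gateway
    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet
    neutron router-gateway-set demo-router ext-net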

The figure below shows step 4, which illustrates how traffic leaving BR-TUN is forwarded to the network node, and from there to the virtual routers. This connection goes through a VXLAN tunnel that is encapsulated/decapsulated at the BR-TUNs, as shown in the previous pictures; the difference is that the destination BR-TUN is located on the network node (check out OpenStack Kilo/Juno – L3 High Availability VRRP for Neutron for further details).

[Figure 7: BR-TUN traffic forwarded to the virtual routers on the network node]

There isn’t any special requirement on the physical network switches other than forwarding packets. You may have to install an additional pair of L2 switches for the external links, or just create an external VLAN; these switches, or that VLAN, connect to the external Ethernet interface on the network nodes.

Let’s switch over to how the Arista L3 plug-in works.

The next picture shows how Arista creates switched virtual interfaces (SVIs) on the ToR when a virtual router is created in Neutron; the hardware switch then becomes the default gateway. Through MLAG, the switches expose virtual IPs to provide redundancy between the ToRs. This solution brings more performance and scalability than the previous one. Of course, you will have to design it carefully to avoid conflicts outside the cloud switching infrastructure: remember that you are only solving the first routing hop, and since virtual IPs are used for redundancy, packets can leave toward the external world through either switch, so your upstream configuration has to be prepared to receive routed traffic from both.

[Figure 8: SVIs and virtual IPs on the ToR pair acting as default gateway with the Arista L3 plug-in]
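On the EOS side, the first-hop redundancy described above looks roughly like the following VARP sketch; the virtual MAC, VLAN and addresses are placeholders, and the plug-in drives this configuration on both ToRs rather than the operator doing it manually. Each ToR keeps its own SVI address while both answer for the shared virtual gateway address.

    ip virtual-router mac-address 00:1c:73:00:00:99
    !
    interface Vlan100
       ip address 10.100.0.2/24
       ! shared gateway address answered by both MLAG peers
       ip virtual-router address 10.100.0.1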

An important part of this information has come from

See you soon!
