In my last post, we examined the basics of the Aviatrix secure cloud network solution and how to move traffic across a cloud network. Now let’s add a feature that can be a big challenge in cloud networking: firewall insertion.

Transit Firenet Under a Microscope

Before we get into the nuts and bolts of how the Aviatrix Transit Firenet works, let’s understand its purpose. In the cloud, visibility and security are real problems to solve. Unlike a traditional data center where every connection is owned by the network engineers, native cloud networks are very basic. Inserting security appliances into traffic flows requires careful thought about how devices connect to each other and where we can insert appliances for inspection without adversely affecting the flow of traffic. Networking at cloud scale may be basic in terms of supported features, but it is rarely simple to build a design that scales easily with the footprint.

In a traditional network we could simply connect a firewall or other security appliance in-line with the traffic flows through the data center switches, but in the cloud there is no ‘in-line’ option. If we want to shim network virtual appliances (NVAs) into the traffic flow, we have to adjust cloud native route tables and change next-hops to point at them, which is difficult to do in a resilient fashion. Managing the deployment of these NVAs, the route tables, the network interfaces, and the scaling of the security solution presents a true challenge.
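For a sense of what that do-it-yourself route manipulation looks like (outside of Aviatrix entirely), here is a minimal sketch using AWS and boto3; the route table and network interface IDs are hypothetical placeholders:

```python
# Minimal sketch of pointing a cloud native route at an NVA in AWS with boto3.
# The route table ID and the appliance's ENI ID are hypothetical; a real design
# needs one of these per subnet route table, plus failover logic if the NVA dies.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.replace_route(                                  # use create_route if the route doesn't exist yet
    RouteTableId="rtb-0123456789abcdef0",           # spoke subnet route table (hypothetical)
    DestinationCidrBlock="0.0.0.0/0",               # send all outbound traffic to the appliance
    NetworkInterfaceId="eni-0123456789abcdef0",     # the NVA's data plane interface (hypothetical)
)
```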

So How is Transit FireNet Different?

Transit Firenet utilizes the Aviatrix Transit gateway architecture discussed in the previous post. In short, the Aviatrix Transit acts as the hub of the hub-and-spoke data plane, which is the perfect place to put a firewall for security inspection. Here’s the Transit Firenet in the context of the overall network deployment.

Since the FireNet is deployed via the controller, a lot of the complexity is abstracted away. Additionally, the firewall inspection policy is configurable via the controller so that certain source and destination VPCs can be excluded or included.

Let’s look at the process of building a Transit Firenet, and once it is built we’ll send traffic through it and packet walk the flow.

Building a Firenet

To understand what happens when we enable the Firenet function, first take a look at the Aviatrix Transit gateway interfaces pre-Firenet.

Notice that pre-Firenet, the Aviatrix transit gateway has only one interface, eth0. As discussed last time, this is the data plane and control plane interface, responsible for tunnels to the controller as well as other gateways. Once the Transit Firenet is enabled, the gateways change.

Two interfaces are added to the Aviatrix transit gateway to support the new firewall inspection flow. Eth2 is used to forward traffic to the firewall that will be built as part of the Firenet deployment. Eth3 is a new tunnel built to the HA gateway to support failover for firewall inspection. What now?

Now the controller can deploy the firewalls and orchestrate the connectivity between them and the gateways.

There’s a lot of detail here, but the important choices are what type of firewall to deploy, what instance size it should be, and which interfaces should be orchestrated and connected to which subnets in the Transit Firenet. Importantly, the management interface subnet is where the management interface will be deployed, with a public IP associated so the firewall can be managed directly. The egress interface is just what it sounds like: it specifies a public subnet where the firewall interface used for Internet connectivity (if configured) will reside. Choosing these settings also tells the controller how it needs to orchestrate the routing for the Aviatrix data plane and how the cloud native route tables need to be set up. The best part is that this complexity is handled entirely by the controller!
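To make those choices a little more concrete, here’s an illustrative summary of the decisions the deployment asks for, expressed as a simple Python dict; the field names and values are hypothetical and aren’t meant to mirror the controller’s actual API schema:

```python
# Illustrative summary of the Transit Firenet firewall deployment choices.
# These field names and values are hypothetical; they describe the decisions,
# not the controller's real API fields.
firewall_deployment = {
    "firewall_image": "Palo Alto Networks VM-Series",  # which vendor image to deploy
    "instance_size": "m5.xlarge",                      # cloud instance size (hypothetical)
    "attach_to_gateway": "transit-firenet-gw",         # the Aviatrix transit gateway (hypothetical name)
    "management_subnet": "10.1.0.0/28",   # public subnet; the mgmt interface gets a public IP here
    "egress_subnet": "10.1.0.16/28",      # public subnet for the Internet-facing interface (if used)
}
```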

The usual recommendation is to deploy transit gateways as an HA pair, pinning each to a different availability zone, and the same goes for firewalls. Below is how we will deploy the second firewall. It’s important to understand that this is not a primary/standby pair and that there is no clustering between the firewalls. Each firewall is managed separately (or together by a central management platform like Panorama). The gateways determine which flows go to which firewall using a tuple hashing algorithm that pins flows to a particular firewall to keep traffic symmetric. The details on that are coming up in the packet walk; for now, note the subnets and availability zone into which the second firewall is deployed and compare them to the primary.

Once both firewalls have been built and deployed and the connectivity orchestrated, we can log into them and set them up. I’m doing that manually in this post, but we could also supply a bootstrap config and get the Firenet going with no manual configuration at all.

Using the Palo Alto firewall, Eth1/1 is the egress or WAN-facing interface, and Eth1/2 is the LAN-facing interface, placed in the same subnet as the dedicated interface on the Aviatrix transit gateway. The egress functionality of the Firenet is deferred to another blog post; here we will concentrate on the LAN-facing firewall inspection function. In this Firenet setup, the firewall acts in a one-armed capacity, using a single interface to receive traffic from the gateway and to send it back to the gateway after inspection.

Another important feature the Aviatrix controller can perform is to orchestrate the routing of the firewalls themselves, pointing traffic to the Aviatrix gateway. Let’s see that in action:

Once all the orchestration is done, we can be somewhat granular in terms of what we want to inspect on a per-Firenet basis.

In this example, we are creating a firewall inspection policy that ensures any packets originating from or destined to the Spoke 1 VPC are diverted for inspection. This is important because it sets a flag on traffic entering the Firenet that will be used to policy-route the traffic away from its original destination and to the firewalls, which we will see in action next.
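If you prefer automation over clicking through the controller UI, the same policy can be driven through the controller’s REST API. The sketch below, using Python requests, follows the controller’s general login-then-action pattern; the specific policy action name and its parameters are assumptions for illustration, so check the API documentation for the exact call your controller version expects:

```python
# Hedged sketch of adding a spoke to the Transit Firenet inspection policy via
# the Aviatrix controller REST API. The login flow (action + CID) follows the
# controller's general API pattern; the policy action name and parameter names
# below are assumptions used only to show the shape of the call.
import requests

controller = "https://controller.example.com/v1/api"   # hypothetical controller address

# Authenticate and grab a session CID (confirm the exact response shape in the API docs)
login = requests.post(controller, data={
    "action": "login",
    "username": "admin",
    "password": "********",
}, verify=False).json()                                 # verify=False only for a lab self-signed cert
cid = login.get("CID")

# Add the Spoke 1 gateway to the inspection policy (action name is an assumption)
requests.post(controller, data={
    "action": "add_spoke_to_transit_firenet_inspection",
    "CID": cid,
    "firenet_gateway_name": "transit-firenet-gw",       # hypothetical transit gateway name
    "spoke_gateway_name": "spoke1-gw",                  # hypothetical spoke gateway name
}, verify=False)
```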

Packet Flow Through the Firenet

Before we talk about the packet flow when the firewalls are involved, let’s see the pre-Firenet flow again.

We covered this particular packet walk in the previous post, so I won’t rehash much here. This flow shows a traceroute from a host in the Spoke 1 VPC in the us-west-2 (Oregon) region to a host in the Spoke 2 VPC in the us-west-1 (N. California) region.

Now, let’s look at the same traceroute with the addition of a Transit Firenet.

The biggest change is at the second hop, which is the Aviatrix transit gateway in us-west-2 (Oregon). Notice that there is a hop that doesn’t respond, followed by a new hop back on the same gateway. This is because the packet was hijacked when it came into the tunnel interface on the gateway and redirected to the firewall. Because the Spoke 1 VPC was selected in the inspection policy, the transit gateway marks packets sourced from or destined to Spoke 1 with an inspect flag. That flag is processed before the normal routing decision, and the packet is redirected to one of the firewalls based on a standard 5-tuple hashing algorithm. If you’re unfamiliar with this, each packet carries (among other things) five pieces of information that can be used for load balancing and other traffic selection. They are:

  • Source IP Address
  • Source Port
  • Destination IP Address
  • Destination Port
  • Transport protocol

An algorithm will examine all those pieces of data and produce a hash that directs the packet to one of the two firewalls. This helps for return traffic as well.
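Here’s a simplified sketch of the idea (not Aviatrix’s actual implementation): sorting the two endpoints before hashing makes the result symmetric, so the reply packet, with source and destination swapped, pins to the same firewall as the original flow.

```python
# Simplified illustration of 5-tuple flow pinning. This is not Aviatrix's actual
# hashing code, just the concept: the same flow always hashes to the same firewall,
# and sorting the endpoints makes the hash symmetric for return traffic.
import hashlib

FIREWALLS = ["firewall-1", "firewall-2"]

def pick_firewall(src_ip, src_port, dst_ip, dst_port, protocol):
    endpoints = sorted([f"{src_ip}:{src_port}", f"{dst_ip}:{dst_port}"])
    key = f"{endpoints[0]}|{endpoints[1]}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return FIREWALLS[digest[0] % len(FIREWALLS)]

# Forward and return packets of the same flow land on the same firewall:
print(pick_firewall("10.1.1.10", 34512, "10.2.1.20", 443, "tcp"))
print(pick_firewall("10.2.1.20", 443, "10.1.1.10", 34512, "tcp"))
```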

Here is the detailed interface diagram with a packet flow:

The packet flows in on the tunnel interface from Spoke 1, out the gateway’s dedicated firewall interface, and into the firewall’s LAN interface to be inspected against the firewall rules.

On the return, the packet leaves via the firewall’s LAN interface, comes back to the gateway, and is then forwarded out the tunnel interface toward the original destination, this time using the gateway routing table.

Let’s see the actual firewall log:

Notice that the source and destination IPs haven’t changed and that the ingress and egress interfaces for this packet flow are the same. How does the firewall know where to route this packet? As explained above, the controller orchestrated the firewall route table, setting the next-hop to the Aviatrix gateway in the subnet the interfaces share.

This traffic flow isn’t the same in every cloud. In Azure, for example, the Firenet workflow instantiates an Azure cloud native load balancer, which opens up options such as two- or three-tuple load balancing to the firewalls and custom health checks. We can cover that in a future post.

Health Checks? Why?

As with most load balancing endeavors, before forwarding traffic to a network appliance it’s helpful to know that the appliance is up and responding. Unlike a traditional data center, where a link might simply drop and automatically show as down, in the cloud we don’t have that luxury. By default, Aviatrix gateways ping the firewalls to make sure they are alive and able to receive traffic, which means the firewalls need to be configured to accept that traffic on their management plane. With load balancers (also orchestrated by the controller!) we have other options, such as HTTP/HTTPS health checks. However, as I learned once while troubleshooting, in Azure an HTTP/HTTPS health check actually expects an HTTP response code, which is not what the firewall will supply. In that case, it’s better to configure the load balancer health check for TCP 80/443 instead.
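As a rough illustration of that last point, a TCP-style health check only needs the firewall to accept a connection on the port, not to return a valid HTTP response code. A minimal sketch, with a hypothetical firewall IP:

```python
# Minimal sketch of a TCP health check: succeed if the firewall accepts a TCP
# connection on the port, with no expectation of an HTTP response code.
import socket

def tcp_health_check(ip, port=443, timeout=2.0):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True       # connection accepted: the firewall is reachable
    except OSError:
        return False          # refused or timed out: take it out of rotation

print(tcp_health_check("10.1.0.20"))   # hypothetical firewall interface IP
```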

Wrapping It Up

I hope this has been a helpful look at the Transit Firenet architecture with Aviatrix. While the packet flow seems complex, it is completely orchestrated from the controller and visible from both the controller and firewall. Next time, let’s turn on the Egress Firenet capability and allow these hosts in private subnets to access the Internet!

