GLBP (Gateway Load Balancing Protocol)
I will take a little break from Multicast 🙂 and this time talk about a First Hop Redundancy Protocol (FHRP): GLBP (Gateway Load Balancing Protocol).
I have decided to look deeper into GLBP because, while I am quite familiar with HSRP and VRRP, the other day I had to troubleshoot a network configured with GLBP, and although I can easily configure it, some of its mechanisms were still unclear to me. So let's clear them up!
Gateway Load Balancing Protocol (GLBP) is one of the First Hop Redundancy Protocols you can use when configuring a redundant first-hop gateway for your end hosts. GLBP is Cisco proprietary, and its big advantage compared to HSRP and VRRP is that you can load balance between the different gateways without changing the default gateway on the end hosts, as you would have to do with HSRP, for example, to achieve load balancing, which results in extra administrative burden.
GLBP is composed of two components:
- The Active Virtual Gateway (AVG) at the control plane
- The Active Virtual Forwarders (AVF) at the data plane
The AVG responds to ARP requests sent by end hosts to the virtual gateway IP address, and replies with different virtual MAC addresses that correspond to different active virtual forwarders (AVFs).
The AVFs are responsible for forwarding traffic destined to the virtual MAC address that the AVG has allocated to them. Both the AVG and AVF roles are redundant, i.e. if the physical router acting as the AVG or an AVF fails, another physical router will take over its role.
Let´s consider the following topology to demonstrate how GLBP works:
Platform/IOS: Cisco 2691/12.4(15)T11 Adv IP services
When GLBP is configured on an interface with a group number and a virtual IP address, the router starts to send GLBP hello packets, which are UDP packets with a source and destination port of 3222 and a destination multicast IP address of 224.0.0.102. The following GLBP debug outputs show the different states (both for the AVG and the AVF) R1 goes through when GLBP is enabled on F0/0 with the following configuration:
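The original configuration output is not shown here, so as a reference, here is a minimal sketch of what the R1 side likely looks like, assuming F0/0, the virtual IP 10.10.10.254 used later in this article, and priority 255 (the /24 mask and the preempt line are my assumptions):

```
interface FastEthernet0/0
 ip address 10.10.10.1 255.255.255.0
 ! GLBP group 1 with the shared virtual IP
 glbp 1 ip 10.10.10.254
 ! Max priority so R1 wins the AVG election
 glbp 1 priority 255
 ! Assumption: preempt so R1 reclaims the AVG role after recovery
 glbp 1 preempt
```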
We can observe from the output above that R1 is elected as the active AVG for group 1, as it is the only configured GLBP router right now (GLBP is not configured on R2 yet). R1 has also associated an AVF virtual MAC address with itself, as well as a forwarder number, which is 1. R1 is the active AVF for Fwd 1. AVFs act as backup for each other; we will see an example later on.
Below is a Wireshark capture taken when enabling GLBP on R1:
From the capture above we can see that GLBP uses two TLVs (type-length-value) in the hello packets sent to multicast 224.0.0.102 every 3 seconds by default. The first TLV contains information related to the AVG and the second TLV contains information related to the AVF.
In the first TLV we can see the GLBP priority of R1, which is 255. This priority determines which router becomes the active AVG. Higher is better, so in this case, as the maximum priority is 255, R1 will be elected the active AVG for group 1, taking into account that R2 is set to 200.
In the second TLV (related to the AVF) we can see that the virtual MAC address allocated by the active AVG (R1 in this case) to forwarder 1 (R1) is 00:07:b4:00:01:01.
We can also see the different timers used by GLBP:
- Hello: sent every 3 seconds by default
- Holdtime: 10 seconds by default
- Redirect time: the interval during which the AVG continues to redirect hosts to the old virtual forwarder MAC address. When the redirect time expires, the AVG stops using the old virtual forwarder MAC address in ARP replies.
- Secondary holdtime, or forwarder timeout ("Timeout" in the output): the interval during which the virtual forwarder remains valid. When the secondary holdtime expires, the virtual forwarder is removed from all gateways in the GLBP group.
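These timers can be tuned per group at the interface level. A hedged sketch, with the values shown being the defaults (command syntax from 12.4T-era IOS; verify on your platform):

```
interface FastEthernet0/0
 ! hello 3 s, holdtime 10 s (the defaults)
 glbp 1 timers 3 10
 ! redirect 600 s, forwarder timeout 14400 s (the defaults)
 glbp 1 timers redirect 600 14400
```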
If we look at the GLBP status on R1 we get the following output:
We can see the same data we saw previously in the debug output and the Wireshark capture. There is no standby AVG yet, as R2 has not been configured.
Let´s configure now GLBP on R2 with the following configuration:
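The configuration output is not reproduced here; a minimal sketch of what the R2 side likely looks like, with priority 200 as mentioned earlier (R2's interface address of 10.10.10.2 and the /24 mask are my assumptions):

```
interface FastEthernet0/0
 ! Assumption: R2's own address on the segment
 ip address 10.10.10.2 255.255.255.0
 ! Same group and virtual IP as R1
 glbp 1 ip 10.10.10.254
 ! Lower priority than R1 (255), so R2 becomes the standby AVG
 glbp 1 priority 200
```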
Let´s have a look at the debug glbp when enabling GLBP on R2:
While in the listening state, R2 learns that there is an active AVG, which is R1 (10.10.10.1). R2 also requests a VF (virtual forwarder) number and a virtual MAC from R1, and is allocated forwarder number 2 and the virtual MAC 0007.b400.0102; R2 then becomes active for this forwarder number, which is its own. R2 also becomes backup AVF for R1's forwarder by going into the listening state (time 37.011).
At the same time, if we look at the debug on R1, we get the following output:
So R1 is backup for R2's forwarder (Fwd 2) and registers R2 as standby for the AVG role.
Let´s have a look at the GLBP state on R2 once everything is converged:
So R2 is the standby AVG for GLBP group 1, as R1 is the active AVG. R2 has also been elected as forwarder number 2, for which it is active, while R1 is its backup. That means that if R2 fails, R1 will take over and be the active forwarder for both virtual MACs 0007.b400.0101 and 0007.b400.0102. The end hosts will not notice the difference, as the MAC address present in their cache will still be the same. We will see an example later on.
Both CLIENT1 and CLIENT2 are configured with a default gateway of 10.10.10.254, the GLBP virtual IP. Let's now see how GLBP achieves load balancing.
GLBP Load balancing
From the output above we can see that GLBP can achieve load balancing using one of three methods: round-robin, weighted, or host-dependent.
By default, GLBP achieves load balancing using a round-robin algorithm: the AVG responds to ARP requests from clients with a different AVF virtual MAC address each time, cycling through all active AVFs in a round-robin fashion.
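The method is selected per group at the interface level; a hedged sketch of the command (round-robin is the default, so it normally does not need to be configured explicitly):

```
interface FastEthernet0/0
 glbp 1 load-balancing round-robin
 ! or: glbp 1 load-balancing weighted
 ! or: glbp 1 load-balancing host-dependent
```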
The weighted option means the load is proportional to the weight assigned to each AVF. That means if we configure R1 with a weight of 10 and R2 with a weight of 20, GLBP will distribute the traffic load between R2 and R1 in a 2:1 ratio. Each GLBP router in the group advertises its weighting and assignment, and the AVG acts based on that value. If the weight equals 0, the AVF will not be used to forward traffic (which could be the case, for example, if you use tracking with IP SLA and decrement the weight down to 0).
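As a sketch of the 2:1 example just described (the 10/20 values simply mirror that example; verify the exact syntax on your IOS release):

```
! On R1
interface FastEthernet0/0
 glbp 1 load-balancing weighted
 glbp 1 weighting 10

! On R2
interface FastEthernet0/0
 glbp 1 load-balancing weighted
 glbp 1 weighting 20
```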
The host-dependent option uses the source MAC address of an end host to determine which AVF virtual MAC address the end host is directed to. This guarantees that an end host is returned the same virtual MAC address each time it sends an ARP request for the virtual IP. This load-balancing option is useful when using stateful NAT. However, it is not recommended for a small number of end hosts (fewer than 20).
Let's now demonstrate the load-balancing process, leaving the default mechanism (round-robin) on both R1 and R2. Client 1 and Client 2 will each ping a destination beyond the gateways (just for test purposes). Let's look at the debug arp output on Client 1 and Client 2:
From the output above we can see that Client 1 gets the virtual MAC of forwarder 1 (R1) and Client 2 gets the virtual MAC of forwarder 2 (R2), effectively achieving the desired round-robin load-balancing behavior.
We can also confirm this result by looking at the client-cache on the active AVG (R1):
In order to get the output above, "glbp client-cache maximum <x>" must be enabled at the interface level on the AVG.
We could go deeper into GLBP, but I think this is enough to get a basic understanding of it. Of course, you can use tracking with IP SLA as you would with HSRP and VRRP.
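As a closing pointer, here is a hedged sketch of what GLBP weight tracking with IP SLA might look like. The probe target 192.0.2.1 is a hypothetical upstream address, the weight/threshold values are illustrative, and older IOS releases use "track 1 rtr 1 reachability" instead of the "ip sla" form shown:

```
! ICMP probe toward a hypothetical upstream address
ip sla 1
 icmp-echo 192.0.2.1
ip sla schedule 1 life forever start-time now

! Track object tied to the probe's reachability
track 1 ip sla 1 reachability

! Decrement the GLBP weight when the probe fails; with "lower 15",
! a weight dropping to 20 - 10 = 10 falls below the threshold and
! the router gives up its AVF role
interface FastEthernet0/0
 glbp 1 weighting 20 lower 15
 glbp 1 weighting track 1 decrement 10
```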
Thanks for reading.