Basic Multicast part 1 – PIM Dense Mode
I would like to share some basic Multicast examples. As this topic is quite broad, I will split it across several posts. This first post covers PIM Dense Mode; the next posts will cover PIM Sparse Mode, Auto-RP and PIM BSR.
Let's consider the following topology:
Source: The multicast source 18.104.22.168 will be sending to multicast group 22.214.171.124, which is part of the administratively scoped address range (239.0.0.0/8) assigned by IANA for use in private multicast domains, much like the IP unicast ranges defined in RFC 1918.
IGP: The IGP used is EIGRP
Platform/IOS: Cisco 2691/12.4(15)T11 Adv IP services
Let's start with Dense Mode. PIM Dense Mode is based on the “push” or “implicit join” model, where multicast traffic is flooded throughout the entire multicast domain without the receivers needing to join the specific multicast group being flooded. So PIM Dense Mode is not really scalable and is only suitable for small multicast implementations. The reason for this is the flooding and the (S,G) state creation for every source/group pair.
Let's enable IP multicast routing and PIM Dense Mode on all the routers. As soon as we enable PIM on an interface, the router starts to send PIM hello packets to 224.0.0.13 (the all-PIM-routers group) in order to form adjacencies with the other PIM neighbors. This information will be used by PIM to know where to forward multicast packets. PIM packets are encapsulated inside IP packets with protocol number 103.
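A sketch of the configuration applied on each router follows; the interface names are assumptions for illustration, not taken from the lab outputs:

```
! Global: enable the multicast routing table
ip multicast-routing
!
! Per interface: enable PIM Dense Mode (starts PIM hellos on the link)
interface FastEthernet0/0
 ip pim dense-mode
!
interface Serial0/0.1
 ip pim dense-mode
```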
Now that PIM Dense Mode is enabled on all the routers, let's send multicast traffic from the source 188.8.131.52 to 184.108.40.206. Please note that the receiver hasn't joined the group 220.127.116.11 yet.
If we run debug ip mpacket (disable mroute-cache on the interface in order to see the debug output) on R3 just when the source starts sending multicast traffic, we can see the following output:
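Enabling that debugging looks like this; fast switching of multicast must be disabled on the interface first or the per-packet messages will not appear (interface name is an assumption):

```
! Force process switching of multicast so debug ip mpacket shows packets
interface FastEthernet0/1
 no ip mroute-cache
!
end
! Exec mode: turn on per-packet multicast debugging
debug ip mpacket
```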
We can see that R3 is forwarding multicast traffic received on its RPF interface both out its interface F0/0 connected to the 18.104.22.168/24 network and out its serial interface connected to the Frame-Relay network. These interfaces are used to forward multicast traffic because R3 has a PIM neighbor on each of them. Interface F0/1 is not used for forwarding because it is the incoming interface for the multicast traffic. It is the incoming interface because it has passed the RPF check: when R3 receives the first multicast packet from the source, it uses the unicast routing table to perform the RPF check on the source IP address (22.214.171.124), and because F0/1 is the interface used to route packets back toward 126.96.36.199, it is elected as the RPF interface.
The RPF interface is the interface with the lowest-cost path (based on AD/metric) to the IP address of the source, or, in the case of Sparse Mode (*,G) entries, to the RP. If multiple interfaces have the same cost, the one with the highest neighbor IP address is chosen as the tiebreaker. The RPF check is re-run every 5 seconds and the outgoing interface list is adjusted accordingly, so the RPF interface never appears in the outgoing interface list.
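The RPF selection logic just described can be sketched in Python. This is not IOS code, and the candidate routes below are hypothetical; it only illustrates the ordering rule (lowest AD, then lowest metric, then highest neighbor IP as tiebreaker):

```python
def select_rpf(candidates):
    """Pick the RPF interface from candidate unicast routes to the source.

    Each candidate is (admin_distance, metric, neighbor_ip, interface).
    Lowest (AD, metric) wins; if several routes tie, the highest
    neighbor IP address is the tiebreaker.
    """
    def ip_value(ip):
        # convert dotted-quad to an integer so IPs compare numerically
        return int.from_bytes(bytes(int(o) for o in ip.split(".")), "big")

    # sort key: prefer low AD, then low metric, then HIGH neighbor IP
    return min(candidates, key=lambda c: (c[0], c[1], -ip_value(c[2])))

# Two equal-cost EIGRP routes: the higher neighbor IP (10.0.0.2) wins.
routes = [
    (90, 156160, "10.0.0.1", "FastEthernet0/1"),
    (90, 156160, "10.0.0.2", "Serial0/0.1"),
]
print(select_rpf(routes)[3])  # -> Serial0/0.1
```

The same comparison (AD/metric, then highest IP) is what PIM later reuses in the assert process described below.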
A show ip rpf confirms that F0/1 is the RPF interface for this source:
The ? means that the DNS lookup has failed for 188.8.131.52.
Now let's look at the multicast routing table on R3:
From the above output we can tell that R3 is running in Dense Mode for the multicast group 184.108.40.206; the D flag in the (*,G) entry tells us that. Dense Mode actually uses only (S,G) entries to forward multicast traffic, so you may wonder why a (*,G) is also present in the multicast routing table. That is because Cisco's implementation automatically creates a parent (*,G) state whenever (S,G) state is created: whenever it is necessary to create an (S,G) entry, a corresponding parent (*,G) entry is automatically created first.
The outgoing interface list (OIL) of a (*,G) entry reflects the interfaces where other PIM-DM neighbors are connected or where directly connected members of the group exist. In this case R3 has PIM neighbors on F0/0, F0/1 and S0/0.1, which are respectively R4, R5 and R2. Furthermore, the OIL of the (S,G) child is populated from its parent (*,G) entry, which is why S0/0.1 and F0/0 are present in the OIL of (220.127.116.11,18.104.22.168). Please note that F0/1 does not appear in the OIL of the (S,G) because it is the RPF interface, and including it would defeat the purpose of the multicast loop-prevention mechanism.
The T flag on the (S,G) entry means that traffic is forwarded via the SPT, which is the shortest path to the source based on the unicast routing table. Also, the RPF neighbor is 0.0.0.0 because the source is directly connected to R3.
Please also note that both interfaces are in forwarding mode and the expire timer will never expire for these interfaces as long as the source is sending packets. Every time the router sees a multicast packet it resets the timer for the (S,G) entry. When the source stops sending multicast traffic, the timer for the entry will expire and the (S,G) entry will be deleted, which in turn starts the countdown timer for the (*,G) entry; after 3 minutes this entry will also be deleted.
If we look closely at a debug ip pim/debug ip packet on R3 when the source has just started to send multicast traffic we can see the following:
If you take a look at the diagram you will see that R5 and R3 are both connected to the 22.214.171.124/24 network. When R5 receives the first multicast packet from the source, it triggers the RPF process to determine whether the packet arrived on the correct interface, which it did, as the source is directly connected to both R5 and R3. Because the RPF check succeeds, R5 forwards the multicast packet out F0/0, which is present in the (S,G) OIL built from the parent (*,G) entry [see the explanation above for details]. So now you can guess what is happening: there will be duplicate multicast traffic on the 126.96.36.199/24 network, because both R3 and R5 are sending multicast traffic out F0/0, so each will see the other's multicast packets coming in on its multicast outgoing interface. To avoid this kind of situation PIM has a feature called PIM Assert, which triggers a PIM router to send a PIM assert message to all PIM routers (224.0.0.13), containing the source IP, the group and the path cost (AD/metric) to the source, every time it receives a multicast packet on one of its multicast forwarding interfaces.
In our example, because R5 receives a multicast packet on its multicast forwarding interface F0/0, it sends a PIM assert packet to R3. When R3 receives the PIM assert packet (see output above) it compares the metric announced by R5 to reach the source 184.108.40.206 to its own metric, and because the metrics are equal (both R3 and R5 are connected to the same network as the source) the highest IP address is used as a tiebreaker. In this case R5 has a higher IP (220.127.116.11) than R3, so it wins the right to forward traffic on this segment, while R3 must prune its interface (F0/0) connected to the 18.104.22.168/24 network.
As a result of losing the assert process, R3 sends a PIM prune packet for (22.214.171.124,126.96.36.199) to R5 (see output above). But what will happen to R4? It still needs to receive multicast traffic (if there are any receivers downstream that may be interested in this multicast flow) in order to forward it down to the Frame-Relay network. PIM uses a process called prune override to overcome the situation where R5 would otherwise prune its interface connected to the 188.8.131.52/24 network as a result of receiving the PIM prune packet from R3. Here is how it works.
When R3 sends the PIM prune packet to R5 as a result of losing the assert process, R4 also hears this message, as it is multicast to all PIM-enabled routers (224.0.0.13). Because the OIL of R4 for (220.127.116.11,18.104.22.168) is not NULL (R4 hasn't received a prune message on its multicast forwarding interface S0/0.1) and it has a connected PIM neighbor, R4 sends a PIM join for (22.214.171.124,126.96.36.199) to R5 in order to override the previous prune from R3.
To make prune override work, a 3-second delay timer is started when R5 and R4 receive the prune message from R3, which means that R5 will wait 3 seconds before pruning its interface F0/0. As it receives a PIM join from R4 within this interval, the interface is not pruned and the multicast traffic can continue to flow normally.
Now that multicast traffic is being flooded, let's have a look at the multicast routing table of R1, which is a leaf of the multicast tree:
So there is an issue here. R1 has no entry in its multicast table for the multicast group 188.8.131.52, although the source is sending multicast traffic to this group. Let's have a look at R2, which is the hub of the Frame-Relay network:
So R2 has an entry for this group, but the entry is pruned, which is one of the reasons why R1 is not receiving this multicast flow. But there is another issue here. From the output above, R2 reports that the RPF interface (incoming multicast interface) is S0/0, which is correct as R2 has only one interface. Next, R2 reports that its RPF neighbor is R3, which is based on the best path metric to reach 184.108.40.206:
So R2 is choosing to route through R3 due to the lower cumulative delay.
But let's come back to the previous output. As we saw before, the incoming multicast interface (the RPF interface) cannot be present in the OIL, and that is what we see in the output. As the incoming interface is S0/0, this interface cannot be present in the OIL of (220.127.116.11,18.104.22.168), which means that the OIL of R2 for this group is NULL, which in turn triggers PIM to send a PIM prune for this group toward R2's RPF neighbor, that is to say R3, as confirmed by the following output:
So R3 prunes the interface S0/0.1 from its OIL for (22.214.171.124,126.96.36.199) for 3 minutes. Then the flooding of multicast traffic starts again out this interface, R2 sends a PIM prune again, and the process goes on and on. That is the flood-and-prune behavior of PIM Dense Mode.
So, to conclude, R1 is not getting the multicast traffic because R2 is unable to forward multicast traffic back out the same interface it was received on, due to the RPF rule that says the RPF interface of a multicast forwarding entry must never appear in its outgoing interface list. The second issue is that PIM treats the Frame-Relay interface as a “broadcast” medium, that is to say it assumes that all multicast packets sent on this interface are heard by all members connected to the Frame-Relay network, which is not the case in reality unless the Frame-Relay network is fully meshed. So when R3 forwards a multicast packet, only R2 receives it on the Frame-Relay network, and R2 will never forward the packet back out the same interface. There are 2 solutions to solve this issue in Dense Mode:
- Configure point-to-point subinterfaces on the hub R2
- Use Multicast tunneling
If we were using Sparse Mode there would be a third solution, which would be to use PIM NBMA mode, an extension of the PIM protocol for WAN interfaces. In our case we cannot use this method, as PIM Dense Mode is based on the “implicit join” model and hence PIM NBMA mode would not be able to track the different PIM join messages.
In this example, let's configure a tunnel between R2 and R1 to solve the Frame-Relay issue, so we can send multicast traffic from the source all the way down to R1.
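A GRE tunnel between R1 and R2 might be configured along these lines; all addresses, the tunnel number and the source interface are assumptions for illustration:

```
! On R2 (mirror configuration on R1, with source/destination swapped)
interface Tunnel12
 ip address 10.12.12.2 255.255.255.0   ! hypothetical tunnel subnet
 ip pim dense-mode                     ! run PIM over the tunnel
 tunnel source Serial0/0
 tunnel destination 10.1.12.1          ! hypothetical R1 address
```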
So once the tunnel is configured between R1 and R2, let's try to send traffic again from the source. If you look at a debug ip mpacket on R1 we now get the following:
R1 can now receive the multicast traffic from R2 via the tunnel interface. The issue is that when R1 receives the first multicast packet via the tunnel interface, it does an RPF check on the source IP address using its routing table and finds that the RPF interface for 188.8.131.52 should be S0/0.1 and not the tunnel interface. This is because R1 has learned the 184.108.40.206/24 prefix via EIGRP and not via the tunnel interface, so the unicast shortest path does not match the multicast distribution tree. To solve this issue we have to use a static mroute statement to allow R1 to perform the RPF check correctly:
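The static mroute on R1 would look something like this; the source prefix and tunnel number are placeholders, since the real addresses depend on the topology:

```
! Tell the RPF check that sources in this prefix are reached via the tunnel
ip mroute 10.3.3.0 255.255.255.0 Tunnel12
```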
This command does not specify any forwarding rules. Instead, it creates an ordered table of entries used to look up RPF information. Note that the mroute table is ordered in the same way you enter the ip mroute commands, so you should enter the most specific mroutes first.
When a router finds RPF information in both the unicast routing table and the mroute table it prefers the mroute information since the AD is better (AD=0). In this case the AD for the unicast routing table is 90 (EIGRP).
Let's try to send multicast traffic from the source again and see what happens now that we have solved the RPF issue. Let's have a look at the multicast table of R1 while the source is sending traffic:
So now R1 has the correct entries for the source 220.127.116.11 in its multicast table. But there is still one issue 🙂 If you look closely at the OIL for the (S,G), R1 says that it will forward multicast traffic out its Frame-Relay interface (not the tunnel, which is the RPF interface), which will actually cause a loop in the network: upon receiving the multicast packet from R1, R2 performs an RPF check that succeeds, because the multipoint Frame-Relay interface is R2's RPF interface. While the source sends traffic we can actually see the loop between R1 and R2 with a debug ip mpacket:
The following diagram illustrates this multicast loop process:
So to solve this issue there are 2 solutions:
- Disable PIM on the Frame-Relay interface of R1 so only the tunnel interface will be used for multicast data plane and control plane traffic
- Configure point-to-point subinterfaces on the hub R2
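The first option, removing PIM from R1's Frame-Relay subinterface, is a one-liner under the interface (interface name is an assumption):

```
interface Serial0/0.1
 no ip pim dense-mode   ! tunnel interface now carries all multicast traffic
```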
In this example we will just disable PIM on the Frame-Relay interface of R1. So now let's have a look at the multicast table of R1 when the source is sending multicast traffic:
So now things look correct. As a result of the OIL being NULL, R1 sends a PIM prune to R2, as it doesn't need to receive the multicast traffic: it has no directly connected members or PIM-DM neighbors on this interface. Remember that the RPF interface (in this case Tunnel 12) must never appear in the OIL of an (S,G) entry. In the following output we can indeed see that R1 is sending a PIM prune*:
*Note that PIM actually has no separate PIM prune or PIM join message; there is only a single PIM Join/Prune message type. Each PIM Join/Prune message contains both a join list and a prune list, so a router can include multiple entries in either list and therefore effectively join and/or prune multiple sources and/or groups with a single message.
As our last step, let's make the receiver join the multicast group 18.104.22.168 while the source hasn't started to send traffic to this group yet. If we look at the multicast routing table of R1 we can see the following while the receiver is joining the group:
So R1 is receiving an unsolicited membership report (a “join”) for the multicast group 22.214.171.124. This membership report was multicast to 126.96.36.199 by the receiver (188.8.131.52) as a result of joining the multicast group 184.108.40.206. So now R1 has a (*,G) entry for the multicast group 220.127.116.11 in its multicast routing table. The creation of this entry was triggered by the IGMP membership report sent by the receiver:
As we are using Dense Mode, the incoming interface being NULL has no significance here. In Dense Mode only (S,G) entries are used to forward multicast traffic, and they build their OIL from the parent (*,G) entry, applying the rule that the RPF interface must not be present in the OIL of the (S,G) entry. Unlike in Sparse Mode, R1 will not send a PIM join message, because Dense Mode is based on the “implicit join” model, which means the other routers in the multicast domain don't have a (*,G) entry for 18.104.22.168, considering that the source hasn't started to send traffic yet.
Please note that every 125 seconds R1 will send an IGMP general query out interface F0/1 (where the receiver is connected) to the multicast address 224.0.0.1 (the all-hosts multicast group) to find out if any host is interested in receiving packets for any multicast group. After sending the query, R1 expects the receiver to reply with a solicited IGMP membership report for the multicast group 126.96.36.199.
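As an aside, if you are rebuilding this lab without a real host, a receiver can be emulated by making a router interface join the group itself, so that it answers IGMP queries. The interface name and group address below are illustrative (any 239.0.0.0/8 group would do):

```
interface FastEthernet0/1
 ip igmp join-group 239.1.1.1   ! illustrative group address
```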
Let's send some multicast traffic from the source now and see the result in R1's multicast routing table:
So upon receiving the first multicast packet from the source, R1 creates an (S,G) entry with the OIL based on its parent (*,G) entry for this group.
If the receiver were to leave the multicast group, it would send an IGMP leave (type 0x17) to the all-routers multicast group 224.0.0.2, which would trigger R1 to send an IGMP group-specific query to 184.108.40.206 (in our example) in order to check whether there are still members that want to receive the multicast stream for this group. If R1 doesn't get an IGMP membership report back, which is the case in our example as the receiver has left the multicast group, it will set the interface where the receiver is connected to NULL in its (S,G) OIL, which will trigger R1 to send a PIM prune to R2 via the tunnel interface. Let's see the debug output on R1 when the receiver leaves the multicast group:
So that is it! Basic multicast with PIM Dense Mode has been covered. In my next post I will talk about PIM Sparse Mode using the same diagram I have used in this post. Thanks for reading, and your comments are more than welcome.