
PfR (Cisco Performance Routing)

In this post I would like to explore PfR (Cisco Performance Routing), formerly called OER (Optimized Edge Routing). I will start with an introduction presenting what PfR is and what its goals are, and then demonstrate how to configure basic PfR. To illustrate the different configuration examples I will use the following topology:

 

Side note: As I am using IOS 12.4(15)T, PfR is configured with OER commands. In this IOS version PfR is not mature, so if you want to use PfR in production you should use IOS 15.0 or later. The keyword PfR was introduced in IOS release 15.1(2)T. For simplicity I will use the term PfR instead of OER even though I am configuring PfR in a version earlier than 15.1(2)T.

Platform/IOS: Cisco 2691/12.4(15)T11 Adv IP services.

IGP for Spoke site: EIGRP 10

IGP for Hub site: EIGRP 10

 Both sites use eBGP to peer with their respective ISP. R2 is running eBGP over the GRE tunnel path and iBGP with R1.

Scenario: The Hub location is hosting a Citrix server and a HTTP server. Citrix traffic should always be routed over the MPLS path via R1 while HTTP traffic should be routed over the GRE tunnel via R2. The Voice traffic between the two locations should be routed over the MPLS path via R1.

  •  If the voice traffic delay goes over 300 ms, voice traffic should be moved to the GRE tunnel path via R2
  • The HTTP traffic should only be routed through the GRE tunnel path as long as this path is up and running
  • If the MPLS link utilization goes over 50 % move only the Citrix traffic to the GRE tunnel path via R2

What is PfR?

PfR has been developed with the goal of adding intelligence to the traditional routing decisions made by the different routing protocols. Traditional routing uses static metrics to decide where to route traffic, a behavior that has many drawbacks in today's networks. Let's take the example of a router with two uplinks, each connected to one ISP: one link is the primary and the other the backup. In order to utilize the backup link (to avoid paying for an expensive, underutilized backup link) one could use static routing or a routing protocol to load share the traffic between both links. However this solution results in poor load sharing, as the traffic is not balanced based on the current utilization of the links.

It might be possible to achieve load balancing by combining EEM with IP SLA and PBR, but this solution is not flexible and results in a lot of administrative tasks.

Another drawback is application performance: for example, voice traffic should be routed over the link with the lowest measured delay while Internet traffic should be routed on another link. Again a combination of PBR, EEM and IP SLA could be used to solve this scenario, but again that results in complex manual administrative tasks.

Cisco PfR enhances traditional routing (BGP, EIGRP, OSPF, PBR) in order to select the best path based on different real-time advanced parameters such as link utilization, delay, packet loss, traffic load, jitter, MOS, availability and response time.

A PfR infrastructure consists of a minimum of one Master Controller (MC) and one Border Router (BR). The MC and BR can be configured on the same router. Additionally the MC doesn't need to be in the data plane path. Let's see what these two components are in more detail:

  •  MC: The MC is the router which decides how to route traffic based on the different measured metrics (jitter, RTT, availability, link utilization, traffic load, etc.) reported to it by the border router(s). The MC is generally configured on the same router as the BR for small locations, or as a standalone router for large locations: if there are many network prefixes to monitor, a platform with more CPU and memory is required. The MC communicates with the BR(s) over TCP.
  • BR: The BR(s) are in the data plane path and collect the different performance metrics that they report to the MC. These metrics are collected via Netflow and IP SLA probes, and this happens automatically, that is to say you don't need to configure these features. Based on these reported metrics, the MC decides where to route a certain type of traffic based on the different thresholds (delay, availability, jitter, etc.) bound to the different traffic classes (which can be prefix based or application based) and communicates this decision to the BR(s). The BR(s) then execute this decision by configuring a static route, changing some BGP attributes or configuring PBR (for application based traffic).

As PfR is a path selection technology there must be at least two external interfaces and one internal interface in the PfR infrastructure. So if there is only one BR, it must have 2 interfaces configured as external and 1 interface configured as internal. If there are 2 BRs, it is enough for each router to have 1 external interface and 1 internal interface.

The PfR process occurs in five phases:

  • Learning phase: PfR first learns which traffic classes (prefix based or application based) are going through the BR(s). The learning is done on the BRs thanks to Netflow. By default the learning phase is automatic for all traffic classes, but it can be configured manually for specific traffic classes, for example so that only Citrix traffic is learned.
  •  Measuring phase: In this phase the previously learned traffic classes are measured using Netflow and IP SLA. PfR measures the performance of a traffic class using passive (Netflow) and active (IP SLA) monitoring. By default, the utilization of the links is also measured. The BRs take the different measurements and transmit this information to the MC, which analyzes it and compares it against specific thresholds defined in policies. I will cover the different monitoring modes later in the different examples.
  •  Policy phase: The different measurements for each traffic class are compared against well-known or defined thresholds. If a measured metric does not conform to the thresholds, the traffic class is considered out of policy (OOP) and PfR will try to take a decision (enforce phase) to bring the traffic class or the link back to the in-policy state.
  • Enforce phase: This is where the MC makes routing change decisions for the link(s) and traffic class(es) that are OOP.
  • Verify phase: This phase controls that the changes taken in the enforce phase have been made and that the traffic class or link moves back to an in-policy state.

So let´s start with the configuration of PfR taking the above topology as a starting point.

PfR Configuration

All the routers in the topology are configured with a loopback interface in the format YY.YY.YY.YY/32, where Y is the router number. In addition these loopbacks are advertised in the respective IGPs.

  •  Configuring the MC

Side note: Although it would make sense to configure PfR at the spoke location in a production scenario, for the sake of simplicity I will skip the configuration of PfR at the spoke location and define the MPLS link as the preferred path by tuning BGP. This will result in asymmetric routing when using PfR at the Hub location, but as we are only interested in how PfR makes routing decisions at the Hub location it doesn't really matter if the return traffic from the spoke is asymmetric.

 So let´s configure the MC at the HUB location. In order to establish the adjacency between the MC and the BR(s) we need to use authentication, define the BR(s)´s IP address to peer with as well as define the different interfaces (external and internal) connected to each BR:
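A minimal MC configuration along these lines should do the job. This is a sketch from my lab notes, not the exact configuration: the interface names and the key-string CISCO are assumptions, so adapt them to your own topology (R1's loopback is 11.11.11.11 and R2's is 22.22.22.22):

```
! R3 - Master Controller (OER keywords on 12.4(15)T)
key chain PFR
 key 1
  key-string CISCO
!
oer master
 ! R1: BR facing the MPLS link
 border 11.11.11.11 key-chain PFR
  interface FastEthernet0/0 external
  interface FastEthernet0/1 internal
 ! R2: BR facing the GRE tunnel
 border 22.22.22.22 key-chain PFR
  interface Tunnel0 external
  interface FastEthernet0/0 internal
```

The key chain authenticates the MC-to-BR TCP sessions and must match on both sides.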

  •  Configuring the BR(s)

Let´s now configure both BRs (R1 and R2) at the HUB location:
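A BR configuration along these lines (shown for R1; R2 is identical apart from its own Loopback0; the key-string is again an assumption and must match the MC):

```
! R1 - Border Router; the MC is R3 (loopback 33.33.33.33)
key chain PFR
 key 1
  key-string CISCO
!
oer border
 ! Source the TCP session to the MC from the loopback
 local Loopback0
 master 33.33.33.33 key-chain PFR
```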

 Once the BRs are configured they will establish a TCP session towards the MC with a source IP corresponding to their loopbacks. The destination port for PfR TCP sessions toward the MC is 3949. Let´s see if the session between the BRs and the MC is correctly established:

So from the PfR output above, we can see that the session is correctly established between R1 and R3.

Side note: As I am using IOS 12.4(15)T the OER version is 2.1. In IOS 15.x the PfR version is 3.x. Versions must match between the MC and the BR (or the MC should have a higher version than the BR(s)). As I am running a version earlier than 15.1(2)T the keyword OER is used for the configuration, but for simplicity and to avoid confusion in this post I will only use the term PfR.

 Let´s check the status on the PfR master:

So both sessions to each BR are active. We can also see that the PfR master has obtained information about each BR external interface with their respective bandwidth. By default PfR sets the maximum bandwidth utilization of the external links to 75% of the actual link bandwidth. The load of all the external links is monitored constantly with Netflow to verify that they stay in an in-policy state, depending on how the different thresholds are configured.

Side note: The bandwidth is configured under the external interfaces of R1 and R2 with the bandwidth keyword. The bandwidth is configured really low as I am using GNS3, which has very limited traffic performance. The link with a bandwidth of 512 Kbit/s is the MPLS link and the link with 256 Kbit/s is the GRE tunnel link. Also, load-interval 30 has been configured on the external links to provide the most granular and accurate information to the MC.

 Right now the MPLS path is defined as the primary path so if we were to generate ICMP traffic from R6 to R8 we would see the real-time load on the MPLS external link on R1. Let´s try:

Let´s see the result on the MC:

So we can see that the load of the MPLS link is currently 13 Kbps which is caused by the ICMP traffic.

PfR has actually turned on Netflow automatically on R1 and R2 in order to measure the different flows going through these BRs. This can be verified with the following output on R1, for example:

We can see that Netflow is capturing the ICMP traffic going through R1. Also note that protocol 01 corresponds to ICMP. The size of each packet is 99 bytes, which almost corresponds to our ping packets (100 bytes).

At this stage PfR is not taking any action as, by default, the learning process is disabled. So although traffic (in this case ICMP) is going through the BR (R1), PfR will not take any action: this traffic class (12.12.12.0/24, based on the default Netflow aggregation of /24) must be learned before PfR can take any action and optimize routing. PfR has two possible learning modes, automatic and manual. To start we will see how automatic mode works, and later on in this post we will use manual mode in order to achieve the desired scenario described at the beginning of this post.

 PfR Traffic Class learning

So let's configure the MC to automatically learn this traffic class (TC), which is ICMP traffic in our example.

 Side note: Traffic class (TC) can be based on L3 info such as prefixes or L4 such as port numbers.
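Automatic learning is enabled under the learn sub-mode of the master. A minimal sketch (the monitor-period and periodic-interval values are just short lab timers I am assuming here, not requirements):

```
oer master
 learn
  ! Sort learned traffic classes by highest throughput (Netflow Top Talkers)
  throughput
  ! Learn for 1 minute, then immediately start the next learn cycle
  monitor-period 1
  periodic-interval 0
```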

In the above configuration we tell R3 to learn prefixes based on the Netflow Top Talkers that have the most throughput during the learn period.

Side note: It is possible to filter what the MC will learn by using a learn-list. So in our example, if we were to learn ICMP and apply specific parameters to this TC, we would use the following configuration:
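A learn-list along these lines should match the ICMP traffic towards the spoke prefix. The ACL and prefix-list names are hypothetical:

```
! The ACL matches only the application/protocol (ICMP here)
ip access-list extended ICMP-TC
 permit icmp any any
!
! The prefix-list matches the destination prefix (the spoke site)
ip prefix-list SPOKE-NET seq 10 permit 45.45.45.0/24
!
oer master
 learn
  list seq 10 refname LEARN-ICMP
   traffic-class access-list ICMP-TC filter SPOKE-NET
   throughput
```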

Side note: please note that an ACL cannot be used to filter on prefixes but only on applications. So in this case, for ICMP, we use the ACL to match the type of traffic (ICMP) and a prefix-list to match the destination prefix. We could have used the keyword application instead of an ACL. Also note that it is possible to disable global learning by using the command traffic-class filter in order to explicitly specify which TCs the MC should learn and therefore have more control over the learning process.

So now that we have configured TC learning, let's see if R3 learns the ICMP TC when we ping from R6 (Hub location) to R8 (Spoke location = 45.45.45.0/24).

Side note: iBGP is configured between R1 and R2, and eBGP is configured between R1 and the ISP as well as between R2 and R4 over the GRE tunnel. At the HUB location R1 is the preferred path toward 45.45.45.0/24 based on the BGP Local Preference set to 500, as we can see in the following output:

Actually we cannot see the Local Preference here but it is set to 500. The really important thing to understand in order for PfR to work properly is that you need a parent route; without one, PfR will not work. A parent route is a route that is equal to or less specific than the current route for the destination prefix of the traffic class being optimized. In other words it is an alternate path to the destination. This alternate path must be present in the BGP table, the EIGRP table or the routing table (through static routing, for example; a floating static route could be used) in order for PfR to make a path decision. If no parent route is present, PfR knows that it cannot make any routing decision to another link or BR, as otherwise this traffic would be black holed. BGP and static routes are supported as parent routes in all releases. From 12.4(24)T and later, any route in the RIB with an equal or less specific mask than the traffic class qualifies as a parent route. EIGRP support was introduced in Cisco IOS release 15.0(1)M.

So let´s ping:

So R3 is learning the prefix thanks to the Netflow instances automatically configured on the BRs. The TC is then stored in the PfR Monitored Traffic Classes (MTC) table, as we can see in the output below:

The learned TC will stay in the MTC table for a time defined by the command expire after <session|time[min]>.

So R3 has learned the prefix 45.45.45.0/24 and it knows that the current exit path is through R1. We can also see that the TC is in the "INPOLICY" state, which means that no thresholds have been exceeded yet, as well as the outbound throughput of the TC (49 Kbit/s). Finally we can observe the short-term and long-term active delay, measured by active probes (active delay values would not be present if monitor passive mode were used). Once PfR has learned a TC it must measure the performance of this TC in order to later compare the results to the configured policy.

 PfR Measuring Phase

PfR measures the performance of TCs using passive monitoring and active monitoring. Additionally, the utilization of the links on all exit paths is also measured.

  • Passive monitoring: based on Netflow. The BR(s) report to the MC the average delay of the flows, along with packet loss, reachability and outbound throughput for each TC identified by Netflow. In this mode non-TCP traffic flows are characterized by throughput only, while TCP flows are measured on delay, loss, reachability and throughput. This mode is therefore mostly useful for TCP flows, and TCP flows must be observed by the BRs to manage prefixes.
  •  Active monitoring: uses the IP SLA feature in order to generate test traffic for the specific TC and measure performance based on delay, reachability, jitter and MOS for any type of flow, not just TCP as with passive monitoring. IP SLA probes are only generated out the current exit path until it becomes OOP (Out Of Policy).
  • Monitor both: uses both passive and active modes and sends IP SLA probes out the current exit point only.
  • Fast monitoring: sends IP SLA probes out all the exit points, so the alternate paths are always known, allowing immediate use as required. This mode can reroute OOP traffic in less than 3 seconds.

The following output demonstrates that when R3 has learned a TC it starts probing all exit paths to collect measurements:

Once the measurements are done using one of the above modes, PfR uses the configured policy in order to compare these measurements to the defined thresholds.

 PfR Apply Policy phase

The thresholds can be defined globally or per TC. By default the following global policy is configured and applied to all TCs:
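From memory, the default global policy on this release looks roughly as follows; verify the exact values on your own router with show oer master policy (jitter and MOS policies are disabled by default):

```
backoff 300 3000 300
delay relative 50
holddown 300
periodic 0
probe frequency 56
mode route observe
mode monitor both
loss relative 10
unreachable relative 50
resolve delay priority 11 variance 20
resolve utilization priority 12 variance 20
```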

In passive mode PfR maintains short-term counters (statistics from the last 5 minutes) and long-term counters (statistics from the last 60 minutes). Only available for TCP.

In active mode PfR maintains short-term counters (statistics from the last 5 probe results) and long-term counters (statistics from the last 60 probe results), available for all types of flows.

The thresholds can be defined as absolute or relative:

  •  An absolute threshold is compared against the short-term value and is configured in the unit of the metric. For example, delay is expressed in msec.
  •  A relative threshold compares the short-term value against the long-term value and is configured in percent.

When configuring multiple policy criteria for a single TC or a set of TCs, it is possible to have multiple conflicting policies selecting different exit points. For example, for the Citrix TC one exit point may provide the best delay while another exit point has the lowest link utilization. So in order to determine which performance metric to look at first, a priority is associated with each of them.

By default PfR assigns the highest priority to delay, followed by utilization, but the unreachable policy always applies first and has a default priority of 0 which cannot be changed.

PfR selects a policy conforming exit by following these steps:

  •  Gather the traffic class measurements for all exits
  • Gather the link utilization for all external interfaces
  • Exits with no measurements are ignored
  • Measurements are applied using priority with variance
  • Exits within the variance are candidates

So for example, if we have the following PfR policy:
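A policy matching this example (loss as the first priority with a 30% variance, delay as the second priority with a 20% variance) would look like this:

```
oer master
 resolve loss priority 1 variance 30
 resolve delay priority 2 variance 20
```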

Say exit A has a delay of 100 ms and a loss of 1000 ppm, while exit B has a delay of 220 ms and a loss of 1300 ppm. When the PfR process tries to choose the best exit it first looks at loss, which has the highest priority. Exit A has a loss of 1000 ppm and exit B 1300 ppm; with an acceptable variance of 30%, any link with an absolute loss of up to 1300 ppm is a candidate, so exit B is also a candidate. Then the next priority policy is examined, which is delay, and exit A is elected the best path: its delay is 100 ms, and even with a variance of 20% the maximum tolerated delay is 120 ms, which is much lower than exit B's delay.

Once a policy for a TC is configured, PfR compares the metrics reported by the BR(s) with the different thresholds configured in the policy. Once these values are compared, the PfR process determines whether the TC or the link is in an out-of-policy state and possibly makes a routing decision. This is the next phase, which is called policy enforcement.

 Policy enforcement

By default PfR operates in observe mode, so if a TC or link is out of policy the MC will not take any action but only suggest what would happen. In order to enforce the configured policy, the mode should be changed to route mode with the following command:
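On the master:

```
oer master
 ! Replace the default "mode route observe" so the MC actually
 ! enforces its decisions on the BRs
 mode route control
```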

PfR Scenario

 So let´s try to configure the following scenario. The Hub location is hosting a Citrix server and HTTP server. Citrix traffic should always be routed over the MPLS path via R1 while HTTP traffic should be routed over the GRE tunnel via R2. The Voice traffic between the two locations should be routed over the MPLS path via R1.

  •  If the voice traffic delay goes over 300 ms, voice traffic should be moved to the GRE tunnel path via R2
  • The HTTP traffic should only be routed through the GRE tunnel path as long as this path is up and running
  • If the MPLS link utilization goes over 50 % move only the Citrix traffic to the GRE tunnel path via R2

Right now there are 3 flows generated from the HUB location towards the Spoke location, and the primary path is via the MPLS link through R1 (defined with BGP Local Preference):

So we are generating a total amount of traffic of 116 Kbit/s which is divided as follows:

  • CITRIX: TCP | 50 Kbit/s
  • VOICE: UDP | 16 Kbit/s
  •  HTTP: TCP | 50 Kbit/s

Side note: These values are not realistic; they are used only for the sake of this example.

 Defining the voice TC

 So let´s configure the master controller in order to learn the voice traffic:
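A learn-list for the voice TC could be sketched as follows. The ACL name, prefix-list name and exact RTP port range are assumptions (the captures later in this post show source port 16384):

```
! Match voice RTP traffic (UDP high ports) towards the spoke site
ip access-list extended VOICE-TC
 permit udp any any range 16384 32767
!
ip prefix-list SPOKE-NET seq 10 permit 45.45.45.0/24
!
oer master
 learn
  list seq 20 refname LEARN-VOICE
   traffic-class access-list VOICE-TC filter SPOKE-NET
   ! Monitor and control at host granularity
   aggregation-type prefix-length 32
   ! UDP flows can only be learned on throughput, not delay
   throughput
```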

In the configuration above we have configured a learn list for the voice traffic going to the spoke location. The aggregation prefix-length is 32 in order to monitor and control the traffic class at a granularity of /32. The throughput command sorts the traffic classes based on throughput at the end of the learn cycle. If we were to use delay it would only be possible to monitor TCP flows, as passive monitoring (Netflow) can only learn delay for TCP flows. In this case the traffic is UDP, so we have to use throughput.

 Side note: The ACL is only used to match the application type while the prefix-list is only used to match the destination.

So let's now configure a PfR policy to apply to this voice TC. The policy will reflect the scenario described above:
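The oer-map below is a sketch of such a policy; the map name is hypothetical and the timers reflect the values discussed later in this post (periodic 90, holddown 90, probe frequency 4). It assumes the learn-list LEARN-VOICE from above and that the external interfaces have been tagged with link-group MPLS (R1's external interface) and link-group GRE (R2's tunnel) under their border definitions on the MC:

```
oer-map PFR-MAP 10
 match oer learn list LEARN-VOICE
 ! Probe the current exit at all times; UDP cannot be measured passively
 set mode monitor active
 set mode route control
 ! OOP if round-trip delay exceeds 300 ms
 set delay threshold 300
 set resolve delay priority 1 variance 20
 set resolve loss priority 2 variance 10
 set periodic 90
 set holddown 90
 set probe frequency 4
 ! Prefer the MPLS path, fall back to the GRE tunnel
 set link-group MPLS fallback GRE
!
oer master
 ! Utilization is turned off for this scenario
 no resolve utilization
 ! Activate the map
 policy-rules PFR-MAP
```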

We saw the significance of most of these values earlier so I am not going over them again. But in the case of voice traffic we are using a monitor mode of active, which means that PfR will probe the current exit at all times. Utilization is off, delay has the first priority and loss the second priority. The delay threshold is configured as 300 msec. Note that the delay measured by PfR is the Round-Trip Time; for voice, a one-way delay higher than 150 ms decreases the quality of the call.

Side note: A monitor mode of both would not work in this case as we are monitoring UDP flows (voice). With mode monitor both, passive measurements are used to trigger the out-of-policy state. As passive measurements can only monitor performance metrics for TCP flows (apart from throughput, which is monitored for all types of flows: UDP, TCP, L3), the out-of-policy state would never be triggered for flows that are not TCP. So depending on the type of flows you are running PfR for, you should adapt the corresponding monitoring type.

Finally the link-group keyword tells PfR to use MPLS as the primary path and GRE as the secondary path for voice. So let's apply this policy to the R3 PfR process and see what happens:

So first R3 starts the learning process in order to learn the voice traffic via Netflow, based on throughput:

Then the MC learns the TC we have configured in the learn list. The TC is reported by BR R1, as this path is the preferred exit (BGP LP). The learning process will occur every 60 sec, as we have configured it to do so:

As we can see above, the IP protocol number is 17 (UDP) and the source port is 16384, which is used by the voice application. Once learned, the TC is stored in the PfR Monitored Traffic Classes (MTC) table.

As we are using active monitoring, the MC will probe the current exit path (through R1) every 4 seconds and will only probe the alternate path through R2 if the TC is out of policy. At the end of each probe cycle R3 gathers the performance metrics, as we can see below (short average delay, long average delay, etc.). The performance metrics are then compared with the policy configured for this TC.

If we were to check the TC table at this instant we would see that the TC is in policy, as the active short-term delay is below the threshold that we have configured (300 ms):

The delay is measured actively by the IP SLA ICMP echo probes as we can see in the show output below:

One important point we saw in the previous screenshot of the PfR TC table is that the protocol field is flagged as PBR, which means that PfR uses PBR in order to make routing changes for this TC.

Side note: If we were to use a TC based only on a prefix and not on an application (as is the case here, voice udp 16384), we would see that PfR uses BGP attributes to change the path for the TC. But in this case, as we use an application based TC, the only way PfR can change the traffic path for this TC is by using PBR. That is also the reason why all the BRs have to be directly connected, no more than one hop away from each other, as PBR uses the set ip next-hop keyword in the route-map. In IOS 15.0(1)M4 and later it is possible to configure PfR to always use PBR as the route control protocol. By default the control mechanisms are tried in the order BGP, EIGRP, STATIC and PBR, which can result in failure or ineffective control in DMVPN/MPLS environments.

So let´s see how PfR automatically and dynamically uses PBR on the BRs:

So in the above output, as R1 is the preferred path based on the policy we have configured (set link-group MPLS fallback GRE), PfR enforces this configuration by dynamically configuring PBR and telling R1 to route voice traffic (see the dynamic ACL) through the ISP MPLS router (next hop 1.1.1.2).

For R2, PfR tells the BR to route voice traffic (see the dynamic ACL) through R1 (12.12.12.1), which is, and must be, one hop away.

From now on, as we set the periodic timer to 90 sec, the TC is reevaluated every 90 sec even when it is in policy, and the MC tries to find an exit path. By default, the MC only looks for an alternate path when an OOP event occurs. So in the debug output below, the periodic timer reaches 0 (denoted by prefix timeout), the MC tries to find an exit path, and the exit path through R1 is chosen because unreachable equals zero (it is not possible to disable the unreachable policy parameter, and unreachable has the best priority so it is checked before delay in this case):

PfR constantly (every 4 seconds, as defined by the probe frequency) checks the delay and unreachable values reported by the IP SLA probes and compares them to the thresholds configured in the policy defined earlier.

Now let's see what happens when we increase the delay over 300 ms:

As we can see, as soon as the TC performance goes over 300 ms (here 320 ms) the TC is flagged as OOP for the path through BR R1 (11.11.11.11), and PfR starts to probe the alternate path through R2 in order to measure the performance metrics for this TC. In this case the delay is much lower through R2 (71 ms) and R2 is elected as the transmitting BR for the voice traffic. The TC is then placed in the holddown state for 90 sec, as we can see below:

During the holddown state, no routing change can be made for the TC.

After the holddown timer expires, the TC transitions to the in-policy state:

After 90 sec the exit path will be reevaluated, and if the path through R1 is below 300 ms the TC will be rerouted through the MPLS link as it is the preferred path (configured in the policy) for this TC. The N flag in the above output means that the value is not applicable: we are using active monitoring and the flow is UDP, and as long as the flow is not TCP these values (flagged with N) will not be available, since passive performance metrics can only be monitored for TCP flows with mode monitor passive. You may also be surprised that the throughput is flagged with N; that is because we are running explicit active mode, and throughput can only be monitored with mode monitor passive, which uses Netflow. Passive monitoring is enabled by default for all automatically learned prefixes, but in our case we force the mode to monitor active in the PfR map.

 That was quite interesting!

Side note: It is actually possible to disable automatic learning by configuring no throughput and no delay, and then configure the following under the PfR map:
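A manual traffic-class definition could be sketched as follows. The ACL name is hypothetical, and the jitter probe target-port and codec are illustrative values only:

```
ip access-list extended VOICE-MANUAL
 ! The destination must be explicit: "any any" would fail the PBR
 ! requirements and the prefix could not be controlled
 permit udp any host 9.9.9.9 range 16384 32767
!
oer-map PFR-MAP 10
 match traffic-class access-list VOICE-MANUAL
 ! Optionally define the probe type and target manually
 set active-probe jitter 9.9.9.9 target-port 16384 codec g729a
```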

In this case the ACL also matches on the destination; any any would not work, and the PfR process would complain about the PBR requirements not being met or about the prefix not being controllable. When creating this entry PfR will create an automatic IP SLA probe towards 9.9.9.9. It is also possible to manually configure which kind of probes we want to use as well as the target IP address.

 So let´s define now the PfR policy for HTTP traffic.

Defining the HTTP TC

Here we are just interested in the HTTP traffic always flowing through the GRE tunnel path; as a last alternative, if the GRE tunnel is down, the HTTP traffic should be moved to the MPLS link. We are not interested in probing as often as we do for voice, so the probe interval has been increased to 30 seconds instead of the 4 seconds used for voice.
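The HTTP TC could be defined along these lines (names are hypothetical; the learn-list and oer-map entries are added next to the voice ones from earlier):

```
ip access-list extended HTTP-TC
 permit tcp any 45.45.45.0 0.0.0.255 eq www
!
oer master
 learn
  list seq 30 refname LEARN-HTTP
   traffic-class access-list HTTP-TC
   aggregation-type prefix-length 32
   throughput
!
oer-map PFR-MAP 20
 match oer learn list LEARN-HTTP
 set mode monitor active
 set mode route control
 ! Probe only every 30 seconds; reachability of the tunnel is what matters
 set probe frequency 30
 ! Prefer the GRE tunnel, fall back to MPLS only if it is down
 set link-group GRE fallback MPLS
```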

Side note: In order to simulate HTTP traffic in GNS3 I am using a traffic generator which injects one-way HTTP traffic from the HUB location to the Spoke location, so there will not be any TCP 3-way handshake (SYN, SYN-ACK, ACK), for example. In production it would therefore make sense to use passive monitoring or monitor both in order to monitor TCP metrics with Netflow, such as delay (measured as the RTT between the transmission of a TCP segment and the receipt of the TCP ACK packet), packet loss (measured by tracking TCP sequence numbers and comparing them to the subsequent packets) and reachability (tracking TCP SYN messages that have been sent repeatedly without receiving a TCP ACK packet).

  So let´s check if the HTTP traffic is controlled correctly by PfR:

Sure enough, the MC has learned the TC for HTTP and has configured PBR on R1 and R2 in order to change the preferred global routing path from R1 to R2.

We can see that HTTP traffic is not flowing anymore through R1 as shown in the output above. But HTTP traffic is flowing now through R2 as we can see in the output below:

The MC is controlling the TC by using PBR and has created a dynamic route-map on both R1 and R2:

So R1 is routing the HTTP traffic through R2.

Defining the CITRIX TC

 Let ´s define the last policy which is for Citrix traffic:
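A sketch of the Citrix policy (names and interface identifiers are assumptions; 1494 is the Citrix ICA port; the 50% utilization trigger is set on R1's MPLS external interface under the master):

```
ip access-list extended CITRIX-TC
 permit tcp any 45.45.45.0 0.0.0.255 eq 1494
!
oer master
 border 11.11.11.11 key-chain PFR
  interface FastEthernet0/0 external
   ! Link goes OOP above 50% of the configured 512 Kbit/s bandwidth
   max-xmit-utilization percentage 50
 learn
  list seq 40 refname LEARN-CITRIX
   traffic-class access-list CITRIX-TC
   aggregation-type prefix-length 32
   throughput
!
oer-map PFR-MAP 30
 match oer learn list LEARN-CITRIX
 ! Fast mode probes all exits continuously for quick failover
 set mode monitor fast
 set mode route control
 set probe frequency 5
 set link-group MPLS fallback GRE
```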

Basically, what we want to test with the Citrix traffic is the utilization of the MPLS link: if it goes over 50% this traffic should be rerouted to the GRE tunnel path as fast as possible. Therefore we are using monitor fast mode in order to also probe the alternate path (GRE tunnel) at all times, every 5 seconds. In this way the MC always knows the performance metrics of the alternate path, even when the TC is in policy (utilization of the MPLS link below 50% of the total available bandwidth), so the traffic can be rerouted faster from one path to the other.

Let's test this configuration!

 First we can check that Citrix traffic is flowing through R1 (default path when PfR is not configured):

Perfect! So let´s check if the MC is learning this traffic and how it enforces the configured policy:

So R3 has learned the TC corresponding to the Citrix traffic and is routing it through R1 as dictated by the configured policy. Right now the TC is in policy and will only become out of policy if the utilization of the MPLS link goes over 50% (256 Kbps). We can also see that the voice traffic is present and being routed through the MPLS link, as configured in the PfR policy.

So let's increase the MPLS link utilization over 50% (256 Kbps):

As soon as we increase the load of the MPLS link over 256 Kbps (50%), the link becomes OOP for BR R1 and PfR moves the Citrix TC to the GRE tunnel, as we can see in the output below:

PfR is really great! It has moved the Citrix traffic to the GRE tunnel path but has left the voice traffic on the MPLS path. In a production network PfR can be a great tool to manage your applications by taking care of routing them to the right path in case of bad performance metrics.

So now let's conclude this post by joining the three TCs (voice, Citrix and HTTP) and checking if PfR enforces the configured policy correctly. Before enabling PfR we have 3 TCs all going through R1 (based on traditional routing):

Let´s enable PfR on R3 now:

Once the three prefixes have been learned PfR makes the right policy enforcement as we can see in the output below:

All three TCs are routed correctly based on the PfR policy enforcement. The route control used by R3 is PBR, as these TCs are application based (TCP and UDP).

 Thanks for reading

 /Laurent

  1. Mohamed
    December 18, 2012 at 10:11

    it is really good explaining with best order , arrangement and simple example

    thanks

  2. December 18, 2012 at 10:59

    Thanks Mohamed 😉
    /Laurent

  3. Pablo Lucena
    January 3, 2013 at 16:38

    Thanks for this great article. One question:

    In your testing, you used the prefix 45.45.45.0/24 as that is the prefix used at the spoke site. How were you able to use 9.9.9.9/32 as your voice traffic if you did not have a parent route for this prefix?

    In the beginning of your article you displayed the BGP table of one of the routers in the hub site, but it did not show any routing information for 9.9.9.9/32. When your PfR is learning the voice traffic, ( which is being filtered in the voice learn list with the ACL matching only voice UDP ports and a prefix-list matching 45.45.45.0/24 ) , how does it pick up the TC for 9.9.9.9/32?

    Thanks!

    • February 5, 2013 at 12:10

      Hi Pablo, sorry for the late reply. I will have a look at it and come back to you as soon as possible. Regards, Laurent

  4. Mark DeLong
    May 27, 2013 at 19:31

    Thanks, Laurent! Great Post!!

    • May 27, 2013 at 22:39

      Thanks Mark 😉 Hope you are doing good…Let me know.

  5. Naveen
    November 16, 2013 at 12:38

    Great explanation not found in other blogs..

    I have just one question, can a default route be considered as a parent route for a particular prefix?

  6. Bahman
    July 23, 2015 at 01:34

    Does MC need to have BGP session or not?

  7. Hms
    September 30, 2015 at 13:30

    Best PfR explanations … Thank you very much.

  8. Sumo Rider
    January 30, 2016 at 17:12

    Thank you for the post. I have been reading Cisco iWAN and PfR documents and then found your post. This is exactly what i needed.
