QoS Summary (English)
By elab
QoS stands for Quality of Service. The term is being used by Cisco to refer to
IP-based features that allow specification and delivery of services much like the
Quality of Service features in ATM.
When you get right down to it, there isn't all that much a router can do to control
traffic, since it is not the originator of most of the traffic. The router can drop traffic
-- although we'd prefer it didn't do so. It can put some queued frames out an
interface before others. It can be selective about accepting traffic -- another form
of dropped traffic. And, with TCP, it can selectively drop the occasional packet as
an indirect signal to slow down. With cooperative hosts, the router can try to
accept reservations and hold bandwidth for applications that need it.
Acronyms, features or topics that fall under QoS include: Priority Queuing (PQ),
Custom Queuing (CQ), Fair and Weighted Fair Queuing (WFQ), Random Early
Detection (RED) and its Distributed, Weighted variant (DWRED), Resource
Reservation Protocol (RSVP), Traffic Shaping, Committed Access Rate (CAR),
Policy Routing, QoS Policy Propagation via BGP (QPPB), NetFlow, and Cisco
Express Forwarding (CEF).
The first set of functions relate to queuing, to managing congestion. They are
sometimes referred to as "Fancy Queuing". These include Priority Queuing (PQ),
Custom Queuing (CQ), and Weighted Fair Queuing (WFQ). These features allow
the router to control which frames are sent first on an interface. If there are too
many frames (congestion), then we are, in effect, also selecting which frames get
dropped.
These functions, Priority Queuing (PQ), Custom Queuing (CQ), and Weighted
Fair Queuing (WFQ), are the subject of this article. They are also discussed as
one small part of the Cisco certified ACRC course.
The next feature on the list, Weighted Random Early Detection, is intended to
prevent or reduce congestion -- trying to reduce problems, rather than mitigating
the consequences once the problem has already occurred.
RSVP allows for applications to reserve bandwidth, primarily WAN bandwidth. It
is designed to work with WFQ or Traffic Shaping on the outbound interface.
Traffic Shaping and Committed Access Rate (CAR) control traffic. It seems like a
better acronym could have been chosen: CAR controls traffic? Anyway, CAR
controls the rate of inbound traffic, allowing specification of what to do with traffic
that is coming in faster than policy. Traffic Shaping paces outbound traffic,
controlling use of bandwidth. Traffic Shaping also allows matching the speed of
the output access link across a WAN cloud, so that a faster central hub access
circuit doesn't cause carrier or remote link congestion.
CAR, Policy Routing, and QPPB can also set the IP precedence bits (TOS bits),
which are used by some of the above mechanisms to favor some traffic over
other traffic.
Finally, NetFlow and CEF are switching techniques used in high-performance
routers. They assist in providing QoS by delivering packets efficiently and by
providing statistics on the traffic, statistics you can use to manage traffic flow,
size trunks, and plan network designs.
__________________
Priority Queuing
About Priority Queuing
Priority Queuing is the oldest of the queuing techniques. Traffic is prioritized with
a priority-list, applied to an interface with a priority-group command. The traffic
goes into one of four queues: high, medium, normal, or low priority. When the
router is ready to transmit a packet, it searches the high queue for a packet. If
there is one, it gets sent. If not, the medium queue is checked. If there is a packet,
it is sent. If not, the normal, and finally the low priority queues are checked. For
the next packet, the process repeats. If there is enough traffic in the high queue,
the other queues may get starved: they never get serviced.
You can regard Priority Queuing as being drastic. It says that the high priority
traffic must go out the interface at all costs, and any other traffic can be dropped.
It is generally intended for use on low bandwidth links.
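The search described above can be sketched in a few lines. This is an illustrative Python model, not IOS code, and the packet names are made up:

```python
from collections import deque

# Illustrative sketch (not IOS code): strict priority dequeue.
# Queues are searched high -> medium -> normal -> low for every
# packet transmitted, so a busy high queue can starve the others.

QUEUES = ["high", "medium", "normal", "low"]

def dequeue(queues):
    """Return the next packet to transmit, or None if all queues are empty."""
    for name in QUEUES:
        if queues[name]:
            return queues[name].popleft()
    return None

queues = {name: deque() for name in QUEUES}
queues["low"].append("bulk-1")
queues["high"].append("dlsw-1")
queues["high"].append("telnet-1")

# High-priority packets drain completely before anything else is sent.
order = [dequeue(queues) for _ in range(3)]
print(order)  # ['dlsw-1', 'telnet-1', 'bulk-1']
```

Note that the low-queue packet only goes out once the high queue is empty; with a steady high-priority load it would never be sent at all, which is exactly the starvation risk described above.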
Configuring Priority Queuing
To assign traffic meeting certain characteristics to a queue (high, medium, normal,
or low), use one of the following commands:
priority-list list-number protocol protocol-name {high | medium | normal | low}
queue-keyword keyword-value
priority-list list-number interface interface-type interface-number {high | medium |
normal | low}
The first of these takes a protocol, like ip, ipx, appletalk, rsrb, dlsw, etc., to
classify traffic. The queue-keyword can be one of: fragments, gt, lt, list, tcp, and
udp. The keyword-value specifies the port for tcp or udp, or the size for gt
(greater than) and lt (less than). The word list allows you to specify an access list
characterizing the traffic. And fragments means just that, IP fragments (which
should probably get expedited handling, so as to not have to retransmit all the
fragments again if one is lost).
The second command above is similar, but classifies traffic based on the
interface it arrived on.
The list-number is any number in the range 1-16. All statements in one policy use
the same number.
To change the default queue for all other traffic:
priority-list list-number default {high | medium | normal | low}
To change the queue sizes from the defaults 20, 40, 60, 80 (don't go overboard
on this if you see output drops, you may make things worse):
priority-list list-number queue-limit high-limit medium-limit normal-limit low-limit
To apply the priority queueing policy for outbound packets on an interface:
interface ...
priority-group list-number
Relevant EXEC Commands
show queueing priority
Sample Configuration
The following configuration sets up a priority list where DLSw traffic goes into the
high priority queue, as does Telnet traffic. The remaining IP traffic that matches
access list 101 goes to the medium queue, and anything else goes in the low
queue. (Standard joke: you've planned to send your boss's traffic into the low
queue, to make sure the congestion gets noticed.) You've mildly upped the
default queue sizes. And this policy is in effect for packets being sent out serial 0.
priority-list 1 protocol dlsw high
priority-list 1 protocol ip high tcp 23
priority-list 1 protocol ip medium list 101
priority-list 1 default low
priority-list 1 queue-limit 30 60 90 120
interface serial 0
priority-group 1
__________________
Custom Queuing
About Custom Queuing
Custom Queuing uses 17 queues to divide up bandwidth on an interface. Queue
0, the system queue, is always serviced first. It is used for keepalives and other
critical interface traffic. The remaining traffic can be assigned to queues 1 through
16. These queues are serviced in round-robin fashion.
Here's how it works. Packets are sent from each queue in turn. As each packet is
sent, a byte counter is incremented. When the byte counter exceeds the default
or configured threshold for the queue, transmission moves on to the next queue.
The byte count total for the queue that just finished has the threshold value
subtracted from it, so that it starts its next turn penalized by the number of bytes
that it went over its quota. This provides additional fairness to the mechanism.
If you think about it, you can't send half of a packet. That's why this mechanism
might well exceed quota on any given round of transmission from a queue. But on
the next round, the queue is penalized for taking more than its fair share, so in
the long run it averages out.
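The byte-counter mechanism can be sketched as follows. This is an illustrative Python model, not IOS code; the packet sizes and threshold are made up for the demonstration:

```python
from collections import deque

# Illustrative sketch (not IOS code) of the Custom Queuing byte counter.
# Whole packets are sent until the counter reaches the threshold; the
# threshold is then subtracted, so any overshoot carries into the next
# round as a penalty, and the long-run average honors the quota.

def service_round(queues, thresholds, counters):
    """One round-robin pass over all queues; returns packets sent as (queue, size)."""
    sent = []
    for i, (q, threshold) in enumerate(zip(queues, thresholds)):
        while q and counters[i] < threshold:
            size = q.popleft()
            counters[i] += size
            sent.append((i, size))
        counters[i] = max(0, counters[i] - threshold)
    return sent

# One queue of 1000-byte packets against a 1500-byte threshold: rounds
# alternate 2, 1, 2 packets, averaging 1500 bytes per round.
queues = [deque([1000] * 5)]
counters = [0]
rounds = [service_round(queues, [1500], counters) for _ in range(3)]
print([len(r) for r in rounds])  # [2, 1, 2]
```

The first round overshoots (2000 bytes against a 1500-byte quota), so the second round is cut short, which is the averaging-out behavior described above.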
Custom Queuing is aimed at fair division of bandwidth. For instance, you might
set it up to allow IP roughly 50% of a link, DLSw 25%, and IPX 25%. When
congestion is taking place, the limits are enforced. If there is unused bandwidth,
say from IPX, it is divided equally among any excess traffic from the other classes
of traffic, IP and DLSw. To implement this, you would tweak the thresholds for the
relevant queues, say making them 3000, 1500, and 1500 bytes respectively.
Some fine tuning based on average packet size can make this more precise.
Configuring Custom Queuing
The commands for CQ are very similar to those for PQ. The difference is that you
put the traffic into queues numbered 1-16, rather than named high, medium,
normal, low. Hence we build our CQ policy with:
queue-list list-number protocol protocol-name queue-number queue-keyword
keyword-value
queue-list list-number interface interface-type interface-number queue-number
You can specify the default queue, the one that receives any unmatched traffic,
with the command:
queue-list list-number default queue-number
(The default default queue is 1.)
You can specify the number of packets allowed in any queue with the command:
queue-list list-number queue queue-number limit limit-number
The threshold for a queue can be changed with the following command:
queue-list list-number queue queue-number byte-count byte-count-number
The default threshold for the queues is 1500 bytes.
And the CQ policy is applied to outbound frames on an interface with:
interface ...
custom-queue-list list-number
Relevant EXEC Commands
show queueing custom
show interface type number
Sample Configuration
The following configuration is similar to that for PQ, except that we're not making
DLSw and Telnet traffic top priority any more. Instead, we're using four (4)
queues (since default traffic goes to queue 10). The thresholds are 1500, 1500,
3000, and 1500, so Telnet in queue 3 gets 3000/7500 = 40% of the bandwidth,
and the other queues get 20% each.
queue-list 1 protocol dlsw 1
queue-list 1 protocol ip 2 list 101
queue-list 1 protocol ip 3 tcp 23
queue-list 1 default 10
queue-list 1 queue 3 limit 40
queue-list 1 queue 3 byte-count 3000
interface serial 0
custom-queue-list 1
__________________
Weighted Fair Queuing (WFQ)
About WFQ
Weighted fair queueing automatically sorts among individual traffic
streams without requiring that you first define access lists. It can manage one-way
or two-way streams of data: traffic between pairs of applications, or voice and
video. It automatically smooths out bursts to reduce average latency.
In WFQ, packets are sorted in weighted order of arrival of the last bit, to
determine transmission order. Using order of arrival of last bit emulates the
behavior of Time Division Multiplexing (TDM), hence "fair". In Frame Relay,
FECN, BECN, and DE bits will cause the weights to be automatically adjusted,
slowing flows if needed.
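The last-bit ordering can be approximated with per-flow finish times. The following is a simplified Python sketch, not Cisco's actual implementation (which among other things derives weights from IP precedence); the flow names and sizes are made up:

```python
import heapq

# Illustrative sketch (not IOS code): WFQ orders packets by a weighted
# "finish time" that emulates bit-by-bit round robin (TDM-like fairness).
# A smaller weight favors the flow; the finish time approximates when the
# packet's last bit would leave under fair bit interleaving.

def wfq_order(packets):
    """packets: list of (flow, size_bytes, weight). Returns flow names in send order."""
    last_finish = {}
    heap = []
    for seq, (flow, size, weight) in enumerate(packets):
        finish = last_finish.get(flow, 0) + size * weight
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, seq, flow))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# A small Telnet-like flow interleaves ahead of a large FTP-like burst,
# even though the FTP packets arrived first.
order = wfq_order([
    ("ftp", 1500, 1), ("ftp", 1500, 1),
    ("telnet", 64, 1), ("telnet", 64, 1),
])
print(order)  # ['telnet', 'telnet', 'ftp', 'ftp']
```

This is how low-bandwidth conversations end up favored: their packets accumulate finish times slowly, so they keep sorting ahead of the high-bandwidth flows.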
From one point of view, the effect of this is that WFQ classifies sessions as high-
or low-bandwidth. Low-bandwidth traffic gets priority, with high-bandwidth traffic
sharing what's left over. If the traffic is bursting ahead of the rate at which the
interface can transmit, new high-bandwidth traffic gets discarded after the
configured or default congestive-messages threshold has been reached.
However, low-bandwidth conversations, which include control-message
conversations, continue to enqueue data.
Weighted fair queuing uses some parts of the protocol header to determine flow
identity. For IP, WFQ uses the Type of Service (TOS) bits, the IP protocol code,
the source and destination IP addresses (if not a fragment), and the source and
destination TCP or UDP ports.
Distributed WFQ is available in IOS 12.0 on high-end interfaces and router
models.
Configuring Fair Queuing (FQ)
fair-queue [congestive-discard-threshold [dynamic-queues [reservable-queues]]]
no fair-queue
congestive-discard-threshold: Number of messages allowed in each queue in the
range 1 to 4096, default 64.
dynamic-queues: Number of dynamic queues used for best-effort conversations.
Values are 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096. The default is 256.
reservable-queues: Number of reservable queues used for reserved (RSVP)
conversations, range 0 to 1000. The default is 0. If RSVP is enabled on a WFQ
interface with reservable-queues set to 0, the reservable queue size is
automatically set to bandwidth divided by 32 Kbps. Specify a reservable-queue
size other than 0 if you wish different behavior.
Fair queuing is enabled by default for physical interfaces whose bandwidth is less
than or equal to 2.048 Mbps, except for Link Access Procedure, Balanced (LAPB),
X.25, or Synchronous Data Link Control (SDLC) encapsulations. Enabling
custom queuing or priority queuing on an interface disables fair queueing. Fair
queuing is automatically disabled if you enable autonomous or SSE switching on
a 7000 model. Fair queueing is now enabled automatically on multilink PPP
interfaces. WFQ is not supported on tunnels.
Configuring Weighted Fair Queuing (WFQ)
When congestion occurs, the weight for a class or group specifies the percentage
of the output bandwidth allocated to that group. A weight of 60 gives 60% of the
bandwidth during congestion periods.
Start by specifying what type of fair queuing is in effect on an interface:
[no] fair-queue [ tos | qos-group ]
If you omit tos and qos-group, you get flow-based WFQ. Otherwise you get TOS
(precedence)-based or QoS-group based WFQ on the interface. You then set the
total number of buffered packets on the interface. Below this limit, packets will not
be dropped. Default is based on bandwidth and memory space available.
fair-queue aggregate-limit <aggregate-limit>
You also specify the limit for each queue. Default is half the aggregate limit.
fair-queue individual-limit <individual-limit>
The documentation suggests you not alter the queue limits without a good reason.
To specify the depth of queue for a class of traffic:
fair-queue {tos <0-7> | qos-group <0-99> } limit <queue-limit>
Finally, to specify weight (percentage of the link) for a class of traffic:
fair-queue {tos <0-7> | qos-group <0-99> } weight <weight>
The percentages on an interface must add up to no more than 99 (percent).
Relevant EXEC Commands
show interface [interface] fair-queue
show queueing fair
Sample Configuration
Fair Queuing
interface serial 0
fair-queue 64 256 0
This restores the defaults on a T1 serial link.
Weighted Fair Queuing - QoS Group based
The following configuration sets up two QoS groups, 2 and 6, corresponding to
precedences 2 and 6. It then specifies WFQ in terms of those two QoS groups.
interface Hssi0/0/0
ip address 188.1.3.70 255.255.255.0
rate-limit output access-group rate-limit 6 155000000 2000000 8000000 conform-action set-qos-transmit 6 exceed-action drop
rate-limit output access-group rate-limit 2 155000000 2000000 8000000 conform-action set-qos-transmit 2 exceed-action drop
fair-queue qos-group
fair-queue qos-group 2 weight 10
fair-queue qos-group 2 limit 27
fair-queue qos-group 6 weight 30
fair-queue qos-group 6 limit 27
access-list rate-limit 2 2
access-list rate-limit 6 6
Weighted Fair Queuing - Precedence (TOS) based
The following configuration directly specifies WFQ based on precedences 1, 2,
and 3:
interface Hssi0/0/0
ip address 188.1.3.70 255.255.255.0
fair-queue tos
fair-queue tos 1 weight 20
fair-queue tos 1 limit 27
fair-queue tos 2 weight 30
fair-queue tos 2 limit 27
fair-queue tos 3 weight 40
fair-queue tos 3 limit 27
__________________
Random Early Detection (RED)
About Random Early Detection (RED)
Random Early Detection (RED) is a high-speed congestion avoidance
mechanism. It is not intended as a congestion management mechanism, the way
the queuing techniques (PQ, CQ, WFQ) are. It is also more appropriate for
long-haul trunks with many traffic flows, e.g. trans-oceanic links, rather than
campus networks.
When enabled, RED responds to congestion by dropping packets at the selected
rate. This is recommended only for TCP/IP networks with mostly TCP traffic. The
drops are intended to cause TCP to back off its transmission rate.
TCP normally adapts its transmission rate to the rate the network can support.
Each TCP flow repeats a cycle of ramping up to approximately the available
bandwidth, then slowing to either near zero or near half the bandwidth,
depending on the implementation. Thus a typical TCP flow may average between
1/2 and 3/4 of the available bandwidth, in the absence of any other traffic.
Multiple TCP flows tend to become synchronized, speeding up and slowing down
in synchronization. This behavior is sometimes called "porpoising", because the
flows surface and dive in unison, like a pod of porpoises. When congestion
occurs, all TCP sessions normally get slowed down simultaneously, resulting in
periods where link capacity is underutilized. By randomly slowing one TCP
session, the others benefit, resulting in better goodput.
Note that dropping packets does not work with most other protocols, including
AppleTalk and Novell.
When RSVP is also configured, packets from other flows are dropped before
those from RSVP flows, when possible. We'll look at RSVP in a later article.
Weighted RED (WRED) allows you to specify a RED policy in combination with IP
precedence, so that different types of packets are dropped at different rates and
levels of congestion. You can set it so precedence is ignored, or you can set it so
that lower precedence packets are more likely to be dropped. WRED is an IOS
11.1 CC or 12.0 feature.
Distributed Weighted RED (DWRED) is available in IOS 12.0 for hardware that
supports it. The Distributed WRED (DWRED) feature uses the VIP rather than the
RSP to perform the queuing. It requires a VIP-equipped Cisco 7500 series router,
or a Cisco 7000 series router with RSP7000.
Configuring Random Early Detection (RED)
The default is for RED to be disabled on an interface. RED is only useful on
interfaces where most of the traffic is TCP. Random early detection cannot be
configured on an interface already configured with custom, priority, or fair
queueing. To enable RED on an interface, configure:
random-detect
You may also configure
random-detect exponential-weighting-constant constant
Here constant is a number in the range 1 to 16 used to determine the rate that
packets are dropped when
congestion occurs. The default is 10. The number is an exponent used in the
exponential decay rate for the weighted queue size calculation used in RED. It is
suggested that you change the default with caution. A big value means the queue
size measurement changes slowly, making RED less responsive. The formula
used for tracking queue size is:
average = (old_average * (1-1/2^n)) + (current_queue_size * 1/2^n)
where n is the exponential weighting constant.
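A quick numeric check of the formula above (illustrative Python, with a made-up queue size of 1000):

```python
# Illustrative check of the averaging formula above: with n = 10 the
# average moves only 1/1024 of the way toward the current queue size
# per update, so RED reacts to sustained trends, not momentary bursts.

def red_average(old_average, current_queue_size, n):
    return old_average * (1 - 1 / 2**n) + current_queue_size * (1 / 2**n)

avg = 0.0
for _ in range(100):
    avg = red_average(avg, 1000, 10)  # a sudden, sustained burst of 1000
print(round(avg, 1))  # still well below 1000 after 100 updates (roughly 93)
```

This is why a large exponential weighting constant makes RED less responsive: even after 100 updates, the tracked average has covered less than a tenth of the distance to the real queue depth.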
To configure WRED on an interface, configure:
random-detect precedence <0..7> <min-thresh> <max-thresh> <mark-probability-denom>
In this command, precedence refers to IP precedence, number 0 to 7. And
min-thresh is the minimum threshold in number of packets, from 1 to 4096. When
the average queue length reaches this number, RED begins to drop packets with
the specified IP precedence. The number max-thresh is the maximum threshold
in number of packets, from 1 to 4096. When the average queue length exceeds
this number, WRED drops all packets with the specified IP precedence. Finally,
mark-prob-denom is the denominator for the fraction of packets dropped when
the average queue depth is at max-threshold, in the range 1 to 65536; the default
is 10. If the denominator is 512, one out of every 512 packets is dropped when
the average queue is at the max-threshold.
The per-precedence min-threshold defaults are 9/18, 10/18, ... 16/18 of the
max-threshold size, for precedences 0 through 7 respectively. The max-threshold
is determined based on interface speed and output buffering capacity.
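Putting the thresholds together, the drop decision for one precedence follows a linear ramp. Below is an illustrative Python sketch of that ramp (not IOS code), using the numbers from the precedence-0 line of the WRED sample configuration later in this section (min 540, max 1080, denominator 10):

```python
# Illustrative sketch (not IOS code) of the WRED drop decision for one
# precedence. Below min-threshold nothing is dropped; above max-threshold
# everything is; in between, the drop probability ramps linearly up to
# 1/mark-probability-denominator at the max-threshold.

def wred_drop_probability(avg_qlen, min_th, max_th, mark_prob_denom):
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen > max_th:
        return 1.0
    return (avg_qlen - min_th) / (max_th - min_th) / mark_prob_denom

print(wred_drop_probability(500, 540, 1080, 10))   # 0.0 (below min-threshold)
print(wred_drop_probability(1080, 540, 1080, 10))  # 0.1 (1 in 10 at max-threshold)
print(wred_drop_probability(2000, 540, 1080, 10))  # 1.0 (all dropped above max)
```

Because higher precedences get higher min-thresholds (the 9/18 through 16/18 defaults above), their ramps start later, so lower-precedence traffic is dropped first as congestion builds.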
Relevant EXEC Commands
show interface
show interface [interface] random-detect
Sample Configuration
RED configuration
interface Hssi0/0/0
ip address ...
random-detect
WRED configuration
interface Hssi0/0/0
description 45Mbps to R1
ip address ...
random-detect exponential-weighting-constant 9
random-detect precedence 0 540 1080 10
random-detect precedence 1 607 1080 10
random-detect precedence 2 674 1080 10
random-detect precedence 3 741 1080 10
random-detect precedence 4 808 1080 10
random-detect precedence 5 875 1080 10
random-detect precedence 6 942 1080 10
random-detect precedence 7 1009 1080 10
random-detect
__________________
Committed Access Rate (CAR)
About Committed Access Rate (CAR)
Committed Access Rate (CAR) has two functions:
Packet Classification, using IP Precedence and QoS group setting
Access Bandwidth Management, through rate limiting
So CAR is basically the input side of Traffic Shaping (which we've talked about
somewhat in a prior Frame Relay article).
Traffic is sequentially classified using pattern matching specifications, just like
access lists, on a first-match basis. The pattern matched specifies what action
policy rule to use, based on whether the traffic conforms. That is, if traffic is
within the specified rate, it conforms, and is treated one way. Non-conforming
(excess) traffic can be treated differently, usually either by giving it lower priority
or by dropping it. If no rule is matched, the default is to transmit the packet. This
allows you to use rules to rate limit some traffic, and allow the rest to be
transmitted without any rate controls.
The possible action policy rules:
transmit
drop
continue (go to next rate-limit rule on the list)
set IP Precedence bits and transmit
set IP Precedence bits and continue
set QoS group and transmit
set QoS group and continue
IP Precedence uses the 3 bit precedence field in the IP header. This gives up to 6
Classes of Service (CoS): 0-5 can be used, but 6 and 7 are reserved per RFC791.
QoS group is an identifier within the router only. It can be set by CAR or by QPPB
(see elsewhere). The QoS group is a number in the range 0 to 99, with 0 the
default for unassigned packets (and not usable in assignments of QoS group).
The configurable parameters include:
committed rate (bits/second) -- in increments of 8 Kbps
normal burst size (bytes) -- how many bytes are handled in a burst above the
committed rate limit without a penalty
extended burst size (bytes) -- number of bytes in an extended burst -- beyond this,
packets are dropped
For traffic falling between normal and extended burst sizes, selected packets are
dropped using a RED-like managed drop policy. (See RED, elsewhere).
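The committed-rate-plus-burst behavior is essentially a token bucket. The following is a simplified Python sketch (not IOS code) of the conform/exceed decision; it omits the RED-like extended-burst handling just described, and the rates and packet sizes are made up:

```python
# Illustrative sketch (not IOS code) of CAR-style rate limiting with a
# token bucket. Tokens accumulate at the committed rate up to the normal
# burst size; a packet that finds enough tokens conforms, otherwise it
# exceeds. (IOS additionally applies a RED-like drop policy between the
# normal and extended burst sizes, omitted here.)

class TokenBucket:
    def __init__(self, rate_bps, normal_burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes/second
        self.capacity = normal_burst_bytes  # bucket depth in bytes
        self.tokens = float(normal_burst_bytes)
        self.last = 0.0

    def check(self, size_bytes, now):
        """Return 'conform' or 'exceed' for a packet arriving at time now."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "conform"
        return "exceed"

# 8 kbps committed rate (1000 bytes/s), 1000-byte normal burst: a quick
# second packet exceeds, but conforms again once the bucket refills.
tb = TokenBucket(8000, 1000)
results = [tb.check(1000, 0.0), tb.check(1000, 0.1), tb.check(1000, 1.1)]
print(results)  # ['conform', 'exceed', 'conform']
```

In a real rate-limit rule, "conform" and "exceed" would map to the action policy rules listed above (transmit, drop, set precedence, and so on).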
Configuring Committed Access Rate (CAR)
It's mostly one long command, repeated over and over with various rule
specifications:
[no] rate-limit {input|output}
[access-group [rate-limit] <acl-index> | qos-group <qos-group> ]
<bps> <normal-burst> <extended-burst>
conform-action { drop |
transmit |
continue |
set-prec-transmit <precedence> |
set-prec-continue <precedence> |
set-qos-group-transmit <qos-group> |
set-qos-group-continue <qos-group> }
exceed-action { drop |
transmit |
continue |
set-prec-transmit <precedence> |
set-prec-continue <precedence> |
set-qos-group-transmit <qos-group> |
set-qos-group-continue <qos-group> }
The arguments bps, normal-burst, extended-burst are as noted prior to this
section (committed rate in bps and burst sizes in bytes).
Traffic matches can be specified using access-lists:
[no] access-list rate-limit acl-index {precedence | mac-address | mask prec-mask}
where acl-index is the access list number: from 1 to 99 classifies packets by
precedence or precedence mask, from 100 to 199 classifies by MAC address.
And mask prec-mask is the IP precedence mask; a two-digit hexadecimal number.
This is used to assign multiple precedences to the same rate-limit access list.
(Precedences map to bits: precedence 0 is the 1 bit, precedence 1 the 2 bit, etc.).
Relevant EXEC Commands
show access-lists rate-limit [acl-index]
show interface [interface] rate-limit
Sample Configuration
Here's a simple sample:
interface Hssi0/0/0
description 45Mbps to R1
rate-limit input 20000000 24000 24000 conform-action transmit exceed-action drop
ip address 200.200.14.250 255.255.255.252
And a more complex one:
interface Hssi0/0/0
description 45Mbps to R2
rate-limit input access-group 101 20000000 24000 32000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0
rate-limit input access-group 102 10000000 24000 32000 conform-action set-prec-transmit 5 exceed-action drop
rate-limit input 8000000 16000 24000 conform-action set-prec-transmit 5 exceed-action drop
ip address 200.200.14.250 255.255.255.252
(etc.)
access-list 101 permit tcp any any eq www
access-list 102 permit tcp any any eq ftp
__________________
Traffic Shaping
About Traffic Shaping
Traffic Shaping comes in two forms: Generic Traffic Shaping and Frame Relay
Traffic Shaping. These are found in IOS 11.2 and later.
Traffic Shaping allows you to control how fast packets are sent out an interface,
any interface. You might want to do this to avoid congestion either locally or
elsewhere in your network, for example if you have a network with different
access rates or if you are restricting some traffic to a fraction of the available
bandwidth. For example, if one end of the link in a Frame Relay network is 256
Kbps and the other end of the link is only 128 Kbps, sending packets at 256 Kbps
at the very least causes congestion. Somewhere.
You can traffic shape all traffic on an interface, or use an access list to specify
certain traffic. On Frame Relay interfaces, additional per-virtual-circuit features
are available with Frame Relay Traffic Shaping.
Traffic shaping is not supported with optimum, distributed, or flow switching. If
you enable traffic shaping, all interfaces will revert to fast switching.
Configuring Generic Traffic Shaping
traffic-shape rate bit-rate [burst-size [excess-burst-size ]]
traffic-shape group access-list bit-rate [burst-size [excess-burst-size]]
The former command traffic shapes all traffic on an interface. The latter uses an
access-list to specify which traffic is to be traffic shaped.
bit-rate: Bit rate that traffic is shaped to in bits per second.
burst-size: Sustained number of bits that can be transmitted per interval. The
default is the bit-rate divided by 8.
excess-burst-size: Maximum number of bits that can exceed the burst size in the
first interval in a congestion event. The default is equal to the burst-size.
The measurement interval is calculated by dividing the burst-size (if non-zero) by
the bit rate. If the burst-size is zero, the excess-burst-size is used (if non-zero).
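The interval arithmetic is easy to verify. This illustrative Python sketch (not IOS code) uses the numbers from the Ethernet1 sample configuration later in this section:

```python
# Illustrative check of the measurement-interval rule described above:
# divide the burst-size (in bits) by the bit rate; if burst-size is zero,
# use the excess-burst-size instead.

def shaping_interval(bit_rate, burst_size, excess_burst_size=0):
    """Measurement interval in seconds."""
    bits = burst_size if burst_size else excess_burst_size
    return bits / bit_rate

# The Ethernet1 sample (rate 5,000,000 bps, burst 625,000 bits) and the
# default burst (bit-rate / 8) both yield a 125 ms interval.
print(shaping_interval(5_000_000, 625_000))  # 0.125
print(shaping_interval(1_000_000, 125_000))  # 0.125
```

Note that with the default burst-size of bit-rate/8, the interval always comes out to 1/8 of a second, regardless of the configured rate.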
For Frame Relay, you can use:
traffic-shape adaptive [bit-rate]
This command uses the configured bit rate as a lower bound, with the bit rate
specified by the traffic-shape rate command as the upper bound for bandwidth.
The actual rate that the traffic is shaped to lies between those two rates. It should
be configured at both ends of the link because it also configures the devices to
reflect forward explicit congestion notifications (FECNs) as BECNs, enabling the
faster end of the link to adjust to congestion at the other end.
Relevant EXEC Commands
show traffic-shape [interface]
show traffic-shape statistics [interface]
Sample Configuration
access-list 101 permit udp any any
interface Ethernet0
traffic-shape group 101 1000000 125000 125000
interface Ethernet1
traffic-shape rate 5000000 625000 625000
Frame Relay Traffic Shaping
Frame Relay Traffic Shaping allows:
rate enforcement per PVC or SVC
dynamic traffic throttling in response to BECN packets
custom or priority queuing per virtual circuit
The intent is to allow guaranteed bandwidth for each type of traffic. The queuing
features let us prioritize per-circuit, and the rate enforcement makes sure that we
won't have a burst on one virtual circuit denying access line bandwidth to the
others.
__________________
Policy Routing
About Policy Routing
Policy routing is the name given to use of a route map on packets to influence the
routing decision. The routing next hop or output interface can be chosen based
on inbound interface, source, or type of traffic. The IP precedence can also be set
by the route map.
If you're choosing outbound interface or next hop in response to destination, then
you're doing normal routing, subject to some policy perhaps. Policy routing in the
Cisco world refers specifically to routing based on source or other traffic
characteristics, other than destination. Since this may have performance impact,
use it only where needed and appropriate.
Policy routing has performance impact: it is process or fast switched. It is
therefore suitable for setting precedence at low speed edge routers, but not
elsewhere.
Configuring Policy Routing
To specify use of a route-map for policy routing on an interface, configure:
ip policy route-map map-tag
The route map blocks then are defined using:
route-map map-tag [permit | deny] [sequence-number]
Route-map match conditions used for policy routing can match either packet
length or an IP extended access list.
To match the Layer 3 length of the packet, use:
match length min max
To match IP sources and destinations based on standard or extended access
list(s):
match ip address {access-list-number | name} [... access-list-number | name]
The route-map block's set conditions can specify precedence value, next-hop for
IP routing, or output interface.
To set the precedence value in the IP header:
set ip precedence value
To specify the next hop to which to route the packet (it need not be adjacent):
set ip next-hop ip-address [... ip-address]
To specify the output interface(s) for the packet:
set interface type number [... type number]
To specify the default route next hop for use when there is no explicit route:
set ip default next-hop ip-address [... ip-address]
To specify the default output interface(s) for use when there is no explicit route:
set default interface type number [... type number]
Fast-switched policy routing supports all of the match commands and most of the
set commands, except for the set ip default command and some use of the set
interface command. The set interface command is supported only over
point-to-point links, unless a route-cache entry exists using the same interface
specified in the set interface command in the route map.
When process switching policy routing, the routing table is used to check output
interface sanity. During fast switching, if the packet matches, the software blindly
forwards the packet to the specified interface. To configure fast-switched policy
routing on an interface:
ip route-cache policy
Packets generated by the router are not normally policy-routed. To enable local
policy routing of such packets, specify the route map to use. This is a global
configuration mode command.
ip local policy route-map map-tag
Related EXEC Commands
show ip cache policy
show ip local policy
Sample Configuration
The following example provides two sources with equal access to two different
service providers. Packets arriving on serial interface 1 from 1.1.1.1 are sent to
the next hop 3.3.3.3 if there is no explicit route for the packet's destination.
Packets arriving from 2.2.2.2 are sent to the next hop 4.4.4.4 if there is no explicit
route for the packet's destination. All other packets for which the router has no
explicit route to the destination are discarded.
access-list 1 permit 1.1.1.1
access-list 2 permit 2.2.2.2
!
interface serial 1
ip policy route-map equal-access
!
route-map equal-access permit 10
match ip address 1
set ip default next-hop 3.3.3.3
route-map equal-access permit 20
match ip address 2
set ip default next-hop 4.4.4.4
route-map equal-access permit 30
set default interface null0
__________________
QoS Policy Propagation via BGP (QPPB)
About QPPB
QoS Policy Propagation via BGP (QPPB) allows you to classify packets based on
access lists, BGP community lists, and BGP AS paths. The classification can
then set either IP precedence (a global tagging scheme), or internal QoS group
identifier (internal to the router). The BGP community can also contain both AS
and IP precedence information -- see the second example below. After
classification, other QoS features such as CAR and WRED can then be used to
enforce business policy.
Note that this allows you to set up a policy at one BGP speaking router, and
propagate that to other routers via BGP. Hence the name. This means that at the
service provider router connecting to a site, a policy can be set up so that
inbound traffic elsewhere is classified into the right class of service (IP
Precedence bits). This can then interact with Tag Switching, or MPLS.
If you set the QoS group ID, it can then be used for rate-limiting or WFQ based on
QoS group ID. This expands on the classes of service provided by the 8 IP
precedence values.
If you use IP precedence, it can now be set based on source or destination
address.
Configuring QPPB
Configuring QPPB—ToS
[no] bgp-policy input ip-prec-map
[no] bgp-policy output ip-prec-map
Configuring QPPB—QoS groups
[no] bgp-policy input ip-qos-map
[no] bgp-policy output ip-qos-map
Relevant EXEC Commands
show ip bgp
show ip bgp community-list community-list-number
show ip cef prefix
show ip interface
show ip route prefix
Sample Configuration
Configuring QPPB on an interface:
int hssi0/0/0
ip address 210.210.2.1 255.255.255.252
bgp-policy input ip-prec-map
Configuring BGP to set QoS groups:
router bgp 210
neighbor 210.210.14.1 remote-as 210
neighbor 210.210.14.1 route-map comm-relay-prec out
neighbor 210.210.14.1 send-community
!
ip bgp-community new-format
!
access-list 1 permit 210.210.1.0 0.0.0.255
!
route-map comm-relay-prec permit 10
match ip address 1
set community 210:5
!
route-map comm-relay-prec permit 20
set community 210:0
Configuring BGP to set TOS bits (precedence):
router bgp 210
table-map precedence-map
neighbor 200.200.14.4 remote-as 210
neighbor 200.200.14.4 update-source Loopback0
!
ip bgp-community new-format
!
ip community-list 1 permit 210:5
!
route-map precedence-map permit 10
match community 1
set ip precedence 5
!
route-map precedence-map permit 20
set ip precedence 0
__________________
Configuring RSVP
RSVP is disabled by default. To enable RSVP on an interface:
interface ...
ip rsvp bandwidth [interface-kbps] [single-flow-kbps]
The default maximum bandwidth is 75% of the interface bandwidth. A single flow
can reserve up to the entire reservable bandwidth. On subinterfaces, this uses
the more restrictive of the available bandwidths of the physical interface and the
subinterface.
You can configure the router to behave as though it is periodically receiving an
RSVP PATH message from a sender or previous-hop router containing the
indicated attributes:
interface ...
ip rsvp sender session-ip-address sender-ip-address [tcp | udp | ip-protocol]
session-dport sender-sport previous-hop-ip-address previous-hop-interface
You can configure the router to behave as though it is continuously receiving an
RSVP RESV message from an originator containing the indicated attributes:
interface ...
ip rsvp reservation session-ip-address sender-ip-address [tcp | udp | ip-protocol]
session-dport sender-sport next-hop-ip-address next-hop-interface {ff | se | wf}
{rate | load} [bandwidth] [ burst-size]
Here {ff | se | wf} refers to the reservation style: Fixed Filter (ff) is a single
reservation; Shared Explicit (se) is a shared reservation with limited scope; and
Wildcard Filter (wf) is a shared reservation with unlimited scope.
The keyword {rate | load} refers to the QoS service type: guaranteed bit rate
service or controlled load service. In both cases, the bit rate and token bucket
size are specified.
RSVP control messages are intended to use raw IP packets, but some hosts may
not be able to transmit such packets. If RSVP neighbors are seen using UDP
encapsulation, the router will automatically generate UDP-encapsulated
messages for the neighbors. Some hosts will not originate such messages unless
they hear from the router first. To set the UDP multicast address used for such
messages, configure on the interface:
interface ...
ip rsvp udp-multicast [multicast-address]
To limit which routers may offer reservations, configure:
interface ...
ip rsvp neighbors access-list-number
Relevant EXEC Mode Commands
show ip rsvp interface [type number]
show ip rsvp installed [type number]
show ip rsvp neighbor [type number]
show ip rsvp sender [type number]
show ip rsvp request [type number]
show ip rsvp reservation [type number]
Sample Configuration
The following shows the various commands in use, in no particular scenario:
interface ethernet 2
ip rsvp bandwidth 15000 1000
ip rsvp udp-multicast 224.0.0.20
ip rsvp neighbors 1
ip rsvp reservation 224.250.0.2 130.240.1.1 UDP 20 30 130.240.4.1 Eth1 se load 100 60
ip rsvp reservation 224.250.0.2 130.240.2.1 TCP 20 30 130.240.4.1 Eth1 se load 150 65
ip rsvp reservation 224.250.0.3 0.0.0.0 UDP 20 0 130.240.4.1 Eth1 wf rate 300 60
ip rsvp reservation 224.250.0.3 0.0.0.0 UDP 20 0 130.240.4.1 Eth1 wf rate 350 65
ip rsvp sender 224.250.0.1 130.240.2.1 udp 20 30 130.240.2.1 loopback 1 50 5
ip rsvp sender 224.250.0.2 130.240.2.1 udp 20 30 130.240.2.1 loopback 1 50 5
ip rsvp sender 224.250.0.2 130.240.2.28 udp 20 30 130.240.2.28 loopback 1 50 5
__________________
RSVP allows for applications to reserve bandwidth, primarily WAN bandwidth. It
is designed to work with WFQ or Traffic Shaping on the outbound interface.
Traffic Shaping and Committed Access Rate (CAR) control traffic. It seems like a
better acronym could have been chosen: CAR controls traffic? Anyway, CAR
controls the rate of inbound traffic, allowing specification of what to do with traffic
that is coming in faster than policy. Traffic Shaping paces outbound traffic,
controlling use of bandwidth. Traffic Shaping also allows matching the speed of
the output access link across a WAN cloud, so that a faster central hub access
circuit doesn't cause carrier or remote link congestion.
CAR, Policy Routing, and QPPB can also set the IP precedence bits (TOS bits),
which are used by some of the above mechanisms to favor some traffic over
other traffic.
Finally, NetFlow and CEF are switching techniques used in high-performance
routers. They assist in providing QoS by delivering packets efficiently and by
gathering statistics on the traffic -- statistics you can use for traffic
management, trunk sizing, and network design.
__________________
Priority Queuing
About Priority Queuing
Priority Queuing is the oldest of the queuing techniques. Traffic is prioritized with
a priority-list, applied to an interface with a priority-group command. The traffic
goes into one of four queues: high, medium, normal, or low priority. When the
router is ready to transmit a packet, it searches the high queue for a packet. If
there is one, it gets sent. If not, the medium queue is checked. If there is a packet,
it is sent. If not, the normal, and finally the low priority queues are checked. For
the next packet, the process repeats. If there is enough traffic in the high queue,
the other queues may get starved: they never get serviced.
You can regard Priority Queuing as being drastic. It says that the high priority
traffic must go out the interface at all costs, and any other traffic can be dropped.
It is generally intended for use on low bandwidth links.
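Since the dequeue logic above is so simple, it can be sketched as a small Python model (illustrative only, not IOS code; the queue names match the four priority levels):

```python
from collections import deque

# One FIFO per priority level, checked in strict order.
QUEUES = {name: deque() for name in ("high", "medium", "normal", "low")}

def enqueue(queue_name, packet):
    QUEUES[queue_name].append(packet)

def dequeue():
    """Always drain higher-priority queues first; a lower queue is only
    serviced when every queue above it is empty."""
    for name in ("high", "medium", "normal", "low"):
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None  # nothing queued

enqueue("low", "bulk-1")
enqueue("high", "telnet-1")
enqueue("high", "telnet-2")

order = [dequeue(), dequeue(), dequeue()]
# telnet-1 and telnet-2 go out before bulk-1
```

Note that a steady stream into the high queue would keep the loop returning from its first iteration forever, which is exactly the starvation behavior described above.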
Configuring Priority Queuing
To assign traffic meeting certain characteristics to a queue (high, medium, normal,
or low), use one of the following commands:
priority-list list-number protocol protocol-name {high | medium | normal | low}
queue-keyword keyword-value
priority-list list-number interface interface-type interface-number {high | medium |
normal | low}
The first of these takes a protocol, like ip, ipx, appletalk, rsrb, dlsw, etc., to
classify traffic. The queue-keyword can be one of: fragments, gt, lt, list, tcp, and
udp. The keyword-value specifies the port for tcp or udp, or the size for gt
(greater than) and lt (less than). The word list allows you to specify an access list
characterizing the traffic. And fragments means just that, IP fragments (which
should probably get expedited handling, so as to not have to retransmit all the
fragments again if one is lost).
The second command above is similar, but classifies traffic based on the
interface it arrived on.
The list-number is any number in the range 1-16. All statements in one policy use
the same number.
To change the default queue for all other traffic:
priority-list list-number default {high | medium | normal | low}
To change the queue sizes from the defaults of 20, 40, 60, and 80 packets (be
careful here: if you see output drops, raising these limits may make things worse):
priority-list list-number queue-limit high-limit medium-limit normal-limit low-limit
To apply the priority queueing policy for outbound packets on an interface:
interface ...
priority-group list-number
Relevant EXEC Commands
show queueing priority
Sample Configuration
The following configuration sets up a priority list where DLSw traffic goes into the
high-priority queue, as does Telnet traffic. The remaining IP traffic that matches
access list 101 goes to the medium queue, and anything else goes in the low
queue. (Standard joke: you've planned to send your boss's traffic into the low
queue, to make sure the congestion gets noticed.) You've mildly upped the
default queue sizes. And this policy is in effect for packets being sent out serial 0.
priority-list 1 protocol dlsw high
priority-list 1 protocol ip high tcp 23
priority-list 1 protocol ip medium list 101
priority-list 1 default low
priority-list 1 queue-limit 30 60 90 120
interface serial 0
priority-group 1
__________________
Custom Queuing
About Custom Queuing
Custom Queuing uses 17 queues to divide up bandwidth on an interface. Queue
0, the system queue, is always serviced first. It is used for keepalives and other
critical interface traffic. The remaining traffic can be assigned to queues 1 through
16. These queues are serviced in round-robin fashion.
Here's how it works. Packets are sent from each queue in turn. As each packet is
sent, a byte counter is incremented. When the byte counter exceeds the default
or configured threshold for the queue, transmission moves on to the next queue.
The byte count total for the queue that just finished has the threshold value
subtracted from it, so that it starts its next turn penalized by the number of bytes
that it went over its quota. This provides additional fairness to the mechanism.
If you think about it, you can't send half of a packet. That's why this mechanism
may well exceed its quota on any given round of transmission from a queue. But
on the next round, the queue is penalized for taking more than its fair share, so in
the long run it averages out.
Custom Queuing is aimed at fair division of bandwidth. For instance, you might
set it up to allow IP roughly 50% of a link, DLSw 25%, and IPX 25%. When
congestion is taking place, the limits are enforced. If there is unused bandwidth,
say from IPX, it is divided equally among any excess traffic from the other classes
of traffic, IP and DLSw. To implement this, you would tweak the thresholds for the
relevant queues, say making them 3000, 1500, and 1500 bytes respectively.
Some fine tuning to average packet MTU size can make this more precise.
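The byte-count round robin with carryover penalty described above can be modeled in a few lines of Python. This is a sketch under simplifying assumptions (whole packets are always ready to send; the system queue 0 and per-queue packet limits are omitted):

```python
from collections import deque

def custom_queue_round(queues, thresholds, deficits):
    """One round-robin pass over the queues. Each queue may send whole
    packets until its byte counter reaches its threshold; any overshoot
    is carried into the next round as a penalty (the deficit), exactly
    as described in the text above.
    queues: {name: deque of packet sizes in bytes}
    thresholds: {name: byte-count threshold}
    deficits: {name: bytes carried over from the previous round}"""
    sent = []
    for name, q in queues.items():
        count = deficits[name]  # start penalized by last round's overshoot
        while q and count < thresholds[name]:
            pkt_bytes = q.popleft()
            sent.append((name, pkt_bytes))
            count += pkt_bytes
        deficits[name] = max(0, count - thresholds[name])
    return sent

# Thresholds of 3000 and 1500 bytes give roughly a 2:1 bandwidth split:
queues = {"ip": deque([1500, 1500, 1500]), "ipx": deque([1500, 1500])}
thresholds = {"ip": 3000, "ipx": 1500}
deficits = {"ip": 0, "ipx": 0}
round1 = custom_queue_round(queues, thresholds, deficits)
# round1 sends 3000 bytes of ip and 1500 bytes of ipx
```

The deficit mechanism is what makes oversized packets average out: a queue that overshoots its threshold by 400 bytes starts its next turn 400 bytes "in debt".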
Configuring Custom Queuing
The commands for CQ are very similar to those for PQ. The difference is that you
put the traffic into queues numbered 1-16, rather than named high, medium,
normal, low. Hence we build our CQ policy with:
queue-list list-number protocol protocol-name queue-number queue-keyword
keyword-value
queue-list list-number interface interface-type interface-number queue-number
You can specify the default queue, the one that receives any unmatched traffic,
with the command:
queue-list list-number default queue-number
(The default queue, unless configured otherwise, is queue 1.)
You can specify the number of packets allowed in any queue with the command:
queue-list list-number queue queue-number limit limit-number
The threshold for a queue can be changed with the following command:
queue-list list-number queue queue-number byte-count byte-count-number
The default threshold for the queues is 1500 bytes.
And the CQ policy is applied to outbound frames on an interface with:
interface ...
custom-queue-list list-number
Relevant EXEC Commands
show queueing custom
show interface type number
Sample Configuration
The following configuration is similar to that for PQ, except that we're not making
DLSw and Telnet traffic top priority any more. Instead, we're using four (4)
queues (since default traffic goes to queue 10). The thresholds are 1500, 1500,
3000, and 1500, so Telnet in queue 3 gets 3000/7500 = 40% of the bandwidth,
and the other queues get 20% each.
queue-list 1 protocol dlsw 1
queue-list 1 protocol ip 2 list 101
queue-list 1 protocol ip 3 tcp 23
queue-list 1 default 10
queue-list 1 queue 3 limit 40
queue-list 1 queue 3 byte-count 3000
interface serial 0
custom-queue-list 1
__________________
Weighted Fair Queuing (WFQ)
About WFQ
Weighted fair queuing automatically sorts traffic into individual streams
without requiring that you first define access lists. It can manage one-way
or two-way streams of data: traffic between pairs of applications, or voice and
video. It automatically smooths out bursts to reduce average latency.
In WFQ, packets are sorted in weighted order of arrival of the last bit, to
determine transmission order. Using order of arrival of last bit emulates the
behavior of Time Division Multiplexing (TDM), hence "fair". In Frame Relay,
FECN, BECN, and DE bits will cause the weights to be automatically adjusted,
slowing flows if needed.
From one point of view, the effect of this is that WFQ classifies sessions as high-
or low-bandwidth. Low-bandwidth traffic gets priority, with high-bandwidth traffic
sharing what's left over. If the traffic is bursting ahead of the rate at which the
interface can transmit, new high-bandwidth traffic gets discarded after the
configured or default congestive discard threshold has been reached.
However, low-bandwidth conversations, which include control-message
conversations, continue to enqueue data.
Weighted fair queuing uses some parts of the protocol header to determine flow
identity. For IP, WFQ uses the Type of Service (TOS) bits, the IP protocol code,
the source and destination IP addresses (if not a fragment), and the source and
destination TCP or UDP ports.
Distributed WFQ is available in IOS 12.0 on high-end interfaces and router
models.
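The "order of arrival of the last bit" idea can be sketched by tagging each packet with a weighted virtual finish time and transmitting in tag order. This is a toy model (no global virtual clock, no discard policy; flow names and weights are invented for illustration, and here a larger weight value delays a flow, the opposite sense of the percentage weights used in the configuration commands below):

```python
def wfq_order(packets, weights):
    """Return packets in transmission order under a simplified fair-queuing
    model: each packet's virtual finish time is its flow's previous finish
    time plus size * weight, emulating the arrival of its last bit in a
    bit-by-bit round robin.
    packets: list of (flow, size_bytes) in arrival order.
    weights: per-flow multiplier; larger => later finish => less bandwidth."""
    finish = {}   # last virtual finish time per flow
    tagged = []
    for seq, (flow, size) in enumerate(packets):
        f = finish.get(flow, 0) + size * weights[flow]
        finish[flow] = f
        tagged.append((f, seq, flow, size))
    tagged.sort()  # transmit in order of virtual finish time
    return [(flow, size) for _, _, flow, size in tagged]

# A low-bandwidth Telnet flow's small packets finish "early" and jump
# ahead of a bulk FTP flow's large packets, even with equal weights:
order = wfq_order(
    [("ftp", 1500), ("ftp", 1500), ("telnet", 64), ("telnet", 64)],
    {"ftp": 1, "telnet": 1},
)
```

This is why interactive sessions feel responsive under WFQ without any explicit classification: their packets simply carry earlier finish times.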
Configuring Fair Queuing (FQ)
fair-queue [congestive-discard-threshold [dynamic-queues [reservable-queues]]]
no fair-queue
congestive-discard-threshold: Number of messages allowed in each queue in the
range 1 to 4096, default 64.
dynamic-queues: Number of dynamic queues used for best-effort conversations.
Values are 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096. The default is 256.
reservable-queues: Number of reservable queues used for reserved (RSVP)
conversations, range 0 to 1000. The default is 0. If RSVP is enabled on a WFQ
interface with reservable-queues set to 0, the reservable queue size is
automatically set to bandwidth divided by 32 Kbps. Specify a reservable-queue
size other than 0 if you wish different behavior.
Fair queuing is enabled by default for physical interfaces whose bandwidth is less
than or equal to 2.048 Mbps, except for Link Access Procedure, Balanced (LAPB),
X.25, or Synchronous Data Link Control (SDLC) encapsulations. Enabling
custom queuing or priority queuing on an interface disables fair queueing. Fair
queuing is automatically disabled if you enable autonomous or SSE switching on
a 7000 model. Fair queueing is now enabled automatically on multilink PPP
interfaces. WFQ is not supported on tunnels.
Configuring Weighted Fair Queuing (WFQ)
When congestion occurs, the weight for a class or group specifies the percentage
of the output bandwidth allocated to that group. A weight of 60 gives 60% of the
bandwidth during congestion periods.
Start by specifying what type of fair queuing is in effect on an interface:
[no] fair-queue [ tos | qos-group ]
If you omit tos and qos-group, you get flow-based WFQ. Otherwise you get TOS
(precedence)-based or QoS-group based WFQ on the interface. You then set the
total number of buffered packets on the interface. Below this limit, packets will not
be dropped. Default is based on bandwidth and memory space available.
fair-queue aggregate-limit <aggregate-limit>
You also specify the limit for each queue. Default is half the aggregate limit.
fair-queue individual-limit <individual-limit>
The documentation suggests you not alter the queue limits without a good reason.
To specify the depth of queue for a class of traffic:
fair-queue {tos <0-7> | qos-group <0-99> } limit <queue-limit>
Finally, to specify weight (percentage of the link) for a class of traffic:
fair-queue {tos <0-7> | qos-group <0-99> } weight <weight>
The percentages on an interface must add up to no more than 99 (percent).
Relevant EXEC Commands
show interface [interface] fair-queue
show queueing fair
Sample Configuration
Fair Queuing
interface serial 0
fair-queue 64 256 0
This restores the defaults on a T1 serial link.
Weighted Fair Queuing - QoS Group based
The following configuration sets up two QoS groups, 2 and 6, corresponding to
precedences 2 and 6. It then specifies WFQ in terms of those two QoS groups.
interface Hssi0/0/0
ip address 188.1.3.70 255.255.255.0
rate-limit output access-group rate-limit 6 155000000 2000000 8000000 conform-action set-qos-transmit 6 exceed-action drop
rate-limit output access-group rate-limit 2 155000000 2000000 8000000 conform-action set-qos-transmit 2 exceed-action drop
fair-queue qos-group
fair-queue qos-group 2 weight 10
fair-queue qos-group 2 limit 27
fair-queue qos-group 6 weight 30
fair-queue qos-group 6 limit 27
access-list rate-limit 2 2
access-list rate-limit 6 6
Weighted Fair Queuing - Precedence (TOS) based
The following configuration directly specifies WFQ based on precedences 1, 2,
and 3:
interface Hssi0/0/0
ip address 188.1.3.70 255.255.255.0
fair-queue tos
fair-queue tos 1 weight 20
fair-queue tos 1 limit 27
fair-queue tos 2 weight 30
fair-queue tos 2 limit 27
fair-queue tos 3 weight 40
fair-queue tos 3 limit 27
__________________
Random Early Detection (RED)
About Random Early Detection (RED)
Random Early Detection (RED) is a high-speed congestion avoidance
mechanism. It is not intended as a congestion management mechanism, the way
the queuing techniques (PQ, CQ, WFQ) are. It is also more appropriate for
long-haul trunks with many traffic flows, e.g. trans-oceanic links, rather than
campus networks.
When enabled, RED responds to congestion by dropping packets at the selected
rate. This is recommended only for TCP/IP networks with mostly TCP traffic. The
drops are intended to cause TCP to back off its transmission rate.
TCP normally adapts its transmission rate to the rate the network can support.
Each TCP flow repeats a cycle of ramping up to approximately the available
bandwidth, then slowing to either near zero or near half the bandwidth,
depending on the implementation. Thus a typical TCP flow may average between
1/2 and 3/4 of the available bandwidth, in the absence of any other traffic.
Multiple TCP flows tend to become synchronized, speeding up and slowing down
in synchronization. This behavior is sometimes called "porpoising", because the
flows surface and dive in unison, like a pod of porpoises. When congestion
occurs, all TCP sessions normally get slowed down simultaneously, resulting in
periods where link capacity is underutilized. By randomly slowing one TCP
session, the others benefit, resulting in better goodput.
Note that dropping packets does not work with most other protocols, including
AppleTalk and Novell.
When RSVP is also configured, packets from other flows are dropped before
those from RSVP flows, when possible. We'll look at RSVP in a later article.
Weighted RED (WRED) allows you to specify a RED policy in combination with IP
precedence, so that different types of packets are dropped at different rates and
levels of congestion. You can set it so precedence is ignored, or you can set it so
that lower precedence packets are more likely to be dropped. WRED is an IOS
11.1 CC or 12.0 feature.
Distributed Weighted RED (DWRED) is available in IOS 12.0 for hardware that
supports it. DWRED performs the queuing on the VIP rather than the RSP; it
requires a Cisco 7500 series router, or a Cisco 7000 series router with RSP7000,
equipped with suitable VIP cards.
Configuring Random Early Detection (RED)
The default is for RED to be disabled on an interface. RED is only useful on
interfaces where most of the traffic is TCP. Random early detection cannot be
configured on an interface already configured with custom, priority, or fair
queueing. To enable RED on an interface, configure:
random-detect
You may also configure
random-detect exponential-weighting-constant constant
Here constant is a number in the range 1 to 16 used to determine the rate at
which packets are dropped when congestion occurs. The default is 10. The
number is an exponent used in the exponential decay rate for the weighted
queue size calculation used in RED. It is suggested that you change the default
with caution: a big value means the queue size measurement changes slowly,
making RED less responsive. The formula used for tracking queue size is:
average = (old_average * (1-1/2^n)) + (current_queue_size * 1/2^n)
where n is the exponential weighting constant.
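In Python, the same exponentially weighted average looks like this; the loop shows how slowly the average tracks a sudden burst at the default n = 10:

```python
def red_average(old_average, current_queue_size, n=10):
    """Exponentially weighted moving average of queue depth as used by
    RED; n is the exponential-weighting-constant (IOS default 10)."""
    w = 1.0 / (2 ** n)  # each sample contributes 1/2^n of its value
    return old_average * (1 - w) + current_queue_size * w

# Feed the filter 100 consecutive samples of a 1000-packet queue:
avg = 0.0
for _ in range(100):
    avg = red_average(avg, 1000, n=10)
```

After 100 samples of an instantaneous queue depth of 1000, the average has only climbed to roughly 93, which is why transient bursts rarely trigger RED drops while sustained congestion does.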
To configure WRED on an interface, configure:
random-detect precedence <0..7> <min-thresh> <max-thresh>
<mark-probability-denom>
In this command, precedence refers to IP precedence, number 0 to 7. And
min-thresh is the minimum threshold in number of packets, from 1 to 4096. When
the average queue length reaches this number, RED begins to drop packets with
the specified IP precedence. The number max-thresh is the maximum threshold
in number of packets, from 1 to 4096. When the average queue length exceeds
this number, WRED drops all packets with the specified IP precedence. Finally,
mark-prob-denom is the denominator for the fraction of packets dropped when
the average queue depth is at max-threshold, in the range 1 to 65536; the default
is 10. If the denominator is 512, one out of every 512 packets is dropped when
the average queue is at the max-threshold.
The per-precedence min-threshold defaults are 9/18, 10/18, ... 16/18 of the
max-threshold size, for precedences 0 through 7 respectively. The max-threshold
is determined based on interface speed and output buffering capacity.
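Putting the three parameters together, the drop decision for one precedence can be sketched as follows (assuming the classic linear RED drop profile between the two thresholds, which is the standard RED behavior):

```python
def wred_drop_probability(avg_queue, min_thresh, max_thresh,
                          mark_prob_denom=10):
    """Drop probability for packets of one IP precedence, given the
    RED-averaged queue depth: no drops below min-threshold, drop
    everything above max-threshold, and in between ramp linearly up to
    1/mark-probability-denominator."""
    if avg_queue < min_thresh:
        return 0.0          # no congestion yet for this precedence
    if avg_queue > max_thresh:
        return 1.0          # tail-drop region: drop all
    ramp = (avg_queue - min_thresh) / (max_thresh - min_thresh)
    return ramp / mark_prob_denom
```

With the per-precedence min-thresholds staggered (as in the defaults described above), low-precedence traffic starts being dropped while high-precedence traffic is still passing untouched.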
Relevant EXEC Commands
show interface
show interface [interface] random-detect
Sample Configuration
RED configuration
interface Hssi0/0/0
ip address ...
random-detect
WRED configuration
interface Hssi0/0/0
description 45Mbps to R1
ip address ...
random-detect exponential-weighting-constant 9
random-detect precedence 0 540 1080 10
random-detect precedence 1 607 1080 10
random-detect precedence 2 674 1080 10
random-detect precedence 3 741 1080 10
random-detect precedence 4 808 1080 10
random-detect precedence 5 875 1080 10
random-detect precedence 6 942 1080 10
random-detect precedence 7 1009 1080 10
random-detect
__________________
Committed Access Rate (CAR)
About Committed Access Rate (CAR)
Committed Access Rate (CAR) has two functions:
Packet Classification, using IP Precedence and QoS group setting
Access Bandwidth Management, through rate limiting
So CAR is basically the input side of Traffic Shaping (which we've talked about
somewhat in a prior Frame Relay article).
Traffic is sequentially classified using pattern matching specifications, just like
access lists, on a first-match basis. The pattern matched specifies which action
policy rule to use, based on whether the traffic conforms. That is, if traffic is
within the specified rate, it conforms, and is treated one way. Non-conforming
(excess) traffic can be treated differently, usually either by giving it lower priority
or by dropping it. If no rule is matched, the default is to transmit the packet. This
allows you to use rules to rate-limit some traffic, and let the rest be transmitted
without any rate controls.
The possible action policy rules:
transmit
drop
continue (go to next rate-limit rule on the list)
set IP Precedence bits and transmit
set IP Precedence bits and continue
set QoS group and transmit
set QoS group and continue
IP Precedence uses the 3 bit precedence field in the IP header. This gives up to 6
Classes of Service (CoS): 0-5 can be used, but 6 and 7 are reserved per RFC791.
QoS group is an identifier within the router only. It can be set by CAR or by QPPB
(see elsewhere). The QoS group is a number in the range 0 to 99, with 0 the
default for unassigned packets (and not usable in assignments of QoS group).
The configurable parameters include:
committed rate (bits/second) -- in increments of 8 Kbps
normal burst size (bytes) -- how many bytes are handled in a burst above the
committed rate limit without a penalty
extended burst size (bytes) -- number of bytes in an extended burst -- beyond this,
packets are dropped
For traffic falling between normal and extended burst sizes, selected packets are
dropped using a RED-like managed drop policy. (See RED, elsewhere).
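The conform/exceed decision can be sketched as a token bucket. This is a simplified model: the RED-like managed drop between normal and extended burst is omitted, and the rates, sizes, and timestamps are invented for illustration:

```python
class CarTokenBucket:
    """Simplified conform/exceed classifier for CAR-style rate limiting.
    Tokens (bytes) accumulate at rate_bps/8 bytes per second, capped at
    normal_burst_bytes; a packet conforms if enough tokens are available.
    (Real CAR adds a probabilistic drop region up to the extended burst,
    which this sketch leaves out.)"""

    def __init__(self, rate_bps, normal_burst_bytes):
        self.rate = rate_bps / 8.0            # refill rate in bytes/second
        self.depth = normal_burst_bytes       # bucket depth
        self.tokens = float(normal_burst_bytes)
        self.last = 0.0                       # time of previous check

    def check(self, now, packet_bytes):
        # Refill for elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return "conform"
        return "exceed"

# 8 kbps committed rate with a 1000-byte normal burst:
bucket = CarTokenBucket(8000, 1000)
```

A burst of back-to-back packets drains the bucket and starts exceeding; after the line has been idle for a while, the bucket refills and traffic conforms again.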
Configuring Committed Access Rate (CAR)
It's mostly one long command, repeated over and over with various rule
specifications:
[no] rate-limit {input|output}
[access-group [rate-limit] <acl-index> | qos-group <qos-group> ]
<bps> <normal-burst> <extended-burst>
conform-action { drop |
transmit |
continue |
set-prec-transmit <precedence> |
set-prec-continue <precedence> |
set-qos-transmit <qos-group> |
set-qos-continue <qos-group> }
exceed-action { drop |
transmit |
continue |
set-prec-transmit <precedence> |
set-prec-continue <precedence> |
set-qos-transmit <qos-group> |
set-qos-continue <qos-group> }
The arguments bps, normal-burst, extended-burst are as noted prior to this
section (committed rate in bps and burst sizes in bytes).
Traffic matches can be specified using access-lists:
[no] access-list rate-limit acl-index {precedence | mac-address | mask prec-mask}
where acl-index is the access list number: from 1 to 99 classifies packets by
precedence or precedence mask, from 100 to 199 classifies by MAC address.
And mask prec-mask is the IP precedence mask; a two-digit hexadecimal number.
This is used to assign multiple precedences to the same rate-limit access list.
(Precedences map to bits: precedence 0 is the 1 bit, precedence 1 the 2 bit, etc.).
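The precedence-to-bit mapping can be written out in a couple of lines; for example, covering precedences 0 and 1 with one rate-limit access list gives the mask 03:

```python
def precedence_mask(precedences):
    """Build the two-digit hexadecimal rate-limit precedence mask
    described above: precedence p corresponds to bit 2**p."""
    mask = 0
    for p in precedences:
        mask |= 1 << p
    return format(mask, "02x")

# Precedences 0 and 1 together: bits 1 + 2 = 3, i.e. mask "03".
mask_low = precedence_mask([0, 1])
```

So `access-list rate-limit 25 mask 03` (a hypothetical list number) would match packets with precedence 0 or 1.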
Relevant EXEC Commands
show access-lists rate-limit [acl-index]
show interface [interface] rate-limit
Sample Configuration
Here's a simple sample:
interface Hssi0/0/0
description 45Mbps to R1
rate-limit input 20000000 24000 24000 conform-action transmit exceed-action drop
ip address 200.200.14.250 255.255.255.252
And a more complex one:
interface Hssi0/0/0
description 45Mbps to R2
rate-limit input access-group 101 20000000 24000 32000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0
rate-limit input access-group 102 10000000 24000 32000 conform-action set-prec-transmit 5 exceed-action drop
rate-limit input 8000000 16000 24000 conform-action set-prec-transmit 5 exceed-action drop
ip address 200.200.14.250 255.255.255.252
(etc.)
access-list 101 permit tcp any any eq www
access-list 102 permit tcp any any eq ftp
__________________
Traffic Shaping
About Traffic Shaping
Traffic Shaping comes in two forms: Generic Traffic Shaping and Frame Relay
Traffic Shaping. These are found in IOS 11.2 and later.
Traffic Shaping allows you to control how fast packets are sent out an interface,
any interface. You might want to do this to avoid congestion either locally or
elsewhere in your network, for example if you have a network with different
access rates or if you are restricting some traffic to a fraction of the available
bandwidth. For example, if one end of the link in a Frame Relay network is 256
Kbps and the other end of the link is only 128 Kbps, sending packets at 256 Kbps
at the very least causes congestion. Somewhere.
You can traffic shape all traffic on an interface, or use an access list to specify
certain traffic. On Frame Relay interfaces, additional per-virtual-circuit features
are available with Frame Relay Traffic Shaping.
Traffic shaping is not supported with optimum, distributed, or flow switching. If
you enable traffic shaping, all interfaces will revert to fast switching.
Configuring Generic Traffic Shaping
traffic-shape rate bit-rate [burst-size [excess-burst-size ]]
traffic-shape group access-list bit-rate [burst-size [excess-burst-size]]
The former command traffic shapes all traffic on an interface. The latter uses an
access-list to specify which traffic is to be traffic shaped.
bit-rate: Bit rate that traffic is shaped to in bits per second.
burst-size: Sustained number of bits that can be transmitted per interval. The
default is the bit-rate divided by 8.
excess-burst-size: Maximum number of bits that can exceed the burst size in the
first interval in a congestion event. The default is equal to the burst-size.
The measurement interval is calculated by dividing the burst-size (if non-zero) by
the bit rate. If the burst-size is zero, the excess-burst-size is used (if non-zero).
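As a quick check of the arithmetic, the interval calculation can be written out (a sketch of the rule just described; IOS computes this internally):

```python
def shaping_interval_ms(bit_rate, burst_size=0, excess_burst_size=0):
    """Measurement interval for generic traffic shaping: burst-size (bits)
    divided by bit-rate (bits/second), falling back to excess-burst-size
    when burst-size is zero. Returns the interval in milliseconds, or
    None when no interval can be derived."""
    burst = burst_size or excess_burst_size  # prefer burst-size if non-zero
    if burst == 0 or bit_rate == 0:
        return None
    return burst / bit_rate * 1000.0
```

For the Ethernet0 line in the sample below (1 Mbps rate, 125000-bit burst), the interval works out to 125 ms, which is also the interval you get from the default burst of bit-rate/8.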
For Frame Relay, you can use:
traffic-shape adaptive [bit-rate]
This command uses the configured bit rate as a lower bound, with the bit rate
specified by the traffic-shape rate command as the upper bound for bandwidth.
The actual rate that the traffic is shaped to lies between those two rates. It should
be configured at both ends of the link because it also configures the devices to
reflect forward explicit congestion notifications (FECNs) as BECNs, enabling the
faster end of the link to adjust to congestion at the other end.
Relevant EXEC Commands
show traffic-shape [interface]
show traffic-shape statistics [interface]
Sample Configuration
access-list 101 permit udp any any
interface Ethernet0
traffic-shape group 101 1000000 125000 125000
interface Ethernet1
traffic-shape rate 5000000 625000 625000
Frame Relay Traffic Shaping
Frame Relay Traffic Shaping allows:
rate enforcement per PVC or SVC
dynamic traffic throttling in response to BECN packets
custom or priority queuing per virtual circuit
The intent is to allow guaranteed bandwidth for each type of traffic. The queuing
features let us prioritize per-circuit, and the rate enforcement makes sure that we
won't have a burst on one virtual circuit denying access line bandwidth to the
others.
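As a hedged sketch (the DLCI, map-class name, and rates are illustrative),
per-VC shaping is configured through a map class applied to the DLCI:

interface Serial0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 100
  class shape-64k
!
map-class frame-relay shape-64k
 frame-relay traffic-rate 64000 96000
 frame-relay adaptive-shaping becn
 frame-relay custom-queue-list 1

The frame-relay traffic-shaping command enables shaping for all VCs on the
interface; the map class then sets the average and peak rates, BECN-driven
throttling, and per-VC custom queuing for the circuits that reference it.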
__________________
Policy Routing
About Policy Routing
Policy routing is the name given to use of a route map on packets to influence the
routing decision. The routing next hop or output interface can be chosen based
on inbound interface, source, or type of traffic. The IP precedence can also be set
by the route map.
If you're choosing outbound interface or next hop in response to destination, then
you're doing normal routing, subject to some policy perhaps. Policy routing in the
Cisco world refers specifically to routing based on source or other traffic
characteristics, other than destination. Since this may have performance impact,
use it only where needed and appropriate.
Policy routing has a performance impact: packets are process switched or, at
best, fast switched. It is therefore suitable for setting precedence at
low-speed edge routers, but not elsewhere.
Configuring Policy Routing
To specify use of a route-map for policy routing on an interface, configure:
ip policy route-map map-tag
The route map blocks then are defined using:
route-map map-tag [permit | deny] [sequence-number]
Route-map match conditions used for policy routing can match either packet
length or an IP extended access list.
To match the Layer 3 length of the packet, use:
match length min max
To match IP sources and destinations based on standard or extended access
list(s):
match ip address {access-list-number | name} [... access-list-number | name]
The route-map block's set conditions can specify precedence value, next-hop for
IP routing, or output interface.
To set the precedence value in the IP header:
set ip precedence value
To specify the next hop to which to route the packet (it need not be adjacent):
set ip next-hop ip-address [... ip-address]
To specify the output interface(s) for the packet:
set interface type number [... type number]
To specify the default route next hop for use when there is no explicit route:
set ip default next-hop ip-address [... ip-address]
To specify the default output interface(s) for use when there is no explicit route:
set default interface type number [... type number]
Fast-switched policy routing supports all of the match commands and most of the
set commands, except for the set ip default next-hop and set default interface
commands. The set interface command is supported only over point-to-point
links, unless a route-cache entry already exists that uses the same interface
specified in the set interface command in the route map.
When process switching policy routing, the routing table is used to check output
interface sanity. During fast switching, if the packet matches, the software blindly
forwards the packet to the specified interface. To configure fast-switched policy
routing on an interface:
ip route-cache policy
Packets generated by the router are not normally policy-routed. To enable local
policy routing of such packets, specify the route map to use. This is a global
configuration mode command.
ip local policy route-map map-tag
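For instance (the route-map name, ACL number, and next hop here are
illustrative), to policy-route Telnet traffic that the router itself
originates toward a particular next hop:

access-list 131 permit tcp any any eq telnet
!
route-map local-pol permit 10
 match ip address 131
 set ip next-hop 10.1.1.2
!
ip local policy route-map local-pol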
Related EXEC Commands
show ip cache policy
show ip local policy
Sample Configuration
The following example provides two sources with equal access to two different
service providers. Packets arriving on serial interface 1 from 1.1.1.1 are sent to
the next hop 3.3.3.3 if there is no explicit route for the packet's destination.
Packets arriving from 2.2.2.2 are sent to the next hop 4.4.4.4 if there is no explicit
route for the packet's destination. All other packets for which the router has no
explicit route to the destination are discarded.
access-list 1 permit 1.1.1.1
access-list 2 permit 2.2.2.2
!
interface serial 1
ip policy route-map equal-access
!
route-map equal-access permit 10
match ip address 1
set ip default next-hop 3.3.3.3
route-map equal-access permit 20
match ip address 2
set ip default next-hop 4.4.4.4
route-map equal-access permit 30
set default interface null0
__________________
QoS Policy Propagation via BGP (QPPB)
About QPPB
QoS Policy Propagation via BGP (QPPB) allows you to classify packets based on
access lists, BGP community lists, and BGP AS paths. The classification can
then set either IP precedence (a global tagging scheme), or internal QoS group
identifier (internal to the router). The BGP community can also contain both AS
and IP precedence information -- see the second example below. After
classification, other QoS features such as CAR and WRED can then be used to
enforce business policy.
Note that this allows you to set up a policy at one BGP-speaking router and
propagate it to other routers via BGP; hence the name. At the service provider
router connecting to a site, for example, a policy can be set up so that
traffic entering the network elsewhere is classified into the right class of
service (IP precedence bits). This can then interact with Tag Switching, or
MPLS.
If you set the QoS group ID, it can then be used for rate-limiting or WFQ based on
QoS group ID. This expands on the classes of service provided by the 8 IP
precedence values.
If you use IP precedence, it can now be set based on source or destination
address.
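As an illustration of such enforcement (the interface, rates, and rate-limit
ACL number are assumptions), a rate-limit access list can match the precedence
value set by QPPB, and CAR can then police that traffic:

access-list rate-limit 25 5
!
interface Hssi0/0/0
 rate-limit input access-group rate-limit 25 20000000 24000 32000 conform-action transmit exceed-action drop

Here precedence-5 traffic arriving on the interface is limited to 20 Mbps,
with excess traffic dropped.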
Configuring QPPB
Configuring QPPB—ToS
[no] bgp-policy input ip-prec-map
[no] bgp-policy output ip-prec-map
Configuring QPPB—QoS groups
[no] bgp-policy input ip-qos-map
[no] bgp-policy output ip-qos-map
Relevant EXEC Commands
show ip bgp
show ip bgp community-list community-list-number
show ip cef prefix
show ip interface
show ip route prefix
Sample Configuration
Configuring QPPB on an interface:
int hssi0/0/0
ip address 210.210.2.1 255.255.255.252
bgp-policy input ip-prec-map
Configuring BGP to set QoS groups:
router bgp 210
neighbor 210.210.14.1 remote-as 210
neighbor 210.210.14.1 route-map comm-relay-prec out
neighbor 210.210.14.1 send-community
!
ip bgp-community new-format
!
access-list 1 permit 210.210.1.0 0.0.0.255
!
route-map comm-relay-prec permit 10
match ip address 1
set community 210:5
!
route-map comm-relay-prec permit 20
set community 210:0
Configuring BGP to set TOS bits (precedence):
router bgp 210
table-map precedence-map
neighbor 200.200.14.4 remote-as 210
neighbor 200.200.14.4 update-source Loopback0
!
ip bgp-community new-format
!
ip community-list 1 permit 210:5
!
route-map precedence-map permit 10
match community 1
set ip precedence 5
!
route-map precedence-map permit 20
set ip precedence 0
__________________
Configuring RSVP
RSVP is disabled by default. To enable RSVP on an interface:
interface ...
ip rsvp bandwidth [interface-kbps] [single-flow-kbps]
The default maximum bandwidth is 75% of the interface bandwidth. A single flow
can reserve up to the entire reservable bandwidth. On subinterfaces, this uses
the more restrictive of the available bandwidths of the physical interface and the
subinterface.
You can configure the router to behave as though it is periodically receiving an
RSVP PATH message from a sender or previous-hop router, containing the
indicated attributes:
interface ...
ip rsvp sender session-ip-address sender-ip-address [tcp | udp | ip-protocol]
session-dport sender-sport previous-hop-ip-address previous-hop-interface
You can configure the router to behave as though it is continuously receiving an
RSVP RESV message from an originator containing the indicated attributes:
interface ...
ip rsvp reservation session-ip-address sender-ip-address [tcp | udp | ip-protocol]
session-dport sender-sport next-hop-ip-address next-hop-interface {ff | se | wf}
{rate | load} [bandwidth] [ burst-size]
Here {ff | se | wf } refers to the reservation style: Fixed Filter (ff) is single
reservation, Shared Explicit (se) is shared reservation, limited scope, and Wild
Card (wf) is shared reservation, unlimited scope.
The keyword {rate | load} refers to the QoS service: guaranteed bit rate service
or controlled load service. In both cases, the bit rate and token bucket size
are specified.
RSVP control messages are intended to use raw IP packets, but some hosts may
not be able to transmit such packets. If RSVP neighbors are seen using UDP
encapsulation, the router will automatically generate UDP-encapsulated
messages for the neighbors. Some hosts will not originate such messages unless
they hear from the router first. To specify the UDP multicast address used for
these messages, configure the interface command:
interface ...
ip rsvp udp-multicast [multicast-address]
To limit which routers may offer reservations, configure:
interface ...
ip rsvp neighbors access-list-number
Relevant EXEC Mode Commands
show ip rsvp interface [type number]
show ip rsvp installed [type number]
show ip rsvp neighbor [type number]
show ip rsvp sender [type number]
show ip rsvp request [type number]
show ip rsvp reservation [type number]
Sample Configuration
The following shows the various commands in use, in no particular scenario:
interface ethernet 2
ip rsvp bandwidth 15000 1000
ip rsvp udp-multicast 224.0.0.20
ip rsvp neighbors 1
ip rsvp reservation 224.250.0.2 130.240.1.1 UDP 20 30 130.240.4.1 Eth1 se load 100 60
ip rsvp reservation 224.250.0.2 130.240.2.1 TCP 20 30 130.240.4.1 Eth1 se load 150 65
ip rsvp reservation 224.250.0.3 0.0.0.0 UDP 20 0 130.240.4.1 Eth1 wf rate 300 60
ip rsvp reservation 224.250.0.3 0.0.0.0 UDP 20 0 130.240.4.1 Eth1 wf rate 350 65
ip rsvp sender 224.250.0.1 130.240.2.1 udp 20 30 130.240.2.1 loopback 1 50 5
ip rsvp sender 224.250.0.2 130.240.2.1 udp 20 30 130.240.2.1 loopback 1 50 5
ip rsvp sender 224.250.0.2 130.240.2.28 udp 20 30 130.240.2.28 loopback 1 50 5
__________________