
IPv4 and IPv6 Netflow on Cat 6500 and Nexus 7k

Netflow can be a handy tool on the network when you need to see information such as the total number of flows per second, bps per source/destination pair, and top talkers. It can also be used by service providers for usage-based billing or to help mitigate DDoS attacks. Here I will show you how to configure netflow on Cisco Nexus 7k and Catalyst 6500 based switches for both IPv4 and IPv6.

There are two main versions of netflow that I will talk about: V5 and V9. V5 has been around for a while and only supports IPv4. It is also a fixed format, in that the information you can get from a V5 netflow record is all that you will ever get. There is no extensibility built into V5 to allow it to support future data types.

Netflow V9, on the other hand, supports both IPv4 and IPv6. It is also template based. This means you can customize the data being sent to your collectors: you can pick and choose which data fields you want to see. V9 is also extensible, in that new data types can be added to the template as required in the future.

Let's get to the nuts and bolts of actually getting some netflow data off our switches and into a collector so that we can search and visualize it if we so choose. For my examples below I will be using V9, since I am running a dual-stacked network and like to collect both IPv4 and IPv6 statistics.

Cat 6500 Config
Here is the basic config to get netflow v9 running and exporting flow data to your collector of choice:

mls netflow interface
mls flow ip interface-full
mls flow ipv6 interface-full
mls aging long 300
mls aging normal 120
mls nde sender

Here is what is going on in the first few lines of config. We are enabling per-VLAN flow collection and setting both the IPv4 and IPv6 flow masks to interface-full, which exports all of the netflow data captured. There are several flow masks you can configure, each one sending less and less data. Setting the MLS aging timers is a good idea to get a more accurate view of flows, because netflow data is not exported for a flow until it has timed out; for long-lived flows, that is 32 minutes by default. Setting the long timer to 300 means that long-lived flows will be timed out of the netflow cache every 5 minutes, allowing the collection of data for those flows to happen much sooner. I have set the normal timer to 120 to age out inactive flows a little quicker than the default of 300 seconds (5 minutes). Finally, we enable the export of netflow data.

ip flow-export source Vlan100
ip flow-export version 9
ip flow-export destination 10.10.10.10 9995

Here we specify that netflow updates should be sourced from VLAN 100 and sent as version 9 records (since I am exporting both IPv4 and IPv6 flows), and that my netflow collector is at 10.10.10.10, listening on port 9995.
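If you feed more than one collector, my understanding is that the 6500 lets you configure a second export destination by simply repeating the destination command (the second address below is a made-up example; check your IOS release for the supported number of destinations):

ip flow-export destination 10.10.10.11 9995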

Now we configure our interface to collect netflow data.

interface Vlan100
 ip flow ingress

That is pretty much it. If you have tons of flows, you could also set up sampled netflow, where 1 out of every N packets is sampled into the netflow cache. This reduces CPU utilization on the switch and means less flow data to store on the netflow collector as well.
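As a rough sketch of what sampled netflow looks like on the 6500 (the 1-in-64 rate is an arbitrary example, and the exact syntax may vary by IOS release, so verify against your platform documentation):

mls sampling packet-based 64
!
interface Vlan100
 mls netflow sampling

With this in place, only sampled packets populate the netflow cache on that interface, so collector-side byte and packet counts must be scaled up by the sampling rate.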

Nexus 7k Config
Configuring netflow on a Nexus 7k is a bit different, in that NX-OS supports what is called Flexible Netflow, so we configure netflow in a more modular way. We configure Flow Records, which state what data we want collected; Flow Exporters, which specify where to send the flow data and what version of netflow to use; and Flow Monitors, which are placed on the interfaces where we wish to capture flow data. You can optionally specify a Flow Sampler, which does what its name implies: it captures sampled netflow data instead of looking at every packet.

Here I am configuring two separate flow records, one for IPv4 and one for IPv6, as you cannot have both IPv4 and IPv6 protocols in the same flow record. I am matching on specific keys and then collecting data based on those keys.

flow record ipv4-netflow
match ipv4 source address
match ipv4 destination address
match ip protocol
match ip tos
match transport source-port
match transport destination-port
collect routing source as
collect routing destination as
collect routing next-hop address ipv4
collect transport tcp flags
collect counter bytes
collect counter packets
collect timestamp sys-uptime first
collect timestamp sys-uptime last

flow record ipv6-netflow
match ip protocol
match transport source-port
match transport destination-port
match ipv6 source address
match ipv6 destination address
collect routing source as
collect routing destination as
collect routing next-hop address ipv6
collect transport tcp flags
collect counter bytes
collect counter packets
collect timestamp sys-uptime first
collect timestamp sys-uptime last

Now let's create the flow exporter and flow monitors. Again, you must create a separate flow monitor for IPv4 and IPv6.

flow exporter flow-exporter
destination 10.10.10.10
source loopback0
transport udp 9995 !This is the default value
version 9

flow monitor ipv4-netflow-mon
record ipv4-netflow
exporter flow-exporter

flow monitor ipv6-netflow-mon
record ipv6-netflow
exporter flow-exporter

Finally we configure the flow monitors on the interfaces we wish to obtain netflow data from:

interface Vlan300
ip flow monitor ipv4-netflow-mon input
ipv6 flow monitor ipv6-netflow-mon input
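If you want to use the optional Flow Sampler mentioned earlier, it is configured as its own object and attached alongside the monitor on the interface. A sketch, assuming NX-OS Flexible Netflow sampler syntax (the sampler name and 1-in-100 rate are examples; confirm against your NX-OS release):

sampler netflow-sampler
  mode 1 out-of 100

interface Vlan300
  ip flow monitor ipv4-netflow-mon input sampler netflow-sampler
  ipv6 flow monitor ipv6-netflow-mon input sampler netflow-sampler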

That’s it! You can verify you are collecting flow data by issuing the following commands. On a 6500 the commands are:

show mls netflow ip – Shows the flow cache table.
show mls netflow ipv6 – Shows the flow cache table for IPv6.
show mls nde – Shows netflow export info.

For the Nexus the verification commands are:

show hardware flow ip – Shows IPv4 flow info
show hardware flow ipv6 – Shows IPv6 flow info
show flow exporter – Shows flow export info

Categories: IPv6, netflow, nexus
Comments

January 30, 2015 at 2:27 pm

    Nice post! Might be helpful to include some notes on series F vs. M line cards and how they process flow in software vs. hardware, and some commands to determine how much processing power is being consumed by flow generation.

    January 30, 2015 at 2:35 pm

      Thanks for the feedback. I will look into adding this to a future article about some of the more advanced features of netflow and what to look for as far as resources go per line card.
