
na_ifgrp - Manages interface group (ifgrp) configuration.


ifgrp create [ single | multi | lacp ] ifgrp_name [ -b {rr|mac|ip|port} ] [ interface_list ]

ifgrp destroy ifgrp_name

ifgrp delete ifgrp_name interface_name

ifgrp add ifgrp_name interface_list

ifgrp { favor | nofavor } interface

ifgrp status [ ifgrp_name ]

ifgrp stat ifgrp_name [ interval ]

In the ifgrp commands, ifgrp_name stands for the name of an interface group. The name must be a string that is no longer than 15 characters and meets the following criteria:

It begins with a letter.

It does not contain a space.

It is not in use for another interface group.

Interface group names are case-sensitive.


An interface group is a mechanism that supports aggregation of network interfaces ("links") into one logical interface unit ("trunk").

Once created, an interface group is indistinguishable from a physical network interface. You can inspect and modify statistical and configuration information using the ifconfig and netstat commands, among others.

You can create an interface group in one of three modes: multi, single, or LACP.

Multi-mode interface groups are partly compliant with IEEE 802.3ad. Multi-mode interface groups support static configuration but not dynamic aggregate creation. In a multi-mode interface group, all links are simultaneously active. This mode is only useful if all the links are connected to a switch that supports trunking/aggregation over multiple port connections. The switch must be configured to understand that all the port connections share a common media access control (MAC) address and are part of a single logical interface.

Dynamic multi-mode (LACP) interface groups are fully compliant with IEEE 802.3ad. The LACP protocol is used to determine which of the underlying links can be aggregated, and also to monitor link status. If the configuration on both ends of the links is correct, all the interfaces of an interface group are active.

While the switch is responsible for determining how to forward incoming packets to the node, the node supports load balancing of the network traffic transmitted over a multi-mode/LACP interface group. You can choose any of the following four methods:

IP based. The outgoing interface is selected on the basis of the node's and the client's IP addresses.

MAC based. The outgoing interface is selected on the basis of the node's and the client's MAC addresses.

Round-robin. All the interfaces are selected on a round-robin basis.

Port based. The outgoing interface is selected on the basis of the transport-layer connection 4-tuple: the node's IP address and port number and the client's IP address and port number. For traffic such as ICMP, only the source and destination IP addresses are used.

Since the round-robin load balancing policy may lead to out-of-order packet delivery, use it with caution.
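The deterministic policies above all share one idea: hash the addressing fields of a flow so that the same flow always leaves on the same link, which preserves packet order. The sketch below illustrates this in Python; the actual hash function Data ONTAP uses is not documented here, so a generic digest stands in for it, and `select_link` is a hypothetical name.

```python
import hashlib

def select_link(links, node_ip, client_ip, node_port=None, client_port=None):
    """Pick an outgoing link for a flow by hashing its addressing fields.

    Sketch only: the real hash used by Data ONTAP is not documented here,
    so a generic digest-based hash stands in for it.
    """
    key = f"{node_ip}-{client_ip}"
    if node_port is not None and client_port is not None:
        # port-based policy: include the transport-layer 4-tuple
        key += f"-{node_port}-{client_port}"
    digest = hashlib.md5(key.encode()).digest()
    return links[digest[0] % len(links)]

links = ["e10", "e5"]
# The same flow always maps to the same link, preserving packet order.
assert select_link(links, "10.0.0.1", "10.0.0.9") == \
       select_link(links, "10.0.0.1", "10.0.0.9")
```

With the port-based policy, two connections between the same pair of hosts can hash to different links, spreading load at the cost of per-connection (rather than per-host) granularity.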

In single-mode interface groups, only one of the links is active at a time. No configuration is necessary on the switch. If Data ONTAP detects a fault in the active link, a standby link of the interface group, if available, is activated. Note that load balancing is not supported on single-mode interface groups.

Network interfaces belonging to an interface group do not have to be on the same network card. With the ifgrp command, you can also create second-level single-mode or multi-mode interface groups. For example, suppose a subnetwork has two switches that are capable of trunking over multiple port connections. The storage system has a two-link multi-mode interface group to one switch and a two-link multi-mode interface group to the second switch. You can create a second-level single-mode interface group that contains both of the multi-mode interface groups. When you configure the second-level interface group using the ifconfig command, only one of the two multi-mode interface groups is brought up as the active link. If all the underlying interfaces in the active interface group fail, the second-level interface group activates its standby interface group. Note that multi-level LACP interface groups are not permitted.

You can destroy an interface group only if you have configured it down using the ifconfig command.


ifgrp create [ single | multi | lacp ] ifgrp_name [ -b {rr|mac|ip|port} ] [ interface_list ]

Creates a new instance of an interface group. If no mode is specified, the interface group is created in multi-mode. If a list of interfaces is provided, the interfaces are configured and added to the interface group. The load balancing method is specified with the -b option:

- rr refers to round-robin load balancing.

- ip refers to IP-based load balancing. IP-based load balancing is the default for multi-mode interface groups if no method is specified.

- mac refers to MAC-based load balancing.

- port refers to port-based load balancing.

ifgrp destroy ifgrp_name

Destroys a previously created interface group. The interface group must be configured down prior to invoking this option.

ifgrp delete ifgrp_name interface_name

Deletes the specified interface from a previously created interface group. The interface group must be configured down prior to invoking this option.

ifgrp add ifgrp_name interface_list

Adds a list of interfaces to an existing interface group trunk. Each interface corresponds to a single link in the trunk.

ifgrp favor interface

Designates the specified interface as active in a single-mode interface group. When a single-mode interface group is created, an interface is randomly selected to be the active interface. Use the favor command to override the random selection.

ifgrp nofavor interface

If the specified interface is part of a single-mode interface group, this command ensures that the link corresponding to this interface is not preferred when determining which link to activate.
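The favor/nofavor behavior can be sketched as follows. This is illustrative Python, not ONTAP code; the exact selection logic is an assumption based on the descriptions above, and `pick_active` is a hypothetical name.

```python
import random

def pick_active(links, favored=None, unfavored=()):
    """Choose the active link of a single-mode interface group (sketch)."""
    if favored in links:
        return favored                  # ifgrp favor overrides the random choice
    preferred = [l for l in links if l not in unfavored]
    # ifgrp nofavor links are considered only when no other link is available
    return random.choice(preferred or list(links))

assert pick_active(["e3a", "e3b"], favored="e3b") == "e3b"
assert pick_active(["e3a", "e3b"], unfavored=("e3b",)) == "e3a"
assert pick_active(["e3b"], unfavored=("e3b",)) == "e3b"
```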

ifgrp status [ ifgrp_name ]

Displays the status of the specified interface group. If no interface group is specified, the status of all interface groups is displayed.

ifgrp stat ifgrp_name [ interval ]

Displays the number of packets received and transmitted on each link that makes up the interface group. You can specify the time interval, in seconds, at which the statistics are displayed. By default, the statistics are displayed at a two-second interval.


The ifgrp driver constantly checks each interface group and each link for status. Links issue two types of indications:

The link is receiving active status from its media access unit.

The link is not receiving active status from its media access unit.

In the case of a link that is itself an interface group, the media access unit refers to the collection of media access units of the underlying physical network interfaces. If any of the underlying media access units issues an up indication, the ifgrp driver issues an up indication to the next higher-level interface group on its behalf. If all underlying physical network interfaces issue broken indications, the ifgrp driver issues a broken indication to the next-level interface group.
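The aggregation rule just described reduces to: a nested interface group reports up if any underlying link is up, and broken only when all of them are. A minimal sketch (function name is hypothetical):

```python
# Sketch of the indication-aggregation rule for nested interface groups:
# "up" propagates if any underlying link is up; "broken" only if all are broken.
def group_indication(link_indications):
    return "up" if any(i == "up" for i in link_indications) else "broken"

assert group_indication(["up", "broken"]) == "up"
assert group_indication(["broken", "broken"]) == "broken"
```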

If all the links in an interface group are broken, the ifgrp driver issues a system log message similar to this:

Fri Oct 16 15:09:29 PDT [toaster: pifgrp_monitor]: ifgrp0: all links are down

If all links on an interface group are broken and a link subsequently comes back up, the ifgrp driver issues a system log message similar to this:

Fri Oct 16 15:09:42 PDT [toaster: pifgrp_monitor]: ifgrp0: switching to e3a

In the case of LACP interface groups, LACP frames are exchanged periodically. Failure to receive LACP frames within a specified time period is construed as a link failure and the corresponding link is marked down.

In the case of single-mode interface groups, broadcast frames are sent out from each interface periodically. Failure to receive these periodic frames provides a hint about the link status.


The following command creates a multi-mode interface group ifgrp0, with IP-based load balancing, consisting of two links, e10 and e5:

ifgrp create multi ifgrp0 -b ip e10 e5

The status option prints out results in the following form. Here is an example of the output for ifgrp0:

ifgrp status

  default: transmit 'IP Load balancing', IFGRP Type 'multi_mode', fail 'log'
  ifgrp0: 2 links, transmit 'none', IFGRP Type 'multi-mode', fail 'default'

  IFGRP Status     Up      Addr_set
          e10: state up, since 05Oct2001 17:17:15 (05:23:05)
                  mediatype: auto-1000t-fd-up
                  flags: enabled
                  input packets 2000, input bytes 12800
                  output packets 173, output bytes 1345
                  up indications 1, broken indications 0
                  drops (if) 0, drops (link) 0
                  indication: up at boot
                          consecutive 3, transitions 1
          e5: state broken, since 05Oct2001 17:18:03 (00:10:03)
                  mediatype: auto-1000t-fd-down
                  flags: enabled
                  input packets 134, input bytes 987
                  output packets 20, output bytes 156
                  up indications 1, broken indications 1
                  drops (if) 0, drops (link) 0
                  indication: broken
                          consecutive 4, transitions 1

In this example, one of the ifgrp0 links, e10, is in the active (up) state. The second link, e5, was marked broken when a link failure was detected. ifgrp0 is configured to transmit over multiple links, and its failure behavior is the default (send errors to the system log). Links are in one of three states:

The link is active and is sending and receiving data (up).

The link is inactive but is believed to be operational (down).

The link is inactive and is believed to be nonoperational (broken).

In this example, the active link has been in the up state for 5 hours, 23 minutes, 5 seconds. The inactive link has been inactive for the last 10 minutes. Both links are enabled (flags: enabled), meaning that they are configured to send and receive data. During takeover, links can also be set to match the MAC address of the partner. The flags field is also used to indicate whether a link has been marked as favored.

Links constantly issue either up or broken indications based on their interaction with the switch. The consecutive count indicates the number of consecutively received indications with the same value (in this example, up). The transitions count indicates how many times the indication has changed from up to broken or from broken to up.
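How the two counters evolve can be sketched as follows. This mirrors the description above; it is not ONTAP source code, and the class name is hypothetical.

```python
class LinkMonitor:
    """Track the 'consecutive' and 'transitions' counters shown by ifgrp status."""

    def __init__(self):
        self.last = None
        self.consecutive = 0
        self.transitions = 0

    def indicate(self, value):
        """Record one indication; value is 'up' or 'broken'."""
        if value == self.last:
            self.consecutive += 1        # same indication again
        else:
            if self.last is not None:
                self.transitions += 1    # indication flipped between up and broken
            self.last = value
            self.consecutive = 1

m = LinkMonitor()
for v in ["up", "up", "up", "broken"]:
    m.indicate(v)
# Three ups then a break: one transition, one consecutive broken so far.
assert m.transitions == 1 and m.consecutive == 1
```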

If ifgrp0 is a link in a second-level interface group (for example, ifgrp create ifgrp2 ifgrp0), an additional line is added to its status information:

      trunked: ifgrp2

The following example displays statistics about multi-mode interface group ifgrp0:

ifgrp stat ifgrp0

  Interface group (trunk) ifgrp0
      e10                     e5
  In      Out             In      Out
  8637076 47801540        158     159
  1617    9588            0       0
  1009    5928            0       0
  1269    7506            0       0
  1293    7632            0       0
  920     5388            0       0
  1098    6462            0       0
  2212    13176           0       0
  1315    7776            0       0


An interface group behaves almost identically to a physical network interface in the HA pair. For the takeover of a partner to work properly, three things are required:

1. The local node must specify, using the partner option of the ifconfig command, the mapping of the partner's interface group. For example, to map the partner's ifgrp2 interface to the local ifgrp1 interface, the following command is required:

ifconfig ifgrp1 partner ifgrp2

Note that the interface name must be specified, not its address. If trunks are nested, the mapping must be at the top-level trunk; you do not map link by link.

2. After takeover, the partner must "create" its interface group. Typically, this takes place in the /etc/rc file. For example:

ifgrp create ifgrp2 e3a e3b

When executed in takeover mode, the local node does not actually create an interface group. Instead, it looks up the mapping (in this example, partner ifgrp2 to local ifgrp1) and initializes its internal data structures. The interface list (in this example, e3a and e3b) is ignored because the local node can have a different mapping of devices for its ifgrp1 trunk.

3. After the partner interface group has been initialized, it must be configured. For example:

ifconfig ifgrp2 `hostname`-ifgrp2

Only the create, stat, and status options are enabled in partner mode. The create option does not create a new interface group in partner mode; instead, it initializes internal data structures to point at the mapped local ifgrp interface. The status and stat options reference the mapped interface group; however, all links are printed using the local device names.

When using multi-mode interface groups with HA pairs, connecting the interface groups to a single switch constitutes a single point of failure. By adding a second switch and setting up two multi-mode interface groups on each node, with each interface group connected to a separate switch, the interface groups continue to operate in the face of a single switch failure. The following /etc/rc file sequence illustrates this approach:

  # configuration for node 1

  # first level multi interface group:
  # attach e4a and e4b to Switch 1
  ifgrp create multi ifgrp0 e4a e4b

  # first level multi interface group:
  # attach e4c and e4d to Switch 2
  ifgrp create multi ifgrp1 e4c e4d

  # second level single interface group consisting of both
  # first level interface groups; only one active at a time
  ifgrp create single ifgrp10 ifgrp0 ifgrp1

  # use ifgrp0 unless it is unavailable
  ifgrp favor ifgrp0

  # configure the interface group with an interface and partner
  ifconfig ifgrp10 `hostname`-ifgrp10 partner ifgrp10

The partner node is configured similarly; the favored first level interface in this case is the interface group connected to "Switch 2".


IEEE 802.3ad requires all underlying interfaces to run at the same speed and in full-duplex mode. Additionally, most switches do not support mixing 10/100 and GbE interfaces in an aggregate/trunk. Check the documentation that comes with your Ethernet switch or router for how to configure the Ethernet interfaces to be full-duplex. (Hint: allow both ends of a link to autonegotiate.)


Though interface groups can support up to sixteen links, the number of interfaces in an aggregate is limited by the switch.


na_ifconfig(1), na_netstat(1), na_sysconfig(1)
