Manual Pages


Table of Contents

NAME

na_vol - Commands for managing volumes, displaying volume status, moving volumes, and copying volumes.

SYNOPSIS

vol command argument ...

DESCRIPTION

The vol family of commands manages volumes. A volume is a logical unit of storage, containing a file system image and associated administrative options such as snapshot schedules. The disk space that a volume occupies (as well as the characteristics of the RAID protection it receives) is provided by an aggregate (see na_aggr(1)).

Prior to Data ONTAP 7.0, volumes and aggregates were fused into a single administrative unit, where each aggregate (RAID-level collection of disks) contained exactly one volume (logical, user-visible file system). The vol family of commands managed both the lower-level disk storage aspects and the higher-level file system aspects of these tightly-bound volume/aggregate pairs. Such traditional volumes still exist for backwards compatibility.

Administrators can now decouple the management of logical file systems (volumes) from their underlying physical storage (aggregates). In particular, this new class of flexible volumes gives administrators much greater freedom: flexible volumes can be created, resized, and destroyed independently of the physical storage beneath them.

Aggregates can be created, destroyed, and managed independently (via the aggr command family). When an aggregate is created, it is a completely clean slate, free of any independent logical file systems (flexible volumes).

Refer to the Storage Management Guide for the maximum number of volumes that a storage system can support.

The administrator can take Snapshot copies, create SnapMirror relationships, and perform SnapRestore operations on FlexVol volumes independently of all other FlexVol volumes contained in the same aggregate.

Administrators can also move 7-mode flexible volumes within controllers to rebalance workloads or adjust capacity utilization. As part of the move, all associated volume attributes, such as replica relationships, Snapshot relationships, MetroCluster relationships, thin provisioning settings, and clones, move nondisruptively. FlexVol volumes can be moved while servicing block I/O to SCSI clients, without application outage. In 7-mode, vol move applies only to volumes containing LUNs (excluding volumes exported over NFS and CIFS).

The vol move of 7-mode flexible volumes consists of three phases: the setup phase, the data copy phase, and the cutover phase.

To move a FlexVol volume to another aggregate, the user must have the necessary permissions, and the aggregate that the volume is being moved to must have sufficient available space. The destination aggregate may be laid out on a different drive type, may use drives of a different size, or may have different RAID properties than the source volume's aggregate.

Aggregates that contain one or more flexible volumes cannot be restricted or taken offline. In order to restrict or offline an aggregate, it is necessary to first destroy all of its contained flexible volumes. This guarantees that flexible volumes cannot disappear in unexpected and unclean ways, without having their system state properly and completely cleaned up. This also makes sure that any and all protocols that are being used to access the data in the flexible volumes can perform clean shutdowns. Aggregates that are embedded in traditional volumes can never contain flexible volumes, so they do not operate under this limitation.

Because FlexVol volumes (also called flexible volumes) are independent entities from their containing aggregates, their size may be both increased and decreased. FlexVol volumes may be as small as 20 MB. The maximum size for a FlexVol volume depends on the volume format (32-bit or 64-bit) and the storage system model. 32-bit FlexVol volumes are never larger than 16 TB. Refer to the System Configuration Guide for the maximum sizes of 64-bit aggregates, which determine the maximum size of the volumes they contain.

Clone volumes can be quickly and efficiently created. A clone volume is in effect a writable snapshot of a flexible volume. Initially, the clone and its parent share the same storage. More storage space is consumed only as one volume or the other changes. Clones may be split from their parents, promoting them to fully-independent flexible volumes that no longer share any blocks. A clone is always created in the same aggregate as its parent. Clones of clones may be created.

FlexCache volumes can be quickly created using the vol command. FlexCache volumes are housed on the local node, referred to as the caching node, and are cached copies of separate volumes that reside on a different node, referred to as the origin node. Clients access the FlexCache volume as they would access any other volume exported over NFS. FlexCache must be licensed on the caching node but is not required on the origin node. On the origin node, the option flexcache.enable must be set to "on" and the option flexcache.access must be set appropriately. The current version of FlexCache supports client access via NFSv2 and NFSv3 only.

The vol command family is compatible in usage with earlier releases and can manage both traditional and flexible volumes. Some new vol commands in this release apply only to flexible volumes. The new aggr command family provides control over RAID-level storage. The underlying aggregate of a flexible volume can be managed only through the aggr commands.

The vol command family has a special set of restrictions that apply only when it is executed on a Cluster-Mode deployment of Data ONTAP 8.0 and later via a special provision of the Cluster CLI. These restrictions are necessary in that environment so that the commands mesh cleanly with the additional cluster-wide databases. If these restrictions are encountered, the vol command family provides detailed information about them.

The vol commands can create new volumes, destroy existing ones, change volume status, increase the size of a volume (or decrease the size if it is a flexible volume), apply options to a volume, copy one volume to another, display status, move volumes within controllers (flexible volumes only) and create and manage clones of flexible volumes.

Each volume has a name, which can contain letters, numbers, and the underscore character (_); the first character must be a letter or underscore.

A volume may be online, restricted, iron_restricted, or offline. When a volume is restricted, certain operations are allowed (such as vol copy and parity reconstruction), but data access is not allowed. When a volume is iron_restricted, wafliron is running in optional commit mode on the volume and data access is not allowed.

Volumes can be in combinations of the following states:

copying
The volume is currently the target of active vol copy or snapmirror operations.

degraded
The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed.

flex
The volume is a flexible volume contained by an aggregate and may be grown or shrunk in 4K increments.

foreign
The disks in the volume's containing aggregate were moved to the current node from another node.

growing
Disks are in the process of being added to the volume's containing aggregate.

initializing
The volume or its containing aggregate is in the process of being initialized.

invalid
The volume does not contain a valid file system. This typically happens only after an aborted vol copy operation.

ironing
A WAFL consistency check is being performed on the volume's containing aggregate.

mirror degraded
The volume's containing aggregate is a mirrored aggregate, and one of its plexes is offline or resynchronizing.

mirrored
The volume's containing aggregate is mirrored and all of its RAID groups are functional.

needs check
A WAFL consistency check needs to be performed on the volume's containing aggregate.

out-of-date
The volume's containing aggregate is mirrored and needs to be resynchronized.

partial
At least one disk was found for the volume's containing aggregate, but two or more disks are missing.

raid0
The volume's containing aggregate consists of RAID-0 (no parity) RAID groups (V-Series and NetCache only).

raid4
The volume's containing aggregate consists of RAID-4 RAID groups.

raid_dp
The volume's containing aggregate consists of RAID-DP (Double Parity) RAID groups.

reconstruct
At least one RAID group in the volume's containing aggregate is being reconstructed.

resyncing
One of the plexes of the volume's containing mirrored aggregate is being resynchronized.

snapmirrored
The volume is a snapmirrored replica of another volume.

sv-restoring
Restore-on-Demand is currently in progress on this volume. The volume is accessible, even though all of the blocks in the volume may not have been restored yet. Use the snapvault status command to view the restore progress.

trad
The volume is what is referred to as a traditional volume. It is fused to an aggregate, and no other volumes may be contained by this volume's containing aggregate. This type is exactly equivalent to the volumes that existed before Data ONTAP 7.0.

unrecoverable
The volume is a flexible volume that has been marked unrecoverable. Please contact Customer Support if a volume appears in this state.

verifying
A RAID mirror verification operation is currently being run on the volume's containing aggregate.

wafl inconsistent
The volume or its containing aggregate has been marked corrupted. Please contact Customer Support if a volume appears in this state.

flexcache
The volume is a FlexCache volume.

connecting
The volume is a FlexCache volume, and the network connection between this volume and the origin volume is not yet established.

USAGE

The following commands are available in the vol suite:

  add          destroy        offline     scrub
  autosize     lang           online      size
  clone        media_scrub    options     split
  container    mirror         rename      status
  copy         move           restrict    verify
  create

vol add volname
[ -f ]
[ -n ]
[ -g raidgroup ]
{ ndisks[@size]
|
-d disk1 [ disk2 ... ] [ -d diskn [ diskn+1 ... ] ] }

Adds the specified set of disks to the aggregate portion of the traditional volume named volname, and grows the user-visible file system portion of the traditional volume by that same amount of storage. See the na_aggr(1) man page for a description of the various arguments.

The vol add command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations on their containing aggregates be handled via the new aggr command suite. In this specific case, aggr add should be used.
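
For example, on a hypothetical traditional volume named vol0 (the volume and disk names here are illustrative assumptions), the first form below lets Data ONTAP choose three 144-GB disks, while the second names the specific disks to add:

      vol add vol0 3@144
      vol add vol0 -d 8a.16 8a.17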

vol autosize volname
[ -m size [k|m|g|t] ]
[ -i size [k|m|g|t] ]
[ -minimum-size size [k|m|g|t] ]
[ -grow-threshold-percent <used space %> ] [ -shrink-threshold-percent <used space %> ] [ grow (on) | grow_shrink | off | reset ]

Volume autosize allows a flexible volume to automatically grow or shrink in size within an aggregate. Autogrow is useful when a volume is about to run out of available space, but there is space available in the containing aggregate for the volume to grow. Autoshrink is useful in combination with autogrow. It can return unused blocks to the aggregate when the amount of data in a volume drops below a user configurable shrink threshold. Autoshrink can be enabled via the grow_shrink subcommand. Autoshrink without autogrow is not supported. The autogrow feature works together with snap autodelete to automatically reclaim space when a volume is about to get full. The volume option try_first controls the order in which these two reclaim policies are used.

By default autosize is disabled. The minimum autosize is set to the volume size, the maximum autosize is set to 120% of the volume size, and the autosize increment is set to the lesser value of either 1GB or 5% of the volume size at the time of command. The grow (or on) subcommand can be used to enable autogrow on a volume. The grow_shrink subcommand enables both autogrow and autoshrink. The reset subcommand resets the settings of volume autosize to defaults. The off subcommand can be used to disable autosize.

The -m switch allows the user to specify the maximum size to which a flexible volume will be allowed to grow. When increasing the size of a volume, Data ONTAP uses the increment size specified with the -i switch as a guide; the actual size increase may be larger or smaller. You can specify the increment amount either as a fixed size (in bytes) or as a percentage. The percentage is converted to a fixed size that is based on the current volume size. If the value of the -m parameter is invalidated by a manual volume resize or is invalid when autosize is enabled, the maximum size is reset to 120% of the volume size, and the autosize increment is reset to the lesser value of either 1GB or 5% of the volume size.

The -minimum-size switch allows you to specify the minimum size below which a flexible volume is not allowed to shrink. The default value is the size of the volume at the time grow_shrink is enabled. If the value of the -minimum-size parameter is invalidated by a manual volume resize or is invalid when autosize is enabled, the minimum size is reset to the volume size. The -shrink-threshold-percent switch allows you to specify the threshold percentage of used space below which the autoshrink action is triggered. Similarly, if the used space in a volume exceeds the value specified with the -grow-threshold-percent switch, the autogrow action is triggered.
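
As an illustrative sketch (the volume name vol1 and the sizes are assumptions), the first command below enables autogrow only, capped at 150 GB with 5-GB increments; the second enables both autogrow and autoshrink with explicit thresholds; the last two disable autosize and restore the defaults:

      vol autosize vol1 -m 150g -i 5g grow
      vol autosize vol1 -minimum-size 40g -grow-threshold-percent 85 -shrink-threshold-percent 50 grow_shrink
      vol autosize vol1 off
      vol autosize vol1 reset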

vol clone create clone_vol
[ -s none | file | volume ]
-b parent_vol [ parent_snap ]

The vol clone create command creates a flexible volume named clone_vol on the local node that is a clone of a backing flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.

If a specific parent_snap within parent_vol is provided, it is chosen as the backing snapshot. Otherwise, the node will create a new snapshot named clone_parent_<UUID> (using a freshly generated UUID) in parent_vol for that purpose.

The parent_snap is locked in the parent volume, preventing its deletion until the clone is either destroyed or split from the parent using the vol clone split start command.

Backing flexible volume parent_vol may be a clone itself, so "clones of clones" are possible. A clone is always created in the same aggregate as its parent_vol.

The vol clone create command fails if the chosen parent_vol is currently involved in a vol clone split operation.

The vol clone create command fails if the chosen parent_vol is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.

By default, the clone volume is given the same storage guarantee as the parent volume; the default may be overridden with the -s switch. If the clone is created with the -s option and a guarantee of volume or none, fractional_reserve takes the same value as the parent's; with a guarantee of file, fractional_reserve is set to 100. See the vol create command for more information on the storage guarantee.

A clone volume may not be currently used as a target for vol copy or volume snapmirror. A clone volume can be used as the target for qtree snapmirror.
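
For example, assuming an existing flexible volume vol1 with a snapshot named snap1 (both names are hypothetical), the first command below backs the clone with the existing snapshot, while the second lets the node create a backing snapshot and overrides the space guarantee:

      vol clone create vol1_clone -b vol1 snap1
      vol clone create vol1_dev -s none -b vol1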

vol clone split start volname

This command begins separating clone volume volname from its underlying parent. New storage is allocated for the clone volume that is distinct from the parent.

This process may take some time and proceeds in the background. Use the vol clone split status command to view the command's progress.

Both clone and parent volumes remain available during this process of splitting them apart. Upon completion, the snapshot on which the clone was based will be unlocked in the parent volume. Any snapshots in the clone are removed at the end of processing. Use the vol clone split stop command to stop this process.

The vol clone split start command also fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.

vol clone split status [volname]

This command displays the progress in separating clone volumes from their underlying parent volumes. If volname is specified, then the splitting status is provided for that volume. If no volume name appears on the command line, then status for all clone splitting operations that are currently active is provided.

The vol clone split status command fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.

vol clone split estimate [volname]

This command displays an estimate of the free disk space required in the aggregate to split the indicated clone volume from its underlying parent volume. The value reported may differ from the space actually required to perform the split, especially if the clone volume is changing when the split is being performed.

vol clone split stop volname

This command stops the process of separating a clone from its parent volume. All of the blocks that were formerly shared between volname and its backing volume that have already been split apart by the vol clone split start will remain split apart.

The vol clone split stop command fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.
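
The clone split subcommands are typically used as a sequence. An illustrative session on a hypothetical clone named vol1_clone: estimate the free space required, start the split, monitor its progress, and (optionally) stop it, keeping any blocks already split apart:

      vol clone split estimate vol1_clone
      vol clone split start vol1_clone
      vol clone split status vol1_clone
      vol clone split stop vol1_clone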

vol container volname

This command displays the name of the aggregate that contains flexible volume volname.

The vol container command fails if asked to operate on a traditional volume, as its tightly-bound aggregate portion cannot be addressed independently.
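
For example, for a hypothetical flexible volume named vol1, the following reports the name of its containing aggregate:

      vol container vol1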

vol copy abort operation_number | all

This command terminates volume copy operations. The operation_number parameter in the vol copy abort command specifies which operation to terminate. If all is specified, all volume copy operations are terminated.

vol copy start [ -p {inet | inet6 } ] [ -S | -s snapshot ] source destination

Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the node automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume.

The -p option is used for selecting the IP connection mode. The value for this argument can be inet or inet6. When the value is inet6, the connection will be established using IPv6 addresses only. If there is no IPv6 address configured for the destination, then the connection will fail. When the value is inet, the connection will be established using IPv4 addresses only. If there is no IPv4 address configured on the destination, then the connection will fail. When this argument is not specified, then the connection will be tried using both IPv6 and IPv4 addresses. inet6 mode will have higher precedence than inet mode. If a connection request using inet6 mode fails, the connection will be retried using inet mode.

This option is not meaningful when an IP address is specified instead of a hostname. If the IP address format and the connection mode do not match, the operation prints an error message and aborts.

The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types.

The source and destination volumes can be on the same node or on different nodes. If the source or destination volume is on a node other than the one on which the vol copy start command was entered, specify the volume name in the node_name:volume_name format.


The nodes involved in a volume copy must meet the following requirements for the vol copy start command to be completed successfully:

The source volume must be online and the destination volume must be offline.

If data is copied between two nodes, each node must be defined as a trusted host of the other node. That is, each node's name must be in the /etc/hosts.equiv file of the other node. If one node is not in the /etc/hosts.equiv file of the other, a "Permission denied" error message is displayed to the user.

If data is copied on the same node, localhost must be included in the node's /etc/hosts.equiv file. Also, the loopback address must be in the node's /etc/hosts file. Otherwise, the node cannot send packets to itself through the loopback address when trying to copy data.

The usable disk space of the destination volume must be greater than or equal to the usable disk space of the source volume. Use the df pathname command to see the amount of usable disk space of a particular volume.

Each vol copy start command generates two volume copy operations: one for reading data from the source volume and one for writing data to the destination volume. Each node supports up to four simultaneous volume copy operations.
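
An illustrative copy to another node, assuming a destination node named nodeB and volumes vol1 and vol1_copy (all names are assumptions; vol offline is run on the destination node). The destination is first taken offline, all snapshots are copied with -S, and progress is then monitored:

      vol offline vol1_copy
      vol copy start -S vol1 nodeB:vol1_copy
      vol copy status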

vol copy status [ operation_number]

Displays the progress of one or all active volume copy operations, if any. The operations are numbered from 0 through 3. If no operation_number is specified, then status for all active vol copy operations is provided.

vol copy throttle [ operation_number ] value

This command controls the performance of the volume copy operation. The value ranges from 10 (full speed) to 1 (one-tenth of full speed). The default value is maintained in the node's vol.copy.throttle option and is set to 10 (full speed) at the factory. The performance value can be applied to an operation specified by the operation_number parameter. If an operation number is not specified, the command applies to all active volume copy operations.

Use this command to limit the speed of volume copy operations if they are suspected to be causing performance problems on a node. In particular, the throttle is designed to help limit the volume copy's CPU usage. It cannot be used to fine-tune network bandwidth consumption patterns.

The vol copy throttle command only enables the speed of a volume copy operation that is in progress to be set. To set the default volume copy speed to be used by future volume copy operations, use the options command to set the vol.copy.throttle option.
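
For example, to slow an in-progress operation and then change the default speed used by future copies (the operation number 0 is an assumption; check vol copy status first):

      vol copy throttle 0 3
      options vol.copy.throttle 5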

vol create flex_volname
[ -l language_code ]
[ -s none | file | volume ]
aggrname size

vol create trad_volname
[ -l language_code ]
[-f] [-n] [-m]
[-L [compliance | enterprise]]
[-t raidtype ] [-r raidsize ]
{ ndisks[@size]
|
-d disk1 [ disk2 ... ] [ -d diskn [ diskn+1 ... ] ] }

vol create flexcache_volname
[ -l language_code ]
aggrname [ size [k|m|g|t] ]
[ -S remotehost:remotevolume ]

Creates a flexible, traditional, or FlexCache volume.

If the first format is used, a flexible volume named flex_volname is created in the storage provided by aggregate aggrname. The size argument specifies the size of the flexible volume being created. It is a number, optionally followed by k, m, g, or t, denoting kilobytes, megabytes, gigabytes, or terabytes respectively. If none of the above letters is used, the unit defaults to bytes (and is rounded up to the nearest 4 KB). FlexVol volumes (also called flexible volumes) may be as small as 20 MB. The maximum size for a FlexVol volume depends on the volume format (32-bit or 64-bit), which is inherited from the containing aggregate, and the storage system model. 32-bit FlexVol volumes are never larger than 16 TB. Refer to the System Configuration Guide for the maximum sizes of 64-bit aggregates, which determine the maximum size of the volumes they contain.

The optional -s switch controls whether the volume is guaranteed some amount of disk space. The default value is volume, which means that the entire size of the volume will be preallocated. The file value means that space will be preallocated for all the space-reserved files and LUNs within the volume. Storage is not preallocated for files and LUNs that are not space-reserved; writes to these can fail if the underlying aggregate has no space available to store the written data. This value can be set if fractional_reserve is 100. Note that a guarantee of file will no longer be supported in a future release of Data ONTAP. The none value means that no space will be preallocated, even if the volume contains space-reserved files or LUNs; if the aggregate becomes full, space will not be available even for space-reserved files and LUNs within the volume. Note that the file setting allows for overbooking the containing aggregate aggrname. As such, it is possible to run out of space in the new flexible volume even though it has not yet consumed its stated size. Use this setting carefully, and take care to regularly monitor space utilization in overbooking situations.

To create a clone of a flexible volume, use the vol clone create command.

If the underlying aggregate aggrname upon which the flexible volume is being created is a SnapLock aggregate, the flexible volume will be a SnapLock volume and automatically inherit the SnapLock type, either Compliance or Enterprise, from the aggregate.

If the second format is used, a traditional volume named trad_volname is created using the specified set of disks. See the na_aggr(1) man page for a description of the various arguments to this traditional form of volume creation.

If the third format is used, a FlexCache volume named flexcache_volname is created in the aggregate aggrname. The FlexCache volume is created for the volume remotevolume located on the node remotehost. This option is only valid if FlexCache functionality is licensed. If the size is not specified, the FlexCache volume is created with autogrow enabled. The initial size of the volume will be the smallest possible size of a flexible volume, but the size will automatically grow as more space is needed in the FlexCache volume to improve performance by avoiding evictions. Although the size is an optional parameter, the recommended way of using FlexCache volumes is with autogrow enabled.

If the -l language_code argument is used, the node creates the volume with the language specified by the language code. The default is the language used by the node's root volume.

Language codes are:

          C            (POSIX)
          ar           (Arabic)
          cs           (Czech)
          da           (Danish)
          de           (German)
          en           (English)
          en_US        (English (US))
          es           (Spanish)
          fi           (Finnish)
          fr           (French)
          he           (Hebrew)
          hr           (Croatian)
          hu           (Hungarian)
          it           (Italian)
          ja           (Japanese euc-j)
          ja_JP.PCK    (Japanese PCK (sjis))
          ko           (Korean)
          no           (Norwegian)
          nl           (Dutch)
          pl           (Polish)
          pt           (Portuguese)
          ro           (Romanian)
          ru           (Russian)
          sk           (Slovak)
          sl           (Slovenian)
          sv           (Swedish)
          tr           (Turkish)
          zh           (Simplified Chinese)
          zh.GBK       (Simplified Chinese (GBK))
          zh_TW        (Traditional Chinese euc-tw)
          zh_TW.BIG5   (Traditional Chinese Big 5)

To use UTF-8 as the NFS character set, append `.UTF-8' to the above language codes.

vol create will create a default entry in the /etc/exports file unless the option nfs.export.auto-update is disabled.

To create a SnapLock volume, specify -L flag with vol create command. This flag is only supported if either SnapLock Compliance or SnapLock Enterprise is licensed. The type of the SnapLock volume created, either Compliance or Enterprise, is determined by the type of installed SnapLock license. If both SnapLock Compliance and SnapLock Enterprise are licensed, use -L compliance or -L enterprise to specify the desired volume type.
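
Illustrative invocations of the three creation forms (all aggregate, volume, disk, and host names are assumptions): a 200-GB flexible volume, a traditional RAID-DP volume on eight 144-GB disks, and a FlexCache volume with autogrow enabled (no size given) caching vol1 from the node origin1:

      vol create vol1 -l en_US -s volume aggr1 200g
      vol create tradvol1 -l en_US -t raid_dp -r 16 8@144
      vol create cachevol aggr1 -S origin1:vol1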

vol destroy { volname | plexname } [ -f ]

Destroys the (traditional or flexible) volume named volname, or the plex named plexname within a traditional mirrored volume.

Before destroying the volume or plex, the user is prompted to confirm the operation. The -f flag can be used to destroy a volume or plex without prompting.

It is acceptable to destroy flexible volume volname even if it is the last one in its containing aggregate. In that case, the aggregate simply becomes devoid of user-visible file systems, but fully retains all its disks, RAID groups, and plexes.

If a plex within a traditional mirrored volume is destroyed in this way, the traditional volume is left with just one plex, and thus becomes unmirrored.

All of the disks in the plex or traditional volume destroyed by this operation become spare disks.

Only offline volumes and plexes can be destroyed.

vol destroy will delete all entries belonging to the volume in the /etc/exports file unless the option nfs.export.auto-update is disabled.
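
For example, to destroy a hypothetical flexible volume named vol_old, it must first be taken offline; the -f flag skips the confirmation prompt:

      vol offline vol_old
      vol destroy vol_old -f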

vol lang [ volname [ language_code ] ]

Displays or changes the character mapping on volname.

If no arguments are given, vol lang displays the list of supported languages and their language codes.

If only volname is given, it displays the language of the specified volume.

If both volname and language_code are given, it sets the language of the specified volume to the given language. A reboot is required for the change to fully take effect.
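
An illustrative session (the volume name vol1 is an assumption): the first command lists supported languages and their codes, the second displays vol1's current language, and the third sets it to Japanese PCK:

      vol lang
      vol lang vol1
      vol lang vol1 ja_JP.PCK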

vol media_scrub status [ volname | plexname | groupname -s disk-name ]
[-v]

This command prints the status of the media scrub on the named traditional volume, plex, RAID group, or spare drive. If no name is given, then status is given on all RAID groups and spare drives currently running a media scrub. The status includes a percent-complete and the suspended status (if any).

The -v flag displays the date and time at which the last full media scrub completed, the date and time at which the current instances of media scrub started, and the current status of the named traditional volume, plex, RAID group, or spare drive. This is provided for all RAID groups if no name is given.

The vol media_scrub status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr media_scrub status command.
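
For example (the traditional volume name tradvol1 is an assumption), the first command reports on all RAID groups and spare drives currently running a media scrub, and the second reports verbose status for one traditional volume:

      vol media_scrub status
      vol media_scrub status tradvol1 -v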

vol mirror volname
[ -n ]
[ -v victim_volname ]
[ -f ]
[ -d disk1 [ disk2 ... ] ]

Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process.

The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.

For more information about the arguments used for this command, see the information for the aggr mirror command on the na_aggr(1) man page.
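
For example, a hypothetical unmirrored traditional volume tradvol1 could be mirrored either from named spare disks or by consuming a victim volume tradvol2 (all names are assumptions):

      vol mirror tradvol1 -d 8a.20 8a.21 8a.22
      vol mirror tradvol1 -v tradvol2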

vol move start ndmsrcvol dstaggr [ -k ] [ -m | -r num_cutover_attempts ] [ -w cutover_window] [ -o ] [ -d ]

Starts the move of the volume named ndmsrcvol to the destination aggregate named dstaggr. The execution sequence starts with a series of checks on the controller, the source volume, and the source and destination aggregates. If all the checks are successful, the move starts with the setup phase, in which a placeholder volume is created in the destination aggregate and a baseline transfer from the source to the destination volume is initiated. This is followed by the data copy phase, in which the destination volume requests successive snapmirror updates from the source volume to synchronize itself completely with the source. Finally, the move completes with the cutover phase.

By default, vol move initiates cutover automatically, unless invoked with the optional -m flag, which disables automatic cutover. With the -m option, vol move continues to trigger SnapMirror updates from the source volume, and the user can initiate cutover at any time with the vol move cutover command.

The duration of the cutover window can be specified with the -w option. The minimum, default, and maximum values for the cutover window are 30, 60, and 300 seconds, respectively. The number of cutover attempts is set with the optional -r flag. The minimum, default, and maximum values for cutover attempts are 1, 3, and 25. If the user has not specified the -m option and cutover cannot be completed in the specified number of attempts, vol move pauses. The user may then either abort vol move or resume it, with or without the -m option. After a successful move, the source volume is destroyed by default, unless the move was started with the -k option.

Before executing cutover, vol move performs a series of checks, similar to the checks during the initialization phase, to verify that the conditions are favorable to cutover. If any of the checks fail, vol move pauses with an EMS message that indicates the exact reason for pause. The user may wait for the unfavorable event to complete and resume vol move thereafter.

The -o option is provided to ignore the redundancy characteristics of aggregates in a MetroCluster environment when a vol move is initiated from a mirrored source aggregate to an unmirrored destination aggregate. In other words, without the -o option, vol move will not start when the redundancy characteristics of the two aggregates differ; when started with the -o option, it will pause before entering cutover if the redundancy characteristics of the two aggregates differ.

The -d option performs a dry run. When issued with this option, the vol move subsystem only runs the series of checks without starting the move. Appropriate error messages are displayed if any checks fail.
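For example, the following hypothetical invocations first perform a dry run of a move of volume srcvol to aggregate destaggr, then start the move while keeping the source volume, allowing up to 10 cutover attempts with a 120-second cutover window (volume and aggregate names are examples):

          vol move start srcvol destaggr -d
          vol move start srcvol destaggr -k -r 10 -w 120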

vol move pause ndmsrcvol

Pauses the move of the volume named ndmsrcvol if it is in the Setup or Data Copy phase. Pausing aborts the active transfer, if any. The command returns an error if vol move is in the Cutover phase.

vol move resume ndmsrcvol [ -k ] [ -m | -r num_cutover_attempts ] [ -w cutover_window ] [ -o ]

Resumes the move of the volume named ndmsrcvol after it has been paused. On resuming, vol move runs the same set of checks that were run during the initialization phase. The user can add to or change the options previously specified with vol move start. The -k and -o options, if specified in the vol move start command, cannot be undone. The user may switch from automatic to manual cutover by resuming vol move with the -m option. Similarly, specifying -r switches vol move from manual to automatic cutover.
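For example, to resume a paused move of volume srcvol and switch it to manual cutover (hypothetical volume name):

          vol move resume srcvol -m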

vol move abort ndmsrcvol

Aborts the move of the volume named ndmsrcvol. The current data transfer is aborted and the placeholder destination volume is destroyed. An EMS message is logged. Vol move cannot be aborted during the Cutover phase.

vol move status [ ndmsrcvol ] [ -v ]

Displays the status of the vol move operation of the volume named ndmsrcvol.

This command returns the following data:

          Vol move source volume name.
          Destination aggregate name.
          Length of the cutover window.
          Number of cutover attempts.
          State of the move.

The state could be one of the following:

          Setup
          Move
          Cutover
          Abort

If the move had been paused, the state would be one of the following:

          Setup (paused)
          Move (paused)

The -v option returns additional data like:

          Amount of data (KB) and time taken for last completed transfer.
          Amount of data (KB) currently being transferred.

vol move cutover ndmsrcvol [ -w cutover_window ]

Initiates manual cutover of the volume named ndmsrcvol. The command returns an error if the move was configured for automatic cutover. The move will pause if cutover cannot be completed. The user may then resume the move in manual or automatic cutover mode, or abort it with the vol move abort command.

Manual cutover can be initiated only if the move was started or resumed with the -m option. Manual cutover is forbidden while the move is paused or in the process of being aborted. The duration of the cutover window can be specified with the -w option, the minimum and default value for which is 60 seconds.
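For example, to trigger manual cutover of volume srcvol with a 90-second cutover window (hypothetical volume name):

          vol move cutover srcvol -w 90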

vol offline { volname | plexname }
[ -t cifsdelaytime ]

Takes the volume named volname (or the plex named plexname within a traditional volume) offline. The command takes effect before returning. If the volume is already in restricted or iron_restricted state, then it is already unavailable for data access, and much of the following description does not apply.

The current root volume may not be taken offline. Neither may a volume marked to become root (by using vol options volname root) be taken offline.

If a volume contains CIFS shares, users should be warned before taking the volume offline. Use the -t option to do this. The cifsdelaytime argument specifies the number of minutes to delay before taking the volume offline, during which time CIFS users are warned of the pending loss of service. A time of 0 means that the volume should be taken offline immediately and without warning. CIFS users can lose data if they are not given a chance to terminate applications gracefully.
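For example, to take volume vol2 offline after warning its CIFS users and delaying for 5 minutes (hypothetical volume name):

          vol offline vol2 -t 5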

If a plexname is specified, the plex must be part of a mirrored traditional volume, and both plexes must be online. Prior to taking a plex offline, the system will flush all internally-buffered data associated with the plex and create a snapshot that is written out to both plexes. The snapshot allows for efficient resynchronization when the plex is subsequently brought back online.

A number of operations being performed on the volume in question can prevent vol offline from succeeding for various lengths of time. If such operations are found, there will be a one-second wait for such operations to finish. If they do not, the command is aborted.

A check is also made for files on the volume opened by internal ONTAP processes. The command is aborted if any are found.

The vol offline command fails if plexname resides not in a traditional mirrored volume, but in an independent aggregate. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should consult the na_aggr(1) man page for a more detailed description of the aggr offline command.

vol online { volname [ -f ] | plexname }

This command brings the volume named volname (or the plex named plexname within a traditional volume) online. It takes effect immediately. If there are CIFS shares associated with the volume, they are enabled.

If a volname is specified, it must be currently offline, restricted, or in a foreign aggregate. If volname belongs to a foreign aggregate, the aggregate will be made native before being brought online. A foreign aggregate is an aggregate that consists of disks moved from another node and that has never been brought online on the current node. Aggregates that are not foreign are considered native.

If the volume is inconsistent but has not lost data, the user will be cautioned and prompted before bringing it online. The -f flag can be used to override this behavior. It is advisable to run WAFL_check (or do a snapmirror initialize in case of a replica volume) prior to bringing an inconsistent volume online. Bringing an inconsistent volume online increases the risk of further file system corruption. If the volume is inconsistent and has experienced possible loss of data, it cannot be brought online unless WAFL_check (or snapmirror initialize) has been run on the volume.

If the volume is a flexible volume and the containing aggregate cannot honor the space guarantees required by this volume, the volume online operation will fail. The -f flag can be used to override this behavior. It is not advisable to use volumes with their space guarantees disabled. Lack of free space can lead to failure of writes, which in turn can appear as data loss to some applications.
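For example, to bring volume vol2 online while overriding the prompt for an inconsistent (but not data-lossy) volume, or overriding disabled space guarantees (hypothetical volume name):

          vol online vol2 -f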

If a plexname is specified, the plex must be part of an online, mirrored traditional volume. The system will initiate resynchronization of the plex as part of online processing.

The vol online command fails if plexname resides not in a traditional volume, but in an independent aggregate. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should consult the na_aggr(1) man page for a more detailed description of the aggr online command.

vol options volname [ optname optval ]

This command displays the options that have been set for volume volname, or sets the option named optname of the volume named volname to the value optval.

The command remains effective after the node is rebooted, so there is no need to add vol options commands to the /etc/rc file. Some options have values that are numbers. Other options have values that may be on (which can also be expressed as yes, true, or 1) or off (which can also be expressed as no, false, or 0). A mixture of uppercase and lowercase characters can be used when typing the value of an option. The vol status command displays the options that are set per volume. The root option is special in that it does not have a value. To set the root option, use this syntax:

vol options volname root
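Other options take a value. For example, to display the options set on volume vol1 and then disable automatic snapshots on it (hypothetical volume name):

          vol options vol1
          vol options vol1 nosnap on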

There are four categories of options handled by this command. The first category is the set of options that are defined for all volumes, both flexible and traditional, since they have to do with the volume's user-visible file system aspects. The second category is the set of aggregate-level (that is, disk and RAID) options that only apply to traditional volumes and not to flexible volumes. The third category is the set of options that are applicable only to flexible volumes and not to traditional volumes. The fourth category is the set of options that are applicable only to FlexCache volumes.

This section documents all four categories of options. It begins by describing, in alphabetical order, options common to all volumes (both flexible and traditional) and their possible values:

convert_ucode on | off

Setting this option to on forces conversion of all directories to UNICODE format when accessed from both NFS and CIFS. The default setting is off, in which case access from CIFS causes conversion of pre-4.0 and 4.0 format directories, and access from NFS causes conversion of 4.0 format directories.

create_ucode on | off

Setting this option to on forces UNICODE format directories to be created by default, both from NFS and CIFS. The default setting is off, in which case all directories are created in pre-4.0 format, and the first CIFS access converts them to UNICODE format.

extent on | space_optimized | off

Setting this option to on or space_optimized enables extents in the volume. This causes application writes to be written in the volume as a write of a larger group of related data blocks called an extent. Using extents may help workloads that perform many small random writes followed by large sequential reads. However, using extents may increase the amount of disk operations performed on the controller, so this option should only be used where this trade-off is desired. If the option is set to space_optimized then the reallocation update will not duplicate blocks from Snapshot copies into the active file system, and will result in conservative space utilization. Using space_optimized may be useful when the volume has Snapshot copies or is a SnapMirror source, when it can reduce the storage used in the Flexible Volume and the amount of data that SnapMirror needs to move on the next update. The space_optimized value can only be used for Flexible volumes and can result in degraded read performance of Snapshot copies. The default value is off; extents are not used.

fractional_reserve <pct>

This option changes the amount of space reserved for overwrites of reserved objects (LUNs, files) in a volume. The option is set to 100 by default with guarantee set to file or volume. A setting of 100 means that 100% of the required reserved space will actually be reserved, so the objects are fully protected for overwrites. The option is set to 0 by default with guarantee set to none. The value can be either 0 or 100 when guarantee is set to volume or none. If guarantee is set to file, 100 is the only allowed value. Using a value of 0 indicates that no space is reserved for overwrites. This returns the extra space to the available space for the volume, decreasing the total amount of space used. However, this does leave the protected objects in the volume vulnerable to out of space errors. If the percentage is set to 0%, the administrator must monitor the space usage on the volume and take corrective action.

fs_size_fixed on | off

This option causes the file system to remain the same size and not grow or shrink when a SnapMirrored volume relationship is broken, or when a vol add is performed on it. This option is automatically set to be on when a volume becomes a SnapMirrored volume. It will remain on after the snapmirror break command is issued for the volume. This allows a volume to be SnapMirrored back to the source without needing to add disks to the source volume. If the volume is a traditional volume and the size is larger than the file system size, turning off this option will force the file system to grow to the size of the volume. If the volume is a flexible volume and the volume size is larger than the file system size, turning off this option will force the volume size to become equal to the file system size. The default setting is off.

guarantee file | volume | none

This option controls whether the volume is guaranteed some amount of disk space. The default value is volume, which means that the entire size of the volume will be preallocated. The file value means that space will be preallocated for all the space-reserved files and LUNs within the volume. Storage is not preallocated for files and LUNs that are not space-reserved. Writes to these can fail if the underlying aggregate has no space available to store the written data. This value can be set if fractional_reserve is 100. Note that a guarantee of file will no longer be supported in a future release of Data ONTAP. The none value means that no space will be preallocated, even if the volume contains space-reserved files or LUNs; if the aggregate becomes full, space will not be available even for space-reserved files and LUNs within the volume. Note that the file setting allows for overbooking the containing aggregate aggrname. As such, it will be possible to run out of space in the new flexible volume even though it has not yet consumed its stated size. Use this setting carefully, and take care to regularly monitor space utilization in overbooking situations. For flexible root volumes, to ensure that system files, log files, and cores can be saved, the guarantee must be volume. This is to ensure support of the appliance by customer support, if a problem occurs.

Disk space is preallocated when the volume is brought online and, if not used, returned to the aggregate when the volume is brought offline. It is possible to bring a volume online even when the aggregate has insufficient free space to preallocate to the volume. In this case, no space will be preallocated, just as if the none option had been selected. The vol options command will display the guarantee type, and the vol status command will display information about disabled guarantees.
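For example, to change the space guarantee of flexible volume flexvol1 to none (hypothetical volume name):

          vol options flexvol1 guarantee none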

maxdirsize number

Sets the maximum size (in KB) to which a directory can grow. The default maximum directory size is model-dependent, and optimized for the size of system memory. You can increase it for a specific volume by using this option, but doing so could impact system performance. Do not increase the maxdirsize without consulting with customer support. When a user tries to create a file in a directory that is at the limit, the system returns an ENOSPC error and fails the create.

minra on | off

If this option is on, the node performs minimal file read-ahead on the volume. By default, this option is off, causing the node to perform speculative file read-ahead when needed. Speculative read-ahead improves performance with most workloads, so enable this option with caution.

no_atime_update on | off

If this option is on, it prevents the update of the access time on an inode when a file is read. This option is useful for volumes with extremely high read traffic, since it prevents writes to the inode file for the volume from contending with reads from other files. It should be used carefully. That is, use this option when it is known in advance that the correct access time for inodes will not be needed for files on that volume. The default setting is off.

no_i2p on | off

If this option is on, it disables inode to parent pathname translations on the volume. The default setting is off.

nosnap on | off

If this option is on, it disables automatic snapshots on the volume. The default setting is off.

nosnapdir on | off

If this option is on, it disables the visible .snapshot directory that is normally present at client mount points, and turns off access to all other .snapshot directories in the volume. The default setting is off.

nvfail on | off

If this option is on, the node performs additional status checking at boot time to verify that the NVRAM is in a valid state. This option is useful when storing database files. If the node finds any problems, database instances hang or shut down, and the node sends error messages to the console to alert administrators to check the state of the database. The default setting is off.

read_realloc on | space_optimized | off

Setting this option to on or space_optimized enables read reallocation in the volume. This results in the optimization of file layout by writing some blocks to a new location on disk. The layout is updated only after the blocks have been read because of a user read operation, and only when updating their layout will provide better read performance in the future. Using read reallocation may help workloads that perform a mixture of random writes and large sequential reads. If the option is set to space_optimized then the reallocation update will not duplicate blocks from Snapshot copies into the active file system, and will result in conservative space utilization. Using space_optimized may be useful when the volume has Snapshot copies or is a SnapMirror source, when it can reduce the storage used in the Flexible Volume and the amount of data that snapmirror needs to move on the next update. The space_optimized value can only be used for Flexible Volumes and can result in degraded read performance of Snapshot copies. The default value is off; read reallocation is not used.

root [ -f ]

The volume named volname will become the root volume for the node on the next reboot. This option can be used on one volume only at any given time. The existing root volume will become a non-root volume after the reboot.

Until the system is rebooted, the original volume will continue to show root as one of its options, and the new root volume will show diskroot as an option. In general, the volume that has the diskroot option is the one that will be the root volume following the next reboot.

The only way to remove the root status of a volume is to set the root option on another volume.

Setting the root status on a flexible volume also moves the HA mailbox disk information to disks in that volume. A flexible volume must meet the minimum size requirement for the appliance model, and must have a space guarantee of volume, before it can be designated to become the root volume on the next reboot. This is to ensure support of the appliance by customer support, because the root volume contains system files, log files, and, in the event of reboot panics, core files.

Since setting a volume to be the root volume is an important operation, the user is prompted to confirm the operation. If system files are not detected on the target volume, the set root operation fails. You can override this with the -f flag, but upon reboot the appliance will need to be reconfigured via setup.

Note that it is not possible to set the root status on a SnapLock volume.

schedsnapname create_time | ordinal

If this option is ordinal, the node formats scheduled snapshot names using the type of the snapshot and its ordinal (such as hourly.0). If the option is create_time, the node formats scheduled snapshot names based on the type of the snapshot and the time at which it was created, such as hourly.2005-04-21_1100. The default setting is ordinal.

snaplock_compliance

This read-only option indicates that the volume is a SnapLock Compliance volume. Volumes can only be designated SnapLock Compliance volumes at creation time.

snaplock_default_period min | max | infinite <count>d|m|y

This option is only visible for SnapLock volumes and specifies the default retention period that will be applied to files committed to WORM state without an associated retention period.

If this option value is min, then snaplock_minimum_period is used as the default retention period. If this option value is max, then snaplock_maximum_period is used as the default retention period. If this option value is infinite, then a retention period that never expires will be used as the default retention period.

The retention period can also be explicitly specified as a number followed by a suffix. The valid suffixes are s for seconds, h for hours, d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum valid retention period is 70 years.
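For example, to set a default retention period of 6 months on the SnapLock volume wormvol (hypothetical volume name):

          vol options wormvol snaplock_default_period 6m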

snaplock_enterprise

This read-only option indicates that the volume is a SnapLock Enterprise volume. Volumes can only be designated SnapLock Enterprise volumes at creation time.

snaplock_maximum_period infinite | <count>d|m|y

This option is only visible for SnapLock volumes and specifies the maximum allowed retention period for files committed to WORM state on the volume. Any files committed with a retention period longer than this maximum will be assigned this maximum value.

If this option value is infinite then files that have retention periods that never expire may be committed to the volume.

Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are s for seconds, h for hours, d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum allowed retention period is 70 years. This option is not applicable while extending the retention period of an already committed WORM file.

snaplock_minimum_period infinite | <count>d|m|y

This option is only visible for SnapLock volumes and specifies the minimum allowed retention period for files committed to WORM state on the volume. Any files committed with a retention period shorter than this minimum will be assigned this minimum value.

If this option value is infinite, then every file committed to the volume will have a retention period that never expires.

Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are s for seconds, h for hours, d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum allowed retention period is 70 years. This option is not applicable while extending the retention period of an already committed WORM file.

snaplock_autocommit_period none | <count> h|d|m|y

This option is visible for SnapLock volumes only. It defines the criteria for committing files to WORM state on a SnapLock volume by the autocommit scanner. The suffixes h, d, m, and y denote hours, days, months, and years respectively. The default value of this option is none, which corresponds to autocommit being disabled on the SnapLock volume. The minimum autocommit period on a SnapLock volume is 2h. Any valid value other than none, specified in hours (h), days (d), months (m), or years (y), triggers the autocommit scanner on the SnapLock volume.

snapmirrored off

If SnapMirror is enabled, the node automatically sets this option to on. Set this option to off if SnapMirror is no longer to be used to update the mirror. After setting this option to off, the mirror becomes a regular writable volume. This option can only be set to off; only the node can change the value of this option from off to on.

snapshot_clone_dependency on | off

Setting this option to on will unlock all initial and intermediate backing snapshots for all inactive LUN clones. For active LUN clones, only the backing snapshot will be locked. If the option is off the backing snapshot will remain locked until all intermediate backing snapshots are deleted.

try_first volume_grow | snap_delete

A flexible volume can be configured to automatically reclaim space when the volume is about to run out of space, by either increasing the size of the volume or deleting snapshots in the volume. If this option is set to volume_grow, ONTAP will first try to increase the size of the volume before deleting snapshots to reclaim space. If the option is set to snap_delete, ONTAP will first automatically delete snapshots and, if that fails to reclaim space, will try to grow the volume.

svo_allow_rman on | off

If this option is on, the node performs SnapValidator for Oracle data integrity checks that are compatible with volumes that contain Oracle RMAN backup data. If the node finds any problems, the write will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_checksum on | off

If this option is on, the node performs additional SnapValidator for Oracle data integrity checksum calculations of all writes on the volume. If the node finds any problems, the write will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_enable on | off

If this option is on, the node performs additional SnapValidator for Oracle data integrity checking of all operations on the volume. If the node finds any problems, the operation will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_reject_errors on | off

If this option is on, the node will return an error to the host and log the error if any of the SnapValidator for Oracle checks fail. If the option is off, the error will be logged only. The default setting is off.

The second category of options managed by the vol options command comprises the set of things that are closely related to aggregate-level (that is, disk and RAID) qualities, and are thus only accessible via the vol options command when dealing with traditional volumes. Note that these aggregate-level options are also accessible via the aggr family of commands. The list of these aggregate-level options is provided below in alphabetical order:

ignore_inconsistent on | off

If this option is set to on, then aggregate-level inconsistencies that would normally be considered serious enough to keep the associated volume offline are ignored during booting. The default setting is off.

raidsize number

The raidsize option specifies the maximum number of disks in each RAID group in the traditional volume. The maximum and default values of raidsize are platform-dependent, based on performance and reliability considerations.

raidtype raid4 | raid_dp | raid0

The raidtype option specifies the type of RAID group(s) used in the traditional volume. The possible RAID group types are raid4 for RAID-4, raid_dp for RAID-DP (Double Parity), and raid0 for simple striping without parity protection. Setting the raidtype on V-Series systems is not permitted; the default of raid0 is always used.

resyncsnaptime number

This option is used to set the mirror resynchronization snapshot frequency (in minutes). The default value is 60 minutes.

For new volumes, options convert_ucode, create_ucode, and maxdirsize get their values from the root volume. If the root volume doesn't exist, they get the default values.

The following are the options that only apply to flexible volumes:

nbu_archival_snap on | off [-f]

Setting this option to on for a volume enables archival snapshot copies for SnapVault for NetBackup. If this option is set to off, no archival snapshot copy is taken after a backup. Drag-and-drop restores are only available for those backups that are captured in archival snapshot copies. Enabling or re-enabling archival snapshot copies will only be permitted on a volume if no SnapVault for NetBackup backups exist on that volume. If the nbu_archival_snap vol option is not configured at the time the first SnapVault for NetBackup backup starts for that volume, the vol option is then set according to the value of the snapvault.nbu.archival_snap_default option. The -f option disables the prompt that asks for confirmation.

There are a set of options managed by the vol options command that are tied to FlexCache volumes. The list of these options is as follows:

acregmax <timeout> [m|h|d|w]

Attribute Cache regular file timeout. The amount of time (in seconds) in which the cache considers regular files on the given volume to be valid before consulting the origin. The timeout value is a number, optionally followed by m, h, d or w, denoting minutes, hours, days or weeks respectively. If none of the above letters is used, the unit defaults to seconds. The default value is 15 seconds. A value of zero means the cache will perform an attribute verify for every client request.

acdirmax <timeout> [m|h|d|w]

Similar to acregmax, but for directories.

acsymmax <timeout> [m|h|d|w]

Similar to acregmax, but for symbolic links.

actimeo <timeout> [m|h|d|w]

Attribute Cache default timeout. This value is used for any attribute cache timeout option that has not been explicitly assigned a value. For example, if the administrator has explicitly set acregmax to 15 and actimeo to 60 but has not set values for the other two options, regular files will be considered valid for 15 seconds while symbolic links and directories will be considered valid for 60 seconds.
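For example, to set a 15-second regular-file timeout and a 60-second default timeout on the FlexCache volume cachevol (hypothetical volume name):

          vol options cachevol acregmax 15
          vol options cachevol actimeo 60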

acdisconnected <timeout> [m|h|d|w]

Attribute cache timeout value used when the disconnected mode feature is enabled on this volume. If this option is set to 0 (the default value), access will be allowed indefinitely.

disconnected_mode off | hard | soft

This option is used to configure the behavior of the cache volume when it is disconnected from the origin and the normal TTL (for example, acregmax) on the object has expired. When disabled (off), all access attempts will hang. When set to hard or soft, read-only access attempts will be allowed up to the value of the acdisconnected option. After the acdisconnected timeout is exceeded, attempts will either hang (hard) or have an error returned (soft). All attempts to modify the file system contents or access data that is not currently in the cache volume will hang.

flexcache_autogrow on | off

Setting this option to on enables autogrow on the FlexCache volume. This causes the FlexCache volume to grow automatically, if there is room in the aggregate, in order to avoid evictions. Setting this option to off stops the FlexCache volume from growing automatically; the volume is not reverted to its original size. This option is only valid on FlexCache volumes. Autogrow is enabled by default on new FlexCache volumes created without a size parameter.

flexcache_min_reserve size

Alter the space reserved in the aggregate for the given FlexCache volume, such that the volume is guaranteed to be able to cache up to size data. The size parameter is given as in the vol create command.
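For example, to disable automatic growth and instead guarantee that an illustrative FlexCache volume cachevol can cache at least 10 gigabytes of data:

      vol options cachevol flexcache_autogrow off
      vol options cachevol flexcache_min_reserve 10g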

vol rename volname newname

Renames the volume named volname to the name newname. vol rename will rewrite all entries belonging to the volume in the /etc/exports file unless the option nfs.export.auto-update is disabled.

vol restrict volname
[ -t cifsdelaytime ]

Put the volume volname in restricted state, starting from either online or offline state. If the volume is online, then it will be made unavailable for data access as described above under vol offline.

If a volume contains CIFS shares, users should be warned before taking the volume offline. Use the -t option for this. The cifsdelaytime argument specifies the number of minutes to delay before taking the volume offline, during which time CIFS users are warned of the pending loss of service. A time of 0 means take the volume offline immediately with no warnings given. CIFS users can lose data if they are not given a chance to terminate applications gracefully.
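For example, to restrict an illustrative volume vol1 after warning CIFS users and delaying for 10 minutes:

      vol restrict vol1 -t 10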

vol scrub resume [ volname | plexname | groupname ]

Resume parity scrubbing on the named traditional volume, plex, or RAID group. If no name is given, then all suspended parity scrubs are resumed.

The vol scrub resume command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub resume command.

vol scrub start [ volname | plexname | groupname ]

Start parity scrubbing on the named traditional volume, plex, or RAID group. If volname is a flexible volume, vol scrub start aborts.

Parity scrubbing compares the data disks to the parity disk in a RAID group, correcting the parity disk's contents as necessary.

If no name is given, then start parity scrubs on all online RAID groups on the node. If a traditional volume is given, scrubbing is started on all RAID groups contained in the traditional volume. Similarly, if a plex name is given, scrubbing is started on all RAID groups in the plex.
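For example, a scrub can be started at any of the three scopes; the plex and RAID group names follow the same form shown by vol status -r (vol1 is illustrative):

      vol scrub start vol1
      vol scrub start /vol1/plex0
      vol scrub start /vol1/plex0/rg0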

The vol scrub start command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub start command.

vol scrub status [ volname | plexname | groupname ] [ -v ]

Print the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub's suspended status (if any).

The -v flag displays the date and time at which the last full scrub completed, along with the current status on the named traditional volume, plex, or RAID group. If no name is provided, full status is provided for all RAID groups on the node.

The vol scrub status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub status command.

vol scrub stop [ volname | plexname | groupname ]

Stop parity scrubbing for the named traditional volume, plex or RAID group. If no name is given, then parity scrubbing is stopped on any RAID group on which one is active.

The vol scrub stop command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub stop command.

vol scrub suspend [ volname | plexname | groupname ]

Suspend parity scrubbing on the named traditional volume, plex, or RAID group. If no name is given, all active parity scrubs are suspended.

The vol scrub suspend command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub suspend command.

vol size volname [[+|-]size]

This command sets or displays the given flexible volume's size, using space from the volume's containing aggregate. It can make the flexible volume either larger or smaller. The size argument has the same form and obeys the same rules as when it is used in the vol create command to create a flexible volume. Exercise caution if the sum of the sizes of all flexible volumes in an aggregate exceeds the size of the aggregate, as the aggregate is then overcommitted.

If [+|-]size is used, then the flexible volume's size is changed (grown or shrunk) by that amount. Otherwise, the volume size is set to size (rounded up to the nearest 4 KB).

When displaying the flexible volume's size, the units used have the same form as when creating the volume or setting the volume size. The specific unit chosen for a given size is based on matching the volume size to an exact number of a specific unit. k is used if no larger units match.

The file system size of a read-only replica flexible volume, such as a SnapMirror destination, is determined from the replica source. In such cases, the value set in vol size is interpreted as an upper limit on the size. A flexible volume with the fs_size_fixed option set may have its size displayed, but not changed.

A flexible root volume cannot be shrunk below a minimum size determined by the appliance model. This is to ensure that there is sufficient space in the root volume to store system files, log files, and core files for use by NetApp technical support if a problem with the system occurs.

The amount of space available for the active filesystem in a volume is limited by the snapshot reservation set for that volume. The snapshot reservation should be taken into account when sizing a volume. See na_snap (1) for details on how to set a volume's snapshot reservation.

vol split volname/plexname new_volname

This command removes plexname from a mirrored traditional volume and creates a new, unmirrored traditional volume named new_volname that contains the plex. The original mirrored traditional volume becomes unmirrored. The plex to be split from the original traditional volume must be functional (not partial), but it could be inactive, resyncing, or out-of-date. vol split can therefore be used to gain access to a plex that is not up to date with respect to its partner plex, if that partner plex is currently failed.

If the plex is offline at the time of the split, the resulting traditional volume will be offline. Otherwise, the resulting traditional volume will be in the same online/offline/restricted state as the original traditional volume. A split mirror can be joined back together via the -v option to vol mirror.

The aggr split command is the preferred way to split off plexes. It is the only way to split off plexes from mirrored aggregates that contain flexible volumes.
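For example, to split the plex plex1 off an illustrative mirrored traditional volume vol1 into a new unmirrored traditional volume named vol1split:

      vol split vol1/plex1 vol1split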

vol status [ volname ]
[ -r | -v[C] | -d | -l | -c | -C[v] |-b | -s | -f -m | -w | -S[k|m|g|t] | -F[k|m|g|t] ]

Displays the status of one or all volumes on the node. If volname is used, the status of the specified volume is printed. Otherwise, the status of all volumes in the node is printed. By default, it prints a one-line synopsis of the volume, which includes the volume name, its type (either traditional or flexible), whether it is online or offline, other states (for example, partial, degraded, wafl inconsistent and so on) and per-volume options. It also reports volume attributes related to the node's clustered/scale-out capabilities and configuration, if any. For example, Data ONTAP 8.0 Cluster-Mode systems identify which of the selected volumes are Cluster-Mode (owned by Vservers). Per-volume options are displayed only if the options have been turned on using the vol options command. If the wafl inconsistent state is displayed, please contact Customer Support.

When run in a vfiler context only the -v, -l, -b, and -? flags can be passed to vol status.

The -v flag shows the on/off state of all per-volume options and displays information about each plex and RAID group within the traditional volume or the aggregate containing the flexible volume. aggr status -v is the preferred manner of obtaining the per-aggregate options and the RAID information associated with flexible volumes.

The -C flag displays additional information related to the new clustered/scale-out capabilities, if any. It can be used alone or in direct combination with the -v flag described above (that is, -vC or -Cv) to control the amount of additional cluster-related information displayed. Included are the volume's Owner UUID, Master Data Set ID, Data Set ID and Mirror Type.

The -r flag displays a list of the RAID information for the traditional volume or the aggregate containing the flexible volume. If no volname is specified, it prints RAID information about all traditional volumes and aggregates, information about file system disks, spare disks, and failed disks. For more information about failed disks, see the -f option description below.

The -d flag displays information about the disks in the traditional volume or the aggregate containing the flexible volume. The types of disk information are the same as those from the sysconfig -d command. aggr status -d is the preferred manner of obtaining this low-level information for aggregates that contain flexible volumes.

The -l flag displays, for each volume on a node, the name of the volume, the language code, and the language being used by the volume.

The -c flag displays the upgrade status of the Block Checksums data integrity protection feature for the traditional volume or the aggregate containing the flexible volume. aggr status -c is the preferred manner of obtaining this information for a flexible volume's containing aggregate.

The -b flag is used to get the size of source and destination traditional volumes for use with SnapMirror. The output contains the size of the traditional volume and the size of the file system in the volume. SnapMirror and aggr copy use these numbers to determine whether the source and destination volume sizes are compatible. The file system size of the source must be equal to or smaller than the volume size of the destination. These numbers can be different if using SnapMirror between volumes of dissimilar geometry.

The -s flag displays a list of the spare disks on the system. aggr status -s is the preferred manner of obtaining this information.

The -m flag displays a list of the disks in the system that are sanitizing, in recovery mode, or in maintenance testing.

The -f flag displays a list of the failed disks on the system. The command output includes the disk failure reason which can be any of following:

      unknown           Failure reason unknown.
      failed            Data ONTAP failed disk, due to a
                        fatal disk error.
      admin failed      User issued a 'disk fail' command
                        for this disk.
      labeled broken    Disk was failed under Data ONTAP
                        6.1.X or an earlier version.
      init failed       Disk initialization sequence failed.
      admin removed     User issued a 'disk remove' command
                        for this disk.
      not responding    Disk not responding to requests.
      pulled            Disk was physically pulled or no
                        data path exists on which to access
                        the disk.
      bypassed          Disk was bypassed by ESH.

aggr status -f is the preferred manner of obtaining this information.

The -w flag displays the expiry date of the volume, which is the maximum retention time of WORM files and WORM snapshots on that volume. A value of "infinite" indicates that the volume has an infinite expiry date. A value of "Unknown...volume offline" indicates that the expiry date is not displayed because the volume is offline. A value of "Unknown...scan in progress" indicates that the expiry date is not displayed because a WORM scan on the volume is in progress. A value of "none" indicates that the volume has no expiry date; this is the case when the volume holds no WORM files or WORM snapshots. A value of "-" is displayed for regular volumes.

The -S flag displays information about space usage within the volume for online volumes. Depending on volume state, a variety of relevant, non-zero rows will be displayed. Rows that are not self-explanatory are detailed below. All sizes are reported in units automatically scaled for best readability unless a specific measurement unit is requested by specifying one of the following flags: -k, -m, -g, or -t.
User Data: This is the amount of data written to the volume via CIFS, NFS or SAN protocols plus the metadata (e.g. indirect blocks, directory blocks, etc.) directly associated with user files, plus the space reserved in the volume for these files (hole and overwrite reserves). This is the same information displayed by running the Unix du command on the mount point.
Inodes: This is the amount of space required to store inodes in the file system and is proportional to the maximum number of files ever created in the volume. The inode file is not compacted or truncated, so if a large number of files are created and then deleted, the inode file does not shrink.
Snapshot Spill: This is the amount of Snapshot spill in the volume. If Snapshot used space exceeds the Snapshot reserve, the volume's snapshots are considered to spill out of the reserve. This space cannot be used by the active file system.
Total Used: This is the total amount of space used in the volume, including the space used by the Snapshot reserve.

The -F flag displays information about the space used in associated aggregates by FlexVol volumes and features enabled in those volumes. Depending on volume and aggregate state, a variety of relevant, non-zero rows will be displayed. Percentages are based on aggregate size. Output rows that are not self-explanatory are detailed below. All sizes are reported in units automatically scaled for best readability unless a specific measurement unit is requested by specifying one of the following flags: -k, -m, -g, or -t.
Volume Data Footprint: This is the total amount of data written to the volume. It includes data in the volume's active file system as well as volume Snapshot copies. This row only includes data and not reserved space, so when volumes have reserved files, the volume's total usage as displayed in the output of the vol status -S command can exceed the value in this row.
Flexible Volume Metadata: This is the space used or reserved in the aggregate for metadata associated with this volume.
Delayed Frees: When Data ONTAP frees space in a volume, this space is not always immediately shown as free in the aggregate. This is because the operations that free the space in the aggregate are batched for increased performance. Blocks that are declared free in the FlexVol volume but which are not yet free in the aggregate are considered "delayed free blocks" until they are processed. For SnapMirror destinations, this row will have a value of 0 and will not be displayed.
SnapMirror Destination: During a SnapMirror transfer, this row will include incoming SnapMirror data and SnapMirror-triggered delayed free blocks from previous SnapMirror transfers.
Volume Guarantee: This is the amount of space reserved by this volume in the aggregate for future writes. The amount of space reserved depends on the guarantee type (the provisioning mode) of the volume. For a "volume guaranteed" volume, this is the size of the volume minus the amount in the Volume Data Footprint row. For a "file guaranteed" volume, this is the sum of all of the space reserved for hole fills and overwrites in all of the space reserved files in the volume.
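For example, to report the space usage within an illustrative volume vol1, and its footprint in the containing aggregate, both in gigabytes:

      vol status vol1 -Sg
      vol status vol1 -Fg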

vol verify resume [ volname ]

Resume RAID mirror verification on the given traditional volume. If no volume name is given, then resume all suspended RAID mirror verification operations.

The vol verify resume command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this specific case, the administrator should use the aggr verify resume command.

vol verify start [ volname ] [ -f plexnumber ]

Start RAID mirror verification on the named online, mirrored traditional volume. If no name is given, then RAID mirror verification is started on all traditional volumes and aggregates on the node.

RAID mirror verification compares the data in both plexes of a mirrored traditional volume or aggregate. In the default case, all blocks that differ are logged, but no changes are made. If the -f flag is given, the plex specified is fixed to match the other plex when mismatches are found. A volume name must be specified with the -f plexnumber option.

The vol verify start command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this specific case, the administrator should use the aggr verify start command.

vol verify status [ volname ]

Print the status of RAID mirror verification on the given traditional volume. If no volume name is given, then provide status for all active RAID mirror verification operations. The status includes a percent-complete and the verification's suspended status (if any).

The vol verify status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this specific case, the administrator should use the aggr verify status command.

vol verify stop [ volname ]

Stop RAID mirror verification on the named traditional volume. If no volume name is given, stop all active RAID mirror verification operations on traditional volumes and aggregates.

The vol verify stop command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this specific case, the administrator should use the aggr verify stop command.

vol verify suspend [ volname ]

Suspend RAID mirror verification on the named traditional volume. If no volume name is given, then suspend all active RAID mirror verification operations on traditional volumes and aggregates.

The vol verify suspend command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this specific case, the administrator should use the aggr verify suspend command.
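For example, to start verification on an illustrative mirrored traditional volume vol1, fixing plex 1 to match its partner wherever the plexes differ, and then check progress:

      vol verify start vol1 -f 1
      vol verify status vol1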

HA CONSIDERATIONS

Volumes on different nodes in an HA pair can have the same name. For example, both nodes in an HA pair can have a volume named vol0.

However, having unique volume names in an HA pair makes it easier to move volumes between the nodes in the HA pair.

VFILER CONSIDERATIONS

A subset of the vol subcommands are available via vfiler contexts. They are used for vfiler SnapMirror operations. These subcommands are: online, offline, and restrict. These volume operations are only allowed if the vfiler owns the specified volumes. See na_vfiler(1) and na_snapmirror(1) for details on vfiler and snapmirror operations.

EXAMPLES

vol create vol1 aggr0 50g

Creates a flexible volume named vol1 using storage from aggregate aggr0. This new flexible volume's size will be set to 50 gigabytes.

vol create vol1 -r 10 20

Creates a traditional volume named vol1 with 20 disks. The RAID groups in this traditional volume can contain up to 10 disks, so this traditional volume has two RAID groups. The node adds the current spare disks to the new traditional volume, starting with the smallest disk.

vol create vol1 20@9

Creates a traditional volume named vol1 with 20 9-GB disks. Because no RAID group size is specified, the default size (8 disks) is used. The newly created traditional volume contains two RAID groups with eight disks each and a third RAID group with four disks.

vol create vol1 -d 8a.1 8a.2 8a.3

Creates a traditional volume named vol1 with the specified disks.

vol create vol1 aggr1 20m -S kett:vol2

Creates a flexible volume named vol1 on aggr1 of size 20 megabytes, which caches source volume vol2 residing on the origin node kett.

vol create vol1 10
vol options vol1 raidsize 5

The first command creates a traditional volume named vol1 with 10 disks that belong to one RAID group. The second command specifies that if any disks are subsequently added to this traditional volume, they will not cause any current RAID group to have more than five disks. Each existing RAID group will continue to have 10 disks, and no more disks will be added to those RAID groups. When new RAID groups are created, they will have a maximum size of five disks.

vol size vol1 250g

Changes the size of flexible volume vol1 to 250 gigabytes.

vol size vol1 +20g

Adds 20 gigabytes to the size of flexible volume vol1.

vol clone create vol2 -b vol1 snap2

The node will create a writable clone volume vol2 that is backed by the storage of flexible volume vol1, snapshot snap2.

vol clone create will create a default entry in the /etc/exports file unless the option nfs.export.auto-update is disabled.

vol clone split start vol2

The node will start an operation on clone volume vol2 to separate it from its parent volume. The backing snapshot for vol2 will be unlocked once the separation is complete.

vol options vol1 root

The volume named vol1 becomes the root volume after the next node reboot.

vol options vol1 nosnapdir on

In the volume named vol1, the snapshot directory is made invisible at the client mount point or at the root of a share. Also, for UNIX clients, the .snapshot directories that are normally accessible in all the directories become inaccessible.

vol status vol1 -r

Displays the RAID information about the volume named vol1:

  Volume vol1 (online, raid4) (zoned checksums)
    Plex /vol1/plex0 (online, normal, active)
      RAID group /vol1/plex0/rg0 (normal)

        RAID Disk Device  HA  SHELF BAY CHAN  Used (MB/blks)    Phys (MB/blks)
        --------- ------  --  ----- --- ----  --------------    --------------
        parity    3a.0    3a  0     0   FC:A  34500/70656000    35239/72170880
        data      3a.1    3a  0     1   FC:A  34500/70656000    35239/72170880

vol copy start -s nightly.1 vol0 toaster1:vol0

Copies the nightly snapshot named nightly.1 on volume vol0 on the local node to the volume vol0 on a remote node named toaster1.

vol copy status

Displays the status of all active volume copy operations.

vol copy abort 1

Terminates volume copy operation 1.

vol copy throttle 1 5

Changes volume copy operation 1 to half (50%) of its full speed.

SEE ALSO

na_aggr (1), na_license (1), na_partner (1), na_snapmirror (1), na_sysconfig (1)


Table of Contents