Manual Pages


Table of Contents

NAME

na_disk - RAID disk configuration control commands

SYNOPSIS

disk assign { <disk_name> | all | [-T <storage type> -shelf <shelf name>] [-n <count>] | auto } [-p <pool>] [-o <ownername>] [-s {<sysid>|unowned}] [-c {block|zoned}] [-f]

disk encrypt show <disk_list>

disk encrypt lock <disk_list>

disk encrypt rekey <new_key_id> <disk_list>

disk encrypt sanitize { -all | <disk_list> }

disk encrypt destroy <disk_list>

disk fail [-i] [-f] <disk_name>

disk maint start [-t test_list] [-c cycle_count] [-f] [-i] -d <disk_list>

disk maint abort <disk_list>

disk maint list

disk maint status [-v] [<disk_list>]

disk reassign {-o <old_name> | -s <old_sysid>} [-n <new_name>] -d <new_sysid>

disk remove [-w] <disk_name>

disk replace start [-f] [-m] <disk_name> <spare_disk_name>

disk replace stop <disk_name>

disk sanitize start [-p <pattern1> | -r [-p <pattern2> | -r [-p <pattern3> | -r]]] [-c <number_of_cycles>] <disk_list>

disk sanitize abort <disk_list>

disk sanitize status [<disk_list>]

disk sanitize release <disk_name>

disk scrub start

disk scrub stop

disk show [-o <ownername> | -s <sysid> | -u | -n | -v | -a]

disk swap

disk unswap

disk zero spares

DESCRIPTION

The disk fail command forces a file system disk to fail. The disk reassign command is used in maintenance mode to reassign disks after the NVRAM card has been swapped. The disk remove command unloads a spare disk so that you can physically remove the disk from the node. The disk replace command can be used to replace a file system disk with a more appropriate spare disk.

The disk scrub command causes the node to scan disks for media errors. If a media error is found, the node tries to fix it by reconstructing the data from parity and rewriting the data. The disk scrub start and disk scrub stop commands report status messages when the operation is initiated and return completion status when the operation has completed.

The node's ``hot swap'' capability allows removal or addition of disks to the system with minimal interruption to file system activity. Before you physically remove or add a SCSI disk, use the disk swap command to stall I/O activity. After you remove or add the disk, file system activity automatically continues. If you type the disk swap command accidentally, or you choose not to swap a disk at this time, use disk unswap to cancel the swap operation and continue service.

If you want to remove or add a Fibre Channel disk, there is no need to enter the disk swap command.

Before you swap or remove a disk, it is a good idea to run sysconfig -r to verify which disks are where.

The disk zero spares command zeroes out all non-zeroed RAID spare disks. The command runs in the background and can take a long time to complete, possibly hours, depending on the number of disks to be zeroed and the capacity of each disk. Having zeroed spare disks available helps avoid delay in creating or extending an aggregate. Spare disks that are in the process of zeroing are still eligible for use as creation, extension, or reconstruction disks. After invoking the command, use the aggr status -s command to verify the status of the spare disk zeroing.
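For example, to zero all non-zeroed spares and then check on their progress:

disk zero spares
aggr status -s

The aggr status -s output indicates which spares are zeroed and which are still in the process of zeroing.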

The disk assign and disk show commands are available only on systems with software-based disk ownership, and are used to assign or display disk ownership.

The disk sanitize start, disk sanitize abort, and disk sanitize status commands are used to start, abort, and obtain status of the disk sanitization process. This process runs in the background and sanitizes the disk by writing the entire disk with each of the defined patterns. The set of all pattern writes defines a cycle; both pattern and cycle count parameters can be specified by the user. Depending on the capacity of the disk and the number of patterns and cycles defined, this process can take several hours to complete. When the process has completed, the disk is in the sanitized state. The disk sanitize release command and the disk unfail command allow the user to return a sanitized disk to the spare pool.

The disk maint start, disk maint abort, and disk maint status commands are used to start, abort, and obtain status of the disk maintenance test process from the command line. This test process can be invoked by the user through this command or invoked automatically by the system when it encounters a disk that is returning nonfatal errors. The goal of disk maintenance is to either correct the errors or remove the disk from the system. The disk maintenance command executes either a set of predefined tests defined for the disk type or the user specified tests. Depending on the capacity of the disk and the number of tests and cycles defined, this process can take several hours to complete.

The disk encrypt show, disk encrypt lock, and disk encrypt rekey commands are used to show, lock, and rekey self-encrypting disks on a Storage Encryption enabled system. These disks are manufactured with special circuitry to automatically encrypt all data written to, and decrypt all data read from, the disk media.

The disk encrypt sanitize command cryptographically erases self-encrypting disks on a Storage Encryption enabled system. Because the internal disk encryption key (DEK) is changed to a new value, all data encrypted with the previous DEK becomes irretrievable. This command is extremely fast (on the order of seconds) compared to the traditional disk wipe techniques provided by disk sanitize. Sanitized drives may be reused.

The disk encrypt destroy command cryptographically destroys self-encrypting disks on a Storage Encryption enabled system. Data ONTAP changes the internal disk encryption key (DEK) as in the disk encrypt sanitize command, and then issues commands to the disk to render all data on the disk permanently inaccessible. Destroyed disks cannot be recovered or reused. Use this command with extreme care.

USAGE

disk assign {<disk_name> | all | [-T <storage type> -shelf <shelf name>] [-n <count>] | auto } [-p <pool>]
[-o <ownername>]
[-s {<sysid>|unowned}]
[-c {block|zoned}] [-f]

Used to assign ownership of a disk to the specified system. Available only on systems with software-based disk ownership. One of <disk_name>, -shelf <shelf name> [-n <count>], all, [-T <storage_type>] -n <count>, or auto is required. The -shelf <shelf name> option assigns all currently unassigned disks of the specified shelf. The shelf name is available in the output of the "storage show shelf" command. Examples of valid shelf names are 0a.shelf1 and switch:4.shelf11. If the system is not configured properly, this command might not be able to find the shelf. An example of a misconfigured system is one in which the same shelf ID is assigned to more than one shelf in a stack.

The keyword all causes all unassigned disks to be assigned. The -n count option causes the number of unassigned disks specified by count to be assigned. If the -T {ATA | BSAS | FCAL | FSAS | LUN | SAS | SATA | SSD} option is specified along with the -n count option, only disks of the specified type are selected, up to count. The auto option causes any disks eligible for auto-assignment to be assigned immediately, regardless of the setting of the disk.auto_assign option. Unowned disks are assigned automatically only on loops where a single node owns the existing disks and the pool information matches. The pool value can be either 0 or 1. If the disks are unowned and are being assigned to a non-local node, the ownername or sysid parameter (or both) must be specified to identify that node. The -c option specifies the checksum type for a disk or an array LUN. It can also be used to modify the checksum type of certain disks; for some disks (for example, FCAL, SSD, and SAS), the checksum type cannot be modified. For more information on modifying the checksum type, refer to the Storage Management Guide. The -f option must be specified if the node already owns the disk.

To make an owned disk unowned, use the `-s unowned' option. The disk should be owned by the local node. Use the -f option if the disk is not owned by the local node; forcing the change in this way may result in data corruption if the current owner of the disk is up.

This command can be used on a subset of disks by using the wildcard character ("*") in the disk_name parameter. For example:

disk assign XX.* -p 0

For direct attached disks, all the disks connected to port XX will be assigned to pool0.

disk assign swY:X.* -p 1

For switch attached disks, all the disks connected to port X of switch swY are assigned to pool1.
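The storage type and count options can also be combined. For example, to assign two unassigned SAS disks to pool 0 (the type and count shown here are illustrative):

disk assign -T SAS -n 2 -p 0

Only unassigned disks of type SAS are selected, up to the requested count of 2.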

disk encrypt show <disk_list>

On a Storage Encryption enabled system, this command displays the status of self-encrypting disks.

Disks shown with a key ID value of "0x0" are currently keyed to MSID (Manufacturer Secure ID), the default authentication key set by the manufacturer. The MSID value is electronically readable from the drive and is not secret.

Disks shown in the "locked" state require authentication at the next disk power-on or power-cycle event.

Disks that are locked but set to MSID are automatically authenticated by Data ONTAP (or by an attacker who steals the disk) by simply using the MSID value as the authentication key.

Disks that are locked with a specific key ID require the authentication key associated with that key ID. All authentication key / key ID pairs are maintained in an external key server management system. See the key_manager command for more information.

Note that disk_list supports basic wildcard matching as found in most UNIX shells. The disk_list wildcard pattern may include *, ?, and []. Use "*" to match many characters, use "?" to match a single character, and use "[]" to match a set or range of characters. Character sets may be negated using "!" as the first character of the set.

The disk_list parameter contains no spaces or commas. Only one pattern is accepted on the command, but it may contain several wildcards.

As an example, to match all disk names that do not end with 0,

disk encrypt show *.?[1-9]

or the alternatives,

disk encrypt show *.?[!0]
disk encrypt show *[!0]

disk encrypt lock <disk_list>

On a Storage Encryption enabled system, this command configures self-encrypting disks to require authentication at the next disk power-on or power-cycle event.

A self-encrypting disk that requires authentication at power-on or power-cycle is said to be "locked".

Note that a "locked" disk does not necessarily mean a "protected" disk. The authentication key must be changed from MSID to another value in order for a locked disk to be considered protected from attackers who might steal the disk. The metaphor to consider is this: an attacker can easily open a locked door if you have failed to remove the factory default key from the door knob. Changing the authentication key to something other than MSID (and locking the disk) removes this simple attack against a disk still keyed at MSID.

A disk power-on or power-cycle event does not reset the locked flag in the disk firmware. Once locked, the disk will require authentication on subsequent power-cycles until the lock control is reset. This can be accomplished with the disk encrypt sanitize command.

This command supports the same <disk_list> and wildcard matching as the disk encrypt show command.
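For example, to lock all self-encrypting disks, or only those whose names begin with 0a (the name pattern is illustrative):

disk encrypt lock *
disk encrypt lock 0a.*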

disk encrypt rekey <new_key_id> <disk_list>

On a Storage Encryption enabled system, this command configures self-encrypting disks to use the authentication key associated with new_key_id. The act of changing the authentication key is known as "rekeying" the disk.

A key ID is a unique string of 64 hexadecimal characters that is associated with a specific authentication key value. The authentication key is secret, and is used as the passphrase to authenticate a locked disk at the next disk power-on or power-cycle event.

New authentication key / key ID pairs are created using the key_manager rekey command.

If new_key_id is the special value "0x0" then the disk is rekeyed back to the factory default authentication key known as MSID.

Rekeying the disk to MSID sets the disk to an unprotected state. You must assume an attacker would attempt the obvious simple authentication of a stolen (but locked) disk by trying the disk's factory default MSID value as the authentication key.

This command does not affect data access, since it does not alter the disk encryption key (DEK) used to encrypt or decrypt disk data. This command simply changes the authentication key (AK) that the disk will require when authentication is needed to perform certain encryption-related actions on the disk, such as clearing the data protect error or changing the AK.

Note that the authentication key is effectively a passphrase for permission to change other encryption settings. Another key, called the disk encryption key (DEK), is the actual key used to encrypt and decrypt data and is known only to the disk. To change the DEK requires the use of the disk encrypt sanitize command.

This command supports the same <disk_list> and wildcard matching as the disk encrypt show command.
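As an illustration, assuming a new key ID has already been created with the key_manager rekey command, the following rekeys all self-encrypting disks to that key ID, and then returns a single disk to the factory MSID key (the key ID placeholder and disk name are illustrative):

disk encrypt rekey 0x<new_key_id> *
disk encrypt rekey 0x0 0a.16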

disk encrypt sanitize { -all | <disk_list> }

On a Storage Encryption enabled system, this command cryptographically erases self-encrypting disks, resulting in total and permanent data loss on the affected disk. The disks themselves may be redeployed.

This command instructs the disk to change the existing encryption key (DEK) to a new value. Data on the disk that has been encrypted with a previous DEK will no longer be accessible because it cannot be decrypted correctly.

No disk contents are read or written by this command. By design of the disk manufacturer, the DEK never leaves the disk. The storage system cannot access the DEK, and there is no mechanism to tell the disk to revert to a previous DEK.

After a disk is cryptographically erased, the disk is unlocked and rekeyed to MSID, and made available for reuse. The disk can be restored to service with disk sanitize release.

Note that since cryptographic erase also erases the labels on the disk, you will need the advanced command disk unfail -s to rewrite the labels and make the disk a spare. If it needs to be assigned, you will need to do a disk assign command before the disk unfail -s.

This command is extremely fast (on the order of seconds) compared to the traditional disk wipe techniques provided by disk sanitize.

Unless -all is specified, disk names matched by disk_list must be spare disks.

The -all option will issue a warning and prompt you to type a confirmation code. If this confirmation is entered more than 60 seconds after the prompt, the sanitize request is cancelled.

The -all option is mutually exclusive with disk_list. Use -all in a situation where normal checks, such as ensuring the disk is a spare, should be bypassed.

The -all option is available when privilege is "advanced" or higher, and when in maintenance mode. It causes all available disks, including those still in aggregates, to be cryptographically sanitized.

This command does not use the shell wildcard matching used by disk encrypt show. The disk_list parameter may be a single disk name or series of space-separated disk names.
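For example, to cryptographically erase two spare disks (the disk names are illustrative):

disk encrypt sanitize 0a.16 0a.17

As noted above, follow up with disk assign (if ownership is needed) and the advanced disk unfail -s command to rewrite the labels and return each disk to the spare pool.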

disk encrypt destroy <disk_list>

On a Storage Encryption enabled system, this command cryptographically destroys self-encrypting disks, resulting in end-of-life for the disk. A destroyed disk cannot be recovered or reused.

Data ONTAP first instructs the disk drive to cryptographically erase all data, then issues commands to the disk to render further I/O with the disk impossible.

Disk names matched by disk_list must be spare disks.

This command should be used with extreme caution; there is no prompt for confirmation before the destroy operation begins.

This command does not use the shell wildcard matching used by disk encrypt show. The disk_list parameter may be a single disk name or series of space-separated disk names.

disk fail [-i] [-f] <disk_name>
Force a file system disk to be failed. The disk fail command is used to remove a file system disk that may be logging excessive errors and requires replacement.

If disk fail is used without options, the disk will first be marked as ``prefailed''. If an appropriate spare is available, it will be selected for Rapid RAID Recovery. In that process, the prefailed disk will be copied to the spare. At the end of the copy process, the prefailed disk is removed from the RAID configuration. The node will spin that disk down, so that it can be removed from the shelf. (disk swap must be used when physically removing SCSI disks.)

The disk being removed is marked as ``broken'', so that if it remains in the disk shelf, it will not be used by the node as a spare disk. If the disk is moved to another node, that node will use it as a spare. This is not a recommended course of action, as the reason that the disk was failed may have been because it needed to be replaced.

Option -i can be used to avoid Rapid RAID Recovery and remove the disk from the RAID configuration immediately. Note that when a file system disk has been removed in this manner, the RAID group to which the disk belongs will enter degraded mode (meaning, a disk is missing from the RAID group). If a suitable spare disk is available, the contents of the disk being removed will be reconstructed onto that spare disk.

If used without options, disk fail issues a warning and waits for confirmation before proceeding. Option -f can be used to skip the warning and force execution of the command without confirmation.
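For example, to prefail a disk through Rapid RAID Recovery, or to remove it from the RAID configuration immediately without confirmation (the disk name is illustrative):

disk fail 0a.12
disk fail -i -f 0a.12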

disk maint start
[-t test_list] [-c cycle_count] [-f] [-i] -d disk_list

Used to start the Maintenance Center tests on the disks listed. The -t option defines the tests that are to be run. The available tests are displayed using the disk maint list command. If no tests are specified, the default set of tests for the particular disk type are run. The -c option specifies the number of cycles of the test set to run. The default is 1 cycle.

If a filesystem disk is selected and the -i option is not specified, the disk will first be marked as pending. If an appropriate spare is available, it will be selected for Rapid RAID Recovery. In that process, the disk will be copied to the spare. At the end of the copy process, the disk is removed from the RAID configuration and begins Maintenance Center testing. The -i option avoids Rapid RAID Recovery and removes the disk immediately from the RAID configuration to start Maintenance Center testing. Note that when a filesystem disk has been removed in this manner, the RAID group to which the disk belongs will enter degraded mode (meaning, a disk is missing from the RAID group). If a suitable spare disk is available, the contents of the disk being removed will be reconstructed onto that spare disk.

If used without the -f option on filesystem disks, disk maint start issues a warning and waits for confirmation before proceeding. The -f option can be used to skip the warning and force execution of the command without confirmation.

The testing may be aborted with the disk maint abort command.
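For example, to display the available tests, run two cycles of the default test set on a disk, and then check progress (the disk name is illustrative):

disk maint list
disk maint start -c 2 -d 0a.12
disk maint status -v 0a.12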

disk maint abort disk_list

Used to terminate the maintenance testing process for the specified disks. If the testing was started by the user, the disk will be returned to the spare pool provided that the tests have passed. If any tests have failed, the disk will be failed.

disk maint status [-v] [ disk_list]

Return the percent of the testing that has completed for either the specified list of disks or for all of the testing disks. The -v option returns an expanded list of the test status.

disk maint list

List the tests that are available.

disk reassign {-o <old_name> | -s <old_sysid>} [-n <new_name>] -d <new_sysid>
Used to reassign disks. This command can only be used in maintenance mode after an NVRAM card swap. Available only on systems with software-based disk ownership.
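For example, in maintenance mode after an NVRAM card swap, disks owned under the old system ID can be reassigned to the new system ID (both sysid values here are illustrative):

disk reassign -s 0101178877 -d 0101189988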

disk remove [-w] <disk_name>

Remove the specified spare disk from the RAID configuration, spinning the disk down when removal is complete.

This command does not remove disk ownership information from the disk. Therefore, if you plan to reuse the disk in a different storage system, you should use the disk remove_ownership (advanced) command instead. Refer to the "Storage Management Guide" for the complete procedure.

NOTE: For systems with multi-disk carriers, it is important to ensure that none of the disks in the carrier are filesystem disks before attempting removal. To convert a filesystem disk to a spare disk, see disk replace.

The option -w is valid only for V-Series systems. It wipes out the label of the disk being removed.

disk replace start [-f] [-m] <disk_name> <spare_disk_name>

This command uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk. At the end of that process, roles of disks are reversed. The spare disk will replace the file system disk in the RAID group and the file system disk will become a spare. The option -f can be used to skip the confirmation. The option -m allows mixing disks with different characteristics. It allows using the target disk with rotational speed that does not match that of the majority of disks in the aggregate. It also allows using the target disk from the opposite spare pool.
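For example, to copy a file system disk to a specific spare, allowing a mismatch in rotational speed or spare pool and skipping the confirmation prompt (the disk names are illustrative):

disk replace start -f -m 0a.12 0a.20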

disk replace stop <disk_name>

This command can be used to abort disk replace, or to prevent it if copying did not start.

disk sanitize start
[-p <pattern> | -r [-p <pattern> | -r [-p <pattern> | -r]]] [-c <cycles>] <disk_list>

This command is used to start the sanitization process on the disks listed.

The -p option defines the byte pattern(s) that will be written to the disk during one of the passes of each cycle. The user may define up to 3 patterns, which may include a random pattern. If no pattern options are specified, the following 3 patterns are defined as the default: 0x55 on the first pass, 0xaa on the second pass, and 0x3c on the third pass.

The -r option may be used instead of a pattern option to generate a write pass of random data instead of a defined byte pattern. The random data generated will be the same data for each of the disks for each cycle. If the -r option is used as one or more of the patterns, there is a limit of 100 disks that can be sanitized simultaneously.

The -c option specifies the number of cycles of pattern writes. The default is 1 cycle.

All sanitization process information is written to the log file at /etc/log/sanitization.log. The serial numbers of all sanitized disks are written to /etc/log/sanitized_disks. The log files are written at 15-minute intervals.

Note: If both of the SES disks of a shelf are being sanitized, you may receive a warning and the enclosure services will be limited.
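As an illustration, to sanitize a disk with a random first pass and a 0x55 second pass, repeated for two cycles (the disk name is illustrative):

disk sanitize start -r -p 0x55 -c 2 0a.12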

disk sanitize abort <disk_list>

Used to terminate the sanitization process for the specified disks. If the disk is in the format stage, the process will be aborted when the format is complete. A message will be displayed when the format is complete and when an abort is complete.

disk sanitize status [ <disk_list>]

Return the percent of the process that has completed for either the specified list of disks or for all of the currently sanitizing disks.

disk sanitize release <disk_name>

Modifies the state of the disk from sanitized to spare, and returns the disk to the spare pool.

Note: If the disk is not returned to the spare pool, you will need the advanced command disk unfail -s to rewrite the labels and make the disk a spare. If it needs to be assigned, you will need to do a disk assign before the disk unfail -s.

disk scrub start

Start a RAID scrubbing operation on all RAID groups. The raid.scrub.enable option is ignored; scrubbing starts regardless of that option's setting. (The option applies only to scrubbing started periodically by the system.)

disk scrub stop

Stop a RAID scrubbing operation.

disk show [ -o <ownername> | -s <sysid> | -n | -v | -a]

Used to display information about the ownership of the disks. Available only on systems with software-based disk ownership. -o lists all disks owned by the node with the name <ownername>. -s lists all disks owned by the node with the serial number <sysid>. -n lists all unassigned disks. -v lists all disks. -a lists all assigned disks.

The wildcard character ("*") can be used with this command to get information about a subset of the disks attached to the storage system. The wildcard character can be combined with other command options also. For example:

disk show XX.*

For direct attached disks, all the disks connected to port XX will be displayed.

disk show -a swY:X.*

For switch attached disks, all the disks assigned to port X of switch swY will be displayed.

disk show -o ZZ swY:*

For switch attached disks, all the disks owned by storage system ZZ and connected to switch swY will be displayed.

disk swap

Applies to SCSI disks only. It stalls all I/O on the node to allow a disk to be physically added to or removed from a disk shelf. Typically, this command would be used to allow removal of a failed disk, or of a file system or spare disk that was prepared for removal using the disk fail or disk remove command. Once a disk is physically added or removed, system I/O automatically continues.

NOTE: It is important to issue the disk swap command only when you have a disk that you want to physically remove or add to a disk shelf, because all I/O will stall until a disk is added or removed from the shelf.

disk unswap

Used to undo a disk swap command, cancel the swap operation, and continue service.

disk zero spares

Zero all non-zeroed RAID spare disks.

SEE ALSO

na_aggr(1), na_sysconfig(1), na_vol(1), na_key_manager(1)
