Increasing the size of an aggregate that uses root-data partitioning

When you add storage to an existing aggregate that is using partitioned drives, you should be aware of whether you are adding a partitioned drive or an unpartitioned drive, and the tradeoffs of mixing those types of drives in the same RAID group versus creating a new RAID group.

About this task

When you add storage to an existing aggregate, you can let Data ONTAP choose the RAID group to add the storage to, or you can designate the target RAID group for the added storage, including creating a new RAID group.

If an unpartitioned drive is added to a RAID group composed of partitioned drives, the new drive is partitioned, leaving an unused spare partition. If you do not want the new drive to be partitioned, you can add it to a RAID group that contains only unpartitioned (physical) drives. However, partitioning a drive might be preferable to creating a new, small RAID group.
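As an illustration of this choice, the following sketch adds an unpartitioned drive to its own new RAID group so that the drive remains unpartitioned (the aggregate and disk names are hypothetical; verify the parameters against your version of Data ONTAP):

cl1-s2::> storage aggregate add-disks -aggregate data_1 -disklist 1.0.12 -raidgroup new

If you instead omit the -raidgroup parameter and Data ONTAP places the drive into a RAID group composed of partitioned drives, the new drive is partitioned to match.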

Following best practices when you add storage to an aggregate optimizes aggregate performance.

In particular, when you provision partitions, you must ensure that you do not leave the node without a drive that has both partitions available as a spare. If you do, and the node experiences a controller disruption, valuable information about the problem (the core file) might not be available to provide to technical support.

Steps

  1. Show the available spare storage on the system that owns the aggregate: storage aggregate show-spare-disks -original-owner node_name
    You can use the -is-disk-shared parameter to show only partitioned drives or only unpartitioned drives.
  2. Show the current RAID groups for the aggregate: storage aggregate show-status aggr_name
  3. Simulate adding the storage to the aggregate: storage aggregate add-disks -aggregate aggr_name -diskcount number_of_disks_or_partitions -simulate true
    This enables you to see the result of the storage addition without actually provisioning any storage. If any warnings are displayed from the simulated command, you can adjust the command and repeat the simulation.
  4. Add the storage to the aggregate: storage aggregate add-disks -aggregate aggr_name -diskcount number_of_disks_or_partitions
    You can use the -raidgroup parameter if you want to add the storage to a different RAID group than the default.

    If you are adding partitions to the aggregate, you must use a disk that shows available capacity for the required partition type. For example, if you are adding partitions to a data aggregate (and using a disk list), the disk names you use must show available capacity in the Local Data Usable column.

  5. Verify that the storage was added successfully: storage aggregate show-status -aggregate aggr_name
  6. Ensure that the node still has at least one drive with both the root partition and the data partition as spare: storage aggregate show-spare-disks -original-owner node_name
    If the node does not have a drive with both partitions as spare and it experiences a controller disruption, then valuable information about the problem (the core file) might not be available to provide to technical support.

Example: Adding partitioned drives to an aggregate

The following example shows that the cl1-s2 node has multiple spare partitions available:

cl1-s2::> storage aggregate show-spare-disks -original-owner cl1-s2 -is-disk-shared true

Original Owner: cl1-s2
 Pool0
  Shared HDD Spares
                                                            Local    Local
                                                             Data     Root Physical
 Disk                        Type     RPM Checksum         Usable   Usable     Size Status
 --------------------------- ----- ------ -------------- -------- -------- -------- --------
 1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB zeroed
 1.0.2                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
 1.0.3                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
 1.0.4                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
 1.0.8                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
 1.0.9                       BSAS    7200 block           753.8GB       0B  828.0GB zeroed
 1.0.10                      BSAS    7200 block                0B  73.89GB  828.0GB zeroed
7 entries were displayed.

The following example shows that the data_1 aggregate is composed of a single RAID group of five partitions:

cl1-s2::> storage aggregate show-status -aggregate data_1

Owner Node: cl1-s2
 Aggregate: data_1 (online, raid_dp) (block checksums)
  Plex: /data_1/plex0 (online, normal, active, pool0)
   RAID Group /data_1/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     shared   1.0.10                       0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.5                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.6                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.11                       0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.0                        0   BSAS    7200  753.8GB  828.0GB (normal)
5 entries were displayed.

The following example shows which partitions would be added to the aggregate:

cl1-s2::> storage aggregate add-disks data_1 -diskcount 5 -simulate true

Addition of disks would succeed for aggregate "data_1" on node "cl1-s2". The
following disks would be used to add to the aggregate: 1.0.2, 1.0.3, 1.0.4, 1.0.8, 1.0.9.

The following example adds five spare data partitions to the aggregate:

cl1-s2::> storage aggregate add-disks data_1 -diskcount 5

The following example shows that the data partitions were successfully added to the aggregate:

cl1-s2::> storage aggregate show-status -aggregate data_1

Owner Node: cl1-s2
 Aggregate: data_1 (online, raid_dp) (block checksums)
  Plex: /data_1/plex0 (online, normal, active, pool0)
   RAID Group /data_1/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     shared   1.0.10                       0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.5                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.6                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.11                       0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.0                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.2                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.3                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.4                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.8                        0   BSAS    7200  753.8GB  828.0GB (normal)
     shared   1.0.9                        0   BSAS    7200  753.8GB  828.0GB (normal)
10 entries were displayed.

The following example verifies that an entire disk, disk 1.0.1, remains available as a spare:

cl1-s2::> storage aggregate show-spare-disks -original-owner cl1-s2 -is-disk-shared true

Original Owner: cl1-s2
 Pool0
  Shared HDD Spares
                                                            Local    Local
                                                             Data     Root Physical
 Disk                        Type     RPM Checksum         Usable   Usable     Size Status
 --------------------------- ----- ------ -------------- -------- -------- -------- --------
 1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB zeroed
 1.0.10                      BSAS    7200 block                0B  73.89GB  828.0GB zeroed
2 entries were displayed.
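Step 4 notes that you can use the -raidgroup parameter to designate the target RAID group instead of letting Data ONTAP choose. As a sketch using the hypothetical names from the examples above, the following command would add two spare data partitions to the existing RAID group rg0 of the data_1 aggregate:

cl1-s2::> storage aggregate add-disks -aggregate data_1 -diskcount 2 -raidgroup rg0

As with any addition, you can first run the same command with -simulate true to confirm which partitions would be used.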