
Initializing a node to configure root-data partitioning

If you are initializing a node whose platform model supports root-data partitioning, you must complete some steps before performing the initialization to ensure that root-data partitioning is configured correctly.

Before you begin

About this task

This procedure can be used for both entry-level platform models and All-Flash FAS (AFF) platform models. For entry-level platform models, only internal disks are partitioned. For AFF platform models, up to 48 SSDs are partitioned, depending on the number of SSDs that are attached to the controller.

This procedure is designed to be run on an HA pair, so the nodes are called Node A and Node B. However, you can still use this procedure if you have only one controller; ignore instructions that are specifically for Node B.

This procedure can take many hours to complete, depending on the amount and type of storage attached to the HA pair.

Steps

  1. Record your node configuration, including the network configuration, license values, and passwords. If you want to restore the same storage architecture that you have now, record that information as well.
    All of the current node configuration information will be erased when the system is initialized.
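    Example
    You might record the output of commands such as the following; this list is illustrative, not exhaustive:
    network interface show
    system license show
    storage aggregate show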
  2. If you are performing this procedure on a two-node cluster, disable the HA capability: cluster ha modify -configured false
    This can be done from either node.
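    Example
    cluster1::> cluster ha modify -configured false
    The cluster name cluster1 in the prompt is a placeholder. You can verify the change with the cluster ha show command.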
  3. Disable storage failover for both nodes: storage failover modify -enabled false -node nodeA,nodeB
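    Example
    Using the node names sys1 and sys2 that appear in the examples later in this procedure:
    cluster1::> storage failover modify -enabled false -node sys1,sys2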
  4. On both nodes, boot into maintenance mode:
    1. Halt the system: system node halt -node node_name
      You can ignore any error messages about epsilon. For the second node, you can include the -skip-lif-migration-before-shutdown flag if prompted to do so.
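      Example
      For a node named sys1 (substitute your own node name):
      cluster1::> system node halt -node sys1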
    2. At the LOADER prompt, boot Data ONTAP: boot_ontap
    3. Monitor the boot process, and when prompted, press Ctrl-C to display the boot menu.
      Example
      *******************************
      *                             *
      * Press Ctrl-C for Boot Menu. *
      *                             *
      *******************************
    4. Select option 5, Maintenance mode boot.
      Example
      Please choose one of the following:
      
      (1) Normal Boot.
      (2) Boot without /etc/rc.
      (3) Change password.
      (4) Clean configuration and initialize all disks.
      (5) Maintenance mode boot.
      (6) Update flash from backup config.
      (7) Install new software first.
      (8) Reboot node.
      Selection (1-8)? 5
  5. On both nodes, if any external disks are connected, destroy all aggregates, including the root aggregate (see the example after these substeps):
    1. Display all aggregates: aggr status
    2. For each aggregate, take the aggregate offline: aggr offline aggr_name
    3. For each aggregate, destroy the aggregate: aggr destroy aggr_name
    All disks are converted to spares.
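    Example
    The following sequence removes a single aggregate; the aggregate name aggr1 is a placeholder, and you would repeat the offline and destroy commands for each aggregate listed by aggr status:
    *> aggr status
    *> aggr offline aggr1
    *> aggr destroy aggr1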
  6. Ensure that no drives in the HA pair are partitioned:
    1. Display all drives owned by both nodes: disk show
      If a drive shows three entries, it has been partitioned. If your nodes have no partitioned disks, you can skip to step 7.

      The following example shows partitioned drive 0a.10.11:

      Example
        DISK       OWNER              POOL   SERIAL NUMBER         HOME         
      ------------ -------------      -----  -------------         ------------- 
      0a.10.11     sys1(536880559)    Pool0  N11YE08L              sys1(536880559)
      0a.10.11P1   sys1(536880559)    Pool0  N11YE08LNP001         sys1(536880559)
      0a.10.11P2   sys1(536880559)    Pool0  N11YE08LNP002         sys1(536880559)
    2. Note any partitions that do not have the same owner as their container disk.
      Example
      In the following example, the container disk (0a.10.14) is owned by sys1, but partition one (0a.10.14P1) is owned by sys2.
        DISK       OWNER              POOL   SERIAL NUMBER         HOME         
      ------------ -------------      -----  -------------         ------------- 
      0a.10.14     sys1(536880559)    Pool0  N11YE08L              sys1(536880559)
      0a.10.14P1   sys2(536880408)    Pool0  N11YE08LNP001         sys2(536880408)
      0a.10.14P2   sys1(536880559)    Pool0  N11YE08LNP002         sys1(536880559)
    3. Update the ownership of all partitions owned by a different node than their container disk: disk assign disk_partition -f -o container_disk_owner
      Example
      You would enter a command like the following example for each disk with partitions owned by a different node than their container disk: disk assign 0a.10.14P1 -f -o sys1
    4. For each drive with partitions, on the node that owns the container disk, remove the partitions: disk unpartition disk_name
      disk_name is the name of the disk, without any partition information, such as "0a.10.3".
      Example
      You would enter a command like the following example for each disk with partitions: disk unpartition 0a.10.14
  7. On both nodes, remove disk ownership: disk remove_ownership
  8. On both nodes, verify that all drives connected to both nodes are unowned: disk show
    Example
    *> disk show
    Local System ID: 465245905
    disk show: No disk match option show.
  9. On both nodes, return to the LOADER menu: halt
  10. On Node A only, begin zeroing the drives:
    1. At the LOADER prompt, boot Data ONTAP: boot_ontap
    2. Monitor the boot process, and when prompted, press Ctrl-C to display the boot menu.
    3. Select option 4, Clean configuration and initialize all disks.
      When the drives begin to be zeroed, a series of dots is printed to the console.
  11. After the drives on Node A have been zeroing for a few minutes, repeat the previous step on Node B.
    The drives that will be partitioned are zeroed on both nodes. When the zeroing process is complete, the nodes return to the Node Setup wizard.
  12. Restore your system configuration, using the information that you recorded in step 1.
  13. Confirm that the root aggregate on both nodes is composed of partitions: storage aggregate show-status
    The Position column shows shared and the usable size is a small fraction of the physical size:
    Owner Node: sys1
     Aggregate: aggr0_1 (online, raid_dp) (block checksums)
      Plex: /aggr0_1/plex0 (online, normal, active, pool0)
       RAID Group /aggr0_1/plex0/rg0 (normal, block checksums)
                                                                    Usable Physical
         Position Disk                         Pool  Type     RPM     Size     Size Status
         -------- ---------------------------  ----  ----- ------ -------- -------- --------
         shared   0a.10.10                       0   BSAS    7200  73.89GB  828.0GB (normal)
         shared   0a.10.0                        0   BSAS    7200  73.89GB  828.0GB (normal)
         shared   0a.10.11                       0   BSAS    7200  73.89GB  828.0GB (normal)
         shared   0a.10.6                        0   BSAS    7200  73.89GB  828.0GB (normal)
         shared   0a.10.5                        0   BSAS    7200  73.89GB  828.0GB (normal)
    
    
  14. Run System Setup to reconfigure the HA pair or rejoin the cluster, depending on your initial configuration.