
Assigning ownership for replaced disks

If you replaced disks when restoring the hardware at the disaster site, or if you had to zero disks or remove disk ownership, you must assign ownership to the affected disks.

Before you begin

The disaster site must have at least as many available disks as it did prior to the disaster.
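
To confirm the available disk count, you can list the unowned disks that are visible from the disaster site. This is a minimal check, assuming the -container-type parameter of the storage disk show command in your ONTAP release:

cluster_A::> storage disk show -container-type unassigned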

About this task

These steps are performed on the cluster at the disaster site.

This procedure shows the reassignment of all disks at the disaster site.

The examples in this procedure assume that:
  • Site A is the disaster site.
  • node_A_1 has been replaced.
  • node_A_2 has been replaced.
  • Site B is the surviving site.
  • node_B_1 is healthy.
  • node_B_2 is healthy.
The controller modules have the following original system IDs:

Node      Original system ID
--------  ------------------
node_A_1  4068741258
node_A_2  4068741260
node_B_1  4068741254
node_B_2  4068741256
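
If you did not record the original system IDs before the disaster, you can usually retrieve them from the surviving site. The following sketch assumes the node-systemid and dr-partner-systemid fields of the metrocluster node show command:

cluster_B::> metrocluster node show -fields node-systemid,dr-partner-systemid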

Steps

  1. Assign the new, unowned disks to the appropriate disk pools by using the following series of commands.
    1. Systematically assign the replaced disks for each node to their respective disk pools: disk assign -s sysid -n old-count-of-disks -p pool
      From the surviving site, you issue a disk assign command for each node:
      cluster_B::> disk assign -s node_B_1-sysid -n old-count-of-disks -p 1 (remote pool of surviving site) 
      cluster_B::> disk assign -s node_B_2-sysid -n old-count-of-disks -p 1 (remote pool of surviving site) 
      cluster_B::> disk assign -s node_A_1-old-sysid -n old-count-of-disks -p 0 (local pool of disaster site) 
      cluster_B::> disk assign -s node_A_2-old-sysid -n old-count-of-disks -p 0 (local pool of disaster site) 
      Example
      The following example shows the commands with the system IDs:
      cluster_B::> disk assign -s 4068741254 -n 24 -p 1  
      cluster_B::> disk assign -s 4068741256 -n 24 -p 1  
      cluster_B::> disk assign -s 4068741258 -n 24 -p 0  
      cluster_B::> disk assign -s 4068741260 -n 24 -p 0  
      

    old-count-of-disks specifies the number of disks to assign to the pool. This number must be at least the number of disks that each node had before the disaster. If fewer disks are specified or available, the healing operations might not complete because of insufficient space.
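
    To check that each node received the expected number of disks, you can filter the disk list by owner. This is a sketch; it assumes the -owner parameter of the storage disk show command in your ONTAP release:
    cluster_B::> storage disk show -owner node_A_1 -fields disk,pool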

  2. Confirm the ownership of the disks: storage disk show -fields owner,pool
    Example
    cluster_A::> storage disk show -fields owner,pool
    disk     owner          pool
    -------- ------------- -----
    0c.00.1  node_A_1      Pool0
    0c.00.2  node_A_1      Pool0
    .
    .
    .
    0c.00.8  node_A_1      Pool1
    0c.00.9  node_A_1      Pool1
    .
    .
    .
    0c.00.15 node_A_2      Pool0
    0c.00.16 node_A_2      Pool0
    .
    .
    .
    0c.00.22 node_A_2      Pool1
    0c.00.23 node_A_2      Pool1
    .
    .
    .
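  If a disk appears with the wrong owner or pool, you can typically clear its ownership and assign it individually before continuing. The following sketch reuses disk 0c.00.1 and the node_A_1 system ID from the examples above, and assumes the storage disk removeowner and storage disk assign syntax of your release:
    cluster_A::> storage disk removeowner -disk 0c.00.1
    cluster_A::> storage disk assign -disk 0c.00.1 -sysid 4068741258 -pool 0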
  3. On the surviving site, turn disk autoassignment back on: storage disk option modify -node * -autoassign on
    Example
    cluster_B::> storage disk option modify -node * -autoassign on
    2 entries were modified.
    
  4. On the surviving site, confirm that disk autoassignment is on: storage disk option show
    Example
     cluster_B::> storage disk option show
     Node      BKg. FW. Upd.  Auto Copy    Auto Assign  Auto Assign Policy
     --------  -------------  -----------  -----------  ------------------
     node_B_1  on             on           on           default
     node_B_2  on             on           on           default
     2 entries were displayed.
    
     cluster_B::>
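
If the Auto Assign column shows off for a node, you can enable it for that node individually by using the same storage disk option modify command as in step 3. The node name below is taken from the example output:

     cluster_B::> storage disk option modify -node node_B_1 -autoassign on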