SnapDrive for UNIX displays an error message when an operation fails. The following table provides detailed information about the most common errors that you might encounter when using SnapDrive for UNIX:
Error code | Return code | Type | Description | Solution |
---|---|---|---|---|
0000-001 | NA | Admin | Datapath has been configured for the storage system <STORAGE-SYSTEM-NAME>. Please delete it using snapdrive config delete -mgmtpath command and retry. | Before deleting the storage system, delete the management path configured for the storage system by using the snapdrive config delete -mgmtpath command. |
0001-242 | NA | Admin | Unable to connect using https to storage system: 10.72.197.213. Ensure that 10.72.197.213 is a valid storage system name/address, and if the storage system that you configure is running on a Data ONTAP operating in 7-Mode, add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system 10.72.197.213 or modify the snapdrive.conf to use http for communication and restart the snapdrive daemon. If the storage system that you configure is running on clustered Data ONTAP, ensure that the Vserver name is mapped to IP address of the Vserver's management LIF. | Check the following conditions: verify that the storage system name or address is valid; for Data ONTAP operating in 7-Mode, add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system, or modify snapdrive.conf to use http for communication and restart the snapdrive daemon; for clustered Data ONTAP, verify that the Vserver name is mapped to the IP address of the Vserver's management LIF. |
0003-004 | NA | Admin | Failed to deport LUN <LUN-NAME> on storage system <STORAGE-SYSTEM-NAME> from the Guest OS. Reason: No mapping device information populated from CoreOS | This happens when you execute the snapdrive snap disconnect operation in the guest operating system. Check whether there is any RDM LUN mapping or stale RDM entry in the ESX server. Delete the RDM mapping manually in the ESX server as well as in the guest operating system. |
0001-019 | 3 | Command | invalid command line -- duplicate filespecs: <dg1/vol2 and dg1/vol2> | This happens when the executed command has multiple host entities on the same host volume. For example, the command explicitly specified both the host volume and the file system on the same host volume. What to do: Complete the following steps: 1. Remove all the duplicate instances of the host entities. 2. Execute the command again. |
0001-023 | 11 | Admin | Unable to discover all LUNs in Disk Group dg1.Devices not responding: dg1 Please check the LUN status on the storage system and bring the LUN online if necessary or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use (http/https) for storage system communication and restarting snapdrive daemon. | This happens when a SCSI inquiry on the device fails. A SCSI inquiry can fail for multiple reasons. What to do: Check the LUN status on the storage system and bring the LUN online if necessary; add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system; or retry after changing snapdrive.conf to use http or https for storage system communication and restarting the snapdrive daemon. If these steps do not solve the issue, contact NetApp technical support to identify the issue in your environment. |
0001-395 | NA | Admin | No HBAs on this host! | This occurs if you have a large number of LUNs connected to your host system. Check whether the enable-fcp-cache variable is set to on in the snapdrive.conf file. |
0001-389 | NA | Admin | Cannot get HBA type for HBA assistant linuxfcp | This occurs if you have a large number of LUNs connected to your host system. Check whether the enable-fcp-cache variable is set to on in the snapdrive.conf file. |
0001-389 | NA | Admin | Cannot get HBA type for HBA assistant vmwarefcp | Before you create storage, verify that the virtual interface has been configured for SnapDrive for UNIX by using the snapdrive config set -viadmin command. |
0001-682 | NA | Admin | Host preparation for new LUNs failed: This functionality checkControllers is not supported. | Execute the command again for the SnapDrive operation to be successful. |
0001-859 | NA | Admin | None of the host's interfaces have NFS permissions to access directory <directory name> on storage system <storage system name> | In the snapdrive.conf file, ensure that the check-export-permission-nfs-clone configuration variable is set to off. |
0002-253 | NA | Admin | Flex clone creation failed | This is a storage-system-side error. Collect the sd-trace.log and the storage system logs to troubleshoot it. |
0002-264 | NA | Admin | FlexClone is not supported on filer <filer name> | FlexClone is not supported with the current Data ONTAP version of the storage system. Upgrade the storage system's Data ONTAP version to 7.0 or later and then retry the command. |
0002-265 | NA | Admin | Unable to check flex_clone license on filer <filername> | This is a storage-system-side error. Collect the sd-trace.log and the storage system logs to troubleshoot it. |
0002-266 | NA | Admin | FlexClone is not licensed on filer <filername> | FlexClone is not licensed on the storage system. Retry the command after adding the FlexClone license on the storage system. |
0002-267 | NA | Admin | FlexClone is not supported on root volume <volume-name> | FlexClones cannot be created for root volumes. |
0002-270 | NA | Admin | The free space on the aggregate <aggregate-name> is less than <size> MB (megabytes) required for diskgroup/flexclone metadata | Free enough space on the aggregate, and then retry the command. |
0002-332 | NA | Admin | SD.SnapShot.Restore access denied on qtree storage_array1:/vol/vol1/qtree1 for user lnx197-142\john | Contact the Operations Manager administrator to grant the required capability to the user. |
0002-364 | NA | Admin | Unable to contact DFM: lnx197-146, please change user name and/or password. | Verify and correct the user name and password of the sd-admin user. |
0002-268 | NA | Admin | <volume-Name> is not a flexible volume | FlexClones cannot be created for traditional volumes. |
0003-003 | NA | Admin | Failed to export LUN <LUN_NAME> on storage system <STORAGE_NAME> to the Guest OS. | |
0003-012 | NA | Admin | Virtual Interface Server win2k3-225-238 is not reachable. | NIS is not configured for the host or guest OS. You must provide the name-to-IP mapping in the /etc/hosts file. For example: # cat /etc/hosts 10.72.225.238 win2k3-225-238.eng.org.com win2k3-225-238 |
0001-552 | NA | Command | Not a valid Volume-clone or LUN-clone | A clone split can be performed only on a volume clone or a LUN clone; it is not supported for traditional volumes. |
0001-553 | NA | Command | Unable to split "FS-Name" due to insufficient storage space in <Filer-Name> | The clone split stops because sufficient storage space is not available on the storage system. Free enough space, and then retry the split. |
0003-002 | NA | Command | No more LUN's can be exported to the guest OS. | The number of devices supported by the ESX server for a controller has reached the maximum limit, so you must add more controllers for the guest operating system. Note: The ESX server limits the maximum number of controllers per guest operating system to four. |
9000-023 | 1 | Command | No arguments for keyword -lun | This error occurs when a command with the -lun keyword does not have the lun_name argument. What to do: Do either of the following: 1. Specify the lun_name argument for the command with the -lun keyword. 2. Check the SnapDrive for UNIX help message. |
0001-028 | 1 | Command | File system </mnt/qa/dg4/vol1> is of a type (hfs) not managed by snapdrive. Please resubmit your request, leaving out the file system </mnt/qa/dg4/vol1> | This error occurs when a non-supported file system type is part of a command. What to do: Exclude or update the file system type, and then use the command again. For the latest software compatibility information, see the Interoperability Matrix. |
9000-030 | 1 | Command | -lun may not be combined with other keywords | This error occurs when you combine the -lun keyword with the -fs or -dg keyword. This is a syntax error and indicates invalid usage of command. What to do: Execute the command again only with the -lun keyword. |
0001-034 | 1 | Command | mount failed: mount: <device name> is not a valid block device | This error occurs only when the cloned LUN is already connected to the same filespec present in the Snapshot copy and you then try to execute the snapdrive snap restore command. The command fails because the iSCSI daemon remaps the device entry for the restored LUN when you delete the cloned LUN. What to do: Do either of the following: 1. Execute the snapdrive snap restore command again. 2. Delete the connected LUN (if it is mounted on the same filespec as in the Snapshot copy) before trying to restore a Snapshot copy of the original LUN. |
0001-046 and 0001-047 | 1 | Command | Invalid snapshot name: </vol/vol1/NO_FILER_PREFIX> or Invalid snapshot name: NO_LONG_FILERNAME - filer volume name is missing | This is a syntax error that indicates invalid use of the command, where a Snapshot operation is attempted with an invalid Snapshot name. What to do: Complete the following steps: 1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies. 2. Execute the command with the long_snap_name argument. |
9000-047 | 1 | Command | More than one -snapname argument given | SnapDrive for UNIX cannot accept more than one Snapshot name in the command line for performing any Snapshot operations. What to do: Execute the command again, with only one Snapshot name. |
9000-049 | 1 | Command | -dg and -vg may not be combined | This error occurs when you combine the -dg and -vg keywords. This is a syntax error and indicates invalid usage of commands. What to do: Execute the command either with the -dg or -vg keyword. |
9000-050 | 1 | Command | -lvol and -hostvol may not be combined | This error occurs when you combine the -lvol and -hostvol keywords. This is a syntax error and indicates invalid usage of the command. What to do: Complete the following steps: 1. Change the -lvol option to the -hostvol option, or vice versa, in the command line. 2. Execute the command. |
9000-057 | 1 | Command | Missing required -snapname argument | This is a syntax error that indicates an invalid usage of command, where a Snapshot operation is attempted without providing the snap_name argument. What to do: Execute the command with an appropriate Snapshot name. |
0001-067 | 6 | Command | Snapshot hourly.0 was not created by snapdrive. | These are the automatic hourly Snapshot copies created by Data ONTAP. |
0001-092 | 6 | Command | snapshot <non_existent_24965> doesn't exist on a filervol exocet: </vol/vol1> | The specified Snapshot copy was not found on the storage system. What to do: Use the snapdrive snap list command to find the Snapshot copies that exist in the storage system. |
0001-099 | 10 | Admin | Invalid snapshot name: <exocet:/vol2/dbvol:NewSnapName> doesn't match filer volume name <exocet:/vol/vol1> | This is a syntax error that indicates invalid use of the command, where a Snapshot operation is attempted with an invalid Snapshot name. What to do: Complete the following steps: 1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies. 2. Execute the command with the correct format of the Snapshot name that is qualified by SnapDrive for UNIX. The qualified formats are: long_snap_name and short_snap_name. |
0001-122 | 6 | Admin | Failed to get snapshot list on filer <exocet>: The specified volume does not exist. | This error occurs when the specified storage system (filer) volume does not exist. What to do: Complete the following steps: 1. Contact the storage administrator to get the list of valid storage system volumes. 2. Execute the command with a valid storage system volume name. |
0001-124 | 111 | Admin | Failed to remove snapshot <snap_delete_multi_inuse_24374> on filer <exocet>: LUN clone | The Snapshot delete operation failed for the specified Snapshot copy because the LUN clone was present. What to do: Complete the following steps: 1. Use the snapdrive storage show command with the -all option to find the LUN clone for the Snapshot copy (part of the backing Snapshot copy output). 2. Contact the storage administrator to split the LUN from the clone. 3. Execute the command again. |
0001-155 | 4 | Command | Snapshot <dup_snapname23980> already exists on <exocet: /vol/vol1>. Please use -f (force) flag to overwrite existing snapshot | This error occurs if the Snapshot copy name used in the command already exists. What to do: Do either of the following: 1. Execute the command again with a different Snapshot name. 2. Execute the command again with the -f (force) flag to overwrite the existing Snapshot copy. |
0001-158 | 84 | Command | diskgroup configuration has changed since <snapshot exocet:/vol/vol1:overwrite_noforce_25078> was taken. removed hostvol </dev/dg3/vol4> Please use '-f' (force) flag to override warning and complete restore | The disk group can contain multiple LUNs, and when the disk group configuration changes, you encounter this error. For example, when creating a Snapshot copy, the disk group consisted of X number of LUNs; after making the copy, the disk group can have X+Y number of LUNs. What to do: Use the command again with the -f (force) flag. |
0001-185 | NA | Command | storage show failed: no NETAPP devices to show or enable SSL on the filers or retry after changing snapdrive.conf to use http for filer communication. | This problem can occur for the following reasons: 1. The iSCSI daemon or the FC service on the host has stopped or is malfunctioning; the snapdrive storage show -all command fails even if there are configured LUNs on the host. What to do: Resolve the malfunctioning iSCSI or FC service. 2. The storage system on which the LUNs are configured is down or is undergoing a reboot. What to do: Wait until the LUNs are up. 3. The value set for the use-https-to-filer configuration variable might not be a supported configuration. What to do: Complete the following steps: a. Use the sanlun lun show all command to check whether any LUNs are mapped to the host. b. If any LUNs are mapped to the host, follow the instructions mentioned in the error message: change the value of the use-https-to-filer configuration variable (to "on" if the value is "off"; to "off" if the value is "on"). |
0001-226 | 3 | Command | 'snap create' requires all filespecs to be accessible Please verify the following inaccessible filespec(s): File System: </mnt/qa/dg1/vol3> | This error occurs when the specified host entity does not exist. What to do: Use the snapdrive storage show command again with the -all option to find the host entities which exist on the host. |
0001-242 | 18 | Admin | Unable to connect to filer: <filername> | SnapDrive for UNIX attempts to connect to the storage system through the secure HTTP protocol. The error can occur when the host is unable to connect to the storage system. What to do: Complete the following steps: 1. Network problems: a. Use the nslookup command to check the DNS name resolution for the storage system that works through the host. b. Add the storage system to the DNS server if it does not exist. You can also use an IP address instead of a host name to connect to the storage system. 2. Storage system configuration: a. For SnapDrive for UNIX to work, you must have the license key for secure HTTP access. b. After the license key is set up, check that you can access the storage system through a Web browser. 3. Execute the command after performing Step 1 or Step 2 or both. |
0001-243 | 10 | Command | Invalid dg name: <SDU_dg1> | This error occurs when the disk group is not present in the host and subsequently the command fails. For example, SDU_dg1 is not present in the host. What to do: Complete the following steps: 1. Use the snapdrive storage show -all command to get all the disk group names. 2. Execute the command again with the correct disk group name. |
0001-246 | 10 | Command | Invalid hostvolume name: </mnt/qa/dg2/BADFS>, the valid format is <vgname/hostvolname>, i.e. <mygroup/vol2> | What to do: Execute the command again with the appropriate format for the host volume name: vgname/hostvolname |
0001-360 | 34 | Admin | Failed to create LUN </vol/badvol1/nanehp13_unnewDg_fve_SdLun> on filer <exocet>: No such volume | This error occurs when the specified path includes a storage system volume that does not exist. What to do: Contact your storage administrator to get the list of storage system volumes that are available for use. |
0001-372 | 58 | Command | Bad lun name:: </vol/vol1/sce_lun2a> - format not recognized | This error occurs if the LUN names specified in the command do not adhere to the predefined format that SnapDrive for UNIX supports: <filer-name>:/vol/<volname>/<lun-name> What to do: Complete the following steps: 1. Use the snapdrive help command to see the predefined format for LUN names that SnapDrive for UNIX supports. 2. Execute the command again. |
0001-373 | 6 | Command | The following required 1 LUN(s) not found: exocet:</vol/vol1/NotARealLun> | This error occurs when the specified LUN is not found on the storage system. What to do: Do either of the following: 1. To see the LUNs connected to the host, use the snapdrive storage show -dev command or the snapdrive storage show -all command. 2. To see the entire list of LUNs on the storage system, contact the storage administrator to get the output of the lun show command from the storage system. |
0001-377 | 43 | Command | Disk group name <name> is already in use or conflicts with another entity. | This error occurs when the disk group name is already in use or conflicts with another entity. What to do: Do either of the following: 1. Execute the command with the -autorename option. 2. Use the snapdrive storage show command with the -all option to find the names that the host is using, and then execute the command specifying another name that the host is not using. |
0001-380 | 43 | Command | Host volume name <dg3/vol1> is already in use or conflicts with another entity. | This error occurs when the host volume name is already in use or conflicts with another entity. What to do: Do either of the following: 1. Execute the command with the -autorename option. 2. Use the snapdrive storage show command with the -all option to find the names that the host is using, and then execute the command specifying another name that the host is not using. |
0001-417 | 51 | Command | The following names are already in use: <mydg1>. Please specify other names. | What to do: Do either of the following: 1. Execute the command again with the -autorename option. 2. Use the snapdrive storage show -all command to find the names that exist on the host, and then execute the command again explicitly specifying another name that the host is not using. |
0001-430 | 51 | Command | You cannot specify both -dg/vg dg and -lvol/hostvol dg/vol | This is a syntax error that indicates invalid usage of the command. The command line can accept either the -dg/vg keyword or the -lvol/hostvol keyword, but not both. What to do: Execute the command with only the -dg/vg or -lvol/hostvol keyword. |
0001-434 | 6 | Command | snapshot exocet:/vol/vol1:NOT_EXIST doesn't exist on a storage volume exocet:/vol/vol1 | This error occurs when the specified Snapshot copy is not found on the storage system. What to do: Use the snapdrive snap list command to find the Snapshot copies that exist in the storage system. |
0001-435 | 3 | Command | You must specify all host volumes and/or all file systems on the command line or give the -autoexpand option. The following names were missing on the command line but were found in snapshot <snap2_5VG_SINGLELUN_REMOTE>: Host Volumes: <dg3/vol2> File Systems: </mnt/qa/dg3/vol2> | The specified disk group has multiple host volumes or file systems, but the complete set is not mentioned in the command. What to do: Do either of the following: 1. Reissue the command with the -autoexpand option. 2. Use the snapdrive snap show command to find the entire list of host volumes and file systems, and then execute the command specifying all the host volumes or file systems. |
0001-440 | 6 | Command | snapshot snap2_5VG_SINGLELUN_REMOTE does not contain disk group 'dgBAD' | This error occurs when the specified disk group is not part of the specified Snapshot copy. What to do: To find whether there is any Snapshot copy for the specified disk group, do the following: 1. Use the snapdrive snap list command to find the Snapshot copies in the storage system. 2. Use the snapdrive snap show command to find the disk groups, host volumes, file systems, or LUNs that are present in the Snapshot copy. 3. If a Snapshot copy exists for the disk group, execute the command with the Snapshot name. |
0001-442 | 1 | Command | More than one destination - <dis> and <dis1> specified for a single snap connect source <src>. Please retry using separate commands. | What to do: Execute a separate snapdrive snap connect command, so that the new destination disk group name (which is part of the snap connect command) is not the same as what is already part of the other disk group units of the same snapdrive snap connect command. |
0001-465 | 1 | Command | The following filespecs do not exist and cannot be deleted: Disk Group: <nanehp13_dg1> | The specified disk group does not exist on the host, so the deletion operation for the specified disk group failed. What to do: See the list of entities on the host by using the snapdrive storage show command with the -all option. |
0001-476 | NA | Admin | Unable to discover the device associated with <long lun name> If multipathing in use, there may be a possible multipathing configuration error. Please verify the configuration and then retry. | There can be many reasons for this failure, and they are difficult to diagnose in an algorithmic or sequential manner. What to do: NetApp recommends that before you use SnapDrive for UNIX, you follow the steps recommended in the Host Utilities Setup Guide (for the specific operating system) for discovering LUNs manually. After you discover LUNs, use the SnapDrive for UNIX commands. |
0001-486 | 12 | Admin | LUN(s) in use, unable to delete. Please note it is dangerous to remove LUNs that are under Volume Manager control without properly removing them from Volume Manager control first. | SnapDrive for UNIX cannot delete a LUN that is part of a volume group. What to do: Complete the following steps: 1. Delete the disk group by using the snapdrive storage delete -dg <dgname> command. 2. Delete the LUN. |
0001-494 | 12 | Command | Snapdrive cannot delete <mydg1>, because 1 host volumes still remain on it. Use -full flag to delete all file systems and host volumes associated with <mydg1> | SnapDrive for UNIX cannot delete a disk group until all the host volumes on the disk group are explicitly requested to be deleted. What to do: Do either of the following: 1. Specify the -full flag in the command. 2. Complete the following steps: a. Use the snapdrive storage show -all command to get the list of host volumes that are on the disk group. b. Mention each of them explicitly in the SnapDrive for UNIX command. |
0001-541 | 65 | Command | Insufficient access permission to create a LUN on filer, <exocet>. | SnapDrive for UNIX uses the sd-hostname.prbac or sdgeneric.prbac file on the root storage system (filer) volume for its pseudo access control mechanism. What to do: Do either of the following: 1. Modify the sd-hostname.prbac or sdgeneric.prbac file on the storage system to include the requisite permissions (can be one or many): NONE, SNAP CREATE, SNAP USE, SNAP ALL, STORAGE CREATE DELETE, STORAGE USE, STORAGE ALL, ALL ACCESS. 2. In the snapdrive.conf file, ensure that the all-access-if-rbac-unspecified configuration variable is set to "on". |
0001-559 | NA | Admin | Detected I/Os while taking snapshot. Please quiesce your application. See Snapdrive Admin. Guide for more information. | This error occurs if you try to create a Snapshot copy, while parallel input/output operations occur on the file specification and the value of snapcreate-cg-timeout is set to urgent. What to do: Increase the value of consistency groups time out by setting the value of snapcreate-cg-timeout to relaxed. |
0001-570 | 6 | Command | Disk group <dg1> does not exist and hence cannot be resized | This error occurs when the disk group is not present in the host and subsequently the command fails. What to do: Complete the following steps: 1. Use the snapdrive storage show -all command to get all the disk group names. 2. Execute the command with the correct disk group name. |
0001-574 | 1 | Command | <VmAssistant> lvm does not support resizing LUNs in disk groups | This error occurs when the volume manager used to perform this task does not support LUN resizing. SnapDrive for UNIX depends on the volume manager solution to support LUN resizing if the LUN is part of a disk group. What to do: Check whether the volume manager that you are using supports LUN resizing. |
0001-616 | 6 | Command | 1 snapshot(s) NOT found on filer: exocet:/vol/vol1:MySnapName> | This is a syntax error that indicates invalid use of the command, where a Snapshot operation is attempted with an invalid Snapshot name. What to do: Complete the following steps: 1. Use the snapdrive snap list -filer <filer-volume-name> command to get a list of Snapshot copies. 2. Execute the command with the long_snap_name argument. |
0001-640 | 1 | Command | Root file system / is not managed by snapdrive | This error occurs when the root file system on the host is not supported by SnapDrive for UNIX. This is an invalid request to SnapDrive for UNIX. |
0001-684 | 45 | Admin | Mount point <fs_spec> already exists in mount table | What to do: Do either of the following: 1. Execute the SnapDrive for UNIX command with a different mount point. 2. Check that the mount point is not in use, and then manually (using any editor) delete the entry from the following file: Linux: /etc/fstab |
0001-796 and 0001-767 | 3 | Command | 0001-796 and 0001-767 | SnapDrive for UNIX does not support more than one LUN in the same command with the -nolvm option. What to do: Do either of the following: 1. Use the command again to specify only one LUN with the -nolvm option. 2. Use the command without the -nolvm option. This uses the supported volume manager present in the host, if any. |
0001-876 | NA | Admin | HBA assistant not found | This error might occur under either of the following conditions: a. The HBA service is not running; you get this error when executing SnapDrive for UNIX commands such as snapdrive storage create or snapdrive config prepare luns. Workaround: Check the status of the FC or iSCSI service. If it is not running, start the service and execute the SnapDrive for UNIX command. b. The 32-bit libnl libraries are missing on 64-bit versions of operating systems. Workaround: 1. Execute ls /usr/lib/libnl* on the host to check the libraries. 2. Check whether the libnl files are missing from this location and whether they are available in the /usr/lib64 location. 3. If the 32-bit libraries are missing, install the 32-bit libnl libraries from the operating system distribution media or server. 4. Reinstall the HBA anywhere software. |
2715 | NA | NA | Volume restore zephyr not available for the filer <filename>. Please proceed with lun restore | For older Data ONTAP versions, the volume restore ZAPI is not available. Reissue the command with SFSR (single file SnapRestore). |
2278 | NA | NA | SnapShots created after <snapname> do not have volume clones ... FAILED | Split or delete the clones. |
2280 | NA | NA | LUNs mapped and not in active or SnapShot <filespec-name> FAILED | Unmap or disconnect (snapdrive storage disconnect) the host entities. |
2282 | NA | NA | No SnapMirror relationships exist ... FAILED | Delete or break the SnapMirror relationships, and then retry; or, if you are certain the relationships are no longer needed, proceed with the -force option. |
2286 | NA | NA | LUNs not owned by <fsname> are application consistent in snapshotted volume … FAILED. Snapshot luns not owned by <fsname> which may be application inconsistent | Verify that the LUNs mentioned in the check results are not in use. Only then, use the -force option. |
2289 | NA | NA | No new LUNs created after snapshot <snapname> ... FAILED | Verify that the LUNs mentioned in the check results are not in use. Only then, use the -force option. |
2290 | NA | NA | Could not perform inconsistent and newer Luns check. Snapshot version is prior to SDU 4.0 | This happens with SnapDrive 3.0 for UNIX Snapshots when used with -vbsr. Manually check that any newer LUNs created will no longer be used, and then proceed with the -force option. |
2292 | NA | NA | No new SnapShots exist... FAILED. SnapShots created will be lost. | Check that the Snapshot copies mentioned in the check results will no longer be used, and if so, proceed with the -force option. |
2297 | NA | NA | Both normal file(s) and LUN(s) exist ... FAILED | Ensure that the files and LUNs mentioned in the check results will not be used anymore, and if so, proceed with the -force option. |
2302 | NA | NA | NFS export list does not have foreign hosts ... FAILED | Contact the storage administrator to remove the foreign hosts from the export list or ensure that the foreign hosts are not using the volumes through NFS. |
9000-305 | NA | Command | Could not detect type of the entity /mnt/my_fs. Provide a specific option (-lun, -dg, -fs or -lvol) if you know the type of the entity | Verify whether the entity already exists on the host. If you know the type of the entity, provide the file-spec type. |
9000-303 | NA | Command | Multiple entities with the same name - /mnt/my_fs exist on this host. Provide a specific option (-lun, -dg, -fs or -lvol) for the entity you have specified. | Multiple entities with the same name exist on the host. You must provide the file-spec type explicitly. |
9000-304 | NA | Command | /mnt/my_fs is detected as keyword of type file system, which is not supported with this command. | The operation on the auto-detected file_spec is not supported with this command. Check the help for the respective operation. |
9000-301 | NA | Command | Internal error in auto detection | Auto-detection engine error. Provide the trace and daemon logs for further analysis. |
NA | NA | Command | snapdrive.dc tool unable to compress data on RHEL 5Ux environment | Compression utility is not installed by default. You must install the compression utility ncompress, for example ncompress-4.2.4-47.i386.rpm. To install the compression utility, enter the following command: rpm -ivh ncompress-4.2.4-47.i386.rpm |
NA | NA | Command | Invalid filespec | This error occurs when the specified host entity does not exist or is inaccessible. |
NA | NA | Command | Job Id is not valid | This message is displayed for the clone split status, result, or stop operation if the specified job ID is invalid or the result of the job has already been queried. You must specify a valid or available job ID and retry the operation. |
NA | NA | Command | Split is already in progress | This message is displayed when a clone split operation is already in progress for the specified filespec. Wait for the split to complete; you can monitor it with the clone split status operation. |
NA | NA | Command | Not a valid Volume-Clone or LUN-Clone | Specified filespec or LUN pathname is not a valid volume clone or LUN clone. |
NA | NA | Command | No space to split volume | This error occurs because the storage space required to split the volume is not available. Free enough space in the aggregate to split the volume clone. |
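Several rows above (0001-395, 0001-389, 0001-859, 0001-185) are resolved by changing a variable in snapdrive.conf and restarting the snapdrive daemon. The following is a minimal sketch of that edit; it operates on a throwaway sample file so it can be run anywhere, and the config path in the comment is an assumption for a typical Linux install, not taken from this table:

```shell
#!/bin/sh
# Sketch: flip a snapdrive.conf variable (here enable-fcp-cache) and show the
# before/after values. CONF is a scratch file for illustration; on a real host
# edit the actual snapdrive.conf (commonly /opt/NetApp/snapdrive/snapdrive.conf,
# an assumed path) and then restart the daemon: snapdrive daemon restart
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# sample fragment of snapdrive.conf
enable-fcp-cache=off
check-export-permission-nfs-clone=on
EOF

grep '^enable-fcp-cache' "$CONF"                      # prints: enable-fcp-cache=off
sed -i 's/^enable-fcp-cache=.*/enable-fcp-cache=on/' "$CONF"
grep '^enable-fcp-cache' "$CONF"                      # prints: enable-fcp-cache=on
rm -f "$CONF"
```

The same pattern applies to the other variables named in the table, such as check-export-permission-nfs-clone and use-https-to-filer.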
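For error 0003-012, where NIS is unavailable and the Virtual Interface Server name cannot be resolved, the fix is a static name-to-IP mapping in /etc/hosts. A sketch of that edit, using a scratch file in place of the real /etc/hosts so it runs safely; the IP address and host names are the examples from the table row:

```shell
#!/bin/sh
# Sketch: add the name-to-IP mapping suggested by the 0003-012 row.
# HOSTS points at a scratch file here; on a real guest OS use /etc/hosts.
HOSTS=$(mktemp)
IP=10.72.225.238
FQDN=win2k3-225-238.eng.org.com
SHORT=win2k3-225-238

# Append the mapping only if the short name is not already present.
grep -qw "$SHORT" "$HOSTS" || printf '%s %s %s\n' "$IP" "$FQDN" "$SHORT" >> "$HOSTS"
cat "$HOSTS"    # prints: 10.72.225.238 win2k3-225-238.eng.org.com win2k3-225-238
rm -f "$HOSTS"
```

The grep guard keeps the edit idempotent, so rerunning the script does not duplicate the entry.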
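Error 0001-876 asks you to check whether the 32-bit libnl libraries are present. That check can be scripted; this is a sketch only, with LIBDIR made overridable so the logic can be exercised against any directory (the default /usr/lib matches the path in the table row):

```shell
#!/bin/sh
# Sketch for the 0001-876 row: report whether 32-bit libnl libraries exist.
# LIBDIR defaults to /usr/lib as in the table; override it to test elsewhere.
LIBDIR=${LIBDIR:-/usr/lib}
if ls "$LIBDIR"/libnl* >/dev/null 2>&1; then
    echo "32-bit libnl present in $LIBDIR"
else
    echo "32-bit libnl missing in $LIBDIR; install the 32-bit libnl package"
fi
```

If the libraries are missing, install them from the distribution media and then reinstall the HBA software, as the table row describes.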