Module netapp_ontap.resources.aggregate

Copyright © 2022 NetApp Inc. All rights reserved.

Updating storage aggregates

The PATCH operation is used to modify properties of the aggregate. There are several properties that can be modified on an aggregate. Only one property can be modified for each PATCH request. PATCH operations on the aggregate's disk count will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.
The following is a list of properties that can be modified using the PATCH operation including a brief description for each:

  • name - This property can be changed to rename the aggregate.
  • node.name and node.uuid - Either property can be updated in order to relocate the aggregate to a different node in the cluster.
  • block_storage.mirror.enabled - This property can be changed from 'false' to 'true' in order to mirror the aggregate, if the system is capable of doing so.
  • block_storage.primary.disk_count - This property can be updated to increase the number of disks in an aggregate.
  • block_storage.primary.raid_size - This property can be updated to set the desired RAID size.
  • block_storage.primary.raid_type - This property can be updated to set the desired RAID type.
  • cloud_storage.tiering_fullness_threshold - This property can be updated to set the desired tiering fullness threshold if using FabricPool.
  • data_encryption.software_encryption_enabled - This property enables or disables NAE (NetApp Aggregate Encryption) on the aggregate.

Aggregate expansion

The PATCH operation also supports automatically expanding an aggregate based on the spare disks which are present within the system. Running PATCH with the query "auto_provision_policy" set to "expand" starts the recommended expansion job. In order to see the expected change in capacity before starting the job, call GET on an aggregate instance with the query "auto_provision_policy" set to "expand".
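A sketch of starting the recommended expansion job (the UUID is illustrative; keyword arguments to patch() are sent as query parameters):

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # The UUID here is illustrative; substitute the UUID of your aggregate.
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    # Keyword arguments become query parameters, so this starts the
    # recommended expansion job based on available spare disks.
    resource.patch(hydrate=True, auto_provision_policy="expand")
```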

Manual simulated aggregate expansion

The PATCH operation also supports simulated manual expansion of an aggregate. Running PATCH with the query "simulate" set to "true" and "block_storage.primary.disk_count" set to the final disk count will start running the prechecks associated with expanding the aggregate to the proposed size. The response body will include information on how many disks the aggregate can be expanded to, any associated warnings, along with the proposed final size of the aggregate.

Deleting storage aggregates

If volumes exist on an aggregate, they must be deleted or moved before the aggregate can be deleted. See the /storage/volumes API for details on moving or deleting volumes.
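Once any volumes have been moved or deleted, the aggregate itself can be deleted. A sketch (the UUID is illustrative); the DELETE request starts a job, which delete() polls to completion by default:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # The UUID here is illustrative; substitute the UUID of your aggregate.
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    # DELETE starts a job on the host; delete() polls it to completion
    # by default (poll=True).
    resource.delete()
```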


Examples

Retrieving a specific aggregate from the cluster

The following example shows the response of the requested aggregate. If there is no aggregate with the requested UUID, an error is returned.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="870dd9f2-bdfa-4167-b692-57d1cec874d4")
    resource.get()
    print(resource)

Aggregate(
    {
        "home_node": {"uuid": "caf95bec-f801-11e8-8af9-005056bbe5c1", "name": "node-1"},
        "name": "test1",
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "create_time": "2018-12-04T15:40:38-05:00",
        "snapshot": {
            "max_files_available": 5,
            "files_used": 3,
            "max_files_used": 50,
            "files_total": 10,
        },
        "space": {
            "efficiency": {
                "ratio": 6.908119720880661,
                "logical_used": 1646350,
                "savings": 1408029,
            },
            "efficiency_without_snapshots_flexclones": {
                "ratio": 2.0,
                "logical_used": 10000,
                "savings": 5000,
            },
            "efficiency_without_snapshots": {
                "ratio": 1.0,
                "logical_used": 737280,
                "savings": 0,
            },
            "snapshot": {
                "total": 5000,
                "available": 2000,
                "used_percent": 45,
                "used": 3000,
                "reserve_percent": 20,
            },
            "cloud_storage": {"used": 0},
            "block_storage": {
                "full_threshold_percent": 98,
                "volume_footprints_percent": 14,
                "data_compaction_space_saved_percent": 47,
                "used": 43061248,
                "aggregate_metadata_percent": 8,
                "available": 191942656,
                "used_including_snapshot_reserve_percent": 35,
                "size": 235003904,
                "physical_used": 5271552,
                "aggregate_metadata": 2655,
                "volume_deduplication_shared_count": 567543,
                "physical_used_percent": 1,
                "data_compacted_count": 666666,
                "used_including_snapshot_reserve": 674685,
                "volume_deduplication_space_saved": 23765,
                "data_compaction_space_saved": 654566,
                "volume_deduplication_space_saved_percent": 32,
            },
        },
        "state": "online",
        "data_encryption": {
            "software_encryption_enabled": False,
            "drive_protection_enabled": False,
        },
        "cloud_storage": {"attach_eligible": False},
        "snaplock_type": "non_snaplock",
        "node": {"uuid": "caf95bec-f801-11e8-8af9-005056bbe5c1", "name": "node-1"},
        "block_storage": {
            "mirror": {"state": "unmirrored", "enabled": False},
            "plexes": [{"name": "plex0"}],
            "hybrid_cache": {"enabled": False},
            "primary": {
                "disk_count": 6,
                "raid_type": "raid_dp",
                "raid_size": 24,
                "disk_type": "ssd",
                "disk_class": "solid_state",
                "checksum_style": "block",
            },
        },
    }
)

Retrieving statistics and metric for an aggregate

In this example, the API returns the "statistics" and "metric" properties for the aggregate requested.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="538bf337-1b2c-11e8-bad0-005056b48388")
    resource.get(fields="statistics,metric")
    print(resource)

Aggregate(
    {
        "name": "aggr4",
        "statistics": {
            "iops_raw": {
                "total": 3052032,
                "write": 1137230,
                "read": 328267,
                "other": 1586535,
            },
            "status": "ok",
            "timestamp": "2019-07-08T22:17:09+00:00",
            "latency_raw": {
                "total": 844628724,
                "write": 313354426,
                "read": 54072313,
                "other": 477201985,
            },
            "throughput_raw": {
                "total": 213063348224,
                "write": 63771742208,
                "read": 3106045952,
                "other": 146185560064,
            },
        },
        "uuid": "538bf337-1b2c-11e8-bad0-005056b48388",
        "metric": {
            "status": "ok",
            "timestamp": "2019-07-08T22:16:45+00:00",
            "latency": {"total": 124, "write": 230, "read": 149, "other": 123},
            "throughput": {
                "total": 194141115,
                "write": 840226,
                "read": 7099,
                "other": 193293789,
            },
            "duration": "PT15S",
            "iops": {"total": 11682, "write": 17, "read": 1, "other": 11663},
        },
    }
)

For more information and examples on viewing historical performance metrics for any given aggregate, see DOC /storage/aggregates/{uuid}/metrics

Simulating aggregate expansion

The following example shows the response for a simulated data aggregate expansion based on the value of the 'block_storage.primary.disk_count' attribute passed in. The query does not modify the existing aggregate, but returns how the aggregate will look after the expansion, along with any associated warnings. Simulated data aggregate expansion will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation. The proposed expansion is reflected in the following attributes:

  • space.block_storage.size - Total usable space in bytes, not including WAFL reserve and aggregate Snapshot copy reserve.
  • block_storage.primary.disk_count - Number of disks that could be used to create the aggregate.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.block_storage = {"primary": {"disk_count": 14}}
    resource.patch(hydrate=True, simulate=True)

Retrieving a recommendation for an aggregate expansion

The following example shows the response with the recommended data aggregate expansion based on what disks are present within the system. The query does not modify the existing aggregate but returns how the aggregate will look after the expansion. The recommendation will be reflected in the attributes - 'space.block_storage.size' and 'block_storage.primary.disk_count'. Recommended data aggregate expansion will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="cae60cfe-deae-42bd-babb-ef437d118314")
    resource.get(auto_provision_policy="expand")
    print(resource)

Aggregate(
    {
        "name": "node_2_SSD_1",
        "uuid": "cae60cfe-deae-42bd-babb-ef437d118314",
        "space": {"block_storage": {"size": 1116180480}},
        "node": {
            "_links": {
                "self": {
                    "href": "/api/cluster/nodes/4046dda8-f802-11e8-8f6d-005056bb2030"
                }
            },
            "uuid": "4046dda8-f802-11e8-8f6d-005056bb2030",
            "name": "node-2",
        },
        "block_storage": {
            "mirror": {"enabled": False},
            "hybrid_cache": {"enabled": False},
            "primary": {
                "disk_count": 23,
                "raid_type": "raid_dp",
                "disk_type": "ssd",
                "disk_class": "solid_state",
            },
        },
        "_links": {
            "self": {
                "href": "/api/storage/aggregates/cae60cfe-deae-42bd-babb-ef437d118314"
            }
        },
    }
)

Updating an aggregate in the cluster

The following example shows the workflow of adding disks to the aggregate.
Step 1: Check the current disk count on the aggregate.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.get(fields="block_storage.primary.disk_count")
    print(resource)

Aggregate(
    {
        "name": "test1",
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "block_storage": {"primary": {"disk_count": 6}},
    }
)

Step 2: Update the aggregate with the new disk count in 'block_storage.primary.disk_count'. The response to PATCH is a job unless the request is invalid.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.block_storage = {"primary": {"disk_count": 8}}
    resource.patch()

Step 3: Wait for the job to finish, then call GET to see the reflected change.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="19425837-f2fa-4a9f-8f01-712f626c983c")
    resource.get(fields="block_storage.primary.disk_count")
    print(resource)

Aggregate(
    {
        "name": "test1",
        "uuid": "19425837-f2fa-4a9f-8f01-712f626c983c",
        "block_storage": {"primary": {"disk_count": 8}},
    }
)

The following example shows the workflow to enable software encryption on an aggregate.
Step 1: Check the current software encryption status of the aggregate.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.get(fields="data_encryption.software_encryption_enabled")
    print(resource)

Aggregate(
    {
        "name": "aggr5",
        "uuid": "f3aafdc6-be35-4d93-9590-5a402bffbe4b",
        "data_encryption": {"software_encryption_enabled": False},
    }
)

Step 2: Update the aggregate with the encryption status in 'data_encryption.software_encryption_enabled'. The response to PATCH is a job unless the request is invalid.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.data_encryption = {"software_encryption_enabled": True}
    resource.patch()

Step 3: Wait for the job to finish, then call GET to see the reflected change.

from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate(uuid="f3aafdc6-be35-4d93-9590-5a402bffbe4b")
    resource.get(fields="data_encryption.software_encryption_enabled")
    print(resource)

Aggregate(
    {
        "name": "aggr5",
        "uuid": "f3aafdc6-be35-4d93-9590-5a402bffbe4b",
        "data_encryption": {"software_encryption_enabled": True},
    }
)

Classes

class Aggregate (*args, **kwargs)

Allows interaction with Aggregate objects on the host

Initialize the instance of the resource.

Any keyword arguments are set on the instance as properties. For example, if the class was named 'MyResource', then this statement would be true:

MyResource(name='foo').name == 'foo'
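A minimal sketch (not the real Resource implementation) of how positional and keyword arguments are handled, as described below:

```python
class MyResource:
    """Toy stand-in for a Resource subclass; for illustration only."""

    def __init__(self, *args, **kwargs):
        # Positional arguments would fill in parent keys of the object's URL,
        # in left-to-right order.
        self._parent_keys = args
        # Each keyword argument is set on the instance as a property.
        for key, value in kwargs.items():
            setattr(self, key, value)


# The statement from the docstring holds for this sketch:
assert MyResource(name="foo").name == "foo"
```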

Args

*args
Each positional argument represents a parent key as used in the URL of the object. That is, each value will be used to fill in a segment of the URL which refers to some parent object. The order of these arguments must match the order they are specified in the URL, from left to right.
**kwargs
Each entry will have its key set as an attribute name on the instance and its value will be the value of that attribute.

Ancestors

  • netapp_ontap.resource.Resource

Static methods

def count_collection(*args, connection: HostConnection = None, **kwargs) -> int

Fetch a count of all objects of this type from the host.

This calls GET on the object to determine the number of records. It is more efficient than calling get_collection() because it will not construct any objects. Query parameters can be passed in as kwargs to determine a count of objects that match some filtered criteria.
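A usage sketch; the state filter is illustrative of passing a query as kwargs:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # Count all aggregates, then only those matching a query; the
    # state="online" filter is illustrative.
    total = Aggregate.count_collection()
    online = Aggregate.count_collection(state="online")
    print(total, online)
```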

Args

*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to get the count of bars for a particular foo, the foo.name value should be passed.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host. These query parameters can affect the count. A return_records query param will be ignored.

Returns

On success, returns an integer count of the objects of this type. On failure, returns -1.

Raises

NetAppRestError: If the API call returned a status code >= 400, or if there is no connection available to use either passed in or on the library.

def delete_collection(*args, records: Iterable[_ForwardRef('Aggregate')] = None, body: Union[Resource, dict] = None, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, connection: HostConnection = None, **kwargs) -> NetAppResponse

Deletes the aggregate specified by the UUID. This request starts a job and returns a link to that job.

  • storage aggregate delete

Delete all objects in a collection which match the given query.

All records on the host which match the query will be deleted.
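A sketch of deletion by query; the name pattern is illustrative:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # Delete every aggregate whose name matches the query; the "test*"
    # pattern is illustrative. Each deletion starts a job on the host,
    # which is polled to completion by default.
    Aggregate.delete_collection(name="test*")
```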

Args

*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to delete the collection of bars for a particular foo, the foo.name value should be passed.
records
Can be provided in place of a query. If so, this list of objects will be deleted from the host.
body
The body of the delete request. This could be a Resource instance or a dictionary object.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host. Only resources matching this query will be deleted.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

def find(*args, connection: HostConnection = None, **kwargs) -> Resource

Retrieves the collection of aggregates for the entire cluster.

Expensive properties

There is an added cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the fields query parameter. See Requesting specific fields to learn more.

  • metric.*
  • space.block_storage.inactive_user_data
  • space.block_storage.inactive_user_data_percent
  • space.footprint
  • statistics.*

  • storage aggregate show

Find an instance of an object on the host given a query.

The host will be queried with the provided key/value pairs to find a matching resource. If 0 are found, None will be returned. If more than 1 is found, an error will be raised or returned. If there is exactly 1 matching record, then it will be returned.
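A usage sketch; the name value is illustrative:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # Returns the single matching Aggregate, or None if there is no
    # match; raises if more than one aggregate matches the query.
    aggr = Aggregate.find(name="test1")
    if aggr is not None:
        print(aggr.uuid)
```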

Args

*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to find a bar for a particular foo, the foo.name value should be passed.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host.

Returns

A Resource object containing the details of the object or None if no matches were found.

Raises

NetAppRestError: If the API call returned more than 1 matching resource.

def get_collection(*args, connection: HostConnection = None, max_records: int = None, **kwargs) -> Iterable[Resource]

Retrieves the collection of aggregates for the entire cluster.

Expensive properties

There is an added cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the fields query parameter. See Requesting specific fields to learn more.

  • metric.*
  • space.block_storage.inactive_user_data
  • space.block_storage.inactive_user_data_percent
  • space.footprint
  • statistics.*

  • storage aggregate show

Fetch a list of all objects of this type from the host.

This is a lazy fetch, making API calls only as necessary when the result of this call is iterated over. For instance, if max_records is set to 5, then iterating over the collection causes an API call to be sent to the server once for every 5 records. If the client stops iterating before getting to the 6th record, then no additional API calls are made.
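A sketch of the lazy iteration described above; the max_records value and fields query are illustrative:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # Iteration drives the API calls: with max_records=5, one call is
    # made per 5 records, and breaking out early avoids further calls.
    for aggr in Aggregate.get_collection(max_records=5, fields="name"):
        print(aggr.name)
```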

Args

*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to get the collection of bars for a particular foo, the foo.name value should be passed.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
max_records
The maximum number of records to return per call
**kwargs
Any key/value pairs passed will be sent as query parameters to the host.

Returns

A list of Resource objects

Raises

NetAppRestError: If there is no connection available to use either passed in or on the library. This would not be raised when get_collection() is called, but rather when the result is iterated.

def patch_collection(body: dict, *args, records: Iterable[_ForwardRef('Aggregate')] = None, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, connection: HostConnection = None, **kwargs) -> NetAppResponse

Updates the aggregate specified by the UUID with the properties in the body. This request starts a job and returns a link to that job.

  • storage aggregate add-disks
  • storage aggregate mirror
  • storage aggregate modify
  • storage aggregate relocation start
  • storage aggregate rename

Patch all objects in a collection which match the given query.

All records on the host which match the query will be patched with the provided body.
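A sketch of patching by query; both the RAID size and the name pattern are illustrative:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    # Apply one body to every aggregate matching the query; the RAID
    # size of 24 and the "test*" name pattern are illustrative.
    Aggregate.patch_collection(
        {"block_storage": {"primary": {"raid_size": 24}}}, name="test*"
    )
```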

Args

body
A dictionary of name/value pairs to set on all matching members of the collection. The body argument will be ignored if records is provided.
*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to patch the collection of bars for a particular foo, the foo.name value should be passed.
records
Can be provided in place of a query. If so, this list of objects will be patched on the host.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host. Only resources matching this query will be patched.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

def post_collection(records: Iterable[_ForwardRef('Aggregate')], *args, hydrate: bool = False, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, connection: HostConnection = None, **kwargs) -> Union[List[Aggregate], NetAppResponse]

Automatically creates aggregates based on an optimal layout recommended by the system. Alternatively, properties can be provided to create an aggregate according to the requested specification. This request starts a job and returns a link to that job. POST operations will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.

Required properties

Properties are not required for this API. The following properties are only required if you want to specify properties for aggregate creation:

  • name - Name of the aggregate.
  • node.name or node.uuid - Node on which the aggregate will be created.
  • block_storage.primary.disk_count - Number of disks to be used to create the aggregate.

Default values

If not specified in POST, the following default values are assigned. The remaining unspecified properties will receive system dependent default values.

  • block_storage.mirror.enabled - false
  • snaplock_type - non_snaplock

  • storage aggregate auto-provision
  • storage aggregate create

Example:

POST /api/storage/aggregates {"node": {"name": "node1"}, "name": "test", "block_storage": {"primary": {"disk_count": "10"}}}

Send this collection of objects to the host as a creation request.

Args

records
A list of Resource objects to send to the server to be created.
*args
Each entry represents a parent key which is used to build the path to the child object. If the URL definition were /api/foos/{foo.name}/bars, then to create a bar for a particular foo, the foo.name value should be passed.
hydrate
If set to True, after the response is received from the call, a GET call will be made to refresh all fields of each object. When hydrate is set to True, poll must also be set to True.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
connection
The HostConnection object to use for this API call. If unset, tries to use the connection which is set globally for the library or from the current context.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host.

Returns

A list of Resource objects matching the provided type which have been created by the host and returned. This is not the same list that was provided, so to continue using the objects, you should save this list. If poll is set to False, then a NetAppResponse object is returned instead.

Raises

NetAppRestError: If the API call returned a status code >= 400

Methods

def delete(self, body: Union[Resource, dict] = None, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, **kwargs) -> NetAppResponse

Deletes the aggregate specified by the UUID. This request starts a job and returns a link to that job.

  • storage aggregate delete

Send a deletion request to the host for this object.

Args

body
The body of the delete request. This could be a Resource instance or a dictionary object.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
**kwargs
Any key/value pairs passed will be sent as query parameters to the host.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

def get(self, **kwargs) -> NetAppResponse

Retrieves the aggregate specified by the UUID. The recommend query cannot be used for this operation.

Expensive properties

There is an added cost to retrieving values for these properties. They are not included by default in GET results and must be explicitly requested using the fields query parameter. See Requesting specific fields to learn more.

  • metric.*
  • space.block_storage.inactive_user_data
  • space.block_storage.inactive_user_data_percent
  • space.footprint
  • statistics.*

  • storage aggregate show

Fetch the details of the object from the host.

Requires the keys to be set (if any). After returning, new or changed properties from the host will be set on the instance.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

def patch(self, hydrate: bool = False, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, **kwargs) -> NetAppResponse

Updates the aggregate specified by the UUID with the properties in the body. This request starts a job and returns a link to that job.

  • storage aggregate add-disks
  • storage aggregate mirror
  • storage aggregate modify
  • storage aggregate relocation start
  • storage aggregate rename

Send the difference in the object's state to the host as a modification request.

Calculates the difference in the object's state since the last time we interacted with the host and sends this in the request body.

Args

hydrate
If set to True, after the response is received from the call, a GET call will be made to refresh all fields of the object.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
**kwargs
Any key/value pairs passed will normally be sent as query parameters to the host. If any of these pairs are parameters that are sent as formdata then only parameters of that type will be accepted and all others will be discarded.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

def post(self, hydrate: bool = False, poll: bool = True, poll_interval: Union[int, NoneType] = None, poll_timeout: Union[int, NoneType] = None, **kwargs) -> NetAppResponse

Automatically creates aggregates based on an optimal layout recommended by the system. Alternatively, properties can be provided to create an aggregate according to the requested specification. This request starts a job and returns a link to that job. POST operations will be blocked while one or more nodes in the cluster are simulating or implementing automatic aggregate creation.

Required properties

Properties are not required for this API. The following properties are only required if you want to specify properties for aggregate creation:

  • name - Name of the aggregate.
  • node.name or node.uuid - Node on which the aggregate will be created.
  • block_storage.primary.disk_count - Number of disks to be used to create the aggregate.

Default values

If not specified in POST, the following default values are assigned. The remaining unspecified properties will receive system dependent default values.

  • block_storage.mirror.enabled - false
  • snaplock_type - non_snaplock

  • storage aggregate auto-provision
  • storage aggregate create

Example:

POST /api/storage/aggregates {"node": {"name": "node1"}, "name": "test", "block_storage": {"primary": {"disk_count": "10"}}}
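The same creation request can be sketched with this library; the node name, aggregate name, and disk count mirror the illustrative POST body above:

```python
from netapp_ontap import HostConnection
from netapp_ontap.resources import Aggregate

with HostConnection("<mgmt-ip>", username="admin", password="password", verify=False):
    resource = Aggregate()
    resource.name = "test"
    resource.node = {"name": "node1"}
    resource.block_storage = {"primary": {"disk_count": 10}}
    # POST starts a job on the host; post() polls it to completion
    # by default (poll=True).
    resource.post()
```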

Send this object to the host as a creation request.

Args

hydrate
If set to True, after the response is received from the call, a GET call will be made to refresh all fields of the object.
poll
If set to True, the call will not return until the asynchronous job on the host has completed. Has no effect if the host did not return a job response.
poll_interval
If the operation returns a job, this specifies how often to query the job for updates.
poll_timeout
If the operation returns a job, this specifies how long to continue monitoring the job's status for completion.
**kwargs
Any key/value pairs passed will normally be sent as query parameters to the host. If any of these pairs are parameters that are sent as formdata then only parameters of that type will be accepted and all others will be discarded.

Returns

A NetAppResponse object containing the details of the HTTP response.

Raises

NetAppRestError: If the API call returned a status code >= 400

Inherited members

class AggregateSchema (*, only: Union[Sequence[str], Set[str]] = None, exclude: Union[Sequence[str], Set[str]] = (), many: bool = False, context: Dict = None, load_only: Union[Sequence[str], Set[str]] = (), dump_only: Union[Sequence[str], Set[str]] = (), partial: Union[bool, Sequence[str], Set[str]] = False, unknown: str = None)

The fields of the Aggregate object

Ancestors

  • netapp_ontap.resource.ResourceSchema
  • marshmallow.schema.Schema
  • marshmallow.base.SchemaABC

Class variables

block_storage GET POST PATCH

The block_storage field of the aggregate.

cloud_storage PATCH

The cloud_storage field of the aggregate.

create_time GET

Timestamp of aggregate creation.

Example: 2018-01-01T16:00:00.000+0000

data_encryption GET POST PATCH

The data_encryption field of the aggregate.

dr_home_node GET POST PATCH

The dr_home_node field of the aggregate.

home_node GET POST PATCH

The home_node field of the aggregate.

inactive_data_reporting GET POST PATCH

The inactive_data_reporting field of the aggregate.

links GET

The links field of the aggregate.

metric GET

The metric field of the aggregate.

name GET POST PATCH

Aggregate name.

Example: node1_aggr_1

node GET POST PATCH

The node field of the aggregate.

recommendation_spares GET POST PATCH

Information on the aggregate's remaining hot spare disks.

snaplock_type GET POST

SnapLock type.

Valid choices:

  • non_snaplock
  • compliance
  • enterprise

snapshot GET POST PATCH

The snapshot field of the aggregate.

space GET POST PATCH

The space field of the aggregate.

state GET

Operational state of the aggregate.

Valid choices:

  • online
  • onlining
  • offline
  • offlining
  • relocating
  • unmounted
  • restricted
  • inconsistent
  • failed
  • unknown

statistics GET

The statistics field of the aggregate.

uuid GET

Aggregate UUID.