
Determining Flash Pool candidacy and optimal cache size

Before converting an existing aggregate to a Flash Pool aggregate, you can determine whether the aggregate is I/O bound and what the best Flash Pool cache size would be for your workload and budget. You can also check whether the cache of an existing Flash Pool aggregate is sized correctly.

Before you begin

You should know approximately when the aggregate you are analyzing experiences its peak load.

Steps

  1. Enter advanced mode: set advanced
  2. If you need to determine whether an existing aggregate would be a good candidate for conversion to a Flash Pool aggregate, determine how busy the disks in the aggregate are during a period of peak load, and how that is affecting latency: statistics show-periodic -object disk:raid_group -instance raid_group_name -counter disk_busy|user_read_latency -interval 1 -iterations 60
    You can decide whether reducing latency by adding Flash Pool cache makes sense for this aggregate.
    Example
    The following command shows the statistics for the first RAID group of the aggregate "aggr1": statistics show-periodic -object disk:raid_group -instance /aggr1/plex0/rg0 -counter disk_busy|user_read_latency -interval 1 -iterations 60
  3. Start AWA: system node run -node node_name wafl awa start aggr_name
    AWA begins collecting workload data for the volumes associated with the specified aggregate.
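    Example
    The following command starts AWA on the aggregate "aggr1"; the node name "node1" is an example value, so substitute the name of the node that owns your aggregate: system node run -node node1 wafl awa start aggr1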
  4. Exit advanced mode: set admin
  5. Allow AWA to run until one or more intervals of peak load have occurred.
    AWA analyzes data over a rolling one-week window. If AWA runs for longer than one week, it reports only on the data collected during the most recent week. Cache size estimates are based on the highest loads seen during the data collection period; the load does not need to be high for the entire period.
    AWA collects workload statistics for the volumes associated with the specified aggregate.
  6. Enter advanced mode: set advanced
  7. Display the workload analysis: system node run -node node_name wafl awa print
    AWA displays the workload statistics and optimal Flash Pool cache size.
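    Example
    The following command displays the analysis on the same example node: system node run -node node1 wafl awa print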
  8. Stop AWA: system node run -node node_name wafl awa stop
    All workload data is flushed and is no longer available for analysis.
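    Example
    The following command stops AWA on the same example node: system node run -node node1 wafl awa stop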
  9. Exit advanced mode: set admin

Example

In the following example, AWA was run on the aggregate "aggr1". Here is the output of the awa print command after AWA had been running for about 3 days (442 10-minute intervals, or roughly 442 × 600 = 265,200 seconds, matching the reported total runtime to within one interval):

### FP AWA Stats ###
 

Basic Information
 
                Aggregate aggr1
             Current-time Thu Jul 31 12:07:07 CEST 2014
               Start-time Mon Jul 28 16:02:21 CEST 2014
      Total runtime (sec) 264682
    Interval length (sec) 600
          Total intervals 442
        In-core Intervals 1024
 
Summary of the past 442 intervals
                                   max 
          Read Throughput       39.695 MB/s
         Write Throughput       17.581 MB/s
       Cacheable Read (%)           92 %
      Cacheable Write (%)           83 %
Max Projected Cache Size          114 GiB
   Projected Read Offload           82 %
  Projected Write Offload           82 %
 
Summary Cache Hit Rate vs. Cache Size
 
       Size        20%        40%        60%        80%       100% 
   Read Hit         34         51         66         75         82 
  Write Hit         35         44         53         62         82 
 
The entire results and output of Automated Workload Analyzer (AWA) are
estimates. The format, syntax, CLI, results, and output of AWA may
change in future Data ONTAP releases. AWA reports the projected cache
size in capacity. It does not make recommendations regarding the
number of data SSDs required. Please follow the guidelines for
configuring and deploying Flash Pool that are provided in tools and
collateral documents. These include verifying the platform cache size
maximums and the minimum and maximum number of data SSDs.
 
 
 
### FP AWA Stats End ###
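
As a rough guide to reading this output, if the percentages in the cache hit rate table are taken as fractions of the Max Projected Cache Size (114 GiB in this example), a cache of 40% of 114 GiB, or about 46 GiB, would be projected to serve 51% of cacheable reads and 44% of cacheable writes, while the full 114 GiB would achieve the 82% read and write offloads shown in the summary.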

The results provide the following pieces of information: