
NetApp ADP – Overview

October 4, 2016

Since the "GA" release of Clustered Data ONTAP (CDoT) 8.3, and in all subsequent releases, you have been able to utilise NetApp's Advanced Drive Partitioning (ADP).

Today ADP has two main implementations: Root-data partitioning and Flash Pool SSD Partitioning.

Root-Data Partitioning

Before ADP arrived in 8.3, the smaller systems suffered from excessive drive usage just to get the system running. Best practice says that each node requires a 3-drive root aggregate (that's 6 of the 12 drives gone), plus 2 hot spares (now you've lost 8), leaving just 4 drives, so your options were extremely limited. Of those 4 remaining drives, RAID-DP takes 2 for parity, leaving only 2 actual data drives. Divide the number of data drives by the total number of drives and you get a storage efficiency of ~17% (this does not sit well with customers).
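
To make that arithmetic concrete, here is a minimal sketch of the drive budget; the 12-drive shelf, root aggregate, spare and RAID-DP parity counts are all taken from the example above:

```python
# Pre-ADP drive budget on a 12-drive entry-level HA pair (example above)
total_drives = 12
root_drives = 3 * 2      # 3-drive root aggregate per node, 2 nodes
hot_spares = 2           # 1 per node
raid_dp_parity = 2       # one RAID-DP data aggregate (active-passive)

data_drives = total_drives - root_drives - hot_spares - raid_dp_parity
efficiency = data_drives / total_drives

print(f"data drives: {data_drives}")            # 2
print(f"storage efficiency: {efficiency:.0%}")  # ~17%
```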

NetApp recognised that this was a real sticking point for customers, and so introduced ADP for the lower-end systems and All Flash FAS arrays to restore these units' competitive edge.

Without ADP you have an active-passive configuration:

Being active-passive means that only one node of the pair actively accesses the data aggregate; should that controller fail, its partner takes over and continues operations. For most customers this was not ideal.

In the above example we lose a total of (8) drives to the root aggregates and parity, and at least (1) drive per node is used as a hot spare. This leaves only (2) drives to store actual data, which nets 1.47TB of usable capacity. With ADP, instead of having to dedicate whole disks to each node's root aggregate, we can logically partition each drive into two separate segments: a smaller segment for the root aggregate and a larger segment for data.
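
As a rough sanity check on that 1.47TB figure, the sketch below assumes ~735GB of usable capacity per drive after right-sizing and reserves; the post does not state the drive size, so that per-drive figure is an assumption chosen to match the example:

```python
# Usable capacity from the 2 surviving data drives. The ~735GB usable
# per drive is an illustrative assumption (the post gives no raw size).
data_drives = 2
usable_per_drive_gb = 735

usable_tb = data_drives * usable_per_drive_gb / 1000
print(f"usable capacity: {usable_tb:.2f} TB")  # ~1.47 TB
```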

[Figure: high-level view of ADP root-data partitioning]

With ADP you now have either active-passive or active-active:

By using ADP you now have significantly more usable capacity for data. In an active-passive configuration, where all remaining space is used for data, dividing the number of data partitions by the total number of drives gives a storage efficiency of ~77% (this is far more palatable for customers).

[Figure: ADP active-passive layout]

You can also split the data partitions between two data aggregates and run active-active; however, you lose more capacity to the parity of two data aggregates compared with just one, so efficiency drops from ~77% to around ~62%.
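
Here is a sketch of the same efficiency calculation with ADP, assuming every drive contributes a data partition, one partitioned drive is kept as a spare, and RAID-DP parity is paid per data aggregate. The post's ~77% and ~62% also reflect partition-sizing details this simple model ignores, so it lands slightly lower but in the same neighbourhood:

```python
# ADP efficiency sketch: every drive is partitioned, so the root
# aggregates no longer consume whole drives. Assumptions (mine, not the
# post's exact model): 1 spare drive, RAID-DP (2 parity) per aggregate.
total_drives = 12
spares = 1

def efficiency(data_aggregates: int) -> float:
    parity = 2 * data_aggregates  # RAID-DP per data aggregate
    data_partitions = total_drives - spares - parity
    return data_partitions / total_drives

print(f"active-passive (1 aggregate):  {efficiency(1):.0%}")  # ~75%; post says ~77%
print(f"active-active  (2 aggregates): {efficiency(2):.0%}")  # ~58%; post says ~62%
```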

[Figure: adding drives to an ADP system]

Constraints to consider before leveraging root-data partitioning

  • HDD types that are not available as internal drives: ATA, FCAL and MSATA
  • 100GB SSDs cannot be leveraged to create root partitions
  • MetroCluster and ONTAP-v do not support root partitions
  • Aggregates composed of partitioned drives must use RAID-DP.
  • Supported only on entry-level systems (FAS2240, FAS2520, FAS2552, FAS2554) and All Flash FAS systems (systems with only SSDs attached).
  • Removing or failing one drive will cause a RAID rebuild and slight performance degradation for both the node's root aggregate and the underlying data aggregate.

Flash Pool SSD Partitioning

Previously, Flash Pools were created by dedicating a subset of SSDs, in either a RAID-4 or RAID-DP raid group, to an existing spinning-disk aggregate. In clusters with multiple aggregates this traditional approach is very wasteful. For example, if a 2-node cluster had (4) data aggregates, (3) on one node and (1) on the other, the system would require a minimum of (10) drives in total to provide just (1) caching drive per data aggregate. If these are 400GB SSDs, each aggregate would net only 330GB (right-sized actual) of cache out of the 4TB total raw.
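
The waste is easy to quantify. A minimal sketch using the post's numbers, assuming RAID-4 cache raid groups of 1 caching plus 1 parity SSD per aggregate and 1 spare per node:

```python
# Traditional (pre-8.3) Flash Pool: dedicated SSDs per data aggregate
aggregates = 4
drives_per_aggregate = 2   # RAID-4 minimum: 1 caching (data) + 1 parity SSD
spares = 2                 # 1 per node
raw_per_ssd_gb = 400
rightsized_gb = 330        # right-sized usable capacity per 400GB SSD

total_ssds = aggregates * drives_per_aggregate + spares   # 10 SSDs
raw_total_gb = total_ssds * raw_per_ssd_gb                # 4000GB (4TB raw)
cache_per_aggregate_gb = 1 * rightsized_gb                # 330GB each

print(f"SSDs required: {total_ssds}")
print(f"cache per aggregate: {cache_per_aggregate_gb} GB "
      f"out of {raw_total_gb} GB raw")
```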

With CDoT 8.3 a new concept, "Storage Pools", was introduced, which increases cache-allocation agility by letting you provision cache based on capacity rather than number of drives. A Storage Pool allows the administrator to create one or more logical "pools" of SSDs, each divided into (4) equal slices (allocation units) that can then be applied to existing data aggregates on either node in the HA pair.

[Figure: Storage Pool divided into allocation units]

Using the same example as before, creating one large Storage Pool from (10) 400GB SSDs nets a total of (4) allocation units, each with 799.27GB of usable cache capacity, which can then be applied to our four separate aggregates.
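
The allocation-unit arithmetic, sketched with the same approximate right-sized figure as above; the post's 799.27GB per unit reflects exact right-sized capacities, so this rough model lands slightly higher:

```python
# Storage Pool: 10 SSDs, each carved into 4 partitions; an allocation
# unit is one partition from every SSD, so the pool yields 4 units.
ssd_count = 10
rightsized_gb = 330    # approximate usable capacity per 400GB SSD
allocation_units = 4

au_capacity_gb = ssd_count * rightsized_gb / allocation_units
print(f"per allocation unit: {au_capacity_gb:.2f} GB")  # 825.00; post quotes 799.27
```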

By default, when a storage pool is created, (2) allocation units are assigned to each node in the HA pair. These allocation units can be re-assigned to whichever node or aggregate needs them, as long as they stay within the same HA pair. This makes far better use of your SSDs than the previous method, where the SSDs were allocated to one aggregate and there they would remain.

 


 

Pete Green
Fast Lane Lead NetApp Expert

