Learn ONTAP 9.4 FabricPool Enhancements through NetApp ONTAP training

IT students and professionals pursue quality certifications and training to stay abreast of the evolving technology and IT infrastructures in today's organizations. Among the top names in the industry, NetApp training offers some of the most informative and powerful certifications, educating learners on a wide range of topics. Beyond the certifications themselves, students and professionals also consult competent IT blogs that cater specifically to NetApp technology and related topics. Here is a quick refresher to frame our discussion: last year, NetApp officially announced the release of ONTAP 9.4 with new and enhanced features such as FabricPool improvements, stronger security, support for 30TB SSDs, REST API support, SMB multichannel support, and a simpler upgrade path to the new version. In this article, we cover the ONTAP 9.4 FabricPool enhancements and best practices in detail, including data tiering with cloud volumes, performance, private storage, and more.

FabricPool Best Practices

FabricPool, first introduced in ONTAP 9.2, is a NetApp Data Fabric technology that automates the tiering of data to low-cost object storage, either on or off premises. FabricPool benefits IT teams by reducing the total cost of ownership: inactive data is tiered automatically, which lowers storage costs. The cloud-economics benefits are substantial, with tiering to public clouds including Microsoft Azure Blob Storage, IBM Cloud Object Storage, and Amazon S3, as well as to private clouds built on NetApp StorageGRID®. FabricPool is transparent to applications, letting organizations take advantage of cloud economics without sacrificing performance or redesigning solutions for better storage efficiency.
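
As a minimal sketch of how this looks in practice, the ONTAP CLI commands below define an S3 bucket as a cloud tier and attach it to an all-SSD aggregate, turning that aggregate into a FabricPool. The object-store name, bucket, aggregate, and credential placeholders are hypothetical, not values from this article; check the ONTAP documentation for the exact parameters in your environment.

    # Define an Amazon S3 bucket as an external object store (the cloud tier).
    storage aggregate object-store config create -object-store-name s3_tier -provider-type AWS_S3 -server s3.amazonaws.com -container-name fabricpool-demo-bucket -access-key <access-key> -secret-password <secret-key>

    # Attach the object store to an all-SSD aggregate to create a FabricPool.
    storage aggregate object-store attach -aggregate ssd_aggr1 -object-store-name s3_tier

Once the object store is attached, volumes on that aggregate can tier cold data according to their individual tiering policies.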

Primary Use Cases

The core aim of FabricPool is to contain storage footprints and the related costs. Active data stays on high-performance SSDs, while inactive data is tiered to low-cost object storage, preserving ONTAP data efficiencies. The three primary use cases of FabricPool are: reclaim capacity on primary storage, shrink the secondary storage footprint, and shift entire volumes to the cloud. Even though FabricPool can greatly diminish storage footprints, it is not a backup solution. The WAFL® (Write Anywhere File Layout) metadata always remains on the performance tier. If a disaster destroys the performance tier, a new environment cannot be created from the cloud tier alone, because the WAFL metadata no longer exists. For reliable protection that includes the WAFL metadata, users should consider existing ONTAP technologies such as SnapMirror and SnapVault, as sketched below. In addition, FabricPool can also limit costs by tiering data on secondary storage.
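
Because FabricPool volumes still need conventional protection, here is a minimal SnapMirror sketch. It assumes a peered secondary SVM named svm_dr and a pre-created data protection (type DP) destination volume named vol1_dst; all names are hypothetical.

    # Create and initialize a SnapMirror relationship to protect the volume,
    # including the WAFL metadata that FabricPool keeps on the performance tier.
    snapmirror create -source-path svm1:vol1 -destination-path svm_dr:vol1_dst -type XDP
    snapmirror initialize -destination-path svm_dr:vol1_dst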

Reclaim Capacity on Primary Storage

Snapshot-Only Tiering Policy

Snapshot copies often consume more than 10% of a typical storage environment. Although these point-in-time copies are vital for data protection and disaster recovery, they are rarely read and make poor use of high-performance SSDs. Snapshot-Only is the default volume tiering policy for FabricPool, and it offers a simple way to reclaim storage space on SSDs. When the policy is applied, cold Snapshot blocks that are not shared with the active file system are moved to the cloud tier. If they are later read, cold data blocks on the cloud tier become hot and are written back to the performance tier.
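
A minimal sketch of applying this policy follows; the SVM and volume names are hypothetical. ONTAP 9.4 also makes the cooling period adjustable, that is, how many days blocks must remain cold before they become eligible for tiering.

    # Apply the Snapshot-Only tiering policy to a volume on a FabricPool aggregate.
    volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only

    # ONTAP 9.4 lets you tune the cooling period (2-63 days; the default for
    # snapshot-only is 2 days). This parameter may require advanced privilege.
    volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 14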

Shrink Secondary Storage Footprint

Secondary data, i.e., data protection volumes that are NetApp SnapMirror® (disaster recovery) or NetApp SnapVault® (backup) destination targets, is usually stored on secondary clusters. These secondary clusters hold a 1:1 ratio or more of the primary data they are backing up (one baseline copy plus multiple Snapshot copies). For large datasets, this approach quickly becomes expensive, pushing users into hard decisions about which data actually requires protection. Like Snapshot copies, backups are rarely read, make inefficient use of high-performance SSDs, and are quite costly for large datasets even when stored on HDDs. FabricPool's Backup volume tiering policy changes this paradigm.

Rather than maintaining a 1:1 ratio of primary data to backup, the FabricPool Backup policy lets users greatly limit the number of disk shelves on their secondary clusters by tiering most of the backup data to low-cost object stores. The WAFL metadata stays on the performance tier of the secondary cluster. Unlike the Snapshot-Only policy, cold data blocks read from volumes using the Backup policy are not written back to the performance tier, which reduces the need for high-capacity performance tiers on secondary storage. Although the backup use case presents the secondary as a traditional cluster running ONTAP, the secondary can also live in the cloud using Cloud Volumes ONTAP or in a software-defined environment using ONTAP Select. Wherever the ONTAP deployment takes place, data can be tiered through FabricPool.
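
The sketch below continues the earlier hypothetical SnapMirror example: on the secondary cluster, the Backup policy tiers the destination volume's cold blocks to the attached cloud tier. All names remain placeholders.

    # On the secondary cluster, tier cold backup data from the SnapMirror
    # destination volume to the cloud tier.
    volume modify -vserver svm_dr -volume vol1_dst -tiering-policy backup

    # Verify that the policy took effect.
    volume show -vserver svm_dr -volume vol1_dst -fields tiering-policy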

Shift Entire Volumes to the Cloud (Backup Plus Volume Move)

Besides Snapshot copies and backups, another common use of FabricPool is shifting entire volumes to the cloud. The best candidates for low-cost object storage are completed projects, historical records, legacy reports, or any dataset that must be retained but is unlikely to be read frequently. Shifting an entire volume is done by setting the Backup volume tiering policy on the volume when starting a volume move; this is the only time a Backup tiering policy can be set on a volume that is not a data protection destination target. During the volume move, all data except the WAFL metadata is moved directly to the cloud tier associated with the destination aggregate. After the volume move completes, the tiering policy on the volume automatically changes from Backup back to Auto. If cold data blocks on the cloud tier are then read, they become hot and are written back to the performance tier.
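
Here is a minimal sketch of this pattern in ONTAP 9.4, which added the ability to set a tiering policy directly when starting a volume move; the SVM, volume, and aggregate names are hypothetical.

    # Move a retained-but-inactive volume to a FabricPool aggregate, tiering its
    # data (except WAFL metadata) to the cloud tier during the move.
    volume move start -vserver svm1 -volume project_vol -destination-aggregate fabricpool_aggr1 -tiering-policy backup

Once the move completes, the policy reverts to Auto as described above, so any blocks that later become hot are served from the performance tier again.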
