Storage tiering places data on different classes of storage media, for example SSD drives, magnetic disk drives, and tape storage. The most important or most frequently accessed data is stored on the fastest, most expensive media (SSD), and the least important on the slowest, cheapest media (tape). A minimal storage tiering system has two tiers: one for frequently accessed data and one for archive. The more tiers that are available, the more choice administrators have over the placement of specific data classes, and the more efficiently storage resources can be utilized.
Storage tiering is not just about offering different storage technologies. A key aspect of a storage tiering architecture is how to classify data into levels of importance and assign it to the appropriate storage tiers. Over time, data classification can change—for example, as data ages, it may need to be moved into lower tiers or archive storage.
Data classification must be ongoing and must be smart enough to enable rapid classification of large volumes of data.
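As an illustration of such a policy (the tier names, thresholds, and metadata fields below are hypothetical, not part of the SDK), a simple classifier might combine file age with recent access frequency:

using System;

public enum StorageTier { Ssd, Disk, Tape }

public static class TierClassifier
{
    // Illustrative thresholds; real policies are driven by business and compliance rules.
    static readonly TimeSpan HotAge = TimeSpan.FromDays(30);
    static readonly TimeSpan WarmAge = TimeSpan.FromDays(365);

    public static StorageTier Classify(DateTime lastAccessTimeUtc, int accessesLast30Days)
    {
        TimeSpan age = DateTime.UtcNow - lastAccessTimeUtc;

        if (age < HotAge || accessesLast30Days > 100)
            return StorageTier.Ssd;   // hot data: fastest, most expensive tier
        if (age < WarmAge)
            return StorageTier.Disk;  // warm data: mid tier
        return StorageTier.Tape;      // cold data: slowest, cheapest tier
    }
}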
Automated storage tiering enables you to optimize your storage by adapting to your needs dynamically. It continuously monitors data use and access patterns to determine the priority of data and the tier it requires. To use automated tiering, you configure your desired thresholds and leave the rest to automation.
Once data hits a predefined usage threshold, it is moved accordingly. If the frequency of access has increased, the data is moved up to a lower-latency tier. If the data is not being used, it is moved down to a lower-cost, higher-latency tier. In this way, cost and performance are optimized with minimal effort and no ongoing maintenance. A minimal sketch of that rebalancing loop is shown below.
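The sketch reuses the illustrative StorageTier enum above; the item type, thresholds, and move delegate are likewise hypothetical, not SDK types:

using System;
using System.Collections.Generic;

public sealed class TieredItem
{
    public string Path { get; set; }
    public StorageTier Tier { get; set; }
    public int AccessesSinceLastScan { get; set; }
}

public static class AutoTiering
{
    const int PromoteThreshold = 50; // accesses per scan window that justify a faster tier
    const int DemoteThreshold = 1;   // at or below this, the item is a candidate for a cheaper tier

    // Called periodically; 'move' performs the actual data migration between tiers.
    public static void Rebalance(IEnumerable<TieredItem> items, Action<TieredItem, StorageTier> move)
    {
        foreach (TieredItem item in items)
        {
            StorageTier target = item.Tier;

            if (item.AccessesSinceLastScan >= PromoteThreshold && item.Tier != StorageTier.Ssd)
                target = item.Tier == StorageTier.Tape ? StorageTier.Disk : StorageTier.Ssd; // promote one level
            else if (item.AccessesSinceLastScan <= DemoteThreshold && item.Tier != StorageTier.Tape)
                target = item.Tier == StorageTier.Ssd ? StorageTier.Disk : StorageTier.Tape; // demote one level

            if (target != item.Tier)
            {
                move(item, target);  // migrate the data
                item.Tier = target;
            }

            item.AccessesSinceLastScan = 0; // reset the counter for the next scan window
        }
    }
}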
Hierarchical storage management (HSM) is a data storage technique that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as solid state drive arrays, are more expensive (per byte stored) than slower devices, such as hard disk drives, optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage devices. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the fast devices.
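The recall behavior at the heart of HSM, treating the fast tier as a cache and copying data back from slow storage on demand, can be sketched as follows; the two directory roots are placeholders, not SDK configuration:

using System.IO;

public static class HsmCache
{
    // Placeholder locations for the fast (cache) tier and the slow (mass storage) tier.
    const string FastTierRoot = @"D:\FastTier";
    const string SlowTierRoot = @"\\archive\SlowTier";

    // Returns a path on the fast tier, recalling the file from slow storage if it is not cached yet.
    public static string EnsureOnFastTier(string relativePath)
    {
        string fastPath = Path.Combine(FastTierRoot, relativePath);
        if (!File.Exists(fastPath))
        {
            string slowPath = Path.Combine(SlowTierRoot, relativePath);
            Directory.CreateDirectory(Path.GetDirectoryName(fastPath));
            File.Copy(slowPath, fastPath); // recall: copy from slow storage into the fast cache
        }
        return fastPath;
    }
}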
The example below handles read requests for the generated stub files. To handle a read request for a stub file, we register a callback function with the file system filter driver. When the stub file is accessed, the callback function is invoked; it retrieves the data from the remote server and sends it back to the filter driver.
public static Boolean ProcessRequest(MessageSendData messageSend, ref MessageReplyData messageReply)
{
    Boolean ret = false;

    try
    {
        // Here the data buffer is the reparse point tag data. In this test we assume
        // the reparse point tag data is the cache file name of the stub file.
        string cacheFileName = Encoding.Unicode.GetString(messageSend.DataBuffer);
        // DataBufferLength is in bytes; UTF-16 characters are two bytes each.
        cacheFileName = cacheFileName.Substring(0, (int)messageSend.DataBufferLength / 2);

        if (messageSend.MessageType == (uint)MessageType.MESSAGE_TYPE_RESTORE_FILE_TO_CACHE)
        {
            // For the first write request, the filter driver needs to restore the whole file first.
            // Here we download the whole cache file and return the cache file name to the filter driver;
            // the filter driver will replace the stub file data with the cache file data.
            // For a memory-mapped open (for example, opening the file with Notepad on the local computer),
            // we also download the whole cache file and return the cache file name to the filter driver;
            // the filter driver will read the cache file data, but it won't restore the stub file.
            ret = DownloadCacheFile(messageSend, cacheFileName, ref messageReply);
        }
        else if (messageSend.MessageType == (uint)MessageType.MESSAGE_TYPE_RESTORE_BLOCK_OR_FILE)
        {
            // For this request, the user is trying to read a block of data. You can either return the
            // whole cache file or restore just the requested block; you can also rehydrate the file at
            // this point. If the whole cache file was restored, it is better to return the cache file
            // name instead of the block data.
            if (GlobalConfig.RehydrateFileOnFirstRead || GlobalConfig.ReturnCacheFileName)
            {
                ret = DownloadCacheFile(messageSend, cacheFileName, ref messageReply);
            }
            else
            {
                ret = GetRequestedBlockData(cacheFileName, messageSend.Offset, messageSend.Length, ref messageReply);
            }
        }
        else
        {
            messageReply.ReturnStatus = (uint)NTSTATUS.STATUS_UNSUCCESSFUL;
        }

        messageReply.MessageId = messageSend.MessageId;
        messageReply.MessageType = messageSend.MessageType;

        // Log the outcome with a severity that matches the reply status.
        EventLevel eventLevel = EventLevel.Information;
        if (messageReply.ReturnStatus != (uint)NTSTATUS.STATUS_SUCCESS)
        {
            eventLevel = EventLevel.Error;
        }
        EventManager.WriteMessage(180, "ProcessRequest", eventLevel,
            "Processed request " + messageSend.MessageId + ", returnStatus:" + messageReply.ReturnStatus);

        ret = true;
    }
    catch (Exception ex)
    {
        EventManager.WriteMessage(181, "ProcessRequest", EventLevel.Error, "Process request exception:" + ex.Message);
        return false;
    }

    return ret;
}
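The helper methods DownloadCacheFile and GetRequestedBlockData are not shown here. As a rough sketch only, GetRequestedBlockData could read the requested range from the cache file and copy it into the reply. The DataBuffer and DataBufferLength fields on MessageReplyData below, and the long offset and length parameters, are assumptions for illustration; the actual reply structure is defined by the SDK. The method is assumed to live in the same class as ProcessRequest and requires System.IO.

static Boolean GetRequestedBlockData(string cacheFileName, long offset, long length, ref MessageReplyData messageReply)
{
    try
    {
        using (FileStream fs = new FileStream(cacheFileName, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            byte[] buffer = new byte[length];
            fs.Seek(offset, SeekOrigin.Begin);
            int bytesRead = fs.Read(buffer, 0, (int)length);

            // Assumed reply fields; the real MessageReplyData layout comes from the SDK.
            messageReply.DataBuffer = buffer;
            messageReply.DataBufferLength = (uint)bytesRead;
            messageReply.ReturnStatus = (uint)NTSTATUS.STATUS_SUCCESS;
        }
        return true;
    }
    catch (Exception ex)
    {
        messageReply.ReturnStatus = (uint)NTSTATUS.STATUS_UNSUCCESSFUL;
        EventManager.WriteMessage(182, "GetRequestedBlockData", EventLevel.Error,
            "Read block data exception:" + ex.Message);
        return false;
    }
}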