How to boost storage performance by up to 900 per cent


By Ted Oade, Senior Director, Worldwide Product Marketing, Overland Storage

By combining auto-cache technology with enterprise flash drives (SSDs) in a high density modular storage array platform, administrators can decrease latency, increase I/O performance and achieve extremely fast SSD response speeds. 

The SSDs boost I/O performance significantly for the most frequently accessed data. In monitored trials, the combination improved application performance by almost 900 per cent.

Auto-cache is an effective solution for improving application performance while reducing overall storage costs. Level 2 (or L2) cache is part of a multi-level storage strategy for improving computer performance. The prevailing model uses up to three levels of cache, termed L1, L2 and L3, each bridging the gap between the very fast central processing unit (CPU) and the much slower random access memory (RAM).
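As a rough illustration of the general idea (not any vendor's implementation), the sketch below models a multi-level lookup: small, fast caches are checked before the larger, slower backing store, and data found lower down is promoted upwards. The names and capacities (l1, l2, main_memory) are purely illustrative assumptions.

```python
# Minimal sketch of a multi-level lookup: check the small, fast caches
# before falling back to the slower backing store. Capacities and names
# are assumptions made for illustration only.

class MultiLevelCache:
    def __init__(self, l1_size=4, l2_size=16):
        self.l1 = {}              # fastest, smallest level
        self.l2 = {}              # larger, slightly slower level
        self.l1_size = l1_size
        self.l2_size = l2_size
        self.main_memory = {}     # slowest tier (stands in for RAM/disk)

    def read(self, key):
        if key in self.l1:                        # L1 hit: fastest path
            return self.l1[key]
        if key in self.l2:                        # L2 hit: promote to L1
            value = self.l2.pop(key)
        else:                                     # miss: fetch from backing store
            value = self.main_memory.get(key)
        if value is not None:
            self._promote(key, value)
        return value

    def _promote(self, key, value):
        if len(self.l1) >= self.l1_size:          # demote an L1 entry when full
            evicted_key, evicted_val = self.l1.popitem()
            if len(self.l2) >= self.l2_size:
                self.l2.popitem()
            self.l2[evicted_key] = evicted_val
        self.l1[key] = value
```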

Information system performance depends on many elements, including access time or retrieval speed from hard disk drive (HDD) storage. Response times slow down if random access workload is overly concentrated on a single disk resource, and auto-cache technology is designed to solve this problem. It does so by using SSD in a unique way.

Substantial performance improvements are achievable by using tiered storage technology, in which logical disks are moved automatically to the optimal storage device in an array system. Tiered storage depends on data access frequency patterns, and data allocation optimisation technology can provide this functionality.
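A minimal sketch of the frequency-based placement decision follows, assuming a simple access count and threshold; the threshold value and tier names are illustrative, not a description of any particular array's algorithm.

```python
# Minimal sketch of frequency-based tiering: logical disks whose access
# counts exceed a threshold are placed on the SSD tier, the rest on HDD.
# The threshold and identifiers are illustrative assumptions.

from collections import Counter

def assign_tiers(access_log, hot_threshold=100):
    """Map each logical disk to 'ssd' or 'hdd' based on how often it
    appears in the access log (a list of logical-disk identifiers)."""
    counts = Counter(access_log)
    return {
        disk: ("ssd" if hits >= hot_threshold else "hdd")
        for disk, hits in counts.items()
    }

# Example: "lun3" is accessed far more often than "lun7",
# so it is placed on the faster tier.
log = ["lun3"] * 250 + ["lun7"] * 12
print(assign_tiers(log))   # {'lun3': 'ssd', 'lun7': 'hdd'}
```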

Alternatively, SSDs may be used as a cache. Auto-caching allows a controller to use SSD as a cache in front of traditional disk storage. The controller identifies frequently accessed data, sometimes called ‘hot data’, and automatically moves it to solid-state media.

The improved performance is the result of placing hot data on to the highest I/O-capable media, which increases I/O performance and decreases I/O latency. As I/O patterns change over time, controllers automatically observe the data that is most frequently accessed and move it on to the highest I/O-capable media, without any IT intervention.
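To make the hot-data mechanism concrete, here is a minimal sketch of one way a controller might track recent block accesses and promote the hottest blocks into an SSD-backed cache. The window size, promotion threshold and cache capacity are assumed values, and this is not the vendor's actual algorithm.

```python
# Minimal sketch of hot-block detection: count recent accesses per block
# and promote frequently touched blocks into an SSD-backed cache.
# Window size, threshold and capacity are illustrative assumptions.

from collections import Counter, deque

class HotDataTracker:
    def __init__(self, window=10_000, threshold=50, cache_capacity=1_000):
        self.window = deque(maxlen=window)   # recent block accesses
        self.counts = Counter()
        self.threshold = threshold
        self.cache_capacity = cache_capacity
        self.ssd_cache = set()               # block IDs currently cached on SSD

    def record_access(self, block_id):
        if len(self.window) == self.window.maxlen:
            expired = self.window[0]
            self.counts[expired] -= 1        # age out the oldest access
        self.window.append(block_id)
        self.counts[block_id] += 1
        self._maybe_promote(block_id)

    def _maybe_promote(self, block_id):
        if (self.counts[block_id] >= self.threshold
                and block_id not in self.ssd_cache
                and len(self.ssd_cache) < self.cache_capacity):
            self.ssd_cache.add(block_id)     # copy the block to SSD media here
```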

Caching differs from tiered storage in that it doesn’t use solid-state memory as a permanent location for data storage. Rather, it dynamically redirects read and write requests from disk to cache on demand to accelerate I/O performance. This is especially valuable for random I/O, a resource-intensive operation for traditional drive architectures. Auto-cache technology can benefit any application whose data is considered ‘hot’. Significantly, the potential performance improvement grows as more data is placed into cache.
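The distinction can be sketched in code: a cache holds a temporary copy and flushes modified blocks back to disk, whereas tiering (as above) relocates data permanently. The write-back behaviour shown here is one common design, assumed for illustration.

```python
# Minimal sketch contrasting caching with tiering: the cache holds a
# temporary copy, and writes are flushed back to disk later rather than
# the data being moved permanently. All names are illustrative.

class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk          # dict standing in for HDD storage
        self.cache = {}           # dict standing in for the SSD cache
        self.dirty = set()        # blocks modified in cache but not on disk

    def read(self, block_id):
        if block_id in self.cache:            # cache hit: served from SSD
            return self.cache[block_id]
        data = self.disk[block_id]            # miss: read from HDD ...
        self.cache[block_id] = data           # ... and keep a copy in cache
        return data

    def write(self, block_id, data):
        self.cache[block_id] = data           # absorb the write on SSD
        self.dirty.add(block_id)              # disk copy is now stale

    def flush(self):
        for block_id in self.dirty:           # write dirty blocks back to HDD
            self.disk[block_id] = self.cache[block_id]
        self.dirty.clear()
```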

The superior I/O performance of SSDs makes them suitable for caching operations. Auto-cache improves responsiveness by utilising the performance-boosting capability of SSD. These devices also avoid performance degradation based on the type of data access (sequential versus random) because they perform equally well in all access contexts. Any enterprise experiencing performance issues using current non-cached, non-SSD array architectures could benefit by deploying auto-cache technology.

Simply inserting the SSD devices into the drive bays makes them available for auto-caching, which can use SSD devices in three ways: as read cache, as write cache, and as persistent write cache.
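A hypothetical configuration sketch is shown below. The mode names simply mirror the three uses described above; they are not the product's actual management interface or API.

```python
# Hypothetical configuration sketch only; mode names mirror the three
# cache uses described in the article, not the vendor's actual API.

from enum import Enum

class CacheMode(Enum):
    READ = "read"                    # SSDs serve cached reads only
    WRITE = "write"                  # SSDs absorb writes as well
    PERSISTENT_WRITE = "persistent"  # cached writes survive power loss

def configure_ssd_cache(array, mode: CacheMode):
    """Attach the SSDs in the array's drive bays as cache in the given mode
    (illustrative placeholder; real arrays expose this through their own
    management tools)."""
    array["cache_mode"] = mode.value
    return array

pool = configure_ssd_cache({"name": "array01"}, CacheMode.READ)
```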

Reduce operational costs

With traditional disk drive architectures it may be necessary to distribute heavy loads among multiple arrays to handle the workloads adequately. However, using a large number of arrays and disks for load distribution results in high power consumption. By contrast, auto-cache technology improves response time by adding a few SSD drives instead of installing many traditional drives or even a new array. This also reduces overall energy consumption, given the low per-unit power draw of SSDs.
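A back-of-envelope comparison illustrates the scale of the saving. The wattage figures below are assumed ballpark values (roughly 9 W per HDD and 4 W per SSD), not measurements; substitute real datasheet numbers for an actual estimate.

```python
# Back-of-envelope energy comparison under assumed per-drive wattages.

HDD_WATTS = 9   # assumed typical active power per HDD
SSD_WATTS = 4   # assumed typical active power per SSD

def annual_kwh(drive_count, watts_per_drive):
    return drive_count * watts_per_drive * 24 * 365 / 1000

# Spreading load over an extra shelf of spindles vs. adding a few SSDs as cache:
many_hdds = annual_kwh(48, HDD_WATTS)   # e.g. an additional shelf of disks
few_ssds = annual_kwh(4, SSD_WATTS)     # e.g. four SSD cache devices

print(f"Extra HDD shelf: {many_hdds:,.0f} kWh/year")
print(f"SSD cache:       {few_ssds:,.0f} kWh/year")
```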

Multiple applications can benefit from auto-cache simultaneously. The technology automatically detects data hotspots anywhere in the array with no application-specific tuning required.

Since frequently accessed data is served from the SSDs controlled by auto-cache, overall load and demand on traditional disk storage is lessened, extending power and operational cost savings even further.

Auto-cache performance

Tests were run to measure auto-cache performance in a variety of real-world operational scenarios. The three scenarios were:

1. Auto-cache enabled in Read Only mode.

2. Auto-cache enabled in Read Write mode.

3. Auto-cache disabled: tests were performed using L1 cache only.
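To illustrate the kind of mixed random-I/O workload referenced in these scenarios (for example a 60/40 read/write ratio at a 4K block size), here is a minimal Python sketch of a workload generator. It is not the test harness behind the published results; the file name, operation count and ratio defaults are assumptions.

```python
# Minimal sketch of a mixed random-I/O workload (e.g. 60/40 read/write
# at a 4 KB block size). Illustrative only; not the actual test harness.

import os
import random

def run_workload(path, file_size, block_size=4096, ops=10_000, read_ratio=0.6):
    reads = writes = 0
    payload = os.urandom(block_size)
    with open(path, "r+b") as f:
        for _ in range(ops):
            offset = random.randrange(0, file_size - block_size)
            f.seek(offset)
            if random.random() < read_ratio:   # issue a random read
                f.read(block_size)
                reads += 1
            else:                              # issue a random write
                f.write(payload)
                writes += 1
    return reads, writes

# Usage (assumes 'testfile.bin' already exists and is at least 1 GiB):
# reads, writes = run_workload("testfile.bin", file_size=1 << 30)
```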

The following results show where auto-cache brings the greatest overall improvement. Highlights from the tests include:

- Performance increased by up to nearly 900 per cent, depending on conditions, with Read/Write auto-cache enabled, specifically at 60/40 and 80/20 read/write ratios.

- Performance increased further as the overall access area was reduced, since a greater proportion of the data could be held in cache.

- In Read Only cache mode, the 60/40 and 80/20 read/write ratios also showed improved performance.

Auto-caching demonstrates the strongest performance gains under a 60/40 read/write ratio with 4K and 8K block sizes (a common real-world scenario).
