Friday, 28 April 2017 08:51

Software-defined storage on the march, says SUSE expert


Initial deployments of software-defined storage are primarily focused on the storage of unstructured data, according to Larry Morris, senior product manager at SUSE Linux.

This was because most customer data is unstructured, and unstructured data is growing much more rapidly than structured data, he told iTWire in a detailed interview.

Germany-based SUSE announced its software-defined storage product back in 2014. Morris, who in an earlier life designed proprietary storage systems at HP, said use cases where SUSE Enterprise Storage had been deployed included disk-based back-up, with the software-defined storage solution used to store the back-up data.

"Other use cases are storage for large files such as medical images or video surveillance data; archival data that has not been accessed in a while and is migrated from primary storage systems; and on-premise cloud storage using a SUSE Enterprise Storage Amazon S3-compliant interface," he said.

SUSE Enterprise Storage is built on the open source Ceph project and deploys object storage across a single, distributed cluster built from industry-standard hardware.

"It enables the creation of different pools of storage within the single distributed storage cluster," Morris, a storage nerd to the core, explained. "One pool within the storage cluster could be composed of nodes containing very fast NVMe drives and provide file access. Another pool could be composed of nodes containing high capacity spinning drives deploying erasure coding for redundancy and provide an Amazon S3-compliant interface.

"A third pool within the storage cluster could have both a pool composed of nodes containing very fast NVMe drives and a pool of nodes containing high-capacity spinning drives and use SUSE Enterprise Storage cache tiering to cache the data in the high performance pool and provide an iSCSI block interface."
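The trade-off between the replicated pools and the erasure-coded pool Morris describes comes down to capacity overhead, which a little arithmetic makes concrete. The sketch below is illustrative only; the 4+2 erasure-coding profile is a hypothetical example, not a stated SUSE default.

```python
# Simple arithmetic comparing usable capacity under 3x replication
# versus a hypothetical k=4 data / m=2 coding erasure profile.

def usable_fraction_replication(copies):
    """Fraction of raw capacity left after storing `copies` full replicas."""
    return 1.0 / copies

def usable_fraction_erasure(k, m):
    """Fraction of raw capacity with k data chunks plus m coding chunks."""
    return k / (k + m)

print(f"3x replication: {usable_fraction_replication(3):.0%} usable")  # 33%
print(f"EC 4+2:         {usable_fraction_erasure(4, 2):.0%} usable")   # 67%
```

The same 4+2 profile still survives the loss of any two chunks, which is why erasure coding is attractive for the high-capacity spinning-disk pool, where capacity matters more than latency.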

While the product provided true unified data access with block, file or object interfaces, Morris said all data was stored within the storage cluster as an object.

"These objects are replicated for redundancy and striped across multiple drives within the pool in the storage cluster. An algorithm called CRUSH (Controlled Replication Under Scalable Hashing) ensures that object data is evenly distributed across the drives within the pool of the storage cluster.

"The fact that data placement is computed using the CRUSH algorithm and not random as with other virtualised storage systems is key to enabling SUSE Enterprise Storage to be both self-managing and self-healing. The drives within the pools of the storage cluster communicate with each other (peering)."
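The key property Morris highlights is that placement is computed, not looked up. The miniature sketch below uses simplified rendezvous hashing as a stand-in for CRUSH (it is far cruder than the real algorithm, and the drive names are invented), but it shows how any node can independently recompute an object's location from the same inputs.

```python
import hashlib

def place(obj_id, drives, replicas=3):
    """Deterministically choose `replicas` drives for an object by ranking
    a hash of each (object, drive) pair -- the highest scores win. Every
    node running the same function computes the same answer, so no
    central lookup table is needed."""
    def score(drive):
        return int(hashlib.sha256(f"{obj_id}:{drive}".encode()).hexdigest(), 16)
    return sorted(drives, key=score, reverse=True)[:replicas]

drives = [f"osd.{i}" for i in range(8)]
print(place("object-42", drives))  # the same three drives on every call
```

Because the result depends only on the object name and the set of drives, clients, peers and recovering drives all agree on where data lives without consulting a central metadata server.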

He said when a drive failed, for example, other drives in that pool of the storage cluster immediately became aware. "Using the CRUSH algorithm, the remaining drives in the pool determine the data that resided on that drive. Replicated copies of the failed drive’s data are distributed across the other drives in that pool of the storage cluster.

"The drives that contain the replicated data use the CRUSH algorithm to determine where the new copy of data should be placed and copy the data to that drive. Because data is distributed across the drives in the pool and written to drives across the pool, many drives are involved in the reconstruction of data redundancy, enabling this to be done very rapidly. This is all done automatically and in the background by these intelligent drives."

Due to this, Morris explained, there was no single point of failure within the SUSE Enterprise Storage cluster. "As a result, the system is highly scalable. New compute nodes can be added online to the storage cluster to expand capacity, performance or both. As an example, when a new node is added to an existing pool within the SUSE Enterprise Storage cluster, the other drives within the pool become aware of the additional drives and, using the CRUSH algorithm, automatically rebalance the object data across the pool.
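The same toy model (simplified rendezvous hashing as a stand-in for CRUSH, with invented drive names) also illustrates the rebalancing Morris describes when a node is added: only the objects whose placement now includes the new drive migrate, rather than the whole data set being reshuffled.

```python
import hashlib

def place(obj_id, drives, replicas=3):
    """CRUSH-like deterministic placement (simplified rendezvous hashing)."""
    def score(drive):
        return int(hashlib.sha256(f"{obj_id}:{drive}".encode()).hexdigest(), 16)
    return sorted(drives, key=score, reverse=True)[:replicas]

old = [f"osd.{i}" for i in range(8)]
new = old + ["osd.8"]  # a new node's drive comes online
objects = [f"obj-{n}" for n in range(600)]

before = {o: place(o, old) for o in objects}
after = {o: place(o, new) for o in objects}

# Every rebalanced object gains a replica on the new drive; the rest stay
# put, so only a fraction of the data migrates, roughly replicas/len(new).
moved = [o for o in objects if set(before[o]) != set(after[o])]
print(f"{len(moved)}/{len(objects)} objects rebalanced")
```

This bounded movement is why capacity can be added online: most of the cluster keeps serving data undisturbed while a minority of objects flow onto the new drives in the background.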

"And as a true unified block, file and object storage system, a single SUSE Enterprise Storage cluster can be used to address many different use cases."

Asked how SUSE's product differed from what Red Hat, the biggest open source company, is doing with GlusterFS, Morris replied that both products were open source software-defined storage products. There the similarity ended.

"Red Hat positions GlusterFS as a software-defined storage solution optimised for file applications. SUSE Enterprise Storage, on the other hand, is a software-defined storage solution for block, file and object storage applications, provides much more flexibility and can be used to address a much broader set of use cases (block, file and object) than can be addressed with Red Hat GlusterFS (file)."

Morris was asked about proprietary products, like that from DataCore. He responded by pointing out that the storage industry was in the midst of a transition from traditional proprietary storage products to software-defined storage products.

"One of the primary drivers for this transition is the desire customers have to free themselves from being locked into the proprietary hardware required as part of traditional proprietary storage products," he said. "Both SUSE Enterprise Storage and products like DataCore SANsymphony are software-defined storage products that run on industry standard hardware. But unlike proprietary software-defined storage technologies like those available from companies such as DataCore, SUSE Enterprise Storage is open source and built on open source Ceph technology.

"Ceph is supported by a community and has an active advisory board with board membership including Canonical, Cisco, Fujitsu, Intel, Red Hat, SanDisk and SUSE. More than 900 developers have made contributions to the Ceph product. Unique downloads of Ceph are now approaching 38 million."

Morris added that customers were moving to software-defined storage because they did not want to be locked into proprietary hardware technology.

"This transition is not going to stop with the transition to industry standard hardware, as customers do not want to be locked into proprietary software-defined storage technology either," he said. "In addition, when a business makes a decision to deploy a proprietary software-defined storage solution, there is always the risk that the company may not be in business in a few years. Customers who choose to deploy Ceph software-defined storage have the assurance of knowing there is a vibrant community behind the product."

Asked whether SUSE's technology would remain fully open and if there was not some advantage to keeping a few aspects closed like Red Hat and other open source vendors often do, Morris was categorical that SUSE would be 100% open source. "Everything we develop is pushed upstream to the open source community. We believe it is the right thing to do as an open source company and the right thing for our customers," he said.

SUSE is preparing to release the fifth version of SUSE Enterprise Storage, with a beta to be available during the northern summer of 2017. Version 4 of SUSE Enterprise Storage has been shipping since December 2016.






Sam Varghese


Sam Varghese has been writing for iTWire since 2006, a year after the site came into existence. For nearly a decade thereafter, he wrote mostly about free and open source software, based on his own use of this genre of software. Since May 2016, he has been writing across many areas of technology. He has been a journalist for nearly 40 years in India (Indian Express and Deccan Herald), the UAE (Khaleej Times) and Australia (Daily Commercial News (now defunct) and The Age). His personal blog is titled Irregular Expression.


