It is not that Brocade makes storage - that is the role of partners including EMC, HPE, Hitachi Data Systems and Intel - but it underpins the network ‘fabric’ by supplying high-speed switches, routers, Fibre Channel and IP storage networking, and a lot more.
Casamento – whose real title is Global Systems Architect – was visiting Australia from Brocade’s corporate headquarters in San Jose, California, but it was clear that airport hotels are more his domain. He likes to spend at least 30% of his time with customers, 30% thinking about how to improve their lot, and the rest doing.
His keynote speech at Brocade’s Storage Networking Roadshow was titled ‘Transformation to the All-Flash Data Centre’. I don’t pretend to understand it all – several levels above my pay grade – but I will do my best to paraphrase it in layman’s terms.
Virtualisation of operating systems, desktops and apps will continue to grow. Computing will continue to grow, ergo storage will continue to grow. The problem is that physical storage has not kept pace with the speed or capacity of, say, silicon – look at Intel now versus where it was 20 years ago. Only recently, when SSDs (solid state drives) arrived, did storage begin to match compute speeds.
The problem with storage is the ‘write once, never use again’ issue. Storage people are paranoid about deleting data - they just may need that bit of data again one day. Most storage companies call that stale data, and iTWire recently wrote about the Veritas-backed Data Genomics Index – who stores what, when, where and why. The lack, or inefficient use, of fast, reliable storage is the greatest enterprise computing issue at present – a major bottleneck.
The cloud has been touted as the solution, but the cloud simply means sharing other people’s computing resources, hopefully at a lower overall cost. However euphemistic the term, there is still a need for massive infrastructure to underpin it. And the more dependent we become on the cloud, the more critical it becomes that it is fast, reliable and doesn’t suffer outages – it must never fail.
Flash memory drives – SSDs – have been around for a while. Proponents cited speed; opponents cited higher cost – bucks per byte – and reliability/lifespan issues. Those issues have been solved, and flash is now faster, cheaper, more reliable and more manageable than spinning disks. It provides orders of magnitude more speed and reliability – there is no mechanical disk to fail.
As Casamento explained: “We have seen the increases in CPU performance, memory scale and network bandwidth. What flash storage does is bring storage I/O performance up to match those increases in CPU, memory and network rates – similar to widening a highway. As you widen one section, you then have to widen the next to prevent bottlenecking, and that’s pretty much what you see with the adoption of flash-based storage.”
But that places a strain on the network – or rather the storage network – currently a 16Gbps Gen 5 Fibre Channel platform and soon a 32Gbps Gen 6 platform.
Casamento added: “In my experience, SSD will benefit application performance. It reduces latency – the response time for primary storage between the server and the application. You must also be cognisant of the relationship between bandwidth and I/Os per second (IOPS). Back to the highway analogy: bandwidth is how many lanes you have; IOPS is how many cars pass, at what speed and spacing. That reflects back to application performance, which allows you to scale – both in terms of I/O for applications that require massive I/O, and in terms of the number of virtualised machines you want to stack up.”
100% of the Top 20 VMmark benchmarks use Fibre Channel, and 85% of VM workloads run on it. But importantly, SSDs can achieve 1 million IOPS whereas the fastest hard disk manages around 6,000, and SSDs do not suffer the latency and random read/write penalties of spinning disks.
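To put those IOPS figures in context, here is a back-of-the-envelope sketch of the bandwidth each device type would push onto the storage network. The 4KB block size is an assumption for illustration, not a figure from the talk; only the 1 million and 6,000 IOPS numbers come from the presentation.

```python
# Rough illustration: how IOPS and block size combine into the
# sustained bandwidth a storage network must carry.

def throughput_mb_per_s(iops: int, block_size_kb: int) -> float:
    """Throughput implied by an IOPS rate at a given block size (MB/s)."""
    return iops * block_size_kb / 1024  # KB/s -> MB/s

# Figures quoted above, with an assumed 4KB block size.
hdd = throughput_mb_per_s(6_000, 4)        # fastest spinning disk, ~23 MB/s
ssd = throughput_mb_per_s(1_000_000, 4)    # SSD array, ~3,900 MB/s

print(f"HDD: {hdd:.0f} MB/s, SSD: {ssd:.0f} MB/s")
print(f"Speed-up: {ssd / hdd:.0f}x")
```

The point of the sketch is simply that a flash array saturates links that a spinning disk never could, which is why the Gen 5 to Gen 6 Fibre Channel upgrade matters.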
Further supporting SSD: 52% of unplanned outages could have been prevented by better equipment. Consider that a 12-hour outage of iTunes would cost Apple US$25 million.
Casamento strongly supported the need for speed – software will always consume all the available resources. More than that, 360° photos, VR, 2/4/8K video, and the move to streaming instead of local storage will consume storage and demand speed. If you want 4K movies at home – 20-30 layers of image at 30 frames per second – delivered over an IP network, storage networks need to get bigger and faster.
He spoke about the future – NVMe (Non-Volatile Memory Express), a protocol for non-volatile storage attached via the PCI Express (PCIe) bus. While SSDs were designed to replace spinning hard disks, NVMe storage is designed to be part of the server bus, significantly reducing latency.
A panel session with representatives from EMC, HPE, Hitachi Data Systems and IBM followed Casamento’s speech. I don’t intend to report on that in detail but the theme was all about how flash would fuel the transformation – the journey to the third platform called the cloud.
Key takeaways included that for every $1 spent on traditional storage, a further $3 goes on management labour overhead; with flash, that drops to around 50 cents. ROI was measured in months, not years, due to the lower operational complexity and increased reliability.
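As a rough illustration of that overhead claim – the $100,000 annual spend below is hypothetical; only the $3-per-$1 and 50-cents-per-$1 ratios come from the panel:

```python
# Sketch of the panel's management-overhead claim.
# The hardware spend is a made-up figure for illustration.

storage_spend = 100_000  # hypothetical annual storage hardware spend ($)

disk_mgmt = storage_spend * 3.00    # $3 of labour per $1 of disk storage
flash_mgmt = storage_spend * 0.50   # 50 cents of labour per $1 of flash

annual_saving = disk_mgmt - flash_mgmt
print(f"Disk management:  ${disk_mgmt:,.0f}")
print(f"Flash management: ${flash_mgmt:,.0f}")
print(f"Annual saving:    ${annual_saving:,.0f}")
```

On those ratios, the labour saving alone dwarfs any remaining price-per-gigabyte premium for flash, which is why the panel measured ROI in months.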
Flash had started as a niche product but in many corporations it is now the primary storage for all online workloads, and performance is the key differentiator. The challenge now is for storage networks to keep up with the ever-increasing demands of more apps.
There was a lot of discussion on convergence and hyper-convergence - a software-centric architecture that tightly integrates compute, storage, networking, virtualisation and other resources in a commodity x86 box, usually supported by a single vendor. Opinion ranged from enthusiasm to concern that it lacked the independent scalability of separate stacks. All agreed that the barrier to hyper-convergence is that there are very few opportunities to start from scratch.
There was also plenty of discussion of software-defined networking and storage. The best explanation was that instead of using custom chips to control the hardware (disks etc.), the control software moves to commodity x86 hardware, which allows for more scalability. As IBM said, “We can move anything that is software-based to the cloud.”
I finished by asking Casamento what will happen over the next five years. There is a move to Gen 6, but Gen 5 chassis will be supported for at least another 3-4 years, if not longer. A lot of work is being done on things like NVMe and even silicon photonics, where data is transferred between computer chips by light, which can carry far more data in less time than electrical conductors.
There will be improvements all round in the use of legacy copper (Cat 5 or later), mainly because of the huge installed base. He believes that for the foreseeable future Fibre Channel will remain the fastest storage bus mechanism.