At the last #EMCWorld we got some cool keynotes showing interesting trends in storage technologies. Two types of storage architectures came up again and again:
- Distributed Shared Nothing: this type of architecture works with independent controllers that do not share memory resources between nodes. It is designed for non-transactional data and brings distributed data protection features. Object storage fits this description, and you have several options such as OpenStack Swift, Ceph Object Storage or Amazon S3.
- Loosely Coupled Scale-Out: similar to the previous one, but aimed at storing transactional data. The data is distributed across all nodes in blocks or pieces, and writes and reads stay consistent among the nodes. A layer of the software maps the location of each piece of data and puts the pieces back together so a read operation is coherent. Performance and capacity scale out by adding nodes, and you can usually control the weight of each node within the cluster depending on its hardware features and its contribution to the overall performance (see the sketch below). Some examples are: EMC ScaleIO, Ceph Block Storage, VMware Virtual SAN, Nutanix and Pivot3.
More details about these architectures in “Understanding Storage Architectures”.
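As a quick illustration of that “weight per node” idea, this is roughly how it looks in Ceph, which is the scale-out software we run (the OSD name and weight below are just placeholders, not our real values):

```
# Show the CRUSH hierarchy and the current weight of every host/OSD
ceph osd tree

# Give an OSD backed by better hardware more weight, so it receives
# a proportionally larger share of the data and the I/O
ceph osd crush reweight osd.3 2.0
```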
Some keynote slides at EMCWorld showed Ceph as only a “Distributed Shared Nothing” type of architecture. I think that was a big mistake, because Ceph can work in both worlds simultaneously (I am using this note to reply to it).
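A minimal sketch of what I mean by “both worlds”: the same Ceph cluster can serve block devices (RBD) and plain objects at the same time. Pool names, sizes and file names here are only examples:

```
# Block world: a pool plus a 10 GB RBD image on top of it
ceph osd pool create volumes 128
rbd create --size 10240 volumes/vol01

# Object world: store and retrieve an object in another pool of the same cluster
# (for an S3/Swift-compatible API you would put the RADOS Gateway in front of it)
ceph osd pool create objects 128
rados -p objects put report.pdf ./report.pdf
rados -p objects get report.pdf /tmp/report.pdf
```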
In our case we are using Ceph only as a block storage solution, through OpenStack Cinder. Obviously there is a cost: extra I/O over the cluster network to replicate writes and keep reads consistent, plus maintenance traffic, but if you provide enough network resources you can get more than enough throughput for your cloud servers.
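For reference, this is more or less what the wiring looks like; a sketch assuming a dedicated replication network and a “volumes” pool, not our exact production values:

```
# /etc/ceph/ceph.conf (excerpt): separate client traffic from replication traffic
[global]
# clients / OpenStack nodes
public network = 192.168.10.0/24
# replication between OSDs
cluster network = 192.168.20.0/24
```

```
# /etc/cinder/cinder.conf (excerpt): Ceph RBD backend for Cinder
# (remember enabled_backends = ceph in [DEFAULT] when using a backend section)
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>
```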
(Below you have a picture showing both architectures)
In my last note I shared some results of our tests with Ceph Block Storage and OpenStack Cinder. Using local SSD disks we got numbers around 900 MB/s from a VM executing “dd if=/dev/zero bs=1M”. In this case the results were a bit lower than with local disks, but we won on redundancy, data persistence, flexibility, scalability and cost (you have to sacrifice something to succeed). It would have been much worse if we had not used PCIe flash cards for the journals (a lesson we have learned).
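In case you want to reproduce a similar test, this is roughly the kind of run we do; the target device and the count are assumptions, and oflag=direct is there to bypass the guest page cache:

```
# Inside the VM, write 4 GB straight to the attached Cinder/RBD volume (here /dev/vdb)
dd if=/dev/zero of=/dev/vdb bs=1M count=4096 oflag=direct

# On the OSD nodes the journals sit on the PCIe flash cards, e.g. via ceph.conf:
# [osd]
# osd journal = /var/lib/ceph/osd/$cluster-$id/journal
# where that path is a symlink or partition living on the flash card
```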
Anyway, we have been able to lower our storage costs and bring high performance and availability to our customers.
(Below you can see a picture of our lab results with Ceph Block Storage and OpenStack Cinder)
See you around