I want to share some results of our work with #Ceph and #OpenStack Cinder. We are working with three servers with SSD drives, plus flash cards for journal and cache. We executed three write tests of different sizes (1024 MB, 2048 MB and 4096 MB) against Ceph devices from a virtual machine provisioned on KVM through OpenStack – the disk was provisioned through Cinder – using “dd if=/dev/zero bs=1M”. As you can see in the picture below, we got results ranging from 529 MB/s to 782 MB/s… and remember, we got them from a virtual server. How much would you have to pay a traditional storage vendor for performance like this? I will let you figure that out yourself.
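If you want to reproduce the test, here is a minimal sketch; the mount point /mnt/ceph-vol and the conv=fdatasync flag (which forces data to disk before dd reports a rate) are assumptions, not our exact invocation:

```
# Run three write tests of increasing size against the Cinder-backed
# volume, assumed to be mounted on /mnt/ceph-vol inside the VM
for size in 1024 2048 4096; do
    dd if=/dev/zero of=/mnt/ceph-vol/test-${size}.img bs=1M count=${size} conv=fdatasync
done
```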
Below you can see the status of our Ceph cluster and the Placement Group versions, obtained with the command “ceph -w”. We are using 44 OSDs – an OSD is the object storage daemon for the Ceph distributed file system; it is responsible for storing objects on a local file system and providing access to them over the network (more info at “Ceph-OSD – Ceph Object Storage Daemon“) – and three monitors for redundancy.
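If you want to poke at a cluster yourself, these are stock ceph CLI commands (no assumptions here):

```
# Stream cluster status and PG state changes in real time
ceph -w

# One-shot summaries, handy for a quick health check
ceph status
ceph osd stat
ceph mon stat
```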
We have also set two copies for all stored data, to provide better protection against hardware failures (you can see how we set this in the sketch after the quotes below). What is a placement group? Well, I will share a couple of sentences with the definition and its advantages (more info at “Monitoring OSDs and PGs”):
- “A Placement Group (PG) aggregates a series of objects into a group, and maps the group to a series of OSDs”.
- “Placement groups reduce the number of processes and the amount of per-object metadata Ceph must track when storing and retrieving data”.
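Setting the replica count is a one-liner per pool; “volumes” is the usual name of the pool backing Cinder, which is an assumption here:

```
# Keep two copies of every object in the pool backing our Cinder volumes
ceph osd pool set volumes size 2

# Verify the replica count
ceph osd pool get volumes size
```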
The next picture shows the OSD tree, with each OSD's weight and status within the cluster.
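That view comes straight from another stock command:

```
# Print the CRUSH tree of hosts and OSDs, with their weights and up/down status
ceph osd tree
```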
With this kind of speed in your cloud, your users will be more than happy.
Well, see you around!