Hello! After almost a year working with Ceph in our cloud solution koolfit.mx, we are pleased to share some early performance test results. We are preparing something more polished, but I could not resist publishing these now, even though the numbers may change later. Based on this first round of results, we have decided to make some adjustments to the platform, and we are confident we can improve on these figures.
Even so, here are the results:
All of these Ceph tests were run without any disk cache (libvirt cache=none).
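For reference, disabling the disk cache in libvirt is done per disk in the domain XML via the `cache` attribute on the driver element. A minimal sketch for an RBD-backed disk (the pool/volume name and monitor host below are placeholders, and a cephx `<auth>` element may also be required depending on your setup):

```xml
<disk type='network' device='disk'>
  <!-- cache='none' bypasses the host page cache -->
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='volumes/volume-0001'>
    <host name='ceph-mon1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```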
Some additional info:
- We also ran some tests directly against Ceph (not through KVM and libvirt) and got up to 3x the highest numbers shown in the chart.
- In the VMware case, we also disabled the disk cache to get comparable conditions.
- We also disabled the cache on the Ceph storage nodes (the RAID controllers' cache).
- KVM and libvirt require configuration changes to get much better performance; we are working on that now.
- We are upgrading Ceph to the Giant release in the next stacks we deploy.
- We want to run some tests with caching enabled in Cinder and libvirt to see the differences; the idea is that the customer could decide through Horizon whether to enable it.
- We will use JBOD at the hardware level to reduce per-disk overhead on Ceph's storage nodes (today we are forced to create a single-disk RAID-0 for every OSD).
Finally, some more details about the testing:
- Tool: Fio (Flexible I/O Tester), a synthetic benchmark: http://freecode.com/projects/fio
- Tool’s parameters:
- Detailed results (gist on GitHub, feel free to share): https://gist.github.com/pinrojas/54737a5989d85d1c4975
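As an illustration of how such a run is described, a fio job file for a random-write test might look like this. The exact parameters we used are in the gist above; the values below are hypothetical, chosen only to show the shape of a job file:

```ini
; illustrative fio job file - values are examples, not our actual test parameters
[global]
ioengine=libaio     ; asynchronous I/O engine on Linux
direct=1            ; O_DIRECT, bypasses the page cache (matches cache=none)
time_based=1
runtime=60          ; run each job for 60 seconds

[randwrite-4k]
rw=randwrite        ; random writes
bs=4k               ; 4 KiB block size
iodepth=32          ; 32 outstanding I/Os
filename=/dev/vdb   ; target device inside the VM (placeholder)
```

You would run it with `fio job.fio` inside the guest.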
See you around!