Test #openstack #performance with #rally in a few steps

Before Docker, installing Rally was… hmmm, sh***y: too many dependencies. I screwed up my OpenStack controller once, then started using an external server. Now I can even run it on my laptop with Docker 🙂 OK, the steps on CentOS would be: Step ONE: Install Docker and pull the Rally image. Step TWO: Recreate the database. As soon as you get into the container, do the following … Continue reading Test #openstack #performance with #rally in a few steps
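The two steps above could look roughly like this on CentOS — a sketch only: the image name `rallyforge/rally` and the `rally-manage db recreate` command are assumptions based on the upstream Rally project, so check the current docs before copying this.

```shell
# Step ONE: install Docker and pull the Rally image
# (image name is an assumption; verify against the upstream project)
sudo yum install -y docker
sudo systemctl start docker
sudo docker pull rallyforge/rally

# Step TWO: jump into the container and recreate Rally's database
sudo docker run -it rallyforge/rally bash
# ...then, inside the container:
#   rally-manage db recreate    # newer releases use `rally db recreate`
```

These are environment-setup commands (they need a running Docker daemon), so treat them as a config fragment rather than a script to run blindly.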

#Ceph 0.87 early perf results KVM 2.0 & libvirt 1.2.2 #Openstack Icehouse

Hello, after almost a year working with Ceph in our cloud solution koolfit.mx … we are pleased to share some early performance test results. We are on the way to preparing something nicer; however, I've got a not-so-weird impulse to publish them even though those numbers could change later. We've concluded we should make some adjustments to the platform based on this first round of results… we are sure … Continue reading #Ceph 0.87 early perf results KVM 2.0 & libvirt 1.2.2 #Openstack Icehouse

What do I have to choose to efficiently store, protect and manage my private cloud’s data?

The number of available storage systems in the market has grown on my radar during the last months, and with it the complexity of choosing one of them. I am personally afraid of choosing one now and regretting it later, aren't you? The storage ecosystem is changing so fast that if you select something now, the criteria you have just used probably won't hold a few months later. This new trend of software-defined everything has blessed the storage field, and now … Continue reading What do I have to choose to efficiently store, protect and manage my private cloud's data?

Other interesting Ceph features for Data Management and Storage Efficiency

Ceph is not just a JBODD (Just a Bunch of Dumb Disks) technology with an amazing algorithm – see my previous posts for more information about CRUSH – to manage data placement among nodes. You get many more interesting features that help you save money or be more productive with your time. Let's start with the file system used as the most basic component by the Object … Continue reading Other interesting Ceph features for Data Management and Storage Efficiency

Commodity Hardware is not the only option for the future

I met @sakacc last Thursday evening face-to-face, enjoying a not-so-short chat that made me re-think some given statements – that doesn't mean I've changed my thoughts about #Ceph and its place within type 4, OK? – I know I committed in my previous note to writing about other #Ceph features, but I will use this space to talk about something more important, my internal … Continue reading Commodity Hardware is not the only option for the future

#CRUSH: Distributing the low-level block allocation problem

I remember when I started to work with some block storage arrays – or SAN storage systems – and how vendors in those days were committed to bringing savings through a disk consolidation project. A solution that caused some confusion when you realized you had to pay three times more for not that much performance and capacity – consolidation was the trend in those days. I remember when those storage … Continue reading #CRUSH: Distributing the low-level block allocation problem

#OpenStack speeds Loosely Coupled Scale-Out storage Type’s adoption

In my previous notes I've shown the definition of the Loosely Coupled Scale-Out storage architecture and mentioned some products and vendors as references. #Ceph, the open source project, with its block storage presentation, is an example of this type of solution. I've also covered how #Ceph integrates with #Openstack as a block and object storage system, and I have to say that I personally prefer … Continue reading #OpenStack speeds Loosely Coupled Scale-Out storage Type's adoption

#Ceph versus Distributed Shared-Nothing Storage Architectures

At last #EMCWorld we got cool keynotes showing beautiful trends in storage technologies. There were frequent references to these two types of storage architectures: Distributed Shared-Nothing: this type of architecture runs on independent controllers that share no memory resources between nodes. This sort of solution has been made for non-transactional data and brings distributed data protection features. Object Storage is a solution that fits with this … Continue reading #Ceph versus Distributed Shared-Nothing Storage Architectures

#Ceph: The most affordable flash storage to speed your Cloud (our labs results with #OpenStack)

I want to share with you some results of our work with #Ceph and #OpenStack Cinder. We are working with three servers with SSD drives, and also flash cards as our journal and cache. We've executed three write tests with different sizes (1024MB, 2048MB and 4096MB) against Ceph devices from a virtual machine provisioned on KVM through OpenStack – the disk was provisioned through Cinder – using … Continue reading #Ceph: The most affordable flash storage to speed your Cloud (our labs results with #OpenStack)
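A test like the one described could be sketched with plain `dd` from inside the guest — a sketch under assumptions: the post doesn't say which tool was used, and in the real setup the target would be the Cinder-provisioned block device attached to the KVM instance (e.g. `/dev/vdb`, a guess), while this version defaults to a throwaway file.

```shell
#!/bin/sh
# Sequential write test at the three sizes from the post: 1024, 2048, 4096 MB.
# TARGET would be the Cinder-attached device in the real setup; it defaults to
# a scratch file here so the sketch is safe to try anywhere.
TARGET="${1:-/tmp/write-test.img}"
for SZ_MB in 1024 2048 4096; do
  # conv=fsync flushes data to stable storage so the reported timing is honest
  dd if=/dev/zero of="$TARGET" bs=1M count="$SZ_MB" conv=fsync 2>&1 | tail -n 1
  rm -f "$TARGET"
done
```

Each `dd` line prints bytes written, elapsed time and throughput, which is enough to compare runs across the three sizes. Treat it as a heavy-I/O fragment: writing ~7 GB total, so point it only at a disposable target.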

#OpenStack likes #Ceph: You will love the way it works

Ceph scales out, Ceph is highly redundant, Ceph is flexible, Ceph is full of cool features. Ceph and #OpenStack are close friends. We love you, Ceph! Firefly is the latest Ceph release and a beautiful starting point for us to develop what we named "the first storage solution fully supported by KIO". As I've mentioned in my previous notes regarding this "Software Defined Storage" reality, commodity hardware is helping out … Continue reading #OpenStack likes #Ceph: You will love the way it works