A strong Data Protection Strategy supporting your Cloud will bring you sweet dreams

As I’ve mentioned before, @KoolFit is not just another Cloud option in our portfolio; it’s a good example of our company’s ability to play at any level when facing technology challenges like this one.

OpenStack is gaining traction in the market – I invite you to read “The OpenStack opportunity: so how big is it?”, which cites figures such as $3.3B as the expected revenue from OpenStack-related business in 2018, according to a 451 Research report –

OpenStack Revenue Prediction

Quoting the Gigaom note: “If you take the $3.3 billion as gospel, and divide that by the 60 companies surveyed, that’s $55 million per company in 2018. Not chicken feed but not a blockbuster either”. I don’t agree: I think 20% of these companies will end up taking 80% of that revenue – I’ll let Pareto’s principle back me up on this one –
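
To make that concrete, here is the back-of-the-envelope arithmetic in Python. The $3.3B and the 60 surveyed companies come from the figures above; the 80/20 split is purely my assumption:

```python
total_revenue = 3.3e9   # USD, projected 2018 OpenStack-related revenue (451 Research)
companies = 60          # companies surveyed

top = round(companies * 0.20)                         # the "lucky" 20% (12 companies)
top_each = total_revenue * 0.80 / top                 # ~220M USD each
rest_each = total_revenue * 0.20 / (companies - top)  # ~14M USD each

print(f"even split : ${total_revenue / companies / 1e6:.0f}M per company")
print(f"80/20 split: {top} companies at ~${top_each / 1e6:.0f}M, "
      f"{companies - top} at ~${rest_each / 1e6:.0f}M")
```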

Now, let’s think about the data protection strategy around your Cloud portfolio. OpenStack brings you great orchestration capabilities, and you will also find projects aimed at protecting your information, but it’s really important to set your customers’ expectations about what they can actually get from them.

The weakest point of any cloud is data availability. Providers can bring back your compute power fast enough after a failure to meet their published SLAs – please read the contract’s fine print. However, what about the data inside every instance? Will it be recovered down to the last bit?

If you want to move applications to a private or public cloud infrastructure, you have to be prepared for failures and consider worst-case scenarios. Failures can come from software bugs in the storage, the hypervisor, the network, and so on; no vendor or provider can guarantee you won’t have failures. Software failures can be catastrophic: they can take down complete stacks of servers or wipe out all the data from any storage, no matter whether you have mirroring or RAID-1 set up on your disk pools.

Network and storage components are now mostly software-based. The most important differentiators among storage and network vendors come from software; hardware is becoming a commodity.

Hardware failures are predictable, and they are already handled by software.

To get those sweet dreams, my suggestion is not to rely on just one solution: use as many of these options as you can, and get skilled people involved to manage them and bring down your RTO/RPO:

  • Snapshots, to recover from logical errors in your data, like database corruption – a big share of the cases. They are much faster than tape. The disadvantage is that snapshots live inside the storage array: if you lose your storage, you lose your snapshots too (see the first sketch after this list).
  • Copies of your data in a disk-based backup system. It needs to be close to your instances to get enough bandwidth to transfer the data as fast as possible and recover your services with a decent RTO (it could even sit in a different HA zone). The big issue is that you cannot count on it for disaster protection; for that you need off-site copies (second sketch below).
  • Copies of your data in object storage for off-site protection. This object storage is usually geo-replicated. The drawbacks are that it is slow to recover data from and complex to manage (third sketch below).
  • Remote Cloud Backup solutions, which are good for off-site copies and protect your data worldwide, but slow when you need to recover a large amount of information or a lot of files.
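
For the snapshot tier, a point-in-time copy is a one-liner against Cinder. Here is a minimal sketch driving the `openstack` CLI from Python; the volume and snapshot names are placeholders, and it assumes your environment (or clouds.yaml) already points at the right cloud:

```python
import subprocess

VOLUME = "db-volume"            # placeholder: the Cinder volume you want to protect
SNAPSHOT = "db-volume-snap-01"  # placeholder: name for the point-in-time copy

# Fast, storage-local protection against logical errors (e.g. database corruption).
# "--force" lets you snapshot a volume that is attached to a running instance.
# Remember: the snapshot lives on the same backend as the volume, so losing the
# storage means losing the snapshot as well.
subprocess.run(
    ["openstack", "volume", "snapshot", "create",
     "--volume", VOLUME, "--force", SNAPSHOT],
    check=True,
)
```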
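
For the second tier, the same volume can be copied off the primary storage into the Cinder backup service (typically disk-based and, ideally, close to your instances). Again just a sketch, with placeholder names:

```python
import subprocess

VOLUME = "db-volume"            # placeholder: the volume to protect
BACKUP = "db-volume-backup-01"  # placeholder: name of the backup

# Copy the data off the primary array into the backup service. This covers the
# "storage is gone" case that snapshots cannot, but it usually still lives in
# the same site, so it is not disaster protection on its own.
# (Use --force if the volume is attached to a running instance.)
subprocess.run(
    ["openstack", "volume", "backup", "create", "--name", BACKUP, VOLUME],
    check=True,
)
```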
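
And for the off-site tier, pushing an export into (ideally geo-replicated) object storage is a couple of calls more; the container and file names below are placeholders:

```python
import subprocess

CONTAINER = "offsite-backups"  # placeholder: a Swift container, ideally geo-replicated
DUMP_FILE = "db-dump.sql.gz"   # placeholder: an export/dump you want to ship off-site

# Slower to restore and more complex to manage, but it survives the loss of the
# whole primary site, which neither snapshots nor the local backup service do.
subprocess.run(["openstack", "container", "create", CONTAINER], check=True)
subprocess.run(["openstack", "object", "create", CONTAINER, DUMP_FILE], check=True)
```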

So there is no perfect option, but you can save your neck by using more than one.

See you around!
