A traditional backup solution means deploying agents on every server that copy the data, in a consistent and recoverable way, to a secondary disk system or directly to tape.
The way you design your backup architecture and components is determined by the type of data, its size, its growth rate, and the topology of your IT environment (sites, network performance, geographies). You also need to define the Recovery Time and Recovery Point Objectives (RTO and RPO) that your business, and therefore your applications and data, require. Not all data and applications need the same RTO and RPO; it depends on how critical each asset's availability is for your business. You would not give the email service a stricter RTO/RPO than the most critical application that manages your production chain worldwide.
Lowering both RTO and RPO costs exponentially more.
Pushing RTO and RPO toward zero can cost millions of dollars, because you will probably need synchronous replication between sites (duplicated infrastructure and higher-speed network links) and sophisticated mechanisms to back it all up, such as consistent snapshots every hour and capture of every log transaction every second. You will also need an expensive, skilled team to react in case of a failure and to test this data protection solution every month as evidence of control.
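To make the RPO side of that trade-off concrete, here is a minimal sketch. The schedules are illustrative assumptions, not real service tiers; the only point is that the worst-case data loss is bounded by the interval between consistent copies.

```python
# Illustration of the RPO trade-off: with backups every N hours, a failure
# just before the next backup loses up to N hours of data. The schedule
# labels below are made-up examples, not real product tiers.

def worst_case_data_loss_hours(backup_interval_hours):
    """The effective RPO equals the interval between consistent copies."""
    return backup_interval_hours

schedules = [
    ("daily tape backup", 24.0),
    ("hourly snapshot", 1.0),
    ("synchronous replication", 0.0),
]

for label, interval in schedules:
    loss = worst_case_data_loss_hours(interval)
    print(f"{label}: worst-case data loss up to {loss} hours")
```

Shrinking that interval from 24 hours to near zero is exactly where the exponential cost comes from: each step down requires faster links, more infrastructure, and more operational discipline.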
The size and type of data, as I mentioned, also define your backup components and architecture. In the case of files, these must be copied to external storage equipment attached to a backup server, sometimes called a media or master server: a media/master server runs a backup application that manages the tape or storage devices directly attached to it.
Why not send files directly to tape? Well, the stream of data being copied from the file server does not flow at a constant rate through the network (we are talking about thousands of files of different sizes), so the tape drive runs at an irregular speed. That is very inefficient for this type of device and makes the backup process take forever!
A good solution for this case is a Virtual Tape Library (VTL), which receives the data copied directly from file servers to a virtual tape. A VTL presents disks as tapes to any backup application, without the problems of managing irregular stream rates. Later you can copy the virtual tape to a physical one in a constant, high-speed stream, which is the best scenario for any physical tape device. However, this solution does not bring much better performance if you are dealing with databases, and you still have to vault the data to physical tapes later anyway.
A database must be set in a consistent state before you start copying its data and log files to a backup device. This state can be reached through special database instructions (through RMAN in the case of Oracle) triggered by the backup agent installed on the same server where the database runs. You can copy this data directly to a tape drive attached to the database server. Yes, you read that right: "directly to tape", because in the case of databases the stream of data to tape flows at a constant rate. Another point to consider is that databases can usually be compressed to around a third of their original size, which saves you a lot of storage capacity.
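The reason database files compress so well is their repetitive structure: fixed-width fields, padding, and repeated values. Here is a small sketch using Python's standard zlib; the rows are synthetic, so the exact ratio will differ from the "around three times" typical for real databases, but the effect is the same.

```python
import zlib

# Synthetic "table" rows with repeated structure, mimicking the redundancy
# found in real database files. Real-world ratios vary by engine and data;
# the article's "around three times" is a typical figure, not a guarantee.
rows = "".join(
    f"id={i:06d}|status=ACTIVE|country=MX|balance={i * 1.01:012.2f}\n"
    for i in range(5000)
)
data = rows.encode("ascii")

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
print(f"{len(data)} bytes -> {len(compressed)} bytes ({ratio:.1f}x)")
```

Run against a real database dump instead of the synthetic rows to see your own ratio before sizing backup storage.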
Special applications like an Exchange server and VMware images need to be treated like a database: they require special agents to set data and logs in a consistent state before being copied and transferred to a media server.
Now, in "the Cloud", do you think tape could fit? The answer is "it depends". Public cloud solutions, however, are not considering tape at all. Disk is cheaper to manage and more reliable; it can be easily automated, and you do not need human assistance at any point during the backup process (at least you do not need someone to carry a tape somewhere). Even so, no provider today offers a fully automated backup solution tied to its ANYaaS.
In terms of automation, backup is still an overdue bill. I do not know of any service that can guarantee with certainty that your IT assets will be 100% protected at all times. Backup requires exhaustive management of every protected item and constant evidence of its control.
There are "Backup as a Service" (BaaS) solutions with different levels of functionality and system compatibility coverage. Some of them target the SMB market and cover only files. Some vendors promote object storage as an alternative, but they lack the agents that make it easy to automate policies at the application level.
KIO is not fully solving this problem today, but it has taken a big step. KIO has just released a Backup as a Service (BaaS) offering that gives your company an exclusive dashboard to manage your own backup resources. BaaS runs from our best-in-class datacenters and fetches your data from wherever your IT assets are located, under a monthly payment plan based on the amount of protected GBs. We keep 30 online copies of your data (one per day) as part of our retention policy, and of course we charge you only the equivalent of one copy, not all 30.
Not every company in LATAM can afford a good-quality, high-bandwidth connection to the Internet or between offices. Our solution was designed for this reality: it is highly efficient in bandwidth usage thanks to compression and deduplication at the source. We can also transfer the data encrypted, to reinforce access control policies over the information from the source.
We can install a local appliance at your office to improve the local RTO of the most recent copies, for a small additional monthly fee. Our compatibility coverage is better than average: we support systems like AIX, Solaris, Linux, AS/400, Oracle, SQL Server, SharePoint, Exchange, and so on.
You are not required to host your IT systems in our datacenters to get this service. Of course, you do need at least a reasonably good Internet connection, with decent bandwidth depending on how much data you need to back up and transfer from your office.
We invite you to test this new service. See you next Tuesday.