There is a lot of material explaining how important SAP HANA is for your company: it can increase your revenue growth, reduce churn, lower stock costs dramatically, detect fraud in time, and so on. However, this note is not intended to sell more benefits to justify the use of HANA (I think that is clear by now; in any case, we are open to providing more information if you want). The purpose of this note is to describe why the most affordable and fastest way to get the benefits of HANA is through an On-Demand service model.
Let’s start by describing, based on our own experience, how complex implementing it can be:
We will check the required components for the HANA database: first, you need to size servers, network, and storage for the pure HANA data that will be processed in memory. There are several options on the market from well-known vendors that provide appliances (servers certified by SAP). But these appliances don’t help you if you need more than 1 TB of raw in-memory capacity (…for now) together with high availability: you have to manually replace the failed piece and pray to God that everything comes back OK afterwards. So, if you need high availability and more capacity, you have to go for a scale-out design based on certified converged infrastructures like VCE vBlock or Hitachi, or reference architectures like FlexPod (NetApp and Cisco) or IBM with Fusion-io cards using GPFS, and so on.
The most important piece of hardware is the processor (CPU): it needs to be certified for SAP HANA, and this condition applies both to appliances and to scale-out equipment. A scale-out architecture requires at least a couple of servers, plus switches (hopefully handling 10 Gbps or higher connections) to provide enough bandwidth between servers, storage, and the rest of the world. I’ll let you decide whether to add legacy Fibre Channel switches for dedicated server-to-storage communication to transfer logs and data; in our case, we prefer unified fabric architectures because they are simpler to manage. The storage must have enough resources (number of spindles, capacity, processors, cache memory) to deliver the performance required to keep the data persistent for all active nodes, so that if any of them fails, HANA can bring the data up on a stand-by node. If you don’t want to buy a certified stack and prefer to take the risk of building this yourself, you can apply to an open-architecture program with SAP, as we did 😉
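To make the sizing discussion above concrete, here is a back-of-the-envelope sketch in Python. The compression factor and the 2x working-space multiplier are common rules of thumb, not official figures for any specific workload; real sizing must be done with SAP’s own sizing tools. The 1 TB ceiling is the single-appliance limit mentioned above.

```python
# Rough HANA memory sizing sketch -- illustrative only. The compression
# factor and workspace multiplier are assumed rules of thumb; actual
# sizing must come from SAP's sizing tools and reports.

def estimate_hana_ram_gb(source_db_gb, compression_factor=4.0,
                         workspace_multiplier=2.0):
    """Estimate in-memory RAM needed for a given source database size.

    source_db_gb:         uncompressed size of the source data (GB)
    compression_factor:   assumed columnar compression ratio (varies widely)
    workspace_multiplier: extra room for temporary computations (rule of thumb)
    """
    compressed = source_db_gb / compression_factor
    return compressed * workspace_multiplier

def needs_scale_out(ram_gb, max_single_node_gb=1024):
    """With ~1 TB as the single-appliance ceiling, anything larger
    pushes you toward a scale-out design."""
    return ram_gb > max_single_node_gb

ram = estimate_hana_ram_gb(3000)   # a 3 TB source database
print(ram)                         # 1500.0 (GB)
print(needs_scale_out(ram))        # True -> scale-out territory
```

With those assumptions, a 3 TB source database lands well past the single-node ceiling, which is exactly the point where the converged or reference architectures above come into play.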
Then there are other concerns about the IT equipment for the applications around the HANA DB, and these depend on how you prefer to consume HANA:
- SAP customers: There are two suggested options for deploying HANA if you are already using SAP applications for ERP and BI. The first is “BW on HANA”, which replaces the existing database supporting SAP BW with HANA to accelerate it. The second is “Suite on HANA”, which replaces the existing database supporting SAP ECC (and sometimes also BW) with HANA, both to accelerate it and to run “what-if” scenarios directly on transactional data.
- Agile data mart (most of the time for non-SAP customers): In this case we need to implement ETL, SLT, or some other method to take data from different sources: Oracle E-Business Suite, SQL Server, Siebel, and other traditional or non-traditional sources. Then we use applications like BusinessObjects to create dashboards, charts, and so on.
- Other applications, such as mobile dashboards, customer segmentation tools, and so on, depending on customer needs.
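The agile data-mart option above boils down to a classic extract-transform-load flow. The following minimal Python sketch shows that shape; the source rows, column names, and target table are hypothetical, and in practice the extract step would read from Oracle/Siebel via a database driver while the load step would use a HANA client (ODBC/JDBC) or SLT replication.

```python
# Minimal ETL sketch for the "agile data mart" scenario: pull rows from a
# source system, normalize them, and stage them for bulk loading into HANA.
# Field names and the SALES_FACT table are illustrative assumptions.

def extract(source_rows):
    """Stand-in for reading rows from a source database cursor."""
    return list(source_rows)

def transform(rows):
    """Normalize field names, trim strings, and cast amounts to float."""
    out = []
    for r in rows:
        out.append({
            "customer_id": int(r["CUST_ID"]),
            "region": r["REGION"].strip().upper(),
            "amount": float(r["AMOUNT"]),
        })
    return out

def to_insert_statements(rows, table="SALES_FACT"):
    """Render staged rows as an SQL template plus parameter tuples,
    ready for a DB-API executemany() call against the target."""
    sql = f"INSERT INTO {table} (customer_id, region, amount) VALUES (?, ?, ?)"
    params = [(r["customer_id"], r["region"], r["amount"]) for r in rows]
    return sql, params

raw = [{"CUST_ID": "7", "REGION": " north ", "AMOUNT": "120.50"}]
sql, params = to_insert_statements(transform(extract(raw)))
print(params)  # [(7, 'NORTH', 120.5)]
```

Whatever tool you pick (Data Services, SLT, custom scripts), the moving parts are the same: a connector per source, a normalization layer, and a bulk loader into HANA.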
Depending on which option you choose to consume HANA, you could require a completely different stack of servers, network, and storage, and its capacity depends on performance needs. It could be bigger or smaller than the stack required for the pure HANA database. Depending on the situation, you may also need to deploy a high-speed communication core between HANA and the other components.
You also have to install security components to control access to the information (HANA usually contains strategic information for your company). In some cases, this can be as complex as hiding the most sensitive data through data masking. A good backup process is also important to take care of your data; there aren’t many options, and the most common is to use a Network File System to vault data directly from HANA.
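To illustrate the data-masking point, here is a small Python sketch. The field formats and masking rules are illustrative assumptions; HANA and the tools around it provide their own authorization and masking mechanisms, and this is only the general idea.

```python
# Data-masking sketch: obfuscate sensitive fields before exposing them to
# broader audiences. The card-number format and salt are illustrative.

import hashlib

def mask_card(card_number):
    """Keep only the last 4 digits of a card/account number."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def pseudonymize(value, salt="demo-salt"):
    """One-way pseudonym: analysts can still join and group on the
    field, but the original value is not recoverable."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

print(mask_card("4111 1111 1111 1234"))  # ************1234
```

Masking keeps reports usable while the strategic raw values stay behind the access controls.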
Finally, the most expensive and strategic factor for success is services: services to set up, and later manage, all the components. You need not only the IT skills to manage the hardware and basic software; there are many other required profiles, such as Data Services, HANA Studio, and BusinessObjects. We’ve enabled one of the first Active Embedded services in the region to manage, architect, and bring experts on demand directly to your innovation and efficiency needs. Data scientists are the key to creating the analytics and predictive models that turn data into profitable decisions and make this investment worth making. These people are not easy to find: they have to master several techniques, such as statistics, visualization, mathematics, and modeling, and sometimes need specific knowledge of the related industry.
Well, all of this sounds very complex, and it really is.
If after this long description you still wonder “Why is an On-Demand model the best option?”, here are TWO strong reasons, probably on top of your mind at this very moment:
- Cost: An On-Demand model helps us, as a provider, share hardware, software, and services resources. Some of them must be dedicated, such as the production servers that manage big data volumes in memory, but the rest is highly shareable. We can rent them for a monthly payment, and you can reduce or increase usage at any moment, taking care of your finances. Setup and management costs are reduced dramatically: the same team can serve from 3 to as many as 10 different customers, depending on how complex the implementation is.
- Time-to-market: Time is money, and we have the resources and people to start implementing now. Don’t waste time sourcing and deploying resources, and get the benefits of this solution several months in advance.
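The cost argument above can be sketched in a few lines of Python. All the figures are hypothetical, chosen only to show the shape of the math: dedicated pieces stay constant while the shared services cost is divided across customers.

```python
# Back-of-the-envelope sketch of the sharing argument: when one operations
# team serves several customers, the per-customer services cost drops
# roughly with the sharing factor. All figures are hypothetical.

def monthly_cost_per_customer(dedicated_hw, shared_services, customers_per_team):
    """dedicated_hw:      cost of the non-shareable pieces (e.g. production nodes)
    shared_services:      monthly cost of the ops team + shared components
    customers_per_team:   3 to 10 customers, depending on complexity"""
    return dedicated_hw + shared_services / customers_per_team

solo = monthly_cost_per_customer(10_000, 30_000, 1)     # you pay it all yourself
shared = monthly_cost_per_customer(10_000, 30_000, 10)  # team shared 10 ways
print(solo, shared)  # 40000.0 13000.0
```

The dedicated portion is the floor, but everything above it scales down with sharing, which is where the On-Demand savings come from.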
Now for a selling tone (forgive me, it’s in my blood): who can provide all this? Well, of course, KIO can:
- We are the first company to provide HANA in an On-Demand service model in the region.
- We have trained people covering all these different profiles, and we’ve enabled one of the first Active Embedded services in the region.
- We built our own stand-alone and scale-out stacks for HANA and the surrounding applications, and they are available today to start implementing.
- We have an entire business unit, called Dattlas, to cover data scientist skill demands.
HANA is still maturing, and there are many points to improve yet; we are working on some of them, such as installing bigger dedicated and virtual HANA nodes, or trying automation tools to speed up provisioning. A big part of this progress depends directly on SAP and its efforts to extend and mature its integration with tools and partners. Some competitors are also coming up, like Oracle Exalytics and IBM DB2 BLU, bringing in-memory processing; but Big Data analytics is more than accelerating your own database. They are just starting, but any competitive advantage is temporary, so let’s see what happens in the coming months.
I hope you’ve enjoyed this note. Special thanks to Miguel Angel Alvarez, who is leading our HANA Operation Services initiative at KIO, for supporting me with this note.