SAP announced at TechEd Las Vegas 2010 the general release of its in-memory appliance, called HANA (High-Performance Analytical Appliance), sometime in Dec 2010. This is probably one of the most exciting announcements from SAP in recent times. The initial announcement that SAP was working towards an in-memory appliance came at SAPPHIRE in May 2010.
Without going too deep into the technicalities, HANA is an in-memory appliance: data lives in RAM, much like the RAM on our laptops. Currently SAP data is stored in databases. With HANA, this data can also be stored (as a secondary copy) and indexed in memory, and hence read and analysed at much greater speeds than disk-based database reads allow. Previously this was not technically feasible because of capacity constraints and the high cost of RAM. HP and SAP collaborated to create this in-memory device. If you are interested in the technicalities, read this.
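To get an intuition for why this matters, here is a minimal sketch (my own illustration, not SAP code): a single column of a large table is held as a flat in-memory array and aggregated directly, which is the kind of full-column scan that disk-based row reads make expensive.

```python
import time
from array import array

# Hypothetical illustration: one column of 1 million sales values,
# held entirely in memory as a packed array of doubles.
sales = array("d", (float(i % 100) for i in range(1_000_000)))

# Aggregating the whole column is a single in-memory scan; no disk
# seeks, no row-by-row fetches from a database server.
start = time.perf_counter()
total = sum(sales)
elapsed = time.perf_counter() - start
print(f"aggregated {len(sales):,} values in {elapsed:.4f}s")
```

The point is not the Python code itself but the access pattern: when the data already sits in memory, an analytical query reduces to a linear scan.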
HANA will use the SAP® Business Analytic Engine as its software, which “will use in-memory technology with a powerful calculation engine, combined with easy-to-use, business-centric data modeling tools and data management tools”. Intel and SAP, through joint engineering, have optimized SAP HANA to help ensure that it runs optimally on the Intel® Xeon® processor 7500 series.
The existing Business Warehouse Accelerator (BWA) accelerates the query performance of BW; HANA, by contrast, is targeted at SAP R/3 4.6 and higher, and the SAP ECC market.
At TechEd Las Vegas, Dr Vishal Sikka of SAP spoke about the status of HANA testing and SAP's proposal for its general release. “Together with our partners and customers, SAP is breaking down the boundaries between real-time events and real-time business decisions,” said Sikka. “Here’s an example of how we bring this to life: in early tests conducted with our customer, one of the largest consumer packaged goods companies, they were able to query over 460 billion records of data in just seconds. This is a watershed moment in the evolution of business software. And this is the world that our customers need in order to become the best run organizations.” Sikka reiterated the company’s guiding principle of ‘innovation without disruption’: the ideal of being able to continuously evolve the technology landscape while leveraging innovations in a way that is non-disruptive to existing investments.
Here are some of my observations, and some questions of my own that I intend to scout answers for over the next few days:
- Companies stuck on SAP's R/3 product due to economic conditions could benefit from HANA. HANA does not offer new functionality, but it should improve reporting performance without the need to invest in expensive upgrades or in implementing BW or BO.
- HANA will be very useful for data-intensive industries, particularly retail, which have high-volume, low-value transactions.
- I am not sure how the in-memory data will synchronise with data in the source application. How real-time will the in-memory data be? Will the data be fed into memory through an extractor, or will it synchronise in real time with the source application?
- If there is some kind of extractor that moves data from the source into memory, then the in-memory data is only as good as the quality of the extractor.
- Then there is the all-pervading issue with BW and any analytical tool: the data in the analytical tool is only as good as the data in the source application.
- HANA is only a reporting tool: any corrections to the HANA analytics data will need to be made in the source application.
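The synchronisation question above can be made concrete with a small sketch. Assuming a periodic delta extractor (all names here are hypothetical and purely illustrative, not SAP's actual mechanism), the in-memory copy lags the source by up to one extraction interval, which is exactly the staleness I am wondering about:

```python
# Hypothetical sketch of a periodic delta extractor feeding an
# in-memory store. Names and mechanism are illustrative only.
source_db = {}          # stands in for the source application's tables
in_memory_store = {}    # stands in for the in-memory copy of the data
last_extracted_id = 0   # high-water mark for delta extraction

def write_to_source(record_id, value):
    """A business transaction lands in the source application first."""
    source_db[record_id] = value

def run_delta_extraction():
    """Copy only records written since the last run (the 'delta')."""
    global last_extracted_id
    for record_id in sorted(source_db):
        if record_id > last_extracted_id:
            in_memory_store[record_id] = source_db[record_id]
            last_extracted_id = record_id

write_to_source(1, "order A")
run_delta_extraction()
write_to_source(2, "order B")   # written after the last extraction run

# Until the next extraction, record 2 exists in the source but is
# invisible to any query against the in-memory store.
stale = set(source_db) - set(in_memory_store)
print(f"records awaiting extraction: {stale}")

run_delta_extraction()          # next interval: the copy catches up
```

If the extractor runs on a schedule, in-memory reports are only as fresh as that schedule; if it synchronises continuously, the staleness window shrinks but the extractor itself becomes the critical quality gate.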
Oliver Mainka made an impressive presentation of the capabilities of HANA at TechEd.