What is in-memory computing?
In the digital age, it is important to obtain and analyze data as quickly and easily as possible and to react to changes accordingly. That is why IT departments need to find new and faster ways to provide decision makers with up-to-date information and insights. Waiting for data to pass through a complex data model and data warehouse is no longer sufficient for decision makers. For them, what matters is what is happening in the present – at that exact moment – so that they can react to it in real time.
To achieve this goal, not only must events be analyzed in real time, but large amounts of data must be processed as well. This is where in-memory computing – computing that keeps data in main memory rather than on disk – comes into play. By moving data from the hard disk into main memory, IT departments can readily meet decision-makers' requirements.
With in-memory computing, it is possible to process large amounts of data in main memory and to deliver analysis and transaction results directly. SAP refers to this as massively parallel processing (MPP). Ideally, the data to be processed is generated in real time. To achieve and maintain this level of performance, in-memory computing follows a simple principle: data access is accelerated and data movement is minimized. Main memory is the fastest memory type that can hold large amounts of data, and access to data in main memory is roughly 100,000 times faster than access to a hard disk. Using in-memory technology can therefore significantly improve performance by making data available faster to reporting and analysis solutions or other applications.
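The speed difference described above can be illustrated with a toy Python sketch (not a rigorous benchmark): the same key-value lookup is served once from a dictionary held in main memory and once by re-scanning a file on disk. All names and the data set are illustrative.

```python
import os
import tempfile
import time

# In-memory copy: a plain dictionary held in RAM.
records = {f"key{i}": f"value{i}" for i in range(10_000)}

# Disk-backed copy: one tab-separated record per line.
path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "w") as f:
    for k, v in records.items():
        f.write(f"{k}\t{v}\n")

def lookup_memory(key):
    return records[key]          # hash lookup in main memory

def lookup_disk(key):
    with open(path) as f:        # scan the file on every access
        for line in f:
            k, v = line.rstrip("\n").split("\t")
            if k == key:
                return v

t0 = time.perf_counter()
m = lookup_memory("key9999")
t_mem = time.perf_counter() - t0

t0 = time.perf_counter()
d = lookup_disk("key9999")
t_disk = time.perf_counter() - t0

assert m == d == "value9999"
print(f"memory lookup: {t_mem:.6f}s, disk scan: {t_disk:.6f}s")
```

The exact timings depend on hardware and caching, but the in-memory lookup is consistently orders of magnitude faster than the disk scan.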
In the case of a power outage or a computer crash, there is no reason to fear that data will be lost, because in-memory databases operate under the so-called ACID principle (atomicity, consistency, isolation, and durability). These principles describe desired characteristics of data processing in a database. The requirements are as follows:
A transaction must be atomic: if any part of it fails, the entire transaction fails and leaves the database unchanged.
The consistency of the database must be ensured so that only valid data is written to it. If a transaction violates one of the consistency rules for any reason, the entire transaction is undone, and the database is restored to a state that satisfies these rules.
A transaction must run in isolation, so that concurrent transactions do not interfere with one another.
Finally, a transaction must be durable: once it has been committed to the database, its changes remain there.
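The four requirements above can be demonstrated with Python's built-in `sqlite3` module. SQLite is of course not SAP HANA, but its in-memory mode and transaction API illustrate the same guarantees – here, atomicity: a failing statement rolls back the whole transaction.

```python
import sqlite3

# An in-memory SQLite database with two account balances.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
con.commit()

# Atomicity: a transfer is all-or-nothing. The second statement
# violates the primary key, so the entire transaction is rolled back.
try:
    with con:  # opens a transaction; commits on success, rolls back on error
        con.execute("UPDATE accounts SET balance = balance - 30 "
                    "WHERE name = 'alice'")
        con.execute("INSERT INTO accounts VALUES ('alice', 0)")  # duplicate key
except sqlite3.IntegrityError:
    pass  # the failed transaction was undone as a whole

# Alice's debit was rolled back together with the failed insert.
balance = con.execute("SELECT balance FROM accounts "
                      "WHERE name = 'alice'").fetchone()[0]
print(balance)  # → 100
```

Using the connection as a context manager (`with con:`) is the idiomatic way to get this all-or-nothing behavior in `sqlite3`.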
The first three requirements pose no obstacle for in-memory technology. Durability, however, cannot be guaranteed if data is stored only in main memory, because this memory type is volatile: in a power outage, the data can easily be lost. To keep data safe, it must also be stored in non-volatile memory (for example, on a hard disk).
The storage medium used by the database (in this case, main memory) is divided into pages. When a transaction changes data, the affected pages are marked as modified and periodically written to non-volatile storage, such as a hard disk. In addition, the database log records all changes made by transactions: each committed transaction produces a log entry that is written to non-volatile memory. This ensures that all transactions are durable.
In-memory databases write the modified pages to persistent storage asynchronously, at periodic savepoints. The log, in contrast, is written synchronously: to achieve durability and satisfy all the requirements mentioned above (ACID), a transaction does not report success until its log entry has been written to persistent storage.
After a power outage, the database pages can therefore be restored from the last savepoint, and the database log is used to recover the changes that the savepoint did not capture. This ensures that the database is restored in memory to the same state it was in before the power failure.
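The savepoint-plus-log recovery scheme described above can be sketched in a few lines of Python. This is a simplified illustration, not HANA's actual implementation: the savepoint holds a periodic snapshot of the pages, each commit appends a log entry before it is acknowledged, and recovery replays only the log entries newer than the last savepoint. All class and variable names are illustrative.

```python
import json

class InMemoryStore:
    def __init__(self, savepoint, log):
        self.pages = {}              # volatile "main memory"
        self.savepoint = savepoint   # periodic page snapshot (non-volatile)
        self.log = log               # synchronous transaction log (non-volatile)

    def commit(self, key, value):
        # The log entry is persisted BEFORE the transaction is acknowledged.
        self.log.append(json.dumps({"key": key, "value": value}))
        self.pages[key] = value

    def write_savepoint(self):
        # Asynchronous in a real system; here done on demand.
        self.savepoint["pages"] = dict(self.pages)
        self.savepoint["log_pos"] = len(self.log)

    @classmethod
    def recover(cls, savepoint, log):
        store = cls(savepoint, log)
        store.pages = dict(savepoint.get("pages", {}))
        # Replay only the log entries written after the last savepoint.
        for entry in log[savepoint.get("log_pos", 0):]:
            record = json.loads(entry)
            store.pages[record["key"]] = record["value"]
        return store

# Simulated crash: the savepoint captured x=1, the later commit of y=2
# exists only in the log, and volatile main memory is lost.
savepoint, log = {}, []
db = InMemoryStore(savepoint, log)
db.commit("x", 1)
db.write_savepoint()
db.commit("y", 2)
del db  # power outage: main memory is gone

restored = InMemoryStore.recover(savepoint, log)
print(restored.pages)  # → {'x': 1, 'y': 2}
```

Note how the commit of `y` survives the crash even though it happened after the savepoint: the synchronously written log entry is enough to replay it during recovery.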
That is how an in-memory database works. Speed, consistency, and durability distinguish this technology from other database technologies. Learn more about the functions of S/4HANA and how it simplifies IT landscapes in the next article.
Translation by Gohar Zatrjan
Berg, Bjarne & Silvia, Penny (2013): Einführung in SAP HANA. 2nd rev. ed., Galileo Press, Bonn.