Data Platform & Data Management

Microsoft Fabric: Migration - Advantages & Chances?

Microsoft Fabric Is Now Available! Does It Make Sense To Migrate Now?

The new SaaS solution from Microsoft has the potential to change the world of data. But should you migrate now? If so, what is the best approach? A lakehouse or warehouse? And what must you consider with OneLake? Where are the current hurdles? And what about Azure Synapse? We’ll address all these aspects in this article.

First, a key message for all Azure Synapse customers: Synapse remains fully supported and is not being discontinued — Microsoft has no plans to do so. Even so, we’ll show you where it makes sense to look ahead.

Enterprise Data Warehouse and Agile SQL Data Mart – SAP BW on HANA can do both – the “Mixed Scenario”

Experience shows that companies running an SAP Business Warehouse have frequently pursued different approaches in parallel, leading to a parallel infrastructure that tends to be managed by individual departments rather than by IT. Solutions such as QlikView, SQL Server, Oracle and TM1 are widely used for this purpose, and in the right situation they do their job very well - otherwise they would not be so popular.

SAP BW on HANA – Does a Cache Still Make Sense in ABAP Routines?

With the launch of SAP BW on HANA in 2010, many established measures for improving performance in BW systems became obsolete. At the same time, the new platform raises many new questions. One very relevant question is whether it still makes sense to cache lookups in Advanced Business Application Programming (ABAP) routines. With HANA, the data is held in main memory in the database beneath the application server and is optimised for queries, while the lookups in routines are still executed on the application server. Whether a cache for ABAP routine lookups still pays off is therefore examined in detail in the following blog post:

For frequently recurring data, the answer is yes. If, for example, the attribute "continent" is to be read from the InfoObject "country", the overhead of accessing HANA for every single row - the SQL parser, the network, and so on - is too high: several technical layers sit between the ABAP program and the actual data and are traversed again on each call. However, if several joins between tables are required, or if the number of rows to be read is very large, the advantage tilts back towards the HANA database.
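The trade-off can be sketched with a simple memoizing cache. This is a minimal illustration in Python rather than ABAP; `fetch_continent_from_db` is a hypothetical stand-in for the round trip to HANA, and the country data is invented.

```python
# Minimal sketch of a routine-level lookup cache: only the first request per
# key goes to the "database", every repeat is a pure in-memory read.

def make_cached_lookup(fetch_from_db):
    """Wrap a per-key database fetch with an in-memory cache."""
    cache = {}

    def lookup(key):
        if key not in cache:          # cache miss: one round trip
            cache[key] = fetch_from_db(key)
        return cache[key]             # cache hit: no database access

    return lookup

# Hypothetical stand-in for the query against HANA.
CONTINENTS = {"DE": "Europe", "FR": "Europe", "JP": "Asia"}
db_calls = 0

def fetch_continent_from_db(country):
    global db_calls
    db_calls += 1                     # count round trips to show the saving
    return CONTINENTS[country]

lookup = make_cached_lookup(fetch_continent_from_db)
rows = ["DE", "DE", "FR", "DE", "JP", "FR"]   # one package with recurring values
continents = [lookup(c) for c in rows]
# 6 rows processed, but only 3 database round trips (one per distinct country)
```

The same shape applies in an ABAP routine: a hashed internal table as the cache, filled on first access per key.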

In my experience with customers handling large data volumes, an ABAP cache can as much as triple the speed of DTP execution in an SAP BW on HANA system. Of course, this always depends on the situation (e.g. data distribution and homogeneity of the data) as well as on the infrastructure in place - and all of this without even using shared memory. With shared memory, only a single database request is made for all data packages together, i.e. once per load; in day-to-day handling, however, it is unnecessarily complicated.

SAP BW - Optimization With Distinct Count, or “How To Count My Clients”

Before HANA, exception aggregations in SAP BW frequently posed a runtime challenge, and this also applied to the distinct count operation. It is used, for example, to determine the number of distinct customers from order data.

Earlier, distinct count operations were often implemented as follows: a calculated key figure with a fixed value of 1 was created, and the values were then summed via exception aggregation. What worked well in environments without HANA, however, now causes the pushdown to stop functioning, which - depending on the settings - can prevent calculations from being performed optimally.

HANA environments therefore call for a different approach, so let us look next at how to implement distinct count optimally with BW on HANA or BW/4HANA.

Performance Lookups in BW Transformations - Initial Aggregation of Selected Data

We now know how to select the correct data, which types of tables to use for lookups, and how to ensure that we only read through relevant datasets.

In practice, however, it is often the case that you must select a large or unpredictable amount of data from the database, which then has to be aggregated according to specific rules before it can be read with high performance.
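The idea of aggregating a broad selection once, up front, and then reading from the compact result can be sketched as follows. This is a Python stand-in for pre-aggregating lookup data in a BW transformation; the field names and amounts are invented.

```python
from collections import defaultdict

# Broad selection from the database: (customer, amount) line items.
selected_rows = [
    ("C100", 10.0),
    ("C100", 25.0),
    ("C200", 5.0),
    ("C100", 15.0),
    ("C200", 20.0),
]

# Aggregate once, up front, into a keyed structure ...
totals = defaultdict(float)
for customer, amount in selected_rows:
    totals[customer] += amount

# ... so that each subsequent per-row lookup is a single keyed read
# instead of a scan over all line items.
c100_total = totals["C100"]   # aggregated total for one customer
```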

Performance Lookups in BW Transformation – Finding the Relevant Records

Having dealt with the relevant selection techniques and the various types of internal tables, the most important performance optimisations for the lookups in our BW transformations are now in place.

However, this does not fully cover the topic: so far we have assumed that only the relevant records are searched for in our lookup tables. But how can we ensure that?
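One common answer is to restrict the lookup selection to the keys that actually occur in the current data package (in ABAP, for example, via a `FOR ALL ENTRIES` clause on the SELECT). A Python sketch of the idea, with invented table contents:

```python
# The current data package references only some materials.
package = [{"material": "M1"}, {"material": "M3"}, {"material": "M1"}]

# Full lookup source - in reality this would live in the database.
material_texts = {
    "M1": "Bolt",
    "M2": "Nut",
    "M3": "Washer",
    "M4": "Screw",
}

# Collect the distinct keys occurring in the package ...
needed = {row["material"] for row in package}

# ... and fetch only those records instead of the whole table.
lookup_table = {k: v for k, v in material_texts.items() if k in needed}
# lookup_table now holds 2 of 4 records
```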

High Performance Lookups in BW Transformations - Selecting the Right Table Type

This is perhaps the most fundamental of all ABAP questions, and not only in the context of high-performance lookups: it arises as soon as you do anything at all in ABAP.
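The three ABAP table types correspond to three access patterns: a standard table read without a sort is a linear scan, a sorted table allows a binary search, and a hashed table gives constant-time lookups on a unique key. A rough illustration with Python primitives (the data is invented):

```python
import bisect

records = [("K%04d" % i, i) for i in range(10_000)]  # already sorted by key

# Standard table, unsorted read: linear scan, O(n).
def scan(key):
    for k, v in records:
        if k == key:
            return v
    return None

# Sorted table: binary search on the key, O(log n).
keys = [k for k, _ in records]
def binary(key):
    i = bisect.bisect_left(keys, key)
    return records[i][1] if i < len(records) and keys[i] == key else None

# Hashed table: hash lookup, O(1) on average - ideal for unique-key lookups.
hashed = dict(records)

# All three return the same value; they differ only in cost per access.
assert scan("K0042") == binary("K0042") == hashed["K0042"] == 42
```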

High Performance Lookups in BW Transformations - The Use of Internal Tables vs. SELECTS From the HANA Database

In this series we are focusing on implementation methods for lookups where every data record in a table is to be checked. The larger our data packages and lookup tables are, the more important high-performance implementation becomes.
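The core trade-off - one SELECT per record versus one bulk SELECT into an internal table before the loop - can be made visible with a small Python stand-in; the table contents and query counts are purely illustrative.

```python
db = {f"C{i}": f"Name {i}" for i in range(1000)}   # stands in for a HANA table
queries = 0

def select_single(key):
    global queries
    queries += 1            # every call is one round trip to the database
    return db.get(key)

package = [f"C{i % 50}" for i in range(500)]   # 500 rows, 50 distinct keys

# Variant 1: per-row SELECT - one round trip for every record.
names = [select_single(k) for k in package]
per_row_queries = queries

# Variant 2: one bulk read into an "internal table", then in-memory reads.
queries = 0
internal = dict(db)         # one bulk SELECT of the lookup data
queries += 1
names2 = [internal.get(k) for k in package]
bulk_queries = queries      # one round trip, regardless of package size
```

The larger the package, the more the bulk read pays off - provided the lookup data fits into main memory, which is exactly the balance this series examines.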

High Performance Lookups in BW Transformations in Practice - Introduction

Performance optimisations are not set in stone: an optimisation that worked extremely well at one company, with its particular system architecture and data volume, will not necessarily work equally well elsewhere. In other words, individual solutions are required. Fundamentally, however, the key is always to strike the right balance between main memory and database capacity, and between implementation complexity and maintainability - with processing time as the constant focus.