Blog

Here in our blog posts you will find tangible know-how, tips & tricks, and the points of view of our experts.

Enterprise Data Warehouse and Agile SQL Data Mart – SAP BW on HANA can do both – the “Mixed Scenario”

Experience shows that companies using an SAP Business Warehouse frequently pursue different approaches in parallel, leading to the development of a parallel infrastructure. These solutions tend to be managed by individual departments rather than by IT. Products such as QlikView, SQL Server, Oracle and TM1 are widely used, and in the right situation they fulfil their tasks very well - otherwise they would not be so popular.

Read more
SAP BW on HANA – Does a Cache Still Make Sense in ABAP Routines?

With the launch of SAP BW on HANA in 2010, many established measures for enhancing performance in BW systems became obsolete. At the same time, the new platform raises many new questions. One very relevant question is whether it still makes sense to use a cache in ABAP (Advanced Business Application Programming) routines. With HANA, the data in the database beneath the application server is held in main memory and optimised for queries; the lookups in routines, however, are by design executed on the application server, so every request has to cross that boundary. The following blog post therefore comments in detail on whether a cache for database requests in ABAP routines still pays off:

For frequently recurring data, the answer is yes. If, for example, the attribute "continent" is to be read for the InfoObject "country", the overhead of accessing HANA (SQL parser, network, etc.) for every single row quickly adds up: several technical layers between the ABAP program and the actual data are traversed over and over again. However, if several joins between tables are required, or if the number of rows to be read is very large, the advantage tilts towards the HANA database again.

In my experience with customers handling large data volumes, a cache in ABAP can in some cases triple the speed of DTP execution in an SAP BW on HANA system. Of course, this always depends on the situation (e.g. data distribution, homogeneity of the data) as well as the infrastructure in place - and all of this without even using shared memory. With shared memory, only a single database request is issued for all data packages of a load combined; in practice, however, it is unnecessarily complicated to handle.
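The underlying pattern is easy to sketch. Below is a minimal illustration in Python of the kind of lookup cache meant here; in BW this would be an ABAP routine with a hashed internal table, and the table and column names are made up:

```python
# Minimal sketch of the lookup-cache pattern discussed above.
# In BW this would be an ABAP routine with a hashed internal table;
# the table and column names here are hypothetical.
_cache = {}

def continent_for(country, cursor):
    """Return the continent for a country, hitting the database only once per key."""
    if country not in _cache:
        cursor.execute(
            "SELECT continent FROM md_country WHERE country = ?",
            (country,),
        )
        row = cursor.fetchone()
        _cache[country] = row[0] if row else ""
    return _cache[country]

# Inside the transformation loop, repeated keys are served from memory:
# for rec in data_package:
#     rec["continent"] = continent_for(rec["country"], cursor)
```

The point is simply that each distinct key crosses the network and SQL-parser layers once, instead of once per row.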

Read more
The Highlights of Spark Summit 2016 in Brussels

I am not writing this blog post in a quiet minute at our b.telligent offices, but live from the Spark Summit in Brussels. For data scientists, Spark offers an enormous range of machine learning procedures, both classical ones for static data sets and ones for streaming data in real time. Anyone with practical experience of the Python library sklearn will immediately feel at home, as it served as the model for Spark's ML API.
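To illustrate that resemblance, here is a minimal spark.ml sketch with made-up data and column names - sklearn users will recognise the Pipeline, fit and transform vocabulary immediately:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sklearn-lookalike").getOrCreate()

# Hypothetical training data with two features and a binary label.
train = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.1, -1.0, 1.0), (0.1, 1.3, 0.0)],
    ["f1", "f2", "label"],
)

# As in sklearn, stages are chained in a Pipeline and trained via fit().
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(maxIter=10)
model = Pipeline(stages=[assembler, lr]).fit(train)

# transform() scores new data, analogous to sklearn's predict().
model.transform(train).select("features", "prediction").show()
```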

Read more
Analysis or App – What Does a Data Science Team Actually Produce?

A particularly productive current discussion revolves around the question of what a data science team should actually, sensibly, produce. The two possibilities are quickly named: on the one hand, there is the "analysis", i.e. a one-off, rather static end result; in this context, most people immediately think of a PowerPoint presentation. On the other hand, there is the "app", i.e. an interactive end product continuously supplied with fresh data, frequently in the form of a website or a mobile app.

Read more
How Stationary Trade Catches up to Online Shops With PoS Tracking Data

Due to the increasing challenges of digitalisation, e-commerce has been steadily overtaking stationary trade. According to the IfH Institut in Cologne, this trend will intensify in the coming years: parallel to declining sales in stationary trade, sales from online trade will increase to approximately 77 billion euros by 2020.

Read more
The Local Connection Arrow by Longview BI (formerly arcplan)

The local connection arrow enables the restriction of structures without limiting the data at the same time. This function has existed for many years, but it is easily forgotten and completely unknown to many application architects and developers. This blog post is therefore a refresher - or an introduction - to this function.

Note: arcplan Information Services GmbH was renamed after the merger with Longview and is now Longview Europe GmbH.

Read more
The Advanced Data Store Object and Its Tables

With SAP BW on HANA comes the Advanced DataStore Object (ADSO), with new table structures and functions. Compared to the InfoProviders used on SAP BW systems not based on HANA, ADSOs can change their behaviour without losing data already stored. This also includes a modification of the table contents if the type is changed.

An ADSO always consists of three tables, which are filled and processed depending on the ADSO type. Tables not used by the chosen type are created by the system regardless. Their use in routines, HANA expert scripts etc. is therefore technically possible, but not always appropriate.
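For orientation, a small illustration of the naming convention as I have seen it on BW 7.4/7.5 systems; the ADSO name "SALES" is hypothetical, and the suffix mapping should be verified on your release:

```python
# Generated tables for a hypothetical ADSO "SALES" (standard type with change log).
# Suffix convention as observed on BW 7.4/7.5; verify on your own system.
ADSO_TABLES = {
    "/BIC/ASALES1": "inbound queue - new, not yet activated data",
    "/BIC/ASALES2": "active data - the activated, reportable state",
    "/BIC/ASALES3": "change log - deltas for downstream targets",
}

for table, role in ADSO_TABLES.items():
    print(f"{table}: {role}")
```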

Read more
The Requirement For Customer-Oriented Data Warehousing And The Opportunities It Creates

The role of the customer

The central role of the customer for the strategic alignment of businesses has been discussed in academic research for decades.

"It costs much more to acquire a customer than it does to keep a customer. That is why a sale to a customer is good, but a relationship with a customer is great." [1]

"Personal data are the fuel of the modern economy" [2]

"In a global information-based economy, data about customers are one of the most important sources for competitive advantage." [3]

Read more
Handling SCD With the Oracle Data Integrator 12

Now that the end of support for Oracle Warehouse Builder (OWB) has been officially announced, the Oracle Data Integrator (ODI) is the ETL tool of choice in the Oracle world. Development has progressed to version 12, which brought a few modifications and improvements. The GUI has become ever more similar to OWB, and some possibilities are now available that OWB did not offer in this form. In this blog entry, we deal with the implementation of slowly changing dimensions in ODI.
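As a reminder of the pattern such a tool implements: a generic SCD Type 2 upsert, sketched in Python with SQLite purely for illustration. The table, columns and logic are simplified assumptions; ODI's knowledge modules generate considerably more elaborate statements.

```python
import sqlite3  # stand-in for an Oracle connection; the SQL pattern is the point

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_id INTEGER,   -- business key
    city        TEXT,      -- SCD2-tracked attribute
    valid_from  TEXT,
    valid_to    TEXT,
    is_current  INTEGER
);
""")

def scd2_upsert(conn, customer_id, city, today):
    # Step 1: close the current version if the tracked attribute changed.
    conn.execute(
        """UPDATE dim_customer
           SET valid_to = ?, is_current = 0
           WHERE customer_id = ? AND is_current = 1 AND city <> ?""",
        (today, customer_id, city),
    )
    # Step 2: insert a new current version if none exists.
    cur = conn.execute(
        "SELECT 1 FROM dim_customer WHERE customer_id = ? AND is_current = 1",
        (customer_id,),
    )
    if cur.fetchone() is None:
        conn.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, '9999-12-31', 1)",
            (customer_id, city, today),
        )

scd2_upsert(conn, 42, "Munich", "2016-01-01")
scd2_upsert(conn, 42, "Berlin", "2016-06-01")  # closes the Munich row, adds Berlin
```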

Read more
Best Practice for SQL Statements in Python

Thanks to a binding interface specification for database connectors, the "Python Database API Specification v2.0" (PEP 249), all current connectors are built so that database connections, queries and data transactions can be executed using the same commands. Results are returned in more or less the same format everywhere - although it is in this last respect that the most severe deviations from the intended standardisation occur.
But this should not scare anyone off from using Python scripts as a flexible means of automating database operations.
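What that uniform interface looks like in practice: a minimal PEP 249 sketch using the standard-library sqlite3 module. Any compliant connector (psycopg2, cx_Oracle, ...) exposes the same connect/cursor/execute/fetch calls; the table here is made up.

```python
import sqlite3  # any PEP 249 connector works along the same lines

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE kpi (name TEXT, value REAL)")

# Always pass values as parameters instead of formatting them into the SQL string.
cur.executemany("INSERT INTO kpi VALUES (?, ?)", [("revenue", 1.2), ("churn", 0.05)])
conn.commit()

cur.execute("SELECT name, value FROM kpi WHERE value > ?", (0.1,))
for name, value in cur.fetchall():  # result rows come back as tuples
    print(name, value)

conn.close()
```

One caveat: the parameter placeholder style ("?", "%s", ":name") varies between connectors - exactly the kind of deviation from the standard alluded to above.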

Read more
Development Of A Powerful Data Science Team

Data science has undergone increasing professionalisation and standardisation in recent years. The often intrinsically motivated data tinkerer, who fills the "analysis" niche in his business with very deep company-internal data and process know-how, is reaching his limits.

Increasing demands, especially in the course of a stronger customer focus across all industries, force businesses to professionalise their data science structures: this concerns knowledge, the available data sources and their preparation, and the data science products already used in the business.

Read more
From SAS to R and Back: Transferring SAS Data Into an R System

SAS and R are closely related topics: both are popular tools for people like us who want to solve problems from the world of statistics and machine learning on (more or less) large data volumes. Despite this apparent proximity, there are few touchpoints between the two communities, and only a few people work with both tools. As passionate "outside the box" thinkers, we regret that, and with this blog article we want to start a mini-series dealing, in loose order, with topics that connect the two worlds. This first article covers the possibilities for exchanging data between the systems. As there are numerous ways, the article is limited to the transfer from SAS to R; the opposite direction will follow in a later article.

Read more