Data Platform & Data Management

Data Warehouse Automation (Part 2)

A lot of what has hitherto involved manual programming can be replaced, or at least greatly simplified, by DWA tools. Exactly which elements of development can be automated varies greatly from tool to tool. At one end of the spectrum there are pure code generators, which automatically generate database structures and ETL/ELT processes ("design time"). At the other end, there are extensive integration suites that not only generate but also manage the entire DWH lifecycle, from provisioning the data in the sources right through to the data marts ("run time").
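The design-time idea can be illustrated with a minimal, hypothetical metadata-driven generator: table definitions live as metadata, and the DDL is generated rather than written by hand. The metadata schema and all names below are illustrative assumptions, not the API of any particular DWA tool.

```python
# Minimal sketch of a design-time code generator:
# table metadata in, CREATE TABLE DDL out.
# Metadata layout and naming are illustrative assumptions.

def generate_ddl(table: dict) -> str:
    """Render a CREATE TABLE statement from a metadata record."""
    cols = ",\n  ".join(
        f"{c['name']} {c['type']}" for c in table["columns"]
    )
    return f"CREATE TABLE {table['schema']}.{table['name']} (\n  {cols}\n);"

customer = {
    "schema": "stage",
    "name": "customer",
    "columns": [
        {"name": "customer_id", "type": "INTEGER"},
        {"name": "full_name", "type": "VARCHAR(200)"},
        {"name": "load_ts", "type": "TIMESTAMP"},
    ],
}

print(generate_ddl(customer))
```

The point of the sketch: once structures are described as metadata, hundreds of staging tables and their load processes can be generated consistently instead of hand-coded one by one.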

During the development phase there is a series of tasks in which a DWA tool can provide support. The following focuses in particular on the areas of reverse engineering and compatibility, analysis, implementation and framework conditions.

Data Warehouse Automation (Part 1)

The automation of constantly recurring tasks is one of the most fundamental principles of the modern world. Henry Ford recognised the resulting advantages, such as a lower error rate, shorter production cycles and consistent, uniform quality. These same advantages can be applied to data warehouse initiatives.

Data Mesh: b.telligent's Considerations and Service Portfolio

The data mesh is a current technical and organizational concept that enables greater business proximity and better scalability for large organizations involved in data & analytics. Implementing it consistently proves revolutionary and requires change management.

Introduction To Continuous Integration in the Development of Data Warehouse Systems

Constantly emerging data sources and areas of application continue to drive the steady expansion of data storage systems such as the DWH, the data lake and the analytics platform. Data management processes must also keep pace with growing requirements. It is not uncommon for small BI applications to grow into major initiatives in which several development teams participate. In many industries, the situation is exacerbated by the need to make adjustments faster than ever before. This demands short reaction times and high flexibility from the teams and, not least, from the infrastructure.

Three Basics of BI Quality Management

In business intelligence, "quality" is always a key topic, but one dealt with in a variety of ways. This is due to the lack of a consistent understanding in this area and the consequent absence of a standardized approach to BI quality.

That this is becoming increasingly problematic is also signalled by the renaming of the current Gartner Magic Quadrant: data quality "tools" are now called data quality "solutions". The reason: while many manufacturers now proclaim themselves bearers of "quality", the common denominator is usually vanishingly small: to find, understand and solve problems. To their credit, this is still the source of all technical progress, according to the philosopher Karl Popper. However, it does not necessarily have anything to do with data quality per se. In this article, I will therefore sort out and structure the topic of BI quality, and accordingly call the whole thing "BI quality management". This leaves plenty of scope for "total BI quality management", as applied and taught in the engineering disciplines for some time now.

The Advanced Data Store Object and Its Tables

With SAP BW on HANA comes the ADSO (Advanced DataStore Object) with new table structures and functions. Compared to the InfoProviders used on SAP BW systems not based on HANA, ADSOs can change their modeling properties without losing data that has already been loaded. This also includes adjusting the contents of the tables if the type is changed.

An ADSO always consists of three tables, which are filled and processed depending on the ADSO type. Unused tables are created by the system regardless. Their use in routines, HANA expert scripts etc. is therefore possible, but not always appropriate.
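As a rough orientation, the three tables commonly follow the /BIC/A&lt;name&gt;&lt;suffix&gt; naming convention (1 = inbound queue, 2 = active data, 3 = change log). The following sketch derives the names from an ADSO's technical name; treat the convention as an assumption and verify it on your own system.

```python
# Sketch: derive the three ADSO table names from an ADSO's technical name,
# following the common /BIC/A<name><suffix> convention
# (1 = inbound queue, 2 = active data, 3 = change log).
# The convention is an assumption; check it on your BW system.

def adso_tables(adso_name: str) -> dict:
    base = f"/BIC/A{adso_name.upper()}"
    return {
        "inbound": f"{base}1",
        "active_data": f"{base}2",
        "change_log": f"{base}3",
    }

print(adso_tables("sales"))
```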

The Lakehouse Approach – Cloud Data Platform on AWS

In our free series of online events under the banner of Data Firework Days, we introduced you to the b.telligent reference architecture for cloud data platforms. Now we'd like to use this blog series to take a closer look at the cloud and the individual providers of cloud services. In the first part of this three-part series, Blueprint: Cloud Data Platform Architecture, we looked at the architecture of cloud platforms in general.

Read part 1 here: Blueprint: Cloud Data Platform Architecture

The Essence of Data – Distillation in SQL

Have you ever stumbled across the following problem? Your database contains a table of versions, and you notice that there are almost no relevant changes from one version to the next, which means you have far too many rows. Let us show you how to solve this problem easily.
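The idea behind such a "distillation" can be sketched outside SQL as well: walk through the versions in order and keep a row only if its relevant attributes differ from the previously kept version. The column names below are illustrative assumptions, and this is a conceptual sketch rather than the article's SQL solution.

```python
# Sketch of the "distillation" idea: from an ordered list of versions,
# keep only those whose relevant attributes differ from the previous
# kept version. Column names are illustrative assumptions.

def distill(versions, relevant=("status", "amount")):
    """Drop versions that change nothing in the relevant attributes."""
    kept, previous = [], None
    for row in versions:  # assumed sorted by valid_from
        key = tuple(row[c] for c in relevant)
        if key != previous:
            kept.append(row)
            previous = key
    return kept

history = [
    {"valid_from": "2024-01-01", "status": "open",   "amount": 100},
    {"valid_from": "2024-01-02", "status": "open",   "amount": 100},  # no change
    {"valid_from": "2024-01-03", "status": "closed", "amount": 100},
]

print(distill(history))  # the unchanged second row disappears
```

In SQL, the same comparison against the previous row is typically done with a window function such as LAG before filtering.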

The Effective Use of Partition Pruning for the Optimisation of Retrieval Speed (Part 3)

The Devil in the Detail - Which Details Decide the Effectiveness of Partition Pruning?

In the previous article of this series, a practical and effective approach to using partition pruning was explained in detail. This easy-to-implement method can significantly optimize query times. However, as is often the case, some details need to be taken into account to ensure the efficient and effective use of the presented method. In this regard, we echo Theodor Fontane, who observed as far back as the 19th century that the magic always lies in the details.
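The principle behind partition pruning can be shown with a toy model: data is stored in partitions keyed by, say, month, and a predicate on the partition key lets whole partitions be skipped without reading them. The structure below is purely illustrative, not how any specific database engine is implemented.

```python
# Toy illustration of partition pruning: rows are stored per partition
# (here keyed by month); a predicate on the partition key lets whole
# partitions be skipped without reading their rows. Purely illustrative.

partitions = {
    "2024-01": [{"id": 1}, {"id": 2}],
    "2024-02": [{"id": 3}],
    "2024-03": [{"id": 4}, {"id": 5}],
}

def scan(partitions, month_predicate):
    """Read only the partitions whose key satisfies the predicate."""
    scanned = [key for key in partitions if month_predicate(key)]
    rows = [row for key in scanned for row in partitions[key]]
    return scanned, rows

scanned, rows = scan(partitions, lambda m: m >= "2024-02")
print(scanned)  # '2024-01' is pruned and never read
```

The "details" the article alludes to concern when such pruning can actually happen: in real databases it typically requires a filter expressed directly on the partition key, which functions or type conversions on that key can silently prevent.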