The road from a successful proof of concept (PoC) for a data-analysis pipeline to production often proves long. Ibis makes it possible to simplify this process and thus deliver value faster.

After a data-analysis pipeline has been successfully developed locally in Python, the code often needs to be rewritten before it can run in production. But does it really have to be that way? Created by Wes McKinney, the creator of the pandas library, the Python Ibis library offers a compelling way to unify data processing between the development and production environments, enabling analytics teams to reach production faster. This blog post shows how it works.

Development of reporting/analytics pipelines

Reporting & analytics pipelines are an important part of a data-driven enterprise. To build such pipelines, teams often use isolated local development environments to produce results as quickly as possible. Subsequently, however, they face the challenge of transferring the pipelines to production systems. The problem: code often needs to be rewritten before it can run in a data warehouse, for example.

One reason for this is the use of different technologies for data processing in the development and production environments. These differences lead to the following challenges:

  1. The development team needs additional knowledge of the technologies used in the production environment.
  2. As a result, additional or different staff are needed once initial development is complete. Limited staff availability can therefore delay projects.
  3. Errors or unwanted changes may creep in when the code is rewritten. This can cause a loss of confidence among stakeholders.

The Python Ibis library provides a solution: it unifies data processing between the development and production environments. Code written with Ibis runs without modification in a local environment as well as on databases and data warehouses.

How does Python Ibis work?

The first step in using Ibis is to connect to a data source. This could be a pandas DataFrame in a local development environment, or a database or data warehouse (DWH) table in a production environment.

Illustration of an Ibis connection to a local SQLite database
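To make this concrete, a minimal sketch of such a connection might look like this (assuming the Ibis SQLite backend is installed; the database file and table name are placeholders):

```python
import ibis

# Connect Ibis to a local SQLite database file (placeholder name).
con = ibis.sqlite.connect("local_dev.db")

# Obtain a lazy reference to a table; no data is loaded yet.
customers = con.table("customers")
```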


The data transformation logic can then be written using the Ibis API. Ibis generates code for the relevant data backend, such as pandas, Spark or BigQuery; the backends remain responsible for executing it. As with other big data frameworks, the data transformations are executed lazily, i.e. only when the results are actually needed.
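Continuing the placeholder example above, a sketch of such transformation logic might look as follows; the segment and churned columns are assumptions, with churned as a 0/1 indicator:

```python
# Build an expression: churn rate per customer segment.
# This only constructs a query plan; nothing runs yet.
churn_by_segment = customers.group_by("segment").aggregate(
    churn_rate=customers.churned.mean()
)

# Execution is triggered explicitly; the connected backend
# (SQLite here, BigQuery in production) runs the query and
# returns a pandas DataFrame.
result = churn_by_segment.execute()
```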


Illustration of a local Ibis connection & unit testing of Ibis code
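One benefit of this setup is that the same expression can be unit-tested against small in-memory data. The following is a sketch, assuming the pandas backend available in Ibis at the time of writing; all names are illustrative:

```python
import pandas as pd
import ibis

def churn_by_segment(customers):
    # The same expression is reused unchanged in production.
    return customers.group_by("segment").aggregate(
        churn_rate=customers.churned.mean()
    )

def test_churn_by_segment():
    df = pd.DataFrame({"segment": ["A", "A", "B"], "churned": [1, 0, 1]})
    con = ibis.pandas.connect({"customers": df})
    result = churn_by_segment(con.table("customers")).execute()
    rate_a = result.loc[result.segment == "A", "churn_rate"].iloc[0]
    assert rate_a == 0.5
```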

Use case: churn project with the Ibis library on Google Cloud Platform (GCP)

Suppose an advanced analytics team is working on a new pipeline for its marketing department. The department wants to receive daily metrics on the customer churn rate for key customer segments in order to better manage its anti-churn campaign. First, the analytics team builds a minimum viable product using the Ibis library in Jupyter notebooks on Vertex AI (Google- or user-managed VMs with preinstalled JupyterLab), where the data is stored locally.

Once the pipeline has reached the quality necessary for production, it is sufficient to replace the connection to the local data source with the corresponding tables in the data warehouse – BigQuery in this case. This quick and easy transfer of the pipeline with the Ibis library allows the team to deliver value to the marketing department faster.
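In code, the switch can be as small as swapping the connection object, as sketched below; the project and dataset names are placeholders, and the BigQuery backend package is assumed to be installed:

```python
import ibis

# Development: local SQLite file (as above).
# con = ibis.sqlite.connect("local_dev.db")

# Production: point the same pipeline at BigQuery instead.
con = ibis.bigquery.connect(
    project_id="my-gcp-project",  # placeholder
    dataset_id="marketing_dwh",   # placeholder
)

customers = con.table("customers")
# The transformation logic that follows stays unchanged.
```

Because the expressions themselves never mention the backend, everything downstream of the connection stays identical.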

Illustration of Ibis code first tested locally and then executed in production mode against a DWH

So much for the basics. In the second part of this blog series, I'll explain how you can set up Ibis on GCP Vertex AI. I will also use an example to show how easily a pipeline written with Python Ibis can be switched from a local data source to a DWH.


If you've worked with Python Ibis before, I look forward to your feedback. If you have any questions about using Ibis, my colleagues and I will be happy to assist you with our technological expertise and experience.


Your Contact
Sergej Kaiser
Senior Consultant
Sergej is convinced that quality is what gives data its greatest value in companies. He is therefore keen to push the topics of data testing and functional data engineering further.
#DataEngineering #DataQuality #FunctionalDataEngineering