Schedule - PGConf.EU 2018

Data pipelines with PostgreSQL and Kafka

Date: 2018-10-26
Time: 11:50–12:40
Room: Berlin
Level: Intermediate

Apache Kafka is a high-performance open-source stream-processing platform for collecting and processing large volumes of messages in real time. It's used in a growing number of data pipelines to handle events such as website click streams, transactions and other telemetry in real time and at scale. Kafka's core benefit is the clean decoupling of event-producing and event-consuming logic, which makes it easy to update event-processing logic and add new components while maintaining a clean architecture.
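The decoupling can be sketched with a minimal producer: it publishes serialized events to a topic and knows nothing about who consumes them. This is an illustrative sketch only, assuming the kafka-python client library, a broker on localhost, and a made-up "clicks" topic and event shape (none of these come from the talk):

```python
# Hypothetical producer sketch; topic name and event fields are illustrative.
import json


def encode_event(event: dict) -> bytes:
    """Serialize an event to JSON bytes for Kafka (keys sorted for stability)."""
    return json.dumps(event, sort_keys=True).encode("utf-8")


def main():
    # Requires the kafka-python package and a broker at localhost:9092.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=encode_event,
    )
    # The producer publishes and moves on; consumers can be added, removed,
    # or rewritten later without touching this code.
    producer.send("clicks", {"user_id": 42, "path": "/pricing"})
    producer.flush()


if __name__ == "__main__":
    main()
```

Because producers only agree with consumers on the topic name and message format, new processing components can subscribe to the same stream without any change on the producing side.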

This session focuses on connecting Kafka and PostgreSQL to automatically update a relational database with incoming Kafka events, allowing you to use PostgreSQL's powerful data aggregation and reporting features on the live data stream.
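One way to wire this up is a small consumer loop that decodes each Kafka message and inserts it into a PostgreSQL table. The sketch below is an assumption-laden illustration, not the session's implementation: it assumes the kafka-python and psycopg2 client libraries, a local broker and database, and invented topic, table, and column names:

```python
# Hypothetical Kafka-to-PostgreSQL sink; topic, table, and columns are
# illustrative assumptions, not taken from the talk.
import json


def decode_event(raw: bytes) -> tuple:
    """Turn a JSON-encoded click event into an INSERT parameter tuple."""
    event = json.loads(raw)
    return (event["user_id"], event["path"])


INSERT_SQL = "INSERT INTO clicks (user_id, path) VALUES (%s, %s)"


def main():
    # Requires kafka-python and psycopg2, a broker at localhost:9092,
    # and a PostgreSQL database named "analytics" with a clicks table.
    from kafka import KafkaConsumer
    import psycopg2

    consumer = KafkaConsumer("clicks", bootstrap_servers="localhost:9092")
    conn = psycopg2.connect(dbname="analytics")
    with conn.cursor() as cur:
        for message in consumer:
            cur.execute(INSERT_SQL, decode_event(message.value))
            conn.commit()  # per-event commit for clarity; batch in practice


if __name__ == "__main__":
    main()
```

Once events land in a table, ordinary SQL aggregation and reporting queries run directly against the live stream's data; in production setups a connector framework such as Kafka Connect is often used instead of a hand-rolled loop.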

After the session you'll have a clear overview of how Kafka can help untangle the mess of traditional data processing architectures and how it's used in production by different enterprises. You'll also get an update on why SQL is still relevant and what makes PostgreSQL a powerful database engine for OLTP, OLAP and time-series workloads.


Hannu Valtonen