<?xml version='1.0' encoding='utf-8'?>
<schedule><version>Firefly</version><conference><title>PGConf.DE 2026</title><start>2026-04-21</start><end>2026-04-22</end><days>2</days><baseurl>https://www.postgresql.eu/events/pgconfde2026/schedule/</baseurl></conference><day date="2026-04-21"><room name="Saal A1"><event id="7751"><start>09:00</start><duration>00:10</duration><room>Saal A1</room><title>Opening</title><abstract /><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7751/</url><track>General</track><persons><person id="30">Andreas Scherbaum</person></persons></event><event id="7237"><start>09:10</start><duration>00:45</duration><room>Saal A1</room><title>Zeilensperren in PostgreSQL: eine anatomische Betrachtung</title><abstract>Everyone knows about row locks already, right? They prevent conflicts during data modifications, and you take them with `UPDATE`, `DELETE`, and `SELECT ... FOR UPDATE`.

This talk shows that there is much more to know about row locks:

- What levels of row locks exist, and what are they good for?
- How does PostgreSQL use row locks to preserve the consistency of foreign keys?
- Where does PostgreSQL store row locks?
- How can you examine the existing row locks on a table?
- What do row locks look like in `pg_locks`?
- What the heck is a "MultiXact", what is it good for, and what does it have to do with row locks?

Anyone who develops or maintains a transaction-heavy PostgreSQL database should know these things.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7237/</url><track>Internals</track><persons><person id="191">Laurenz Albe</person></persons></event></room><room name="Saal A2"><event id="7269"><start>09:10</start><duration>00:45</duration><room>Saal A2</room><title>pgBackRest in HA setups: deployment patterns that work</title><abstract>Having a solid recovery plan is essential, because in a real outage, just having a backup is not enough.

pgBackRest is a powerful and flexible backup tool that can fit many different setups: using a dedicated backup host, taking backups from standby servers, or sending data straight to the cloud with S3, Azure, or Google Cloud. Yet many users rely on only one of these options and miss out on what pgBackRest can really do.

In this talk, we'll go beyond the basics and look at several pgBackRest deployment patterns that can be combined for better resilience, performance, and cost efficiency. We'll discuss setups that mix backup hosts and cloud storage, make use of multiple standbys, and keep things running smoothly in High-Availability environments.

Based on real-world experience deploying this "undocumented magical solution" in highly critical systems, this session will show you how to make the most of pgBackRest’s flexibility and design a resilient backup setup that fits your HA environment perfectly.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7269/</url><track>DBA</track><persons><person id="440">Stefan Fercot</person></persons></event></room><room name="Saal B"><event id="7397"><start>09:10</start><duration>00:45</duration><room>Saal B</room><title>Achieving Reliable Application Transactions in Multi-Region PostgreSQL: Real-World Challenges and Solutions</title><abstract>Global businesses require multi-region databases to achieve two critical goals: boosting application resiliency and minimizing latency for users across different geographies. Yet, the dynamic nature of data introduces significant hurdles in designing these systems—particularly in coordinating updates across distributed regions where the level of consistency, whether strong or eventual, deeply impacts user experience and data correctness. We’ll also explore how concurrency control models, such as optimistic or pessimistic approaches, influence transaction coherency in high-throughput, distributed environments. Additionally, idempotent transaction patterns will be discussed to ensure reliable retries and fault tolerance in network-partitioned systems. 
This session will remain vendor agnostic, focusing purely on architectural principles and design best practices applicable across platforms.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7397/</url><track>Developer</track><persons><person id="1435">Yann Allandit</person></persons></event></room><room name="Other"><event id="7752"><start>09:55</start><duration>00:30</duration><room>Other</room><title>Coffee Break</title><abstract /><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7752/</url><track>Breaks</track><persons /></event></room><room name="Saal A1"><event id="7310"><start>10:25</start><duration>00:45</duration><room>Saal A1</room><title>Ein Jahr PostgreSQL statt Oracle – Das Leben danach</title><abstract>This talk came about one year after I decided to take a new job and move out of my Oracle bubble into the "new" world of PostgreSQL database development. My thought back then was: "SQL is SQL, I will quickly find my way around. It is still a relational database, after all."

This talk is meant to be a short summary of:

Some WTF moments: the devil is in the details, and I was not aware beforehand of which differences really await an Oracle database developer.
The best of both worlds: both Oracle and PostgreSQL have their nice sides, and it pays to know both systems a little better.
Other experiences and impressions I have gathered after a year in the PostgreSQL ecosystem. I will focus on SQL (e.g. analytic functions or the handling of NULL values), but I also want to point out a few admin-related and theoretical differences (e.g. tuning tools and MVCC).
After the talk it should be clearer which pitfalls to expect when migrating from Oracle to PostgreSQL.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7310/</url><track>Developer</track><persons><person id="879">Jonas Gassenmeyer</person></persons></event></room><room name="Saal A2"><event id="7280"><start>10:25</start><duration>00:45</duration><room>Saal A2</room><title>SQL Injection Is Boring—Advanced Threats You’re Not Watching</title><abstract>Everyone knows how to prevent basic SQL injection—but modern attackers have moved far beyond textbook exploits. In high-traffic PostgreSQL deployments, subtle misconfigurations and overlooked features can open doors to far more sophisticated attacks.
This talk uncovers the next generation of database threats that rarely make it into security checklists. We’ll examine:
* Privilege Escalation via Extensions and Foreign Data Wrappers – how seemingly harmless extensions or FDWs can leak credentials or access external systems.
* Timing and Side-Channel Attacks – extracting secrets by measuring query latency and caching behavior.
* Abusing Logical Replication and LISTEN/NOTIFY – stealthy data exfiltration channels hidden in plain sight.
* Role Inheritance &amp; Row-Level Security Pitfalls – ways attackers exploit complex permission hierarchies.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7280/</url><track>Internals</track><persons><person id="1303">Dwarka Rao</person><person id="880">Kranthi Kiran Burada</person></persons></event></room><room name="Saal B"><event id="7485"><start>10:25</start><duration>00:45</duration><room>Saal B</room><title>PG Tricks</title><abstract>PostgreSQL has a huge range of features, maybe too many. Making use of these features can often make application developers' lives easier, reducing the complexity of their applications.

We'll take a look at some use cases I've run into over the years and what features of PostgreSQL can be used to help solve those problems.

We'll see where we can use PostgreSQL to simplify our application architecture, to simplify our application code, and to help prevent things from going wrong.

Taking a look at use cases such as:

    Event scheduling &amp; booking
    Task execution
    Text Search and Fuzzy Matching
    Category and Tag Searching
    Time Series
    Geolocation
    Unknown data

And more!

This talk covers a huge range of SQL features and patterns that you can make use of, and is very much about showing what you can do.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7485/</url><track>Developer</track><persons><person id="424">Chris Ellis</person></persons></event></room><room name="360° Nord"><event id="7736"><start>10:25</start><duration>00:45</duration><room>360° Nord</room><title>What Can Redgate Actually Do for Your Postgres Stack? Let's Find Out.</title><abstract>A sponsored session where we skip the slides and go straight to the good stuff: a live demo of Redgate's portfolio across Postgres. We'll cover the key capabilities, walk through real use cases relevant to DBAs and engineers, and leave plenty of time for Q&amp;A.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7736/</url><track>Sponsors</track><persons><person id="1469">Nick Hape</person></persons></event></room><room name="Saal A1"><event id="7537"><start>11:20</start><duration>00:45</duration><room>Saal A1</room><title>PostgreSQL Klone mit Reflink-Kopien (und was wir dadurch über Backups lernen können!)</title><abstract>Snapshots are impractical for PostgreSQL for several reasons. Still, you often want a fast way to create a copy of the data directory, ideally without consuming twice as much storage.

We can implement something snapshot-like by combining PostgreSQL's low-level backup API with reflink copies. This allows us to create space-efficient yet consistent clones that are much easier to use than snapshots while still giving us their benefits.

Such clones represent a fork of the database on which we can test migration scripts, but they can also serve as a fallback point in case maintenance goes wrong, or act as emergency recovery points, as an alternative to "real" snapshots.

The motivation for this talk comes from some of our customers with very large databases, where PostgreSQL's conventional backup and recovery mechanisms are too slow to meet the recovery time objectives.
With a robust mechanism for creating clones via reflinks, we can shorten the recovery time.

The big advantage of reflinks over snapshots is that we can simply use them on a mainstream file system: XFS. There are various ways to use snapshots at the file system level (ZFS, Btrfs, bcachefs) or in a storage abstraction layer (LVM, virtualized storage, SAN), but all of them are subject to different limitations. Moreover, snapshots are not a way to obtain, within seconds, a cloned data directory on which we can directly start PostgreSQL.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7537/</url><track>DBA</track><persons><person id="599">Julian Markwort</person></persons></event></room><room name="Saal A2"><event id="7359"><start>11:20</start><duration>00:45</duration><room>Saal A2</room><title>Choosing your exporter: Pros and Cons of PostgreSQL exporters for telemetry data.</title><abstract>Effective PostgreSQL monitoring is non-negotiable; however, choosing the right metrics exporter is far from a straightforward decision. The choice impacts resource consumption, configuration overhead, performance, and customization needs.

This session provides a deep-dive comparison of the leading open-source PostgreSQL exporters. We will explore their architectural trade-offs and configuration complexity, going beyond simple feature lists.

Crucially, you will understand the most common monitoring pitfalls, such as high-cardinality label explosion and scraping overhead, and how they affect your alerting implementation. You will learn how to avoid these traps and implement a monitoring strategy that ensures comprehensive, low-overhead PostgreSQL telemetry data every time.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7359/</url><track>DBA</track><persons><person id="1429">Orlando Kadaffy Talavera Arauz</person></persons></event></room><room name="Saal B"><event id="7520"><start>11:20</start><duration>00:45</duration><room>Saal B</room><title>Visualizing PostgreSQL Storage Internals</title><abstract>Many PostgreSQL developers work with tables and indexes without ever seeing how data is physically stored. This talk combines explanation of storage fundamentals with live demonstrations using pg-storage-visualizer, an interactive tool I built to make these concepts visible.

Theory first, then we look at real pages. We'll go through how PostgreSQL organizes heap pages and tuples, what xmin and xmax actually contain and how snapshots use them, then watch an UPDATE leave a dead tuple behind. We'll look at B-tree index pages - the root, internal nodes, leaf pages - and see what index bloat looks like when you're staring at page contents. We'll cover why VACUUM cleans the heap but can't reclaim space from indexes, and when you need REINDEX. HOT updates get their own section because they're underused and misunderstood.

You'll leave knowing how to use pageinspect to see this yourself, and with a better mental model for why tables grow, why VACUUM matters, and what's actually happening when things get slow.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7520/</url><track>Developer</track><persons><person id="1403">Radim Marek</person></persons></event></room><room name="360° Nord"><event id="7737"><start>11:20</start><duration>00:45</duration><room>360° Nord</room><title>Migrating Legacy &amp; Proprietary Databases to PostgreSQL with credativ-pg-migrator</title><abstract>European organizations and companies are increasingly re-evaluating proprietary database dependencies as digital sovereignty becomes critical. This talk serves as a pragmatic field guide for migrating from legacy or vendor-locked databases (Oracle, MS SQL, Sybase ASE, Db2, SQL Anywhere) to PostgreSQL. Drawing from years of hands-on experience in heterogeneous migrations, we will explore a comprehensive decision framework for successful transitions. The session compares offline strategies (dump/restore, ETL, bulk COPY) against online, near-zero-downtime approaches, focusing on how to design reversible cutovers that minimize operational risk.

We will explain the main features of credativ-pg-migrator, our open source migration solution, and discuss practical data validation techniques to guarantee your data is migrated perfectly, going beyond simple row counts. We will discuss our lessons learned over the past year, including insights from the latest large-scale migration projects.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7737/</url><track>Sponsors</track><persons><person id="1470">Josef Machytka</person></persons></event></room><room name="Saal A1"><event id="7568"><start>12:15</start><duration>00:45</duration><room>Saal A1</room><title>Benchmarking - An unexpected Journey</title><abstract>What started as a simple ask in my daily business, just comparing the performance of PostgreSQL DBaaS offerings between different cloud vendors, opened the door to an unexpected journey of far more effort, research, and thought.

However, achieving optimal performance from PostgreSQL requires a nuanced understanding of its behavior under different workloads. This talk chronicles an in-depth journey into PostgreSQL benchmarking, sharing lessons learned, unexpected challenges, and key insights gained along the way.

We will explore my methodology used to design and execute meaningful benchmarks, including the selection of tools, configuration tuning, and workload modeling that aligns with real-world scenarios. Key takeaways will include strategies for interpreting benchmark results, common pitfalls to avoid, and the impact of factors such as parametrization and compute configurations.

The session also dives into the evolution of benchmarking practices, addressing questions like: What distinguishes synthetic benchmarks from real-world performance measurements? How do version upgrades and new features affect benchmarking strategies? 

Whether you are a database administrator, developer, or architect, this talk aims to equip you with practical insights and actionable techniques to master the art and science of PostgreSQL benchmarking, helping you unlock its full potential for your workloads.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7568/</url><track>DBA</track><persons><person id="718">Dirk Krautschick</person></persons></event></room><room name="Saal A2"><event id="7367"><start>12:15</start><duration>00:45</duration><room>Saal A2</room><title>Vector and VictoriaLogs: Powerhouse combo for Logging Observability in modern database infrastructure</title><abstract>In any troubleshooting where a DBA is involved, reading and analysing DB log entries is a must. Database logfiles are a DBA’s weapon in a production environment. Monitoring database log files not only reactively but also proactively is crucial. However, in environments with thousands of highly active databases generating massive amounts of logging, and with compliance policies to abide by, analysing a single log file by manually browsing through it, either by logging into the DB server or by downloading the file to a local machine to create some reports (which takes resources and time to process and produce some basic views/aggregates), is neither practical nor efficient. Hence the need for a better approach.

In my talk, I will present an alternative DB logging observability solution: Vector + VictoriaLogs.

Vector: reads, transforms, and ships log files to different targets.
VictoriaLogs: a log storage system that uses LogsQL to query log files and derive statistics from them.
Both Vector and VictoriaLogs are open source.

The agenda includes:

Traditional approach
New Requirement
New solution
Old vs new Log configurations
What is Vector?
Vector vs Promtail
What is VictoriaLogs?
VictoriaLogs vs Loki
LogsQL: Introduction and examples
Grafana Dashboards with VictoriaLogs as source using LogsQL  for metrics
My experience: pros, cons, and why this was the chosen solution</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7367/</url><track>DBA</track><persons><person id="1088">Priyanka Chatterjee</person></persons></event></room><room name="Saal B"><event id="7576"><start>12:15</start><duration>00:45</duration><room>Saal B</room><title>Everything you need to know about collations</title><abstract>Outside of causing trouble for you when upgrading libc, what are collations good for? PostgreSQL's collations have gotten a lot of bad press from the upgrade issues, but they are also a powerful and important tool, especially for working with text in languages other than English.
 
 This talk will give an introduction to collations in PostgreSQL, including how to use them, what they are useful for, and how they work, plus some common pitfalls and misunderstandings. You will learn, among other things, about the three collation providers (libc, icu, builtin), BCP 47, case-insensitive collations, CTYPEs, and what new features have been introduced in recent PostgreSQL versions. You will also get a brief look into the future of collations in PostgreSQL.
In this presentation, you will learn about the key aspects of compliance and engineering.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7732/</url><track>Sponsors</track><persons><person id="1465">Hans-Jürgen Schönig</person></persons></event></room><room name="Other"><event id="7753"><start>13:00</start><duration>00:55</duration><room>Other</room><title>Lunch</title><abstract /><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7753/</url><track>Breaks</track><persons /></event></room><room name="Saal A1"><event id="7603"><start>13:55</start><duration>00:45</duration><room>Saal A1</room><title>Was bedeutet der EU Cyber Resilience Act für PostgreSQL und seine Anwender?</title><abstract>The Cyber Resilience Act is an EU regulation that entered into force in 2024 and will unfold its full effect over the coming years. It obliges both manufacturers and users of software to take certain measures to ensure the security and quality of the software in use, including, for example, clearly defined procedures for providing security updates. Special rules apply to open source software and to companies that sponsor open source projects.

PostgreSQL in enterprise use will normally fall under these rules. Given the particular constellation of the PostgreSQL community, with its loose organization and the many companies and actors involved around the world, it is a challenge to figure out which rules will apply to whom.

In this talk I want to start working out what this law means for the PostgreSQL project and its users, which measures the various parties should tackle, and how the security and quality of the software can be improved in the process.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7603/</url><track>DBA</track><persons><person id="503">Peter Eisentraut</person></persons></event></room><room name="Saal A2"><event id="7453"><start>13:55</start><duration>00:45</duration><room>Saal A2</room><title>Why your PostgreSQL tuning guide might be wrong (and what to do about it)</title><abstract>Have you ever applied PostgreSQL performance tuning advice only to see no improvement—or made things worse? While generic PostgreSQL wisdom is valuable, the complexity of PostgreSQL makes catch-all solutions underperform in unexpected ways.
I will share examples where one PostgreSQL configuration improved performance in one system but hurt it in another—even for the same workload. The key insight: optimal PostgreSQL server parameters depend heavily on your specific infrastructure characteristics. I'll present a checklist of important infrastructure differences—local vs network storage, IOPS limits, JIT availability, cloud vs on-premise—and demonstrate how these different environments require different optimal configurations for the same workload.
You'll leave understanding why generic tuning guides often fail and what infrastructure characteristics you need to consider when tuning YOUR specific PostgreSQL system.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7453/</url><track>DBA</track><persons><person id="1256">Mohsin Ejaz</person></persons></event></room><room name="Saal B"><event id="7584"><start>13:55</start><duration>00:45</duration><room>Saal B</room><title>More Datacenters, Less Problems</title><abstract>As Datadog continues to grow we need to prioritize datacenter expansion. Unfortunately, our Postgres architecture—previously supporting a handful of datacenters—became a painful liability for operators and service owners. Hidden coupling, operational toil, and reliance on components like PgBouncer surfaced major coordination challenges for datacenter expansion.

To understand what needed to change, we used AI-assisted analysis to examine how Postgres was actually being used across hundreds of services. By analyzing real production workloads, queries, and traffic patterns, we identified hidden dependencies and unsafe assumptions that were impossible for individual teams to investigate alone, allowing us to deliver architectural and service-level changes with confidence.

In this talk, I’ll share how our team simplified a production Postgres architecture to enable safe, repeatable, and hands-off datacenter expansion. I’ll walk through the original design, the failure modes that forced change, and the deliberate tradeoffs we made. I’ll demonstrate how we used Temporal to automate previously manual workflows, removed redundant dependencies, and ultimately deprecated PgBouncer in favor of a homegrown Postgres proxy.

This is a practical, experience-driven talk about simplifying Postgres at scale, using automation to tame complexity, AI to detangle existing workloads, and building database architectures that can grow without becoming an operator's nightmare.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7584/</url><track>Developer</track><persons><person id="1286">Fabiana Scala</person></persons></event></room><room name="360° Nord"><event id="7733"><start>13:55</start><duration>00:45</duration><room>360° Nord</room><title>I'm Afraid of the Database: How I gained confidence in PostgreSQL as an App Developer</title><abstract>As an app developer, my goal is to serve my users. I may love my tech stack, 
but secretly, I am terrified of the data layer.

We have more and more app developers blindly relying on PostgreSQL, but with 
all this data lying around, we need to ensure that PostgreSQL as a community is 
approachable.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7733/</url><track>Sponsors</track><persons><person id="1466">Jay Miller</person></persons></event></room><room name="Saal A1"><event id="7619"><start>14:50</start><duration>00:45</duration><room>Saal A1</room><title>PostgreSQL AIO in der Praxis</title><abstract>PostgreSQL 18 brought us asynchronous I/O for some operations.
What does this mean in practice? Which applications and setups benefit most from the new features? What should be considered when configuring the database and the operating system? How can the influence of various parameters on system performance be measured?
We will explore the new I/O system and its benefits for practical applications.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7619/</url><track>DBA</track><persons><person id="214">Christoph Moench-Tegeder</person></persons></event></room><room name="Saal A2"><event id="7474"><start>14:50</start><duration>00:45</duration><room>Saal A2</room><title>The Elephant That Learns: How to use Machine Learning to Optimize PostgreSQL</title><abstract>PostgreSQL powers some of the most demanding workloads in the world, yet many performance problems remain difficult to detect, diagnose, and predict using traditional monitoring alone.

In this talk, we'll explore how machine learning can transform the way engineers understand and optimize PostgreSQL systems. We'll demonstrate practical ML techniques from anomaly detection and time-series forecasting to workload pattern recognition, and show where they can outperform conventional monitoring and tuning methods. 

We'll demonstrate how ML models can potentially anticipate slowdowns, detect early signs of failure, recommend index changes, and help DBAs navigate complex performance behaviours using real-world examples and open-source tooling. Attendees will leave with a clear roadmap for integrating ML into their PostgreSQL environments, whether for predictive maintenance, autonomous tuning, or large-scale performance analysis. The elephant can learn, and this session intends to show how to teach it.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7474/</url><track>DBA</track><persons><person id="745">Charly Batista</person></persons></event></room><room name="Saal B"><event id="7418"><start>14:50</start><duration>00:45</duration><room>Saal B</room><title>Hey, I'm using that! Fixing lock contention in OLTP.</title><abstract>Some transaction processing workloads end up with horrible lock contention because they end up blocked on updating the same rows. There are now databases that advertise running this workload a thousand times faster than PostgreSQL. In this talk we will discuss strategies for managing this contention in PostgreSQL while retaining application correctness, and how network latencies, different isolation levels, optimistic and pessimistic concurrency control, deadlocks, and livelocks affect the ability to get work done. Working with the database allows us to take a large step closer to single-purpose database performance while staying in our familiar PostgreSQL land.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7418/</url><track>Developer</track><persons><person id="868">Ants Aasma</person></persons></event></room><room name="360° Nord"><event id="7741"><start>14:50</start><duration>00:45</duration><room>360° Nord</room><title>Data Migration Is Easy. Data Validation Is Not.</title><abstract>Migrating from Oracle or DB2 to PostgreSQL offers greater flexibility and significant cost advantages, but only if the migrated data can be fully trusted. Even small, undetected data discrepancies can lead to serious business and operational consequences.

This presentation focuses on the often-overlooked challenge of data validation in cross-platform database migrations. It examines why organizations struggle to obtain reliable validation guarantees and why traditional, checklist-based approaches frequently fall short.

The session also introduces modern data validation practices and presents OMrun as an example of how data validation can evolve from a one-time verification effort into a controlled, repeatable, and auditable process that supports long-term confidence in migrated data.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7741/</url><track>Sponsors</track><persons><person id="1474">Hervé Schweitzer</person></persons></event></room><room name="Saal A1"><event id="7522"><start>15:45</start><duration>00:45</duration><room>Saal A1</room><title>20 Years in the Trenches: What Postgres Can Learn from the Proprietary World</title><abstract>What prevents a major financial institution or government body from fully migrating to PostgreSQL? Often, it isn't performance; it's the strict requirements for security, compliance, and high availability. After twenty years of adapting open source databases for the enterprise, it has become clear that features like Transparent Data Encryption (TDE) and strong identity management (OIDC) are deciding factors.

This talk explores the journey of making PostgreSQL fit for the most demanding environments. We will move beyond the basics to discuss the architectural reality of implementing enterprise-grade security and availability in Postgres. Attendees will leave with a clear understanding of the current "Enterprise Gap," how to bridge it using modern tools, and what the PostgreSQL community must tackle next to secure its future in the corporate world.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7522/</url><track>DBA</track><persons><person id="1232">Jan Wieremjewicz</person><person id="872">Kai</person></persons></event></room><room name="Saal A2"><event id="7499"><start>15:45</start><duration>00:45</duration><room>Saal A2</room><title>Async I/O in PostgreSQL 18: Storage Finally Matters Again</title><abstract>Postgres 18 adds native support for asynchronous I/O. This is the most significant change in the I/O subsystem in decades. Instead of blocking on reads, Postgres can now issue I/O requests and continue working, aligning the database engine with how modern storage actually works.

We will see what async I/O means in PostgreSQL and how it affects performance across different storage backends, such as local and remote NVMe storage options and various cloud block storage solutions, comparing synchronous and asynchronous I/O operations on each. Additionally, we’ll investigate changes to CPU efficiency, concurrency, and tail latency across all of them.

Lastly, we discuss the benefits and drawbacks, when to use async I/O, and how to unlock the power of Postgres in modern infrastructure.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7499/</url><track>DBA</track><persons><person id="1096">Chris Engelbert</person></persons></event></room><room name="Saal B"><event id="7563"><start>15:45</start><duration>00:45</duration><room>Saal B</room><title>PostgreSQL Migrations Without Drama</title><abstract>Schema and data changes are a normal part of development, but in PostgreSQL they can easily affect performance and availability if executed carelessly. This talk focuses on practical considerations developers should keep in mind when applying DDL and DML changes in production.

We’ll look at how schema changes can impact running workloads, why certain operations lead to blocking, and how to reduce risk when introducing structural changes. The session also touches on common issues around large data modifications, such as long-running transactions and excessive locking.

With many examples, the talk highlights safer ways to approach DDL and DML changes and what to watch for during execution to ensure system stability.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7563/</url><track>Developer</track><persons><person id="1280">Daria Nikolaenko</person></persons></event></room><room name="360° Nord"><event id="7723"><start>15:45</start><duration>00:45</duration><room>360° Nord</room><title>Silver Sponsor Lightning Talks</title><abstract>Lightning Talks from Silver Sponsors</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7723/</url><track>Sponsors</track><persons><person id="30">Andreas Scherbaum</person></persons></event></room><room name="Other"><event id="7754"><start>16:30</start><duration>00:30</duration><room>Other</room><title>Coffee Break</title><abstract /><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7754/</url><track>Breaks</track><persons /></event></room><room name="Saal A1"><event id="7742"><start>17:00</start><duration>00:15</duration><room>Saal A1</room><title>PostgreSQL digital independence</title><abstract>In a world of geopolitical change, digital independence is more important than ever; we will show you how.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7742/</url><track>Platinum Sponsor Keynotes</track><persons><person id="1475">Hans-Jürgen Schönig</person></persons></event><event id="7743"><start>17:15</start><duration>00:15</duration><room>Saal A1</room><title>Iceberg Ahead! Why PostgreSQL is the Lifeboat (and the Captain) of the Modern Data Lake</title><abstract>The "Laws of Data Physics" are being rewritten. Jay (the Developer) claims that Data Gravity - the idea that
applications must orbit a massive, central database - is a myth from the mainframe era. He’s ready to launch
all his data into the "weightless" clouds of Apache Iceberg.

Dirk (the Database Architect), who has spent decades cleaning up "weightless" data disasters, isn't buying it.
He knows that without the structural integrity of PostgreSQL, a data lake is just a collection of expensive digital debris.

Jay argues that the future is "Postgres-lite", where the database is just a thin, ephemeral layer over a massive Iceberg.
Dirk counters with the veteran’s reality: without the rigor, indexing, and ACID guarantees of a rock-solid
PostgreSQL instance, your "Data Lake" is just an expensive, unorganized swamp.

In this 15-minute "Tough Love" keynote, Jay and Dirk face off to decide: Does the Modern Data Stack actually
need a Relational Heart?</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7743/</url><track>Platinum Sponsor Keynotes</track><persons><person id="1476">Jay Miller, Dirk Krautschick</person></persons></event><event id="7744"><start>17:30</start><duration>00:15</duration><room>Saal A1</room><title>The next 10 years of Postgres</title><abstract>Postgres is one of the most resilient and sophisticated projects in open-source history. Through a combination of community-driven innovation and sizeable technical contributions from vendors like EDB, Postgres has achieved massive technical objectives that have shifted the database landscape. Many in the industry now openly claim that "Postgres won"—becoming the universal standard for modern application development.
However, we aren't done yet. As we look toward the next decade, we must ask: what breakthroughs are still on the horizon?</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7744/</url><track>Platinum Sponsor Keynotes</track><persons><person id="1477">Álvaro Herrera</person></persons></event></room><room name="Other"><event id="7722"><start>17:50</start><duration>01:40</duration><room>Other</room><title>Evening Reception</title><abstract>After-conference evening reception in the sponsor area. The reception starts after the sponsor keynotes.

This is not a full dinner; there will be snacks and drinks.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7722/</url><track>Breaks</track><persons /></event></room></day><day date="2026-04-22"><room name="Saal A1"><event id="7270"><start>09:00</start><duration>00:45</duration><room>Saal A1</room><title>Swapping the Elephant Without Breaking the Room</title><abstract>Upgrading PostgreSQL across hundreds of production databases without downtime sounds impossible, especially when logical replication slots, Debezium CDC pipelines, and outbox event streams are in play.
At Fresha, we faced exactly that: mission-critical workloads still on PostgreSQL 12, and no safe way to reach 17 without freezing the business.

This talk walks through how we designed and automated a blue-green upgrade framework using logical decoding, controlled WAL overlap, and connector orchestration. 
We’ll dive into how Debezium connectors, replication origins, and PgBouncer pools were coordinated to guarantee continuity for both CDC and outbox topics with rollback and dry-run modes built in.

Attendees will learn practical techniques for:

* Orchestrating zero-downtime Postgres major upgrades on RDS or self-managed clusters

* Managing replication slots and Debezium connectors safely across clusters

* Handling sequence alignment, WAL overlap, and connector state transitions

* Designing reversible, testable database cutovers

This is a practical session from real production experience: no magic tools, just PostgreSQL internals, Debezium knowledge and careful planning, and a few well-placed bash scripts.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7270/</url><track>DBA</track><persons><person id="1418">Anton Borisov</person></persons></event></room><room name="Saal A2"><event id="7261"><start>09:00</start><duration>00:45</duration><room>Saal A2</room><title>Beyond work_mem Myths: A Source‑Code‑Guided Tour Through PostgreSQL Memory</title><abstract>This talk is a source‑code‑guided investigation into how PostgreSQL actually allocates memory during query execution. It builds on my previous talk, “PostgreSQL Connections Memory Usage on Linux: How Much, Why, and When?”, presented at PostgreSQL Conference Germany 2025, which focused on practical measurements of PostgreSQL query memory usage based on aggregated data from Linux /proc/PID/smaps files.
In this session, I trace memory usage for different types of queries, with the goal of eliminating speculation about work_mem and giving attendees a better understanding supported by both source‑code findings and practical measurements. We will also briefly revisit other aspects of PostgreSQL connection memory usage and explain why reported resident set size (RSS) numbers can look so huge compared to the memory actually consumed.

Key Takeaways
- work_mem is only a soft limit — queries may use less, or far more, than the setting suggests
- Real memory usage is much smaller than the giant RSS numbers you see in monitoring tools
- Parallel query execution can multiply memory consumption in surprising ways
- Visualizing memory usage over time reveals how PostgreSQL allocates and releases memory dynamically</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7261/</url><track>Internals</track><persons><person id="316">Josef Machytka</person></persons></event></room><room name="Saal B"><event id="7600"><start>09:00</start><duration>00:45</duration><room>Saal B</room><title>Into the Woods: Finding Your Way with B-Trees in PostgreSQL</title><abstract>B-trees are the default index type in PostgreSQL and the workhorse behind many query speedups—but developers often use them with only a limited understanding of how they actually work or how to apply them effectively. This talk demystifies B-trees by building them from the ground up and exploring their design, structure, and even a naming history along the way. By emphasizing first principles, we’ll uncover why B-trees perform so well across a wide range of queries and workloads.

With that foundation in place, we’ll turn to real-world usage patterns and common pitfalls: missing or redundant B-tree indexes, the subtleties of multi-column indexes, and handling of NULLs. From there, we’ll dig into PostgreSQL-specific behavior—from CREATE INDEX CONCURRENTLY to tracking index usage, identifying bloat, and safely deciding when to drop an index. Finally, we’ll explore the limitations of B-trees and when to consider alternative index types, like those for multidimensional or vector data. This talk is geared toward application developers who want to build faster, more efficient systems, gain a deeper understanding of this fundamental data structure, and avoid the silent costs of misused indexing.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7600/</url><track>Developer</track><persons><person id="1294">Sergey Dudoladov</person></persons></event></room><room name="Saal A1"><event id="7562"><start>10:15</start><duration>00:45</duration><room>Saal A1</room><title>When a Postgres Operator decision is not forever: approaches and experiences for migrating between  Postgres Operators</title><abstract>The Postgres ecosystem on Kubernetes is thriving and this is particularly evident in the Postgres K8s Operators space. 
As the Postgres Operator ecosystem matures, real-world experience has shown that technical, licensing, or operational constraints can make it necessary to migrate between PostgreSQL Kubernetes Operators over the lifecycle of a database platform.
In this talk we are going to:
• Provide an overview of the Postgres Operator landscape, reviewing widely adopted options such as CloudNativePG, Percona PostgreSQL Operator, Crunchy Data PGO, Zalando Operator, and Bitnami Helm charts. We will compare their key architectural, technological, and licensing choices to help attendees make more informed operator selection decisions.
• Examine the reasons why, even after a decision, you might eventually be faced with an operator migration, and the licensing, architectural, and operational aspects, risks, and opportunities that you should take into account.
• Discuss the different implementation approaches for migrating between Operators, such as replication, backup and volume-based patterns. 
• Finally, deep-dive into a particular Operator migration case study between two popular Postgres K8s operators.
By the end of this session, attendees will have a clearer understanding of the PostgreSQL Operator ecosystem, the risks and opportunities associated with an Operator technology switch, and finally, concrete, actionable strategies for planning and executing Postgres Kubernetes Operator technology migrations.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7562/</url><track>DBA</track><persons><person id="1444">Takis (Panagiotis) Stathopoulos</person></persons></event></room><room name="Saal A2"><event id="7538"><start>10:15</start><duration>00:45</duration><room>Saal A2</room><title>Profiling PostgreSQL: perf, Flame Graphs, and eBPF Tools in Practice</title><abstract>PostgreSQL is highly optimized, but workloads often reveal performance problems that are hard to diagnose with SQL tooling like EXPLAIN alone.

Many engineers use perf as the go-to profiler for Linux systems. However, several other tools allow you to visualize information more clearly, provide additional insights, or analyze function calls. For example, flame graphs are a common way to visualize stack traces and the time spent in each function. On-CPU and off-CPU flame graphs help to understand where CPU hotspots exist or if functions have to wait for resources like I/O. Differential flame graphs can be used to compare two profiling runs (e.g., an optimized versus a non-optimized implementation) to identify where performance improves or degrades. Other eBPF-based tools (e.g., funccount, funclatency, bpftrace) can be used to go even further. They allow you to count function invocations, measure the latency of individual function calls, or consider function parameters.

In my talk, I will discuss how to use these tools to detect performance bottlenecks in an example PostgreSQL extension. The audience will learn how to use these tools, their limitations, and the differences between production and debug builds of PostgreSQL.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7538/</url><track>Internals</track><persons><person id="1135">Jan Kristof Nidzwetzki</person></persons></event></room><room name="Saal B"><event id="7596"><start>10:15</start><duration>00:45</duration><room>Saal B</room><title>Understanding Model Context Protocol (MCP) and building AI Agents for PostgreSQL database management</title><abstract>The Model Context Protocol (MCP) is revolutionizing how AI agents interact with external systems, and PostgreSQL databases are no exception. This talk introduces MCP servers as a standardized interface that enables AI agents to perform sophisticated database operations beyond simple query execution.

We'll explore how MCP servers bridge the gap between AI agents and PostgreSQL, providing structured access to database internals, performance metrics, and administrative functions. Unlike traditional monitoring tools that require human interpretation, MCP-enabled AI agents can autonomously diagnose performance issues, detect anomalies, and execute corrective actions in real-time.

This talk will include a demo showing how AI agents can use MCP servers to identify and fix performance issues. It is ideal for database administrators, developers, and anyone interested in the intersection of AI and database management. Attendees will leave with practical knowledge of implementing MCP servers and building AI agents that can significantly reduce manual database maintenance overhead while improving system reliability.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7596/</url><track>Developer</track><persons><person id="804">Divya Sharma</person></persons></event></room><room name="360° Nord"><event id="7739"><start>10:15</start><duration>00:45</duration><room>360° Nord</room><title>Exploring new enhancements and improvements Fujitsu has delivered to PostgreSQL</title><abstract>In this session we will dive into a series of recent contributions to PostgreSQL, made possible by the Fujitsu team working with sustained focus on core database features. 
We will explore some of these new enhancements and improvements, including:

1. Advanced Conflict Management

2. Table Exclusions

3. Seamless Cluster Upgrades

4. Slot Synchronization for High Availability</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7739/</url><track>Sponsors</track><persons><person id="1472">Vincent O'Dea</person></persons></event></room><room name="Saal A1"><event id="7540"><start>11:10</start><duration>00:45</duration><room>Saal A1</room><title>The Need for Speed: Mastering WAL-G for High-Performance Backup &amp; Recovery on Kubernetes</title><abstract>Running PostgreSQL on Kubernetes exposes standard backup strategies to the I/O limitations of networked storage. While WAL-G is widely adopted for cloud-native PostgreSQL, many implementations fail to leverage its most powerful storage and recovery optimizations.

This session examines the architecture of high-performance, resilient recovery systems. We will begin with WAL-G's internals, detailing precisely how Delta Backups and ZSTD compression reduce storage footprints and I/O overhead compared to traditional filesystem snapshots.

The discussion will then deep-dive into the mechanics of rapid restoration. We will analyze how to configure WAL prefetching for accelerated Point-in-Time Recovery (PITR) and leverage S3-compatible storage to maximize throughput. The outcome is a technical blueprint for the rapid restoration of terabyte-scale databases.

Key Takeaways:
 * Delta Backup Mechanics: Understanding WAL-G's block-level tracking logic and its specific I/O advantages over snapshots for large datasets.
 * PITR Acceleration: Mastering WAL prefetching techniques to significantly reduce recovery duration.
 * Storage Optimization: Leveraging ZSTD compression and efficient S3 object storage protocols to minimize overhead.
 * Automated Verification: Implementing wal-verify to continuously validate backup integrity and recoverability.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7540/</url><track>DBA</track><persons><person id="1441">Sudeepta Patra</person></persons></event></room><room name="Saal A2"><event id="7390"><start>11:10</start><duration>00:45</duration><room>Saal A2</room><title>The Life of a Tuple in Logical Replication</title><abstract>Logical replication is one of the most intricate yet fascinating parts of PostgreSQL. In this session, we’ll take a closer look at how a single tuple travels through the logical replication pipeline, from the moment it’s changed on the publisher to when it’s finally applied on the subscriber.

We’ll begin with how changes are captured in the WAL, then see how the walsender and reorder buffer extract those records and pass them through logical decoding using the pgoutput plugin. From there, we’ll explore how the replication slot ensures data retention and continuity, how messages are streamed to the subscriber, and how the apply worker and table sync worker reconstruct and apply these changes to maintain transactional consistency.

By the end of the session, you’ll have a clear, end-to-end understanding of how PostgreSQL logically replicates data and what happens behind the scenes when a tuple is replicated.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7390/</url><track>Internals</track><persons><person id="1422">Shlok Kumar Kyal</person></persons></event></room><room name="Saal B"><event id="7282"><start>11:10</start><duration>00:45</duration><room>Saal B</room><title>Embedding Workloads as a New Stress Test for Postgres</title><abstract>AI workloads push Postgres in unexpected ways - millions of embeddings, frequent updates, and hybrid vector + relational queries. These patterns reveal new pain points: autovacuum lag, index bloat, and planner confusion. This talk shows how real embedding workloads stress MVCC and indexes, what tuning helps, and what lessons the Postgres community can take from AI users in production.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7282/</url><track>DBA</track><persons><person id="1259">Viktoriia Hrechukha</person></persons></event></room><room name="360° Nord"><event id="7734"><start>11:10</start><duration>00:45</duration><room>360° Nord</room><title>One Collector to Rule Them All: Unified Observability for PostgreSQL Platforms</title><abstract>Last year at PGConf.EU 2025 in Riga, a talk about unified PostgreSQL observability with OpenTelemetry sparked a key insight: in production environments, PostgreSQL never runs alone - it's always surrounded by an ecosystem of components like Patroni, pgBackRest, pgAudit, and etcd, each with its own logging format and configuration.

The talk explores how all these components can be funneled into a unified observability pipeline, covering relevant logging and metrics settings, OpenTelemetry Collector configuration excerpts, and possible target systems such as Prometheus or Jaeger. Open questions around performance and Collector overhead round off the topic.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7734/</url><track>Sponsors</track><persons><person id="1467">Dirk Aumueller</person></persons></event></room><room name="Saal A1"><event id="7610"><start>12:40</start><duration>00:45</duration><room>Saal A1</room><title>Not Just Altruism: Selling PostgreSQL Contributions to Your Employer</title><abstract>Many PostgreSQL contributors, whether through code, documentation, advocacy, or events, engage with the community as volunteers. But community involvement isn’t just a personal passion project. When done right, it can be a powerful strategic asset for an employer.

This talk explores how PostgreSQL community involvement can be positioned as a business advantage, both from an individual and an organisational perspective. We’ll look at why companies choose to support community contributions and what they gain in return — from a talent acquisition edge and technical leverage to marketing visibility, sales credibility, and long-term ecosystem influence.

This talk provides both context and concrete arguments to help bridge the gap between open-source community work and business strategy — and shows how investing in community engagement can benefit both individuals and organisations.

The session is equally relevant for contributors seeking employer support, engineering leaders evaluating whether community involvement makes sense for their teams, marketers aiming to better showcase their company’s expertise, and sales professionals wanting to understand how community presence strengthens their company’s trust and public perception.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7610/</url><track>General</track><persons><person id="657">Valeria  Kaplan</person></persons></event></room><room name="Saal A2"><event id="7430"><start>12:40</start><duration>00:45</duration><room>Saal A2</room><title>Solving complex database problems by starting small</title><abstract>Over the years, I've been involved in some complicated performance optimization and corruption cases we discovered on our databases. When we talk about them, we usually focus on the cost and time optimization or how we managed to fix the entire corruption issue. But when I start a complicated investigation, I always remember the words of the first CTO I worked for: "How should it work in the most simple use case? How does it work for a single user, a single transaction, a single table, a single page of a table?" And only when you fully understand this simplified use case can you start building your problem case for this strong understanding of the basics.

And that is what I did over the years with PostgreSQL as well. A database with a single user table of just one page has been the foundation of my understanding of MVCC, vacuum, single page cleanup, FillFactor and Heap-Only Tuple (HOT) updates.

In this presentation, I will emphasize the usefulness of a single-page table for building understanding, and only build onward from that strong foundation. I will start briefly with transaction IDs and vacuum, but quickly move on to more advanced topics such as the differences between bloat and dead rows, FillFactor, and HOT updates.

Every complicated topic starts with the most basic building blocks, and we talk too little about their importance. After this presentation, you will want to go back to simple use cases for all your complicated problems, if that wasn't your M.O. already.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7430/</url><track>Internals</track><persons><person id="859">Derk van Veen</person></persons></event></room><room name="Saal B"><event id="7289"><start>12:40</start><duration>00:45</duration><room>Saal B</room><title>What's Missing in Postgres?</title><abstract>Postgres adds about 180 features and changes every year, yet it is missing some major ones.  This talk explains what those features are, and why they have not been implemented.  The features include sharding, TDE, global indexes, and multi-master replication.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7289/</url><track>General</track><persons><person id="44">Bruce Momjian</person></persons></event></room><room name="360° Nord"><event id="7740"><start>12:40</start><duration>00:45</duration><room>360° Nord</room><title>Digital Sovereignty in Europe: STACKIT as a Secure Cloud Alternative</title><abstract>As global legislation like the US Cloud Act puts European data privacy and technological self-determination under pressure, Schwarz Digits presents STACKIT—the sovereign European cloud alternative.
This session highlights key technical milestones that reinforce this secure foundation, most notably the BSI C5 certification. We will explore security innovations such as Private Endpoints and robust data protection through a proprietary Key Management System (KMS) for full backup and storage encryption. Additionally, we introduce the in-house developed "Workload Identity" in PostgreSQL Version 18.
By leveraging regional data centers in Germany and Austria, adhering strictly to GDPR, and utilizing Open Source technology, STACKIT proves that technological independence is the essential prerequisite for a future-proof European economy.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7740/</url><track>Sponsors</track><persons><person id="1473">Richard Siekmann</person></persons></event></room><room name="Saal A1"><event id="7593"><start>13:35</start><duration>00:45</duration><room>Saal A1</room><title>Operating Postgres as a data source for your analytics pipelines</title><abstract>The times when analytics systems were just OLAP replicas of Postgres with long-running ETL queries are gone. Modern data analysts with their language models and Jupyter notebooks are no longer just a disturbance for database administrators. They deliver real-time analytics to businesses that in turn use them to make mission-critical decisions.
Data analysis systems demand their data from OLTP systems, here and now. This understandable need, alongside the excellent capabilities provided by Postgres logical replication, takes DBAs into the brave new world of DataOps. Unfortunately, this new world is not about the hydraulic engineering of shiny data lakes, but about the day-to-day plumbing of clogged data pipelines.
In this talk, I will provide an overview of established and trending approaches to change data capture (CDC) used by modern DataOps, and explain how a mission-critical OLTP Postgres database can survive and deliver under load.
We will compare different approaches, such as xmin-based and logical replication-based solutions, and open-source tools such as Debezium, Kafka, Apache Flink, and PeerDB/Clickpipes. Finally, we will discuss the benefits, problems, hazards, and best practices of running Postgres as a data source for solutions built on different combinations of the above tools.
If you work with applications that operate on large analytical data and use it to guide your business decisions, this talk is for you.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7593/</url><track>DBA</track><persons><person id="88">Ilya Kosmodemiansky</person></persons></event></room><room name="Saal A2"><event id="7346"><start>13:35</start><duration>00:45</duration><room>Saal A2</room><title>The Secret Handshake: Demystifying PostgreSQL's SCRAM Authentication Protocol</title><abstract>Password-based authentication has evolved significantly, and SCRAM (Salted Challenge Response Authentication Mechanism) is the standard for securing modern database connections. This session offers a rigorous deep dive into the inner workings of SCRAM, specifically focusing on how PostgreSQL implements and leverages this mechanism. We will dismantle the protocol, step-by-step, exploring the client-server exchange, nonce generation, iterative hashing, and the verification process.

Attendees will walk away with a crystal-clear understanding of the SCRAM specifications (RFC5802) and the specific internal routines that PostgreSQL uses to achieve superior, modern, and cryptographically sound connection security.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7346/</url><track>Internals</track><persons><person id="723">Jorge Solorzano</person></persons></event></room><room name="Saal B"><event id="7519"><start>13:35</start><duration>00:45</duration><room>Saal B</room><title>It works on my database - Regression testing of SQL queries</title><abstract>SQL queries often lack systematic testing - they're treated as "just glue code" that only gets validated in production. Meanwhile, PostgreSQL itself has used robust regression testing for decades to prevent disasters in core development. This talk introduces RegreSQL, a tool that brings the same regression testing methodology to application queries, catching both correctness bugs and performance regressions before deployment.

We'll explore how RegreSQL tests SQL queries systematically: verifying correctness across schema changes, tracking performance baselines, detecting common query plan issues (sequential scans, missing indexes), and managing reproducible test data. You'll see live demonstrations of catching real-world issues - from missing indexes that cause production slowdowns to ORM-generated queries that perform sequential scans on millions of rows.

Whether you write raw SQL or use ORMs, whether you're maintaining legacy systems or building greenfield applications, this talk will show you practical techniques for making your PostgreSQL queries testable, maintainable, and production-ready.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7519/</url><track>Developer</track><persons><person id="1403">Radim Marek</person></persons></event></room><room name="360° Nord"><event id="7735"><start>13:35</start><duration>00:45</duration><room>360° Nord</room><title>Der Weg zu PostgreSQL: Strategien, Souveränität und Best Practices</title><abstract>Migrating existing database systems is one of the most complex, but also most rewarding, undertakings an IT organization can take on. Thanks to its robustness, open-source flexibility, and cost efficiency, PostgreSQL is today the target of numerous transitions from proprietary systems such as Oracle, Microsoft SQL Server, or MySQL.

In this talk, you will get a comprehensive guide to navigating the complex migration landscape. Whether you are planning your first migration or want to optimize a transition already underway, this session will give you the knowledge and confidence for a successful and secure move to PostgreSQL.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7735/</url><track>Sponsors</track><persons><person id="1468">Raphael Salguero</person></persons></event></room><room name="Saal A1"><event id="7503"><start>14:40</start><duration>01:00</duration><room>Saal A1</room><title>Getting Started with pgwatch: Features, Installation, and Use Cases</title><abstract>In this talk, I will introduce pgwatch, a mature open-source PostgreSQL monitoring solution. We’ll start with the basics: how to install and configure pgwatch to monitor your PostgreSQL instances effectively. I will walk you through its key features and demonstrate how it can help you track performance, detect bottlenecks, and optimize your database operations.

We’ll also compare pgwatch with some existing monitoring solutions, highlighting its unique advantages and use cases. I look forward to hearing your questions and thoughts during the session. Whether you’re new to PostgreSQL monitoring or already familiar with other tools, this session will offer valuable insights and spark an engaging discussion.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7503/</url><track>DBA</track><persons><person id="72">Pavlo Golub</person></persons></event></room><room name="Saal A2"><event id="7271"><start>14:40</start><duration>01:00</duration><room>Saal A2</room><title>How I built an open source community in Armenia</title><abstract>Building and nurturing an open source developer community in Armenia comes with challenges that might be surprising if you’re used to communities in Central Europe. In this talk, I’ll share how I applied community-building strategies to create a more diverse and inclusive space with shared purpose and to spark meaningful collaboration. Gain actionable insights that you can take back to your own developer communities, wherever they are.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7271/</url><track>General</track><persons><person id="1123">Emma Saroyan</person></persons></event></room><room name="Saal B"><event id="7555"><start>14:40</start><duration>01:00</duration><room>Saal B</room><title>Don’t OIDC Yourself in the Foot: Postgres 18’s New Auth Explained</title><abstract>Postgres 18 adds native support for OAuth and OpenID Connect (OIDC) authentication, one of the most significant security-related changes in years. While widespread adoption will take time, since the feature requires client-side support and external validators, it is already possible to experiment with command-line clients like psql together with validators such as pg_oidc_validator.

This talk includes a demo of a minimal setup using Keycloak and pg_oidc_validator, showing how developers and DBAs can start experimenting immediately. We’ll then dive into how PostgreSQL integrates with OIDC under the hood, demystifying the flow from token issuance to database login.

OIDC promises convenience and streamlined “single sign-on,” yet it’s surprisingly easy to deploy insecurely, sometimes ending up less secure than traditional password-based authentication. This session highlights the most common pitfalls, misconceptions, and misconfigurations seen in OIDC deployments and provides clear guidance on how to avoid them. Attendees will leave with a practical understanding of both the power and the sharp edges of OIDC in Postgres 18.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7555/</url><track>DBA</track><persons><person id="1395">Zsolt Parragi</person></persons></event></room><room name="360° Nord"><event id="7738"><start>14:40</start><duration>01:00</duration><room>360° Nord</room><title>Run PostgreSQL Like a Product: Operational Readiness for Databases</title><abstract>Running PostgreSQL in production is no longer just a database challenge; it is an operational discipline. Organizations struggle not with PostgreSQL itself, but with ensuring reliability, security, performance, and cost efficiency at scale in real-life operations. This session takes a practitioner’s view on PostgreSQL Operational Readiness, starting from common production challenges such as backup and recovery, monitoring, security, high availability, and operational governance.
We outline a product-oriented operating model and architecture that treats PostgreSQL as a business-critical platform rather than a standalone technology. Attendees will see how different building blocks covering observability, resilience, automation, and security fit together to enable stable and predictable operations.</abstract><url>https://www.postgresql.eu/events/pgconfde2026/schedule/session/7738/</url><track>Sponsors</track><persons><person id="1471">Roland Stirnimann</person></persons></event></room></day></schedule>