<?xml version='1.0' encoding='utf-8'?>
<schedule><version>Firefly</version><conference><title>pgDay Paris 2026</title><start>2026-03-26</start><end>2026-03-26</end><days>1</days><baseurl>https://www.postgresql.eu/events/pgdayparis2026/schedule/</baseurl></conference><day date="2026-03-26"><room name="Other"><event id="7694"><start>08:30</start><duration>00:30</duration><room>Other</room><title>Registration</title><abstract /><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7694/</url><track>Breaks</track><persons /></event></room><room name="Auditorium"><event id="7441"><start>09:15</start><duration>00:45</duration><room>Auditorium</room><title>A framework for self-driving databases</title><abstract>Self-driving, or autonomous, databases are an urgent topic as PostgreSQL becomes more and more popular. Organisations adopting PostgreSQL are facing major challenges:

1. More and more development teams do not include trained Development DBAs, which means the teams have limited ability to design high-performance schemas, queries, and indexes, or to address performance issues resulting from the use of ORMs or vibe-coding tools.

2. Postgres adoption has outstripped the supply of qualified Operational DBAs who can support PostgreSQL-based systems after go-live. This lack of talent is hampering innovation.

3. Maintaining and operating databases at scale – thousands of instances – is challenging and costly. Quarterly security updates alone have become a near-impossible task, not to mention continuous performance management, backup monitoring, etc.

This talk reviews the current work on automatic performance tuning and autonomous databases in the PostgreSQL ecosystem (including Kubernetes and AI) and puts it into the five-level framework proposed by SAE International for autonomous vehicles.
We aim to evaluate existing solutions within the PostgreSQL ecosystem, highlighting those that demonstrate the most potential.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7441/</url><track>Talks</track><persons><person id="902">Luigi Nardi</person><person id="178">Marc Linster</person></persons></event></room><room name="Other"><event id="7695"><start>10:00</start><duration>00:20</duration><room>Other</room><title>Coffee</title><abstract /><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7695/</url><track>Breaks</track><persons /></event></room><room name="Auditorium"><event id="7221"><start>10:20</start><duration>00:45</duration><room>Auditorium</room><title>Breaking PostgreSQL - Learning by doing it wrong</title><abstract>Although PostgreSQL is a very reliable and robust database management system, there are things you shouldn't do. In this session we're breaking PostgreSQL in several ways, live, without any slides (slides will be provided anyway, don't worry). There is no better way to learn than learning from mistakes, is there? We're going to take that seriously, and I'm sure you'll have a lot of fun and gain some interesting insights. By the end of this talk you should have a solid understanding of what not to do (and why not), and this will save you quite some time in your journey with PostgreSQL.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7221/</url><track>Talks</track><persons><person id="386">Daniel Westermann</person></persons></event></room><room name="Karnak"><event id="7435"><start>10:20</start><duration>00:45</duration><room>Karnak</room><title>From Crisis to Control: Detect and Fix Corruptions</title><abstract>For me, database corruptions are the most frightening problems to face. I see it as my primary task to keep the data safe and available, and corruptions are an especially tough nut to crack when you encounter them.
They can be many years old, and restoring from a backup might not be an option. I've now faced corruptions multiple times and can deal with them with confidence. I'd like to show you how you can handle corruptions as well, without losing any data or having to rely solely on your backups.

There are multiple ways to corrupt data yourself, but today I'll focus on corruptions caused by the system.

Corruptions are hard to fix because they require a deep understanding of multiple parts of the database. In this presentation, I'll guide you through the entire journey, from detecting the corruption and unblocking the vacuum, to decision-making, and finally mitigating and fixing the problem. For all these steps, you'll need different extensions, and I'll walk you through each of them.

After this presentation, you should have the confidence to start tackling corruptions yourself. With the help of our blog at https://www.adyen.com/knowledge-hub/database-corruption-in-postgresql, you'll be the one who ensures the company's data remains secure and recoverable, even in the face of corruption.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7435/</url><track>Talks</track><persons><person id="859">Derk van Veen</person></persons></event></room><room name="Auditorium"><event id="7317"><start>11:15</start><duration>00:45</duration><room>Auditorium</room><title>postgresql.org: The hidden parts</title><abstract>PostgreSQL is a project that, for better or for worse, manages most of its infrastructure on its own, rather than relying on an external entity. Some of this is very visible; other parts are more hidden. There is surprisingly little PostgreSQL in the infrastructure itself, but it is of course the foundation upon which everything rests. In this presentation we'll go through some of the different services that are handled, how they work, and how it all fits together behind the scenes - and in some cases why things work the way they do.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7317/</url><track>Talks</track><persons><person id="1">Magnus Hagander</person></persons></event></room><room name="Karnak"><event id="7691"><start>11:15</start><duration>00:45</duration><room>Karnak</room><title>Database optimization and reducing global digital pollution</title><abstract>Digital pollution has a devastating global impact that affects every one of us, whether or not we choose to acknowledge it. Yet conscious decisions to improve database efficiency lead to the direct benefits of faster speeds and reduced costs, while helping our planet as well.

Join for a 45-minute session exploring:

- Our collective current state of affairs
- How we're all on a path to being required to optimize our infrastructure anyway
- Specific actions that can be taken to reduce emissions - and as a side effect, also improve performance and reduce costs

Delivered by a non-tech speaker, with tips &amp; tricks to help you take action.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7691/</url><track>Sponsors</track><persons><person id="1461">Catherine Bouxin</person></persons></event></room><room name="Other"><event id="7696"><start>12:00</start><duration>01:30</duration><room>Other</room><title>Lunch</title><abstract /><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7696/</url><track>Breaks</track><persons /></event></room><room name="Auditorium"><event id="7226"><start>13:30</start><duration>00:45</duration><room>Auditorium</room><title>Operational hazards of managing PostgreSQL DBs over 100TB</title><abstract>How do you back up (and restore) a 100TB+ database? Well, maybe you don't.

In this talk I will share the peculiarities I encountered when managing huge PostgreSQL databases: backups, high availability challenges, how to keep vacuum under control...

When reading blog articles, best practices, and "how to" guides, things seem straightforward, but when you start bending PostgreSQL's limits, you will end up needing to question the most fundamental assumptions about how PostgreSQL works.

Over the last few years, my team has been exploring the boundaries of what PostgreSQL can do, and today I will share our findings with you (at least the ones I can!).</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7226/</url><track>Talks</track><persons><person id="1130">Teresa Lopes</person></persons></event></room><room name="Karnak"><event id="7692"><start>13:30</start><duration>00:45</duration><room>Karnak</room><title>The Cryptic Elephant: Column-Level Encryption for PostgreSQL</title><abstract>This presentation examines transparent encryption solutions for PostgreSQL databases, particularly addressing emerging regulatory requirements from DORA and PCI DSS 4.0.
While Full Disk Encryption (FDE) has traditionally provided protection against physical theft and improper disposal, new regulations mandate encryption for data at rest, in transit, and increasingly, data in use.
Dalibo introduces The Cryptic Elephant, an open-source Rust-based extension offering Transparent Column Encryption (TCE) compatible with all major versions of PostgreSQL. Unlike cluster-wide encryption approaches, this solution enables selective column encryption while maintaining application transparency. The architecture employs envelope encryption using unique Data Encryption Keys (DEK) protected by external Key Encryption Keys (KEK) managed through Key Management Systems like AWS KMS. Security is enhanced through audited cryptographic libraries (RustCrypto), and data is encrypted.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7692/</url><track>Sponsors</track><persons><person id="1462">Damien Clochard</person></persons></event></room><room name="Auditorium"><event id="7422"><start>14:25</start><duration>00:45</duration><room>Auditorium</room><title>Creating a “Dungeon Master” with Postgres and MCP</title><abstract>AI agents are said to be the next revolution, and of course they will need data.

But instead of building another boring chatbot, let's create a Dungeon Master for our next Dungeons &amp; Dragons campaign!
Using this practical and fun example, we will build an AI agent that runs entirely on PostgreSQL. We'll go beyond simple query generation to explore how to grant agents secure, contextual access to your database for complex, non-predictive tasks. You'll learn how to architect an MCP (Model Context Protocol) server to prevent rogue AIs from dropping your tables while still empowering them to act as creative partners.

Join this quest to save the realm of elephants and learn to forge the weapons you'll need for the coming run.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7422/</url><track>Talks</track><persons><person id="978">Matt Cornillon</person></persons></event></room><room name="Karnak"><event id="7693"><start>14:25</start><duration>00:45</duration><room>Karnak</room><title>Synchronisation of logical replication slots</title><abstract>Since the introduction of logical decoding, followed by integrated logical replication in version 10, it is possible to use PostgreSQL transaction logs to feed a logical replication stream. Until now, the reliability of the whole system depended on the use of a slot to store the progress of replication in the transaction logs. 

But what happens when the instance holding this slot disappears? How can the slot be made fault-tolerant? 

This is what we will discuss in this presentation. We will also focus on what PostgreSQL offers starting with version 17.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7693/</url><track>Sponsors</track><persons><person id="1463">Stéphane Schildknecht et Sébastien Lardière</person></persons></event></room><room name="Other"><event id="7697"><start>15:10</start><duration>00:20</duration><room>Other</room><title>Tea</title><abstract /><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7697/</url><track>Breaks</track><persons /></event></room><room name="Auditorium"><event id="7533"><start>15:30</start><duration>00:45</duration><room>Auditorium</room><title>Customizing the Wordle Game Experience with PostgreSQL</title><abstract>Discover the endless possibilities of PostgreSQL as a gaming platform by harnessing its ability to customize the Wordle game. Explore how PostgreSQL empowers developers to redefine the game experience through three core entities: 

- The available word set. Do we want to allow all words or only popular and well-known ones? Do we want to limit the set to some topic, e.g., IT slang terms, or restrict it geographically? Should we restrict the guess word length to 5 characters, or can we vary it? For which languages is it better to use shorter or longer words?

- The standard guess-checking function returns yellow and green position marks. But what if we used another scoring function, like Levenshtein distance? Or even bigram/trigram positions instead of single characters?

- And the move acceptance function. Should it check the guess only against the initial word set? Or allow any word in the target language? Or allow any word restricted by a simple regular expression?

Gain insights into curating word dictionaries, enabling support for non-English languages, and implementing innovative gameplay mechanics. With PostgreSQL's advanced features and extensibility, attendees will unlock the potential to create unique and engaging gaming experiences. Join me as we explore the transformative power of PostgreSQL in the world of Wordle.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7533/</url><track>Talks</track><persons><person id="72">Pavlo Golub</person></persons></event></room><room name="Karnak"><event id="7690"><start>15:30</start><duration>00:45</duration><room>Karnak</room><title>Actionable Observability: Finding the root cause of Slow Queries in Postgres</title><abstract>Postgres offers many tools for query observability, but it’s a struggle to use them effectively. Either they collect too little data and miss performance problems, or they log everything, creating noise and confusion. In this talk, we present a practical framework to configure Postgres logging and query analysis with a single goal: identify the queries worth investigating. Drawing on experience monitoring Postgres across managed services, Kubernetes, and self-hosted environments, we explore how to balance visibility, performance impact, and signal-to-noise. Learn how tools like pg_stat_statements, slow query logging, auto_explain, and built-in metrics complement each other—and where they fall short when used alone. We also look at common observability mistakes, such as premature global tuning, over-instrumentation, and logging without a clear question.
Finally, we show how pganalyze supports these practices and how observability data paired with AI tools can enhance investigation workflows.</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7690/</url><track>Sponsors</track><persons><person id="1460">Ryan Booz</person></persons></event></room><room name="Auditorium"><event id="7246"><start>16:25</start><duration>00:45</duration><room>Auditorium</room><title>Postgres Therapy Session: What the Elephant Can Learn from Its Rivals.</title><abstract>This is not another “what’s new in 18” tour, but a candid look at what’s still missing.
Postgres may rank #4 on DB-Engines and #1 in developers’ hearts, yet its rivals still flaunt enviable tricks such as Oracle’s database resident connection pool, DB2’s adaptive compression, SQL Server’s Query Store, MongoDB's TTL indexes, and DuckDB’s direct object-storage querying.

We’ll show how these and many other features work and why they matter, and dig into the historical reasons Postgres didn’t adopt them.
Then we’ll ask the hard question: what can we ship now cleanly in core or as extensions?</abstract><url>https://www.postgresql.eu/events/pgdayparis2026/schedule/session/7246/</url><track>Talks</track><persons><person id="1107">Mayuresh Suresh Bagayatkar</person></persons></event></room></day></schedule>