Operational hazards of managing PostgreSQL DBs over 100TB
October 21–24
Picture this: you start a new role, eager to learn and contribute your ideas! Your first task is to get familiar with the database setup, and then you start encountering these massive PostgreSQL databases — 100TB, 200TB, 300TB...
And you start questioning yourself: how do you back up (and restore) a 100TB+ database? And what about HA? Performance? Vacuum?
It should work the same way as for a 100GB database, right? Well, maybe not exactly.
Blog posts and best practice guides make PostgreSQL seem straightforward—until you push it to its limits. At extreme scale, you will find yourself questioning the most fundamental assumptions about how PostgreSQL works.
Over the last few years, my team at Adyen has been exploring the boundaries of what PostgreSQL can do, and today I will share our findings with you (at least the ones I can!).