PostgreSQL implementation on a large operational database. Yes, size matters!
Room: Baltic I
Several months ago, my responsibilities changed, and I had to manage a new application, freshly developed by colleagues.
Its aim was to collect all sales transactions, worldwide and forever.
After discovering the application, I asked for infrastructure changes:
* To fix design errors
* To industrialize the European prototype and enable its worldwide deployment (2 other platforms: ASIA & CHINA)
Then, to cope with the growing volume and to limit the risk in case of failure, we looked for solutions to secure our application. Some of these are already implemented, others are still being tested.
Table of contents
- Project discovery
- Context definition
- Source of data: storing all sales transactions worldwide.
- Adding 300 million records per year, no data cleanup (keeping sales transactions forever)
- One tricky XML field: the POSLOG
- Stepping onboard an existing project!
- 1 table represents 95% of the database size.
- Implementing new business needs
- Hybrid implementation
- Using on-premise & cloud solutions
- Adding some technical needs
- Using Puppet
- For deployments
- On the database structure
- For the global application deployment
- JENKINS for platform management
- Calling external jobs
- We need partitioning!
- Why partitioning?
- How to partition PostgreSQL database?
- Existing solutions
- Moving to the cloud
- Production doesn't behave well
- The SELECT max(version)
- BRIN index
- Let's spread the data
- What is the right choice?
- Creating a lot of application clusters?
- Sharding the database?
- Impact of data management
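The partitioning and BRIN topics in the outline can be illustrated briefly. Below is a minimal sketch of the kind of DDL involved: PostgreSQL declarative range partitioning (available since PostgreSQL 10) plus a BRIN index on the partition key. The table and column names (`sales_transactions`, `sold_at`, `poslog`) are hypothetical illustrations, not the application's real schema; the Python code only builds the SQL strings.

```python
# Sketch of PostgreSQL declarative partitioning and a BRIN index.
# All table/column names here are invented for illustration.

def partitioned_table_ddl(table="sales_transactions", years=(2023, 2024)):
    """Build DDL strings: a parent table range-partitioned on the sale
    date, one partition per year, and a BRIN index on the date column.
    BRIN indexes stay tiny and suit large append-only tables where the
    indexed column correlates with physical row order."""
    stmts = [
        f"CREATE TABLE {table} ("
        "  id bigint,"
        "  sold_at date NOT NULL,"
        "  poslog xml"
        ") PARTITION BY RANGE (sold_at);"
    ]
    for y in years:
        stmts.append(
            f"CREATE TABLE {table}_{y} PARTITION OF {table} "
            f"FOR VALUES FROM ('{y}-01-01') TO ('{y + 1}-01-01');"
        )
    stmts.append(f"CREATE INDEX ON {table} USING brin (sold_at);")
    return stmts

for stmt in partitioned_table_ddl():
    print(stmt)
```

Range partitioning by date lets old years be detached or archived without touching hot partitions, which matters when one table holds 95% of the database and is never cleaned up.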
The following slides have been made available for this session: