Time: 09:30 - 10:20
Prometheus includes its own time-series database built specifically for metrics monitoring, but it is by now well known that a separate, persistent store is needed for long-term historical metrics. Many systems target this niche, and it seems like every month a new one appears to tackle exactly this problem. This talk will focus on an older but very powerful option for your long-term Prometheus metrics storage: PostgreSQL.
Why PostgreSQL? First, while PromQL is quite expressive, it becomes limiting for the more complex queries users eventually realize they want to run on historical metrics data, and it lacks the ability to JOIN metrics with other metadata of interest. With PostgreSQL, users get a full SQL interface and can ask those complex questions directly. Second, PostgreSQL can now be scaled to tens of terabytes via the TimescaleDB extension, which also adds useful SQL functions and optimizations for working with time-series data.
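To make this concrete, here is a sketch of the kind of time-series aggregation TimescaleDB makes convenient in plain SQL. It uses TimescaleDB's `time_bucket` function; the `metrics` hypertable and the metric name `node_cpu_usage` are hypothetical placeholders, not part of any fixed schema:

```sql
-- Illustrative sketch: 5-minute averages and maxima over the last day,
-- assuming a hypothetical hypertable metrics(time, name, value).
SELECT time_bucket('5 minutes', time) AS bucket,
       name,
       avg(value) AS avg_value,
       max(value) AS max_value
FROM metrics
WHERE name = 'node_cpu_usage'
  AND time > NOW() - INTERVAL '1 day'
GROUP BY bucket, name
ORDER BY bucket;
```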
The two pieces that enable this full-SQL, long-term storage backend for Prometheus are pg_prometheus, a custom Prometheus datatype for PostgreSQL, and prometheus-postgresql-adapter, a remote storage adapter for PostgreSQL. We'll present both, along with an overview of TimescaleDB, and conclude with a discussion of the power of SQL for querying and visualizing metrics data, using a variety of real-world examples (e.g., complex queries involving multiple WHERE predicates, sub-queries, LIMITs, JOINs, etc.). We'll show why these complex queries are often useful, and even necessary, to act effectively on the data being collected, and how they can be made to run even across billions of rows of data.
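As a taste of the JOIN-style queries discussed above, the sketch below joins metric samples against infrastructure metadata. The `hosts` table and its columns are hypothetical, invented for illustration; the `labels` JSONB column reflects one common way label sets are stored in PostgreSQL, not a guaranteed schema:

```sql
-- Illustrative sketch: per-datacenter hourly request totals, joining a
-- hypothetical hosts(hostname, datacenter) metadata table against
-- samples in metrics(time, name, value, labels jsonb).
SELECT h.datacenter,
       time_bucket('1 hour', m.time) AS hour,
       sum(m.value) AS total_requests
FROM metrics m
JOIN hosts h ON h.hostname = m.labels->>'instance'
WHERE m.name = 'http_requests_total'
  AND m.time > NOW() - INTERVAL '7 days'
GROUP BY h.datacenter, hour
ORDER BY hour, total_requests DESC
LIMIT 100;
```

A query like this has no direct PromQL equivalent, since PromQL cannot join against relational metadata that lives outside the metric label set.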