Time: 11:50 - 12:40
This session introduces our heterogeneous architecture, based on GPU and NVMe-SSD, for accelerating typical big-data queries.
In general, the key to performance is not only the number of processor cores and their clock speed, but also the data throughput that feeds these fast, parallel processors. Our architecture uses GPUs for massively parallel query execution, NVMe-SSDs for ultra-wide storage bandwidth, and I/O expansion boxes to build a hierarchical I/O network topology inside a server system.
The software also needs to be optimized for this special hardware. SSD-to-GPU Direct SQL Execution is a unique feature of PG-Strom. It loads PostgreSQL data blocks from NVMe-SSD onto the GPU's device memory using peer-to-peer (P2P) DMA, then runs SQL workloads (WHERE, JOIN and GROUP BY) on the GPU device before the data reaches the CPU/RAM of the host system. This reduces the number of records the CPU has to process, so it behaves like I/O acceleration. Hash partitioning and parallel scan of partition leaves are new features of PostgreSQL v11. They allow a large dataset to be distributed across multiple NVMe-SSDs by means of tablespaces, activating multiple pairs of NVMe-SSDs and GPUs in parallel.
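As a rough illustration, the partitioning scheme above could be declared as follows. This is a minimal sketch, not the actual benchmark schema: the table, column, tablespace names and mount points are all hypothetical, and each tablespace is assumed to reside on its own NVMe-SSD.

```sql
-- One tablespace per NVMe-SSD (mount points are hypothetical)
CREATE TABLESPACE nvme0 LOCATION '/nvme0/pgdata';
CREATE TABLESPACE nvme1 LOCATION '/nvme1/pgdata';
CREATE TABLESPACE nvme2 LOCATION '/nvme2/pgdata';

-- Hash-partitioned table (PostgreSQL v11 syntax); columns are illustrative
CREATE TABLE lineorder (
    lo_orderkey  bigint,
    lo_custkey   integer,
    lo_revenue   numeric
) PARTITION BY HASH (lo_orderkey);

-- One partition leaf per tablespace, so each parallel scan stream
-- reads from a different NVMe-SSD (and can pair with its own GPU)
CREATE TABLE lineorder_p0 PARTITION OF lineorder
    FOR VALUES WITH (MODULUS 3, REMAINDER 0) TABLESPACE nvme0;
CREATE TABLE lineorder_p1 PARTITION OF lineorder
    FOR VALUES WITH (MODULUS 3, REMAINDER 1) TABLESPACE nvme1;
CREATE TABLE lineorder_p2 PARTITION OF lineorder
    FOR VALUES WITH (MODULUS 3, REMAINDER 2) TABLESPACE nvme2;
```

With this layout, a scan of `lineorder` fans out into three partition-leaf scans, each bound to a distinct NVMe-SSD through its tablespace.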
An I/O expansion box is an external chassis for installing PCIe devices, such as SSDs or GPUs, outside the server system. In addition, some products have an internal PCIe switch that forwards PCIe packets locally when both peers are installed in the same box. When we run SSD-to-GPU Direct SQL Execution between devices in the same box, the bulk of the traffic stays enclosed within the I/O expansion box and does not leak I/O load into the server system. Combined with table partitioning, this enables multiple SSD-to-GPU data streams to run in parallel, one per I/O expansion box.
As a result, our single-node configuration draws out the maximum capability of GPU and NVMe-SSD, accelerating typical big-data workloads to more than 10GB/s of query execution throughput. We will show benchmark results based on three I/O expansion boxes, each equipped with a GPU and SSDs, although the design is not limited to three.