We mentioned in another recipe that logging every query on a highly available database handling a high volume of traffic is undesirable. DBAs often solve this problem by logging only slow queries: they set log_min_duration_statement to a reasonable number of milliseconds in postgresql.conf. Thereafter, only queries that cross this threshold are logged, along with bound parameters if the query was a prepared statement.
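For reference, such a threshold might be configured like this in postgresql.conf; the 250 ms value here is purely illustrative, not a recommendation:

```ini
# Log any statement that runs for 250 ms or longer.
# -1 disables duration-based logging; 0 logs every statement.
log_min_duration_statement = 250
```

Changing this setting does not require a restart; a configuration reload (for example, via SELECT pg_reload_conf()) is sufficient.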
We strongly encourage this practice, as it is invaluable for catching outlying queries that could benefit from optimization. Unfortunately, faster queries remain invisible to us. Worse, queries that execute often probably have their data sources cached in memory, so they are unlikely to contribute to I/O. The database could be executing an inefficient or redundant query thousands of times per second, and aside from elevated server load, we would never know.
This situation is not conducive to long-term viability of a highly-available...