In production environments with large databases and high concurrent access, it might happen that queries that used to run in tens of milliseconds suddenly take several seconds.
Likewise, a summary query for a report that used to run in a few seconds might take half an hour to complete.
Here are some ways to find out what is slowing them down.
Questions of the type "Why is this different today from what it was last week?" are much easier to answer if you have some kind of historical data collection in place.
The monitoring tools we mentioned in the earlier recipe, Providing PostgreSQL information, which track general server characteristics such as CPU and RAM usage, disk I/O, network traffic, and load average, are very useful for seeing what has changed recently, and for trying to correlate those changes with the observed performance of some database operations.
Also, collecting historical statistics data from pg_stat_...
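For example, one simple way to collect historical data from the cumulative statistics views is to snapshot them periodically into a table and compare snapshots later. The following is a minimal sketch, assuming a snapshot table named `stats_history` and an external scheduler such as cron; neither is prescribed by the text, and `pg_stat_user_tables` is just one of the views you might record.

```sql
-- Minimal sketch (assumption: table name and schedule are illustrative).
CREATE TABLE IF NOT EXISTS stats_history (
    snapshot_time timestamptz NOT NULL DEFAULT now(),
    relname       name        NOT NULL,
    seq_scan      bigint,
    idx_scan      bigint,
    n_tup_ins     bigint,
    n_tup_upd     bigint,
    n_tup_del     bigint
);

-- Run this periodically, e.g. every few minutes from cron or pg_cron:
INSERT INTO stats_history (relname, seq_scan, idx_scan,
                           n_tup_ins, n_tup_upd, n_tup_del)
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables;

-- Later, see which tables were scanned most over the last day:
SELECT relname,
       max(seq_scan) - min(seq_scan) AS seq_scans_in_window
FROM stats_history
WHERE snapshot_time > now() - interval '1 day'
GROUP BY relname
ORDER BY seq_scans_in_window DESC;
```

Because the `pg_stat_` views hold counters that accumulate since the last statistics reset, subtracting two snapshots gives the activity within the window, which is what you need to spot a table whose access pattern changed between last week and today.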