
Performance Tips

If you've been working with databases a lot, chances are you already know how to deal with performance problems. But if you don't, here are some tips on identifying and fixing them. Some of them will work for any SQL database, others are specific to postgres.

Spot the problem

The first thing you do when things take too long is figure out where the problems are. How you approach that will depend largely on your operating system, application, and available debugging tools. But if it's just you and postgres, here's what you do.

There's a setting in postgresql.conf called log_min_duration_statement. It's a duration, measured in milliseconds: the database logs any SQL statement that takes longer than the given time. You typically want to start with this set pretty high, or you'll end up logging far too much. If nothing is logged, you lower the setting until you start seeing your longest queries.
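For example, to log every statement that takes longer than half a second:

# in postgresql.conf: log any statement that runs for more than 500 ms
log_min_duration_statement = 500

After editing the file, reload the server configuration (for instance by running SELECT pg_reload_conf(); as a superuser). You can also try out a threshold per session with SET log_min_duration_statement = 500; which also requires superuser privileges.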

This will usually tell you which queries take too long, and you can start investigating why. Sometimes you find that no query takes particularly long, in which case you may be facing a "death of a thousand cuts": the application simply executes too many queries.
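If you suspect that, and you have the pg_stat_statements contrib module installed, a query along these lines will show which statements eat the most time in aggregate, even when every individual call is fast (the exact column names vary between versions):

-- Biggest total time consumers across all calls.
-- Assumes pg_stat_statements is installed and preloaded.
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;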

Analyze slow queries

Let's say you've found your slow query. It's usually some SELECT query. You can get a full rundown of how the database plans to execute it by running the same query, but prefixed with the EXPLAIN keyword:

EXPLAIN SELECT *
FROM Table1, Table2, Table3
WHERE
    Table1.x = Table3.x AND
    Table2.y <> Table3.x
ORDER BY Table1.x

This will explain the query plan. The plan is a combination of sub-plans, each of which is often another combination of sub-sub-plans. Thus the overall plan is shaped like a tree.

The plan shows an estimate of how many rows each sub-plan will produce, based on the statistics the database has gathered for your data, and how expensive it will be. Each node carries two cost figures: the estimated startup cost, incurred before the first row can be returned, and the estimated total cost. The costs are not in seconds, or milliseconds, or any fixed unit; by default, a cost of 1.0 corresponds to reading one page sequentially from disk. It's an abstract figure that lets the query planner compare alternative plans and pick what it thinks will be the cheapest.

                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 Sort  (cost=36873.77..37387.19 rows=205368 width=56)
   Sort Key: Table1.x
   ->  Nested Loop  (cost=32.25..4710.25 rows=205368 width=56)
         Join Filter: (Table2.y <> Table3.x)
         ->  Seq Scan on Table2  (cost=0.00..34.00 rows=2400 width=4)
         ->  Materialize  (cost=32.25..33.11 rows=86 width=52)
               ->  Hash Join  (cost=1.31..32.16 rows=86 width=52)
                     Hash Cond: (Table3.x = Table1.x)
                     ->  Seq Scan on Table3  (cost=0.00..22.30 rows=1230 width=36)
                     ->  Hash  (cost=1.14..1.14 rows=14 width=16)
                           ->  Seq Scan on Table1  (cost=0.00..1.14 rows=14 width=16)
(11 rows)

What does this mean? An easy way to read it is "inside out," from the most-indented back to the least-indented. In this case that means back to front!

Reading a query plan

You may want to run your EXPLAIN (or, later, EXPLAIN ANALYZE) output through explain.depesz.com to make this easier. Either way, you start at the innermost, most deeply indented part of the plan. In this case we find a "Seq Scan" of Table1:

                           ->  Seq Scan on Table1  (cost=0.00..1.14 rows=14 width=16)

This means a "sequential scan" of Table1, where the executor needs to run through all of a table's data. That's often a bad thing, especially if your tables get very big, but sometimes there's simply no way around it.

The result of that scan is stored in an in-memory hash table:

                     ->  Hash  (cost=1.14..1.14 rows=14 width=16)

That hash table is then used to join rows from Table3 to those of Table1. The executor will achieve that by running through each row in Table3 (again sequentially) and looking it up in the hash table it has generated from Table1:

               ->  Hash Join  (cost=1.31..32.16 rows=86 width=52)
                     Hash Cond: (Table3.x = Table1.x)
                     ->  Seq Scan on Table3  (cost=0.00..22.30 rows=1230 width=36)
                     ->  Hash  (cost=1.14..1.14 rows=14 width=16)

A hash table is a data structure that's generally very fast at looking things up. But don't be too sure. In this example query the tables are quite small, and all the data will fit into a single in-memory hash table. But if you're chasing down a performance problem, there's a good chance that that's not the case.

When the data doesn't fit in the memory the hash table is allowed (the work_mem setting), the backend reads a chunk of data from the sub-plan indented under the Hash, stores it as a hash table, performs the part of the Hash Join it can with that chunk, then throws the hash table away and starts a new one with the next chunk of data. If it has to do that too many times, a hash join can actually be very costly.
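EXPLAIN ANALYZE (more on it below) typically reports this on the Hash node as a number of "Batches"; more than one batch means the hash had to be split up. The memory budget is the work_mem setting, which you can experiment with per session. A sketch, with a made-up value:

-- Let each individual sort or hash operation in this session use up to
-- 64 MB before spilling to disk. A complex query may run several such
-- operations at once, so don't set this blindly high.
SET work_mem = '64MB';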

That risk of repeated hashing is why we next see this:

         ->  Materialize  (cost=32.25..33.11 rows=86 width=52)

What is Materialize? That's where the executor takes the results from a part of the plan and writes them out as a kind of temporary table. That can be useful if it has to go through the result many times, or if regenerating it on demand would be expensive: producing the data up front and storing it is better than recomputing it on every pass while also doing other work.

Looking a bit further up, we see

   ->  Nested Loop  (cost=32.25..4710.25 rows=205368 width=56)
         Join Filter: (Table2.y <> Table3.x)
         ->  Seq Scan on Table2  (cost=0.00..34.00 rows=2400 width=4)
         ->  Materialize  (cost=32.25..33.11 rows=86 width=52)

A Nested Loop is often bad news, performance-wise. This one means that the executor will go through all of Table2 (using a sequential scan, as above) and for each row, go through each of the rows coming out of the materialized sub-plan and check whether it matches the condition Table2.y <> Table3.x.

Why is this bad news? Because if you have to do this for one subplan or table that produces x rows and another that produces y rows, this loop will execute x × y times. Those numbers can get pretty big if x and y run in the thousands or millions!

In this case of course, it's not so bad. The estimate is that the sequential scan on Table2 will produce 2,400 rows and the materialized sub-plan only 86. But 2,400 × 86 already makes 206,400 combinations to check, of which the planner expects 205,368 to survive the join filter!

Finally, we get to see very clearly why it made sense to read this plan back-to-front:

 Sort  (cost=36873.77..37387.19 rows=205368 width=56)
   Sort Key: Table1.x

This is where the executor sorts the results that come out of the Nested Loop by Table1.x.

The sorting is usually an afterthought. But sometimes the executor underestimates the amount of memory it needs to sort the data efficiently; when the sort outgrows its memory budget (again, work_mem), it falls back to sorting on disk, which can be very slow. You'll usually want your data sorted (if nothing else to make your application behave predictably) but for performance testing, it sometimes makes sense to try the same query without the ORDER BY.
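EXPLAIN ANALYZE (more on it below) makes this visible: the Sort node reports whether it sorted in memory or on disk. An illustrative fragment, with made-up timings:

 Sort  (cost=36873.77..37387.19 rows=205368 width=56) (actual time=912.345..1024.678 rows=205368 loops=1)
   Sort Key: Table1.x
   Sort Method: external merge  Disk: 11016kB

A Sort Method of "external merge" with a Disk figure means the sort spilled to disk; an in-memory sort shows up as "quicksort" with a Memory figure instead.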

You'll find many more things in query plans than I show here. A nice one you'll often see is called Index Scan, where the executor looks through an index to identify the rows it needs before accessing the rows themselves.

You generally don't need to worry about an index scan. Indexes are usually much smaller than the real tables and tend to be in cache. There is one thing you should know though: as of postgres 9.0, even if the index has all the data that your query needs, the executor will still need to visit the actual row. The short version of the good but complicated reason: the index doesn't store the visibility information that says which transactions can see a row, so the executor has to check the table itself. That visit may cost you a non-sequential disk access, which is not a problem with solid-state drives but potentially very expensive on conventional hard disks, possibly dozens of milliseconds.
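As a sketch, an index on the join column might let the planner replace the sequential scan on Table3 we saw earlier with an index scan; the index name here is made up, and whether the planner actually uses the index depends on the statistics:

-- Hypothetical index on Table3's join column.
CREATE INDEX table3_x_idx ON Table3 (x);
-- Refresh the table's statistics so the planner can estimate well:
ANALYZE Table3;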

What's the real cost?

The output of EXPLAIN is helpful, but it's not always enough. Sometimes the statistics just aren't representative of the real data your query will dig up, and the plan you get runs much slower than its estimates suggest. And sometimes you just want to see how long a query really takes, especially when you have an alternative query and you want to compare the two for speed.

In that case, try prefixing your query with "EXPLAIN ANALYZE" instead of just EXPLAIN. This will actually run your query and tell you how long each subplan took.
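The output looks like plain EXPLAIN output, but each node gains an "actual" section with real timings and row counts. An illustrative fragment, with made-up numbers:

 Hash Join  (cost=1.31..32.16 rows=86 width=52) (actual time=0.052..0.543 rows=79 loops=1)

Comparing the estimated rows with the actual rows is one of the quickest ways to spot a plan gone wrong: if the planner expected 86 rows and got millions, everything built on top of that estimate is suspect.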

Repeat: this will actually run your query. If your query does an UPDATE or a DELETE or an INSERT, or if it simply brings your database server to a crawl and upsets your users, that will still happen when you "EXPLAIN ANALYZE" it. Be very careful with this feature.
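If you must time a data-modifying statement, one common trick is to wrap it in a transaction that you roll back, so the timing is real but the changes don't stick. A sketch, with a made-up UPDATE (the statement still takes its locks while it runs, so avoid this on a busy production table):

BEGIN;
-- Runs the UPDATE for real inside the transaction and reports timings...
EXPLAIN ANALYZE UPDATE Table1 SET z = z + 1 WHERE x = 42;
-- ...then undoes the changes.
ROLLBACK;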

There is another thing to watch out for. Sometimes you "EXPLAIN ANALYZE" a query and get bad results: the query takes ages. Then you do it again and now it's suddenly much faster. That's because the first run fetched the data the query needed from disk, and now it's cached in memory.

Therefore, when you're figuring out a slow query, don't assume that it will always run at the same speed.

In particular, try doing EXPLAIN ANALYZE twice instead of once and compare the two runs. If they're both fast enough, the data for your query may simply happen to be in the operating system's filesystem cache right now, even though under normal load there isn't enough free memory to keep it there. Try testing the same query again tomorrow, or if you're really sure it'll work, try it now but on different data.

If both runs are too slow, you've got a query you can probably optimize in isolation—or there may be a change you can make to your application so that it doesn't need such a difficult query.

Finally, if the first run is too slow but the second one isn't, start worrying about disk accesses. Are there indexes you can add to help your query find the right rows more easily? Can you make your query access less data? Can you put the data you need closer together, e.g. in a single table?
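As a sketch of that last idea: if your query always reads rows with nearby values of x, physically reordering the table along an index turns scattered disk accesses into largely sequential ones. This reuses the hypothetical index from earlier; note that CLUSTER rewrites the whole table under an exclusive lock, so plan for downtime:

-- Rewrite Table3 in the physical order of the index on x.
CLUSTER Table3 USING table3_x_idx;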


That's all for now. I'll add more when I have time—or when I receive your suggestions! Some ideas:

  • Indexing.
  • Retrieving more of your data in a single query.
  • The "nested-loop query" antipattern and how to replace it with joins.
  • Replacing "if" clauses on the client with outer joins.
  • Avoiding OR.
  • When to avoid nested queries.
  • The "deep-narrow/shallow-wide" join antipattern.
  • Avoiding count() when EXISTS will do.
  • Clustering.
  • Vacuum.
  • Recognizing degraded statistics.
  • Tuning resource-usage parameters.