EXPLAIN ANALYZE select extract(epoch from ts) AS ts, "name", "amount" from "blockrewards" as "b" inner join "messages" as "m" on "m"."id" = "b"."id" order by ts desc limit 1: 119.744 ms
                                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=6697.25..6697.37 rows=1 width=49) (actual time=118.531..119.376 rows=1 loops=1)
   ->  Gather Merge  (cost=6697.25..8805.10 rows=18066 width=49) (actual time=118.529..119.373 rows=1 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Sort  (cost=5697.23..5719.81 rows=9033 width=49) (actual time=108.848..108.851 rows=1 loops=3)
               Sort Key: (EXTRACT(epoch FROM m.ts)) DESC
               Sort Method: top-N heapsort  Memory: 25kB
               Worker 0:  Sort Method: top-N heapsort  Memory: 25kB
               Worker 1:  Sort Method: top-N heapsort  Memory: 25kB
               ->  Hash Join  (cost=669.78..5652.06 rows=9033 width=49) (actual time=25.357..100.472 rows=7228 loops=3)
                     Hash Cond: (m.id = b.id)
                     ->  Parallel Seq Scan on messages m  (cost=0.00..4686.35 rows=104135 width=28) (actual time=0.043..25.376 rows=83606 loops=3)
                     ->  Hash  (cost=398.79..398.79 rows=21679 width=37) (actual time=24.963..24.964 rows=21683 loops=3)
                           Buckets: 32768  Batches: 1  Memory Usage: 1759kB
                           ->  Seq Scan on blockrewards b  (cost=0.00..398.79 rows=21679 width=37) (actual time=0.020..12.201 rows=21683 loops=3)
 Planning Time: 0.405 ms
 Execution Time: 119.744 ms
(17 rows)
vs
EXPLAIN ANALYZE select extract(epoch from ts) AS ts, "name", "amount" from "blockrewards" as "b" inner join "messages" as "m" on "m"."id" = "b"."id" order by m.ts desc limit 1: 0.204 ms
                                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.71..4.75 rows=1 width=57) (actual time=0.173..0.174 rows=1 loops=1)
   ->  Nested Loop  (cost=0.71..87566.78 rows=21679 width=57) (actual time=0.172..0.172 rows=1 loops=1)
         ->  Index Scan Backward using idx_messages_ts on messages m  (cost=0.42..10122.09 rows=249923 width=28) (actual time=0.013..0.029 rows=25 loops=1)
         ->  Index Scan using blockrewards_pkey on blockrewards b  (cost=0.29..0.31 rows=1 width=37) (actual time=0.005..0.005 rows=0 loops=25)
               Index Cond: (id = m.id)
 Planning Time: 0.417 ms
 Execution Time: 0.204 ms
(7 rows)
Haha, the spooky ghosts of SQL implementations.
lol impressive
I get this kind of thing all the time writing image processing software. Since you're dealing with pretty hefty data sets and complex analysis, it's quite unforgiving of inefficient code. :V
Yeah, a lot of the work I do with concurrency is similar; nanoseconds shaved off signalling latency often compound really fast into dramatically higher throughput.
Looks like a good deal.
Good job finding that index (or creating it).
The index already existed; I just had to use ORDER BY m.ts DESC instead of ORDER BY ts DESC.
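For reference, judging from the "Index Scan Backward using idx_messages_ts" node in the fast plan, it's an ordinary btree on the timestamp column; roughly something like this (the exact definition is an assumption on my part):

-- assumed definition, matching the "Index Scan Backward using idx_messages_ts" node in the fast plan
CREATE INDEX idx_messages_ts ON messages (ts);

A plain ascending index is enough here: the planner just scans it backwards to return the newest row first and stops once the LIMIT is satisfied.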
Sheesh. Care to name the guilty SQL implementation?
Postgres 14.7
To be fair, it's understandable that Postgres doesn't know that extract(epoch from ts) preserves the ordering of m.ts. To Postgres it's just an arbitrary function that could return anything, I guess, so it can't use the index: it has to compute that value for every row first and only then sort. Spotting stuff like this is part of my job as a developer :)
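If you really wanted to keep sorting by the epoch expression itself, in principle you could also hand Postgres a matching expression index, so the pre-computed values come back already ordered. A rough sketch, not something from the actual schema (the index name is made up, and this only works if ts is a plain timestamp; extract on timestamptz isn't immutable, so the CREATE INDEX would be rejected in that case):

-- hypothetical: index the exact expression the query sorts by, so that
-- ORDER BY extract(epoch from ts) DESC can be served by a backward index scan
CREATE INDEX idx_messages_ts_epoch ON messages ((extract(epoch from ts)));

Qualifying the ORDER BY is the cheaper fix, of course, since the existing index already covers it.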
Ah, yeah, I guess "overwriting" the column with "AS ts" in this case would understandably cause such issues; I missed that detail. Well done finding it. 🙂
It would be the same on any implementation IMHO; at least MySQL behaves the same way.
Yeah, overwriting field names should be avoided if possible.
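In this particular query, even just renaming the output column would have done it: with no output column called ts, the ORDER BY falls through to the table column again and the index is usable. A rough sketch (ts_epoch is an arbitrary alias, and this assumes ts only exists on messages):

-- hypothetical rename: with no output column named "ts",
-- "order by ts" resolves to m.ts and the existing index can be used
select extract(epoch from ts) as ts_epoch, "name", "amount"
from "blockrewards" as "b"
inner join "messages" as "m" on "m"."id" = "b"."id"
order by ts desc limit 1;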