Advice on debugging a VERY slow delete
I have a DELETE which is taking forever. It runs, but very, very slowly.
The table owns 14 other tables via ON DELETE CASCADE foreign keys, and some of those have 100,000+ rows (and will be into seven figures very quickly). The time looks N-squared in the row counts, so it's presumably doing a linear scan of these tables for each row deleted.
All the table definitions came from Sequelize, which is a Node.js ORM. It's a little bleeding edge with Postgres, and I have no idea what it might have got wrong. Obviously it must be the ON DELETE CASCADE stuff, but how?
I just discovered that an aggregate like max(id) also takes ages, for something like:
explain select max(id) from "StopTimes" where "DataSetId" = 175;
Result  (cost=21.82..21.83 rows=1 width=0)
  InitPlan 1 (returns $0)
    ->  Limit  (cost=0.43..21.82 rows=1 width=4)
          ->  Index Scan Backward using "StopTimes_pkey" on "StopTimes"  (cost=0.43..160392.31 rows=7497 width=4)
                Index Cond: (id IS NOT NULL)
                Filter: ("DataSetId" = 175)
Oh, only the first time; it runs in a flash the second time. Good job, Postgres!
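(For what it's worth, that plan walks the whole primary-key index backwards and filters every tuple on "DataSetId". A composite index would let Postgres answer max(id) for one DataSetId with a single index probe. A sketch, assuming no such index exists yet; the index name is made up:

```sql
-- Hypothetical composite index: DataSetId first, id second, so
-- max(id) for a given DataSetId is the last entry of one index range.
CREATE INDEX "StopTimes_DataSetId_id_idx" ON "StopTimes" ("DataSetId", id);
```

With that in place the backward pkey scan plus filter should disappear from the plan.)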
I killed my top-level delete and I get this:
wikitimetable=> delete FROM "DataSets" where id = 175;
^CCancel request sent
ERROR: canceling statement due to user request
CONTEXT: while deleting tuple (38873,68) in relation "StopTimes"
SQL statement "DELETE FROM ONLY "public"."StopTimes" WHERE $1 OPERATOR(pg_catalog.=) "DataSetId""
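That CONTEXT line shows the exact statement each cascade fires per parent row. One way to see how such a child-table scan behaves, without actually deleting anything, is to run the same predicate as a SELECT by hand (a sketch; substitute the real value for the $1 parameter):

```sql
-- Run the cascade's predicate by hand with the parameter filled in;
-- a Seq Scan node here means the foreign-key column has no usable index.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM "public"."StopTimes" WHERE "DataSetId" = 175;
```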
and I tried EXPLAIN, but it just says:
wikitimetable=> explain delete FROM "DataSets" where id = 175;
Delete on "DataSets"  (cost=0.00..6.08 rows=1 width=6)
  ->  Seq Scan on "DataSets"  (cost=0.00..6.08 rows=1 width=6)
        Filter: (id = 175)
This is a real pain, as I am about to start using geo stuff, and I need to be able to add and delete these datasets. I don't need it to take seconds, but a minute even for a big batch was par on MySQL, so that's our ballpark.
I think I am getting to the bottom of this.
I am certainly missing indexes within the schema tree. Every table has a DataSetId index, but they don't have the other foreign-key indexes they need.
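You can hunt for these from the catalogs. A sketch that lists foreign-key columns which are not the leading column of any index (it only checks single-column foreign keys, which covers the common ORM case):

```sql
-- Single-column foreign keys whose referencing column is not the
-- first column of any index on the referencing table.
SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS fk_column,
       c.conname            AS constraint_name
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = c.conkey[1]
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          AND i.indkey[0] = c.conkey[1]   -- indkey is 0-based, conkey is 1-based
      );
```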
The cascade goes through the higher tables first; those then scan rather inefficiently over 3,500,000 rows, 1,200 times, and so on.
It just didn't show up in MySQL.
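The fix is then just an index per missing foreign-key column. A sketch with made-up names (the Trips → StopTimes path and the TripId column are my guesses at the schema, not something from the actual definitions):

```sql
-- Hypothetical: if StopTimes references Trips via a TripId foreign key,
-- each cascaded delete of a Trips row scans all of StopTimes without this.
-- CONCURRENTLY builds the index without blocking writes to the table.
CREATE INDEX CONCURRENTLY "StopTimes_TripId_idx" ON "StopTimes" ("TripId");
```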
P.S. This verification stuff on every post is death. Please get rid of it.
Please add the output of explain (analyze, verbose, buffers) instead of the "plain" explain (and please wrap it in [code] blocks, because otherwise this won't be readable). Or upload it to http://explain.depesz.com and post the link.
And please post the complete CREATE TABLE statements, including any indexes, for the table you are deleting from and for every table that has a foreign key referencing it.
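(For reference, the requested form would be something like the following; wrapping it in a transaction means the timed delete can be rolled back rather than committed:

```sql
BEGIN;
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
DELETE FROM "DataSets" WHERE id = 175;
ROLLBACK;
```

Note that EXPLAIN ANALYZE really executes the statement, hence the ROLLBACK.)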