
Query Optimizers Gone Wild - Loop Join to Table Scan

The Cost Based Optimizer is one of the cornerstones of modern database engines. Rule Based optimizers of the past could not make "intelligent" decisions on when to use an index versus scanning the table. The cost model allows decisions to be based on the size of the table (number of pages) and the number of rows. Yet every now and then, a query that should run in a reasonable time frame suddenly runs forever, never finishing. So why is the more sophisticated technology associated with occasional rare but spectacular failures?

The execution plan below is an example of a situation in which a Cost Based Optimizer produces a seriously flawed execution plan. The query is a three-table join. The Query Optimizer should have sufficient information to determine that this is in fact a one-to-one-to-one join: each row from the temp table #ADOS joins to exactly one row in DimPersonnel and exactly one row in DimTable.

[Figure: execution plan showing a loop join with a table scan on the inner source]
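For reference, a minimal sketch of the query shape being discussed. The actual query is not shown in the plan above, so the join columns (PersonnelID, TableID) and selected columns are assumptions for illustration only:

    -- Hypothetical sketch of the three-table, one-to-one-to-one join.
    -- Column names PersonnelID, TableID, etc. are assumed, not from the source.
    SELECT a.*, p.PersonnelName, t.TableValue
    FROM #ADOS a
    INNER JOIN dbo.DimPersonnel p ON p.PersonnelID = a.PersonnelID
    INNER JOIN dbo.DimTable t ON t.TableID = a.TableID;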

Note: in the example above, the inner source of the second loop join (from the right) is a permanent table. The loop join with an inner source scan is more prone to occur when the inner source is a table variable.

Below are the details for the first join: the outer source temp table #ADOS and the inner source table DimPersonnel. There are 362,181 rows in the temp table, with no search arguments. The temp table is about 80MB (IO cost 1 corresponds to 10.5MB). There are 362,181 executes of the Loop Join inner source index seek, with an estimated number of rows of 1 for each execute.

[Figures: outer source temp table #ADOS scan details and inner source DimPersonnel index seek details]
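As an aside, one way to see the per-operator estimated versus actual row and execute counts discussed here is to run the query with plan statistics enabled, for example:

    -- Emits the actual execution plan as a rowset, with per-operator
    -- Rows and Executes columns alongside the optimizer's estimates.
    SET STATISTICS PROFILE ON;
    -- ... run the query here ...
    SET STATISTICS PROFILE OFF;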

Below is the output of the first join. For some reason, the estimated row output shown is 1. In situations where the data distribution statistics lead to the conclusion that there are zero rows, the estimated rows will still be displayed as 1, so it is not clear whether the estimate is actually one or zero. This is the source of the error, even though the cause is not clear.

[Figure: first join output details showing estimated number of rows 1]
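One way to investigate whether the statistics themselves point to a zero-row estimate is to inspect the histogram on the join column. A sketch, assuming a statistics or index object named after the hypothetical join column used earlier:

    -- Inspect the distribution statistics on the join column.
    -- The statistics name (PersonnelID) is an assumption for illustration.
    DBCC SHOW_STATISTICS ('dbo.DimPersonnel', PersonnelID);

    -- Statistics on the temp table can also be refreshed explicitly
    -- before the query is compiled:
    UPDATE STATISTICS #ADOS;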

It is in the next step after the error that the disaster occurs. In this case, there is no index on the join condition of the third table, DimTable; because the expected usage is a high row count, a table scan is a reasonable execution plan. The table is about 900MB, corresponding to a scan IO cost of 84. The cost of a loop join executed 362,000 times with an inner source index seek could be as high as 1,000 or more (based on cost 1 corresponding to 320 inner source index seeks, 362,000 / 320 ≈ 1,130), and less if the table is not large.

[Figures: DimTable scan details and loop join with inner source table scan details]
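As a rough check on these numbers, here is the arithmetic under the cost conventions cited above (IO cost 1 corresponds to 10.5MB scanned, or to 320 index seeks):

    scan of DimTable, one execute:               900MB / 10.5MB  ≈  84
    loop join via index seek, 362,181 executes:  362,181 / 320   ≈  1,130
    loop join via table scan, 362,181 executes:  362,181 × 84    ≈  30,400,000

The plan was costed as if the scan runs once (cost 84); at the actual outer row count the same plan corresponds to a cost of roughly 30 million.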

The problem is that the execution plan employs a loop join with a scan of a large inner source table. If there is in fact only 1 row coming from the outer source, then this is the better plan (without an index on the inner source). The alternative plan is a hash join to the inner source, which is somewhat more expensive than the loop join for a single row from the outer source. However, the hash join is relatively insensitive to the number of rows from the outer source, while the loop join requires a full scan for each row from the outer source.
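When this plan occurs, one workaround is to force the hash join with a join hint. A sketch, using the same hypothetical query shape as above:

    SELECT a.*, p.PersonnelName, t.TableValue
    FROM #ADOS a
    INNER JOIN dbo.DimPersonnel p ON p.PersonnelID = a.PersonnelID
    INNER HASH JOIN dbo.DimTable t ON t.TableID = a.TableID;
    -- HASH forces a hash join to DimTable; note that specifying a join
    -- hint also enforces the join order as written in the FROM clause.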

Let's assume for the moment that one full scan of the inner table takes 3-5 seconds. One scan is no problem, and just as fast as 360,000 index seeks would have been had there been an index on the join condition. However, there are in fact 360,000 rows from the outer source, so this execution plan with the loop join and inner source table scan requires one 3-5 second scan of the 900MB table per outer row: 360,000 × 3-5 seconds is roughly 1.1-1.8 million seconds, or 300-500 hours (3,600 seconds per hour).

So what is the correct resolution? Ideally it would be highly beneficial to never make a serious mistake in row count estimation, but it is inevitable that there will be situations with major row estimate discrepancies. One alternative is the ability to hint that the optimizer should assume 1:1 or 1:many joins depending on the key definitions. Another alternative is to hint against a loop join with an inner source scan, because of its sensitivity to row count estimation errors.
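To my knowledge, SQL Server does not expose a hint specifically against the loop join with inner source scan, but a query-level hint restricting the optimizer to hash and merge joins has a similar effect. A sketch, again with the hypothetical column names from above:

    SELECT a.*, p.PersonnelName, t.TableValue
    FROM #ADOS a
    INNER JOIN dbo.DimPersonnel p ON p.PersonnelID = a.PersonnelID
    INNER JOIN dbo.DimTable t ON t.TableID = a.TableID
    OPTION (HASH JOIN, MERGE JOIN);
    -- Excludes loop joins for every join in the query, which is a
    -- blunter instrument than the hypothetical hints described above.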