This document provides an overview of how the SQLite query planner and optimizer work.
Given a single SQL statement, there might be dozens, hundreds, or even thousands of ways to implement that statement, depending on the complexity of the statement itself and of the underlying database schema. The task of the query planner is to select an algorithm from among the many choices that provides the answer with a minimum of disk I/O and CPU overhead.
All terms of the WHERE clause are analyzed to see if they can be satisfied using indices. Terms that cannot be satisfied through the use of indices become tests that are evaluated against each row of the relevant input tables. No tests are done for terms that are completely satisfied by indices. Sometimes one or more terms will provide hints to indices but still must be evaluated against each row of the input tables.
The analysis of a term might cause new "virtual" terms to be added to the WHERE clause. Virtual terms can be used with indices to restrict a search. But virtual terms never generate code that is tested against input rows.
To be usable by an index, a term must be in one of the following forms:
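In outline, the usable forms are comparisons and membership tests of this kind (a sketch reconstructed from the surrounding discussion; here column denotes an indexed column and expr an expression that does not reference the indexed table):

```sql
column = expr
column > expr
column >= expr
column < expr
column <= expr
expr = column
column IN (expr-list)
column IN (subquery)
column IS NULL
```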
If an index is created using a statement like this:
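A representative statement of the kind intended (table and column names are illustrative; the ellipsis stands for the intermediate columns e through x):

```sql
CREATE INDEX idx_ex1 ON ex1(a, b, c, d, e, ..., y, z);
```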
Then the index might be used if the initial columns of the index (columns a, b, and so forth) appear in WHERE clause terms. The initial columns of the index must be used with the <b>=</b> or <b>IN</b> operators. The right-most column that is used can employ inequalities. For the right-most column of an index that is used, there can be up to two inequalities that must sandwich the allowed values of the column between two extremes.
It is not necessary for every column of an index to appear in a WHERE clause term in order for that index to be used. But there cannot be gaps in the columns of the index that are used. Thus for the example index above, if there is no WHERE clause term that constrains column c, then terms that constrain columns a and b can be used with the index, but not terms that constrain columns d through z. Similarly, no index column will be used (for indexing purposes) that is to the right of a column that is constrained only by inequalities.
For the index above and a WHERE clause like this:
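For instance, a clause along these lines (a reconstruction; values are illustrative):

```sql
... WHERE a = 5 AND b IN (1, 2, 3) AND c IS NULL AND d = 'hello'
```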
The first four columns a, b, c, and d of the index would be usable since those four columns form a prefix of the index and are all bound by equality constraints.
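Next, suppose column c is held only by inequalities (a reconstruction; values are illustrative):

```sql
... WHERE a = 5 AND b IN (1, 2, 3) AND c > 12 AND d = 'hello'
```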
Only columns a, b, and c of the index would be usable. The d column would not be usable because it occurs to the right of c and c is constrained only by inequalities.
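Now suppose column c is not constrained at all (a reconstruction; values are illustrative):

```sql
... WHERE a = 5 AND b IN (1, 2, 3) AND d = 'hello'
```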
Only columns a and b of the index would be usable. The d column would not be usable because column c is not constrained and there can be no gaps in the set of columns that are usable by the index.
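Finally, suppose the left-most column a is not constrained (a reconstruction; values are illustrative):

```sql
... WHERE b IN (1, 2, 3) AND c NOT NULL AND d = 'hello'
```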
The index is not usable at all because the left-most column of the index (column "a") is not constrained. Assuming there are no other indices, the query above would result in a full table scan.
If a term of the WHERE clause is of the following form:
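That is, a term of this shape, where expr1, expr2, and expr3 are arbitrary expressions:

```sql
expr1 BETWEEN expr2 AND expr3
```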
Then two virtual terms are added as follows:
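Namely (reconstructed from the description that follows):

```sql
expr1 >= expr2 AND expr1 <= expr3
```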
If both virtual terms end up being used as constraints on an index, then the original BETWEEN term is omitted and the corresponding test is not performed on input rows. Thus if the BETWEEN term ends up being used as an index constraint, no tests are ever performed on that term. On the other hand, the virtual terms themselves never cause tests to be performed on input rows. Thus if the BETWEEN term is not used as an index constraint and instead must be used to test input rows, the expr1 expression is only evaluated once.
WHERE clause constraints that are connected by OR instead of AND are handled in one of two ways. If a term consists of multiple subterms containing a common column name and separated by OR, like this:
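That is, a term of this shape, where column is a single column name:

```sql
column = expr1 OR column = expr2 OR column = expr3 OR ...
```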
Then that term is rewritten as follows:
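The OR-connected equalities collapse into a single IN term:

```sql
column IN (expr1, expr2, expr3, ...)
```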
The rewritten term then might go on to constrain an index using the normal rules for <b>IN</b> operators. Note that column must be the same column in every OR-connected subterm, although the column can occur on either the left or the right side of the <b>=</b> operator.
If and only if the previously described conversion of OR to an IN operator does not work, the second OR-clause optimization is attempted. Suppose the OR clause consists of multiple subterms as follows:
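That is, a disjunction of arbitrary subterms:

```sql
expr1 OR expr2 OR expr3 OR ... OR exprN
```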
Individual subterms might be a single comparison expression like <b>a=5</b> or <b>x>y</b>, or they can be LIKE or BETWEEN expressions, or a subterm can be a parenthesized list of AND-connected sub-subterms. Each subterm is analyzed as if it were itself the entire WHERE clause in order to see if the subterm is indexable by itself. If every subterm of an OR clause is separately indexable then the OR clause might be coded such that a separate index is used to evaluate each term of the OR clause. One way to think about how SQLite uses separate indices for each OR-clause term is to imagine that the WHERE clause were rewritten as follows:
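Conceptually, something like this (a sketch; table stands for the table being searched):

```sql
rowid IN (SELECT rowid FROM table WHERE expr1
          UNION SELECT rowid FROM table WHERE expr2
          UNION SELECT rowid FROM table WHERE expr3)
```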
The rewritten expression above is conceptual; WHERE clauses containing OR are not really rewritten this way. The actual implementation of the OR clause uses a mechanism that is more efficient than subqueries and which works even for tables where the "rowid" column name has been overloaded for other uses and no longer refers to the real rowid. But the essence of the implementation is captured by the statement above: Separate indices are used to find candidate result rows from each OR clause term and the final result is the union of those rows.
Note that in most cases, SQLite will only use a single index for each table in the FROM clause of a query. The second OR-clause optimization described here is the exception to that rule. With an OR-clause, a different index might be used for each subterm in the OR-clause.
Terms that use the LIKE or GLOB operators can sometimes be converted into index-range constraints, but only when several conditions are met. For one, the ESCAPE clause cannot appear on the LIKE operator.
The built-in functions used to implement LIKE and GLOB must not have been overloaded using the sqlite3_create_function() API.
For the GLOB operator, the column must be indexed using the built-in BINARY collating sequence.
By default the LIKE operator is not case sensitive. But if the case_sensitive_like pragma is enabled as follows:
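That is:

```sql
PRAGMA case_sensitive_like = ON;
```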
The LIKE optimization might occur if the column named on the left of the operator is indexed using the built-in BINARY collating sequence and case_sensitive_like is turned on. Or the optimization might occur if the column is indexed using the built-in NOCASE collating sequence and the case_sensitive_like mode is off. These are the only two combinations under which LIKE operators will be optimized.
The GLOB operator is always case sensitive. The column on the left side of the GLOB operator must always use the built-in BINARY collating sequence or no attempt will be made to optimize that operator with indices.
Suppose the initial sequence of non-wildcard characters on the right-hand side of the LIKE or GLOB operator is x. We are using a single character to denote this non-wildcard prefix, but the reader should understand that the prefix can consist of more than one character. Let y be the smallest string that is the same length as x but which compares greater than x. For example, if x is <b>hello</b> then y would be <b>hellp</b>. The LIKE and GLOB optimizations consist of adding two virtual terms like this:
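Namely (a sketch reconstructed from the description):

```sql
-- x and y here stand for the prefix strings described above
column >= x AND column < y
```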
Under most circumstances, the original LIKE or GLOB operator is still tested against each input row even if the virtual terms are used to constrain an index. This is because we do not know what additional constraints may be imposed by characters to the right of the x prefix. However, if there is only a single global wildcard to the right of x, then the original LIKE or GLOB test is disabled. In other words, if the pattern is like this:
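That is, a pattern consisting of the prefix followed by exactly one trailing wildcard (a sketch):

```sql
column LIKE 'x%'
column GLOB 'x*'
```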
then the original LIKE or GLOB tests are disabled when the virtual terms constrain an index because in that case we know that all of the rows selected by the index will pass the LIKE or GLOB test.
The ON and USING clauses of an inner join are converted into additional terms of the WHERE clause prior to the WHERE clause analysis described above. Thus with SQLite, there is no computational advantage to using the newer SQL92 join syntax over the older SQL89 comma-join syntax. The two end up accomplishing exactly the same thing on inner joins.
For a LEFT OUTER JOIN the situation is more complex. The following two queries are not equivalent:
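Presumably a pair of this shape (table and column names are illustrative), differing only in whether the constraint sits in the ON clause or the WHERE clause:

```sql
SELECT * FROM tab1 LEFT JOIN tab2 ON tab1.x = tab2.y;
SELECT * FROM tab1 LEFT JOIN tab2 WHERE tab1.x = tab2.y;
```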
For an inner join, the two queries above would be identical. But special processing applies to the ON and USING clauses of an OUTER join: specifically, the constraints in an ON or USING clause do not apply if the right table of the join is on a null row, but the constraints do apply in the WHERE clause. The net effect is that putting the ON or USING clause expressions for a LEFT JOIN in the WHERE clause effectively converts the query to an ordinary INNER JOIN - albeit an inner join that runs more slowly.
The current implementation of SQLite uses only loop joins. That is to say, joins are implemented as nested loops.
The default order of the nested loops in a join is for the left-most table in the FROM clause to form the outer loop and the right-most table to form the inner loop. However, SQLite will nest the loops in a different order if doing so will help it to select better indices.
Inner joins can be freely reordered. However a left outer join is neither commutative nor associative and hence will not be reordered. Inner joins to the left and right of the outer join might be reordered if the optimizer thinks that is advantageous but the outer joins are always evaluated in the order in which they occur.
When selecting the order of tables in a join, SQLite uses a greedy algorithm that runs in polynomial (O(N²)) time. Because of this, SQLite is able to efficiently plan queries with 50- or 60-way joins.
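The discussion that follows assumes a schema along these lines for a directed graph with named nodes (a reconstruction; all names are illustrative):

```sql
CREATE TABLE node(
  id INTEGER PRIMARY KEY,
  name TEXT
);
CREATE INDEX node_idx ON node(name);
CREATE TABLE edge(
  orig INTEGER REFERENCES node,
  dest INTEGER REFERENCES node,
  PRIMARY KEY(orig, dest)
);
```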
The schema above defines a directed graph with the ability to store a name at each node. Now consider a query against this schema:
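A query of this shape (a reconstruction against the illustrative schema):

```sql
SELECT * FROM edge AS e,
              node AS n1,
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
```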
This query asks for all information about edges that go from nodes labeled "alice" to nodes labeled "bob". The query optimizer in SQLite has basically two choices on how to implement this query. (There are actually six different choices, but we will only consider two of them here.) The pseudocode below demonstrates these two choices.
Option 1:
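A sketch of the first nesting order, with both node lookups in the outer loops:

```
foreach n1 where n1.name = 'alice' do:
  foreach n2 where n2.name = 'bob' do:
    foreach e where e.orig = n1.id and e.dest = n2.id do:
      return n1.*, e.*, n2.*
    end
  end
end
```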
Option 2:
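A sketch of the second nesting order, walking the edges out of each alice node:

```
foreach n1 where n1.name = 'alice' do:
  foreach e where e.orig = n1.id do:
    foreach n2 where n2.id = e.dest and n2.name = 'bob' do:
      return n1.*, e.*, n2.*
    end
  end
end
```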
The same indices are used to speed up every loop in both implementation options. The only difference in these two query plans is the order in which the loops are nested.
So which query plan is better? It turns out that the answer depends on what kind of data is found in the node and edge tables.
Let the number of alice nodes be M and the number of bob nodes be N. Consider two scenarios. In the first scenario, M and N are both 2 but there are thousands of edges on each node. In this case, option 1 is preferred. With option 1, the inner loop checks for the existence of an edge between a pair of nodes and outputs the result if found. But because there are only 2 alice and bob nodes each, the inner loop only has to run 4 times and the query is very quick. Option 2 would take much longer here. The outer loop of option 2 only executes twice, but because there are a large number of edges leaving each alice node, the middle loop has to iterate many thousands of times. It will be much slower. So in the first scenario, we prefer to use option 1.
Now consider the case where M and N are both 3500. Alice nodes are abundant. But suppose each of these nodes is connected by only one or two edges. In this case, option 2 is preferred. With option 2, the outer loop still has to run 3500 times, but the middle loop only runs once or twice for each outer loop and the inner loop will only run once for each middle loop, if at all. So the total number of iterations of the inner loop is around 7000. Option 1, on the other hand, has to run both its outer loop and its middle loop 3500 times each, resulting in 12 million iterations of the middle loop. Thus in the second scenario, option 2 is nearly 2000 times faster than option 1.
If you really must take manual control of join loop nesting order, the preferred method is to use some peculiar (though valid) SQL syntax to specify the join. If you use the keyword CROSS in a join, then the two tables connected by that join will not be reordered. So in the query, the optimizer is free to reorder the tables of the FROM clause any way it sees fit:
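For example, with a comma join (names as in the alice/bob example; a reconstruction):

```sql
SELECT * FROM node AS n1,
              edge AS e,
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
```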
But in the following logically equivalent formulation of the query, the substitution of "CROSS JOIN" for the "," means that the order of tables must be N1, E, N2.
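That is (a reconstruction):

```sql
SELECT * FROM node AS n1 CROSS JOIN
              edge AS e CROSS JOIN
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
```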
Hence, in the second form, the query plan must be option 2. Note that you must use the keyword CROSS in order to disable the table reordering optimization; INNER JOIN, NATURAL JOIN, JOIN, and other similar combinations work just like a comma join in that the optimizer is free to reorder tables as it sees fit. (Table reordering is also disabled on an outer join, but that is because outer joins are not associative or commutative. Reordering tables in outer joins changes the result.)
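The next few paragraphs presuppose a setup roughly like this (a reconstruction; names are illustrative):

```sql
CREATE TABLE ex2(x, y, z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);

SELECT z FROM ex2 WHERE x = 5 AND y = 6;
```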
For the SELECT statement above, the optimizer can use the ex2i1 index to look up rows of ex2 that contain x=5 and then test each row against the y=6 term. Or it can use the ex2i2 index to look up rows of ex2 that contain y=6 and then test each of those rows against the x=5 term.
When faced with a choice of two or more indices, SQLite tries to estimate the total amount of work needed to perform the query using each option. It then selects the option that gives the least estimated work.
The various <b>sqlite_stat</b>N tables contain information on how selective the various indices are. For example, the <b>sqlite_stat1</b> table might indicate that an equality constraint on column x reduces the search space to 10 rows on average, whereas an equality constraint on column y reduces the search space to 3 rows on average. In that case, SQLite would prefer to use index ex2i2, since that index is more selective.
Terms of the WHERE clause can be manually disqualified for use with indices by prepending a unary <b>+</b> operator to the column name. The unary <b>+</b> is a no-op and will not slow down the evaluation of the test specified by the term. But it will prevent the term from constraining an index. So, in the example above, if the query were rewritten as:
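Presumably as:

```sql
SELECT z FROM ex2 WHERE +x = 5 AND y = 6;
```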
The <b>+</b> operator on the <b>x</b> column will prevent that term from constraining an index. This would force the use of the ex2i2 index.
Consider a slightly different scenario:
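For instance, range constraints on both columns (a reconstruction; values are illustrative):

```sql
SELECT z FROM ex2 WHERE x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
```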
Further suppose that column x contains values spread out between 0 and 1,000,000 and column y contains values that span between 0 and 1,000. In that scenario, the range constraint on column x should reduce the search space by a factor of 10,000 whereas the range constraint on column y should reduce the search space by a factor of only 10. So the ex2i1 index should be preferred.
Another limitation of the histogram data is that it only applies to the left-most column of an index. Consider this scenario:
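For instance (a reconstruction; names and values are illustrative):

```sql
CREATE TABLE ex3(w, x, y, z);
CREATE INDEX ex3i1 ON ex3(w, x);
CREATE INDEX ex3i2 ON ex3(w, y);

SELECT z FROM ex3 WHERE w = 5 AND x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
```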
Here the inequalities are on columns x and y, which are not the left-most index columns. Hence the histogram data, which is collected only for the left-most column of each index, is useless in helping to choose between the range constraints on columns x and y.
SQLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, SQLite does the same work analysis described above and chooses the index that it believes will result in the fastest answer.
When a subquery occurs in the FROM clause of a SELECT, the simplest behavior is to evaluate the subquery into a transient table, then run the outer SELECT against the transient table. But such a plan can be suboptimal since the transient table will not have any indices and the outer query (which is likely a join) will be forced to do a full table scan on the transient table.
To overcome this problem, SQLite attempts to flatten subqueries in the FROM clause of a SELECT. This involves inserting the FROM clause of the subquery into the FROM clause of the outer query and rewriting expressions in the outer query that refer to the result set of the subquery. For example:
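A query such as (a sketch; names are illustrative):

```sql
SELECT a FROM (SELECT x + y AS a FROM t1 WHERE z < 100) WHERE a > 5;
```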
Would be rewritten using query flattening as:
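Conceptually:

```sql
SELECT x + y AS a FROM t1 WHERE z < 100 AND a > 5;
```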
There is a long list of conditions that must all be met in order for query flattening to occur.
(1) The subquery and the outer query do not both use aggregates.
(2) The subquery is not an aggregate or the outer query is not a join.
(3) The subquery is not the right operand of a left outer join.
(4) The subquery is not DISTINCT or the outer query is not a join.
(5) The subquery is not DISTINCT or the outer query does not use aggregates.
(6) The subquery does not use aggregates or the outer query is not DISTINCT.
(7) The subquery has a FROM clause.
(8) The subquery does not use LIMIT or the outer query is not a join.
(9) The subquery does not use LIMIT or the outer query does not use aggregates.
(10) The subquery does not use aggregates or the outer query does not use LIMIT.
(11) The subquery and the outer query do not both have ORDER BY clauses.
(12) The subquery and the outer query do not both use LIMIT.
(13) The subquery does not use OFFSET.
(14) The outer query is not part of a compound select, or the subquery does not have both an ORDER BY and a LIMIT clause.
(15) The outer query is not an aggregate or the subquery does not contain ORDER BY.
(16) The subquery is not a compound select, or it is a UNION ALL compound made up entirely of non-aggregate queries, and the parent query: (a) is not itself part of a compound select, (b) is not an aggregate or DISTINCT query, and (c) has no other tables or subqueries in the FROM clause. The parent and subquery may contain WHERE clauses. Subject to rules (11), (12), and (13), they may also contain ORDER BY, LIMIT, and OFFSET clauses.
(17) If the subquery is a compound select, then all terms of the ORDER BY clause of the parent must be simple references to columns of the subquery.
(18) The subquery does not use LIMIT or the outer query does not have a WHERE clause.
(19) If the subquery is a compound select, then it must not use an ORDER BY clause.
The casual reader is not expected to understand or remember any part of the list above. The point of this list is to demonstrate that the decision of whether or not to flatten a query is complex.
Query flattening is an important optimization when views are used as each use of a view is translated into a subquery.
Queries of the following forms will be optimized to run in logarithmic time assuming appropriate indices exist:
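Namely (a sketch; table and column names are illustrative):

```sql
SELECT MIN(x) FROM tbl;
SELECT MAX(x) FROM tbl;
```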
In order for these optimizations to occur, the query must appear in exactly the form shown above, changing only the name of the table and column. It is not permissible to add a WHERE clause or do any arithmetic on the result. The result set must contain a single column. The column in the MIN or MAX function must be an indexed column.
When no indices are available to aid the evaluation of a query, SQLite might create an automatic index that lasts only for the duration of a single SQL statement and use that index to help boost the query performance. Since the cost of constructing the automatic index is O(N log N) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if SQLite expects that the lookup will be run more than log N times during the course of the SQL statement. Consider an example:
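A sketch of the sort of example intended (names are illustrative):

```sql
CREATE TABLE t1(a, b);
CREATE TABLE t2(c, d);
-- ... fill both tables with many rows ...
SELECT * FROM t1, t2 WHERE a = c;
```

With no index on either join column, SQLite may build a transient index on one table rather than run a full scan of it for every row of the other.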
An automatic index might also be used for a subquery:
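For instance (a sketch; names are illustrative):

```sql
CREATE TABLE t1(a, b);
CREATE TABLE t2(c, d);
SELECT a, (SELECT d FROM t2 WHERE c = b) FROM t1;
```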
In this example, the t2 table is used in a subquery to translate values of the t1.b column. If each table contains N rows, SQLite expects that the subquery will run N times, and hence it believes it will be faster to construct an automatic, transient index on t2 first and then use that index to satisfy the N instances of the subquery.