[SOLVED] Auto trace statistics comparison

Issue

How can I tell which Oracle plan is better when comparing different queries that produce the same number of rows?

If I pick the query with the lower last_consistent_gets, its elapsed time is higher. For the other query, the elapsed time is lower but last_consistent_gets is higher.

It's very confusing.

Solution

The elapsed time is usually the most important metric for Oracle performance. In theory, we may occasionally want to sacrifice the run time of one SQL statement to preserve resources for other statements. In practice, those situations are rare.
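One way to compare statements by elapsed time after they have run is to query V$SQL. This is only a sketch; the LIKE filter assumes the literal text of your statements, and ELAPSED_TIME is cumulative across all executions, so divide by EXECUTIONS for a per-run figure.

```sql
--Sketch: find recent statements and their cumulative elapsed time.
--ELAPSED_TIME is in microseconds; the LIKE pattern is an assumption.
select sql_id,
       elapsed_time / 1000000 as elapsed_seconds,
       buffer_gets,
       executions
from v$sql
where sql_text like 'select sum(b) from test1%'
order by elapsed_time desc;
```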

In your specific case: a statement that consumes more consistent gets can be both faster and more efficient. For example, when retrieving a large percentage of the rows in a table, a full table scan is often more efficient than an index scan. A full table scan can use multi-block reads, which can be much more efficient than the many single-block reads of an index scan. Storage systems are generally much faster at reading a few large chunks of data than many small chunks.
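If you want to confirm that multi-block reads are actually happening, one approach (a sketch; run it before and after your query in the same session and compare the deltas) is to check the session-level read statistics:

```sql
--Sketch: session read statistics before/after a query.
--Statistic names can vary slightly between Oracle versions.
select n.name, s.value
from v$mystat s
join v$statname n on s.statistic# = n.statistic#
where n.name in ('physical reads',
                 'physical read total multi block requests');
```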

The example below compares two plans for reading 25% of the data from a table. The index approach uses only about half as many consistent gets, but it is also more than twice as slow.

Sample Schema

Create a simple table and index and gather stats.

create table test1(a number, b number);
insert into test1 select level, level from dual connect by level <= 1000000;
create index test1_ids on test1(a);
begin
    dbms_stats.gather_table_stats(user, 'TEST1');
end;
/

Autotrace

The code below shows that the full table scan consumes 2082 consistent gets, while forcing an index access consumes only 1078 consistent gets.

SQL> set autotrace on;
SQL> set linesize 120;
SQL> select sum(b) from test1 where a >= 750000;

    SUM(B)
----------
2.1875E+11


Execution Plan
----------------------------------------------------------
Plan hash value: 3896847026

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 |    10 |   597   (3)| 00:00:01 |
|   1 |  SORT AGGREGATE    |       |     1 |    10 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST1 |   250K|  2441K|   597   (3)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("A">=750000)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       2082  consistent gets
          0  physical reads
          0  redo size
        552  bytes sent via SQL*Net to client
        404  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select /*+ index(test1) */ sum(b) from test1 where a >= 750000;

    SUM(B)
----------
2.1875E+11


Execution Plan
----------------------------------------------------------
Plan hash value: 1247966541

--------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |           |     1 |    10 |  1084   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |           |     1 |    10 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| TEST1     |   250K|  2441K|  1084   (1)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | TEST1_IDS |   250K|       |   563   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("A">=750000)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1078  consistent gets
          0  physical reads
          0  redo size
        552  bytes sent via SQL*Net to client
        424  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Performance

If you run the statements a hundred times in a loop (and run those loops multiple times to rule out caching and other system activity), the full table scan version runs much faster than the forced index scan version.

--Seconds to run plan with more consistent gets: 1.7, 1.7, 1.8
declare
    v_count number;
begin
    for i in 1 .. 100 loop
        select sum(b) into v_count from test1 where a >= 750000;
    end loop;
end;
/


--Seconds to run plan with fewer consistent gets: 4.5, 4.5, 4.5
declare
    v_count number;
begin
    for i in 1 .. 100 loop
        select /*+ index(test1) */ sum(b) into v_count from test1 where a >= 750000;
    end loop;
end;
/
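The timings shown in the comments above can be captured with DBMS_UTILITY.GET_TIME, which returns hundredths of a second. A minimal sketch:

```sql
--Sketch: time 100 executions of the full-scan query.
set serveroutput on
declare
    v_count number;
    v_start number;
begin
    v_start := dbms_utility.get_time;
    for i in 1 .. 100 loop
        select sum(b) into v_count from test1 where a >= 750000;
    end loop;
    --get_time returns centiseconds, so divide by 100 for seconds.
    dbms_output.put_line('Seconds: ' ||
        ((dbms_utility.get_time - v_start) / 100));
end;
/
```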

Exceptions

There are times when resource consumption matters more than elapsed time. For example, parallelism is a kind of cheating: it forces the system to work harder, not smarter, and a single out-of-control parallel query can take down an entire system. There are also times when you need to break a statement into less efficient pieces to reduce how long something is locked, or to avoid consuming too much UNDO or temporary tablespace.
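As a hedged illustration of that trade-off, a parallel hint like the one below spends extra CPU and I/O to reduce elapsed time (the degree of 8 is an arbitrary choice for this sketch):

```sql
--Sketch: trade resource usage for elapsed time with a parallel hint.
--Degree 8 is arbitrary; an unbounded degree can starve other sessions.
select /*+ parallel(test1, 8) */ sum(b)
from test1;
```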

But the above situations are relatively uncommon exceptions, and they generally only come up in data warehouses that query large amounts of data. For most OLTP systems, where every query takes less than a second, elapsed time is the only metric you need to worry about.

Answered By – Jon Heller

Answer Checked By – David Marino (BugsFixing Volunteer)
