Direct Reads and Smart Scan with Exadata

Direct Reads and Smart Scan
The most significant performance feature of Exadata is generally agreed to be smart scan. There are several prerequisites for smart scan; most notably, the query must perform direct reads.
The widely accepted (and published) stance on direct reads is that:
– They take place for parallel operations (parallel direct reads) and for some full-scan operations (serial direct reads)
– "Full scans" include full table scans and full index scans
– Full scans qualify for direct reads when the number of blocks being scanned exceeds a threshold
– This threshold is defined by "_small_table_threshold", a dynamically derived initialization parameter that works out to roughly 2% of the size of Oracle's buffer cache, expressed in blocks
– Many sources state that direct reads happen when the number of blocks in the object exceeds "_small_table_threshold" * 5
There are a number of factors that can add variability to this formula: in addition to the size of the segment being scanned, the percentage of the segment's blocks already in the buffer cache plays a role, as does the number of dirty blocks in the cache. Oracle makes the direct-read-versus-buffered-read decision at run time and calls this "adaptive direct reads".
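As a quick sanity check of that 2% relationship, here's a minimal sketch (not the author's script; it assumes SYSDBA access since it touches x$ views, and uses the standard V$SGA_DYNAMIC_COMPONENTS and V$PARAMETER views) that pulls the current threshold alongside a rough 2%-of-buffer-cache figure:

-- Minimal sketch: compare "_small_table_threshold" (in blocks) with roughly
-- 2% of the default buffer cache, also expressed in blocks.
select p.ksppinm  parameter,
       v.ksppstvl threshold_blocks,
       round(0.02 * c.current_size / b.block_size) approx_2pct_cache_blocks
from   x$ksppi p,
       x$ksppsv v,
       (select current_size
          from v$sga_dynamic_components
         where component = 'DEFAULT buffer cache') c,
       (select to_number(value) block_size
          from v$parameter
         where name = 'db_block_size') b
where  p.indx = v.indx
and    p.ksppinm = '_small_table_threshold';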
Since direct reads enable smart scans on Exadata, it’s important to understand what makes direct reads “successful” or unsuccessful.

In this blog I'll show a bunch of direct read and smart scan test cases. I'm doing this mostly because the "_small_table_threshold" * 5 formula hasn't quite worked out as expected based on what the blogs, books, and documentation say. Hopefully these cases can stand on their own, but if anyone finds anything "off" with them, let me know …

And before getting started, a couple of great references:

(All test cases are done on either Oracle 11.2.0.3/Exadata or Oracle 11.2.0.2 on non-Exadata OEL)
Smart Scan Test Case #1: Full-scan XLA_AE_LINES
XLA_AE_LINES is an Oracle EBS R12 table that’s typically large in almost every R12 environment of any respectable size. My test environment is not large at all, but I wanted to show a few smart scan test cases against it:

#######################

Test Query

#######################

APPS @ visx1> @exq.sql

APPS @ visx1> select count(*) from

2  (select code_combination_id,

3  sum(entered_dr), sum(entered_cr)

4  from xla_ae_lines

5  group by code_combination_id);

 

COUNT(*)

———-

5573

1 row selected.

APPS @ visx1> set echo off

PLAN_TABLE_OUTPUT

————————————————————————————–

SQL_ID ct0t92t7f0a99, child number 0

————————————-

select count(*) from (select code_combination_id, sum(entered_dr),

sum(entered_cr) from xla_ae_lines group by code_combination_id)

Plan hash value: 3427813926

—————————————————————————————————————

| Id  | Operation                      | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |

—————————————————————————————————————

|   0 | SELECT STATEMENT               |              |       |       |  4245 (100)|          |       |       |

|   1 |  SORT AGGREGATE               |              |     1 |       |           |          |       |       |

|   2 |   VIEW                         |              |  4542 |       |  4245  (29)| 00:00:01 |       |       |

|   3 |    HASH GROUP BY             |              |  4542 | 49962 |  4245  (29)| 00:00:01 |       |       |

|   4 |     PARTITION LIST ALL         |              |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |    22 |

|   5 |      PARTITION RANGE ALL       |              |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |  LAST |

|   6 |       TABLE ACCESS STORAGE FULL| XLA_AE_LINES |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |   610 |

—————————————————————————————————————

 

#######################

Smart Scan Stats

#######################

NAME      VALUE

——————————————————-

cell physical IO interconnect bytes 111,206,400

cell flash cache read hits                1,312

CPU used by this session                    237

#######################

Smart Scan Efficiency *

#######################

no rows selected

#######################

Waits

#######################

STAT                       TIME_WAITED  TOTAL_WAITS

————————– ———–   ————

events in waitclass Other          200        5

cell multiblock physical read       97      751

cell single block physical read    44      904

SQL*Net message from client          6       26

gc cr multi block request            6      238

gc cr grant 2-way                    3      394

row cache lock                       3      893

Summary from above:
– The query did a full scan on XLA_AE_LINES with a PARTITION RANGE ALL operation, since we selected data from all partitions
– The session consumed 237 units of "CPU used by this session"
– There was no evidence of smart scan: no bytes were eligible for predicate offload, and our IO-related waits were "cell multiblock physical read" and "cell single block physical read" rather than "cell smart table scan". A sketch of the kind of statistics query behind these "Smart Scan Stats" sections follows below
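The "Smart Scan Stats" sections in this post come from a helper script (exq.sql), which isn't reproduced here; a rough sketch of that kind of per-session statistics query, using the standard V$MYSTAT and V$STATNAME views, looks something like this:

-- Rough sketch of a per-session smart scan statistics query.
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    (sn.name like 'cell physical IO%'
        or sn.name like 'cell flash cache%'
        or sn.name = 'CPU used by this session')
and    ms.value > 0
order  by sn.name;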
Smart Scan Test Case #2: Full-scan XLA_AE_LINES, Force Serial Direct Reads
For this test, I’m going to set “_serial_direct_read”=TRUE prior to running the same query and attempt to get a smart scan:

#######################

Test Query

#######################

APPS @ visx1> alter session set “_serial_direct_read”=true;

Session altered.

APPS @ visx1> @exq.sql

APPS @ visx1> select count(*) from

2  (select code_combination_id,

3  sum(entered_dr), sum(entered_cr)

4  from xla_ae_lines

5  group by code_combination_id);

COUNT(*)

———-

5573

1 row selected.

APPS @ visx1> set echo off

PLAN_TABLE_OUTPUT

————————————————————————————–

SQL_ID ct0t92t7f0a99, child number 0

————————————-

select count(*) from (select code_combination_id, sum(entered_dr),

sum(entered_cr) from xla_ae_lines group by code_combination_id)

Plan hash value: 3427813926

—————————————————————————————————————

| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |

—————————————————————————————————————

|   0 | SELECT STATEMENT              |              |       |       |  4245 (100)|          |       |       |

|   1 |  SORT AGGREGATE              |              |     1 |       |           |          |       |       |

|   2 |   VIEW                         |              |  4542 |       |  4245  (29)| 00:00:01 |       |       |

|   3 |    HASH GROUP BY              |              |  4542 | 49962 |  4245  (29)| 00:00:01 |       |       |

|   4 |     PARTITION LIST ALL         |              |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |    22 |

|   5 |      PARTITION RANGE ALL        |             |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |  LAST |

|   6 |       TABLE ACCESS STORAGE FULL | XLA_AE_LINES |  6645K|    69M|  3369  (10)| 00:00:01 |     1 |   610 |

—————————————————————————————————————

#######################

Smart Scan Stats

#######################

NAME       VALUE

———————————————————————- ——————–

cell physical IO bytes eligible for predicate offload       87,965,696

cell physical IO interconnect bytes                         74,826,560

cell physical IO interconnect bytes returned by smart scan 67,756,864

cell flash cache read hits                                         832

CPU used by this session                                           196

#######################

Smart Scan Efficiency *

#######################

Eligible MB SS Returned MB     Pct Saved

——————– ——————– ———

84       71             14.93

#######################

Waits

#######################

STAT                              TIME_WAITED     TOTAL_WAITS

———————————– —————- —————-

SQL*Net message from client                49      28

cell single block physical read            42     835

cell smart table scan                       27     445

events in waitclass Other                   24      72

row cache lock                               5     873

gc cr grant 2-way                            3     357

enq: KO – fast object checkpoint             1     134

Summary from above:
– The query again did a full-scan on XLA_AE_LINES with a PARTITION RANGE ALL operation, since we selected data from all partitions
– The query was offloaded, as evidenced by the "cell physical IO bytes eligible for predicate offload" statistic and by a smart scan efficiency figure, which is only reported when V$SQL.IO_CELL_OFFLOAD_ELIGIBLE_BYTES > 0
– Further, we've got waits for "cell smart table scan", which is one of the things we'd expect to see for smart scans
– We also have a wait for "enq: KO - fast object checkpoint"; this happens when blocks for the segment exist in the buffer cache and need to be flushed so that the data being read directly into the PGA is consistent
Smart Scan Test Case #3: Full-scan XLA_AE_LINES, Parallel Query
Below, I’m going to parallelize the same query with a degree of parallelism = 8, without forcing serial direct reads as in the previous case:

#######################

Test Query

#######################

APPS @ visx1> @exq.sql

APPS @ visx1> select count(*) from

2  (select /*+ parallel (8) */ code_combination_id,

3  sum(entered_dr), sum(entered_cr)

4  from xla_ae_lines

5  group by code_combination_id);

COUNT(*)

———-

5573

APPS @ visx1> set echo off

PLAN_TABLE_OUTPUT

————————————————————————————–

SQL_ID cmpbrw3vw826m, child number 0

————————————-

select count(*) from (select /*+ parallel (8) */ code_combination_id,

sum(entered_dr), sum(entered_cr) from xla_ae_lines group by

code_combination_id)

Plan hash value: 4167058233

————————————————————————————————————————————————-

| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop | TQ  |IN-OUT| PQ Distrib |

————————————————————————————————————————————————-

|   0 | SELECT STATEMENT            |              |       |       |   573 (100)|          |   |   |    |   |    |

|   1 |  SORT AGGREGATE            |              | 1    |       |            |          |   |   |    |   |    |

|   2 |   PX COORDINATOR            |              |       |       |            |          |   |   |    |   |    |

|   3 |    PX SEND QC (RANDOM)       | :TQ10001     | 1    |       |            |          |   |   |  Q1,01 | P->S | QC (RAND) |

|   4 |     SORT AGGREGATE           |              | 1    |       |            |          |   |   |  Q1,01 | PCWP |    |

|   5 |      VIEW                   |              |  4542 |       |   573  (27)| 00:00:01 |   |   |  Q1,01 | PCWP | |

|   6 |       HASH GROUP BY         |              |  4542 | 49962 |   573  (27)| 00:00:01 |   |   |  Q1,01 | PCWP | |

|   7 |        PX RECEIVE           |              |  4542 | 49962 |   573  (27)| 00:00:01 |   |   |  Q1,01 | PCWP | |

|   8 | PX SEND HASH               | :TQ10000     |  4542 | 49962 |   573  (27)| 00:00:01 |   |   |  Q1,00 | P->P | HASH |

|   9 | HASH GROUP BY             |              |  4542 | 49962 |   573  (27)| 00:00:01 |   |   |  Q1,00 | PCWP | |

|  10 |  PX PARTITION RANGE ALL    |            |  6645K| 69M   |   467  (10)| 00:00:01 | 1 |  LAST |  Q1,00 | PCWC | |

|  11 |   TABLE ACCESS STORAGE FULL| XLA_AE_LINES |  6645K| 69M   |   467  (10)| 00:00:01 | 1 |   610 |  Q1,00 | PCWP | |

————————————————————————————————————————————————-

Note

—–

– Degree of Parallelism is 8 because of hint

#######################

Smart Scan Stats

#######################

NAME                                          VALUE

————————————————- ———-

cell physical IO interconnect bytes      93,569,024

cell flash cache read hits                     1,053

CPU used by this session                         283

#######################

Smart Scan Efficiency *

#######################

no rows selected

#######################

Waits

#######################

STAT                               TIME_WAITED     TOTAL_WAITS

———————————– —————- —————-

events in waitclass Other                    352     70

SQL*Net message from client                107     26

PX Deq: Execute Reply                         55     640

cell single block physical read              29     589

row cache lock                               2      706

gc cr grant 2-way                              2      237

PX Deq: Parse Reply                            2       16

cell multiblock physical read                  1        5

PX Deq: Join ACK                               1       16

Summary from above:
– The query again did a parallel, full-scan on XLA_AE_LINES with a PARTITION RANGE ALL operation
– The query did NOT benefit from smart scan, even though it should have qualified for parallel direct reads
– Looking at the wait event information, there's no evidence of direct-read activity, and without direct reads we can't get a smart scan
– We have no "cell smart%" waits for the session
This is puzzling: according to a couple of sources, direct reads certainly should have happened here, and they should have been offloaded. In my environment it doesn't look like in-memory parallel execution is to blame, because the requisite initialization parameters are at their default values:

1  select  a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and a.ksppinm  in (‘parallel_degree_policy’,’_parallel_statement_queuing’,

5  ‘_parallel_cluster_cache_policy’)

6  –and b.ksppstdf=’FALSE’

7* order by 1,2

SYS @ visx1> /

_parallel_cluster_cache_policy    ADAPTIVE    TRUE

_parallel_statement_queuing     FALSE FALSE

parallel_degree_policy        MANUAL FALSE

So my working theory is that the table isn't large enough for Oracle to decide that parallel direct reads are worthwhile. I'll show some examples later on that attempt to prove this out.
Smart Scan Test Case #4: Full-scan XLA_AE_LINES, Parallel Query, Large Table
Now I'm going to create a larger set of tables to test with. At this point, we assume Oracle can choose direct path reads when the size of the segment being scanned is roughly 5 times "_small_table_threshold", which in turn is roughly 2% of the size of the buffer cache (in blocks). So let's estimate how large we need our table to be:

APPS @ visx1> select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and 1=1

5  and a.ksppinm  like ‘%’||’&parm’||’%’

6  –and b.ksppstdf=’FALSE’

7  order by 1,2

8  /

Enter value for parm: small_table

old   5: and a.ksppinm like ‘%’||’&parm’||’%’

new   5: and a.ksppinm like ‘%’||’small_table’||’%’

_small_table_threshold    27284 TRUE

APPS @ visx1> select 5 * 27284

2  from dual;

136420

1 row selected.

So it looks like we need a segment with more than 136,420 blocks.  Let’s look at what our XLA_AE_LINES table shows:

SYS @ visx1> select blocks from dba_tables

2  where table_name=’XLA_AE_LINES’;

BLOCKS

———-

11225

1 row selected.

SYS @ visx1>

SYS @ visx1> select sum(blocks) from dba_tab_partitions

2  where table_name=’XLA_AE_LINES’;

SUM(BLOCKS)

———–

11225

1 row selected.

SYS @ visx1>

SYS @ visx1> select sum(blocks) from dba_tab_subpartitions

2  where table_name=’XLA_AE_LINES’;

SUM(BLOCKS)

———–

11225

1 row selected.

Based on the above, we’ve got 11,225 blocks in our partitioned table – about 10% of the size we need.  The number of rows in XLA_AE_LINES is:

APPS @ visx1> select count(*) from xla_ae_lines;

COUNT(*)

———-

6638073

So we need to create a table with about 70 million rows in order to test this theory out.  I’m going to create two tables – one a partitioned table with the same partitioning scheme as the original, and one un-partitioned.  Excerpts from the code are below:

652    subpartition psb_2011 values less than (to_date(’01-JAN-2012′,’dd-MON-yyyy’)),

653    subpartition psb_max values less than (maxvalue)

654   ), partition partother values (default))

655  tablespace apps_ts_tx_data nologging

656  as

657  select * from xla.xla_ae_lines

658  /

Table created.

Elapsed: 00:01:01.41

SQL>

1  create table xla.xla_ae_lines_unpart

2  nologging

3* as select * from xla_ae_lines

SQL> /

Table created.

Elapsed: 00:00:21.74

SQL> select count(*) from xla.xla_ae_lines_unpart;

COUNT(*)

———-

6638073

Elapsed: 00:00:00.94

SQL> select count(*) from xla.xla_ae_lines_part;

COUNT(*)

———-

6638073

Now we’re going to load it up with data, up to over 100 million rows – I’ll spare the details, it wasn’t very scientific:

SQL> select count(*) from xla.xla_ae_lines_part;

COUNT(*)

———-

106209168

Elapsed: 00:00:10.70

SQL> select count(*) from xla.xla_ae_lines_unpart;

COUNT(*)

———-

106209168

Ok, now let’s run a parallel query against both tables and see if we can see smart scan.  I’ll start with the partitioned table:

#######################

Test Query

#######################

APPS @ visx1> @exq.sql

APPS @ visx1> select count(*) from

2  (select  /*+ parallel (8) */

3  code_combination_id,

4  sum(entered_dr), sum(entered_cr)

5  from xla.xla_ae_lines_part

6  group by code_combination_id

7  );

 

COUNT(*)

———-

5573

APPS @ visx1> set echo off

(dbms_xplan.display_cursor(‘&_sqlid’,&_cnum,’TYPICAL’))

new   1: select * from table (dbms_xplan.display_cursor(‘a7zk1u1w63w00′, 0,’TYPICAL’))

PLAN_TABLE_OUTPUT

——————————————————————————————————————————————————————————————————–

SQL_ID a7zk1u1w63w00, child number 0

————————————-

select count(*) from (select  /*+ parallel (8) */ code_combination_id,

sum(entered_dr), sum(entered_cr) from xla.xla_ae_lines_part group by

code_combination_id )

Plan hash value: 2859844103

——————————————————————————————————————————————————

| Id  | Operation     | Name | Rows | Bytes | Cost (%CPU)| Time    | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |

——————————————————————————————————————————————————

|   0 | SELECT STATEMENT     | | | |  9070 (100)|    |    |    | | |     |

|   1 |  SORT AGGREGATE     | |     1 | |         |    |    |    | | |     |

|   2 |   PX COORDINATOR     | | | |         |    |    |    | | |     |

|   3 |    PX SEND QC (RANDOM)    | :TQ10001 |     1 |    |         |    |    |    |  Q1,01 | P->S | QC (RAND)  |

|   4 |     SORT AGGREGATE     | |     1 | |         |    |    |    |  Q1,01 | PCWP |     |

|   5 |      VIEW     | |  5573 | |  9070  (30)| 00:00:01 |    |    |  Q1,01 | PCWP |     |

|   6 |       HASH GROUP BY     | |  5573 | 61303 |  9070  (30)| 00:00:01 |    |    |  Q1,01 | PCWP |     |

|   7 |        PX RECEIVE     | |  5573 | 61303 |  9070  (30)| 00:00:01 |    |    |  Q1,01 | PCWP |     |

|   8 | PX SEND HASH     | :TQ10000 |  5573 | 61303 |  9070  (30)| 00:00:01 |    |    |  Q1,00 | P->P | HASH     |

|   9 | HASH GROUP BY     | |  5573 | 61303 |  9070  (30)| 00:00:01 |    |    |  Q1,00 | PCWP |     |

|  10 |  PX PARTITION RANGE ALL    | |   106M|  1114M|  7073  (10)| 00:00:01 |     1 |  LAST |  Q1,00 | PCWC |     |

|  11 |   TABLE ACCESS STORAGE FULL| XLA_AE_LINES_PART |   106M|  1114M|  7073  (10)| 00:00:01 |     1 |   610 |  Q1,00 | PCWP |     |

——————————————————————————————————————————————————

Note

—–

– Degree of Parallelism is 8 because of hint

 

#######################

Smart Scan Stats

#######################

 

NAME                                                          VALUE

———————————————————————- ——————–

cell physical IO bytes eligible for predicate offload      1,366,065,152

cell physical IO interconnect bytes                        1,088,400,208

cell physical IO interconnect bytes returned by smart scan  1,082,813,264

physical reads                                                   167,438

physical reads direct                                            166,756

CPU used by this session                                          2,269

cell flash cache read hits                                           647

 

#######################

Smart Scan Efficiency *

#######################

 

Eligible MB   SS Returned MB Pct Saved

——————– ——————– ———

1,303    1,038     20.33

 

 

#######################

Waits

#######################

 

STAT                             TIME_WAITED  TOTAL_WAITS

———————————– ———————-

PX Deq: Execute Reply                497  645

SQL*Net message from client             73   27

events in waitclass Other               50   61

cell single block physical read         36  567

cursor: pin S wait on X                 2    1

PX Deq: Parse Reply                      2   16

gc cr grant 2-way                        2  216

row cache lock                           2  578

cell multiblock physical read            1    5

cell list of blocks physical read        1    1

Next, we’ll do it on our un-partitioned table:

#######################

Test Query

#######################

APPS @ visx1> @exq.sql

APPS @ visx1> select count(*) from

2  (select  /*+ parallel (8) */

3  code_combination_id,

4  sum(entered_dr), sum(entered_cr)

5  from xla.xla_ae_lines_unpart

6  group by code_combination_id

7  );

 

COUNT(*)

———-

5573

 

 

APPS @ visx1> set echo off

 

old   1: select * from table (dbms_xplan.display_cursor(‘&_sqlid’,&_cnum,’TYPICAL’))

new   1: select * from table (dbms_xplan.display_cursor(‘8k1wkzavyambg’, 0,’TYPICAL’))

 

PLAN_TABLE_OUTPUT

——————————————————————————————————————————————————————————————————–

SQL_ID 8k1wkzavyambg, child number 0

————————————-

select count(*) from (select  /*+ parallel (8) */ code_combination_id,

sum(entered_dr), sum(entered_cr) from xla.xla_ae_lines_unpart group by

code_combination_id )

 

Plan hash value: 1164237767

—————————————————————————————————————————————-

| Id  | Operation        | Name        | Rows  | Bytes | Cost (%CPU)| Time  |    TQ  |IN-OUT| PQ Distrib |

—————————————————————————————————————————————-

|   0 | SELECT STATEMENT        |        |      |      |   108K(100)|      |       |  |           |

|   1 |  SORT AGGREGATE        |          | 1     |      |           |      |       |  |           |

|   2 |   PX COORDINATOR        |    |      |      |           |      |       |  |           |

|   3 |    PX SEND QC (RANDOM)    | :TQ10001    | 1     |      |           |      |  Q1,01 | P->S | QC (RAND)  |

|   4 |     SORT AGGREGATE        |    | 1     |      |           |      |  Q1,01 | PCWP |           |

|   5 |      VIEW            |    |  5573 |      |   108K  (3)| 00:00:05 |  Q1,01 | PCWP |           |

|   6 |       HASH GROUP BY        |  |  5573 | 61303 |   108K  (3)| 00:00:05 |  Q1,01 | PCWP |           |

|   7 |        PX RECEIVE        |        |  5573 | 61303 |   108K  (3)| 00:00:05 |  Q1,01 | PCWP |           |

|   8 | PX SEND HASH        | :TQ10000    |  5573 | 61303 |   108K  (3)| 00:00:05 |  Q1,00 | P->P | HASH       |

|   9 | HASH GROUP BY        |    |  5573 | 61303 |   108K  (3)| 00:00:05 |  Q1,00 | PCWP |           |

|  10 |  PX BLOCK ITERATOR     |          |   106M|  1114M|   106K  (1)| 00:00:05 |  Q1,00 | PCWC |           |

|* 11 |   TABLE ACCESS STORAGE FULL| XLA_AE_LINES_UNPART |   106M|  1114M|   106K  (1)| 00:00:05 |  Q1,00 | PCWP |           |

—————————————————————————————————————————————-

Predicate Information (identified by operation id):

—————————————————

 

11 – storage(:Z>=:Z AND :Z<=:Z)

Note

—–

– Degree of Parallelism is 8 because of hint

 

#######################

Smart Scan Stats

#######################

 

NAME       VALUE

———————————————————————- ——————–

cell physical IO bytes eligible for predicate offload     23,030,530,048

cell physical IO interconnect bytes                         1,873,421,824

cell physical IO interconnect bytes returned by smart scan  1,867,032,064

physical reads                                                  2,812,124

physical reads direct                                           2,811,344

CPU used by this session                                            2,482

cell flash cache read hits                                            749

 

#######################

Smart Scan Efficiency *

#######################

 

Eligible MB   SS Returned MB Pct Saved

——————– ——————– ———

21,964    1,787     91.87

 

1 row selected.

 

#######################

Waits

#######################

 

STAT                               TIME_WAITED  TOTAL_WAITS

———————————– —————- —————-

PX Deq: Execute Reply                        579  123

SQL*Net message from client                  313   27

events in waitclass Other                     43   57

cell single block physical read               38  751

row cache lock                               2  299

gc cr grant 2-way                              2  187

PX Deq: Parse Reply                            1   16

cell multiblock physical read                  1    5

So what did we learn from these tests?
– Both queries benefited from smart scan
– Both queries did direct reads ("physical reads direct")
– We still didn't see "cell smart%" waits in the session-level wait data
– The size of the segment/granule being requested does play a role in whether a direct read is done, which in turn governs smart scan
– There was a big difference in "cell physical IO bytes eligible for predicate offload" between the partitioned and unpartitioned tables: for the former it was basically the bytes of the individual partitions, not summed across the whole table, while for the latter it was the bytes of the entire table (a quick V$SQL comparison follows below)
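To see that last point for yourself, a simple V$SQL comparison of the two statements (using the SQL_IDs reported in the execution plans above) would look something like this:

-- Hypothetical comparison of the partitioned vs. unpartitioned test runs.
select sql_id,
       round(io_cell_offload_eligible_bytes/1024/1024) eligible_mb,
       round(io_interconnect_bytes/1024/1024)          interconnect_mb
from   v$sql
where  sql_id in ('a7zk1u1w63w00', '8k1wkzavyambg');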
Summary I
– In order for Exadata to do smart scans, direct reads are required
– Direct reads can be either serial direct reads or parallel direct reads
– In both cases, Oracle determines this based on the amount of data being requested: the size of the table (in blocks), the size of the partition, or the size of the PQ granule
– This volume of data is supposedly governed by "_small_table_threshold" multiplied by 5, or so we're led to believe so far. We'll put this to the test in the next section
– The decision to do smart scans is made on the storage cell(s). As Oracle sends iDB messages to the cells, each message includes metadata that tells the cell what data to retrieve and, critically, whether it's being requested via the direct path mechanism. If so, the cells can act as column/row servers; if not, they revert (or downgrade) to being block servers
A Quick Word about Offload Efficiency
In the previous tests, I've shown some "offload efficiency" statistics. As I calculate it, it's the ratio of bytes sent over the InfiniBand interconnect to bytes eligible for predicate offload, although I've seen other formulas used by other folks. The point is this: it's supposed to be an indication of how much data transfer is saved via smart scan. But there are problems with this approach. First, as calculated, its scope is per SQL_ID, since I'm grabbing the data from V$SQL. If you run a query that does a massive sort with "direct path write" activity, data will be sent from the compute node *to* the storage cells, and then potentially doubled or tripled depending on your ASM redundancy settings. This alone can drive the calculation negative, since my definition looks like this:

select
IO_CELL_OFFLOAD_ELIGIBLE_BYTES/1024/1024 val1,
IO_INTERCONNECT_BYTES/1024/1024 val2,
round(100*(1-(IO_INTERCONNECT_BYTES/IO_CELL_OFFLOAD_ELIGIBLE_BYTES)),2) efficiency
from v$sql
where sql_id='&_sqlid'

Second, it's an efficiency: it gives an indication of how much data you saved in transit from cell to compute node, but as a percentage it says nothing about absolute data volumes or response time.
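If you want a coarser, instance-wide view of the same ratio (not part of my test scripts), V$SYSSTAT can be used instead of V$SQL; note these counters are cumulative since instance startup, so treat the result as a rough indicator only:

-- Instance-wide variant of the offload efficiency calculation (cumulative values).
select round(e.value/1024/1024) eligible_mb,
       round(i.value/1024/1024) interconnect_mb,
       round(100 * (1 - i.value / nullif(e.value, 0)), 2) pct_saved
from   v$sysstat e, v$sysstat i
where  e.name = 'cell physical IO bytes eligible for predicate offload'
and    i.name = 'cell physical IO interconnect bytes';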
Direct Read Test Cases
As direct reads are the key to unlocking smart scan, it's important to appreciate how Oracle determines whether to perform them. As the previous examples showed, the size (in blocks, granules, etc.) of the segment being full-scanned dictates when the direct read mechanism is used, and the published position is that this is based on 5 times "_small_table_threshold", which is in turn roughly 2% of the size of the buffer cache. We want to establish when direct reads actually kick in, and what impact the adaptive inputs (cached blocks and dirty blocks) have on Oracle's choice of direct reads and, ultimately, on smart scan. For this section, I've decreased the buffer cache size from 10GB to 4GB so the test cases are quicker to demonstrate without having to force direct reads.
First, let’s show our buffer cache size and current value for small table threshold:

APPS @ visx1> show sga

 

Total System Global Area 8551575552 bytes

Fixed Size        2245480 bytes

Variable Size     4412411032 bytes

Database Buffers     3992977408 bytes

Redo Buffers      143941632 bytes

APPS @ visx1> @sysparm

APPS @ visx1> select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and 1=1

5  and a.ksppinm  like ‘%’||’&parm’||’%’

6  –and b.ksppstdf=’FALSE’

7  order by 1,2

8  /

Enter value for parm: small_table

old   5: and a.ksppinm like ‘%’||’&parm’||’%’

new   5: and a.ksppinm like ‘%’||’small_table’||’%’

_small_table_threshold    9062 TRUE

So for now, our small table threshold is 9,062 blocks, which means that per the published formula direct reads should commence when the block count reaches 45,310 (9,062 * 5). We're going to do a couple of tests to prove this out, one in a tablespace using MSSM and one using ASSM.
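For a sense of scale, assuming the 8 KB block size shown in the parameter listing below, those two candidate thresholds work out to roughly 71 MB versus 354 MB:

-- Quick arithmetic: the two candidate direct read thresholds in megabytes,
-- assuming the 8192-byte block size from the parameter listing below.
select round(9062  * 8192 / 1024 / 1024) small_table_threshold_mb,
       round(45310 * 8192 / 1024 / 1024) five_x_threshold_mb
from   dual;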
I’m also going to show our non-standard, “interesting” initialization parameters, for reference:

APPS @ visx1> select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and a.ksppinm  like ‘%’||’&parm’||’%’

5  and b.ksppstdf=’FALSE’

6  order by 1,2

7  /

 

__db_cache_size                3976200192 FALSE

__java_pool_size                      16777216 FALSE

__large_pool_size                   117440512 FALSE

__pga_aggregate_target             8589934592 FALSE

__sga_target                       8589934592 FALSE

__shared_io_pool_size                       0 FALSE

__shared_pool_size                 4294967296 FALSE

__streams_pool_size                         0 FALSE

_file_size_increase_increment       143289344 FALSE

_kill_diagnostics_timeout                140 FALSE

_lm_rcvr_hang_allow_time                 140 FALSE

_optimizer_ignore_hints                FALSE FALSE

_parallel_statement_queuing             FALSE FALSE

_serial_direct_read                      AUTO FALSE

cluster_database                                            TRUE FALSE

cluster_interconnects             192.168.10.1 FALSE

compatible                         11.2.0.3.0 FALSE

db_block_size                           8192 FALSE

filesystemio_options                    SETALL FALSE

instance_number                             1 FALSE

open_cursors                              1000 FALSE

parallel_adaptive_multi_user            FALSE FALSE

parallel_degree_policy                 MANUAL FALSE

parallel_execution_message_size          16384 FALSE

parallel_max_servers                     128 FALSE

parallel_min_servers                       32 FALSE

pga_aggregate_target                8589934592 FALSE

pre_page_sga                             FALSE FALSE

processes                                1024 FALSE

recyclebin                                OFF FALSE

sessions                                  1560 FALSE

sga_max_size                        8589934592 FALSE

sga_target                          8589934592 FALSE

shared_pool_size                    4294967296 FALSE

sql92_security                            TRUE FALSE

Test Functions, Tables, and Intro Stuff
To do our tests, I'm going to heavily reference some work that Alex Fatkulin published in his blog here: http://afatkulin.blogspot.com/2009/01/11g-adaptive-direct-path-reads-wha….
I'll modify the function he built, but first I'm going to create an MSSM tablespace and an ASSM tablespace, then create a table inside each of them to do our testing with:

SYS @ visx1> create tablespace dr_mssm

2  datafile ‘+DATA_CM01’ size 64m segment space management manual

3  /

 

Tablespace created.

 

Elapsed: 00:00:00.52

SYS @ visx1

1  create tablespace dr_assm

2* datafile ‘+DATA_CM01’ size 64m segment space management auto

SYS @ visx1>

SYS @ visx1> /

 

Tablespace created.

 

Elapsed: 00:00:00.35

Now I’ll create two tables, one in each tablespace:

APPS @ visx1> create table t_mssm

2  (col1 varchar2(100))

3  pctused 1

4  pctfree 99

5  tablespace dr_mssm;

 

Table created.

 

1  create table t_assm

2  (col1 varchar2(100))

3* tablespace dr_assm

APPS @ visx1> /

 

Table created.

APPS @ visx1>

The function I’ll use looks like this:

APPS @ visx1> create or replace function calc_dr_mssm return number is

2   l_prd  number;

3   l_blocks number:=0;

4   l_cnt number:=0;

5  begin

6   execute immediate ‘truncate table t_mssm’;

7   loop

8   insert /*+ append */ into t_mssm

9    select rpad(‘*’, 100, ‘*’)

10    from dual;

11    commit;

12    l_blocks:=l_blocks + 1;

13   execute immediate ‘alter system flush buffer_cache’;

14    select /*+ full(t) */ count(*) into l_cnt from t_mssm;

15    select value into l_prd

16    from v$segment_statistics

17    where owner=user

18    and object_name=’T_MSSM’

19    and statistic_name=’physical reads direct’;

20    exit when l_prd > 0;

21   end loop;

22   return l_blocks;

23    end;

24  /

 

Function created.

The function CALC_DR_ASSM looks similar, only the table names have changed.
Direct Read Threshold, T_MSSM
Now we’ll call CALC_DR_MSSM.  Remember, our expected threshold is 45,310 blocks:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_dr_mssm;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ blocks’);

6  end;

7  /

Started doing direct reads at: 9063 blocks

 

PL/SQL procedure successfully completed.

Direct reads appear to have kicked in right at the small table threshold (9,062 + 1 = 9,063 blocks), not at the small table threshold times 5. If we check V$SQL and DBA_SEGMENTS, we can see this:

APPS @ visx1> select sql_id,io_cell_offload_eligible_bytes

2  from v$sql

3  where sql_id=’960bzw5qrmp86′;

 

SQL_ID      IO_CELL_OFFLOAD_ELIGIBLE_BYTES

————- ——————————

960bzw5qrmp86    74244096

 

1 row selected.

 

Elapsed: 00:00:00.01

APPS @ visx1> select bytes, blocks

2  from dba_segments

3  where segment_name=’T_MSSM’;

 

BYTES     BLOCKS

———- ———-

75497472 9216

So this test at least seems to contradict the "small table threshold times 5" theory.
The test above was on 11.2.0.3; now let's test with a smaller buffer cache on 11.2.0.2:

SQL> show sga

 

Total System Global Area 1068937216 bytes

Fixed Size    2233336 bytes

Variable Size  704646152 bytes

Database Buffers  348127232 bytes

Redo Buffers   13930496 bytes

SQL> @sysparm

SQL> select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and 1=1

5  and a.ksppinm  like ‘%’||’&parm’||’%’

6  –and b.ksppstdf=’FALSE’

7  order by 1,2

8  /

Enter value for parm: small_table

old   5: and a.ksppinm like ‘%’||’&parm’||’%’

new   5: and a.ksppinm like ‘%’||’small_table’||’%’

With "_small_table_threshold" at 1,099 on this instance, our target is, or should be, either 1,099 blocks or 1,099 * 5 = 5,495 blocks. Let's test it:
APPS @ prod> declare
  2   l_th number;
  3  begin
  4   l_th:=calc_dr_mssm;
  5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ blocks’);
  6  end;
  7  /
Started doing direct reads at: 1100 blocks
PL/SQL procedure successfully completed.
Again, this confirmed that direct reads started right around "_small_table_threshold", not the threshold times 5.
ASSM Tablespace Direct Read Threshold, Direct Path Inserts
In this test, we'll run CALC_DR_ASSM, the ASSM counterpart of the earlier function. In this function we still do direct path inserts, one row at a time:

APPS @ visx1> create or replace function calc_dr_assm return number is

2   l_prd  number;

3   l_blocks number:=0;

4   l_cnt number:=0;

5  begin

6   execute immediate ‘truncate table t_assm’;

7   loop

8   insert /*+ append */ into t_assm

9    select rpad(‘*’, 100, ‘*’)

10    from dual;

11    commit;

12    l_blocks:=l_blocks + 1;

13   execute immediate ‘alter system flush buffer_cache’;

14    select /*+ full(t) */ count(*) into l_cnt from t_assm;

15    select value into l_prd

16    from v$segment_statistics

17    where owner=user

18    and object_name=’T_ASSM’

19    and statistic_name=’physical reads direct’;

20    exit when l_prd > 0;

21   end loop;

22   return l_blocks;

23    end;

24  /

 

Function created.

Also recall that our small table threshold is 9062 blocks.  Here’s what our test output shows:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_dr_assm;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ inserts’);

6  end;

7  /

Started doing direct reads at: 8924 inserts

 

As we can see above, direct reads started after 8,924 inserts.  Let’s check the block count:

APPS @ visx1> select blocks

2  from dba_segments

3  where segment_name=’T_ASSM’;

 

BLOCKS

———-

9216

 

1 row selected.

 

Elapsed: 00:00:02.54

APPS @ visx1> select count(*) from t_assm;

 

COUNT(*)

———-

8924

So with only 8,924 rows inserted, we've actually got them spread across 9,216 blocks, which is relatively close to "_small_table_threshold". This test demonstrates two things:
– With direct path inserts, Oracle allocates new blocks above the high-water mark, so our block count is disproportionately high with respect to the number of rows, in spite of ASSM tablespace storage
– Direct path reads began after 9,215 blocks, which is close to our small table threshold and nowhere near 5 times it. This validates our previous test case
ASSM Tablespace Direct Read Threshold, “Normal” Inserts
In this test, we're going to do conventional inserts against a table in an ASSM tablespace, which should allow Oracle to pack blocks more tightly. We should see more rows inserted and more rows per block. And of course, we also want to show when direct path reads start. I'll modify the function not only to do conventional inserts, but also to batch them in larger chunks in an effort to make the program run faster.

APPS @ visx1> create or replace function calc_dr_assm return number is

2   l_prd  number;

3   l_blocks number:=0;

4   l_cnt number:=0;

5  begin

6   execute immediate ‘truncate table t_assm’;

7   loop

8   insert into t_assm

9    select rpad(‘*’, 100, ‘*’)

10    from dual connect by level<=500;

11    commit;

12    l_blocks:=l_blocks + 1;

13   execute immediate ‘alter system flush buffer_cache’;

14    select /*+ full(t) */ count(*) into l_cnt from t_assm;

15    select value into l_prd

16    from v$segment_statistics

17    where owner=user

18    and object_name=’T_ASSM’

19    and statistic_name=’physical reads direct’;

20    exit when l_prd > 0;

21   end loop;

22   return l_blocks;

23    end;

24  /

 

Function created.

Now for our test:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_dr_assm;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ inserts’);

6  end;

7  /

Started doing direct reads at: 1096 inserts

This shows that after 1,096 500-row inserts, we began doing direct path reads. Let's examine more closely:

APPS @ visx1> exec dbms_stats.gather_table_stats(‘APPS’,’T_ASSM’);

 

APPS @ visx1> l

1  select num_rows,blocks,num_rows/blocks

2  from dba_tables

3* where table_name=’T_ASSM’

APPS @ visx1> /

548000 9077   60.3723697

We've got 9,077 blocks and about 60 rows per block. That's 15 blocks over the small table threshold, which corresponds to roughly 900 rows; since we were inserting rows in batches of 500, the last couple of inserts pushed the block count just over the threshold.
Again, this test showed that direct reads started at "_small_table_threshold", not at 5 times it.
Cached Block Threshold: MSSM
According to Oracle's documentation and Alex's blog, direct reads stop happening once a certain fraction of a table's blocks is already in the cache. In this section, we'll test this using our MSSM table. Let's first check how many rows and blocks we have in our tables.

APPS @ visx1> select table_name,num_rows,blocks,num_rows/blocks

2  from dba_tables

3  where table_name in  (‘T_MSSM’,’T_ASSM’);

T_MSSM     9063 9063   1

T_ASSM   548000 9077  60.3723697

We’ll create indexes on both tables so we can get an index scan/rowid lookup in this test case and the next one:

APPS @ visx1> create index i_t_mssm

2  on t_mssm (1);

 

Index created.

 

Elapsed: 00:00:00.11

APPS @ visx1> create index i_t_assm

2  on t_assm (1);

 

Index created.

The pseudo-code of our test plan will be to:
1. Define a cursor on the table
2. Open the cursor and fetch one row at a time – this should do an index scan and rowid lookup.  In the case of our MSSM table, T_MSSM, one row will equate to one table block
3. After each row is fetched, we’ll run a full-scan on the table and check the physical direct read statistic
4. If it matches the previous one, it means a direct read wasn’t done and we stop
5. If it’s higher than the previous one, we’ll fetch another row and do another full-scan
Our code looks like this:

APPS @ visx1> create or replace function calc_cached_mssm

2   return number is

3   cursor l_cur is select /*+ index(t_mssm i_t_mssm) */ * from t_mssm;

4   l_v varchar2(100);

5   l_execs number:=0;

6   l_cnt number:=0;

7   l_prv number:=0;

8  begin

9   execute immediate ‘alter system flush buffer_cache’;

10   open l_cur;

11   loop

12    fetch l_cur into l_v;

13    l_execs:=l_execs+1;

14    select /*+ full(t) */ count(*) into l_cnt from t_mssm t;

15    select value into l_cnt

16    from v$segment_statistics

17    where owner=user

18    and object_name=’T_MSSM’

19    and statistic_name=’physical reads direct’;

20    exit when l_cnt = l_prv or l_cur%notfound;

21    l_prv:=l_cnt;

22   end loop;

23   close l_cur;

24   return l_execs;

25  end;

26  /

 

Function created.

If we run it, we see it stopping at 8772 blocks, or more accurately, after 8772 fetches of our cursor:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_cached_mssm;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6

7  end;

8  /

Stopped doing direct reads at: 8772.

T_MSSM has 9,216 blocks in it, so we basically stopped doing direct reads once 8,772/9,216 = 95% of the blocks were in the cache. This is quite a bit higher than in Alex's blog, where he witnessed the cached block threshold at roughly 50% of the number of blocks in the table.
Cached Block Threshold: ASSM
Similar to the above example, we'll use a function to test T_ASSM, our table in an ASSM tablespace. This table has about 60 rows per block and 548,000 rows, so if the 95% behavior holds we'd expect it to stop at about 520,600 fetches.
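I haven't reproduced CALC_CACHED_ASSM in this post; it's simply CALC_CACHED_MSSM with the table, index hint, and segment statistics object name swapped, roughly as sketched below:

-- Sketch of CALC_CACHED_ASSM: identical to CALC_CACHED_MSSM except for the
-- table, index hint, and segment statistics object name.
create or replace function calc_cached_assm
 return number is
 cursor l_cur is select /*+ index(t_assm i_t_assm) */ * from t_assm;
 l_v varchar2(100);
 l_execs number:=0;
 l_cnt number:=0;
 l_prv number:=0;
begin
 execute immediate 'alter system flush buffer_cache';
 open l_cur;
 loop
  fetch l_cur into l_v;
  l_execs:=l_execs+1;
  -- full scan, then check whether the "physical reads direct" counter moved
  select /*+ full(t) */ count(*) into l_cnt from t_assm t;
  select value into l_cnt
  from v$segment_statistics
  where owner=user
  and object_name='T_ASSM'
  and statistic_name='physical reads direct';
  exit when l_cnt = l_prv or l_cur%notfound;
  l_prv:=l_cnt;
 end loop;
 close l_cur;
 return l_execs;
end;
/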
The test results look like this:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_cached_assm;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6

7  end;

8  /

Stopped doing direct reads at: 548001.

Since our table has 548,000 rows, it seems from the above test that the full scan never reverted from direct reads, which may indicate that when the block count is high enough, cached blocks have no impact on direct reads. Let's validate that our cursor processing was indeed populating blocks into the cache. I'm going to run a trace against the same type of program, minus the full scans:

SQL> create or replace function calc_cached_assm_t2

2   return number is

3   cursor l_cur is select /*+ index(t_assm i_t_assm) */ * from t_assm;

4   l_v varchar2(100);

5   l_execs number:=0;

6   l_cnt number:=0;

7   l_prv number:=0;

8  begin

9   execute immediate ‘alter system flush buffer_cache’;

10   open l_cur;

11   loop

12    fetch l_cur into l_v;

13    l_execs:=l_execs+1;

14    exit when l_cur%notfound;

15   end loop;

16   close l_cur;

17   return l_execs;

18  end;

19  /

SQL> set serveroutput on size 20000

SQL> alter session set events ‘10046 trace name context forever, level 12’;

 

Session altered.

 

SQL> set echo on

SQL> declare

2   l_th number;

3  begin

4   l_th:=calc_cached_assm_t2;

5   dbms_output.put_line(‘Finished reads at: ‘||l_th||’. ‘);

6  end;

7  /

Finished reads at: 548001.

 

PL/SQL procedure successfully completed.

 

SQL> alter session set events ‘10046 trace name context off’;

 

Session altered.

The trace output looks like this:

SELECT /*+ index(t_assm i_t_assm) */ *

FROM

T_ASSM

 

 

call     count       cpu    elapsed       disk      query    current        rows

——- ——  ——– ———- ———- ———- ———-  ———-

Parse        1      0.00       0.00          0          0          0           0

Execute      1      0.00       0.00          0          0          0           0

Fetch   548001      3.14       7.30       9132    1096002          0      548000

——- ——  ——– ———- ———- ———- ———-  ———-

total   548003      3.14       7.31       9132    1096002          0      548000

Additionally, we can confirm that the table is nearly 100% buffered based on the below:

SQL> set autotrace on

SQL> select count(*) from t_assm;

 

COUNT(*)

———-

548000

 

 

Execution Plan

———————————————————-

Plan hash value: 2636023310

 

———————————————————————————-

| Id  | Operation        | Name | Rows  | Cost (%CPU)| Time |

———————————————————————————-

|   0 | SELECT STATEMENT          | |     1 |   301   (3)| 00:00:01 |

|   1 |  SORT AGGREGATE        | |     1 |        |     |

|   2 |   INDEX STORAGE FAST FULL SCAN| I_T_ASSM |   548K|   301   (3)| 00:00:01 |

———————————————————————————-

 

Statistics

———————————————————-

0  recursive calls

0  db block gets

1091  consistent gets

0  physical reads

0  redo size

527  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

 

SQL> set autotrace off

SQL> @../sqlmon/mywaits

 

STAT TIME_WAITED  TOTAL_WAITS

———————————– —————- —————-

SQL*Net message from client      10,538   37

Disk file operations I/O   0   70

SQL*Net message to client   0   38

events in waitclass Other   0    2

This test confirms that, for the most part, direct reads were never changed to buffered reads as a result of data being cached. My theory is that it's due to the total row count, not the block count: Oracle adaptively decides to keep doing serial direct reads if the data set is large enough. I haven't validated this theory yet, though …
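One way to sanity-check the caching side of that theory (a follow-up I haven't run here) would be to count how many of T_ASSM's blocks are actually sitting in the buffer cache via V$BH, assuming the table is owned by APPS as in the sessions above:

-- Rough check of how many distinct T_ASSM blocks are currently buffered.
select count(distinct bh.block#) cached_blocks
from   v$bh bh, dba_objects o
where  bh.objd = o.data_object_id
and    o.owner = 'APPS'
and    o.object_name = 'T_ASSM'
and    bh.status != 'free';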
Dirty Read Threshold: MSSM
Direct reads require a segment-level checkpoint; if you've ever seen a wait for "enq: KO - fast object checkpoint", you're probably seeing this phenomenon. In this test, we're going to update ever-larger ranges of rows in our T_MSSM table, starting with 0 rows and incrementing by one, following each update with an attempted full scan via direct reads. The goal is to determine the point at which direct reads are disabled due to dirty blocks being in the cache. Here's the function:

SQL> create or replace function calc_dirty_mssm

2   return number is

3   l_v varchar2(100);

4   l_execs number:=0;

5   l_cnt number:=0;

6   l_prv number:=0;

7  begin

8   execute immediate ‘alter system flush buffer_cache’;

9   loop

10    update t_mssm set col1=col1 where rownum < l_execs;

11    commit;

12    l_execs:=l_execs+1;

13    select /*+ full(t) */ count(*) into l_cnt from t_mssm t;

14    select value into l_cnt

15    from v$segment_statistics

16    where owner=user

17    and object_name=’T_MSSM’

18    and statistic_name=’physical reads direct’;

19    exit when l_cnt = l_prv  or l_execs > 9063;

20    l_prv:=l_cnt;

21   end loop;

22   return l_execs;

23  end;

24  /

 

Function created.

In the function, I'm putting in a clause to exit the loop when the execution count exceeds the block count of the table (9,063), in case dirty blocks never stop direct path reads.
Test results show the following:

SQL> @mssm_test3

SQL> set serveroutput on size 20000

SQL> set echo on

SQL> declare

2   l_th number;

3  begin

4   l_th:=calc_dirty_mssm;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6  end;

7  /

Stopped doing direct reads at: 3888.

 

PL/SQL procedure successfully completed.

 

SQL>

As we can see, direct reads stopped once 3,888 blocks were dirty, which is about 43% of the blocks in the table. Let's look at our wait statistics:

SQL> select stat,sum(time_waited)   time_waited,

2  sum(total_waits) total_waits

3  from

4  (select   c.event stat

5  ,c.time_waited,c.total_waits,

6  b.sid,b.serial#,nvl(b.module,b.program) what

7  from  v$session b, v$session_event c

8  where c.sid=b.sid

9  and b.sid=userenv(‘sid’))

10  group by stat

11  having sum(time_waited) > 0

12  order by 2 desc

13  /

 

STAT       TIME_WAITED      TOTAL_WAITS

—————————————- —————- —————-

SQL*Net message from client   15,914 26

enq: KO – fast object checkpoint    12,754     7,759

cell smart table scan             9,697   160,595

events in waitclass Other           535     3,909

log file switch (checkpoint incomplete)   88    21

cell single block physical read       45       232

cell multiblock physical read       22    86

log file switch completion        4 1

gc cr multi block request        4       104

log buffer space            2     3

Notice the high wait time for "enq: KO - fast object checkpoint"; this is an indication that the direct reads are driving object checkpoints, which can be a nontrivial serialization and concurrency cost.
Dirty Read Threshold: ASSM
Similar to the previous test case, we’re now going to try the same thing on our ASSM table, T_ASSM.  Our function looks slightly different – we’re going to run updates in batches of 100:

SQL> create or replace function calc_dirty_assm

2   return number is

3   l_v varchar2(100);

4   l_execs number:=0;

5   l_bs number:=100;

6   l_cnt number:=0;

7   l_prv number:=0;

8  begin

9   execute immediate ‘alter system flush buffer_cache’;

10   loop

11    update t_assm set col1=col1 where rownum < l_execs+l_bs;

12    commit;

13    l_execs:=l_execs+l_bs;

14    select /*+ full(t) */ count(*) into l_cnt from t_assm t;

15    select value into l_cnt

16    from v$segment_statistics

17    where owner=user

18    and object_name=’T_ASSM’

19    and statistic_name=’physical reads direct’;

20    exit when l_cnt = l_prv  or l_execs > 548000;

21    l_prv:=l_cnt;

22   end loop;

23   return l_execs;

24  end;

25  /

 

Function created.

Now we’ll run our test against our ASSM table, T_ASSM:

SQL> declare

2   l_th number;

3  begin

4   l_th:=calc_dirty_assm;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6  end;

7  /

 

Stopped doing direct reads at: 255600.

 

PL/SQL procedure successfully completed.

Based on the way the code is written, our threshold was somewhere between 255,500 and 255,600 rows, or about 4,260 blocks. That's roughly 46% of the size of the table.
Direct Read Threshold, Oracle EBS R12 APPS Table
In this test, we're going to measure the direct read threshold on an Oracle EBS table, ONT.OE_ORDER_LINES_ALL. We're doing this to test the impact of a real-world row size, but in general the test is very similar to the earlier direct read test against a table stored in an ASSM tablespace. To set up the test, we'll create a dummy table called OEL_TEST:

APPS @ visx1> create table oel_test

2  tablespace apps_ts_tx_data

3  as select * from ont.oe_order_lines_all

4  where 1=2;

 

Table created.

 

Elapsed: 00:00:00.29

APPS @ visx1>

Now we'll create a similar function to load data and validate direct reads against this table. For the insert operations, I'm going to do an insert-as-select from OE_ORDER_LINES_ALL, replicating a single row identified by an indexed column (LINE_ID):

APPS @ visx1> create or replace function calc_dr_apps return number is

2   l_prd  number;

3   l_blocks number:=0;

4   l_cnt number:=0;

5  begin

6   execute immediate ‘truncate table oel_test’;

7   loop

8   insert into oel_test

9    select * from

10    (select * from oe_order_lines_all

11    where line_id=388282)

12    connect by level<=500;

13    commit;

14    l_blocks:=l_blocks + 1;

15   execute immediate ‘alter system flush buffer_cache’;

16    select /*+ full(t) */ count(*) into l_cnt from oel_test;

17    select value into l_prd

18    from v$segment_statistics

19    where owner=user

20    and object_name=’OEL_TEST’

21    and statistic_name=’physical reads direct’;

22    exit when l_prd > 0;

23   end loop;

24   return l_blocks;

25    end;

26  /

 

Function created.

Since we've done some updating in previous tests and Oracle has adjusted the buffer cache size, our new "_small_table_threshold" is 8,986: close to the original, but a bit less.
The test results are below:

SQL> declare

2   l_th number;

3  begin

4   l_th:=calc_dr_apps;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ inserts’);

6

7  end;

8  /

Started doing direct reads at: 180 inserts

 

PL/SQL procedure successfully completed.

 

SQL> exec dbms_stats.gather_table_stats(‘APPS’,’OEL_TEST’);

 

PL/SQL procedure successfully completed.

 

SQL> select blocks, num_rows, num_rows/blocks

2  from dba_tables

3  where table_name=’OEL_TEST’;

9012 90000   9.98668442

Based on the above results, we started doing direct path reads at 9012 blocks, slightly higher than the small table threshold value.  Bear in mind, the extra insert at the end of the loop could have added as many as 500 additional rows, which equates to 50 additional blocks.  If you consider this, the threshold for direct reads was again right around small table threshold.
Cached Block Threshold, APPS Table
To measure our cached block threshold, we’ll run a function similar to the ASSM test case a few sections ago.  We’re first going to need to create the necessary index to motivate an index scan:

SQL> create index i_oel_test  on oel_test (1);

 

Index created.

 

SQL> exec dbms_stats.gather_table_stats(‘APPS’,’OEL_TEST’);

 

PL/SQL procedure successfully completed.

Our function looks like this:

SQL> create or replace function calc_cached_apps

2   return number is

3   cursor l_cur is select /*+ index(oel_test i_oel_test) */ * from oel_test;

4   l_v oel_test%rowtype;

5   l_execs number:=0;

6   l_cnt number:=0;

7   l_cnt2 number:=0;

8   l_prv number:=0;

9  begin

10   execute immediate ‘alter system flush buffer_cache’;

11   open l_cur;

12   loop

13    fetch l_cur into l_v;

14    l_execs:=l_execs+1;

15    select /*+ full(t) */ count(*) into l_cnt from oel_test t;

16    select value into l_cnt

17    from v$segment_statistics

18    where owner=user

19    and object_name=’OEL_TEST’

20    and statistic_name=’physical reads direct’;

21    exit when l_cnt = l_prv or l_cur%notfound;

22    l_prv:=l_cnt;

23   end loop;

24   close l_cur;

25   return l_execs;

26  end;

27  /

 

Function created.

Let’s run our test:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_cached_apps;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6

7  end;

8  /

 

Stopped doing direct reads at: 89421.

Our table has 90,000 rows, and it looks like we stopped doing direct reads after 89,421 of them had been fetched into the cache. That's more than 99% of the table, and based on this test it again looks like the cached block count doesn't really influence direct path reads.
Dirty Read Threshold, APPS Table
In this test, we'll measure the threshold at which dirty blocks impact direct reads against the APPS table. Our function looks like this:

SQL> create or replace function calc_dirty_apps

2   return number is

3   l_v varchar2(100);

4   l_execs number:=0;

5   l_bs number:=100;

6   l_cnt number:=0;

7   l_prv number:=0;

8  begin

9   execute immediate ‘alter system flush buffer_cache’;

10   loop

11    update oel_test set line_number=line_number,

12       open_flag=open_flag

13       where rownum < l_execs+l_bs;

14    commit;

15    l_execs:=l_execs+l_bs;

16    select /*+ full(t) */ count(*) into l_cnt from oel_test t;

17    select value into l_cnt

18    from v$segment_statistics

19    where owner=user

20    and object_name=’OEL_TEST’

21    and statistic_name=’physical reads direct’;

22    exit when l_cnt = l_prv  or l_execs > 90000;

23    l_prv:=l_cnt;

24   end loop;

25   return l_execs;

26  end;

27  /

 

Function created.

Test results are provided below:

SQL> declare

2   l_th number;

3  begin

4   l_th:=calc_dirty_apps;

5   dbms_output.put_line(‘Stopped doing direct reads at: ‘||l_th||’. ‘);

6  end;

7  /

Stopped doing direct reads at: 39800.

 

PL/SQL procedure successfully completed.

We can see that direct reads stopped somewhere between 39,700 and 39,800 updated rows. With about 10 rows per block, that's roughly 3,980 blocks, or about 44% of the blocks in the table.
Direct Read Threshold, T_MSSM: Test 2
In the tests above we had "_serial_direct_read"=AUTO, which should be the default. However, queries against X$KSPPSV showed that the value was not flagged as default, and that it was in upper case. Oracle allows you to alter the parameter and set it to AUTO, but we're questioning whether this actually produces the default behavior or something else.
Since our direct read threshold didn't measure up to what we expected based on other publications, I'm going to reset the parameter and run the MSSM test again to see if there's a difference.

SQL> alter system reset “_serial_direct_read” sid=’visx1′;

 

System altered.

 

SQL> ed

Wrote file afiedt.buf

 

1* alter system reset “_serial_direct_read” sid=’visx2′

SQL> /

 

System altered.

 

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

[[email protected] direct_reads]$ srvctl stop database -d visx

[[email protected] direct_reads]$ srvctl start database -d visx

[[email protected] direct_reads]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.3.0 Production on Sat Mar 10 12:04:21 2012

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

 

SQL> @../sqlmon/sysparm

SQL> select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and a.ksppinm  like ‘%’||’&parm’||’%’

5  and b.ksppstdf=’FALSE’

6  order by 1,2

7  /

Enter value for parm: serial_dir

old   4: and a.ksppinm like ‘%’||’&parm’||’%’

new   4: and a.ksppinm like ‘%’||’serial_dir’||’%’

 

no rows selected

With “_serial_direct_read” back at its default (auto, not AUTO), we’re now going to run our direct read threshold test again and see how it compares to the earlier “MSSM Tablespace Direct Read Threshold” test.  Our current value for “_small_table_threshold” is 8949, as shown above:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_dr_mssm;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ blocks’);

6  end;

7  /

Started doing direct reads at: 8950 blocks

 

PL/SQL procedure successfully completed.

The above test again shows direct reads starting at 8950 blocks, i.e., one block past “_small_table_threshold” (8949).  In other words, whether “_serial_direct_read” is AUTO or auto, Oracle began direct reads as soon as the scan exceeded the “_small_table_threshold” setting.
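If you want to sanity-check the 8949 figure against the “2% of the buffer cache” rule of thumb, something like the query below works.  It’s only an approximation: the buffer count Oracle uses internally may differ slightly from what V$BUFFER_POOL reports, particularly when automatic memory management is resizing the cache.

-- 2% of the DEFAULT pool's buffer count should land close to _small_table_threshold (8949 here)
select name, buffers, round(buffers * 0.02) two_percent_of_buffers
from v$buffer_pool
where name = 'DEFAULT';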
Parallel Direct Read Test Case
In this test we’ll measure the threshold at which parallel direct reads kick in for parallel queries.  We’ll use our partitioned XLA_AE_LINES_PART table, similar to the tests against T_ASSM and OEL_TEST, but we’ll first prepare a small staging table holding a sample of data so the test program runs reasonably fast.  We’ll build that table first:

APPS @ visx1> create table xla_temp

2  as select * from xla_ae_lines

3  where 1=2;

Now we’ll use a simple script to insert one row into it from each subpartition:

select ‘insert into xla_temp select * from xla_ae_lines subpartition (‘||subpartition_name||’) where rownum=1;’

from dba_tab_subpartitions

where table_name=’XLA_AE_LINES’

/

We’ll save this to a script and execute it.  When done, XLA_TEMP looks like this:

APPS @ visx1> select count(*) from xla_temp;

67

 

1 row selected.

 

Elapsed: 00:00:00.00

APPS @ visx1> insert into xla_temp

2  select * from xla_temp;

 

67 rows created.

 

Elapsed: 00:00:00.01

APPS @ visx1> select count(*) from xla_temp;

134

 

1 row selected.

 

Elapsed: 00:00:00.00

APPS @ visx1> commit;

Now we’ve got a 134-row staging table that we’ll use as the basis for inserting new rows into XLA_AE_LINES_PART.  Let’s drop XLA_AE_LINES_PART and re-create it first:

SQL> prompt Dropping existing partitioned table

Dropping existing partitioned table

SQL> drop table xla.xla_ae_lines_part;

 

Table dropped.

 

Elapsed: 00:00:00.45

SQL> prompt Creating partitioned table

Creating partitioned table

SQL>

SQL> CREATE TABLE XLA.XLA_AE_LINES_PART

2  PARTITION BY list (application_id)

3  subpartition by range (accounting_date)

4  (partition AP values (200)

5   (subpartition ap_1995 values less than (to_date(’01-JAN-1996′,’dd-MON-yyyy’)) compress for query high,

6    subpartition ap_1996 values less than (to_date(’01-JAN-1997′,’dd-MON-yyyy’)) compress for query high,

<< lines deleted >>

653    subpartition psb_max values less than (maxvalue)

654   ), partition partother values (default))

655  tablespace apps_ts_tx_data nologging

656  as

657  select * from xla.xla_ae_lines

658  where 1=2

659  /

 

Table created.

Next, we’ll create a function to test parallel direct reads:

APPS @ visx1> create or replace function calc_pdr_xla return number is

2   l_prd  number;

3   l_blocks number:=0;

4   l_cnt number:=0;

5  begin

6   execute immediate ‘truncate table xla.xla_ae_lines_part’;

7   loop

8   insert into xla.xla_ae_lines_part

9    select * from  xla_temp;

10    commit;

11    l_blocks:=l_blocks + 1;

12   execute immediate ‘alter system flush buffer_cache’;

13    select /*+ full(t) parallel (8)  */ count(*) into l_cnt from xla.xla_ae_lines_part t;

14    select sum(value) into l_prd

15    from v$segment_statistics

16    where owner=’XLA’

17    and object_name=’XLA_AE_LINES_PART’

18    and statistic_name=’physical reads direct’;

19    exit when l_prd > 0;

20   end loop;

21   return l_blocks;

22    end;

23  /

 

Function created.

Finally, we’ll test the function.   Results are below:

APPS @ visx1> declare

2   l_th number;

3  begin

4   l_th:=calc_pdr_xla;

5   dbms_output.put_line(‘Started doing direct reads at: ‘||l_th||’ inserts’);

6  end;

7  /

Started doing direct reads at: 3790 inserts

 

PL/SQL procedure successfully completed.

After 3790 134-row inserts, direct reads started.  If we look at the block counts for the table, we see this:

APPS @ visx1> @b

 

SUM(BLOCKS)

———–

9616

 

APPS @ visx1> select count(*) from xla.xla_ae_lines_part;

 

COUNT(*)

———-

507860

The block count of 9616 is a bit over “_small_table_threshold” (8949), but remember we’ve been batching 134-row inserts and only checking for direct reads once per loop iteration (3790 inserts * 134 rows = 507,860 rows, which matches the row count above), so it’s reasonable to overshoot the threshold somewhat in both block and row counts.  If we look at block counts per partition, we see this:

1  select table_name,partition_name,blocks

2  from dba_tab_partitions

3* where table_name=’XLA_AE_LINES_PART’

APPS @ visx1> /

 

TABLE_NAME       PARTITION_NAME  BLOCKS

—————————— —————————— ———-

XLA_AE_LINES_PART       AP            2363

XLA_AE_LINES_PART       AR            933

XLA_AE_LINES_PART       CE              0

XLA_AE_LINES_PART       COREBANK       836

XLA_AE_LINES_PART       CST           503

XLA_AE_LINES_PART       DPP             0

XLA_AE_LINES_PART       FUN             0

XLA_AE_LINES_PART       FV             925

XLA_AE_LINES_PART       GMF            134

XLA_AE_LINES_PART       IGC               0

XLA_AE_LINES_PART       IGI               0

XLA_AE_LINES_PART       INSITBANK        73

XLA_AE_LINES_PART       LNS             523

XLA_AE_LINES_PART       OFA             972

XLA_AE_LINES_PART       OKL             402

XLA_AE_LINES_PART       OZF               0

XLA_AE_LINES_PART       PA              770

XLA_AE_LINES_PART       PARTOTHER         0

XLA_AE_LINES_PART       PAY               0

XLA_AE_LINES_PART       PN                0

XLA_AE_LINES_PART       PO              568

XLA_AE_LINES_PART       PSB               0

What does this tell us?  It suggests that parallel direct reads started once the overall block count (i.e., the total number of blocks across all partitions) exceeded “_small_table_threshold”.  Let’s try a full-partition scan on one of the larger partitions, the “AP” partition.  First, let’s get the current physical direct read count across the “AP” subpartitions:

APPS @ visx1> select sum(value )

2    from v$segment_statistics

3    where owner=’XLA’

4     and object_name=’XLA_AE_LINES_PART’

5  and subobject_name like ‘AP%’

6     and statistic_name=’physical reads direct’

7  /

 

SUM(VALUE)

———-

4726

Now we’ll full-scan the partition:

APPS @ visx1> select count(*) from xla.xla_ae_lines_part

2  partition (AP);

 

COUNT(*)

———-

98540

When done, we’ll re-run our query against V$SEGMENT_STATISTICS:

APPS @ visx1> select sum(value )

2    from v$segment_statistics

3    where owner=’XLA’

4     and object_name=’XLA_AE_LINES_PART’

5  and subobject_name like ‘AP%’

6     and statistic_name=’physical reads direct’

7  /

 

SUM(VALUE)

———-

4726

As you can see, no direct reads were done against the single partition, because the number of blocks in the partition, 2363, is less than “_small_table_threshold” (8949).
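By the way, a simple way to double-check this kind of result (not captured in the test above) is to snapshot your own session’s direct read statistic from V$MYSTAT before and after the scan; if the value doesn’t move, the scan was buffered:

-- Run this before and after the count(*) against partition AP;
-- an unchanged value means the session did no direct reads
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name = 'physical reads direct';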
Oracle Apps Direct Read Test Case
In this section, I’m going to show some tests on a production Oracle EBS 12.1.2 environment running Oracle 11.2.0.2.  This environment runs on AIX, not Exadata.  The purpose here is to determine whether Oracle switches to direct reads at the “_small_table_threshold” boundary in a real-life, production system.  Here’s what our cache size and the relevant direct read parameters are set to:

SQL> show sga

Total System Global Area 4275781632 bytes

Fixed Size                  2226472 bytes

Variable Size            2650801880 bytes

Database Buffers         1610612736 bytes

Redo Buffers               12140544 bytes

SQL>

 

SQL> select     a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault

2  from x$ksppi a, x$ksppsv b

3  where a.indx = b.indx

4  and a.ksppinm  in (‘_small_table_threshold’,’_serial_direct_read’)

5  /

 

NAME                                VALUE                               ISDEFAULT
----------------------------------- ----------------------------------- ---------
_small_table_threshold              3744                                TRUE

_serial_direct_read                 auto                                TRUE

So we’ve got default values for both parameters.  Below is a query that shows where we’d expect direct reads, based on the assumption that the actual threshold is 5 times “_small_table_threshold”:

SQL> select owner,table_name,blocks,

2  (case when blocks > (3744*5) then ‘Y’

3  else ‘N’

4  end) dr

5  from dba_tables

6  where blocks > 3200

7  and  table_name not like ‘MLOG%’

8  and owner in

9  (select oracle_username from applsys.fnd_oracle_userid)

10  /

 

Owner           Table                                  Blocks Expect DR

————— ———————————– ——— ———-

APPLSYS         AD_FILES                                 7425 N

APPLSYS         AD_FILE_VERSIONS                        13123 N

APPLSYS         AD_PATCH_COMMON_ACTIONS                 13633 N

AK              AK_REGION_ITEMS                          6969 N

APPLSYS         AD_PATCH_RUN_BUGS                        3379 N

APPLSYS         AD_PATCH_RUN_BUG_ACTIONS                10446 N

APPLSYS         AD_SNAPSHOT_BUGFIXES                    13696 N

ALR             ALR_ALERT_CHECKS                         3394 N

APPLSYS         AD_SNAPSHOT_FILES                       51416 Y

APPLSYS         AD_TASK_TIMING                           7929 N

APPLSYS         FND_COLUMNS                             11339 N

APPLSYS         FND_CONCURRENT_REQUESTS                  6181 N

APPLSYS         FND_ENV_CONTEXT                         10271 N

AR              HZ_HIERARCHY_NODES                      10191 N

AR              HZ_RELATIONSHIPS                        51098 Y

APPLSYS         FND_LOBS                                10558 N

AX              AX_RULE_LINES                            9060 N

BEN             BEN_ELIG_PER_ELCTBL_CHC                 16549 N

BEN             BEN_ELIG_PER_F                          16055 N

BEN             BEN_ELIG_PER_OPT_F                      29616 Y

BEN             BEN_ENRT_RT                             12008 N

BEN             BEN_PERSON_ACTIONS                       7284 N

BEN             BEN_PRTT_ENRT_RSLT_F                     3835 N

BEN             BEN_REPORTING                           19051 Y

XXXX            XX_AS400_IIMTM                           3258 N

CZ              CZ_IMP_PS_NODES                          8358 N

AZ              AZ_DIFF_RESULTS                        721535 Y

AZ              AZ_REPORTER_DATA                      3351285 Y

ICX             ICX_SESSION_ATTRIBUTES                   4705 N

BOM             BOM_STRUCTURES_B                        40261 Y

BOM             BOM_COMPONENTS_B                         4072 N

BOM             BOM_INVENTORY_COMPS_INTERFACE            5709 N

BOM             CST_ITEM_COSTS                          21648 Y

INV             MTL_INTERFACE_ERRORS                     3520 N

INV             MTL_ITEM_CATEGORIES                     73312 Y

INV             MTL_ITEM_REVISIONS_B                    31544 Y

INV             MTL_ITEM_REVISIONS_INTERFACE            39273 Y

INV             MTL_ITEM_REVISIONS_TL                   14222 N

INV             MTL_PENDING_ITEM_STATUS                 17744 N

INV             MTL_SYSTEM_ITEMS_B                     218803 Y

INV             MTL_SYSTEM_ITEMS_INTERFACE             253048 Y

INV             MTL_SYSTEM_ITEMS_TL                     26366 Y

APPLSYS         FND_NEW_MESSAGES                         5236 N

APPLSYS         FND_VIEWS                                7110 N

APPLSYS         FND_VIEW_COLUMNS                         5709 N

APPLSYS         JDR_ATTRIBUTES                          20008 Y

APPLSYS         JDR_COMPONENTS                           8008 N

APPLSYS         WF_DEFERRED                              6228 N

APPLSYS         WF_ITEM_ACTIVITY_STATUSES                3237 N

APPLSYS         WF_ITEM_ATTRIBUTE_VALUES                27975 Y

APPLSYS         AD_PROGRAM_RUN_TASK_JOBS                14270 N

AR              HZ_GEOGRAPHIES                           4308 N

AR              HZ_GEOGRAPHY_IDENTIFIERS                 4749 N

EGO             EGO_ITEM_TEXT_TL                        23449 Y

ENI             ENI_OLTP_ITEM_STAR                      43289 Y

APPLSYS         DR$FND_LOBS_CTX$I                      214198 Y

APPLSYS         WF_JAVA_DEFERRED                         4056 N

ICX             DR$ICX_CAT_ITEMSCTXDESC_HDRS$I           6323 N

ICX             ICX_CAT_ITEMS_CTX_DTLS_TLP              22493 Y

ICX             ICX_CAT_ITEMS_CTX_HDRS_TLP               4749 N

ICX             ICX_CAT_ATTRIBUTE_VALUES                10813 N

ICX             ICX_CAT_ATTRIBUTE_VALUES_TLP             7725 N

INV             MTL_ITEM_BULKLOAD_RECS                  38715 Y

PO              PO_ATTRIBUTE_VALUES                     10462 N

PO              PO_ATTRIBUTE_VALUES_TLP                  7677 N

XXXX            XX_MTL_SYSTEM_ITEMS_21APR09              6732 N

XXXX            XX_MTL_SYSTEM_ITEMS_INTERFACE           20406 Y

INV             MMT_AAM_NOCOMPRESS                      26366 Y

INV             MMT_AAM_COMPRESS                         4473 N

XXXX            XX_ITEMS_IIM_CONV                        3473 N

XXXX            XX_ITEM_ATTRIBUTES_IIM_CONV              3552 N

XXXX            XX_ITEMS_CONV                           26796 Y

XXXX            XX_ITEM_ATTRIBUTES_CONV                 36963 Y

XXXX            XX_ITEM_PINS_CONV                        9522 N

XXXX            TABLE_SITE_PART                         20764 Y

XXXX            TABLE_CONTRACT_ITM_LST                  27265 Y

XXXX            TABLE_SCM_VIEW                          22274 Y

XXXX            TABLE_C_SCHED_PD_AMT                    55245 Y

XXXX            XX_AS400_ISMTM_SERIAL                   24565 Y

XXXX            TABLE_CONDITION                         12638 N

Now I’m going to do a full-scan on all of these and check V$SEGMENT_STATISTICS before and after.   This isn’t the most elegant way to do things, but it serves the purpose.

SQL> create table my_segstat_1

2  as select * from v$segment_statistics

3  where (owner,object_name) in

4  (

5  select owner,table_name

6  from dba_tables

7  where blocks > 3200

8  and  table_name not like ‘MLOG%’

9  and owner in

10  (select oracle_username from applsys.fnd_oracle_userid))

11  and statistic_name=’physical reads direct’

12  /

 

Table created.

Next, I’ll generate a script to full-scan each of the 80 tables that have more than 3200 blocks:

SQL> spool scans.sql

SQL> select ‘select /*+ full (X) */ count(*) from ‘||owner||’.’||table_name||’ x;’ q

2  from dba_tables

3  where blocks > 3200

4  and  table_name not like ‘MLOG%’

5  and owner in

6  (select oracle_username from applsys.fnd_oracle_userid)

7  /

 

Query

--------------------------------------------------------------------------------

select /*+ full (X) */ count(*) from APPLSYS.AD_PATCH_COMMON_ACTIONS x;

select /*+ full (X) */ count(*) from AK.AK_REGION_ITEMS x;

select /*+ full (X) */ count(*) from ALR.ALR_ALERT_CHECKS x;

select /*+ full (X) */ count(*) from APPLSYS.AD_SNAPSHOT_FILES x;

select /*+ full (X) */ count(*) from APPLSYS.AD_TASK_TIMING x;

select /*+ full (X) */ count(*) from APPLSYS.FND_COLUMNS x;

select /*+ full (X) */ count(*) from APPLSYS.FND_CONCURRENT_REQUESTS x;

select /*+ full (X) */ count(*) from AR.HZ_HIERARCHY_NODES x;

select /*+ full (X) */ count(*) from AR.HZ_RELATIONSHIPS x;

select /*+ full (X) */ count(*) from APPLSYS.FND_LOBS x;

select /*+ full (X) */ count(*) from AX.AX_RULE_LINES x;

 

After flushing the buffer cache, I’ll run “scans.sql” to full-scan each of the 80 tables.  I’m doing this in a production environment because I want to understand the boundary at which Oracle decides to flip direct reads on and off in a real-world scenario.
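The flush itself isn’t captured in the session output below; it’s simply the same command used inside the earlier test functions:

-- flush the buffer cache before running scans.sql
alter system flush buffer_cache;

With the cache flushed, the scans run as follows: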

 

SQL> @scans

SQL> select /*+ full (X) */ count(*) from APPLSYS.AD_PATCH_COMMON_ACTIONS x;

 

COUNT(*)

———-

311051

 

SQL> select /*+ full (X) */ count(*) from AK.AK_REGION_ITEMS x;

 

COUNT(*)

———-

214997

 

SQL> select /*+ full (X) */ count(*) from ALR.ALR_ALERT_CHECKS x;

 

COUNT(*)

———-

459527

 

SQL> select /*+ full (X) */ count(*) from APPLSYS.AD_SNAPSHOT_FILES x;

 

COUNT(*)

———-

4590722

 

SQL> select /*+ full (X) */ count(*) from APPLSYS.AD_TASK_TIMING x;

 

COUNT(*)

———-

24780

<< many more examples deleted >>

When complete, I’ll create a second table based on V$SEGMENT_STATISTICS:

SQL> create table my_segstat_2

2  as select * from v$segment_statistics

3  where (owner,object_name) in

4  (

5  select owner,table_name

6  from dba_tables

7  where blocks > 3200

8  and  table_name not like ‘MLOG%’

9  and owner in

10  (select oracle_username from applsys.fnd_oracle_userid))

11  and statistic_name=’physical reads direct’

12  /

 

Table created.

Finally, I’ll compare results.

SQL> select a.owner,a.object_name,a.value v1,b.value v2,

2  c.blocks ,

3  (case when b.value > a.value then ‘Direct’

4  else ‘Buffered’

5  end) dr

6  from my_segstat_1 a, my_segstat_2 b,

7  dba_tables c

8  where a.owner=b.owner

9  and a.object_name=b.object_name

10  and a.owner=c.owner

11  and a.object_name=c.table_name

12  order by c.blocks asc

13  /

 

Owner           Object                         Direct Reads Before Direct Reads After     Blocks Direct/Buffered
--------------- ------------------------------ ------------------- ------------------ --------- ---------------

APPLSYS         WF_ITEM_ACTIVITY_STATUSES                       0                  0      3237 Buffered

XXXX          XX_AS400_IIMTM                                  0                  0      3258 Buffered

APPLSYS         AD_PATCH_RUN_BUGS                               0                  0      3379 Buffered

ALR             ALR_ALERT_CHECKS                                0                  0      3394 Buffered

XXXX          XX_ITEMS_IIM_CONV                               0                  0      3473 Buffered

INV             MTL_INTERFACE_ERRORS                            0                  0      3520 Buffered

XXXX          XX_ITEM_ATTRIBUTES_IIM_CONV                     0                  0      3552 Buffered

BEN             BEN_PRTT_ENRT_RSLT_F                        68925              72760      3835 Direct

APPLSYS         WF_JAVA_DEFERRED                            28392              28392      4056 Buffered

BOM             BOM_COMPONENTS_B                            20328              24400      4072 Direct

AR              HZ_GEOGRAPHIES                              21540              25848      4308 Direct

INV             MMT_AAM_COMPRESS                            43838              48233      4473 Direct

ICX             ICX_SESSION_ATTRIBUTES                      28196              32898      4705 Direct

AR              HZ_GEOGRAPHY_IDENTIFIERS                    23732              28481      4749 Direct

ICX             ICX_CAT_ITEMS_CTX_HDRS_TLP                  23745              28494      4749 Direct

APPLSYS         FND_NEW_MESSAGES                            57596              62832      5236 Direct

BOM             BOM_INVENTORY_COMPS_INTERFACE               28545              34254      5709 Direct

APPLSYS         FND_VIEW_COLUMNS                            28458              34167      5709 Direct

APPLSYS         FND_CONCURRENT_REQUESTS                     67991              74172      6181 Direct

APPLSYS         WF_DEFERRED                                 43596              43596      6228 Buffered

ICX             DR$ICX_CAT_ITEMSCTXDESC_HDRS$I              31599              37922      6323 Direct

XXXX          XX_MTL_SYSTEM_ITEMS_21APR09                100898             107630      6732 Direct

AK              AK_REGION_ITEMS                             76659              83628      6969 Direct

APPLSYS         FND_VIEWS                                   35518              42628      7110 Direct

BEN             BEN_PERSON_ACTIONS                          36136              43420      7284 Direct

APPLSYS         AD_FILES                                    37125              44550      7425 Direct

PO              PO_ATTRIBUTE_VALUES_TLP                     76698              84375      7677 Direct

ICX             ICX_CAT_ATTRIBUTE_VALUES_TLP                77186              84911      7725 Direct

APPLSYS         AD_TASK_TIMING                              39604              47533      7929 Direct

APPLSYS         JDR_COMPONENTS                              39993              48001      8008 Direct

CZ              CZ_IMP_PS_NODES                             41646              50004      8358 Direct

AX              AX_RULE_LINES                               45300              54360      9060 Direct

XXXX          XX_ITEM_PINS_CONV                           47566              57088      9522 Direct

AR              HZ_HIERARCHY_NODES                          50955              61146     10191 Direct

APPLSYS         FND_ENV_CONTEXT                             61626              71897     10271 Direct

APPLSYS         AD_PATCH_RUN_BUG_ACTIONS                    52230              62676     10446 Direct

PO              PO_ATTRIBUTE_VALUES                        104588             115050     10462 Direct

APPLSYS         FND_LOBS                                    52790              63348     10558 Direct

ICX             ICX_CAT_ATTRIBUTE_VALUES                   108107             118920     10813 Direct

APPLSYS         FND_COLUMNS                                 56695              68034     11339 Direct

BEN             BEN_ENRT_RT                                598305             610313     12008 Direct

XXXX          TABLE_CONDITION                             62420              74904     12638 Direct

APPLSYS         AD_FILE_VERSIONS                            65615              78738     13123 Direct

APPLSYS         AD_PATCH_COMMON_ACTIONS                     68165              81798     13633 Direct

APPLSYS         AD_SNAPSHOT_BUGFIXES                        68465              82161     13696 Direct

INV             MTL_ITEM_REVISIONS_TL                       71058              85280     14222 Direct

APPLSYS         AD_PROGRAM_RUN_TASK_JOBS                    71350              85620     14270 Direct

BEN             BEN_ELIG_PER_F                            1683491            1699546     16055 Direct

BEN             BEN_ELIG_PER_ELCTBL_CHC                    742229             758778     16549 Direct

INV             MTL_PENDING_ITEM_STATUS                    106371             124115     17744 Direct

BEN             BEN_REPORTING                              133357             152408     19051 Direct

APPLSYS         JDR_ATTRIBUTES                              99912             119920     20008 Direct

XXXX          XX_MTL_SYSTEM_ITEMS_INTERFACE              306090             326496     20406 Direct

XXXX          TABLE_SITE_PART                            102885             123462     20764 Direct

BOM             CST_ITEM_COSTS                             108160             129808     21648 Direct

XXXX          TABLE_SCM_VIEW                             110400             132480     22274 Direct

ICX             ICX_CAT_ITEMS_CTX_DTLS_TLP                 112433             134926     22493 Direct

EGO             EGO_ITEM_TEXT_TL                           117149             140598     23449 Direct

XXXX          XX_AS400_ISMTM_SERIAL                      121810             146172     24565 Direct

INV             MMT_AAM_NOCOMPRESS                         263660             290026     26366 Direct

INV             MTL_SYSTEM_ITEMS_TL                        365062             391428     26366 Direct

XXXX          XX_ITEMS_CONV                              133980             160776     26796 Direct

XXXX          TABLE_CONTRACT_ITM_LST                     135260             162312     27265 Direct

APPLSYS         WF_ITEM_ATTRIBUTE_VALUES                   167850             195825     27975 Direct

BEN             BEN_ELIG_PER_OPT_F                        2366120            2411138     29616 Direct

INV             MTL_ITEM_REVISIONS_B                       157594             189138     31544 Direct

XXXX          XX_ITEM_ATTRIBUTES_CONV                    184815             221778     36963 Direct

INV             MTL_ITEM_BULKLOAD_RECS                     193575             232290     38715 Direct

INV             MTL_ITEM_REVISIONS_INTERFACE               196365             235638     39273 Direct

BOM             BOM_STRUCTURES_B                           286904             327165     40261 Direct

ENI             ENI_OLTP_ITEM_STAR                         216287             259576     43289 Direct

AR              HZ_RELATIONSHIPS                           613176             715372     51098 Direct

APPLSYS         AD_SNAPSHOT_FILES                          257080             359912     51416 Direct

XXXX          TABLE_C_SCHED_PD_AMT                       274595             329514     55245 Direct

INV             MTL_ITEM_CATEGORIES                        366272             439584     73312 Direct

APPLSYS         DR$FND_LOBS_CTX$I                         1070990            1285188    214198 Direct

INV             MTL_SYSTEM_ITEMS_B                        3279494            3498297    218803 Direct

INV             MTL_SYSTEM_ITEMS_INTERFACE                3795720            4048768    253048 Direct

AZ              AZ_DIFF_RESULTS                           3607327            4328862    721535 Direct

AZ              AZ_REPORTER_DATA                         16843627           20194912   3351285 Direct

 

80 rows selected.

Based on our tests, the smallest table that did direct reads was 3835 blocks, slightly over “_small_table_threshold” (3744) but nowhere near 5 times that value.  These results align with the earlier tests in this post.
We did see buffered reads for a larger table, APPLSYS.WF_DEFERRED, at 6228 blocks.  This is not too surprising given the nature of the table: WF_DEFERRED is in constant use, so its blocks tend to already be in the buffer cache.
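If you wanted to confirm that theory (I didn’t capture this during the test), a quick V$BH check of how much of WF_DEFERRED was already in the cache would look something like this:

-- Count of distinct WF_DEFERRED blocks currently in the buffer cache; a high
-- number relative to its 6228 blocks would explain the buffered reads
select count(distinct b.block#) cached_blocks
from v$bh b, dba_objects o
where b.objd = o.data_object_id
and o.owner = 'APPLSYS'
and o.object_name = 'WF_DEFERRED'
and b.status != 'free';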
Summary II
First, check out this table summarizing our direct read scenarios:
Here’s what I make of it:
– Oracle performs serial direct reads when the number of blocks scanned exceeds the value of “_small_table_threshold”, which is roughly 2% of the size of the buffer cache
– The parallel query tests seemed to indicate that parallel direct reads occur when the number of blocks for the entire operation exceeds “_small_table_threshold”, not the number of blocks handled by each individual PQ server/granule
– The well-documented “5 times the small table threshold” rule didn’t hold up under my test cases, at least on 11.2.0.2 and 11.2.0.3.  Oracle elected to do direct reads right at the small table threshold boundary.  If anyone has any additional feedback, I’d love to hear it!
– Cached, unmodified blocks had very little impact on direct reads.  Oracle adaptively turned off direct reads either when nearly all of the segment’s blocks were cached, or not at all
– Dirty blocks in the buffer cache had a bigger effect: Oracle turned off direct reads once roughly 45% of the segment’s blocks were dirty in the cache.  This is probably a very good thing, as it mitigates the penalty of object checkpoints (enq: KO - fast object checkpoint waits)
– Adaptive direct reads are an interesting topic.  According to Oracle documentation and other sources, Oracle uses a number of different mechanisms to decide whether or not to do direct reads, and hopefully the test cases here shed some light on a few of them
– In this post I haven’t talked at all about another pair of hidden parameters, “_very_large_object_threshold” and “_very_large_partitioned_table”.  These parameters control size limits for a table or partitioned table to qualify for direct reads; a quick way to look at them is sketched below
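For completeness, here’s a minimal sketch for looking at those two parameters, reusing the same X$KSPPI/X$KSPPSV query from earlier (run as SYS; the values are version-dependent, so treat the output as informational only):

select a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and a.ksppinm in ('_very_large_object_threshold','_very_large_partitioned_table')
order by 1
/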