Using a 1MB AU Size
In this section we’ll perform tests that full-scan a table stored in an ASM disk group with a 1MB ASM allocation unit size, and compare the results against the same table stored in a tablespace residing on an ASM disk group with a 4MB AU size. The control case was run against SYSTEM.MYOBJ, which has the following characteristics:
Let’s create a tablespace in the AU1MDATA_CM01 ASM disk group:
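The original listing isn’t preserved, but the step would look something like the sketch below; the tablespace name and datafile size are my assumptions, and only the AU1MDATA_CM01 disk group name comes from the text:

```sql
-- Hypothetical tablespace name and size; only +AU1MDATA_CM01 is from the post.
CREATE TABLESPACE AU1M_DATA
  DATAFILE '+AU1MDATA_CM01' SIZE 16G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```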
Now we’ll create the copy of MYOBJ:
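A sketch of the copy step, assuming the tablespace was named AU1M_DATA and the copy MYOBJ_1M (both names are assumptions; SYSTEM.MYOBJ is from the text):

```sql
-- Copy the control table into the 1MB-AU tablespace; names are hypothetical.
CREATE TABLE SYSTEM.MYOBJ_1M
  TABLESPACE AU1M_DATA
  AS SELECT * FROM SYSTEM.MYOBJ;
```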
Let’s run our test:
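The test listing isn’t shown; a minimal version would time a forced full scan and then read the smart scan statistics reported later in this post. MYOBJ_1M is an assumed name for the table copy:

```sql
SET TIMING ON
SELECT /*+ FULL(m) */ COUNT(*) FROM SYSTEM.MYOBJ_1M m;

-- Statistic names match those compared in the summary table below.
SELECT sn.name, ms.value
FROM   v$mystat ms, v$statname sn
WHERE  sn.statistic# = ms.statistic#
AND    sn.name IN ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');
```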
As we can see from the above test:
Using an 8MB AU Size
In this section we’ll perform tests that full-scan a table stored in an ASM disk group with an 8MB ASM allocation unit size, and compare the results against the same table stored in a tablespace residing on an ASM disk group with a 4MB AU size. The control case was run against SYSTEM.MYOBJ, which has the following characteristics:
Let’s create a tablespace in the AU8MDATA_CM01 ASM disk group:
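This step mirrors the 1MB case; a sketch, where the tablespace name and size are my assumptions and only the AU8MDATA_CM01 disk group name comes from the text:

```sql
-- Hypothetical tablespace name and size; only +AU8MDATA_CM01 is from the post.
CREATE TABLESPACE AU8M_DATA
  DATAFILE '+AU8MDATA_CM01' SIZE 16G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```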
Now we’ll create our test table:
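A sketch of the table build, assuming the tablespace was named AU8M_DATA and the copy MYOBJ_8M (both names are assumptions; SYSTEM.MYOBJ is from the text):

```sql
-- Copy the control table into the 8MB-AU tablespace; names are hypothetical.
CREATE TABLE SYSTEM.MYOBJ_8M
  TABLESPACE AU8M_DATA
  AS SELECT * FROM SYSTEM.MYOBJ;
```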
And finally, run our test:
As we can see from the above test:
Since this test and the previous one with the 1MB ASM allocation unit size showed roughly the same results, and since the 8MB AU size seemed to yield slightly better response, we’re going to do some additional testing with multiple runs (5 each) to see whether the results hold up beyond a single sample:
We can see from this that the timings per test indeed indicate that things are more efficient (i.e., run faster) with a 4MB AU size, but let’s try to figure out exactly why. Using V$SESSION_EVENT, we can see that the “cell smart table scan” wait event was responsible for most of the wait time, so let’s compare test results:
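The comparison query isn’t preserved; a sketch of pulling the wait profile for the current session from V$SESSION_EVENT (the view and the event name come from the text):

```sql
-- Wait count and total time for the smart scan event in the current session.
SELECT event, total_waits, time_waited_micro/1e6 AS seconds_waited
FROM   v$session_event
WHERE  sid   = SYS_CONTEXT('USERENV', 'SID')
AND    event = 'cell smart table scan';
```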
As we can see above, the number and total time of “cell smart table scan” waits was larger with the 1MB and 8MB AU sizes. All three tests retrieved roughly the same number of bytes over the interconnect, and the number of bytes and blocks per segment was nearly identical. If we do some math on the above table, we can see that the number of waits increased for the 1MB and 8MB ASM AU sizes, but that the total wait time didn’t increase linearly with the number of waits, so we can suspect that alignment boundary issues are causing the additional waits.
Based on this, we can conclude that more work is required to satisfy cell IO requests when ASM disk groups don’t use a 4MB AU size. Setting the ASM disk group’s AU size attribute to 4MB clearly yields a better overall result, in this case about a 50% time savings.
What about Extent Placement on the Physical Disk?
As you’ve probably noticed in the tests above, a 4MB AU size was optimal for performance. But was this solely because of the 4MB AU size, or could it have been because the 4MB AU ASM disk group was created on a set of Grid Disks that resided on the outermost tracks of the physical disks? In this section, we’ll again tear down the ASM disk groups, re-create them one at a time, and repeat our testing.
The first thing we’ll do to get a clean baseline is wipe out the DBM database. After this, we’ll drop the ASM disk groups DATA_CM01 and RECO_CM01:
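The listing isn’t preserved; dropping the disk groups from the ASM instance would look like this (the disk group names are from the text):

```sql
-- Run while connected to the ASM instance as SYSASM.
DROP DISKGROUP DATA_CM01 INCLUDING CONTENTS;
DROP DISKGROUP RECO_CM01 INCLUDING CONTENTS;
```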
After this is completed, we’ll delete our Grid Disks:
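A sketch of the Grid Disk removal via dcli/CellCLI; the cell_group file and the assumption that the Grid Disk prefixes match the disk group names are mine:

```
# Hypothetical dcli invocation from a management host against all cells.
dcli -g cell_group -l root "cellcli -e drop griddisk all prefix=DATA_CM01"
dcli -g cell_group -l root "cellcli -e drop griddisk all prefix=RECO_CM01"
```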
When complete, we’ll create Grid Disks for DATA_CM01 (425GB) and RECO_CM01 (the remaining space):
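A sketch of the Grid Disk creation; the 425G size is from the text, while the dcli invocation and prefixes are my assumptions. Omitting a size on the second command consumes the remaining space on each disk:

```
# Hypothetical dcli invocation; prefixes assumed to match the disk group names.
dcli -g cell_group -l root "cellcli -e create griddisk all harddisk prefix=DATA_CM01, size=425G"
dcli -g cell_group -l root "cellcli -e create griddisk all harddisk prefix=RECO_CM01"
```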
Next, let’s create an ASM disk group with a 1MB allocation unit on the “DATA” Grid Disks, and an ASM disk group with a 4MB AU called RECO_CM01:
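The creation listing isn’t shown; a sketch using the real `au_size` disk group attribute, where the disk discovery strings and compatibility settings are my assumptions:

```sql
-- Run from the ASM instance; disk strings are hypothetical placeholders.
CREATE DISKGROUP DATA_CM01 NORMAL REDUNDANCY
  DISK 'o/*/DATA_CM01*'
  ATTRIBUTE 'au_size'               = '1M',
            'compatible.asm'        = '11.2.0.0.0',
            'compatible.rdbms'      = '11.2.0.0.0',
            'cell.smart_scan_capable' = 'TRUE';

CREATE DISKGROUP RECO_CM01 NORMAL REDUNDANCY
  DISK 'o/*/RECO_CM01*'
  ATTRIBUTE 'au_size'               = '4M',
            'compatible.asm'        = '11.2.0.0.0',
            'compatible.rdbms'      = '11.2.0.0.0',
            'cell.smart_scan_capable' = 'TRUE';
```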
When complete, we’ll re-create our DBM RAC database:
Now we’ll apply our post-rebuild settings:
At this point, we can create our test table, MYOBJ:
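The build script for MYOBJ isn’t shown in the post; one common way to build a multi-gigabyte test table is to multiply DBA_OBJECTS, though this exact approach is an assumption:

```sql
-- Hypothetical build; the real MYOBJ creation script isn't preserved.
CREATE TABLE SYSTEM.MYOBJ AS
SELECT o.*
FROM   dba_objects o,
       (SELECT ROWNUM r FROM dual CONNECT BY LEVEL <= 200);
```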
Now we’ll check the space used by MYOBJ:
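A standard way to check the space used is DBA_SEGMENTS:

```sql
-- Blocks and megabytes allocated to the MYOBJ segment.
SELECT segment_name, blocks, ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  owner = 'SYSTEM'
AND    segment_name = 'MYOBJ';
```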
Finally, let’s run our full-scan test:
We’ll then repeat the tests above with ASM disk groups using 8MB and 4MB AU sizes. I’ll leave out most of the details of the test-case builds, since they mirror the above with the exception of the ASM disk group creation.
The 8MB AU test is below:
And finally, our 4MB storage test:
| Statistic | 1MB AU Size | 4MB AU Size | 8MB AU Size |
| --- | --- | --- | --- |
| Query time | 5.96 seconds | 4.34 seconds | 5.49 seconds |
| “cell physical IO bytes eligible for predicate offload” | 15,278,497,792 | 15,278,497,792 | 15,278,497,792 |
| “cell physical IO interconnect bytes” | 104,495,872 | 101,636,320 | 101,990,816 |
| “cell physical IO interconnect bytes returned by smart scan” | 99,597,056 | 99,596,512 | 99,590,560 |
| “CPU used by this session” | 64 | 53 | 76 |
| “cell flash cache read hits” | 37 | 143 | 149 |
| Pct saved by smart scan (interconnect bytes vs. eligible bytes) | 99.32% | 99.33% | 99.33% |
Configure ASM disk groups on Exadata with a 4MB allocation unit size.