Blog

Where Do Our Extents Reside on ASM Grid Disks in Exadata?

We had quite a bit of discussion about where our extents resided on our Exadata Grid Disks.  The question was posed – if Exadata farms out IO requests to each cell in our storage grid, how does it know which extents reside on which disks? How do these get placed on grid disks, and is […]
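As a rough sketch of how you can see this mapping for yourself, the ASM extent map is exposed through the X$KFFXP fixed table in the ASM instance; the ASM file number used below (256) is just a placeholder for one of your own files:

    -- Run as SYSDBA against the ASM instance. 256 is a placeholder ASM file number,
    -- taken from the datafile name, e.g. +DATA/MYDB/DATAFILE/users.256.987654321
    SELECT x.xnum_kffxp   AS extent_no,
           x.au_kffxp     AS allocation_unit,
           d.name         AS asm_disk,
           d.path         AS grid_disk_path
    FROM   x$kffxp x, v$asm_disk d
    WHERE  x.group_kffxp  = d.group_number
    AND    x.disk_kffxp   = d.disk_number
    AND    x.number_kffxp = 256
    ORDER  BY x.xnum_kffxp;

On Exadata the path column comes back as an o/<cell IP>/<grid disk> string, which is what ties each extent to a specific grid disk on a specific storage cell.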

Read More

Smart Scan: Why is “_small_table_threshold” Important?

The “_small_table_threshold” setting controls a few cell offload functions, such as what types of tables will qualify for caching in Smart Flash Cache.  Here, I’ll try to reason out why it defaults to what it does and what the impact is of changing it for various Exadata features. Based on Oracle documentation, “_small_table_threshold” is an […]
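If you want to check the value in your own environment first, a minimal sketch (run as SYS, since hidden parameters are only visible through the X$ fixed tables) is:

    SELECT p.ksppinm  AS parameter,
           v.ksppstvl AS value,
           p.ksppdesc AS description
    FROM   x$ksppi p, x$ksppcv v
    WHERE  p.indx = v.indx
    AND    p.ksppinm = '_small_table_threshold';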

Read More

Smart Scan: What Impact Does Row Filtering Have on Bytes Returned from Storage Cells?

In one of our recent tests demonstrating Smart Scan processing, we did a full scan on a 4 GB table, selecting all columns and all rows wrapped in a “select count(*)”. Over 4 GB were eligible for predicate offload, but only 2 GB were actually returned over the InfiniBand interconnect. In this section I’ll attempt to show why. […]
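A minimal way to reproduce that kind of comparison yourself is to run the scan and then read two session statistics; BIG_TAB and its predicate below are placeholders for whatever segment you are testing with:

    SELECT /*+ full(t) */ COUNT(*) FROM big_tab t WHERE status = 'VALID';

    -- Bytes eligible for offload vs. bytes actually shipped back over the interconnect
    SELECT n.name, ROUND(s.value/1024/1024) AS mb
    FROM   v$mystat s, v$statname n
    WHERE  s.statistic# = n.statistic#
    AND    n.name IN ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');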

Read More

Column Filtering with Smart Scan

Those familiar with Exadata know that both row filtering and column filtering occur with Smart Scan cell offload operations. Both techniques limit the number of bytes returned from the Exadata storage servers over the InfiniBand interconnect to the compute servers. Here, we’ll explore the impact of column filtering. Below, I’m going to show the Smart […]
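To see the effect in isolation, you can run a narrow and a wide projection of the same table (BIG_TAB and object_id are placeholders) and then compare the same interconnect statistic used in the previous excerpt:

    SET AUTOTRACE TRACEONLY STATISTICS
    -- Narrow projection: only object_id needs to be shipped back from the cells
    SELECT /*+ full(t) */ object_id FROM big_tab t;
    -- Wide projection: every column is shipped back, so interconnect bytes grow
    SELECT /*+ full(t) */ * FROM big_tab t;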

Read More

Using DCLI with Single Quotes in LIST Arguments on Exadata

With Oracle Exadata, CELLCLI is the command-line interface to manage, monitor, and report on storage cell characteristics, configuration, and behavior. While logged on to the storage cell as root, celladmin, or cellmonitor, you can invoke CELLCLI by simply typing “cellcli” at the Linux shell prompt. Once logged in to CELLCLI, you can issue […]
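The quoting problem the full post walks through appears as soon as a LIST filter contains single quotes and the command is pushed out through dcli. A rough sketch of the pattern (cell_group is a placeholder file listing your cells, and the exact escaping can vary with your shell):

    # Run the same cellcli command across all cells; the single quotes around 'DATA.*'
    # can be consumed before cellcli sees them on the remote node
    dcli -g cell_group -l celladmin "cellcli -e list griddisk where name like 'DATA.*'"

    # Escaping the quotes with a backslash is one common workaround
    dcli -g cell_group -l celladmin "cellcli -e list griddisk where name like \'DATA.*\'"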

Read More

Can Data Pump Be Used Against Multiple RAC Instances in Parallel?

The simple answer is no, unless you’ve got a shared file system on which to put your dump and log files. Here’s what happens if not. If we create the same directory name on both servers in a cluster, it’ll work a little bit further but still ultimately fail. Of course, if we […]
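For completeness, here is a hedged sketch of the working case, with a placeholder shared mount point (something cluster-visible such as ACFS, DBFS, or NFS) and a placeholder schema:

    SQL> CREATE DIRECTORY dp_shared AS '/sharedfs/dpdump';   -- must be visible to every node

    $ expdp system directory=dp_shared dumpfile=app_%U.dmp logfile=app_exp.log \
          schemas=app_owner parallel=4 cluster=y

With CLUSTER=Y, Data Pump worker processes may be started on any instance, which is exactly why the dump and log file location has to be shared.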

Read More

Monitoring Exadata Cell Servers with Active Requests

An active request represents a “client” or application-centric view of the I/O requests being processed by a cell. Similar to previous sections, the graphic below shows the detail associated with Active Request monitoring. You can use the ioType attribute to monitor which type of IO is being performed, and if run with “detail” it’ll show results […]
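As a rough sketch of the commands involved (the ioType value in the filter is just an example of the kind of predicate you can apply):

    CellCLI> LIST ACTIVEREQUEST ATTRIBUTES ioType, requestState, dbName, objectNumber
    CellCLI> LIST ACTIVEREQUEST WHERE ioType = 'predicate pushing' DETAIL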

Read More

Monitoring Exadata Cell Servers with Alerts

Alerts indicate warning, critical, clear, and informational messages about operations within a cell. You can list the alert definitions available for every condition in which an alert exists by running the below:

    CellCLI> LIST ALERTDEFINITION ATTRIBUTES name, metricName, description;
             ADRAlert                                              "Incident Alert"
             HardwareAlert                                         "Hardware Alert"
             StatefulAlert_CD_IO_ERRS_MIN     CD_IO_ERRS_MIN       "Threshold Alert"
             StatefulAlert_CG_FC_IO_BY_SEC    CG_FC_IO_BY_SEC      "Threshold Alert"
             […]
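Once the definitions are known, the alert history itself can be filtered the same way; a small sketch that pulls critical alerts nobody has examined yet is:

    CellCLI> LIST ALERTHISTORY WHERE severity = 'critical' AND examinedBy = '' DETAIL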

Read More

Resizing Exadata Grid Disks and ASM Disk Groups

In this post, we’ll show you how to resize your Exadata storage cell Grid Disks and the ASM disk groups that reside on them. We’ll assume we have a storage environment composed of different ASM disk groups for different application and database purposes and, further, different Grid Disks to support these ASM disk group requirements. Let’s […]
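As a rough outline of the moving parts (DATA, cell_group, the 423G size, and the rebalance power are all placeholders, and the order of the two steps depends on whether you are growing or shrinking):

    # 1. Resize the grid disks on every cell for the affected prefix
    dcli -g cell_group -l root "cellcli -e alter griddisk all prefix=DATA, size=423G"

    -- 2. Resize the corresponding ASM disk group and let it rebalance
    SQL> ALTER DISKGROUP DATA RESIZE ALL REBALANCE POWER 11;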

Read More

Exadata Flash Based Grid Disks

Oracle allows you to configure “Flash-Based Grid Disks” from the PCIe flash cards that reside in each Exadata storage cell. This allows you to create ASM disk groups on flash storage and in theory, should yield solid-state performance gains for the segments residing in these ASM disk groups. In this post we’ll perform tests for […]
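A rough sketch of how those flash-based grid disks get carved out, assuming you are willing to give the flash over to grid disks instead of Smart Flash Cache and using placeholder names throughout:

    CellCLI> DROP FLASHCACHE
    CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX=FLASH
    CellCLI> LIST GRIDDISK ATTRIBUTES name, diskType, size WHERE diskType = 'FlashDisk'

    SQL> CREATE DISKGROUP FLASH NORMAL REDUNDANCY DISK 'o/*/FLASH*';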

Read More
