Using IORM with Exadata

IO Resource Management (IORM) provides a means to govern and meter IO from different workloads in the Exadata Storage Server.
  • Consolidation means that multiple databases and applications could share Exadata storage.
  • Different databases in a shared Exadata storage grid could have different IO performance requirements.
  • One of the common challenges with shared storage infrastructure is that of competing IO workloads.
    • Batch vs. OLTP
    • Warehouse vs. OLTP
    • Production vs. Test and Development
  • You can mitigate competing priorities by over-provisioning storage, but this becomes expensive.

Exadata addresses this challenge with IO Resource Management.


Database Resource Management

  • A single database may have many types of workloads with different performance requirements.
  • “Resource consumer groups” allow you to group sessions by workload.
  • After creating resource consumer groups, you specify how resources are used within a resource consumer group.
  • Once resource consumer groups are established, you must map sessions to a consumer group based on distinguishing characteristics.
  • The combination of resource consumer groups and session mappings comprises a “resource plan”.
  • Only one resource plan can be active in a database at a time.
  • A database resource plan is also called an “intradatabase resource plan”.

Let’s show an example of DBRM:



Let’s say you’ve got a database called “DBM”, and let’s say you’ve got a consumer group for Order Management OLTP functions called “OM OLTP”. This consumer group should be allowed to consume more database resources than anything else in the DBM database.

Let’s also imagine you’ve got another OLTP consumer group called “Other OLTP”, and finally, a consumer group called “Reporting” in the DBM database.

On another database, XBM, you’ve got an “online query” consumer group, and a “batch query” consumer group.

We won’t go into details here on how resources are allocated within each intradatabase resource plan, nor will we show our session mapping strategy, but later in this document we’ll show an example.
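To make the picture concrete, here’s a minimal PL/SQL sketch of creating the DBM consumer groups with DBMS_RESOURCE_MANAGER. I’m using underscores in place of the spaces in the group names, and the comments are illustrative; this is just the skeleton, not a complete plan:

```sql
BEGIN
  -- All resource manager changes are staged in a pending area
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OM_OLTP',
    comment        => 'Order Management OLTP sessions');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OTHER_OLTP',
    comment        => 'All other OLTP sessions');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING',
    comment        => 'Reporting and ad-hoc query sessions');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```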


How IORM Extends DBRM

IORM extends DBRM using a concept of “categories”.

While consumer groups represent collections of users and their resource allocation within a database, categories represent collections of consumer groups across *all* databases.

In the diagram below, let’s say we have two categories: one for “interactive” sessions and one for “batch” sessions.

With IORM, you can specify IO precedence to consumer groups in the “interactive” category over consumer groups in the “batch” category.
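Assuming the consumer groups already exist in each database, a sketch of defining these two categories and attaching consumer groups to them might look like the following (CREATE_CATEGORY and the new_category parameter of UPDATE_CONSUMER_GROUP are the DBMS_RESOURCE_MANAGER interfaces for this in 11.2; the names here are illustrative):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CATEGORY(
    category => 'INTERACTIVE',
    comment  => 'Latency-sensitive, user-facing work');
  DBMS_RESOURCE_MANAGER.CREATE_CATEGORY(
    category => 'BATCH',
    comment  => 'Throughput-oriented batch and reporting work');

  -- Attach existing consumer groups to a category
  DBMS_RESOURCE_MANAGER.UPDATE_CONSUMER_GROUP(
    consumer_group => 'OM_OLTP',
    new_category   => 'INTERACTIVE');
  DBMS_RESOURCE_MANAGER.UPDATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING',
    new_category   => 'BATCH');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```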



IO Resource Management Plans

  • IORM provides different approaches for managing resource allocations.
  • If you have multiple workloads within a single database whose resource usage you wish to control, you configure an intradatabase resource plan.
  • If you only have one database in your Exadata Database Machine, your intradatabase resource plan is all you need – IO resource management is handled automatically inside the storage servers based on this intradatabase resource plan.
  • If you have multiple databases in your Exadata Database Machine whose IO resources you wish to govern, you create an interdatabase resource plan.
  • Rules in an interdatabase resource plan specify allocations to databases, not consumer groups.
  • Category resource management is used when you want to control resources primarily by the category of work being done – it allows for allocation of resources amongst categories spanning multiple databases.
  • An IORM plan is the combination of an interdatabase plan and a category plan.


IORM Architecture

IORM manages Exadata IO resources on a per-cell basis when IO requests begin to saturate the cell. When this happens, IORM begins scheduling IO requests according to configured resource plans.

First, the database sends IO requests to the Exadata storage cells. These requests are bundled in an iDB message and include metadata indicating the consumer group and, optionally, the resource category assigned to the IO request.

The requests are sent to different CELLSRV IO queues, based on the order in which they were sent to the storage cell.


The IO requests are then passed from the CELLSRV IO queues to IORM, at which point any configured resource plans are evaluated.

IORM then evaluates the IO requests from each of the “input” consumer groups and databases, validates their priority against the configured resource plans, and schedules the IO into the cell disk queues.



IORM Rules

IORM is only “engaged” when needed.

  • IORM does not intervene if there is only one active consumer group on one database.
  • Any disk allocation that is not fully utilized is made available to other workloads in relation to the configured resource plans.
  • Background IO is scheduled based on its priority relative to user IO.
  • Redo and control file writes always take precedence.
  • DBWR writes are scheduled at the same priority as user IO.
  • For each cell disk, each database accessing the cell has one IO queue per consumer group and three background queues.
  • Background IO queues are mapped to “high”, “medium”, and “low” priority requests with different IO types mapped to each queue.
  • If no intradatabase plan is set, all non-background IO requests are grouped into a single consumer group called OTHER_GROUPS.

IORM Planning

In this section, we’ll plan our IORM test case.

  • DBM has three consumer groups, “OM OLTP”, “OTHER OLTP”, and “REPORTING”.
  • XBM will have two consumer groups, “ONLINE QUERY” and “BATCH QUERY”.
  • DBM Intradatabase Resource Plan
    • 50% of resources allocated to “OM OLTP”
    • 30% of resources allocated to “OTHER OLTP”
    • 20% of resources allocated to “REPORTING”
  • XBM Intradatabase Resource Plan
    • 70% of resources allocated to “ONLINE QUERY”
    • 30% of resources allocated to “BATCH QUERY”
  • Category Plan
    • 70% of resources allocated to INTERACTIVE category
    • “OM OLTP” and “OTHER OLTP” in INTERACTIVE category for DBM
    • “ONLINE QUERY” in INTERACTIVE category for XBM
    • 30% of resources allocated to BATCH category
    • “REPORTING” in “BATCH” category for DBM
    • “BATCH QUERY” in “BATCH” category for XBM
  • Interdatabase Plan
    • 60% of resources allocated to database DBM
    • 40% of resources allocated to database XBM


IORM: Understanding the Math

Once we’ve decided what we want our resource plans to look like, it’s helpful to diagram things so we know how much IO each type of operation will be allowed to consume.

So assuming 100% IO allocation to the storage cell, the first thing IORM looks at is the category plans. In our case, we’re saying that we’ve got 70% allocated to the Interactive category and 30% allocation to batch.

Next, IORM looks at the interdatabase plan. In our test, we’re allocating 60% to DBM and 40% to XBM.

Next we look at consumer group resource allocation. In our case, we’ve got in DBM: 50% to “OM OLTP”, 30% to “OTHER OLTP”, and 20% to “REPORTING”. For XBM, we’ve got 70% allocated to “ONLINE QUERY” and 30% to “BATCH QUERY”.

IORM then uses a probability-based formula to prioritize IO to the different consumer groups, based on each group’s percentage in the intradatabase plan, its database’s percentage in the interdatabase plan, and its category’s allocation in the category plan.

In our case, the math ends up looking like this:
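As a back-of-envelope check, you can work out each consumer group’s effective share of cell IO by walking the plans top-down: the category plan first, then the interdatabase plan within each category, then the intradatabase percentages renormalized across the consumer groups a database has in that category. The short Python sketch below uses the numbers from our planning section; the renormalization model is a simplification of IORM’s documented behavior, not Oracle’s exact implementation:

```python
# Plans from the planning section above (fractions rather than percentages).
category_plan = {"INTERACTIVE": 0.70, "BATCH": 0.30}
interdb_plan  = {"DBM": 0.60, "XBM": 0.40}
intradb_plan  = {
    "DBM": {"OM OLTP": 0.50, "OTHER OLTP": 0.30, "REPORTING": 0.20},
    "XBM": {"ONLINE QUERY": 0.70, "BATCH QUERY": 0.30},
}
categories = {
    "DBM": {"OM OLTP": "INTERACTIVE", "OTHER OLTP": "INTERACTIVE",
            "REPORTING": "BATCH"},
    "XBM": {"ONLINE QUERY": "INTERACTIVE", "BATCH QUERY": "BATCH"},
}

def effective_shares():
    """Effective cell IO share per (database, consumer group)."""
    shares = {}
    for db, groups in intradb_plan.items():
        for cg, pct in groups.items():
            cat = categories[db][cg]
            # A database's slice of a category: category% x interdatabase%.
            db_slice = category_plan[cat] * interdb_plan[db]
            # Renormalize across this database's groups in the same category.
            cat_total = sum(p for g, p in groups.items()
                            if categories[db][g] == cat)
            shares[(db, cg)] = db_slice * pct / cat_total
    return shares

if __name__ == "__main__":
    for (db, cg), share in sorted(effective_shares().items()):
        print(f"{db} / {cg}: {share:.2%}")
```

With these inputs, “OM OLTP” works out to 26.25% of cell IO (70% × 60% × 50/80), “OTHER OLTP” to 15.75%, “REPORTING” to 18%, “ONLINE QUERY” to 28%, and “BATCH QUERY” to 12%, which together account for 100% of the cell’s IO.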



IORM in Action

We’ve got two databases sharing our Exadata quarter rack, DBM and XBM:

Now that we have an idea how we want to set up our intradatabase plans, interdatabase plan, and category plan, let’s start putting all the pieces together. I am going to use DBMS_RESOURCE_MANAGER in these sections, but the same could be done in Grid Control or Database Control.



When we have our intradatabase plans created on our DBM and XBM databases, we then have to enable an IORM plan on the Exadata cells. This is done using an “alter iormplan” statement in CELLCLI, and in our “complete” example, we’ll set an interdatabase “dbplan” as well as a category “catplan”.

The screen print below shows the script I’ll use with dcli to enable an IORM plan on each of the three cells:
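The screen print isn’t reproduced here, but a sketch of what such a CELLCLI script might look like follows (the file would be executed on each cell via dcli; the directive syntax is per the 11.2 cell software documentation, and the trailing “-” is the CELLCLI line-continuation character – treat the exact plan contents as illustrative):

```text
alter iormplan objective='auto'

alter iormplan -
  catplan=((name=INTERACTIVE, level=1, allocation=70), -
           (name=BATCH, level=1, allocation=30), -
           (name=OTHER, level=2, allocation=100)), -
  dbplan=((name=DBM, level=1, allocation=60), -
          (name=XBM, level=1, allocation=40), -
          (name=other, level=2, allocation=100))
```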


To Note:

  • objective=’auto’ is required to change from the default, which is effectively no IO resource management.
  • There are a handful of choices for the objective – “auto” will automatically adjust and tailor IO scheduling based on the workload, “low_latency” will tailor IORM’s IO allocation toward small IO requests and is designed for OLTP applications, and “off” simply turns off the IORM plan’s IO metering.
  • In the example above, I’m specifying both an interdatabase IO plan, using “dbplan”, and a category plan, using “catplan”.

We can then run dcli to create our IORM plan as follows:
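The output isn’t reproduced here, but the invocation is along these lines (the script and cell group file names are assumptions; dcli’s -x option ships scripts to each cell, and files with a .scl extension are run through CELLCLI):

```text
$ dcli -g cell_group -x iorm_plan.scl
```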


Next we’ll map consumer groups in the DBM database:
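A sketch of the mapping calls is below. SET_CONSUMER_GROUP_MAPPING is the documented DBMS_RESOURCE_MANAGER interface; the usernames (OM_APP, RPT_APP) are hypothetical stand-ins for whatever attribute you map on in your environment:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Map sessions to consumer groups by database user (illustrative names)
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'OM_APP',
    consumer_group => 'OM_OLTP');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'RPT_APP',
    consumer_group => 'REPORTING');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```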



And when done, we’ll validate:
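Validation can be as simple as querying the data dictionary; for example:

```sql
-- Session-to-consumer-group mappings
SELECT attribute, value, consumer_group
  FROM dba_rsrc_group_mappings;

-- Consumer groups and their category assignments
SELECT consumer_group, category
  FROM dba_rsrc_consumer_groups;
```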




Monitoring IORM with CELLCLI

You can monitor IORM from CELLCLI and understand resource consumption using metrics and statistics. The table above shows various important metrics that you can monitor.

The “DB_IO_RQ_[SM|LG]” metrics show the cumulative number of small and large IO requests issued by each database, and they’re a good indication of which databases are generating the most load. In these and the rest of the metrics, “SM” means small IO (< 128K) and “LG” means large IO (> 128K).
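A representative CELLCLI query for these metrics might look like the following (the LIKE pattern is a regular expression; the filter shown is just one example of narrowing the output):

```text
CellCLI> list metriccurrent where name like 'DB_IO_RQ_.*'
```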


Testing our IORM Plan

Below, we’ll use the intradatabase, interdatabase, and category plans that comprise our IORM plan and introduce two large workloads into our environment, one for DBM and one for XBM. We’ll full-scan our largest table in each environment, simultaneously, and measure the IORM-induced waits at the storage cell level:


As you can see, we’re seeing wait times for our databases once the IORM plan is in effect. The session in DBM is part of a resource consumer group that is allowed, via a category, less IO, so it’s waiting more for resources than our XBM session.


Placing an IO Limit on a Database

The test cases done so far have defined a relative resource allocation for each database by implementing an interdatabase resource plan. You can also set a ceiling on the maximum amount of IO resources a database can consume, regardless of cell load.

Let’s first check our current IORM plans in place:
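The current plan on each cell can be displayed with a “list iormplan” command; for example, across all cells with dcli (cell_group is an assumed file listing the cell hostnames):

```text
$ dcli -g cell_group cellcli -e "list iormplan detail"
```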



Let’s run a test query while connected as SYSTEM to DBM:



Now let’s change this to give DBM a limit of 10% of IO resources:
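The screen print isn’t shown here, but the change amounts to adding a “limit” attribute to the dbplan directive for DBM. A sketch (the limit attribute is part of the documented dbplan directive syntax; the “other” directive here is illustrative):

```text
CellCLI> alter iormplan dbplan=((name=DBM, limit=10), -
                                (name=other, level=1, allocation=100))
```

Unlike an allocation, a limit is a hard ceiling: the database is capped at that share of disk utilization even when the cells are otherwise idle.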



Now let’s re-run our previous query:



As you can see from the above, the query took roughly 10 times longer to execute than previously, which is evidence that our limit was in effect.

Also, look at our utilization percentage of disks when the second query ran – they’re lower than previously.



As you can see, we witnessed steady resource utilization that’s over 5% and no waits.

  • IO Resource Management is an Exadata feature designed to govern IO to the Exadata Storage Cells.
  • IORM is used in conjunction with DBRM.
  • Consumer groups, resource plans, resource group mappings and directives are part of an “intradatabase” resource plan; i.e., resource allocation management within a database.
  • Interdatabase plans are used to control IO resource allocation between multiple databases sharing an Exadata storage server.
  • Category plans are a way to group resource consumer groups from an intradatabase plan into an IORM plan and further control IO resource allocation in the Exadata storage servers.