Building Grid Disks
After dropping the existing ASM disk groups and deleting the relevant Grid Disks, we need to create new ones based on the requirements above.
With these requirements in mind, we need to be careful about how we create our Grid Disks, how we size them, and the order in which we do things. We know we have a 29.125 GB Grid Disk created on every Cell Disk, and that each Cell Disk is approximately 557 GB in size. We also know that our RECO Grid Disks should be approximately 100 GB each, so the order in which we'll create our Grid Disks and their respective sizes will look like this:
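As a rough sketch of the CellCLI syntax involved (the DATA_CM01 and RECO_CM01 prefixes and the 428G DATA size are illustrative assumptions, not the exact values from the sizing above), creating the Grid Disks across all cells from a database node might look like this:

# cell_group is a file listing all storage cell hostnames
# Create the larger DATA Grid Disks first, then let RECO take the remaining free space
dcli -g cell_group -l root "cellcli -e create griddisk all harddisk prefix=DATA_CM01, size=428G"
dcli -g cell_group -l root "cellcli -e create griddisk all harddisk prefix=RECO_CM01"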
Now that we have all our Grid Disks created on all cells, let’s validate them:
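A quick dcli pass over the cells, listing each Grid Disk with its Cell Disk, size, and status, is one way to do this (cell_group is again assumed to be a file of cell hostnames):

dcli -g cell_group -l root "cellcli -e list griddisk attributes name, celldisk, size, status"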
And finally, let’s check one Cell Disk’s Grid Disks on one cell server and validate the byte offsets and sizes to make sure everything looks as expected:
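As a sketch, from CellCLI on one cell you can restrict the listing to a single Cell Disk and pull the offset and size attributes (the CD_00_cm01cel01 Cell Disk name is an assumption):

CellCLI> list griddisk attributes name, offset, size where celldisk=CD_00_cm01cel01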
Everything looks as planned, so let’s begin our ASM Disk Group builds.
Building ASM Disk Groups
Remember our test goals:
The scripts below show how to create ASM Disk Groups to match these requirements:
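A hedged sketch of what such scripts typically contain, run as SYSASM from one ASM instance (the disk group names, redundancy level, and attribute values below are assumptions; substitute your own):

-- DATA disk group over the DATA_CM01 Grid Disks on all cells
CREATE DISKGROUP DATA_CM01 NORMAL REDUNDANCY
  DISK 'o/*/DATA_CM01*'
  ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0',
            'compatible.asm' = '11.2.0.0.0',
            'cell.smart_scan_capable' = 'TRUE',
            'au_size' = '4M';

-- RECO disk group over the RECO_CM01 Grid Disks on all cells
CREATE DISKGROUP RECO_CM01 NORMAL REDUNDANCY
  DISK 'o/*/RECO_CM01*'
  ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0',
            'compatible.asm' = '11.2.0.0.0',
            'cell.smart_scan_capable' = 'TRUE',
            'au_size' = '4M';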
Now let’s check the ASM Disk Groups:
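A simple query against the ASM instance confirms they are mounted and sized as expected:

SQL> SELECT name, type, total_mb, free_mb, state FROM v$asm_diskgroup;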
Building our DBM Database
Before beginning to build a new DBM RAC database on our Exadata Database Machine, let’s do a quick reboot to make sure our cluster is healthy and our ASM disk groups mount and are available. We’ll stop the cluster via crsctl first and then reboot each node:
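As a sketch, assuming the standard Exadata Grid Home location of /u01/app/11.2.0/grid, the stop-and-reboot sequence as root looks something like this:

# Stop Clusterware on all nodes from the first database node
/u01/app/11.2.0/grid/bin/crsctl stop cluster -all
# Then, on each database node, stop the local CRS stack and reboot
/u01/app/11.2.0/grid/bin/crsctl stop crs
reboot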
After the reboot, let’s check CRS:
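For example, as root on each node:

crsctl check crs
crsctl check cluster -all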
Now we’ll check CRS resources using “crsctl stat res -t” – we initially found that the ASM disk groups were not mounted on +ASM2 on cm01dbm02, but after mounting them manually they came up. Part of the output below is truncated, but take my word that the only resources not ONLINE are the “gsd” resources, which don’t start and are disabled by default in 11g:
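Mounting a disk group manually on the +ASM2 instance is just a standard ALTER DISKGROUP command; with the assumed disk group names from earlier, something like:

SQL> ALTER DISKGROUP DATA_CM01 MOUNT;
SQL> ALTER DISKGROUP RECO_CM01 MOUNT;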
Now that we know we have a healthy cluster, let’s launch DBCA and create our “DBM” database again. Below, I’ll only show important storage-related DBCA screens and assume the reader is familiar with using DBCA in 11gR2:
After the database is created, we’ll validate that everything is where it should be. The outputs below show the current database status and details:
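A quick way to check this is with srvctl (the lowercase database name dbm here is assumed to match the DBM name used with DBCA above; adjust if yours differs):

# Confirm all instances are running and review the database configuration
srvctl status database -d dbm
srvctl config database -d dbm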
Before continuing, we need to set cluster_interconnects on each instance to the appropriate InfiniBand IP address:
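A sketch of the change, assuming InfiniBand addresses of 192.168.10.1 and 192.168.10.2 and instance names dbm1/dbm2 (all of these are illustrative; use your own values):

-- cluster_interconnects is not dynamic, so set it in the spfile and restart each instance
ALTER SYSTEM SET cluster_interconnects = '192.168.10.1' SCOPE = SPFILE SID = 'dbm1';
ALTER SYSTEM SET cluster_interconnects = '192.168.10.2' SCOPE = SPFILE SID = 'dbm2';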
I found this requirement after seeing an abnormally high number of “gcs” and “ges”-related wait events during a Data Pump import I was doing for a different purpose. This finding led me to examine the log files from the original configuration of the database (/opt/oracle.SupportTools/onecommand/tmp/dbupdates.lst).
Examining this log file led me to some additional settings to configure, which are provided below:
Summary
When complete, we have a fully functional RAC database on Exadata with brand-new ASM disk groups and Exadata Grid Disks. In future posts, we’ll run tests against these different ASM disk groups.