Oracle Database Appliance Installation

In this post I’m going to share the end-to-end process of getting an Oracle Database Appliance (ODA) up and running at a client site. No pictures, just words. And right up front, I’ll tell you this is likely going to be one of the shortest posts I’ll ever write on this blog.
Why is this? In short, ODA is easy. Very easy. And the end result is a fully functional Oracle 11gR2 RAC cluster.
Preparing for an Oracle Database Appliance installation is straightforward.
Here’s what we did:
– Prior to beginning the installation, we made sure to read through the Oracle Database Appliance Installation, Configuration, and User’s Guide (E22692-11).
– Racked the ODA in the data center.  The ODA is a 4U machine, so we made sure we had a spot to put it.
– Connected network cables for the Net0 and Net1 networks (eth2 and eth3) from each of the two System Controllers (SCs) into a data center switch on the appropriate network.
– Connected the NetMgt network to the same switch, on the same network.
– Determined host name and IP address information for the following:
– Public interface for SC1
– Public interface for SC2
– VIP address for SC1
– VIP address for SC2
– ILOM address for SC1
– ILOM address for SC2
– 3 SCAN addresses
Only 8 IP addresses are strictly required, but we created one additional FQDN/IP for a third SCAN address, bringing the total to 9.
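Since every one of these names has to resolve correctly before the deploy, it’s worth a quick DNS sanity check up front. Here’s a minimal sketch; the hostnames are hypothetical placeholders, so substitute your own naming convention:

```shell
# check_dns: report whether each required FQDN resolves.
check_dns() {
  for h in "$@"; do
    if getent hosts "$h" >/dev/null 2>&1; then
      echo "OK $h"
    else
      echo "MISSING $h"
    fi
  done
}

# Hypothetical names -- substitute the real public, VIP, ILOM,
# and SCAN entries for your site.
check_dns oda01 oda02 oda01-vip oda02-vip \
          oda01-ilom oda02-ilom oda-scan
```

Run it from a host on the same network the ODA will live on, and don’t start the deploy until every line says OK.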
Installing ODA
– Logged into each SC as root and ran:
/opt/oracle/bin/oakcli configure firstnet
This configured the public, bonded interface (bond0 in our case) on both SCs.
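It doesn’t hurt to verify the bond actually came up before moving on. A couple of standard Linux checks, assuming the interface is named bond0 as it was in our case:

```shell
# Report the state of the bonded public interface (bond0 assumed;
# adjust the name to match your configuration).
if [ -d /sys/class/net/bond0 ]; then
  echo "bond0 operstate: $(cat /sys/class/net/bond0/operstate)"
  grep -E "Bonding Mode|MII Status" /proc/net/bonding/bond0
else
  echo "bond0 not configured on this host"
fi
```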
– Downloaded the ODA software bundle from MOS and staged it in /tmp
– Ran the following to unpack the installation files:
  # /opt/oracle/bin/oakcli unpack -package /tmp/
– From SC1, ran:
# oakcli deploy
– Entered the System Name. During installation, oakcli appends a “1” to the hostname of the first SC and a “2” to the second
– Changed IP address and hostname information for the public networks, VIP addresses, SCAN addresses, DNS servers, and NTP servers. At this point, we changed the actual SC hostnames to match the client’s naming conventions
– We chose to deploy as RAC
– We chose a “medium” sized database
– We chose to back up locally; this means that the Appliance Manager configures a “RECO” diskgroup that is twice the size of the “DATA” diskgroup
– Kicked off the installation. From this point, the entire process took about two hours. During this time, the DBAs, the IT folks, and I discussed general Oracle 11gR2 topics, Oracle ASM, and Grid Infrastructure, and began a dialogue about backup strategy, migration strategy, and so forth.
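One note on that backup choice: the sizing implication is simple arithmetic. With local backups, RECO is sized at twice DATA, so DATA ends up with roughly one third of the usable ASM capacity. A quick sketch, with a purely hypothetical usable-capacity figure:

```shell
usable_gb=4000                      # hypothetical usable ASM capacity in GB
data_gb=$(( usable_gb / 3 ))        # DATA gets roughly one third
reco_gb=$(( usable_gb - data_gb ))  # RECO gets the remaining two thirds
echo "DATA: ${data_gb} GB  RECO: ${reco_gb} GB"
```

Keep that split in mind when you estimate how much database you can actually fit on the box.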
When Done
When complete, you’ve got a 2-node RAC cluster with an RDBMS home and a Grid Infrastructure home, with ASM running out of the latter.
– From either node, run:
# oakcli validate -v -d
– From either node, as oracle, run:
$ srvctl status database -d <DB name>
$ srvctl status nodeapps
(and whatever else you like)
– Launch Enterprise Manager DB Control and validate your environment
– Start thinking about migrating data to it.
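To make those command-line checks repeatable, you can wrap them in a small shell function that labels each command and flags any failure. The database name orcl below is a placeholder, and crsctl stat res -t is a standard Clusterware command that shows the full resource state:

```shell
# run_check: run a command, label its output, and flag failures.
run_check() {
  echo "== $*"
  "$@" 2>/dev/null || echo "FAILED: $*"
}

run_check srvctl status database -d orcl   # 'orcl' is a placeholder name
run_check srvctl status nodeapps
run_check crsctl stat res -t               # full Clusterware resource state
```

Scan the output for FAILED lines before handing the cluster over.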
Lessons Learned
In truth, the only thing you really need to worry about when installing ODA is your network.  Make sure DNS has the right entries in it, make sure your network connectivity is sound, and make sure you plan things out.
Beyond this, the biggest lesson learned is that the simplicity of the Oracle Database Appliance works as advertised.   For this particular client, they needed a robust, fault-tolerant Oracle database infrastructure deployed very quickly.   Based on this easy installation and others like it, it’s easy to see that ODA is the most pain-free way to get a RAC cluster up on 11gR2.