Did you know that you can now monitor an Oracle Real Application Cluster through AppManager?
Yes, you can monitor Oracle RAC in depth through the module 'AppManager for Oracle UNIX 8.0', released in October 2016.
This release provides in-depth monitoring of Oracle Real Application Clusters (RAC), in addition to stand-alone Oracle Database monitoring.
What can be monitored?
RAC Health, where you can monitor cluster health, node status and utilization.
Oracle ASM (Automatic Storage Management) Monitoring, where ASM disk utilization, ASM I/O stats, and so on can be tracked.
You can monitor Clusterware health as well.
Data Space Monitoring, under which you can monitor Tablespace availability, Data File space usage, Intelligent Data Placement, and so on.
Resource Monitoring, where you can monitor the top resource-consuming users, the top resource-consuming SQL statements running against the cluster, various user ratios, and so on.
Detailed Log Monitoring, which gives you out-of-the-box monitoring of the Alert Log and Redo Log.
In addition, you can monitor Voting Disk status, sysstat statistics, and more.
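As an illustration of the kind of data behind the ASM metrics listed above (this is a hand-run query against Oracle's standard V$ASM_DISKGROUP view, not the module's own internal query), disk-group utilization can be inspected like this:

```sql
-- Illustrative only: ASM disk-group space usage from the standard
-- V$ASM_DISKGROUP view (run as a user with access to the ASM instance).
SELECT name,
       total_mb,
       free_mb,
       ROUND(100 * (total_mb - free_mb) / total_mb, 1) AS pct_used
FROM   v$asm_diskgroup;
```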
Prerequisites for using this module:
This is a proxy-based monitoring module and has a few prerequisites:
- Install AppManager for UNIX Agent 8.0 or later on any UNIX machine, preferably a non-RAC node.
- Configure password-less SSH between the agent machine's user and the grid user on one of the RAC nodes.
- NOTE: It is recommended that you have one Agent proxy machine for monitoring one Oracle RAC Cluster.
- Add entries to the /etc/hosts file of the agent machine for each RAC node.
- NOTE: The Agent machine must be able to resolve the SCAN IPs.
- Use the UNIX Agent Manager (UAM) or the wcPatch utility to apply the respective OracleRAC patch for the UNIX Agent.
- For example, apply the p75p12 patch for UNIX Agent 7.5.
- Use UAM to configure the OracleRAC options, or run the AM_HOME/mo/bin/RACConfig.sh script, and follow the on-screen instructions to complete the configuration. You can find the AM_HOME in the /etc/vsaunix.cfg file.
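The SSH and /etc/hosts prerequisites above can be sketched as a short shell session. The host names, IPs, and the grid user below are illustrative placeholders, not values from the module; substitute your own environment's values:

```shell
#!/bin/sh
# Sketch of the proxy-agent prerequisites. Host names, IPs, and the "grid"
# user are placeholders -- substitute your environment's values.

# 1. Password-less SSH from the agent machine's user to the grid user on one
#    RAC node (commented out because it needs a live node):
#      ssh-keygen -t rsa           # generate a key pair if you don't have one
#      ssh-copy-id grid@racnode1   # install the public key on the RAC node
#      ssh grid@racnode1 hostname  # should succeed with no password prompt

# 2. /etc/hosts entries for the agent machine, one line per RAC node; the
#    agent must also be able to resolve the SCAN IPs:
HOSTS_ENTRIES='192.0.2.11  racnode1
192.0.2.12  racnode2
192.0.2.20  rac-scan'
printf '%s\n' "$HOSTS_ENTRIES"   # append these lines to /etc/hosts
```

After this, running AM_HOME/mo/bin/RACConfig.sh (or UAM) picks up the rest of the configuration interactively.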
Before you go, here are some quick troubleshooting tips if you've already started using AppManager for Oracle UNIX 8.0:
Module Level Troubleshooting:
- Before running Knowledge Scripts, run the $PSHOME/netiq/bin/wcAppManConfig script to cap the Oracle module and Oracle RAC module log files at a specified size (in MB):
- Oracle module back-end log maximum size (default: 8 MB).
- Oracle module Knowledge Script log maximum size (default: 2 MB).
- You can always have a look at the Managed Object (MO) logs for any job.
- For example, OraLog_[JobID] and OraLog_[JobID]_1.log; OraLog_[JobID] contains the current logs.
- You also have Knowledge Script (KS) level log files for troubleshooting any job.
- For example, [OracleUNIX KS Name]_[JobID] and [OracleUNIX KS Name]_[JobID]_1.log; [OracleUNIX KS Name]_[JobID] contains the latest logs.
NOTE: You can also change the maximum size setting for the MO and KS logs through the UNIX Agent Manager (UAM).
Oracle RAC Level Troubleshooting:
- Verify that CRS is running:
- crsctl check cluster -all
- Check the status of the SCAN:
- srvctl status scan
- Check the status of the ASM instances:
- srvctl status asm
- Check the status of the database instances:
- srvctl status database -d ORCL
- Check the node apps:
- srvctl status nodeapps
- Check the SCAN config:
- srvctl config scan
- Check the database config:
- srvctl config database -d ORCL
- Verify that you can connect to the database:
- Verify the status for instances:
- SQL> select instance_name, status, startup_time from gv$instance;
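The cluster-level checks above can be bundled into a small sweep script, sketched below. "ORCL" is the example database name used in this post; the script prints each check before attempting it and skips any whose binary is not on PATH, since crsctl and srvctl normally live in the Grid Infrastructure home:

```shell
#!/bin/sh
# Quick RAC health sweep based on the checks above. "ORCL" is the example
# database name from this post -- substitute your own.
DB=ORCL
CHECKS="crsctl check cluster -all
srvctl status scan
srvctl status asm
srvctl status database -d $DB
srvctl status nodeapps
srvctl config scan
srvctl config database -d $DB"

printf '%s\n' "$CHECKS" | while IFS= read -r cmd; do
    echo "== $cmd"
    # Run the check only if its binary is on PATH (you may need to add the
    # Grid Infrastructure home's bin directory to PATH first):
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        $cmd
    fi
done
```

The final connectivity check (the gv$instance query) still goes through SQL*Plus, as shown above.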