Generally speaking, a least-squares regression will suffice for forecasting, but single or double exponential smoothing may yield more accurate predictions under certain circumstances.
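For reference, single exponential smoothing computes each smoothed value from the current observation and the previous smoothed value:

   smoothed(t) = alpha * observed(t) + (1 - alpha) * smoothed(t-1)

where alpha, a constant between 0 and 1, controls how quickly older observations are discounted; double exponential smoothing applies the same recurrence a second time to capture trend. The notation here is generic and is not tied to any particular forecasting tool.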
A need has already been identified for collecting database statistics from all of the Oracle instances. In short, we want a proactive performance and tuning system that will collect statistics from every instance, consolidate them into a central repository, and report on exceptions and trends before they become problems.
Now the question arises: How do we extract this information and place it into our Oracle database? We start by determining which data items we want to track, and then designing a table structure that meets our requirements (see Figure 11.1).
Figure 11.1 A sample schema for storing Oracle performance information.
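To make the model concrete, here is a minimal sketch of what the central PT_INSTANCE table might look like. The column names and datatypes are illustrative assumptions, not the actual schema from Figure 11.1, and should be adjusted to match it:

CREATE TABLE pt_instance (
   host_name          VARCHAR2(30),    -- remote host that produced the snapshot (assumed)
   sid                VARCHAR2(16),    -- Oracle SID of the instance (assumed)
   run_date           DATE,            -- date and time the bstat-estat sample was taken
   buffer_hit_ratio   NUMBER(5,3),     -- hit ratio captured from stats$latches
   sleeps             NUMBER           -- latch sleeps for the same sample
);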
The philosophy for this performance and tuning repository is to collect virtually all of the information from the bstat-estat reports and the V$ and DBA tables. The original information is collected on each remote agent and stored in local tables. For example, on Saturday at 8:00 AM, a crontab job on each agent could invoke the export and compress utilities to extract the information into a flat file. Following extraction, the data is transferred via anonymous FTP to the central host, where the extract files are loaded into the master Oracle tables. Each remote host then has its local performance and tuning tables re-created for the next collection period.
At a predetermined time on Saturday, a crontab job will verify that all remote agents have delivered their daily extract files and notify the DBA of any agents that have failed to do so.
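As a rough sketch of that verification step, the central repository can be queried for registered agents that have not delivered rows for the current collection date. The pt_hosts table of registered agents and the host_name and run_date columns are assumptions about the schema, not part of the original design:

SELECT h.host_name
FROM   main.pt_hosts h                 -- assumed list of registered remote agents
WHERE  NOT EXISTS
       ( SELECT 'x'
         FROM   main.pt_instance i
         WHERE  i.host_name = h.host_name
         AND    TRUNC(i.run_date) = TRUNC(SYSDATE) );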
The next step is to translate the model into some physical Oracle tables. Now that the empty database has been created, we will need to alter the utlestat.sql program to insert the information into our new tables whenever it is invoked. Here is an example of how the hit ratios can be captured:
INSERT INTO oracle.pt_instance
   (buffer_hit_ratio)
SELECT
   ROUND(DECODE(gets-misses,0,1,gets-misses)/DECODE(gets,0,1,gets),3)
FROM
   stats$latches
WHERE
   gets != 0
AND
   name = 'buffer cache lru';   -- latch name may vary by Oracle release
Once all of the modifications have been completed, this new version of utlestat.sql will replace the older version. Whenever it is invoked, it will add rows to our performance and tuning database.
To automate the collection, set up a cron job to start utlbstat.sql at a predetermined time, followed by utlestat.sql several hours later. This way you will get a consistent record of the status of your Oracle database at the same time each day. Of course, if you want to measure trends during the processing day, you could run the pair every hour, with 15 minutes between bstat and estat.
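A minimal sketch of what the two scheduled jobs would run from SQL*Plus, using the ? shorthand for ORACLE_HOME (the exact schedule and connection details will vary by site):

-- Job 1 (for example, 6:00 AM): begin the sample period
@?/rdbms/admin/utlbstat

-- Job 2 (several hours later): end the sample period and produce the report,
-- which now also inserts rows into the performance and tuning tables
@?/rdbms/admin/utlestat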
In a distributed environment, it is necessary to transfer all of the local table data into a large, centralized Oracle database that holds performance and tuning information for all of the Oracle instances. This can be easily accomplished by running a cron job from the main repository. This cron job executes the SQL necessary to select all rows from each remote table and insert them into the main table, and then either truncates or deletes all rows from the remote table:
INSERT INTO main.pt_instance
   ( SELECT * FROM oracle.pt_instance@bankok );

DELETE FROM oracle.pt_instance@bankok;

COMMIT;   -- make the copy and the purge permanent together
We can now run reports and alert scripts against the centralized repository to predict when problems will be encountered. Here is an example of a query that lists any tables that have added extents since the prior week's snapshot, or that have grown beyond 50 extents (see Listing 11.6).
Listing 11.6 A sample table report.
COLUMN c0 HEADING "SID";
COLUMN c1 HEADING "Owner";
COLUMN c2 HEADING "Tablespace";
COLUMN c3 HEADING "Table";
COLUMN c4 HEADING "Size (KB)" FORMAT 999,999;
COLUMN c5 HEADING "Next (KB)" FORMAT 99,999;
COLUMN c6 HEADING "Old Ext" FORMAT 999;
COLUMN c7 HEADING "New Ext" FORMAT 999;

SET LINESIZE 150;
SET PAGESIZE 60;

BREAK ON c0 SKIP 2 ON c1 SKIP 1;
TTITLE " Table Report| > 50 Extents or new extents";

SPOOL /tmp/rpt10

SELECT DISTINCT
   b.sid                            c0,
   substr(b.owner,1,6)              c1,
   substr(b.tablespace_name,1,10)   c2,
   substr(b.table_name,1,20)        c3,
   (b.blocks_alloc*2048)/1024       c4,    -- assumes a 2K database block size
   c.next_extent/1024               c5,
   a.extents                        c6,
   b.extents                        c7
FROM
   perf       a,
   perf       b,
   dba_tables c
WHERE
   rtrim(c.table_name) = rtrim(b.table_name)
AND
   a.sid = b.sid
AND
   rtrim(a.tablespace_name) <> 'SYSTEM'
AND
   a.tablespace_name = b.tablespace_name
AND
   a.table_name = b.table_name
AND
   to_char(b.run_date) = to_char(round(sysdate,'DAY')-7)  -- closest Sunday minus one week
AND
   to_char(a.run_date) = to_char(b.run_date-7)            -- one week prior to that
AND
   (
      a.extents < b.extents    -- the extent count has increased
   OR
      b.extents > 50           -- or the table already exceeds 50 extents
   )
-- AND b.extents - a.extents > 1   -- optional: require growth of more than one extent
ORDER BY
   b.sid;

SPOOL OFF;
Notice that this query joins the PERF table with itself, comparing the table data for one week against the table data for the prior week. Reports like these are the basis for automated exception reporting, since they can be used to detect out-of-the-ordinary conditions and alert the developer before performance becomes a problem.
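As one more illustration, an alert script could scan the central repository for instances whose buffer hit ratio has fallen below a chosen threshold. The 0.70 cutoff and the host_name, sid, and run_date columns are illustrative assumptions consistent with the earlier sketch, not part of the published schema:

SELECT
   host_name,
   sid,
   run_date,
   buffer_hit_ratio
FROM
   main.pt_instance
WHERE
   buffer_hit_ratio < 0.70        -- alert threshold chosen for illustration
AND
   run_date > SYSDATE - 7         -- limit the scan to the past week
ORDER BY
   host_name, run_date;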