Here is an easy way to tell the number of times that a non-pinned stored procedure was swapped out of memory and required a reload. To measure memory effectively, two methods are recommended. The first is to regularly run the bstat/estat utilities (utlbstat.sql and utlestat.sql, usually located in ~/rdbms/admin) to measure SGA consumption over a range of time. Another handy method is to write a snapdump utility that interrogates the SGA and notes any exceptional information relating to the library cache, including measurements such as the data dictionary hit ratio, the library cache miss ratio, and the per-namespace hit ratios, pin ratios, and reload counts.
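For a quick snapshot of this reload activity, a query along the following lines can be run against the v$db_object_cache view. This is only a sketch; the list of object types worth watching will vary from site to site.

REM Sketch: non-pinned PL/SQL objects loaded more than once since startup
REM (each load beyond the first implies the object was aged out and reloaded)
COLUMN name FORMAT a30
SELECT owner,
       name,
       type,
       loads,
       executions
FROM   v$db_object_cache
WHERE  type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'TRIGGER')
AND    kept = 'NO'
AND    loads > 1
ORDER BY loads DESC;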
Also, be aware that the relevant parameter, shared_pool_size, is used for other objects besides stored procedures. This means that one parameter fits all, and Oracle offers no method for isolating the amount of storage allocated to any subset of the shared pool.
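Although no parameter reserves a slice of the pool for stored procedures, a rough picture of how the shared pool is currently being consumed can be had from v$sgastat. This is a sketch only; the row names and the columns available vary somewhat between Oracle releases.

REM Sketch: major shared pool consumers, in bytes
COLUMN bytes FORMAT 999,999,999
SELECT name,
       bytes
FROM   v$sgastat
WHERE  name IN ('sql area', 'library cache', 'dictionary cache', 'free memory')
ORDER BY bytes DESC;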
Listing 2.4 is a sample report for gathering information relating to shared_pool_size. As you can see, the data dictionary hit ratio is above 95 percent and the library cache miss ratio is very low. However, we see over 125,000 reloads in the SQL AREA namespace, so we may want to increase shared_pool_size. When running this type of report, always remember that the statistics are gathered from instance startup, so the numbers may be skewed. For example, on a system that has been running for six months, the data dictionary hit ratio will be a running average over six months. Consequently, data from the V$ structures is meaningless if you want to measure today's statistics.
Some DBAs will run utlbstat.sql, wait one hour, and then run utlestat.sql; the resulting report shows the statistics over the elapsed time interval. Listing 2.4, by contrast, was generated directly from the V$ views by the script in Listing 2.5.
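A typical bstat/estat run looks something like this. This is a sketch only; the scripts connect with DBA privileges, and the exact path and invocation vary by release.

REM Sketch of a bstat/estat measurement interval
REM (the "?" expands to ORACLE_HOME in SQL*Plus)
@?/rdbms/admin/utlbstat
REM ... wait for the measurement period (one hour in this example) ...
@?/rdbms/admin/utlestat
REM utlestat.sql writes its interval report to report.txt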
Listing 2.4 The generated report showing statistics.
=========================
DATA DICT HIT RATIO
=========================
(should be higher than 90 else increase shared_pool_size in init.ora)

Data Dict. Gets Data Dict. cache misses DATA DICT CACHE HIT RATIO
--------------- ----------------------- -------------------------
     41,750,549                 407,609                        99

=========================
LIBRARY CACHE MISS RATIO
=========================
(If > 1 then increase the shared_pool_size in init.ora)

  executions Cache misses while executing LIBRARY CACHE MISS RATIO
------------ ---------------------------- ------------------------
  22,909,643                      171,127                    .0075

=========================
LIBRARY CACHE SECTION
=========================
hit ratio should be > 70, and pin ratio > 70 ...

NAMESPACE         Hit ratio pin hit ratio      reloads
--------------- ----------- ------------- ------------
SQL AREA                 84            94      125,885
TABLE/PROCEDURE          98            99       43,559
BODY                     98            84          486
TRIGGER                  98            97        1,145
INDEX                     0             0
CLUSTER                  31            33
OBJECT                  100           100
PIPE                     99            99           52
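As a cross-check on the figures in the report, the data dictionary hit ratio is trunc((1 - 407,609 / 41,750,549) * 100) = 99, and the library cache miss ratio is 171,127 / 22,909,643, or roughly .0075, both well within the thresholds printed in the report headings.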
Let's take a look at the SQL*Plus script (Listing 2.5) that generated Listing 2.4.
Listing 2.5 The script that generated Listing 2.4.
PROMPT
PROMPT
PROMPT =========================
PROMPT DATA DICT HIT RATIO
PROMPT =========================
PROMPT (should be higher than 90 else increase shared_pool_size in init.ora)
PROMPT

COLUMN "Data Dict. Gets"           FORMAT 999,999,999
COLUMN "Data Dict. cache misses"   FORMAT 999,999,999

SELECT sum(gets)                                  "Data Dict. Gets",
       sum(getmisses)                             "Data Dict. cache misses",
       trunc((1-(sum(getmisses)/sum(gets)))*100)  "DATA DICT CACHE HIT RATIO"
FROM   v$rowcache;

PROMPT
PROMPT =========================
PROMPT LIBRARY CACHE MISS RATIO
PROMPT =========================
PROMPT (If > 1 then increase the shared_pool_size in init.ora)
PROMPT

COLUMN "LIBRARY CACHE MISS RATIO"       FORMAT 99.9999
COLUMN "executions"                     FORMAT 999,999,999
COLUMN "Cache misses while executing"   FORMAT 999,999,999

SELECT sum(pins)                 "executions",
       sum(reloads)              "Cache misses while executing",
       (sum(reloads)/sum(pins))  "LIBRARY CACHE MISS RATIO"
FROM   v$librarycache;

PROMPT
PROMPT =========================
PROMPT LIBRARY CACHE SECTION
PROMPT =========================
PROMPT hit ratio should be > 70, and pin ratio > 70 ...
PROMPT

COLUMN "reloads" FORMAT 999,999,999

SELECT namespace,
       trunc(gethitratio * 100)  "Hit ratio",
       trunc(pinhitratio * 100)  "pin hit ratio",
       reloads                   "reloads"
FROM   v$librarycache;
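To capture the output in a file, the script can simply be spooled from SQL*Plus; the file name below is purely illustrative.

SET PAGESIZE 60 LINESIZE 100
SPOOL pool_report
REM pool_report.sql is a hypothetical file name for the script in Listing 2.5
@pool_report.sql
SPOOL OFF
REM the report is written to pool_report.lst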
Just as the wisdom of the 1980s dictated that data should be centralized, the 1990s have begun an era in which SQL is also centralized and managed. With the centralization of SQL, many previously impossible code-management tasks have become trivial.
As memory becomes less expensive, it will eventually become desirable to have all of an application's SQL and code loaded into the Oracle library cache, where the code will be quickly available for execution by any external application, regardless of its platform or host language. The most compelling reasons for putting all SQL within packages are portability and code management. If all applications become SQL-less, with calls to stored procedures, then entire applications can be ported to other platforms without touching a single line of application code.
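As a simple illustration of this approach (the package, procedure, table, and column names here are hypothetical), an application that currently embeds a SELECT against a customer table would instead call a packaged procedure:

REM Hypothetical example: the SQL lives in the package, not the application
CREATE OR REPLACE PACKAGE cust_pkg AS
   PROCEDURE get_cust_name (p_cust_id IN NUMBER, p_cust_name OUT VARCHAR2);
END cust_pkg;
/

CREATE OR REPLACE PACKAGE BODY cust_pkg AS
   PROCEDURE get_cust_name (p_cust_id IN NUMBER, p_cust_name OUT VARCHAR2) IS
   BEGIN
      SELECT cust_name
      INTO   p_cust_name
      FROM   customer
      WHERE  cust_id = p_cust_id;
   END get_cust_name;
END cust_pkg;
/

The calling program now issues nothing but a procedure call, so the SQL can be tuned, or the underlying table restructured, without touching or recompiling the application itself.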