This SQL creates a result that shows the range of additional buffer blocks that may be added to the cache and the expected increase in cache hits:
SQL> @morebuff

Interval             Additional Cache Hits
-------------------- ---------------------
1 to 250                                60
251 to 500                              46
501 to 750                              52
751 to 1000                            162
1001 to 1250                           191
1251 to 1500                           232
1501 to 1750                           120
1751 to 2000                            95
2001 to 2250                            51
2251 to 2500                            37
2501 to 2750                            42
Here, we see that the additional cache hits peak at 232 in the 1,251-to-1,500 interval, followed by a decreasing marginal benefit from adding more buffers. This is very typical of online transaction processing databases that have common information that is frequently referenced by all end users.
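The morebuff.sql script itself is not reproduced in this section. What follows is a minimal sketch of it, modeled directly on the lessbuff.sql script shown later in this section and assuming Oracle's x$kcbrbh fixed table (the additional-cache-hits companion to x$kcbcbh; both are populated only when the LRU statistics collection parameters are enabled in init.ora):

REM morebuff.sql - sketch only: predicts gains from adding
REM db_block_buffer values, assuming x$kcbrbh is populated

SET LINESIZE 100;
SET PAGES 999;

COLUMN "Additional Cache Hits" FORMAT 999,999,999;
COLUMN "Interval"              FORMAT a20;

SELECT
   250*TRUNC(indx/250)+1
   ||' to '||250*(TRUNC(indx/250)+1)   "Interval",
   SUM(count)                          "Additional Cache Hits"
FROM
   x$kcbrbh
GROUP BY
   TRUNC(indx/250);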
The following sample is from a database that primarily performs reports that invoke full-table scans:
SQL> @morebuff

Interval             Additional Cache Hits
-------------------- ---------------------
1 to 250                                60
251 to 500                              46
501 to 750                              52
751 to 1000                             62
1001 to 1250                            51
1251 to 1500                            24
1501 to 1750                            28
1751 to 2000                            35
2001 to 2250                            31
2251 to 2500                            37
2501 to 2750                            42
Here we see no clear peak and no strong marginal trend with the addition of buffers. This is very typical of databases that read large tables front-to-back. Doing a full-table scan on a table that is larger than the buffer cache will cause the first table blocks to eventually page out as the last table rows are read. Consequently, we will see no specific optimal setting for the db_block_buffers parameter.
As a general rule, all available memory on the host should be put to use, and Oracle should be given db_block_buffers up to the point of diminishing returns. There is a point where the addition of buffer blocks will not significantly improve the buffer hit ratio, and these tools give the Oracle DBA the ability to find the optimal number of buffers.
The general rule is simple: As long as marginal gains can be achieved from adding buffers and you have the memory to spare, you should increase the value of db_block_buffers. Increases in buffer blocks increase the amount of RAM required for the database, and it is not always possible to hog all of the memory on a processor for the database management system. Therefore, the DBA will carefully review the amount of available memory and determine an optimal number of buffer blocks.
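In practice, the change is made by editing the db_block_buffers parameter in the init.ora file and restarting the instance. A hypothetical fragment, with values invented purely for illustration:

# init.ora fragment (values are hypothetical)
# Raised from 2500 after the morebuff report showed worthwhile
# marginal gains through roughly 1,500 additional buffers
db_block_buffers = 4000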
NOTE: If you overallocate SGA memory on a Unix system, then as new Oracle users sign on, the Unix kernel will begin to swap out chunks of active memory to accommodate them, causing a huge performance problem.
Today, many databases reside alone on a host. When this is the case, you can predict the amount of spare memory and run your Oracle SGA up to that amount. For example, assume that your host machine has 350 megabytes of memory. The Unix kernel consumes 50 megabytes, leaving 300 megabytes available for your Oracle database. We know that each online user will allocate a PGA when accessing the application, and the largest share of the PGA is determined by the value of the sort_area_size init.ora parameter. Therefore, assuming that we have a sort_area_size of 20 megabytes and 10 online users, about 200 megabytes of real memory must be reserved for end-user sessions, leaving 100 megabytes for the Oracle SGA.
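Before committing to a budget like this, it helps to confirm the current settings from SQL*Plus. Here is a minimal query against the standard v$parameter view (the parameter names are the classic init.ora names used throughout this chapter):

REM Confirm the memory-related parameters used in the budget above
SELECT
   name,
   value
FROM
   v$parameter
WHERE
   name IN ('db_block_buffers', 'db_block_size', 'sort_area_size');

The buffer cache portion of the SGA is approximately db_block_buffers multiplied by db_block_size, which makes it easy to check a proposed setting against the 100-megabyte budget.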
In many cases, we will see conditions where memory may be subtracted from the SGA without causing any serious performance hits. Oracle provides the x$kcbcbh table for this purpose, and we can query this table to predict the number of buffer misses that would occur if the SGA were decreased in size:
REM lessbuff.sql - predicts losses from subtracting db_block_buffer values

SET LINESIZE 100;
SET PAGES 999;

COLUMN "Additional Cache Misses" FORMAT 999,999,999;
COLUMN "Interval"                FORMAT a20;

SELECT
   250*TRUNC(indx/250)+1
   ||' To '||250*(TRUNC(indx/250)+1)   "Interval",
   SUM(count)                          "Additional Cache Misses"
FROM
   x$kcbcbh
WHERE
   indx > 0
GROUP BY
   TRUNC(indx/250);
Here is an example of how the output might appear:
SQL> @lessbuff

Interval             Additional Cache Misses
-------------------- -----------------------
1 To 250                           3,895,959
251 To 500                            35,317
501 To 750                            19,254
751 To 1000                           12,159
1001 To 1250                           9,853
1251 To 1500                           8,624
1501 To 1750                           7,035
1751 To 2000                           6,857
2001 To 2250                           6,308
2251 To 2500                           5,625
2501 To 2750                           5,516
2751 To 3000                           5,343
3001 To 3250                           5,230
3251 To 3500                           5,394
3501 To 3750                           4,965
We can clearly see that this database has some shared information: nearly 4 million of the current cache hits are serviced by the first 250 buffer blocks, and those hits would become misses if the buffers were removed. From 250 on up, we see a slowly decreasing trend, indicating that this application is doing some full-table scans or is not referencing a lot of common information.
For more sophisticated databases, we can control not only the number of buffer blocks, but also the block size for each buffer. For example, on an IBM mainframe, we might want to make the buffer blocks very large so that we can minimize I/O contention. An I/O for 32,000 bytes is not a great deal more expensive than an I/O for 12,000 bytes, so the database designer may choose to make the buffer blocks large to minimize I/O if the application clusters records on a database page. If a customer record is only 100 bytes, we will not gain by retrieving 32,000 bytes to get the 100 bytes that we need. However, if we cluster the orders physically near the customer (i.e., on the same database page), and if I/O usually proceeds from customer to order, we won't need further I/O to retrieve orders for the customer. They will already reside in the initial read of 32,000 bytes, as shown in Figure 3.4.
Figure 3.4 Using table clusters to reduce I/O.
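A minimal sketch of how such a customer-to-order cluster might be declared in Oracle follows; the cluster, table, and column names are hypothetical, and the SIZE value would be chosen to match the expected bytes needed for a customer row plus its order rows:

REM Hypothetical cluster that stores each customer's orders on the
REM same blocks as the customer row itself
CREATE CLUSTER cust_order_cluster
   (cust_id NUMBER)
   SIZE 4000;

REM The cluster index is required before any rows can be inserted
CREATE INDEX cust_order_idx
   ON CLUSTER cust_order_cluster;

CREATE TABLE customer
   (cust_id    NUMBER,
    cust_name  VARCHAR2(100))
   CLUSTER cust_order_cluster (cust_id);

CREATE TABLE orders
   (order_id   NUMBER,
    cust_id    NUMBER,
    amount     NUMBER)
   CLUSTER cust_order_cluster (cust_id);

With this layout, the read that fetches a customer block also brings that customer's order rows into the buffer cache, so the order lookup that typically follows requires no additional I/O.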