Many tools are available for performance analysis. Reports can be run against the database logs to produce statistics about every task in the database. The SQL trace utility can be turned on for a few minutes each day to identify inefficient SQL statements. Operating system reports can be generated containing information on disk contention and I/O bottlenecks. Performance monitors can also be used to detect database deadlocks and incomplete transactions.
Scheduling can be performed to run the reports at specific time intervals. Unix utilities such as cron can be used for time-dependent scheduling, but it is convenient to have a gateway into the scheduler so that exception-based reporting can be conducted as well. For example, if a Unix monitor detects excessive hardware contention on a disk, it could trigger a user exit that starts an SQL trace to gather detailed information.
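As a sketch of the time-dependent side, a cron table might launch the reports on a fixed schedule (the script names and paths here are hypothetical, not part of any particular product):

```
# Hypothetical crontab entries for report scheduling.
# Run the nightly statistics report at 1:00 a.m. every day:
0 1 * * * /usr/local/dba/run_stats_report.sh
# Run the weekly exception report every Monday at 6:00 a.m.:
0 6 * * 1 /usr/local/dba/run_exception_report.sh
```

The exception-based path described above would bypass this fixed schedule, with the monitor's user exit invoking the trace script directly when it detects a problem.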
Most database reports contain so much irrelevant information that a simple program written in C or even COBOL can be used to read the report output, storing the critical information in a table. This table feeds the problem-processing system, enabling the creation of exception reports and trend-analysis reports. While this may sound complex, the data from the knowledge system can be fed directly into a canned statistical package such as SAS or SPSS to produce trend reports.
Table 7.1 shows a typical weekly exception report. All tasks for the day are analyzed and compared with a running average of their historical response time, shown in the right-hand column. Any day on which a task's response time exceeds its historical average by more than 15 percent is flagged as an exception.
Table 7.1 Weekly exception report (columns 94306 through 94312 give each task's response time for each day of the week).

| Task Name | 94306 | 94307 | 94308 | 94309 | 94310 | 94311 | 94312 | Times Invoked | Historical Response |
|---|---|---|---|---|---|---|---|---|---|
| PC01 | .55 | .54 | .68 | .56 | .54 | .57 | .00 | 21932 | .55 |
| PC02 | .05 | .04 | .08 | .06 | .04 | .07 | .00 | 3444 | .01 |
| PC03 | .22 | .23 | .40 | .21 | .23 | .21 | .22 | 129342 | .24 |
| PC04 | .05 | .08 | .09 | .02 | .04 | .01 | .00 | 3444 | .01 |
| PC05 | .35 | .33 | .42 | .33 | .32 | .40 | .31 | 3444 | .31 |
This type of report can provide very useful information about the overall operation of the distributed databases. Note, for example, the performance degradation on Wednesday (94308). We see that all of the tasks in the system were running above their historical averages on that day, so we can assume that some external influence may have caused a system-wide response problem.
Other common reports can compare buffer utilization across all of the distributed databases, using the ratio of blocks_requested to blocks_in_buffer. This provides a simple method of monitoring changes in buffer utilization over time.
DBAs who have implemented this type of system find that it removes the need for the tedious report analysis they once did by hand, providing a base for more sophisticated performance and tuning automation. For specific information about designing an Oracle system for performance monitoring, refer to Chapter 11, Oracle Application Monitoring.
The placement of the database files on the DASD (disk) devices can have a significant impact on the system performance. On most disk devices, the volume table of contents (VTOC) can be placed anywhere on the disk device. While the VTOC placement is irrelevant if the database is not used for local mode processing (the database simply locates it once at startup time), the VTOC placement can become critical for local mode processing. A VTOC must be accessed each time a file is requested from disk storage, making it the most frequently accessed file on the disk. (See Figure 7.9.)
Figure 7.9 Proper placement of Oracle data files.
If one assumes an equal probability of the read-write heads being under any given track, it makes sense that all frequently accessed files (especially the VTOC) should be placed near the middle track on the disk. Since seek time (the time it takes for the read-write head to relocate) is the greatest component of disk delay, this strategy will minimize overall disk access time.
Clearly, it follows that because I/O is the greatest contributor to database performance problems, and seek time is the greatest component of disk response, whatever can be done to reduce seek delay will result in a performance improvement for the overall database.
With all of the recent advances in symmetric multiprocessing (SMP), it is interesting to note that many companies are investigating tools that allow processors to be added and removed from a system on an as-needed basis. Load sharing, the ability to dynamically add and remove processors, is made possible by shared-memory cluster technology. By clustering multiple SMP processors, system administrators can easily scale up a specific application, providing additional processing power as demand requires.
The system administrators can link and unlink clusters of SMP hosts as they see fit, achieving the proper balance of processing power for an application. This new technology also relieves system planners from the chore of predetermining the full amount of CPU that an application will require. With multiple SMP clusters, the system can start small and new processors can be added as demands on the system grow.