When I/O contention causes a performance problem, we have alternatives to disk striping. For very small, high-impact data tables, it is possible to make these tables reside entirely in RAM. Access times against RAM are on the order of 10,000 times faster than disk I/O, and this solution can often make a huge performance difference. Two approaches to caching are possible. The first is table caching, a hardware solution that uses extra RAM memory to hold the data table. The other uses software mechanisms to reserve buffer memory for the exclusive use of the data table. While not all databases support this feature, some allow the memory buffer to be fenced, or partitioned, for the exclusive use of specific tables.
Unfortunately, Oracle does not yet support the pinning of database blocks within its buffer cache. If it were possible to keep specific data blocks from swapping out, commonly used blocks such as common reference tables and the high-level nodes of indexes could be kept in memory. However, tricks can be used to simulate this type of buffer pinning. Read-only tablespaces, introduced in Oracle 7.2, allow for the creation of separate instances that concurrently access the same tablespace. For example, assume that your application has a common set of lookup tables, referenced by every user. These tables could be placed inside a separate instance, sized only for the lookup tables. Since the instance has its own private buffer pool, you can ensure that the reference tables will always reside in memory. This type of architecture is also beneficial for systems that must perform full-table scans, since it alleviates the buffer flushing that occurs when an online transaction system is slowed by a single task that must do a full-table scan on a large table. But what if the table is read both by online transaction processing and by the full-table scan request? Here, read-only tablespaces provide the answer.
With read-only tablespaces, the tablespace can be in update mode for the online transaction processing instance, while a separate instance handles read-only full-table scans (see Figure 3.5). The Oracle DBA should make every possible effort to identify read-only tables and isolate them into a read-only tablespace, since performance is dramatically faster in read-only processing mode.
Figure 3.5  Oracle's read-only tablespaces.
NOTE: For details on read-only tablespaces, refer to Chapter 6, Oracle DBA Performance And Tuning.
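As a minimal sketch, assuming the shared lookup tables live in a hypothetical tablespace named LOOKUP_TS, the DBA toggles the tablespace between update and read-only modes with standard ALTER TABLESPACE commands:

    -- Place the lookup tablespace in read-only mode so that a separate,
    -- query-only instance can scan it without contending for updates
    ALTER TABLESPACE lookup_ts READ ONLY;

    -- Return the tablespace to update mode (for example, during a
    -- nightly window when the lookup tables are refreshed)
    ALTER TABLESPACE lookup_ts READ WRITE;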
The ability to identify and correct performance problems has plagued distributed systems from their genesis. Even within the context of a single transaction, distributed query optimization can be a formidable challenge. On a single database, query tuning takes place by running an SQL EXPLAIN and performing the appropriate tuning. However, when a query is split across distributed databases, the overall query tuning becomes much more complex. Many distributed database managers take a distributed query and partition it into subqueries, which are then independently optimized and run (sometimes simultaneously) on the distributed databases. The query is considered complete when the last subquery has completed successfully and the results are returned to the user. This approach is sometimes called the weakest link architecture: If a distributed query partitions into four subqueries, for example, the longest-running of the four determines the overall performance of the entire query, regardless of how fast the other three execute.
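On a single Oracle database, the EXPLAIN PLAN syntax below is the standard starting point; the ACCOUNT table and the BRANCH_DB database link are hypothetical names, included only to show how a remote table reference appears inside a distributed query:

    -- Record the execution plan for a query that touches a remote table
    EXPLAIN PLAN
       SET STATEMENT_ID = 'dist_query'
       FOR
       SELECT account_id, balance
         FROM account@branch_db
        WHERE balance > 10000;

    -- Review the plan steps captured in PLAN_TABLE
    SELECT operation, options, object_name
      FROM plan_table
     WHERE statement_id = 'dist_query';

In the distributed case, each subquery shipped to a remote site must be explained and tuned on that site as well, which is what makes distributed tuning so much more labor intensive.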
Clearly, tuning a distributed query must take into account the load on the network, the physical location of each database, and the availability of multiple CPUs. Today, tools are available to perform load balancing, whereby a processor may borrow CPU cycles from another in order to balance the query and achieve maximum throughput.
It is very important for a distributed database to be able to address information regardless of the hardware platform or the architecture of the database, especially in volatile environments where hardware and network configurations may change frequently. Three types of interoperability come into play with distributed databases: database, hardware, and operating system factors.
Database interoperability refers to the ability of each database to function autonomously while allowing the distributed system to access many different types of databases within a unified environment. Tools such as UniFace and PowerBuilder attempt to serve this market, providing mechanisms for subtasking database queries and merging result sets automatically.
Hardware interoperability refers to the ability of the distributed system to address resources at many locations on an as-needed basis. At the hardware level, it is possible for a single subquery of a distributed query to run on numerous processors, and load balancing tools are available for assigning multiple processors to a single database.
In some cases, widely differing applications may access the same tables. An excellent example of this scenario is a banking application that processes fast online transactions during the day and is updated with long-running background tasks in the evening.
A fundamental difference exists between the database resources required for transaction processing and batch processing. Online transactions are usually small and require few resources from the database lock manager. Batch processes are generally lock intensive, sweeping a table in a linear fashion.
Oracle's buffer pool offers a finite amount of RAM storage within the region. This storage may be allocated to lock pools or buffer pools, but it is impossible to reallocate these resources as the applications change unless the system is brought down and then restarted with a new configuration (called bouncing). For online transaction systems with hundreds of concurrent users, the demands upon the database buffer pool are much more intensive than with system-wide updates. Conversely, batch updates make very little use of large buffers, but require a lot of room in the lock pools to hold row locks between commit checkpoints.
One simple solution to these application-specific requirements is to create two database configurations, each with a different allocation of buffers and lock pools. At the end of the online transaction day, the online system may be brought down and a batch version of the database may be started with a different memory configuration.
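A minimal sketch of this approach uses two Oracle parameter files; the file names and values below are illustrative assumptions, not recommendations:

    # initOLTP.ora -- daytime online transaction profile
    db_block_buffers = 20000   # large buffer cache for many concurrent users
    dml_locks        = 200     # online transactions hold few locks

    # initBATCH.ora -- evening batch update profile
    db_block_buffers = 2000    # linear table sweeps gain little from a big cache
    dml_locks        = 2000    # room for row locks between commit checkpoints

At the end of the online day, the DBA bounces the instance against the batch parameter file (SHUTDOWN, then STARTUP PFILE=initBATCH.ora from Server Manager) and reverses the process before the next business day.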