Oracle7 Server Distributed Systems Volume I: Distributed Data
A Case Study
This case study illustrates:
- the definition of a session tree
- how a commit point site is determined
- when prepare messages are sent
- when a transaction actually commits
- what information is stored locally about the transaction
The Scenario
A company has separate Oracle7 servers, SALES.ACME.COM and WAREHOUSE.ACME.COM. As sales records are inserted into the SALES database, associated records are updated in the WAREHOUSE database.
The Process
The following steps are carried out during a distributed update transaction that enters a sales order:
1. An application issues SQL statements.
At the Sales department, a salesperson uses a database application to enter, then commit a sales order. The application issues a number of SQL statements to enter the order into the SALES database and update the inventory in the WAREHOUSE database.
These SQL statements are all part of a single distributed transaction, guaranteeing that all issued SQL statements succeed or fail as a unit. This prevents the possibility of an order being placed without the inventory being updated to reflect it. In effect, the transaction guarantees the consistency of data in the
global database.
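The all-or-nothing behavior of the transaction can be sketched as follows. This is a minimal simulation, assuming in-memory structures stand in for the two databases (the names place_order, sales_orders, and warehouse_inventory are hypothetical illustrations, not Oracle APIs):

```python
# Stand-ins for the two databases in this example (hypothetical).
sales_orders = []
warehouse_inventory = {"widget": 10}

def place_order(item, qty):
    """Insert the order and update inventory as a unit: if either part
    cannot succeed, neither change is made."""
    if warehouse_inventory.get(item, 0) < qty:
        # Equivalent to rolling back the whole distributed transaction:
        # no order is recorded and no inventory is changed.
        raise ValueError("insufficient inventory")
    sales_orders.append((item, qty))
    warehouse_inventory[item] -= qty

place_order("widget", 3)  # order recorded and inventory updated together
```

A failed order leaves both structures untouched, mirroring how a rolled-back distributed transaction leaves every participating database unchanged.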
As each of the SQL statements in the transaction executes, the session tree is defined, as shown in Figure 5 - 3.
Figure 5 - 3. Defining the Session Tree
- An order entry application running with the SALES database initiates the transaction. Therefore, SALES.ACME.COM is the global coordinator for the distributed transaction.
- The order entry application inserts a new sales record into the SALES database and updates the inventory at the warehouse. Therefore, the nodes SALES.ACME.COM and WAREHOUSE.ACME.COM are both database servers. Furthermore, because SALES.ACME.COM updates the inventory, it is a client of WAREHOUSE.ACME.COM.
This completes the definition of the session tree for this
distributed transaction.
Remember that each node in the tree has acquired the necessary data locks to execute the SQL statements that reference local data. These locks remain even after the SQL statements have been executed until the prepare/commit phases are completed.
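The session tree defined above can be sketched as a simple data structure. This is an illustrative sketch only; Oracle7 tracks the session tree internally, and the Node class here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of the session tree (hypothetical model)."""
    name: str
    roles: set = field(default_factory=set)
    children: list = field(default_factory=list)

# WAREHOUSE only serves data requests, so it is a database server.
warehouse = Node("WAREHOUSE.ACME.COM", {"database server"})

# SALES initiates the transaction (global coordinator), modifies its
# own data (database server), and updates the inventory at WAREHOUSE
# (client of WAREHOUSE.ACME.COM).
sales = Node("SALES.ACME.COM",
             {"global coordinator", "database server", "client"},
             [warehouse])
```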
2. The application issues a COMMIT statement.
The final statement in the transaction that enters the sales order is now issued -- a COMMIT statement, which begins the prepare/commit phases, starting with the prepare phase.
3. The global coordinator determines the commit point site.
The commit point site is determined immediately following the COMMIT statement. SALES.ACME.COM, the global coordinator, is determined to be the commit point site, as shown in Figure 5 - 4.
See "Specifying the Commit Point Strength" for more information about how the commit point site is determined.
Figure 5 - 4. Determining the Commit Point Site
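The choice of commit point site is driven by each node's commit point strength. The following sketch assumes hypothetical strength values for the two servers; the real tie-breaking and propagation rules are handled internally by the server:

```python
def choose_commit_point_site(strengths):
    """Pick the directly referenced node with the highest commit point
    strength (simplified sketch; ties are broken arbitrarily here)."""
    return max(strengths, key=strengths.get)

# Hypothetical commit point strength values for this example:
strengths = {"SALES.ACME.COM": 100, "WAREHOUSE.ACME.COM": 50}
commit_point_site = choose_commit_point_site(strengths)  # "SALES.ACME.COM"
```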
4. The global coordinator sends the Prepare message.
After the commit point site is determined, the global coordinator sends the prepare message to all directly referenced nodes of the session tree, excluding the commit point site. In this example, WAREHOUSE.ACME.COM is the only node asked to prepare.
WAREHOUSE.ACME.COM tries to prepare. If a node can guarantee that it can commit the locally dependent part of the transaction and can record the commit information in its local redo log, the node can successfully prepare.
In this example, only WAREHOUSE.ACME.COM receives a prepare message because SALES.ACME.COM is the commit point site (which does not prepare). WAREHOUSE.ACME.COM responds to SALES.ACME.COM with a prepared message.
As each node prepares, it sends a message back to the node that asked it to prepare. Depending on the responses, two things
can happen:
- If any of the nodes asked to prepare respond with an abort message to the global coordinator, the global coordinator then tells all nodes to roll back the transaction, and the
process is completed.
- If all nodes asked to prepare respond with a prepared or a read-only message to the global coordinator (that is, they have all successfully prepared), the global coordinator asks the commit point site to commit the transaction.
Continuing with the example, Figure 5 - 5 illustrates the parts
of Step 4.
Figure 5 - 5. Sending and Acknowledging the PREPARE Message
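The prepare phase described in Step 4 can be sketched as a small simulation. The Participant class and message strings are hypothetical stand-ins; Oracle7's prepare messages and redo-log writes are internal to the server:

```python
class Participant:
    """A node in the session tree (hypothetical stub)."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # A node prepares successfully only if it can guarantee the
        # local commit and record that guarantee in its local redo log.
        if self.can_commit:
            self.state = "prepared"
            return "prepared"
        return "abort"

def prepare_phase(nodes, commit_point_site):
    """Ask every directly referenced node except the commit point site
    to prepare; succeed only if all answer prepared or read-only."""
    for node in nodes:
        if node is commit_point_site:
            continue  # the commit point site never prepares
        if node.prepare() not in ("prepared", "read-only"):
            return False  # coordinator will tell all nodes to roll back
    return True

sales = Participant("SALES.ACME.COM")
warehouse = Participant("WAREHOUSE.ACME.COM")
ok = prepare_phase([sales, warehouse], commit_point_site=sales)
# Only WAREHOUSE prepares; SALES, the commit point site, does not.
```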
5. The commit point site commits.
SALES.ACME.COM, receiving acknowledgement that WAREHOUSE.ACME.COM is prepared, instructs the commit point site (itself, in this example) to commit the transaction. The commit point site now commits the transaction locally and records this fact in its local redo log.
Even if WAREHOUSE.ACME.COM has not yet committed, the outcome of this transaction is determined: the transaction will be committed at all nodes, even if a node's ability to commit is delayed.
6. The commit point site informs the global coordinator of the commit.
The commit point site now tells the global coordinator that the transaction has committed. In this case, where the commit point site and global coordinator are the same node, no operation is required. The commit point site remembers it has committed the transaction until the global coordinator confirms that the transaction has been committed on all other nodes involved in the distributed transaction.
After the global coordinator has been informed of the commit at the commit point site, it tells all other directly referenced nodes to commit. In turn, any local coordinators instruct their servers to commit, and so on. Each node, including the global coordinator, commits the transaction and records appropriate redo log entries locally. As each node commits, the resource locks that were being held locally for that transaction are released.
Figure 5 - 6 illustrates Step 6 in this example. SALES.ACME.COM, both the commit point site and the global coordinator, has already committed the transaction locally. SALES now instructs WAREHOUSE.ACME.COM to commit the transaction.
Figure 5 - 6. The Global Coordinator and Other Servers Commit the Transaction
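The commit ordering in Steps 5 and 6 can be sketched as follows. This is a simplified model with hypothetical Node stubs; real servers also write redo-log entries and release locks as part of committing:

```python
class Node:
    """A prepared participant (hypothetical stub)."""
    def __init__(self, name):
        self.name = name
        self.committed = False
        self.locks_held = True  # data locks acquired during the transaction

    def commit(self):
        self.committed = True    # record the commit in the local redo log
        self.locks_held = False  # release the locks held for this transaction

def commit_phase(commit_point_site, other_nodes):
    """The commit point site commits first; once its redo log records
    the commit, the outcome is decided and the coordinator tells the
    remaining nodes to commit."""
    commit_point_site.commit()
    for node in other_nodes:
        node.commit()

sales = Node("SALES.ACME.COM")
warehouse = Node("WAREHOUSE.ACME.COM")
commit_phase(sales, [warehouse])
```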
7. The global coordinator and commit point site complete the commit.
After all referenced nodes and the global coordinator have committed the transaction, the global coordinator informs the commit point site.
The commit point site, which has been waiting for this message, erases the status information about this distributed transaction and informs the global coordinator that it is finished. In other words, the commit point site forgets about committing the distributed transaction. This is acceptable because all nodes involved in the prepare/commit phases have committed the transaction successfully, and they will never have to determine its status in
the future.
After the commit point site informs the global coordinator that it has forgotten about the transaction, the global coordinator finalizes the transaction by forgetting about the transaction itself.
This completes the COMMIT phase and thus completes the distributed update transaction.
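The final forget step can be sketched with hypothetical pending-transaction tables standing in for the status information each node keeps:

```python
# Hypothetical status tables: each side remembers the distributed
# transaction until the forget handshake completes.
pending_at = {
    "commit_point_site": {"txn-1"},
    "global_coordinator": {"txn-1"},
}

def complete_commit(txn):
    """All nodes have committed, so no node will ever need to ask
    about this transaction's status again."""
    # The commit point site erases its status information first...
    pending_at["commit_point_site"].discard(txn)
    # ...then informs the global coordinator, which forgets it as well.
    pending_at["global_coordinator"].discard(txn)

complete_commit("txn-1")
```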
All of the steps described above are accomplished automatically and in a fraction of a second.