To optimize your application server's performance, you need to minimize locking overhead while maintaining the integrity of your data. Concentrate on three areas:
There are a number of techniques to help you with each area.
Lock contention occurs when an agent tries to lock an object that is already locked in a conflicting mode by another agent. The more locks the database manager uses and the longer each lock lasts, the greater the probability that contention will occur.
To understand when locks come into contention, you must first understand:
The database manager locks objects in the database according to a hierarchy.
The database manager grants locks in the order of the hierarchy. Thus, an agent accessing a row must first obtain a lock on the dbspace, the table, and the page that contain the row before it can obtain a lock on the row itself. (After the first row is locked, it is not necessary to get the DBSPACE and table locks again, since they are held until the end of the LUW.)
There are eight types of locks, or lock modes: IN, IS, IX, S, U, SIX, X, and Z (see Table 4).
Locks can be held and released almost instantly or held until the end of the current logical unit of work. The lock duration depends on the lock mode, the type of internal data manipulation call, and the isolation level (refer to Isolation Level). A detailed table including the relationships between all of these is included in the DB2 Server for VSE & VM Diagnosis Guide and Reference manual.
The main purpose of having different lock modes is to define which requests to access a certain object are compatible with other requests and which are incompatible. The matrix in Table 4 indicates which lock modes are compatible with each other. Yes means the requested lock is compatible with the held lock (and therefore is granted). No means the request is denied or the requesting agent is put in a LOCK WAIT. Either way, lock contention occurs.
Table 4. Compatibility of Lock Modes
Columns give the mode of the lock request; rows give the mode of the lock already held.

| MODE OF LOCK HELD | IN | IS | IX | S | U | SIX | X | Z |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
IN | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No |
IS | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
IX | Yes | Yes | Yes | No | No | No | No | No |
S | Yes | Yes | No | Yes | Yes | No | No | No |
U | Yes | Yes | No | Yes | No | No | No | No |
SIX | Yes | Yes | No | No | No | No | No | No |
X | Yes | No | No | No | No | No | No | No |
Z | No | No | No | No | No | No | No | No |
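To illustrate the matrix, consider two agents working on the same table. This sketch assumes the LOCK TABLE statement is available on your application server; the table name is borrowed from the deadlock example later in this chapter.

```sql
-- Agent 1: explicitly acquires a table-level S (share) lock.
LOCK TABLE SQLDBA.EMPLOYEE IN SHARE MODE;

-- Agent 2: EXCLUSIVE MODE asks for an X lock on the same table.
-- Per Table 4, a requested X is incompatible with a held S, so this
-- request is denied or placed in LOCK WAIT until agent 1's logical
-- unit of work commits or rolls back.
LOCK TABLE SQLDBA.EMPLOYEE IN EXCLUSIVE MODE;
```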
More than one user can have concurrent access to a private dbspace, but for read operations only. That is, multiple users can hold a shared lock on the dbspace.
The amount of locking contention is directly related to the number of concurrent users allowed to access the application server (NCUSERS). While you could reduce locking contention by reducing NCUSERS, this would degrade your overall response time; refer to Tuning Parameters (NCUSERS). You need to find a balance between having users wait for a lock and having them wait for access to the server.
The smallest possible lock level is a single row in a table. However, you can increase the minimum lock level in a dbspace to be a single page or the dbspace itself. Using a larger minimum lock level reduces the number of locks required, which reduces locking overhead but may increase locking contention. (The default lock level is a single page.)
You can define a larger lock level with the ACQUIRE DBSPACE statement or change an existing size with ALTER DBSPACE.
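A sketch of both statements follows. The dbspace name and page count are illustrative, and the parenthesized parameter syntax should be verified against your SQL Reference.

```sql
-- Acquire a new public dbspace with ROW as its minimum lock level.
ACQUIRE PUBLIC DBSPACE NAMED ORDERSPACE (PAGES=1024, LOCK=ROW);

-- Later, change the minimum lock level of the existing dbspace to
-- DBSPACE, e.g. for a read-only application using dbspace scans.
ALTER DBSPACE ORDERSPACE (LOCK=DBSPACE);
```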
The default lock level (PAGE) should be appropriate for most applications. However, you may consider using the DBSPACE lock level if your application is read-only and accesses the data primarily through dbspace scans.
Only consider ROW level locking for applications that access small answer sets through index scans. Also, this level should not be used with an application performing a dbspace scan using cursor stability.
You can find the minimum lock size for a given dbspace in the SYSTEM.SYSDBSPACES catalog table in the LOCKMODE column. Refer to SYSTEM.SYSDBSPACES.
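For example, a query along these lines lists each dbspace with its minimum lock level. LOCKMODE is cited in the text; the DBSPACENAME column is assumed here, so verify it against the catalog description.

```sql
-- List the minimum lock level (LOCKMODE) of each dbspace.
SELECT DBSPACENAME, LOCKMODE
  FROM SYSTEM.SYSDBSPACES;
```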
Remember that index pages and key values are locked for any SQL statement that uses the index or whenever the table that they are indexing is updated. If your indexes are not critical to fast access to your data, you can reduce lock contention by reducing the number of indexes defined on your data. In other words, do not create unnecessary indexes.
If you are using row level locking, make sure that your tables have a unique index. Such tables perform better, with respect to locking contention, because the database manager can determine the exact row it needs to lock; without a unique index, it may unnecessarily lock several rows at once. If you cannot create a unique index on a single column, create a unique multicolumn index. For example, if you want to sort the EMPLOYEE table by the JOB column, create an index on JOB and EMPNO (JOB,EMPNO). The index still sorts by JOB, but it can also be a UNIQUE index.
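The multicolumn example above might be written as follows (the index name is illustrative):

```sql
-- Ordering is still by JOB, but appending EMPNO (itself unique)
-- makes the whole key unique, so the database manager can lock
-- exactly the row it needs.
CREATE UNIQUE INDEX XJOBEMP ON EMPLOYEE (JOB, EMPNO);
```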
You can usually reduce the number of locks required for a particular SQL statement by ensuring that it is accessing the data as efficiently as possible. For information on access path selection, refer to Chapter 5, Improving Data Access Performance.
Since a lock will never last longer than a logical unit of work (refer to Logical Units of Work), it is critical that you make your LUWs as short as possible. Commit work frequently, even if you are only reading tables.
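As a sketch, even a read-only report should end each LUW promptly so that the locks it holds are released (table and column names follow the EMPLOYEE examples used elsewhere in this chapter):

```sql
-- End the LUW after each logical step, even for pure reads.
SELECT LASTNAME, SALARY FROM SQLDBA.EMPLOYEE WHERE JOB = 'CLERK';
COMMIT WORK;

SELECT LASTNAME, SALARY FROM SQLDBA.EMPLOYEE WHERE JOB = 'MANAGER';
COMMIT WORK;
```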
Do not use ROLLBACK WORK to release locks. While rolling back a LUW also releases locks, a rollback involves more work and processor time than a commit and should only be used when you want to undo updates, inserts, or deletions.
You can set the isolation level for a particular application program during preprocessing. It represents the degree of independence that the application program will have from other programs. A lower isolation level maximizes concurrency and performance but increases the risk of inconsistent data appearing in applications. There are three isolation levels: repeatable read, cursor stability, and uncommitted read.
A repeatable read application program locks every object it accesses until the end of the current logical unit of work. It guarantees that within a logical unit of work it can repeatedly read the same row of data without having it changed by some other user. With repeatable read, a user is completely isolated from interference by other applications. Other users must wait until your logical unit of work is complete before they can modify the data you were using.
A cursor stability application program only locks an object for as long as it is directly accessing it. This allows more than one user to work on the same data at the same time. It is possible to issue the same query twice within a logical unit of work and get different results. That is, rows in a table, or pages in a DBSPACE, that you have already read are subject to change by other users. It also means that the data may appear "inconsistent". If this is not a problem, and will not affect the integrity of your application program, seriously consider using cursor stability. It can significantly reduce the locking contention in your system.
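The non-repeatable read described above can be sketched as follows (names borrowed from this chapter's EMPLOYEE examples):

```sql
-- Both queries run in one LUW under cursor stability, but the lock
-- from the first SELECT is released as soon as the row is no longer
-- directly accessed, so another user may change HAAS's salary and
-- commit between the two reads.
SELECT SALARY FROM SQLDBA.EMPLOYEE WHERE LASTNAME = 'HAAS';
-- ... another user's UPDATE and COMMIT WORK may occur here ...
SELECT SALARY FROM SQLDBA.EMPLOYEE WHERE LASTNAME = 'HAAS';
```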
Cursor stability only applies to tables in PUBLIC DBSPACEs with PAGE or ROW level locking. An application program that accesses tables in PRIVATE DBSPACEs, or in PUBLIC DBSPACEs with DBSPACE level locking, always acts like a repeatable read program.
Note: | When the database manager uses a DBSPACE scan (does not use an index) to access a table in a DBSPACE with ROW level locking using isolation level cursor stability, the effect is similar to repeatable read: no other logical unit of work can update the table until the logical unit of work performing the DBSPACE scan ends. Also, if one logical unit of work has updated a table, another logical unit of work (using cursor stability) cannot access that table with a DBSPACE scan until the updating logical unit of work ends. This reduced concurrency for DBSPACE scans does not apply for tables in DBSPACEs with PAGE level locking, or when accessing through indexes. |
Many uncommitted read (UR) application programs can query the same data simultaneously while the data is being updated by another application. This isolation level prevents read-only applications from waiting on applications that have changed or may change the data about to be read. Uncommitted read provides the lowest degree of isolation and hence greater concurrency and throughput.
Since isolation level UR gives applications the ability to read data that is not necessarily committed, data can appear to be inconsistent. For example, it is possible for you to issue the same query twice within a logical unit of work and get different results. You must be very careful when deciding to use uncommitted read for your applications. Only choose it for an application if it is not important that the data read is necessarily committed.
Note: | Uncommitted read applies only to tables in PUBLIC DBSPACEs with page or row level locking. Tables in PRIVATE DBSPACEs or PUBLIC DBSPACEs with DBSPACE level locking always have the repeatable read isolation level. |
Isolation level uncommitted read (UR) is defined as follows:
While you normally set the isolation level for a program when you preprocess it, you may allow the application program to dynamically set its isolation levels during execution. For more information on this, refer to the USER isolation level in the DB2 Server for VSE & VM Application Programming manual.
Note: | The isolation level does not affect the duration of the locks held on data that have been inserted, deleted, or updated in an LUW. Locks on this data are always held until the end of the LUW, regardless of the isolation level. |
We recommend that you use cursor stability whenever possible because it reduces the duration of locks for the application program that uses it. For even further reductions in locking and lock durations, you may consider using uncommitted read, but only for applications where it does not matter whether the data read has been committed. The effects of cursor stability and uncommitted read can be very subtle. For guidelines on selecting an isolation level in application programs, see the DB2 Server for VSE & VM Application Programming manual. For guidelines that apply to the DBS utility, see the DB2 Server for VSE & VM Database Services Utility manual. For ISQL guidelines, see the DB2 Server for VSE & VM Interactive SQL Guide and Reference manual.
Catalog tables can be exclusively locked by:
Try to avoid any or all of these during peak load periods and try not to include them in your application programs. If you cannot avoid them, at least COMMIT WORK after each statement.
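For example, if an application must create a table and an index (both of which update the catalog), commit after each statement. This is a hedged sketch; the table, index, and dbspace names are illustrative.

```sql
-- Each catalog-updating statement is followed immediately by
-- COMMIT WORK, so exclusive catalog locks are held only briefly.
CREATE TABLE PROJECTS
  (PROJNO CHAR(6) NOT NULL, PROJNAME VARCHAR(24))
  IN ORDERSPACE;
COMMIT WORK;

CREATE INDEX XPROJNO ON PROJECTS (PROJNO);
COMMIT WORK;
```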
To display locking contention as it occurs, use the SHOW LOCK operator commands. They can help you identify agents that are locking other agents out of critical data and solve immediate locking problems.
To test the frequency of lock contentions after they occur, use the COUNTER operator command. Specify the WAITLOCK counter to get the number of lock requests that resulted in a wait.
The database manager uses internal control blocks called lock request blocks (LRBs) to manage locking. Each time a lock is acquired one or more LRBs are used. The number of LRBs that can be held by any given agent is defined by the initialization parameter NLRBU. The sum of the number of LRBs held by all agents cannot exceed the limit defined by the NLRBS initialization parameter. When either of these limits is reached, lock escalation is initiated for the agent that caused the limit to be exceeded.
Lock escalation is the act of trading low level locks (page, row, table, index page, or key value locks) for the appropriate DBSPACE lock for one of the DBSPACEs in which the victim agent holds locks. The DBSPACE chosen is the one in which the agent holds the most locks.
Note: | The lock manager is selective about the locks it escalates. A request for data in DBSPACE X does not necessarily cause escalation to go after a lock on DBSPACE X. |
The lock manager requests a lock on the chosen DBSPACE. The lock mode requested is the same as the most restrictive lock that the agent holds in the DBSPACE. For example, if the agent holds any Z locks, a Z lock is requested. The next choice would be an X lock, followed by an S lock. If the DBSPACE lock cannot be granted, the system checks for a possible deadlock. If no deadlock is found, the DBSPACE lock request is queued. After the DBSPACE lock is granted, the lower level locks are freed.
As the user resumes access to the DBSPACE (which is now locked at the DBSPACE level), lower level locks are not required and are not obtained. Thus, for any given LUW, the user can escalate only once on a particular DBSPACE. Put another way, the maximum number of times an LUW can escalate is the number of DBSPACEs accessed during that LUW.
Tuning Parameters (NLRBU, NLRBS): If an application program is causing too many lock escalations, consider the following alternatives:
Note: | Using a larger minimum lock level can increase locking contention. Refer to Minimum Lock Level. |
If you cannot reduce the number of lock escalations, you may need to increase the number of available lock request blocks by increasing the NLRBU and NLRBS initialization parameters.
To establish the lock request block requirements for running a DB2 Server for VSE & VM preprocessor, or for an application that is causing escalation problems:
Performance Indicators (COUNTER, SHOW LOCK MATRIX): To test the frequency of lock escalations, use the COUNTER operator command, refer to COUNTER Operator Command. Specify both the ESCALATE and the LOCKLMT counters to get the number of successful escalations and the number of unsuccessful escalation attempts respectively. (An escalation can fail if the LUW that reached the lock limit is rolled back because of a deadlock, or if a sufficient number of lock request blocks cannot be freed.)
Note: | ESCALATE and LOCKLMT may increase during preprocessing, because locks are required then as well. |
You can also use the SHOW LOCK MATRIX operator command, which displays information about lock request block usage; refer to Lock Escalation. Use it to determine whether unexpected delays are caused by locking, to monitor how the database manager is using lock request blocks, and to determine the lock request blocks required to preprocess a single application.
One of the values displayed by SHOW LOCK MATRIX is called MAX USED BY LUW. It is the maximum number of lock request blocks used by any one application during a logical unit of work. (When any LUW exceeds NLRBU and the escalation process occurs, MAX USED BY LUW is set to zero.)
In addition, you can look for SQLCODE -912 (SQLSTATE 57028) or SQLCODE -915 (SQLSTATE 57029). These indicate rollbacks that occur because of insufficient lock request blocks for the database manager or insufficient lock request blocks for a user application, respectively.
The database manager performs deadlock detection prior to placing any agent into a lock wait. A deadlock occurs when agent A holds resource X and agent B wants resource X while holding resource Y, which agent A wants. There is an impasse, which the system removes by rolling back the youngest LUW. For example, consider two users, LAWRENCE and VERONICA:
```
UPDATE SQLDBA.EMPLOYEE SET SALARY=60000 WHERE LASTNAME='HAAS'
ARI7955I THE SYSTEM ENDED YOUR QUERY RESULT TO PROCESS YOUR COMMAND.
ARI0503E AN SQL ERROR HAS OCCURRED.
         THE CURRENT LOGICAL UNIT OF WORK HAS BEEN
         ROLLED BACK DUE TO A DEADLOCK. IT WAS WAITING
         FOR A PAGE LOCK IN DBSPACE = 17
         HELD BY USER LAWRENCE.
ARI0505I SQLCODE = -911 ROWCOUNT = 0
ARI0504I SQLERRP: ARIXRSS SQLERRD1: -110 SQLERRD2: -99
ARI0502I FOLLOWING SQL WARNING CONDITIONS ENCOUNTERED:
         NULLWHERE NOLUW
ARI7021E THE APPLICATION SERVER HAS ISSUED A ROLLBACK
         STATEMENT. ALL WORK ENTERED FOR PROCESSING SINCE
         THE LAST COMMIT STATEMENT WAS ROLLED BACK.
         YOU MAY HAVE TO REENTER SOME STATEMENTS.
```
While the database manager does not allow deadlocks to occur, the more potential deadlock situations that you create the more resources are required to avoid them.
Note: | The time required to detect potential deadlocks increases with the square of the number of real agent structures in your database manager. For example, it takes 100 times longer to process deadlocks when NCUSERS=20 than it does when NCUSERS=2. |
Application Design: Look for two applications that access the same data in the opposite order. If you can, switch the order of access for one application so they both use the same order.
Reschedule Applications: If you find that two applications often create deadlocks, try to reschedule them to run at different times of the day.
Reduce Lock Contention: The other way to reduce potential deadlocks is simply to reduce the number and duration of locks that your database manager needs to use; refer to Locking Contention.
Reduce Lock Escalation: Escalation can also cause deadlocks. For example, suppose two users are updating tables in a dbspace. When the lock size is escalated to a dbspace level, both users can be locked out, with each waiting for the other to complete a logical unit of work. Refer to Lock Escalation.
To determine if deadlocks are a problem, look for users receiving SQLCODE -911 (SQLSTATE 40001, rollback due to deadlock).
Note: | This message may also be received during preprocessing, as the locks are required then as well. |
To test the frequency of deadlocks, use the COUNTER operator command and specify the DEADLOCK counter. It displays the number of deadlocks detected, each of which causes a rollback.