Hello everyone, today we are going to talk about latches, a type of low-level lock that Oracle uses to protect structures in memory.
What are Latches?
Latches provide a low-level serialization mechanism that protects shared data structures in the SGA. A latch is a type of lock that can be acquired and released very quickly. Latches are typically used to prevent more than one process from executing the same piece of code at any given time, and they are designed so that holding one cannot lead to deadlock. Associated with each latch is a cleanup procedure that is invoked if a process dies while holding it; this cleanup is performed using the services of PMON. The underlying implementation of latches depends on the operating system, particularly as to whether a process will wait for a latch and for how long. Some examples of latches are the following: buffer cache latches, library cache latches, shared pool latches, redo allocation latches, archive control latches and redo log buffer latches.
When are latches obtained?
A process acquires a latch when it works with a memory area in the SGA (System Global Area), holds the latch for as long as it operates on that area, and releases it when it finishes. Each latch protects a different set of data, identified by the latch name. The purpose of latches is to serialize access to shared data structures so that only one process can work on a structure at a time. Blocked processes (processes waiting to execute a piece of code for which another process already holds the latch) wait until the latch is released. Oracle uses atomic instructions such as test-and-set to operate on latches. Since setting and releasing a latch is a single atomic instruction, the hardware guarantees that only one process gets it, and the operation is very fast.
What are the possible blocking request modes?
Blocking requests can be made in two ways:
Willing to wait: a “willing to wait” request will spin, wait and re-request until the latch is obtained. Examples of “willing to wait” latches are the library cache and shared pool latches.
No wait: in “no wait” mode, the process requests the latch and, if it is not available, instead of waiting it requests an equivalent latch. Only when no equivalent latch is available does the server process have to wait. An example of a “no wait” latch is the redo copy latch.
Spin count: the spin count controls how many times the process retries the latch before backing off and going to sleep. While spinning, the process sits in a tight CPU loop.
What causes latch contention?
If a process tries to acquire a latch that is busy, it spins, retrying in a tight CPU loop; if the latch is still busy after spinning, it goes to sleep and later wakes up to try again. The number of spin iterations is determined by the value of the hidden parameter _spin_count. The first time the process goes to sleep, it sleeps for one hundredth of a second, and the sleep time doubles on each subsequent timeout. As processes spin and time out, they consume additional CPU until the requested latch becomes available, so there is a performance penalty: we could say that CPU usage is proportional to the time the processes spend spinning. This is called latch contention and can seriously penalise the performance of the DB.