Modern Java Concurrency: Tools and Utilities That Matter
At its foundation, Java provides a straightforward concurrency API designed to manage threads, define tasks, and synchronize access to critical blocks or resources. Despite its simplicity, the API is flexible and powerful, offering support for almost any concurrent scenario.
Java has traditionally been, and continues to be, used to develop enterprise applications, backend services, and large-scale distributed systems. As application requirements continued to grow, Java evolved and introduced utilities and frameworks designed to support common concurrency use cases, allowing software engineers to concentrate on business logic rather than on the technical aspects of concurrency. Built around common and modern programming principles, these improvements enable engineers across different experience levels to work efficiently with concurrency.
Starting with Java 5, the platform introduced significant concurrency improvements and grouped them together in the java.util.concurrent package. Even though Java still mapped threads to operating system threads (also known as platform threads), the implementations use sophisticated low-level features that shift the approach from "thread is blocked" to "thread is waiting"; the new abstractions improve performance and make it more convenient to build complex concurrent flows.
Along with performance improvements and a convenient way to synchronize access to resources, these abstractions support more complex, configurable use cases: locking for read or write, controlling the number of concurrent accesses, managing threads, and thread-safe collection implementations (apart from the synchronized collection wrappers).
Non-Blocking Concurrency: Winning Without Waiting
Over the past few years, the software engineering industry has steadily shifted toward non-blocking operations. Modern web servers (such as Netty and Jetty) implement non-blocking I/O, and reactive frameworks with non-blocking web APIs (such as Spring WebFlux, Vert.x, and Quarkus Mutiny) adopted this principle and now deliver greater performance, scalability, and responsiveness.
Java adopted NIO (New Input/Output) and the non-blocking concurrency model across its concurrency tools, utilities, and frameworks. These improvements enable Java software engineers to build high-performance applications and give frameworks the foundation to create scalable and efficient platforms.
Moving away from traditional locking mechanisms, where threads transition between different states (ready, running, waiting), Java now relies on the CAS (compare-and-swap) algorithm: an atomic CPU instruction that compares a value in memory to an "expected" value and, if they match, swaps it with a new value. The CAS approach is better because the operation is fast and handled by a built-in CPU instruction, avoids thread blocking, and reduces context switching, which is an expensive operation.
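The CAS retry pattern is visible directly in the java.util.concurrent.atomic classes. The sketch below (class and method names are illustrative, not from the JDK) builds a lock-free increment on top of AtomicInteger.compareAndSet, the same read–compute–swap loop the JDK applies internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    // Lock-free increment: retry the CAS until no other thread
    // modified the value between our read and our swap.
    static int incrementAndGet(AtomicInteger counter) {
        while (true) {
            int expected = counter.get();   // read the current value
            int updated = expected + 1;     // compute the new value
            if (counter.compareAndSet(expected, updated)) {
                return updated;             // CAS succeeded, no lock was taken
            }
            // CAS failed: another thread won the race; retry with a fresh read
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                incrementAndGet(counter);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always 2000, without any blocking lock
    }
}
```

Despite two threads racing on the same counter, no thread ever blocks; a losing thread simply retries, which is exactly the behavior the retry-storm mitigation discussed next is designed to keep in check.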
At their core, the lock and synchronizer mechanisms are built upon the AQS framework, the abstract class java.util.concurrent.locks.AbstractQueuedSynchronizer, which relies on a single atomic int state value (for a long state value there is a dedicated implementation, AbstractQueuedLongSynchronizer). The Sync implementations are responsible for retrying and for deciding whether the lock is acquired; when a thread does not succeed in acquiring the lock, because there is no availability, the AQS framework queues the thread until a release happens. By design, the framework completely delegates the responsibility for lock acquisition and release to concrete Sync implementations defined within each lock and synchronizer.
Before moving to abstractions and implementations, it is important to analyze the tools and mechanisms that mitigate the retry-storm problem, a CAS drawback. Along with resolving the infinite-retry problem, these mechanisms improve performance and throughput by leveraging the operating system and CPU to suspend threads. This eliminates the need for busy-waiting by allowing threads to suspend instead of actively consuming CPU while waiting, and it minimizes the overhead of context switching, an expensive operation in which the CPU must repeatedly save and restore a thread's state.
java.util.concurrent.locks.LockSupport is a class designed to block and unblock threads within the AQS framework. It offers a convenient static API (all the methods are static) whose invocations are associated with the current thread (except unpark, which requires a specific thread to unpark), and it is able to recognize both virtual and operating system threads. Under the hood, LockSupport relies on low-level system operations exposed through jdk.internal.misc.Unsafe, or on a dedicated implementation optimized for virtual threads.
Internally, each thread has a Parker associated with it, which holds a volatile single-bit flag (the permit) used to efficiently block and unblock the thread; along with the permit flag, the Parker holds the thread's state. The permit flag indicates whether a park operation can proceed immediately without suspending the thread, or whether the thread lacks permission to continue and should be suspended. The permit flag is tied to the park() operation, and the terminology is intentionally centered around the concept of parking.
- permit flag = 0 means the thread can be parked: it does not currently have permission to continue without suspending, and it will suspend if it calls park().
- permit flag = 1 means the thread cannot be parked right now: it already has permission to proceed, so park() will return immediately.
LockSupport#park
The LockSupport#park method uses a low-level operating system call to suspend the thread that invoked it. The JVM marks the thread as blocked for its internal bookkeeping; the low-level call saves the CPU context (registers, stack pointer, instruction pointer, flags) into a kernel-level task structure, and the kernel removes the thread from its runnable queue, moving it into the waiting queue.
When using LockSupport.park() to suspend a thread, the suspension happens only if the permit is not available (as described by the Parker permit flag). This may appear counterintuitive at first, so it is worth taking a moment to see what is really going on. The Parker's permit flag indicates whether the JVM should suspend the current thread or let it continue execution without suspension (no block/park happens). Permission to continue without suspension arises when a framework or utility (or, less commonly, some other thread) has already invoked unpark() on the current thread, or has interrupted it. Even with modern tools, engineers should treat thread interruption with care and responsibility. It is also important to remember that the permit flag is not cumulative (it is a single-bit token, not a counter): multiple unpark() calls still result in only a single retained permit.
LockSupport#unpark(Thread)
LockSupport#unpark(Thread) is a static method that wakes up the thread passed as an argument, allowing it to resume if it is currently parked. The method may be invoked even for a target thread that is not suspended; in this case, the Parker associated with the target thread records the permit, and the next park() call on that thread returns immediately without suspending it. When unpark() is called on a suspended thread, the JVM wakes the thread by instructing the operating system kernel to move it from the waiting queue back to the runnable queue. This kernel operation is efficient and eliminates the need to rebuild the CPU context, because the context is preserved in the kernel-level task structure.
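The permit semantics can be observed directly. The minimal sketch below (the class name is illustrative) first shows that a permit granted in advance makes park() a no-op and that permits do not accumulate, then shows a real suspension ended by unpark() from another thread:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {

    public static void main(String[] args) throws InterruptedException {
        // 1. A permit granted in advance makes the next park() a no-op.
        LockSupport.unpark(Thread.currentThread()); // permit flag -> 1
        LockSupport.unpark(Thread.currentThread()); // not cumulative: still one permit
        LockSupport.park();                         // consumes the permit, returns immediately
        System.out.println("main: park returned immediately");

        // 2. Without a permit, park() suspends until another thread unparks us.
        Thread worker = new Thread(() -> {
            System.out.println("worker: parking");
            LockSupport.park();                     // no permit -> thread is suspended
            System.out.println("worker: resumed");
        });
        worker.start();
        TimeUnit.MILLISECONDS.sleep(200);           // let the worker reach park()
        LockSupport.unpark(worker);                 // move it back to the runnable queue
        worker.join();
    }
}
```

Note that unpark() is only guaranteed to have an effect once the target thread has started, which is why the permit-in-advance part of the sketch operates on the already-running main thread.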
Aside from LockSupport, the Java platform enhances the Thread class with utility methods that improve concurrency efficiency. However, engineers should use them carefully, as they act only as CPU or JVM hints and offer no guarantee that the intended behavior will occur.
- Thread#yield is a hint to the JVM scheduler that the current thread is willing to pause and let other runnable threads of the same priority execute.
- Thread#onSpinWait is a static method introduced in Java 9, designed to indicate that the current thread is busy-waiting without transitioning into a blocked or waiting state. When invoked, it provides a hint to the CPU that the thread cannot make progress until a condition is met (and expects the condition to be met shortly). Because this maps to a low-level CPU instruction, the behavior is predictable and enables certain processor optimizations, such as reducing power consumption and avoiding excessive CPU usage during tight spin loops.
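A typical onSpinWait usage is a short spin loop on a volatile flag, as in this minimal sketch (class and field names are illustrative): the spinning thread stays runnable the whole time, and the hint lets the processor optimize the loop.

```java
import java.util.concurrent.TimeUnit;

public class SpinWaitDemo {

    // volatile so the spinning thread is guaranteed to observe the update
    private static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread spinner = new Thread(() -> {
            // Busy-wait: the thread never blocks or parks; onSpinWait only
            // hints to the CPU that we are in a tight spin loop.
            while (!ready) {
                Thread.onSpinWait();
            }
            System.out.println("spinner: condition met, proceeding");
        });
        spinner.start();

        TimeUnit.MILLISECONDS.sleep(100); // simulate short work on the main thread
        ready = true;                     // release the spinner
        spinner.join();
    }
}
```

Spinning like this is only appropriate when the wait is expected to be very short; for longer waits, parking the thread (as LockSupport does) is the better choice.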
Lock Concepts and the Transition to Explicit Synchronization
Moving beyond implicit locking and synchronization, the modern concurrency features discussed earlier enabled Java to adopt lock concepts and support explicit synchronization. Apart from the performance benefits, the Locking API and synchronizers are more convenient to use because they remove many of the limitations and much of the boilerplate associated with intrinsic locking. With a synchronized block or method, software engineers are forced into a rigid structure where the lock must be acquired and released within a specific block, while the Locking API breaks away from these constraints and gives more flexibility.
In addition to lock concepts, the Java platform provides a powerful group of concurrency utilities known as synchronizers. These APIs help coordinate the progress of multiple threads efficiently, without forcing developers into the rigid structures associated with intrinsic locking or even the Lock API.
Explicit Locking with the Lock API
The java.util.concurrent.locks.Lock interface is the foundational abstraction for explicit locking in modern Java concurrency and a flexible alternative to the synchronized block. Like the synchronized block, the Lock API enables controlled access to shared resources, while offering greater flexibility through explicit locking. Lock#lock acquires the lock for the current thread, and Lock#unlock releases the acquired lock. Additionally, the modern Lock API provides Lock#tryLock, which attempts to obtain the lock immediately without suspending, and Lock#tryLock(time, timeUnit), which waits up to the specified time; both methods return a boolean indicating whether the thread successfully acquired the lock. Lock#lockInterruptibly is another method, which acquires the lock unless the current thread is interrupted while waiting.
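The difference between the immediate and the timed attempt is easy to see in a small sketch (class name and timings are illustrative): while another thread holds the lock, tryLock() gives up instantly, whereas tryLock(time, timeUnit) parks and normally succeeds once the holder releases.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                TimeUnit.SECONDS.sleep(2); // hold the lock for two seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        TimeUnit.MILLISECONDS.sleep(100); // give the holder time to acquire the lock

        // Immediate attempt: normally false here, and the thread is never parked.
        boolean immediate = lock.tryLock();
        System.out.println("tryLock(): " + immediate);
        if (immediate) {
            lock.unlock(); // only possible on a very slow start; release to stay safe
        }

        // Timed attempt: parks for up to 5 seconds, succeeds once the holder releases.
        if (lock.tryLock(5, TimeUnit.SECONDS)) {
            try {
                System.out.println("tryLock(5s): acquired");
            } finally {
                lock.unlock();
            }
        }
        holder.join();
    }
}
```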
The following section provides a detailed look at java.util.concurrent.locks.ReentrantLock , the main implementation of the Lock API. The primary purpose is to offer capabilities similar to intrinsic locking while following modern concurrency approach and being built on top of the AQS framework.
The lock and unlock methods offer controlled access to a shared resource, similar to a synchronized block but with greater convenience: no block-level restriction, no object monitor, and plain instance-method invocation. The example below demonstrates how these benefits translate into a clearer and more structured synchronization flow.
package org.course.concurrency;

import java.time.LocalDateTime;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MultithreadingApplication {

    public static void main(String[] args) {
        var lock = new ReentrantLock();
        // tasks initialisation
        Task task = new Task(lock);
        // task running
        new Thread(task).start();
        new Thread(task).start();
        new Thread(task).start();
    }

    private static class Task implements Runnable {

        private final Lock lock;

        public Task(Lock lock) {
            this.lock = lock;
        }

        @Override
        public void run() {
            lock.lock();
            try {
                TimeUnit.SECONDS.sleep(5); // simulate task execution
                var taskName = Thread.currentThread().getName();
                System.out.println("Hello from thread " + taskName + " at " + LocalDateTime.now());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interruption status
                throw new RuntimeException(e);
            } finally {
                lock.unlock();
            }
        }
    }
}
The ReentrantLock implementation supports fair and non-fair synchronization. Before taking a closer look at the features and implementation aspects of the ReentrantLock API, it is important to introduce ReentrantLock.Sync, as it forms the core of the Lock implementation. In practice, the behavior of ReentrantLock is driven by its Sync component (a nested class): the primary responsibility of the implementation is to initialize the appropriate Sync mode (fair or non-fair) and delegate all locking operations to the underlying Sync instance. The default constructor always defines non-fair synchronization, while an alternative constructor allows fairness to be explicitly specified through a boolean parameter.
At its core, Sync is built on top of the AQS framework described earlier and inherits its full set of synchronization features. The AQS framework maintains a queue of the threads that failed to acquire the lock, ordering them in a FIFO manner. In the fair Sync implementation, when the lock is released, the thread at the head of the queue is unparked and given the opportunity to acquire the lock next; a newly arriving thread may acquire the lock directly only when no other threads are waiting in the queue. In contrast, the non-fair Sync implementation allows a thread to acquire the lock immediately, without checking whether other threads are waiting.
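The fairness mode is fixed at construction time, as this minimal sketch shows (the class name is illustrative); ReentrantLock#isFair() reports which Sync variant was installed:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {

    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();  // default constructor: non-fair Sync
        ReentrantLock fair = new ReentrantLock(true); // fair Sync: FIFO hand-off from the queue

        System.out.println("non-fair: " + nonFair.isFair());
        System.out.println("fair: " + fair.isFair());
    }
}
```

Non-fair mode usually yields higher throughput because a releasing-and-reacquiring thread avoids the queue, at the cost of possible starvation of queued threads; fair mode trades throughput for predictable FIFO ordering.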
At this point, it is time to take a closer look at the Sync implementations within ReentrantLock, which are optimized for concurrent access, single-thread lock acquisition, and lock release. In practice, fair and non-fair lock acquisition share a similar flow. First, the Sync tries to acquire the lock using a CAS operation. If the lock is not available, it checks the current lock owner; if the current thread already owns it, the state counter is incremented to reflect the additional (reentrant) acquisition, otherwise the attempt fails and the next acquisition attempt proceeds within the AQS framework through AbstractQueuedSynchronizer#tryAcquire. The tryAcquire implementation participates in lock acquisition when a thread wakes up after being unparked, attempting to acquire the lock using a CAS operation when the volatile state is zero, meaning the lock is free (there is no difference between the first acquisition attempt and one made after the thread wakes up). To make sure the fair implementation does not acquire the lock ahead of other threads, initialTryLock and tryAcquire check the queue before evaluating the state or running the CAS operation, ensuring it is empty or that the current thread is the next one eligible to acquire the lock. When a thread successfully acquires the lock, the Sync registers it as the exclusive owner of the lock.
For the ReentrantLock#unlock() operation, both fair and non-fair Sync variants delegate lock release to the AQS framework, where tryRelease is an implementation defined by ReentrantLock.Sync and inherited by both variants. On release, the Sync first checks that the current thread exclusively owns the lock and throws IllegalMonitorStateException if it does not, since only the owning thread can release the lock. Next, it decrements the lock count and updates the state with the resulting value. When the count reaches zero, tryRelease signals to the AQS framework that it can unpark the head thread from the waiting queue. Otherwise, it indicates that the current thread still holds the lock and additional releases are required before another thread can acquire it. When a thread successfully releases the lock, the Sync revokes its exclusive ownership.
package org.course.concurrency;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MultithreadingApplication {

    public static void main(String[] args) {
        Lock lock = new ReentrantLock();
        // tasks initialisation
        Task task = new Task(lock);
        // task running
        new Thread(task).start();
        new Thread(task).start();
    }

    private static class Task implements Runnable {

        private final Lock lock;

        public Task(Lock lock) {
            this.lock = lock;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            lock.lock();
            System.out.println("Lock acquired by thread " + threadName);
            safeSleep(500);
            lock.lock(); // reentrant acquisition by the owning thread
            System.out.println("Additional lock acquired by thread " + threadName);
            safeSleep(200);
            lock.unlock();
            System.out.println("Additional lock has been released by thread " + threadName);
            safeSleep(300);
            lock.unlock();
            System.out.println("Lock has been released by thread " + threadName);
        }
    }

    private static void safeSleep(long millis) {
        try {
            TimeUnit.MILLISECONDS.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interruption status
        }
    }
}
In addition to lock, the API includes several options that offer benefits in complex and more specific concurrency scenarios: tryLock(), tryLock(time, timeUnit), and lockInterruptibly().
ReentrantLock#tryLock() is a fully non-blocking operation that makes a single attempt to acquire the lock using CAS when the state indicates the lock is available, similar to initialTryLock in the non-fair implementation. If the lock is available, the thread acquires it and the method returns true; otherwise it returns false without parking or queuing the thread for subsequent attempts.
ReentrantLock#tryLock(time, timeUnit) follows a similar flow to ReentrantLock#lock, with the key differences that it may throw InterruptedException and that the thread is parked only for a bounded interval. If the lock is available, the thread acquires it immediately and the method returns true. Otherwise, the AQS framework parks the thread for up to the specified timeout. When the thread is not unparked within this interval (because the lock is not released, or other queued threads acquire it first), the acquisition attempt is cancelled and the method returns false. If the thread is interrupted at any point while waiting, the acquisition is aborted and an InterruptedException is thrown.
ReentrantLock#lockInterruptibly() behaves similarly to ReentrantLock#lock(), but differs in that it acquires the lock only if the thread is not interrupted while waiting. The implementation is based on AbstractQueuedSynchronizer#acquireInterruptibly, which detects the interruption signal and throws InterruptedException when interruption occurs.
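The practical value of lockInterruptibly() is that a thread parked on the lock stays responsive to interruption, which a plain lock() call does not offer. A minimal sketch (class name and timings are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main holds the lock, so the waiter below cannot acquire it

        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // parks, but stays responsive to interruption
                try {
                    System.out.println("waiter: acquired (not expected here)");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("waiter: interrupted while waiting for the lock");
            }
        });
        waiter.start();

        TimeUnit.MILLISECONDS.sleep(200); // let the waiter park inside lockInterruptibly
        waiter.interrupt();               // aborts the acquisition with InterruptedException
        waiter.join();
        lock.unlock();
    }
}
```

This makes lockInterruptibly() the right choice for cancellable work, such as tasks managed by an ExecutorService that may be shut down while waiting for a lock.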
Explicit Coordination with Conditions
Within the explicit locking mechanism, the Condition API is another improvement introduced with the modern concurrency utilities. It provides functionality similar to the classical wait and notify, while offering greater flexibility: it allows one or multiple conditions per lock, supports different awaiting variants, and provides clearer coordination semantics. The following example demonstrates how a publisher can be parked while a value is still pending consumption, and a subscriber can be parked while no value is available, without interfering with the other side. However, it is important to remember that the critical section between lock() and unlock() remains protected and can be accessed by only one thread at a time. In other words, a publisher and a subscriber cannot execute this section concurrently, ensuring correct and predictable synchronization behavior.
package org.course.concurrency;

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MultithreadingApplication {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws Exception {
        ValueHolder valueHolder = new ValueHolder();
        Lock lock = new ReentrantLock();
        Condition readCondition = lock.newCondition();
        Condition writeCondition = lock.newCondition();
        // initiate 10 subscribers
        List<Thread> subscriberWorkers = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            Thread subscriberWorker =
                new Thread(new Subscriber(valueHolder, lock, readCondition, writeCondition));
            subscriberWorkers.add(subscriberWorker);
            subscriberWorker.start();
        }
        // initiate the publisher
        Thread publisherWorker =
            new Thread(new Publisher(valueHolder, lock, readCondition, writeCondition));
        publisherWorker.start();
        // run the application for 60 seconds
        TimeUnit.SECONDS.sleep(60);
        // complete the publisher execution
        publisherWorker.interrupt();
        // complete the subscribers' execution
        subscriberWorkers.forEach(Thread::interrupt);
    }

    private static class ValueHolder {

        private Integer value;

        public void setValue(Integer value) {
            this.value = value;
        }

        public boolean exists() {
            return value != null;
        }

        public Integer pullValue() {
            var value = this.value;
            this.value = null;
            return value;
        }
    }

    private static class Publisher implements Runnable {

        private final ValueHolder valueHolder;
        private final Lock lock;
        private final Condition readCondition;
        private final Condition writeCondition;

        public Publisher(
            ValueHolder valueHolder, Lock lock, Condition readCondition, Condition writeCondition) {
            this.valueHolder = Objects.requireNonNull(valueHolder);
            this.lock = lock;
            this.readCondition = readCondition;
            this.writeCondition = writeCondition;
        }

        @Override
        public void run() {
            while (true) {
                lock.lock();
                try {
                    // re-check the predicate in a loop to guard against spurious wake-ups
                    while (valueHolder.exists()) {
                        writeCondition.await(); // wait for writing readiness signal
                    }
                    valueHolder.setValue(RANDOM.nextInt(100));
                    readCondition.signal(); // signal readiness to read the value
                } catch (InterruptedException e) {
                    break;
                } finally {
                    lock.unlock();
                }
            }
        }
    }

    private static class Subscriber implements Runnable {

        private final ValueHolder valueHolder;
        private final Lock lock;
        private final Condition readCondition;
        private final Condition writeCondition;

        public Subscriber(
            ValueHolder valueHolder, Lock lock, Condition readCondition, Condition writeCondition) {
            this.valueHolder = Objects.requireNonNull(valueHolder);
            this.lock = lock;
            this.readCondition = readCondition;
            this.writeCondition = writeCondition;
        }

        @Override
        public void run() {
            String subscriber = Thread.currentThread().getName();
            while (true) {
                lock.lock();
                try {
                    // re-check the predicate in a loop to guard against spurious wake-ups
                    while (!valueHolder.exists()) {
                        readCondition.await(); // wait for read readiness signal
                    }
                    // simulate heavy value processing
                    TimeUnit.MILLISECONDS.sleep(RANDOM.nextInt(500));
                    System.out.println("Subscriber: " + subscriber + ", value: " + valueHolder.pullValue());
                } catch (InterruptedException e) {
                    break;
                } finally {
                    writeCondition.signal(); // signal readiness to write the next value
                    lock.unlock();
                }
            }
        }
    }
}
The Condition API implementation relies entirely on the AQS framework, and the following section describes the basic mechanisms provided by the framework, as no custom implementation exists at the time of writing. Conditions can be created only from a Lock instance, as awaiting a condition or signaling other threads is permitted exclusively for the thread that currently holds the lock. Each Condition maintains its own queue of parked threads, separate from the main synchronization queue of the AQS framework.
Condition#await() causes the thread to wait for a signal or until the thread is interrupted. In practice, the framework creates a new ConditionNode (an extension of Node) within the condition queue, releases the lock, and then parks the current thread.
Condition#awaitUninterruptibly() behaves similarly to Condition#await(), but differs in that it ignores thread interruption and waits until it is signaled.
Condition#awaitNanos(nanosTimeout) behaves similarly to Condition#await(), but differs in that it specifies a maximum waiting time, uses LockSupport#parkNanos() instead of LockSupport#park(), and returns the estimated remaining nanoseconds if awakened early, or zero or a negative value if the timeout is reached. In either case, the thread is automatically awakened when the timeout expires if no signal is received.
Condition#await(time, unit) behaves similarly to Condition#awaitNanos(), but returns a boolean indicating whether the wait completed before the timeout elapsed, rather than returning the remaining wait time.
Condition#awaitUntil(deadline) is similar to Condition#await(time, unit), but instead of specifying a timeout, it accepts a deadline: a Date at which the waiting thread should be unparked if no signal occurs.
Condition#signal() pulls the first node from the condition queue and unparks the waiter, allowing it to attempt to re-acquire the lock. At this stage, the lock acquisition proceeds through the AQS framework; if the acquisition fails, the thread is parked and appended to the tail of the AQS queue. This mechanism guarantees that only one thread executes the critical section at a time.
Condition#signalAll() wakes all threads waiting on the condition queue. For each individual node, it follows the same processing flow as Condition#signal().
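The timed variants are easy to observe in isolation. This minimal sketch (class name and timeout are illustrative) awaits a condition that no thread ever signals, so await(time, unit) reports how the wait ended; note that the condition can only be awaited while its lock is held:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedAwaitDemo {

    public static void main(String[] args) throws InterruptedException {
        Lock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        lock.lock(); // a condition can only be awaited while holding its lock
        try {
            // No thread ever signals this condition, so the bounded wait
            // normally elapses and await(time, unit) returns false.
            boolean signaled = condition.await(300, TimeUnit.MILLISECONDS);
            System.out.println("signaled before timeout: " + signaled);
        } finally {
            lock.unlock();
        }
    }
}
```

In production code, a timed await belongs inside a predicate loop just like an untimed one, because an early (including spurious) wake-up still requires re-checking the condition.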
Concurrency Optimization with Read–Write Locks
In many scenarios, engineers follow a common locking strategy for concurrent access to shared resources that distinguishes between read and write access. From a performance perspective, it makes no sense to block other threads from reading a shared resource when no modification is being performed. The approach mirrors what we see in relational database systems (RDBMS): a record can be locked for reading, allowing other transactions to read concurrently while write operations wait to prevent race conditions; when a record is locked for writing, both read and write operations by other transactions are blocked to maintain consistency and avoid data corruption. To address this pattern, the Java Concurrency API introduced a read–write locking abstraction through the java.util.concurrent.locks.ReadWriteLock interface and its core implementation, java.util.concurrent.locks.ReentrantReadWriteLock. The implementation separates read and write access at the synchronization level and significantly improves concurrency and throughput without compromising thread safety. This is achieved through well-defined Lock-based mechanisms: ReentrantReadWriteLock exposes read and write implementations of java.util.concurrent.locks.Lock that adhere to this model.
package org.course.concurrency;

import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MultithreadingApplication {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws Exception {
        List<Thread> threads = new ArrayList<>();
        List<String> cache = new ArrayList<>();
        ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
        // initiate evicting
        Thread evictTask = new Thread(new CacheEvictTask(readWriteLock, cache));
        threads.add(evictTask);
        evictTask.start();
        // initiate updating
        for (int i = 0; i < 2; i++) {
            Thread updateThread = new Thread(new CacheUpdaterTask(readWriteLock, cache));
            threads.add(updateThread);
            updateThread.start();
        }
        // initiate reading
        for (int i = 0; i < 5; i++) {
            Thread readerThread = new Thread(new CacheReaderTask(readWriteLock, cache));
            threads.add(readerThread);
            readerThread.start();
        }
        // run the application for 60 seconds
        TimeUnit.SECONDS.sleep(60);
        for (Thread thread : threads) {
            thread.interrupt();
        }
    }

    private static class CacheUpdaterTask implements Runnable {

        private final Lock writeLock;
        private final List<String> cache;

        public CacheUpdaterTask(ReadWriteLock readWriteLock, List<String> cache) {
            this.writeLock = readWriteLock.writeLock();
            this.cache = cache;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            while (true) {
                try {
                    // complete the cache every second; sleep before acquiring the lock,
                    // so the finally block never releases a lock this thread does not hold
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e) {
                    break;
                }
                writeLock.lock();
                try {
                    int value = RANDOM.nextInt(100);
                    System.out.println(threadName + " updates the cache with: " + value);
                    cache.add(threadName + value);
                    TimeUnit.MILLISECONDS.sleep(500); // simulate heavy update
                } catch (InterruptedException e) {
                    break;
                } finally {
                    System.out.println(threadName + " completed the cache update.");
                    writeLock.unlock();
                }
            }
        }
    }

    private static class CacheEvictTask implements Runnable {

        private final Lock writeLock;
        private final List<String> cache;

        public CacheEvictTask(ReadWriteLock readWriteLock, List<String> cache) {
            this.writeLock = readWriteLock.writeLock();
            this.cache = cache;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            while (true) {
                try {
                    // evict the cache every 5 seconds; sleep outside the lock
                    TimeUnit.SECONDS.sleep(5);
                } catch (InterruptedException e) {
                    break;
                }
                writeLock.lock();
                try {
                    System.out.println(threadName + " starting to evict the cache.");
                    cache.clear();
                    TimeUnit.MILLISECONDS.sleep(1000); // simulate heavy evict
                } catch (InterruptedException e) {
                    break;
                } finally {
                    System.out.println(threadName + " completed the cache eviction.");
                    writeLock.unlock();
                }
            }
        }
    }

    private static class CacheReaderTask implements Runnable {

        private final Lock readLock;
        private final List<String> cache;

        public CacheReaderTask(ReadWriteLock readWriteLock, List<String> cache) {
            this.readLock = readWriteLock.readLock();
            this.cache = cache;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            while (true) {
                readLock.lock();
                try {
                    System.out.println(
                        threadName + " reads the cache value " + cache + " at " + LocalDateTime.now());
                    TimeUnit.MILLISECONDS.sleep(50); // simulate heavy read operation
                } catch (InterruptedException e) {
                    break;
                } finally {
                    readLock.unlock();
                }
            }
        }
    }
}
Following the approach used for the Lock API, this section explores the ReadWriteLock API via its core implementation, java.util.concurrent.locks.ReentrantReadWriteLock. While the ReadWriteLock API appears different, it follows the locking principles described in the previous chapters, being designed specifically for read–write scenarios. At the core, the implementation is based on the AQS framework through fair and non-fair Sync variants, where read operations rely on shared mode and write operations rely on exclusive mode; both the shared lock and the exclusive lock are supported through the AQS framework. A key aspect of the implementation is that, starting with Java 25, it relies on java.util.concurrent.locks.AbstractQueuedLongSynchronizer, where the long state eliminates the int limit and enables tracking both shared read locks and exclusive write locks more efficiently, an essential requirement for coordinating the operations. The shared lock allows multiple threads to proceed concurrently but does not support conditions, whereas the exclusive lock permits only a single thread to proceed and does support conditions.
Write Lock API through ReentrantReadWriteLock.WriteLock
The java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock implementation is designed to guarantee exclusive and safe write access to a shared resource: it waits for active readers and writers to complete and blocks further access until the write operation releases the lock. In practice, the Sync implementation evaluates the state for shared and exclusive locks and, if the lock is available and write access is allowed (the rules differ between the fair and non-fair implementations), the thread acquires the exclusive lock, preventing other threads from proceeding. Otherwise, the thread is parked and queued following the standard AQS framework flow.
For the WriteLock#unlock() operation, both fair and non-fair Sync variants delegate exclusive lock release to the AQS framework, where ReentrantReadWriteLock.Sync provides the implementation of tryRelease() (a dedicated method for releasing the exclusive lock). First, the Sync checks that the current thread exclusively owns the lock and throws IllegalMonitorStateException if it does not, since only the owning thread can release the exclusive lock. Next, it decrements the exclusive lock count and updates the state with the resulting value. When the count reaches zero, tryRelease() signals to AQS that it can unpark the head thread from the waiting queue. Unparking a first node of SharedNode type causes a cascade that wakes all subsequent threads until an ExclusiveNode type is reached. In this way, AQS guarantees that threads waiting for the shared lock that entered the queue before a writer are unparked, while readers that attempted to acquire the shared lock afterwards remain parked.
WriteLock#tryLock() is a fully non-blocking operation that makes a single attempt to acquire the lock using CAS when the state indicates the write lock is available. If the write lock is available and the write operation is permitted, the thread acquires it and the method returns true; otherwise it returns false without parking or queuing the thread for subsequent attempts.
WriteLock#tryLock(time, timeUnit) behaves similarly to ReentrantLock#tryLock(time, timeUnit). Since exclusive acquisition is implemented through tryAcquire, the same acquisition logic applies. If the initial attempt fails, the thread is parked for the given timeout, and the method returns a boolean indicating whether the lock was acquired before the timeout expired.
WriteLock#lockInterruptibly() behaves similarly to WriteLock#lock(), but differs in that it keeps attempting to acquire the lock only until the thread is interrupted. The implementation is based on AbstractQueuedLongSynchronizer#acquireInterruptibly(), which detects the interruption signal and throws InterruptedException when interruption occurs.
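To make the write lock concrete, the sketch below guards a plain HashMap with the exclusive write lock; the class and method names are illustrative, not taken from the text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteLockExample {

    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static final Map<String, String> CACHE = new HashMap<>();

    static void put(String key, String value) {
        LOCK.writeLock().lock(); // waits for active readers and writers to finish
        try {
            CACHE.put(key, value); // exclusive access: no reader or writer can interleave
        } finally {
            LOCK.writeLock().unlock(); // may unpark queued readers or the next writer
        }
    }

    public static void main(String[] args) {
        put("status", "ready");
        System.out.println(CACHE.get("status"));
    }
}
```

The finally block mirrors the pattern used throughout this chapter: the exclusive lock must be released by the owning thread, or subsequent acquirers stay parked forever.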
Read Lock API through ReentrantReadWriteLock.ReadLock
The java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock implementation is designed to allow multiple threads to proceed concurrently when no writer thread holds the exclusive lock or is waiting to acquire it. In addition, the Sync ensures that a reader thread is parked when the queue contains a writer thread.
The ReadLock#unlock() operation follows a pattern similar to other lock implementations: the Sync adjusts the shared state and decides whether waiting threads should be unparked. Releasing a shared lock has no effect on other readers; however, it becomes significant for waiting writers once the final shared lock is released, as this event signals writer wake-up.
ReadLock#tryLock() is a fully non-blocking operation that attempts to acquire the shared lock and succeeds only if no exclusive lock is held by another thread.
ReadLock#tryLock(time, timeUnit) attempts to acquire the shared lock and, if the initial attempt fails, parks the thread for the given timeout. The method returns a boolean indicating whether the lock was successfully acquired before the timeout expired.
ReadLock#lockInterruptibly() is similar to the write lock's interruptible variant; the implementation is based on AbstractQueuedLongSynchronizer#acquireSharedInterruptibly(), which detects the interruption signal and throws InterruptedException when interruption occurs.
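A minimal sketch of the shared mode described above: two reader threads hold the read lock at the same time, which the exclusive write lock would never allow. The class name is illustrative.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockExample {

    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        Runnable reader = () -> {
            lock.readLock().lock(); // shared mode: multiple readers proceed concurrently
            try {
                System.out.println(Thread.currentThread().getName() + " reading");
            } finally {
                lock.readLock().unlock(); // the last reader out signals waiting writers
            }
        };

        Thread first = new Thread(reader);
        Thread second = new Thread(reader);
        first.start();
        second.start();
        first.join();
        second.join();
    }
}
```

If a writer thread were queued between the two readers, the Sync would park the second reader instead of letting it share the lock, as described above.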
StampedLock: A Different Way to Lock
The java.util.concurrent.locks.StampedLock is a separate, independent implementation that supports read-write locking; it does not rely on the AQS framework and does not implement a common concurrency abstraction. It exposes methods for read and write locks, but neither of them supports conditions. The behavior of this implementation differs significantly from other locking mechanisms, as it has no concept of thread ownership: locks acquired in one thread may be released or converted in another. It does not follow the usual abstractions and is mostly designed for use as an internal utility in the development of thread-safe components.
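A short sketch of the stamp-based style, assuming a simple counter component (the class is illustrative): every acquisition returns a long stamp that is later passed back to release or validate, instead of relying on thread ownership. It also shows tryOptimisticRead(), which has no counterpart in the other lock APIs.

```java
import java.util.concurrent.locks.StampedLock;

public class StampedCounter {

    private final StampedLock lock = new StampedLock();
    private int value = 0;

    void increment() {
        long stamp = lock.writeLock(); // exclusive access, identified by the stamp
        try {
            value++;
        } finally {
            lock.unlockWrite(stamp); // the stamp, not the thread, releases the lock
        }
    }

    int read() {
        long stamp = lock.tryOptimisticRead(); // lock-free read attempt
        int snapshot = value;
        if (!lock.validate(stamp)) { // a writer intervened; fall back to a real read lock
            stamp = lock.readLock();
            try {
                snapshot = value;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return snapshot;
    }

    public static void main(String[] args) {
        StampedCounter counter = new StampedCounter();
        counter.increment();
        System.out.println(counter.read());
    }
}
```

The optimistic path costs no locking at all on the happy path; validate(stamp) simply checks that no write lock was acquired since the stamp was issued.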
Thread Coordination with Synchronizers
Synchronizers do not expose a common abstraction or follow a strict hierarchy. Instead, each synchronizer is provided as a concrete implementation with its own specific set of methods tailored to a particular coordination use case. Although their purposes differ, they are typically built on top of the AQS framework, which manages the underlying synchronization mechanics.
Permit-Based Access Control with Semaphore
The java.util.concurrent.Semaphore is designed to manage concurrent access to a shared resource via a fixed number of permits. Unlike Lock, the semaphore's Sync implementation relies entirely on shared lock mode, allowing a specific number of threads to proceed concurrently. It does not enforce lock ownership, which makes permit acquisition and release flexible. The initial int parameter defines how many concurrent accesses are allowed, and, like the other APIs, it supports both fair and non-fair synchronization modes.
package org.course.concurrency;

import java.util.Random;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class MultithreadingApplication {

    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);
        for (int i = 0; i < 20; i++) {
            new Thread(new VisitorTask(semaphore)).start();
        }
    }

    private static class VisitorTask implements Runnable {

        private final Semaphore semaphore;
        private final Random random;

        VisitorTask(Semaphore semaphore) {
            this.semaphore = semaphore;
            this.random = new Random();
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            try {
                semaphore.acquire();
                System.out.println("Acquire lock for visitor " + threadName);
                TimeUnit.SECONDS.sleep(random.nextInt(5)); // simulate heavy
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            } finally {
                semaphore.release();
                System.out.println("Release lock for visitor " + threadName);
            }
        }
    }
}
Semaphore#acquire() attempts to acquire a permit to proceed; if no permits are available, the thread is parked within the AQS framework until a permit is released or until the thread is interrupted. The overloaded method acquire(permits) acquires a specific number of permits, whereas acquire() always requires a single permit. In the fair implementation, the semaphore first checks whether there are threads waiting in the queue before attempting to acquire permits. In contrast, the non-fair implementation attempts to acquire permits immediately, regardless of queued threads. It is important to note that Semaphore does not validate the correctness of permit usage, neither when setting the available permits nor when acquiring a given number of them. The Semaphore can be initialized with zero or even negative permits (a completely valid value for the state), and a flow may request more permits than the semaphore's maximum. Incorrect permit management can therefore lead to unintended blocking issues, and it is entirely the responsibility of the software engineer to prevent and mitigate such problems.
The Semaphore#release() method returns a permit to the semaphore, or several permits in the case of the overloaded release(permits). Once the release is completed and the state is updated, the AQS framework wakes up the head thread in the queue, which may cause a cascade of thread unparking as long as permits remain available (detected automatically by AQS).
Semaphore#tryAcquire() and Semaphore#tryAcquire(permits) are fully non-blocking operations that attempt to acquire one or more permits. If sufficient permits are available, they are acquired and the method returns true; otherwise, the method returns false immediately.
Semaphore#tryAcquire(timeout, unit) and Semaphore#tryAcquire(permits, timeout, unit) attempt to acquire one or more permits; if the initial attempt fails, the thread is parked for the given timeout, and the method returns a boolean indicating whether the permits were successfully acquired before the timeout expired.
Semaphore#acquireUninterruptibly() and Semaphore#acquireUninterruptibly(permits) behave similarly to Semaphore#acquire(), but differ in that they do not respond to thread interruption while waiting to acquire permits.
In addition to standard methods that facilitate thread coordination, the Semaphore offers a few others that can be useful when designing highly concurrent applications.
- drainPermits: acquires all available permits and returns the number of permits that have been acquired
- availablePermits: returns the current number of permits available in the semaphore
- getQueueLength: returns an approximate number of parked threads in the queue
- hasQueuedThreads: indicates whether the queue contains parked threads
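The non-blocking acquisition and inspection methods can be sketched as follows; the expected values in the comments assume the three initial permits shown.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreInspection {

    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);

        System.out.println(semaphore.tryAcquire(2));       // true: 2 of 3 permits taken
        System.out.println(semaphore.availablePermits());  // 1 permit remaining
        System.out.println(semaphore.tryAcquire(2));       // false: only 1 permit left
        System.out.println(semaphore.drainPermits());      // 1: takes every remaining permit
        System.out.println(semaphore.availablePermits());  // 0
    }
}
```

Because permits are not tied to a thread, the draining code could just as well run in a different thread than the one that acquired the first two permits.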
One-Time Coordination Using CountDownLatch
The java.util.concurrent.CountDownLatch implementation is a one-time synchronization aid that blocks waiting threads until a predefined number of events has occurred. It is initialized with a count (a count of shared locks), and each call to countDown() decrements the shared lock state. Once the state reaches zero, all waiting threads are awakened. Any calls to await() or countDown() after the state becomes zero have no effect.
package org.course.concurrency;

import java.time.LocalDateTime;
import java.util.Random;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class MultithreadingApplication {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws InterruptedException {
        int workerNumber = 10;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(workerNumber);
        for (int i = 0; i < workerNumber; ++i) {
            new Thread(new Worker(startSignal, doneSignal)).start();
        }
        TimeUnit.SECONDS.sleep(RANDOM.nextInt(5)); // simulate heavy preparation
        startSignal.countDown(); // let all threads proceed
        System.out.println("All workers received signal to start processing at " + LocalDateTime.now());
        doneSignal.await(); // wait for all to finish
        System.out.println("All workers finalised processing at " + LocalDateTime.now());
    }

    private static class Worker implements Runnable {

        private final CountDownLatch startSignal;
        private final CountDownLatch doneSignal;

        Worker(CountDownLatch startSignal, CountDownLatch doneSignal) {
            this.startSignal = startSignal;
            this.doneSignal = doneSignal;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            try {
                System.out.println("Worker " + threadName + " is waiting for start signal");
                startSignal.await();
                System.out.println(
                        "Worker " + threadName + " is performing task processing at " + LocalDateTime.now());
                TimeUnit.SECONDS.sleep(RANDOM.nextInt(10)); // simulate heavy process
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            } finally {
                doneSignal.countDown();
                System.out.println(
                        "Worker " + threadName + " completed task processing at " + LocalDateTime.now());
            }
        }
    }
}
Thread Coordination at Scale with CyclicBarrier
The java.util.concurrent.CyclicBarrier implementation is a waiting-peers synchronizer that enables a defined group of threads (a configured number of participants) to block at the await() point, waking them only when all participating threads have arrived. Once a cycle is completed, the barrier is automatically reset and can be used again. At its core, CyclicBarrier relies on ReentrantLock and Condition to guarantee safe "peer" state updates. Threads that reach the barrier first register their arrival and then await on the condition, which parks them while the remaining peers are still pending. Once the defined group size is reached, all waiting threads are unparked, allowing the next set of threads to repeat the process through the same mechanism. It is important to note that threads which received the signal are no longer synchronized with other sets of threads that have reached the barrier or are still waiting to reach it.
package org.course.concurrency;

import java.time.LocalDateTime;
import java.util.Random;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

public class MultithreadingApplication {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) {
        CyclicBarrier cyclicBarrier = new CyclicBarrier(3);
        for (int i = 0; i < 30; i++) { // a multiple of the barrier size, so no thread is left waiting
            new Thread(new Worker(cyclicBarrier)).start();
        }
    }

    private static class Worker implements Runnable {

        private final CyclicBarrier cyclicBarrier;

        Worker(CyclicBarrier cyclicBarrier) {
            this.cyclicBarrier = cyclicBarrier;
        }

        @Override
        public void run() {
            String threadName = Thread.currentThread().getName();
            try {
                TimeUnit.SECONDS.sleep(RANDOM.nextInt(5)); // simulate heavy operation
                System.out.println("Worker " + threadName + " is waiting for barrier");
                cyclicBarrier.await();
                System.out.println(
                        "Worker " + threadName + " passed the barrier at " + LocalDateTime.now());
            } catch (InterruptedException | BrokenBarrierException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
Pairwise Thread Data Exchange via Exchanger
The java.util.concurrent.Exchanger is a synchronizer that enables a pair of threads to safely exchange objects once both have arrived at the exchange point. It does not rely on the AQS framework; instead, it uses low-level compare-and-set (CAS) operations to coordinate the exchange, managing its state through generic object references.
package org.course.concurrency;

import java.util.Random;
import java.util.concurrent.Exchanger;
import java.util.concurrent.TimeUnit;

public class MultithreadingApplication {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) {
        Exchanger<String> exchanger = new Exchanger<>();
        for (int i = 0; i < 20; i++) {
            new Thread(new Worker(exchanger)).start();
        }
    }

    private static class Worker implements Runnable {

        private final Exchanger<String> exchanger;

        public Worker(Exchanger<String> exchanger) {
            this.exchanger = exchanger;
        }

        @Override
        public void run() {
            try {
                TimeUnit.SECONDS.sleep(RANDOM.nextInt(5));
                String threadName = Thread.currentThread().getName();
                System.out.println("Value received " + exchanger.exchange(threadName));
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
Summary
Concurrency keeps evolving, and the modern APIs covered in this chapter show how current utilities streamline thread communication and coordination, improving both the performance and the reliability of multithreaded applications. By leveraging these tools, software engineers can handle complex concurrency challenges effectively.