Understanding Concurrency in Java

Concurrency is a crucial concept in modern programming, especially in a language like Java. It allows multiple parts of a program to execute simultaneously, potentially leading to more efficient use of resources and improved performance.

The Importance of Synchronization

In multithreaded programming, synchronization is a mechanism that ensures that two or more concurrent threads do not simultaneously execute some particular program segment known as a critical section.

When we talk about observing an object in an inconsistent state, it means that one thread is in the middle of modifying an object, and another thread is reading from it at the same time. This can lead to unpredictable results because the object is in a state of flux – it’s not fully updated yet.

Synchronization prevents this by ensuring that only one thread can access the critical section at a time. It does this by using a concept called a lock. When a thread wants to enter a critical section, it must first acquire the lock. If another thread already holds the lock, the thread will be blocked until the lock is released.
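
As a minimal sketch (the class and method names here are hypothetical, not part of the examples later in this post), a synchronized block in Java acquires an object’s intrinsic lock before entering the critical section:

Java
// Hypothetical sketch: a counter whose critical section is guarded by a lock
public class SynchronizedCounter {

    private int count = 0;

    public void increment() {
        // A thread must acquire the intrinsic lock on 'this' before entering;
        // any other thread calling increment() blocks until the lock is released.
        synchronized (this) {
            count++; // critical section: read, add one, write back
        }
    }

    public synchronized int getCount() {
        // Declaring a method synchronized is equivalent to wrapping its body
        // in synchronized (this) { ... }
        return count;
    }
}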

Synchronization also governs the visibility of changes made by one thread to shared data. Without synchronization, if one thread modifies a shared variable, other threads might not see that change. Synchronization ensures that all threads see the most up-to-date value of shared variables.

According to the Java Memory Model (JMM), you must use some form of synchronization to guarantee the visibility of changes. So, in summary, synchronization in multithreaded programming serves two main purposes:

  1. Mutual exclusion: It ensures that only one thread can execute a critical section at a time, preventing threads from observing an object in an inconsistent state.
  2. Visibility: It ensures that changes made by one thread to shared data are visible to other threads.

Atomic Operations and Compound Operations

In Java, the term atomic refers to an operation that is completed in a single step. If an operation is atomic, then it’s considered thread-safe, because it can’t be interrupted and observed in an incomplete state by other threads.

Atomic operations ensure that a thread will not see an arbitrary value when reading a field (i.e., it will see a valid value written by some thread), but they do not guarantee visibility of changes across threads.

Visibility, in this context, refers to when and how changes made by one thread become visible to others. Without proper synchronization, one thread may not see the changes made by another thread immediately or even at all. This is because threads may cache variables in their local memory, and without synchronization, the cache may not be updated with the recent changes.

The Java Language Specification (JLS) guarantees that reading or writing a variable is atomic, unless the variable is a non-volatile long or double. This means that when a thread reads or writes a variable of any type other than long or double, the operation is done in a single, uninterruptible step.

However, for non-volatile variables of type long or double, the operation might be performed in two steps: one for the first 32 bits, and another for the second 32 bits. This means that another thread could potentially see the variable in an inconsistent state, where it has the value from half of one write and half of another. This is why the JLS does not guarantee atomicity for long or double variables unless they are declared volatile.
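
The sketch below illustrates the hazard (the class and field names are hypothetical). Note that on most modern 64-bit JVMs, long reads and writes happen to be atomic in practice, so this program may run forever without ever detecting a torn value; the point is only that the JLS does not forbid it for non-volatile long fields:

Java
// Hypothetical sketch: a writer alternates between two long values while a
// reader checks whether it ever observes a "torn" mix of the two writes.
public class LongTearingDemo {

    // Not volatile: the JLS permits reads and writes of this field to be
    // split into two 32-bit halves.
    private static long value = 0L;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            while (true) {
                value = 0L;   // all 64 bits zero
                value = -1L;  // all 64 bits one
            }
        });
        writer.setDaemon(true);
        writer.start();

        while (true) {
            long observed = value;
            if (observed != 0L && observed != -1L) {
                // Half of one write combined with half of the other.
                System.out.println("Torn read observed: " + observed);
                return;
            }
        }
    }
}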

This guarantee means that even if multiple threads modify a variable concurrently and without synchronization, reading a variable (other than a long or double) is guaranteed to return a value that was stored into that variable by some thread. Because each read or write is atomic, any read operation will retrieve a valid value that was written by some thread, not a mix of values from different threads.

An operation is considered atomic if it appears to the rest of the system to occur instantaneously. Atomicity is a guarantee of isolation from concurrent processes. However, compound operations, which involve multiple steps, are not atomic and require synchronization to ensure data consistency.
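
For instance, a check-then-act sequence is a compound operation (this is a hypothetical sketch, not code from this post’s source): even if each step were safe on its own, the combination is not atomic without a lock.

Java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a compound (check-then-act) operation.
public class CheckThenAct {

    // Note: HashMap itself is not thread-safe; it is used here only to show
    // that the two-step check-then-act sequence needs to be one critical section.
    private final Map<String, String> cache = new HashMap<>();

    // Broken without synchronization: another thread can insert the same key
    // between the containsKey check and the put, so one value is silently lost.
    public void putIfAbsentBroken(String key, String value) {
        if (!cache.containsKey(key)) {   // step 1: check
            cache.put(key, value);       // step 2: act
        }
    }

    // One fix: make the whole compound operation a single critical section.
    public synchronized void putIfAbsentSafe(String key, String value) {
        if (!cache.containsKey(key)) {
            cache.put(key, value);
        }
    }
}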

The Importance of Synchronization When Dealing with Shared Mutable Data

Java
// Broken! - How long would you expect this program to run?
import java.util.concurrent.TimeUnit;

public class StopThread {

    private static boolean stopRequested;

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(() -> {
            int i = 0;
            // Without synchronization, this loop may never observe the
            // main thread's write to stopRequested.
            while (!stopRequested)
                i++;
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        stopRequested = true;
    }
}

The program above illustrates why synchronization matters when dealing with shared mutable data across multiple threads, even if the individual operations on the data are atomic. It is a common multithreading pitfall.

The stopRequested variable is shared between the main thread and the background thread. The main thread sets stopRequested to true after a delay, and the background thread continuously checks the value of stopRequested in a loop. If stopRequested is true, the background thread should stop looping.

However, without synchronization, there’s no guarantee that the background thread will ever see the change to stopRequested made by the main thread. This is due to the Java Memory Model’s rules about how threads can cache variables and when they have to refresh those caches from main memory. In this case, the background thread could cache the value of stopRequested and never refresh it, meaning it would never see the change and would loop forever.

The Thread.stop method in Java was used to stop a thread, but it has been deprecated because it’s inherently unsafe. The reason is that it causes the thread to stop in the middle of whatever operation it was performing, which can lead to data corruption if the operation was modifying shared data.

Instead of using Thread.stop, a safer way to stop a thread is to use a boolean flag. The idea is to have the thread periodically check this flag and stop itself if the flag is set to true. This method is safe because the thread can finish its current operation before stopping, preventing data corruption.

However, even though reading and writing a boolean field is atomic (i.e., it happens in a single, uninterruptible step), it’s important to still use synchronization when accessing the field. This is because without synchronization, there’s no guarantee that changes to the boolean field made by one thread will be immediately visible to other threads due to caching. Synchronization ensures that when a thread reads the field, it sees the most recent value written by any thread.

So, even though the operations on the boolean field are atomic, dispensing with synchronization can lead to issues with visibility of changes across threads, which can result in bugs that are hard to detect and fix.
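
One way to provide that synchronization, along the lines of the approach in Effective Java, is to route every read and write of the flag through synchronized accessor methods (a sketch; the next section shows the lighter-weight volatile alternative):

Java
import java.util.concurrent.TimeUnit;

// Cooperative thread termination with synchronized accessors (sketch)
public class StopThread {

    private static boolean stopRequested;

    // Both the write and the read are synchronized, so the background thread
    // is guaranteed to see the main thread's update to the flag.
    private static synchronized void requestStop() {
        stopRequested = true;
    }

    private static synchronized boolean stopRequested() {
        return stopRequested;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(() -> {
            int i = 0;
            while (!stopRequested())
                i++;
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        requestStop();
    }
}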

Volatile Variables

The volatile keyword in Java is used to mark a variable as being stored in main memory, rather than in the CPU cache. This means that every read of a volatile variable will be read from the computer’s main memory, and not from the CPU cache, and that every write to a volatile variable will be written to main memory, and not just to the CPU cache. However, volatile does not provide any mutual exclusion, which is why it’s only suitable for this specific case where the shared variable is written by one thread and read by others, but not updated by multiple threads concurrently.

Java
// Cooperative thread termination with a volatile field
import java.util.concurrent.TimeUnit;

public class StopThread {

    private static volatile boolean stopRequested;

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(() -> {
            int i = 0;
            // Reads of a volatile field always see the most recent write.
            while (!stopRequested)
                i++;
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        stopRequested = true;
    }
}

In this code, stopRequested is a variable shared between the main thread and the background thread. The main thread sets stopRequested to true, and the background thread keeps checking the value of stopRequested. Once stopRequested is true, the background thread stops looping.

By declaring stopRequested as volatile, you’re ensuring that the background thread always sees the latest value. When the main thread sets stopRequested to true, this change is immediately made in main memory, and not just in the CPU cache. So, when the background thread checks the value of stopRequested, it sees the change immediately.

This is why the volatile keyword can be used as an alternative to synchronized for simple communication between threads like this. However, it’s important to note that volatile does not perform any mutual exclusion (i.e., it does not prevent multiple threads from accessing or modifying the variable at the same time). Therefore, it’s only suitable for cases where atomicity of compound operations is not required.

Using Volatile with Compound Operations

As noted above, an operation (or a set of operations) is atomic if it appears to the rest of the system to occur instantaneously: it is performed as a single unit of work, with a guarantee of isolation from concurrent processes and no possibility of interference from other operations.

Compound operations involve multiple steps. An example of a compound operation could be incrementing a counter, which involves three steps: reading the old value, incrementing it, and writing the new value back.

The volatile keyword in Java ensures that a variable is read or written directly from/to main memory, and not from/to the CPU cache. This guarantees visibility, meaning that changes made by one thread to a volatile variable are immediately visible to other threads.

You need to be careful when using volatile with compound operations. Consider the following program:

Java
// Broken - requires synchronization!
private static volatile int nextSerialNumber = 0;

public static int generateSerialNumber() {
    return nextSerialNumber++;
}

The generateSerialNumber method is intended to generate a unique serial number each time it’s called. It does this by incrementing a static volatile variable, nextSerialNumber, each time the method is called.

However, the increment operation (nextSerialNumber++) is not atomic, even though nextSerialNumber is declared as volatile. The increment operation involves three steps:

  1. Read the current value of nextSerialNumber.
  2. Add one to the current value.
  3. Write the new value back to nextSerialNumber.

Even though each of these individual operations is atomic, the entire operation as a whole is not. Without synchronization, it’s possible for two threads to both read the value of nextSerialNumber, increment it, and write it back, effectively causing one of the increments to be lost. This is because the second write operation would overwrite the first, resulting in nextSerialNumber only being incremented once instead of twice.

This is why the method won’t work properly without synchronization, even though nextSerialNumber is volatile. The volatile keyword ensures that changes to nextSerialNumber are immediately visible to all threads, but it doesn’t provide any mutual exclusion.
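
To see the lost updates in practice, you could hammer the broken method from several threads and count how many distinct values come back (a hypothetical test harness; the broken field and method are reproduced inside it):

Java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical harness demonstrating lost updates with the broken method.
public class SerialNumberTest {

    // Broken - requires synchronization!
    private static volatile int nextSerialNumber = 0;

    public static int generateSerialNumber() {
        return nextSerialNumber++;
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Integer> seen = ConcurrentHashMap.newKeySet();
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    // If two threads interleave the read-increment-write steps,
                    // they can return the same "unique" serial number.
                    if (!seen.add(generateSerialNumber())) {
                        System.out.println("Duplicate serial number detected!");
                    }
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        System.out.println("Distinct serial numbers: " + seen.size()
                + " (expected " + (threads.length * 100_000) + ")");
    }
}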

To fix this issue, you could use the synchronized keyword to ensure that the entire increment operation is atomic:

Java
private static int nextSerialNumber = 0;

public static synchronized int generateSerialNumber() {
    return nextSerialNumber++;
}

In this version of the method, the synchronized keyword is used to ensure that only one thread can execute the method at a time. This makes the entire increment operation atomic, ensuring that each call to generateSerialNumber returns a unique value.

To reiterate, volatile does not provide atomicity for compound operations. Consider a scenario where multiple threads are incrementing a shared volatile counter. The increment operation is not atomic, even if the counter is declared volatile, because it involves multiple steps (read, increment, write). Without proper synchronization, one thread might read the counter’s value, then another thread might read the same value before the first thread has a chance to increment it. Both threads then increment their local copy and write it back, effectively causing one of the increments to be lost.

So, while volatile can be used for simple flag variables that are only written by one thread and read by others (like stopRequested in the example above), it’s not suitable for cases where atomicity of compound operations is required. In those cases, you would need to use synchronization to ensure atomicity.

The java.util.concurrent.atomic package provides a class called AtomicLong that’s designed for exactly this kind of situation. An AtomicLong is a long value that can be updated atomically, and it provides methods for operations like incrementing the value. Using AtomicLong for nextSerialNumber would ensure that the increment operation is atomic, and it would likely be more efficient than using synchronized.

Here’s how you might use AtomicLong:

Java
import java.util.concurrent.atomic.AtomicLong;

public class SerialNumberGenerator {

    private static final AtomicLong nextSerialNumber = new AtomicLong();

    public static long generateSerialNumber() {
        // getAndIncrement performs the read-increment-write as one atomic step.
        return nextSerialNumber.getAndIncrement();
    }
}

Effective Java: Concurrency Recommendations

In the book “Effective Java, Third Edition”, author Joshua Bloch provides several recommendations for working with concurrency:

  • Avoid sharing mutable data. Either share immutable data or don’t share at all. In other words, confine mutable data to a single thread.
  • It is acceptable for one thread to modify a data object for a while and then to share it with other threads, synchronizing only the act of sharing the object reference. Such objects are said to be effectively immutable (see the sketch below).
  • When multiple threads share mutable data, each thread that reads or writes the data must perform synchronization. In the absence of synchronization, there is no guarantee that one thread’s changes will be visible to another.
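
As a rough sketch of the “effectively immutable” idea (the class, field, and values here are hypothetical): one thread mutates the object freely, and only the act of publishing the reference is synchronized, after which all threads treat the object as read-only.

Java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of safe publication of an effectively immutable object.
public class ConfigHolder {

    private static List<String> config;           // shared reference

    // Writer thread: build the object freely, then synchronize only the publish.
    public static void buildAndPublish() {
        List<String> localConfig = new ArrayList<>();
        localConfig.add("feature.x=true");        // mutation confined to one thread
        localConfig.add("feature.y=false");
        synchronized (ConfigHolder.class) {
            config = localConfig;                 // the act of sharing the reference
        }
    }

    // Reader threads: synchronize the read of the reference, then treat the
    // object as read-only ("effectively immutable").
    public static List<String> getConfig() {
        synchronized (ConfigHolder.class) {
            return config;
        }
    }
}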

Ref: https://github.com/AaronLiuIsCool/Effective-Java-Third-Edition-Reading-Notes/blob/master/src/chapter11_Concurrency/Concurrency.md
