
The Case of the Vanishing Update: Fixing a Concurrency Bug


The Symptom

Recently, I encountered a strange bug in our production environment. We had a global HashMap that stored configuration data.

When an admin updated a configuration, the change was saved successfully. However, some incoming API requests were still serving the old data, while others were serving the new data.

It was inconsistent. Restarting the server fixed it, but obviously, that’s not a solution.

The Investigation

The code looked something like this (simplified):

public class ConfigManager {
    // The shared resource
    private static Map<String, String> configCache = new HashMap<>();

    public static void updateConfig(Map<String, String> newConfig) {
        configCache = newConfig; // The writer
    }

    public static String getConfig(String key) {
        return configCache.get(key); // The reader
    }
}

I realized this was a classic Java Memory Model issue.

In modern CPUs, each core has its own L1/L2 cache, and the JVM is also free to keep values in registers. When Thread A (the admin) updated the configCache reference, nothing forced that write to propagate: without synchronization, the Java Memory Model makes no guarantee about when (or even whether) other threads will see it. Thread B (the user request) was likely still reading a “stale” copy of the reference from its own core’s cache.

Thread B literally didn’t know the map had changed.
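To convince myself, I put together a tiny standalone reproduction along the same lines (a sketch, not our production code): one thread swaps the map reference while another polls it with no synchronization. Depending on the JVM and hardware, the reader can spin forever without ever noticing the swap.

import java.util.HashMap;
import java.util.Map;

public class VisibilityDemo {
    // Same pattern as production: a plain (non-volatile) shared reference
    private static Map<String, String> config = new HashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // Without a happens-before edge, this loop may never observe
            // the new reference; the JIT may even hoist the field read.
            while (config.get("feature") == null) {
                // busy-wait
            }
            System.out.println("Reader finally saw: " + config.get("feature"));
        });
        reader.start();

        Thread.sleep(100); // give the reader time to start spinning

        Map<String, String> updated = new HashMap<>();
        updated.put("feature", "enabled");
        config = updated; // the "vanishing" update: the reader may never see it
    }
}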

The Fix: volatile

To fix this, I added the volatile keyword to the map variable.

// Added 'volatile' to ensure visibility
private static volatile Map<String, String> configCache = new HashMap<>();

Why this worked

Declaring a variable as volatile establishes a “happens-before” relationship: every write to the volatile field happens-before any subsequent read of that field, so everything the writer did before the assignment is visible to readers afterward.

In practical terms, it tells the JVM: “Do not cache this variable in registers or reorder accesses around it. Every read must see the most recent write.”

Once I deployed this change, the visibility issue vanished. As soon as the admin updated the map, all other threads immediately saw the new reference.
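Here is the fixed class again with the happens-before reasoning spelled out. The comments are my annotations; the code is otherwise the same as above.

import java.util.HashMap;
import java.util.Map;

public class ConfigManager {
    // volatile: a write to this field happens-before every later read of it
    private static volatile Map<String, String> configCache = new HashMap<>();

    public static void updateConfig(Map<String, String> newConfig) {
        // The caller fully populates newConfig first; the volatile write then
        // publishes both the reference and the map contents built before it.
        configCache = newConfig;
    }

    public static String getConfig(String key) {
        // The volatile read returns the most recently published reference,
        // and everything written before that publication is visible here.
        return configCache.get(key);
    }
}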

Retrospective & Next Steps

While volatile fixed the immediate visibility problem, it is important to note that it does not make the HashMap itself thread-safe for concurrent modification: volatile guarantees visibility of the reference, not atomicity of operations on the map.

If we had multiple threads writing to the map at the same time, volatile wouldn’t be enough. In that case, I would look into using ConcurrentHashMap or a ReadWriteLock to ensure data integrity.
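For completeness, here is a minimal sketch of what the multi-writer variant could look like, assuming per-key updates instead of swapping the whole map (a hypothetical ConcurrentConfigManager, not something we shipped):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentConfigManager {
    // Note: this changes the model from "swap the whole map" to per-key updates.
    private static final ConcurrentMap<String, String> configCache = new ConcurrentHashMap<>();

    public static void putConfig(String key, String value) {
        configCache.put(key, value); // individual operations are thread-safe
    }

    public static String getConfig(String key) {
        return configCache.get(key);
    }
}

Keep in mind that ConcurrentHashMap only makes individual operations atomic; compound check-then-act logic would still need compute(), merge(), or an explicit lock.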

But for this specific use case—where we had one writer and many readers—volatile was the lightweight, correct solution.
