Wait, Don’t Touch That!
Mutual Exclusion Locks & JavaScript
Everyone knows that JavaScript runtimes are single-threaded.
The runtime contains a message queue, and each message has a function associated with it. Messages are taken off the queue and the associated function is run to completion. When the stack unwinds and the function completes, the next message is taken from the queue. While a function is executing, no other messages can be processed: execution can't be preempted, and no other work can be done.
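For example, even a timer that has already expired can't run until the currently executing function returns. A quick illustration (not from the original post):

```js
// The zero-delay timer's callback is queued as a new message...
setTimeout(() => console.log('timer fired'), 0);

// ...but nothing can preempt the currently running function.
const start = Date.now();
while (Date.now() - start < 1000) {
  // Busy-wait for one second; the queue sits untouched.
}
console.log('done blocking'); // logs first; 'timer fired' only appears after this returns
```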
This makes JavaScript programming simple, right?
You don’t need to worry about concurrency. No parallel code accessing the same resource. No lock contention. No synchronization issues. Right?
Mostly. But there are a couple of exceptional cases.
Different browser tabs can run in different processes and web workers have introduced threading to the browser. So while it is certainly an edge case, situations do exist where shared resources — such as LocalStorage — may be accessed concurrently.
At Medium we use LocalStorage for tracking read time. Buffering reports there increases the chance that they make it back to the server even when the user navigates away or closes the tab. But we need to avoid multiple tabs sending the same data multiple times.
For a solution we look to a 1985 paper by Leslie Lamport, then at DEC, now at Microsoft Research. It describes a mutual exclusion locking algorithm for use in situations where there are no atomic test-and-set operations. While the paper is written in the context of multiprocessor environments, it describes exactly the problem we have with concurrent access to LocalStorage.
The algorithm looks like this:
1. Set X = i
2. If Y != 0: Restart
3. Set Y = i
4. If X != i:
5. Delay
6. If Y != i: Restart
7. [Has lock, do work]
8. Set Y = 0

Where X and Y are shared memory slots and i is a unique identifier for the client.
In English:
- Always set X to the current client’s unique identifier.
- If Y is not zero then another client has the lock, so restart.
- If Y was zero, then set Y to the client ID.
- If X has changed, there’s a possibility of contention. So…
- Delay for long enough for another client to have seen Y as zero and tried to write it. (We added a random jitter here to minimize the chance of a client being starved.)
- If the client didn’t win Y, then restart the whole process.
- The lock was won, or there was no sign of contention, so now we can do our work.
- Clear Y to allow another client to take the lock.
And in JavaScript:
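What follows is a minimal sketch of the algorithm rather than the exact production code: the localStorage key names, timing constants, and class shape are illustrative choices.

```js
// A minimal sketch of the fast mutex on top of LocalStorage.
class FastMutex {
  constructor(clientId, xKey, yKey) {
    this.clientId = clientId;         // unique per client, e.g. a random string
    this.xKey = xKey || 'mutex_X';
    this.yKey = yKey || 'mutex_Y';
  }

  // Steps #2 and #6 check whether anyone has claimed Y.
  _isLockAvailable() {
    return !localStorage.getItem(this.yKey);
  }

  // Acquire the lock, run `callback`, then release.
  lock(callback) {
    // 1. Set X = i
    localStorage.setItem(this.xKey, this.clientId);

    // 2. If Y != 0: restart.
    if (!this._isLockAvailable()) {
      setTimeout(() => this.lock(callback), 10);
      return;
    }

    // 3. Set Y = i
    localStorage.setItem(this.yKey, this.clientId);

    // 4. If X != i, there may be contention, so...
    if (localStorage.getItem(this.xKey) !== this.clientId) {
      // 5. ...delay (with random jitter) long enough for another client
      //    that saw Y as zero to have written it.
      setTimeout(() => {
        // 6. If Y != i, the other client won; restart.
        if (localStorage.getItem(this.yKey) !== this.clientId) {
          setTimeout(() => this.lock(callback), 10);
        } else {
          this._doWork(callback);
        }
      }, 50 + Math.random() * 50);
      return;
    }

    // No sign of contention; the lock is ours.
    this._doWork(callback);
  }

  _doWork(callback) {
    try {
      callback();                         // 7. Has lock, do work.
    } finally {
      localStorage.removeItem(this.yKey); // 8. Set Y = 0
    }
  }
}

// Usage: each tab creates its own client id and competes for the lock.
const mutex = new FastMutex(Math.random().toString(36).slice(2));
mutex.lock(() => {
  // ...read, send, and delete queued reports here...
});
```

A callback-based lock() keeps the sketch short; a version that returns a promise works just as well.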
The algorithm is fast when there is no contention, but in practice a JavaScript implementation will use setTimeout for the restart (#2, #6) and delay (#5) steps, which imposes latency. That means you want to design your application to minimize the likelihood of contention. In our reporting example, we do frequent, lock-free writes to probabilistically unique keys; a client then claims the lock only while it reads, sends, and deletes the reports.
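A rough sketch of that shape, where the key prefix, report format, and sendToServer() call are all hypothetical:

```js
// Lock-free write: each report lands under its own, probabilistically
// unique key, so concurrent tabs never fight over a slot.
function queueReport(report) {
  const key = 'readTime_' + Date.now() + '_' + Math.random().toString(36).slice(2);
  localStorage.setItem(key, JSON.stringify(report));
}

// Only the flush takes the lock: read, send, and delete the reports.
function flushReports(mutex) {
  mutex.lock(() => {
    const keys = Object.keys(localStorage).filter((k) => k.indexOf('readTime_') === 0);
    const reports = keys.map((k) => JSON.parse(localStorage.getItem(k)));
    sendToServer(reports);                       // hypothetical transport call
    keys.forEach((k) => localStorage.removeItem(k));
  });
}
```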
An additional consideration is a client dying while it holds the lock, which can happen in the case of an error or a closed tab. To solve this, at step #2 you can add a timeout to the claim for Y, in _isLockAvailable().
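One way to do that (the TTL value and the shape of the stored claim are assumptions, not the production behavior) is to store an expiry timestamp alongside the client id and have _isLockAvailable() treat stale claims as free:

```js
// Arbitrary TTL; it just needs to exceed the longest critical section.
const LOCK_TTL_MS = 5000;

// Step #3 would store an expiring claim instead of the bare client id.
function claimY(yKey, clientId) {
  localStorage.setItem(yKey, JSON.stringify({
    clientId: clientId,
    expires: Date.now() + LOCK_TTL_MS,
  }));
}

// Step #2: a claim left behind by a dead tab no longer blocks the lock.
function _isLockAvailable(yKey) {
  const raw = localStorage.getItem(yKey);
  if (!raw) return true;                // nobody holds the lock
  const claim = JSON.parse(raw);
  return claim.expires < Date.now();    // stale claim from a dead client
}
```

The other Y comparisons in the lock would then read the claim's clientId rather than the raw stored value.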
This is definitely an edge case, and while it’s not a problem most will stumble upon, I’d wager that as the web platform progresses the need will become more common. If I’m right, the Fast Mutex will be a useful tool to have in your arsenal.