Operating Systems: Three Easy Pieces --- Concurrent Linked Lists (Note)

We next examine a more complicated structure, the linked list. Let's start with a basic approach once again. For simplicity, we will omit some of the obvious routines that such a list would have and just focus on concurrent insert; we will leave it to the reader to think about lookup, delete, and so forth. Figure 29.7 shows the code for this rudimentary data structure.

As you can see in the code, it simply acquires a lock in the insert routine upon entry and releases it upon exit. One small tricky issue arises if malloc() happens to fail (a rare case); in this case, the code must also release the lock before failing the insert.
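
As a point of reference, here is a rough sketch of what the Figure 29.7 approach looks like. This is reconstructed from memory rather than copied from the book, so the exact names (list_t, node_t, List_Init, List_Insert, List_Lookup) are approximations; the key point is that a single pthread mutex guards the entire insert and lookup routines:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

typedef struct __node_t {
    int              key;
    struct __node_t *next;
} node_t;

typedef struct __list_t {
    node_t          *head;
    pthread_mutex_t  lock;
} list_t;

void List_Init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}

int List_Insert(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);           // lock acquired upon entry
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        pthread_mutex_unlock(&L->lock);     // the easy-to-forget unlock on the rare failure path
        return -1;                          // fail
    }
    new->key  = key;
    new->next = L->head;
    L->head   = new;
    pthread_mutex_unlock(&L->lock);         // lock released upon exit
    return 0;                               // success
}

int List_Lookup(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            pthread_mutex_unlock(&L->lock); // another unlock buried on an early-return path
            return 0;                       // found
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock);
    return -1;                              // not found
}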

This kind of exceptional control flow has been shown to be quite error prone; a recent study of Linux kernel patches found that a huge fraction of bugs (nearly 40%) are found on such rarely-taken code paths (indeed, this observation sparked some of our own research, in which we removed all memory-failing paths from a Linux file system, resulting in a more robust system).

Thus, a challenge: can we rewrite the insert and lookup routines to remain correct under concurrent insert but avoid the case where the failure path also requires us to add the call to unlock?

The answer, in this case, is yes. Specifically, we can rearrange the code a bit so that the lock and release only surround the actual critical section in the insert code, and so that a common exit path is used in the lookup code. The former works because part of the insert actually need not be locked; assuming that malloc() itself is thread-safe, each thread can call into it without worry of race conditions or other concurrency bugs. Only when updating the shared list does a lock need to be held.
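
A sketch of what the rearranged routines might look like follows; again, this is an approximation rather than the book's exact listing. malloc() and the node initialization sit outside the lock, so the critical section shrinks to the list update, and lookup records its result and falls through to a single unlock-and-return path:

int List_Insert(list_t *L, int key) {
    // synchronization not needed for the allocation: malloc() is assumed thread-safe
    node_t *new = malloc(sizeof(node_t));
    if (new == NULL) {
        perror("malloc");
        return -1;                  // fail without ever having touched the lock
    }
    new->key = key;

    // lock only the actual critical section: the update of the shared list
    pthread_mutex_lock(&L->lock);
    new->next = L->head;
    L->head   = new;
    pthread_mutex_unlock(&L->lock);
    return 0;                       // success
}

int List_Lookup(list_t *L, int key) {
    int rv = -1;
    pthread_mutex_lock(&L->lock);
    node_t *curr = L->head;
    while (curr) {
        if (curr->key == key) {
            rv = 0;                 // record success, then fall through to the common exit
            break;
        }
        curr = curr->next;
    }
    pthread_mutex_unlock(&L->lock); // one unlock, on the single exit path
    return rv;
}

Note that the failure path in insert no longer needs to know anything about the lock, which is exactly the property the challenge above asked for.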

 

posted on 2015-11-03 13:42 by Persistence