Explanation of the Linux-Kernel Memory Consistency Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:Author: Alan Stern <[email protected]>
:Created: October 2017

.. Contents

  1. INTRODUCTION
  2. BACKGROUND
  3. A SIMPLE EXAMPLE
  4. A SELECTION OF MEMORY MODELS
  5. ORDERING AND CYCLES
  6. EVENTS
  7. THE PROGRAM ORDER RELATION: po AND po-loc
  8. A WARNING
  9. DEPENDENCY RELATIONS: data, addr, and ctrl
  10. THE READS-FROM RELATION: rf, rfi, and rfe
  11. CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
  12. THE FROM-READS RELATION: fr, fri, and fre
  13. AN OPERATIONAL MODEL
  14. PROPAGATION ORDER RELATION: cumul-fence
  15. DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
  16. SEQUENTIAL CONSISTENCY PER VARIABLE
  17. ATOMIC UPDATES: rmw
  18. THE PRESERVED PROGRAM ORDER RELATION: ppo
  19. AND THEN THERE WAS ALPHA
  20. THE HAPPENS-BEFORE RELATION: hb
  21. THE PROPAGATES-BEFORE RELATION: pb
  22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
  23. SRCU READ-SIDE CRITICAL SECTIONS
  24. LOCKING
  25. PLAIN ACCESSES AND DATA RACES
  26. ODDS AND ENDS


INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure. This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.

This document describes the ideas underlying the LKMM. It is meant
for people who want to understand how the model was designed. It does
not go into the details of the code in the .bell and .cat files;
rather, it explains in English what the code expresses symbolically.

Sections 2 (BACKGROUND) through 5 (ORDERING AND CYCLES) are aimed
toward beginners; they explain what memory consistency models are and
the basic notions shared by all such models. People already familiar
with these concepts can skim or skip over them. Sections 6 (EVENTS)
through 12 (THE FROM-READS RELATION) describe the fundamental
relations used in many models. Starting in Section 13 (AN OPERATIONAL
MODEL), the workings of the LKMM itself are covered.

Warning: The code examples in this document are not written in the
proper format for litmus tests. They don't include a header line, the
initializations are not enclosed in braces, the global variables are
not passed by pointers, and they don't have an "exists" clause at the
end. Converting them to the right format is left as an exercise for
the reader.


BACKGROUND
----------

A memory consistency model (or just memory model, for short) is
something which predicts, given a piece of computer code running on a
particular kind of system, what values may be obtained by the code's
load instructions. The LKMM makes these predictions for code running
as part of the Linux kernel.

In practice, people tend to use memory models the other way around.
That is, given a piece of code and a collection of values specified
for the loads, the model will predict whether it is possible for the
code to run in such a way that the loads will indeed obtain the
specified values. Of course, this is just another way of expressing
the same idea.

For code running on a uniprocessor system, the predictions are easy:
Each load instruction must obtain the value written by the most recent
store instruction accessing the same location (we ignore complicating
factors such as DMA and mixed-size accesses). But on multiprocessor
systems, with multiple CPUs making concurrent accesses to shared
memory locations, things aren't so simple.

Different architectures have differing memory models, and the Linux
kernel supports a variety of architectures. The LKMM has to be fairly
permissive, in the sense that any behavior allowed by one of these
architectures also has to be allowed by the LKMM.


A SIMPLE EXAMPLE
----------------

Here is a simple example to illustrate the basic concepts. Consider
some code running as part of a device driver for an input device. The
driver might contain an interrupt handler which collects data from the
device, stores it in a buffer, and sets a flag to indicate the buffer
is full. Running concurrently on a different CPU might be a part of
the driver code being executed by a process in the midst of a read(2)
system call. This code tests the flag to see whether the buffer is
ready, and if it is, copies the data back to userspace. The buffer
and the flag are memory locations shared between the two CPUs.

We can abstract out the important pieces of the driver code as follows
(the reason for using WRITE_ONCE() and READ_ONCE() instead of simple
assignment statements is discussed later):

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(flag);
		if (r1)
			r2 = READ_ONCE(buf);
	}

Here the P0() function represents the interrupt handler running on one
CPU and P1() represents the read() routine running on another. The
value 1 stored in buf represents input data collected from the device.
Thus, P0 stores the data in buf and then sets flag. Meanwhile, P1
reads flag into the private variable r1, and if it is set, reads the
data from buf into a second private variable r2 for copying to
userspace. (Presumably if flag is not set then the driver will wait a
while and try again.)

This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern. It is typical of memory access patterns in the kernel.

Please note that this example code is a simplified abstraction. Real
buffers are usually larger than a single integer, real device drivers
usually use sleep and wakeup mechanisms rather than polling for I/O
completion, and real code generally doesn't bother to copy values into
private variables before using them. All that is beside the point;
the idea here is simply to illustrate the overall pattern of memory
accesses by the CPUs.

A memory model will predict what values P1 might obtain for its loads
from flag and buf, or equivalently, what values r1 and r2 might end up
with after the code has finished running.

Some predictions are trivial. For instance, no sane memory model would
predict that r1 = 42 or r2 = -7, because neither of those values ever
gets stored in flag or buf.

Some nontrivial predictions are nonetheless quite simple. For
instance, P1 might run entirely before P0 begins, in which case r1 and
r2 will both be 0 at the end. Or P0 might run entirely before P1
begins, in which case r1 and r2 will both be 1.

The interesting predictions concern what might happen when the two
routines run concurrently. One possibility is that P1 runs after P0's
store to buf but before the store to flag. In this case, r1 and r2
will again both be 0. (If P1 had been designed to read buf
unconditionally then we would instead have r1 = 0 and r2 = 1.)

However, the most interesting possibility is where r1 = 1 and r2 = 0.
If this were to occur it would mean the driver contains a bug, because
incorrect data would get sent to the user: 0 instead of 1. As it
happens, the LKMM does predict this outcome can occur, and the example
driver code shown above is indeed buggy.
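
For reference, here is a sketch of this example converted to the
litmus-test format described in the warning in the INTRODUCTION (the
test name is arbitrary, and the "exists" clause asks whether the buggy
r1 = 1, r2 = 0 outcome can occur):

	C MP+driver

	{
	}

	P0(int *buf, int *flag)
	{
		WRITE_ONCE(*buf, 1);
		WRITE_ONCE(*flag, 1);
	}

	P1(int *buf, int *flag)
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(*flag);
		if (r1)
			r2 = READ_ONCE(*buf);
	}

	exists (1:r1=1 /\ 1:r2=0)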


A SELECTION OF MEMORY MODELS
----------------------------

The first widely cited memory model, and the simplest to understand,
is Sequential Consistency. According to this model, systems behave as
if each CPU executed its instructions in order but with unspecified
timing. In other words, the instructions from the various CPUs get
interleaved in a nondeterministic way, always according to some single
global order that agrees with the order of the instructions in the
program source for each CPU. The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.

For the MP example code shown above, Sequential Consistency predicts
that the undesired result r1 = 1, r2 = 0 cannot occur. The reasoning
goes like this:

	Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
	it, as loads can obtain values only from earlier stores.

	P1 loads from flag before loading from buf, since CPUs execute
	their instructions in order.

	P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
	would be 1 since a load obtains its value from the most recent
	store to the same address.

	P0 stores 1 to buf before storing 1 to flag, since it executes
	its instructions in order.

	Since an instruction (in this case, P0's store to flag) cannot
	execute before itself, the specified outcome is impossible.

However, real computer hardware almost never follows the Sequential
Consistency memory model; doing so would rule out too many valuable
performance optimizations. On ARM and PowerPC architectures, for
instance, the MP example code really does sometimes yield r1 = 1 and
r2 = 0.

x86 and SPARC follow yet a different memory model: TSO (Total Store
Ordering). This model predicts that the undesired outcome for the MP
pattern cannot occur, but in other respects it differs from Sequential
Consistency. One example is the Store Buffer (SB) pattern, in which
each CPU stores to its own shared location and then loads from the
other CPU's location:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		r1 = READ_ONCE(x);
	}

Sequential Consistency predicts that the outcome r0 = 0, r1 = 0 is
impossible. (Exercise: Figure out the reasoning.) But TSO allows
this outcome to occur, and in fact it does sometimes occur on x86 and
SPARC systems.
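
For comparison, kernel code typically rules this outcome out by
placing a full memory barrier between each CPU's store and load; a
sketch (smp_mb() is discussed later in this document):

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

With both barriers present, the r0 = 0, r1 = 0 outcome is forbidden,
under TSO and under the LKMM alike.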

The LKMM was inspired by the memory models followed by PowerPC, ARM,
x86, Alpha, and other architectures. However, it is different in
detail from each of them.


ORDERING AND CYCLES
-------------------

Memory models are all about ordering. Often this is temporal ordering
(i.e., the order in which certain events occur) but it doesn't have to
be; consider for example the order of instructions in a program's
source code. We saw above that Sequential Consistency makes an
important assumption that CPUs execute instructions in the same order
as those instructions occur in the code, and there are many other
instances of ordering playing central roles in memory models.

The counterpart to ordering is a cycle. Ordering rules out cycles:
It's not possible to have X ordered before Y, Y ordered before Z, and
Z ordered before X, because this would mean that X is ordered before
itself. The analysis of the MP example under Sequential Consistency
involved just such an impossible cycle:

	W: P0 stores 1 to flag   executes before
	X: P1 loads 1 from flag  executes before
	Y: P1 loads 0 from buf   executes before
	Z: P0 stores 1 to buf    executes before
	W: P0 stores 1 to flag.

In short, if a memory model requires certain accesses to be ordered,
and a certain outcome for the loads in a piece of code can happen only
if those accesses would form a cycle, then the memory model predicts
that outcome cannot occur.

The LKMM is defined largely in terms of cycles, as we will see.


EVENTS
------

The LKMM does not work directly with the C statements that make up
kernel source code. Instead it considers the effects of those
statements in a more abstract form, namely, events. The model
includes three types of events:

	Read events correspond to loads from shared memory, such as
	calls to READ_ONCE(), smp_load_acquire(), or
	rcu_dereference().

	Write events correspond to stores to shared memory, such as
	calls to WRITE_ONCE(), smp_store_release(), or atomic_set().

	Fence events correspond to memory barriers (also known as
	fences), such as calls to smp_rmb() or rcu_read_lock().

These categories are not exclusive; a read or write event can also be
a fence. This happens with functions like smp_load_acquire() or
spin_lock(). However, no single event can be both a read and a write.
Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
correspond to a pair of events: a read followed by a write. (The
write event is omitted for executions where it doesn't occur, such as
a cmpxchg() where the comparison fails.)
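
For example, a short fragment might correspond to events as follows
(an informal sketch; the comments describe the events, which do not
appear in the source code itself):

	WRITE_ONCE(x, 1);	/* one write event */
	smp_mb();		/* one fence event */
	r1 = READ_ONCE(y);	/* one read event */
	atomic_inc(&z);		/* a read event followed by a write event */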

Other parts of the code, those which do not involve interaction with
shared memory, do not give rise to events. Thus, arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model. They only affect the model's predictions indirectly.
For example, an arithmetic computation might determine the value that
gets stored to a shared memory location (or in the case of an array
index, the address where the value gets stored), but the memory model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.

Events in the LKMM can be linked by various relations, which we will
describe in the following sections. The memory model requires certain
of these relations to be orderings, that is, it requires them not to
have any cycles.


THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The most important relation between events is program order (po). You
can think of it as the order in which statements occur in the source
code after branches are taken into account and loops have been
unrolled. A better description might be the order in which
instructions are presented to a CPU's execution unit. Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
before Y in the instruction stream.

This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po. Also, it is by definition
an ordering so it cannot have any cycles.

po-loc is a sub-relation of po. It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).
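
For instance, in this fragment (a sketch), each access is po-after the
previous one, and the two accesses to x are additionally linked by
po-loc:

	WRITE_ONCE(x, 1);	/* A */
	r1 = READ_ONCE(y);	/* B: A ->po B */
	r2 = READ_ONCE(x);	/* C: B ->po C and A ->po-loc C */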

Although this may seem straightforward, there is one subtle aspect to
program order we need to explain. The LKMM was inspired by low-level
architectural memory models which describe the behavior of machine
code, and it retains their outlook to a considerable extent. The
read, write, and fence events used by the model are close in spirit to
individual machine instructions. Nevertheless, the LKMM describes
kernel code written in C, and the mapping from C to machine code can
be extremely complex.

Optimizing compilers have great freedom in the way they translate
source code to object code. They are allowed to apply transformations
that add memory accesses, eliminate accesses, combine them, split them
into pieces, or move them around. The use of READ_ONCE(), WRITE_ONCE(),
or one of the other atomic or synchronization primitives prevents a
large number of compiler optimizations. In particular, it is guaranteed
that the compiler will not remove such accesses from the generated code
(unless it can prove the accesses will never be executed), it will not
change the order in which they occur in the code (within limits imposed
by the C standard), and it will not introduce extraneous accesses.

The MP and SB examples above used READ_ONCE() and WRITE_ONCE() rather
than ordinary memory accesses. Thanks to this usage, we can be certain
that in the MP example, the compiler won't reorder P0's write event to
buf and P0's write event to flag, and similarly for the other shared
memory accesses in the examples.

Since private variables are not shared between CPUs, they can be
accessed normally without READ_ONCE() or WRITE_ONCE(). In fact, they
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the convention
that these variables have names starting with the letter 'r').


A WARNING
---------

The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
not perfect, and under some circumstances it is possible for the
compiler to undermine the memory model. Here is an example. Suppose
both branches of an "if" statement store the same value to the same
location:

	r1 = READ_ONCE(x);
	if (r1) {
		WRITE_ONCE(y, 2);
		... /* do something */
	} else {
		WRITE_ONCE(y, 2);
		... /* do something else */
	}

For this code, the LKMM predicts that the load from x will always be
executed before either of the stores to y. However, a compiler could
lift the stores out of the conditional, transforming the code into
something resembling:

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, 2);
	if (r1) {
		... /* do something */
	} else {
		... /* do something else */
	}

Given this version of the code, the LKMM would predict that the load
from x could be executed after the store to y. Thus, the memory
model's original prediction could be invalidated by the compiler.

Another issue arises from the fact that in C, arguments to many
operators and function calls can be evaluated in any order. For
example:

	r1 = f(5) + g(6);

The object code might call f(5) either before or after g(6); the
memory model cannot assume there is a fixed program order relation
between them. (In fact, if the function calls are inlined then the
compiler might even interleave their object code.)


DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

We say that two events are linked by a dependency relation when the
execution of the second event depends in some way on a value obtained
from memory by the first. The first event must be a read, and the
value it obtains must somehow affect what the second event does.
There are three kinds of dependencies: data, address (addr), and
control (ctrl).

A read and a write event are linked by a data dependency if the value
obtained by the read affects the value stored by the write. As a very
simple example:

	int x, y;

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, r1 + 5);

The value stored by the WRITE_ONCE obviously depends on the value
loaded by the READ_ONCE. Such dependencies can wind through
arbitrarily complicated computations, and a write can depend on the
values of multiple reads.

A read event and another memory access event are linked by an address
dependency if the value obtained by the read affects the location
accessed by the other event. The second event can be either a read or
a write. Here's another simple example:

	int a[20];
	int i;

	r1 = READ_ONCE(i);
	r2 = READ_ONCE(a[r1]);

Here the location accessed by the second READ_ONCE() depends on the
index value loaded by the first. Pointer indirection also gives rise
to address dependencies, since the address of a location accessed
through a pointer will depend on the value read earlier from that
pointer.

Finally, a read event X and a write event Y are linked by a control
dependency if Y syntactically lies within an arm of an if statement and
X affects the evaluation of the if condition via a data or address
dependency (or similarly for a switch statement). Simple example:

	int x, y;

	r1 = READ_ONCE(x);
	if (r1)
		WRITE_ONCE(y, 1984);

Execution of the WRITE_ONCE() is controlled by a conditional expression
which depends on the value obtained by the READ_ONCE(); hence there is
a control dependency from the load to the store.

It should be pretty obvious that events can only depend on reads that
come earlier in program order. Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X. It wouldn't make sense for a computation to depend
somehow on a value that doesn't get loaded from shared memory until
later in the code!

Here's a trick question: When is a dependency not a dependency? Answer:
When it is purely syntactic rather than semantic. We say a dependency
between two accesses is purely syntactic if the second access doesn't
actually depend on the result of the first. Here is a trivial example:

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, r1 * 0);

There appears to be a data dependency from the load of x to the store
of y, since the value to be stored is computed from the value that was
loaded. But in fact, the value stored does not really depend on
anything since it will always be 0. Thus the data dependency is only
syntactic (it appears to exist in the code) but not semantic (the
second access will always be the same, regardless of the value of the
first access). Given code like this, a compiler could simply discard
the value returned by the load from x, which would certainly destroy
any dependency. (The compiler is not permitted to eliminate entirely
the load generated for a READ_ONCE() -- that's one of the nice
properties of READ_ONCE() -- but it is allowed to ignore the load's
value.)

It's natural to object that no one in their right mind would write
code like the above. However, macro expansions can easily give rise
to this sort of thing, in ways that often are not apparent to the
programmer.
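
For example, given a configuration macro that happens to expand to 0
(a hypothetical illustration), the code at the point of use appears to
carry a real dependency:

	#define SCALE	0	/* hypothetical configuration value */

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, r1 * SCALE);

After macro expansion this is exactly the purely syntactic r1 * 0
dependency shown above.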

Another mechanism that can lead to purely syntactic dependencies is
related to the notion of "undefined behavior". Certain program
behaviors are called "undefined" in the C language specification,
which means that when they occur there are no guarantees at all about
the outcome. Consider the following example:

	int a[1];
	int i;

	r1 = READ_ONCE(i);
	r2 = READ_ONCE(a[r1]);

Access beyond the end or before the beginning of an array is one kind
of undefined behavior. Therefore the compiler doesn't have to worry
about what will happen if r1 is nonzero, and it can assume that r1
will always be zero regardless of the value actually loaded from i.
(If the assumption turns out to be wrong the resulting behavior will
be undefined anyway, so the compiler doesn't care!) Thus the value
from the load can be discarded, breaking the address dependency.

The LKMM is unaware that purely syntactic dependencies are different
from semantic dependencies and therefore mistakenly predicts that the
accesses in the two examples above will be ordered. This is another
example of how the compiler can undermine the memory model. Be warned.


THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value that was stored by the
write. In colloquial terms, the load "reads from" the store. We
write W ->rf R to indicate that the load R reads from the store W. We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).
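
To illustrate the distinction (a sketch):

	int x = 0;

	P0()
	{
		int r1;

		WRITE_ONCE(x, 1);	/* W */
		r1 = READ_ONCE(x);	/* R */
	}

	P1()
	{
		int r2;

		r2 = READ_ONCE(x);	/* R' */
	}

If r1 = 1 then W ->rf R, and since both events occur on P0 the link
is an instance of rfi. If r2 = 1 then W ->rfe R', because the two
events occur on different CPUs.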

For our purposes, a memory location's initial value is treated as
though it had been written there by an imaginary initial store that
executes on a separate CPU before the main program runs.

Usage of the rf relation implicitly assumes that loads will always
read from a single store. It doesn't apply properly in the presence
of load-tearing, where a load obtains some of its bits from one store
and some of them from another store. Fortunately, use of READ_ONCE()
and WRITE_ONCE() will prevent load-tearing; it's not possible to have:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 0x1234);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

and end up with r1 = 0x1200 (partly from x's initial value and partly
from the value stored by P0).

On the other hand, load-tearing is unavoidable when mixed-size
accesses are used. Consider this example:

	union {
		u32 w;
		u16 h[2];
	} x;

	P0()
	{
		WRITE_ONCE(x.h[0], 0x1234);
		WRITE_ONCE(x.h[1], 0x5678);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x.w);
	}

If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
from both of P0's stores. It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so. It requires all accesses to be properly aligned and
of the location's actual size.


CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence is a general principle requiring that in a
multi-processor system, the CPUs must share a consistent view of the
memory contents. Specifically, it requires that for each location in
shared memory, the stores to that location must form a single global
ordering which all the CPUs agree on (the coherence order), and this
ordering must be consistent with the program order for accesses to
that location.

To put it another way, for any variable x, the coherence order (co) of
the stores to x is simply the order in which the stores overwrite one
another. The imaginary store which establishes x's initial value
comes first in the coherence order; the store which directly
overwrites the initial value comes second; the store which overwrites
that value comes third, and so on.

You can think of the coherence order as being the order in which the
stores reach x's location in memory (or if you prefer a more
hardware-centric view, the order in which the stores get written to
x's cache line). We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.

Coherence order is required to be consistent with program order. This
requirement takes the form of four coherency rules:

	Write-write coherence: If W ->po-loc W' (i.e., W comes before
	W' in program order and they access the same location), where W
	and W' are two stores, then W ->co W'.

	Write-read coherence: If W ->po-loc R, where W is a store and R
	is a load, then R must read from W or from some other store
	which comes after W in the coherence order.

	Read-write coherence: If R ->po-loc W, where R is a load and W
	is a store, then the store which R reads from must come before
	W in the coherence order.

	Read-read coherence: If R ->po-loc R', where R and R' are two
	loads, then either they read from the same store or else the
	store read by R comes before the store read by R' in the
	coherence order.

This is sometimes referred to as sequential consistency per variable,
because it means that the accesses to any single memory location obey
the rules of the Sequential Consistency memory model. (According to
Wikipedia, sequential consistency per variable and cache coherence
mean the same thing except that cache coherence includes an extra
requirement that every store eventually becomes visible to every CPU.)

Any reasonable memory model will include cache coherence. Indeed, our
expectation of cache coherence is so deeply ingrained that violations
of its requirements look more like hardware bugs than programming
errors:

	int x;

	P0()
	{
		WRITE_ONCE(x, 17);
		WRITE_ONCE(x, 23);
	}

If the final value stored in x after this code ran was 17, you would
think your computer was broken. It would be a violation of the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in x's coherence order and
thus must overwrite the store of 17.

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 666);
	}

If r1 = 666 at the end, this would violate the read-write coherence
rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
program order, so it must not read from that store but rather from one
coming earlier in the coherence order (in this case, x's initial
value).

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 5);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
imaginary store which establishes x's initial value) at the end, this
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so it must not read from a store that
comes later in the coherence order.

(As a minor curiosity, if this code had used normal loads instead of
READ_ONCE() in P1, on Itanium it sometimes could end up with r1 = 5
and r2 = 0! This results from parallel execution of the operations
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another motivation for using READ_ONCE() when accessing shared memory
locations.)

Just like the po relation, co is inherently an ordering -- it is not
possible for a store to directly or indirectly overwrite itself! And
just like with the rf relation, we distinguish between stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).

On the other hand, stores to different memory locations are never
related by co, just as instructions on different CPUs are never
related by po. Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.


THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok. It describes the situation where a load reads a value that gets
overwritten by a store. In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W, or
equivalently, when R reads from a store which comes earlier than W in
the coherence order.

For example:

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 2);
	}

The value loaded from x will be 0 (assuming cache coherence!), and it
gets overwritten by the value 2. Thus there is an fr link from the
READ_ONCE() to the WRITE_ONCE(). If the code contained any later
stores to x, there would also be fr links from the READ_ONCE() to
them.

As with rf, rfi, and rfe, we subdivide the fr relation into fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).

Note that the fr relation is determined entirely by the rf and co
relations; it is not independent. Given a read event R and a write
event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W. In symbols,

	(R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).


AN OPERATIONAL MODEL
--------------------

The LKMM is based on various operational memory models, meaning that
the models arise from an abstract view of how a computer system
operates. Here are the main ideas, as incorporated into the LKMM.

The system as a whole is divided into the CPUs and a memory subsystem.
The CPUs are responsible for executing instructions (not necessarily
in program order), and they communicate with the memory subsystem.
For the most part, executing an instruction requires a CPU to perform
only internal operations. However, loads, stores, and fences involve
more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location. The memory subsystem
propagates the store to all the other CPUs as well as to RAM. (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.) The memory subsystem also determines where the
store falls in the location's coherence order. In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU C.

When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order. If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R. Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory. The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.
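
For instance, in a fragment like the following (a sketch), the load
may be handled either way: if the po-earlier store has not yet been
executed, its value is forwarded to the load; otherwise the load is
satisfied from memory, to which the store has already propagated on
its own CPU:

	WRITE_ONCE(x, 1);
	r1 = READ_ONCE(x);	/* r1 = 1, by forwarding or from memory */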

(In fact, the picture needs to be a little more complicated than this.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache. A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed. On
most architectures, the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model. But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior. We will
discuss this in more detail later.)

Note that load instructions may be executed speculatively and may be
restarted under certain circumstances. The memory model ignores these
premature executions; we simply say that the load executes at the
final time it is forwarded or satisfied.

Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence. However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order. Exactly which instructions are ordered depends on the
type of fence:

	Strong fences, including smp_mb() and synchronize_rcu(), force
	the CPU to execute all po-earlier instructions before any
	po-later instructions;

	smp_rmb() forces the CPU to execute all po-earlier loads
	before any po-later loads;

	smp_wmb() forces the CPU to execute all po-earlier stores
	before any po-later stores;

	Acquire fences, such as smp_load_acquire(), force the CPU to
	execute the load associated with the fence (e.g., the load
	part of an smp_load_acquire()) before any po-later
	instructions;

	Release fences, such as smp_store_release(), force the CPU to
	execute all po-earlier instructions before the store
	associated with the fence (e.g., the store part of an
	smp_store_release()).

Second, some types of fence affect the way the memory subsystem
propagates stores. When a fence instruction is executed on CPU C:

	For each other CPU C', smp_wmb() forces all po-earlier stores
	on C to propagate to C' before any po-later stores do.

	For each other CPU C', any store which propagates to C before
	a release fence is executed (including all po-earlier
	stores executed on C) is forced to propagate to C' before the
	store associated with the release fence does.

	Any store which propagates to C before a strong fence is
	executed (including all po-earlier stores on C) is forced to
	propagate to all other CPUs before any instructions po-after
	the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence. We describe this property by saying that release fences and
strong fences are A-cumulative. By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on C before the fence (i.e., those which precede the fence in
program order).
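
To illustrate A-cumulativity, consider the following sketch (a variant
of the well-known "WRC" pattern; the acquire load in P2 serves only to
keep P2's own loads in order):

	int x = 0, y = 0;

	P0()
	{
		WRITE_ONCE(x, 1);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
		smp_store_release(&y, 1);
	}

	P2()
	{
		int r2, r3;

		r2 = smp_load_acquire(&y);
		r3 = READ_ONCE(x);
	}

If r1 = 1 and r2 = 1 then r3 must be 1 as well: the store to x
propagated to P1 before the release fence executed, so A-cumulativity
forces it to propagate to P2 before the store to y does.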

rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
other properties which we discuss later.


PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative. The cumul-fence relation is
defined to link memory access events E and F whenever:

	E and F are both stores on the same CPU and an smp_wmb() fence
	event occurs between them in program order; or

	F is a release fence and some X comes before F in program order,
	where either X = E or else E ->rf X; or

	A strong fence event occurs between some X and F in program
	order, where either X = E or else E ->rf X.

The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU
before W' does. However, for different CPUs C and C', it does not
require W to propagate to C before W' propagates to C'.
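
For instance, the first clause is just the smp_wmb() requirement from
the operational model (a sketch):

	WRITE_ONCE(x, 1);	/* E */
	smp_wmb();
	WRITE_ONCE(y, 1);	/* F: E ->cumul-fence F */

Here the store to x must propagate to each CPU before the store to y
does.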


DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

The LKMM is derived from the restrictions imposed by the design
outlined above. These restrictions involve the necessity of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is, among other things.

The formal version of the LKMM is defined by six requirements, or
axioms:

	Sequential consistency per variable: This requires that the
	system obey the four coherency rules.

	Atomicity: This requires that atomic read-modify-write
	operations really are atomic, that is, no other stores can
	sneak into the middle of such an update.

	Happens-before: This requires that certain instructions are
	executed in a specific order.

	Propagation: This requires that certain stores propagate to
	CPUs and to RAM in a specific order.

	Rcu: This requires that RCU read-side critical sections and
	grace periods obey the rules of RCU, in particular, the
	Grace-Period Guarantee.

	Plain-coherence: This requires that plain memory accesses
	(those not using READ_ONCE(), WRITE_ONCE(), etc.) must obey
	the operational model's rules regarding cache coherence.

The first and second are quite common; they can be found in many
memory models (such as those for C11/C++11). The "happens-before" and
"propagation" axioms have analogs in other memory models as well. The
"rcu" and "plain-coherence" axioms are specific to the LKMM.

Each of these axioms is discussed below.


SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

According to the principle of cache coherence, the stores to any fixed
shared location in memory form a global ordering. We can imagine
inserting the loads from that location into this ordering, by placing
each load between the store that it reads from and the following
store. This leaves the relative positions of loads that read from the
same store unspecified; let's say they are inserted in program order,
first for CPU 0, then CPU 1, etc.

You can check that the four coherency rules imply that the rf, co, fr,
and po-loc relations agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering. The LKMM's
"coherence" axiom expresses this by requiring the union of these
relations not to have any cycles. This means it must not be possible
to find events

	X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc. This has to
hold if the accesses to the fixed memory location can be ordered as
cache coherence demands.

Although it is not obvious, it can be shown that the converse is also
true: This LKMM axiom implies that the four coherency rules are
obeyed.


ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic? It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation. In particular, if two CPUs perform
atomic_inc(&x) concurrently, it must be guaranteed that the final
value of x will be the initial value plus two. We should never have
the following sequence of events:

	CPU 0 loads x obtaining 13;
	CPU 1 loads x obtaining 13;
	CPU 0 stores 14 to x;
	CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).

In this example, CPU 0's increment effectively gets lost because it
occurs in between CPU 1's load and store. To put it another way, the
problem is that the position of CPU 0's store in x's coherence order
is between the store that CPU 1 reads from and the store that CPU 1
performs.

The same analysis applies to all atomic update operations. Therefore,
to enforce atomicity the LKMM requires that atomic updates follow this
rule: Whenever R and W are the read and write events composing an
atomic read-modify-write and W' is the write event which R reads from,
there must not be any stores coming between W' and W in the coherence
order. Equivalently,

	(R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update. This is what the LKMM's "atomic" axiom says.

Atomic rmw updates play one more role in the LKMM: They can form "rmw
sequences". An rmw sequence is simply a bunch of atomic updates where
each update reads from the previous one. Written using events, it
looks like this:

	Z0 ->rf Y1 ->rmw Z1 ->rf ... ->rf Yn ->rmw Zn,

where Z0 is some store event and n can be any number (even 0, in the
degenerate case). We write this relation as: Z0 ->rmw-sequence Zn.
Note that this implies Z0 and Zn are stores to the same variable.
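
For example (a sketch, taking x to be an atomic_t):

	P0()
	{
		atomic_set(&x, 1);	/* Z0 */
	}

	P1()
	{
		atomic_inc(&x);		/* Y1 ->rmw Z1 */
	}

	P2()
	{
		atomic_inc(&x);		/* Y2 ->rmw Z2 */
	}

In the executions where P1's increment reads from Z0 and P2's
increment reads from Z1 (so x ends up equal to 3), we have
Z0 ->rf Y1 ->rmw Z1 ->rf Y2 ->rmw Z2, and hence Z0 ->rmw-sequence Z2.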

Rmw sequences have a special property in the LKMM: They can extend the
cumul-fence relation. That is, if we have:

	U ->cumul-fence X ->rmw-sequence Y

then also U ->cumul-fence Y. Thinking about this in terms of the
operational model, U ->cumul-fence X says that the store U propagates
to each CPU before the store X does. Then the fact that X and Y are
linked by an rmw sequence means that U also propagates to each CPU
before Y does. In an analogous way, rmw sequences can also extend
the w-post-bounded relation defined below in the PLAIN ACCESSES AND
DATA RACES section.

(The notion of rmw sequences in the LKMM is similar to, but not quite
the same as, that of release sequences in the C11 memory model. They
were added to the LKMM to fix an obscure bug; without them, atomic
updates with full-barrier semantics did not always guarantee ordering
at least as strong as atomic updates with release-barrier semantics.)


THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obliged to execute two
instructions in program order. We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

The operational model already includes a description of one such
situation: Fences are a source of ppo links. Suppose X and Y are
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of the following hold:

	A strong (smp_mb() or synchronize_rcu()) fence occurs between
	X and Y;

	X and Y are both stores and an smp_wmb() fence occurs between
	them;

	X and Y are both loads and an smp_rmb() fence occurs between
	them;

	X is also an acquire fence, such as smp_load_acquire();

	Y is also a release fence, such as smp_store_release().

Another possibility, not mentioned earlier but discussed in the next
section, is:

	X and Y are both loads, X ->addr Y (i.e., there is an address
	dependency from X to Y), and X is a READ_ONCE() or an atomic
	access.

Dependencies can also cause instructions to be executed in program
order. This is uncontroversial when the second instruction is a
store; either a data, address, or control dependency from a load R to
a store W will force the CPU to execute R before W. This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

Dependencies to load instructions are more problematic. To begin with,
there is no such thing as a data dependency to a load. Next, a CPU
has no reason to respect a control dependency to a load, because it
can always satisfy the second load speculatively before the first, and
then ignore the result if it turns out that the second load shouldn't
be executed after all. And lastly, the real difficulties begin when
we consider address dependencies to loads.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is. However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details). The kernel includes a workaround for this
problem when the loads come from READ_ONCE(), and therefore the LKMM
includes address dependencies to loads in the ppo relation.

On the other hand, dependencies can indirectly affect the ordering of
two loads. This happens when there is a dependency from a load to a
store and a second, po-later load reads from that store:

	R ->dep W ->rfi R',

where the dep link can be either an address or a data dependency. In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'. But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location. However, if there is merely a control dependency between R
and W then the CPU can speculatively forward W to R' before executing
R; if the speculation turns out to be wrong then the CPU merely has to
restart or abandon R'.
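
In code, this situation might look as follows (a sketch with a data
dependency):

	r1 = READ_ONCE(x);	/* R */
	WRITE_ONCE(y, r1 + 1);	/* W: R ->data W */
	r2 = READ_ONCE(y);	/* R': W ->rfi R' if R' reads from W */

The CPU may forward W's value to R', allowing R' to execute before W;
but R' cannot execute before R.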

(In theory, a CPU might forward a store to a load when it runs across
an address dependency like this:

	r1 = READ_ONCE(ptr);
	WRITE_ONCE(*r1, 17);
	r2 = READ_ONCE(*r1);

because it could tell that the store and the second load access the
same location even before it knows what the location's address is.
However, none of the architectures supported by the Linux kernel do
this.)

Two memory accesses of the same location must always be executed in
program order if the second access is a store. Thus, if we have

	R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule. Similarly, if we had

	W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' before W in the coherence order. It would effectively cause W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early ARMv8 memory model, now obsolete, proposed
allowing out-of-order writes like this to occur. The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)


AND THEN THERE WAS ALPHA
------------------------

As mentioned above, the Alpha architecture is unique in that it does
not appear to respect address dependencies to loads. This means that
code such as the following:

	int x = 0;
	int y = -1;
	int *ptr = &y;

	P0()
	{
		WRITE_ONCE(x, 1);
		smp_wmb();
		WRITE_ONCE(ptr, &x);
	}

	P1()
	{
		int *r1;
		int r2;

		r1 = ptr;
		r2 = READ_ONCE(*r1);
	}

can malfunction on Alpha systems (notice that P1 uses an ordinary load
to read ptr instead of READ_ONCE()). It is quite possible that r1 = &x
and r2 = 0 at the end, in spite of the address dependency.

At first glance this doesn't seem to make sense. We know that the
smp_wmb() forces P0's store to x to propagate to P1 before the store
to ptr does. And since P1 can't execute its second load
until it knows what location to load from, i.e., after executing its
first load, the value x = 1 must have propagated to P1 before the
second load executed. So why doesn't r2 end up equal to 1?

The answer lies in the Alpha's split local caches. Although the two
stores do reach P1's local cache in the proper order, it can happen
that the first store is processed by a busy part of the cache while
the second store is processed by an idle part. As a result, the x = 1
value may not become available for P1's CPU to read until after the
ptr = &x value does, leading to the undesirable result above. The
final effect is that even though the two loads really are executed in
program order, it appears that they aren't.

This could not have happened if the local cache had processed the
incoming stores in FIFO order. By contrast, other architectures
maintain at least the appearance of FIFO order.

In practice, this difficulty is solved by inserting a special fence
between P1's two loads when the kernel is compiled for the Alpha
architecture. In fact, as of version 4.15, the kernel automatically
adds this fence after every READ_ONCE() and atomic load on Alpha. The
effect of the fence is to cause the CPU not to execute any po-later
instructions until after the local cache has finished processing all
the stores it has already received. Thus, if the code was changed to:

	P1()
	{
		int *r1;
		int r2;

		r1 = READ_ONCE(ptr);
		r2 = READ_ONCE(*r1);
	}

then we would never get r1 = &x and r2 = 0. By the time P1 executed
its second load, the x = 1 store would already be fully processed by
the local cache and available for satisfying the read request. Thus
we have yet another reason why shared data should always be read with
READ_ONCE() or another synchronization primitive rather than accessed
directly.

The LKMM requires that smp_rmb(), acquire fences, and strong fences
share this property: They do not allow the CPU to execute any po-later
instructions (or po-later loads in the case of smp_rmb()) until all
outstanding stores have been processed by the local cache. In the
case of a strong fence, the CPU first has to wait for all of its
po-earlier stores to propagate to every other CPU in the system; then
it has to wait for the local cache to process all the stores received
as of that time -- not just the stores received when the strong fence
began.

And of course, none of this matters for any architecture other than
Alpha.
1241
1242
1243
THE HAPPENS-BEFORE RELATION: hb
1244
-------------------------------
1245
1246
The happens-before relation (hb) links memory accesses that have to
1247
execute in a certain order. hb includes the ppo relation and two
1248
others, one of which is rfe.
1249
1250
W ->rfe R implies that W and R are on different CPUs. It also means
1251
that W's store must have propagated to R's CPU before R executed;
1252
otherwise R could not have read the value stored by W. Therefore W
1253
must have executed before R, and so we have W ->hb R.

The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
the same CPU). As we have already seen, the operational model allows
W's value to be forwarded to R in such cases, meaning that R may well
execute before W does.

It's important to understand that neither coe nor fre is included in
hb, despite their similarities to rfe. For example, suppose we have
W ->coe W'. This means that W and W' are stores to the same location,
they execute on different CPUs, and W comes before W' in the coherence
order (i.e., W' overwrites W). Nevertheless, it is possible for W' to
execute before W, because the decision as to which store overwrites
the other is made later by the memory subsystem. When the stores are
nearly simultaneous, either one can come out on top. Similarly,
R ->fre W means that W overwrites the value which R reads, but it
doesn't mean that W has to execute after R. All that's necessary is
for the memory subsystem not to propagate W to R's CPU until after R
has executed, which is possible if W executes shortly before R.

The third relation included in hb is like ppo, in that it only links
events that are on the same CPU. However it is more difficult to
explain, because it arises only indirectly from the requirement of
cache coherence. The relation is called prop, and it links two events
on CPU C in situations where a store from some other CPU comes after
the first event in the coherence order and propagates to C before the
second event executes.

This is best explained with some examples. The simplest case looks
like this:

        int x;

        P0()
        {
                int r1;

                WRITE_ONCE(x, 1);
                r1 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 8);
        }

If r1 = 8 at the end then P0's accesses must have executed in program
order. We can deduce this from the operational model; if P0's load
had executed before its store then the value of the store would have
been forwarded to the load, so r1 would have ended up equal to 1, not
8. In this case there is a prop link from P0's write event to its read
event, because P1's store came after P0's store in x's coherence
order, and P1's store propagated to P0 before P0's load executed.

An equally simple case involves two loads of the same location that
read from different stores:

        int x = 0;

        P0()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 9);
        }

If r1 = 0 and r2 = 9 at the end then P0's accesses must have executed
in program order. If the second load had executed before the first
then the x = 9 store must have been propagated to P0 before the first
load executed, and so r1 would have been 9 rather than 0. In this
case there is a prop link from P0's first read event to its second,
because P1's store overwrote the value read by P0's first load, and
P1's store propagated to P0 before P0's second load executed.

Less trivial examples of prop all involve fences. Unlike the simple
examples above, they can require that some instructions are executed
out of program order. This next one should look familiar:

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                smp_wmb();
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2;

                r1 = READ_ONCE(flag);
                r2 = READ_ONCE(buf);
        }

This is the MP pattern again, with an smp_wmb() fence between the two
stores. If r1 = 1 and r2 = 0 at the end then there is a prop link
from P1's second load to its first (backwards!). The reason is
similar to the previous examples: The value P1 loads from buf gets
overwritten by P0's store to buf, the fence guarantees that the store
to buf will propagate to P1 before the store to flag does, and the
store to flag propagates to P1 before P1 reads flag.

The prop link says that in order to obtain the r1 = 1, r2 = 0 result,
P1 must execute its second load before the first. Indeed, if the load
from flag were executed first, then the buf = 1 store would already
have propagated to P1 by the time P1's load from buf executed, so r2
would have been 1 at the end, not 0. (The reasoning holds even for
Alpha, although the details are more complicated and we will not go
into them.)

But what if we put an smp_rmb() fence between P1's loads? The fence
would force the two loads to be executed in program order, and it
would generate a cycle in the hb relation: The fence would create a ppo
link (hence an hb link) from the first load to the second, and the
prop relation would give an hb link from the second load to the first.
Since an instruction can't execute before itself, we are forced to
conclude that if an smp_rmb() fence is added, the r1 = 1, r2 = 0
outcome is impossible -- as it should be.
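
For concreteness, this is what P1 would look like with the fence in
place (the variant just described):

        P1()
        {
                int r1;
                int r2;

                r1 = READ_ONCE(flag);
                smp_rmb();
                r2 = READ_ONCE(buf);
        }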

The formal definition of the prop relation involves a coe or fre link,
followed by an arbitrary number of cumul-fence links, ending with an
rfe link. You can concoct more exotic examples, containing more than
one fence, although this quickly leads to diminishing returns in terms
of complexity. For instance, here's an example containing a coe link
followed by two cumul-fences and an rfe link, utilizing the fact that
release fences are A-cumulative:

        int x, y, z;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(z);
        }

        P1()
        {
                WRITE_ONCE(x, 2);
                smp_wmb();
                WRITE_ONCE(y, 1);
        }

        P2()
        {
                int r2;

                r2 = READ_ONCE(y);
                smp_store_release(&z, 1);
        }

If x = 2, r0 = 1, and r2 = 1 after this code runs then there is a prop
link from P0's store to its load. This is because P0's store gets
overwritten by P1's store since x = 2 at the end (a coe link), the
smp_wmb() ensures that P1's store to x propagates to P2 before the
store to y does (the first cumul-fence), the store to y propagates to P2
before P2's load and store execute, P2's smp_store_release()
guarantees that the stores to x and y both propagate to P0 before the
store to z does (the second cumul-fence), and P0's load executes after the
store to z has propagated to P0 (an rfe link).
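
Schematically, writing W(x,1) for the store that sets x to 1 and R(z)
for P0's load of z, the chain just described is:

        W(x,1) ->coe W(x,2) ->cumul-fence W(y,1) ->cumul-fence W(z,1) ->rfe R(z).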

In summary, the fact that the hb relation links memory access events
in the order they execute means that it must not have cycles. This
requirement is the content of the LKMM's "happens-before" axiom.

The LKMM defines yet another relation connected to times of
instruction execution, but it is not included in hb. It relies on the
particular properties of strong fences, which we cover in the next
section.


THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special
features of strong fences. It links two events E and F whenever some
store is coherence-later than E and propagates to every CPU and to RAM
before F executes. The formal definition requires that E be linked to
F via a coe or fre link, an arbitrary number of cumul-fences, an
optional rfe link, a strong fence, and an arbitrary number of hb
links. Let's see how this definition works out.

Consider first the case where E is a store (implying that the sequence
of links begins with coe). Then there are events W, X, Y, and Z such
that:

        E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the * suffix indicates an arbitrary number of links of the
specified type, and the ? suffix indicates the link is optional (Y may
be equal to X). Because of the cumul-fence links, we know that W will
propagate to Y's CPU before X does, hence before Y executes and hence
before the strong fence executes. Because this fence is strong, we
know that W will propagate to every CPU and to RAM before Z executes.
And because of the hb links, we know that Z will execute before F.
Thus W, which comes later than E in the coherence order, will
propagate to every CPU and to RAM before F executes.

The case where E is a load is exactly the same, except that the first
link in the sequence is fre instead of coe.

The existence of a pb link from E to F implies that E must execute
before F. To see why, suppose that F executed first. Then W would
have propagated to E's CPU before E executed. If E was a store, the
memory subsystem would then be forced to make E come after W in the
coherence order, contradicting the fact that E ->coe W. If E was a
load, the memory subsystem would then be forced to satisfy E's read
request with the value stored by W or an even later store,
contradicting the fact that E ->fre W.

A good example illustrating how pb works is the SB pattern with strong
fences:

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                smp_mb();
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                smp_mb();
                r1 = READ_ONCE(x);
        }

If r0 = 0 at the end then there is a pb link from P0's load to P1's
load: an fre link from P0's load to P1's store (which overwrites the
value read by P0), and a strong fence between P1's store and its load.
In this example, the sequences of cumul-fence and hb links are empty.
Note that this pb link is not included in hb as an instance of prop,
because it does not start and end on the same CPU.
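
In terms of the general sequence displayed earlier, this pb link
instantiates with the cumul-fence*, rfe?, and hb* parts all empty:

        E ->fre W(y,1) ->strong-fence F,

where E is P0's load of y and F is P1's load of x.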

Similarly, if r1 = 0 at the end then there is a pb link from P1's load
to P0's. This means that if both r0 and r1 were 0 there would be a
cycle in pb, which is not possible since an instruction cannot execute
before itself. Thus, adding smp_mb() fences to the SB pattern
prevents the r0 = 0, r1 = 0 outcome.

In summary, the fact that the pb relation links events in the order
they execute means that it cannot have cycles. This requirement is
the content of the LKMM's "propagation" axiom.


RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
------------------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism. It
rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to
synchronize_rcu(). A read-side critical section (or just critical
section, for short) is a region of code delimited by rcu_read_lock()
at the start and rcu_read_unlock() at the end. Critical sections can
be nested, although we won't make use of this fact.

As far as memory models are concerned, RCU's main feature is its
Grace-Period Guarantee, which states that a critical section can never
span a full grace period. In more detail, the Guarantee says:

        For any critical section C and any grace period G, at least
        one of the following statements must hold:

        (1)     C ends before G does, and in addition, every store that
                propagates to C's CPU before the end of C must propagate to
                every CPU before G ends.

        (2)     G starts before C does, and in addition, every store that
                propagates to G's CPU before the start of G must propagate
                to every CPU before C starts.

In particular, it is not possible for a critical section to both start
before and end after a grace period.

Here is a simple example of RCU in action:

        int x, y;

        P0()
        {
                rcu_read_lock();
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                rcu_read_unlock();
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                synchronize_rcu();
                r2 = READ_ONCE(y);
        }

The Grace Period Guarantee tells us that when this code runs, it will
never end with r1 = 1 and r2 = 0. The reasoning is as follows. r1 = 1
means that P0's store to x propagated to P1 before P1 called
synchronize_rcu(), so P0's critical section must have started before
P1's grace period, contrary to part (2) of the Guarantee. On the
other hand, r2 = 0 means that P0's store to y, which occurs before the
end of the critical section, did not propagate to P1 before the end of
the grace period, contrary to part (1). Together the results violate
the Guarantee.

In the kernel's implementations of RCU, the requirements for stores
to propagate to every CPU are fulfilled by placing strong fences at
suitable places in the RCU-related code. Thus, if a critical section
starts before a grace period does then the critical section's CPU will
execute an smp_mb() fence after the end of the critical section and
some time before the grace period's synchronize_rcu() call returns.
And if a critical section ends after a grace period does then the
synchronize_rcu() routine will execute an smp_mb() fence at its start
and some time before the critical section's opening rcu_read_lock()
executes.

What exactly do we mean by saying that a critical section "starts
before" or "ends after" a grace period? Some aspects of the meaning
are pretty obvious, as in the example above, but the details aren't
entirely clear. The LKMM formalizes this notion by means of the
rcu-link relation. rcu-link encompasses a very general notion of
"before": If E and F are RCU fence events (i.e., rcu_read_lock(),
rcu_read_unlock(), or synchronize_rcu()) then among other things,
E ->rcu-link F includes cases where E is po-before some memory-access
event X, F is po-after some memory-access event Y, and we have any of
X ->rfe Y, X ->co Y, or X ->fr Y.
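
One schematic way to draw the cases in the preceding sentence is:

        E ->po X ->(rfe or co or fr) Y ->po F  implies  E ->rcu-link F.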

The formal definition of the rcu-link relation is more than a little
obscure, and we won't give it here. It is closely related to the pb
relation, and the details don't matter unless you want to comb through
a somewhat lengthy formal proof. Pretty much all you need to know
about rcu-link is the information in the preceding paragraph.

The LKMM also defines the rcu-gp and rcu-rscsi relations. They bring
grace periods and read-side critical sections into the picture, in the
following way:

        E ->rcu-gp F means that E and F are in fact the same event,
        and that event is a synchronize_rcu() fence (i.e., a grace
        period).

        E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
        and rcu_read_lock() fence events delimiting some read-side
        critical section. (The 'i' at the end of the name emphasizes
        that this relation is "inverted": It links the end of the
        critical section to the start.)

If we think of the rcu-link relation as standing for an extended
"before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
grace period which ends before Z begins. (In fact it covers more than
this, because it also includes cases where some store propagates to
Z's CPU before Z begins but doesn't propagate to some other CPU until
after X ends.) Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
the end of a critical section which starts before Z begins.

The LKMM goes on to define the rcu-order relation as a sequence of
rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
number of rcu-gp links is >= the number of rcu-rscsi links. For
example:

        X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-order V, because this sequence contains two
rcu-gp links and one rcu-rscsi link. (It also implies that
X ->rcu-order T and Z ->rcu-order V.) On the other hand:

        X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-order V, because the sequence contains only
one rcu-gp link but two rcu-rscsi links.

The rcu-order relation is important because the Grace Period Guarantee
means that rcu-order links act kind of like strong fences. In
particular, E ->rcu-order F implies not only that E begins before F
ends, but also that any write po-before E will propagate to every CPU
before any instruction po-after F can execute. (However, it does not
imply that E must execute before F; in fact, each synchronize_rcu()
fence event is linked to itself by rcu-order as a degenerate case.)

To prove this in full generality requires some intellectual effort.
We'll consider just a very simple case:

        G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same event (a grace period),
and there are events X, Y and a read-side critical section C such that:

        1. G = W is po-before or equal to X;

        2. X comes "before" Y in some sense (including rfe, co and fr);

        3. Y is po-before Z;

        4. Z is the rcu_read_unlock() event marking the end of C;

        5. F is the rcu_read_lock() event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical
section C. Then part (2) of the Grace Period Guarantee says not only
that G starts before C does, but also that any write which executes on
G's CPU before G starts must propagate to every CPU before C starts.
In particular, the write propagates to every CPU before F finishes
executing and hence before any instruction po-after F can execute.
This sort of reasoning can be extended to handle all the situations
covered by rcu-order.

The rcu-fence relation is a simple extension of rcu-order. While
rcu-order only links certain fence events (calls to synchronize_rcu(),
rcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events
that are separated by an rcu-order link. This is analogous to the way
the strong-fence relation links events that are separated by an
smp_mb() fence event (as mentioned above, rcu-order links act kind of
like strong fences). Written symbolically, X ->rcu-fence Y means
there are fence events E and F such that:

        X ->po E ->rcu-order F ->po Y.

From the discussion above, we see this implies not only that X
executes before Y, but also (if X is a store) that X propagates to
every CPU before Y executes. Thus rcu-fence is sort of a
"super-strong" fence: Unlike the original strong fences (smp_mb() and
synchronize_rcu()), rcu-fence is able to link events on different
CPUs. (Perhaps this fact should lead us to say that rcu-fence isn't
really a fence at all!)

Finally, the LKMM defines the RCU-before (rb) relation in terms of
rcu-fence. This is done in essentially the same way as the pb
relation was defined in terms of strong-fence. We will omit the
details; the end result is that E ->rb F implies E must execute
before F, just as E ->pb F does (and for much the same reasons).

Putting this all together, the LKMM expresses the Grace Period
Guarantee by requiring that the rb relation does not contain a cycle.
Equivalently, this "rcu" axiom requires that there are no events E
and F with E ->rcu-link F ->rcu-order E. Or to put it a third way,
the axiom requires that there are no cycles consisting of rcu-gp and
rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
is >= the number of rcu-rscsi links.

Justifying the axiom isn't easy, but it is in fact a valid
formalization of the Grace Period Guarantee. We won't attempt to go
through the detailed argument, but the following analysis gives a
taste of what is involved. Suppose both parts of the Guarantee are
violated: A critical section starts before a grace period, and some
store propagates to the critical section's CPU before the end of the
critical section but doesn't propagate to some other CPU until after
the end of the grace period.

Putting symbols to these ideas, let L and U be the rcu_read_lock() and
rcu_read_unlock() fence events delimiting the critical section in
question, and let S be the synchronize_rcu() fence event for the grace
period. Saying that the critical section starts before S means there
are events Q and R where Q is po-after L (which marks the start of the
critical section), Q is "before" R in the sense used by the rcu-link
relation, and R is po-before the grace period S. Thus we have:

        L ->rcu-link S.

Let W be the store mentioned above, let Y come before the end of the
critical section and witness that W propagates to the critical
section's CPU by reading from W, and let Z on some arbitrary CPU be a
witness that W has not propagated to that CPU, where Z happens after
some event X which is po-after S. Symbolically, this amounts to:

        S ->po X ->hb* Z ->fr W ->rf Y ->po U.

The fr link from Z to W indicates that W has not propagated to Z's CPU
at the time that Z executes. From this, it can be shown (see the
discussion of the rcu-link relation earlier) that S and U are related
by rcu-link:

        S ->rcu-link U.

Since S is a grace period we have S ->rcu-gp S, and since L and U are
the start and end of the critical section C we have U ->rcu-rscsi L.
From this we obtain:

        S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

a forbidden cycle. Thus the "rcu" axiom rules out this violation of
the Grace Period Guarantee.

For something a little more down-to-earth, let's see how the axiom
works out in practice. Consider the RCU code example from above, this
time with statement labels added:

        int x, y;

        P0()
        {
                L: rcu_read_lock();
                X: WRITE_ONCE(x, 1);
                Y: WRITE_ONCE(y, 1);
                U: rcu_read_unlock();
        }

        P1()
        {
                int r1, r2;

                Z: r1 = READ_ONCE(x);
                S: synchronize_rcu();
                W: r2 = READ_ONCE(y);
        }


If r2 = 0 at the end then P0's store at Y overwrites the value that
P1's load at W reads from, so we have W ->fre Y. Since S ->po W and
also Y ->po U, we get S ->rcu-link U. In addition, S ->rcu-gp S
because S is a grace period.

If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
so we have X ->rfe Z. Together with L ->po X and Z ->po S, this
yields L ->rcu-link S. And since L and U are the start and end of a
critical section, we have U ->rcu-rscsi L.

Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
forbidden cycle, violating the "rcu" axiom. Hence the outcome is not
allowed by the LKMM, as we would expect.

For contrast, let's see what can happen in a more complicated example:

        int x, y, z;

        P0()
        {
                int r0;

                L0: rcu_read_lock();
                r0 = READ_ONCE(x);
                WRITE_ONCE(y, 1);
                U0: rcu_read_unlock();
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(y);
                S1: synchronize_rcu();
                WRITE_ONCE(z, 1);
        }

        P2()
        {
                int r2;

                L2: rcu_read_lock();
                r2 = READ_ONCE(z);
                WRITE_ONCE(x, 1);
                U2: rcu_read_unlock();
        }

If r0 = r1 = r2 = 1 at the end, then similar reasoning to before shows
that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
L2 ->rcu-link U0. However this cycle is not forbidden, because the
sequence of relations contains fewer instances of rcu-gp (one) than of
rcu-rscsi (two). Consequently the outcome is allowed by the LKMM.
The following instruction timing diagram shows how it might actually
occur:

        P0                    P1                    P2
        --------------------  --------------------  --------------------
        rcu_read_lock()
        WRITE_ONCE(y, 1)
                              r1 = READ_ONCE(y)
                              synchronize_rcu() starts
                              .                     rcu_read_lock()
                              .                     WRITE_ONCE(x, 1)
        r0 = READ_ONCE(x)     .
        rcu_read_unlock()     .
                              synchronize_rcu() ends
                              WRITE_ONCE(z, 1)
                                                    r2 = READ_ONCE(z)
                                                    rcu_read_unlock()

This requires P0 and P2 to execute their loads and stores out of
program order, but of course they are allowed to do so. And as you
can see, the Grace Period Guarantee is not violated: The critical
section in P0 both starts before P1's grace period does and ends
before it does, and the critical section in P2 both starts after P1's
grace period does and ends after it does.

The LKMM supports SRCU (Sleepable Read-Copy-Update) in addition to
normal RCU. The ideas involved are much the same as above, with new
relations srcu-gp and srcu-rscsi added to represent SRCU grace periods
and read-side critical sections. However, there are some significant
differences between RCU read-side critical sections and their SRCU
counterparts, as described in the next section.


SRCU READ-SIDE CRITICAL SECTIONS
--------------------------------

The LKMM uses the srcu-rscsi relation to model SRCU read-side critical
sections. They differ from RCU read-side critical sections in the
following respects:

1.      Unlike the analogous RCU primitives, synchronize_srcu(),
        srcu_read_lock(), and srcu_read_unlock() take a pointer to a
        struct srcu_struct as an argument. This structure is called
        an SRCU domain, and calls linked by srcu-rscsi must have the
        same domain. Read-side critical sections and grace periods
        associated with different domains are independent of one
        another; the SRCU version of the RCU Guarantee applies only
        to pairs of critical sections and grace periods having the
        same domain.

2.      srcu_read_lock() returns a value, called the index, which must
        be passed to the matching srcu_read_unlock() call. Unlike
        rcu_read_lock() and rcu_read_unlock(), an srcu_read_lock()
        call does not always have to match the next unpaired
        srcu_read_unlock(). In fact, it is possible for two SRCU
        read-side critical sections to overlap partially, as in the
        following example (where s is an srcu_struct and idx1 and idx2
        are integer variables):

                idx1 = srcu_read_lock(&s);       // Start of first RSCS
                idx2 = srcu_read_lock(&s);       // Start of second RSCS
                srcu_read_unlock(&s, idx1);      // End of first RSCS
                srcu_read_unlock(&s, idx2);      // End of second RSCS

        The matching is determined entirely by the domain pointer and
        index value. By contrast, if the calls had been
        rcu_read_lock() and rcu_read_unlock() then they would have
        created two nested (fully overlapping) read-side critical
        sections: an inner one and an outer one.

3.      The srcu_down_read() and srcu_up_read() primitives work
        exactly like srcu_read_lock() and srcu_read_unlock(), except
        that matching calls don't have to execute within the same context.
        (The names are meant to be suggestive of operations on
        semaphores.) Since the matching is determined by the domain
        pointer and index value, these primitives make it possible for
        an SRCU read-side critical section to start on one CPU and end
        on another, so to speak.

In order to account for these properties of SRCU, the LKMM models
srcu_read_lock() as a special type of load event (which is
appropriate, since it takes a memory location as argument and returns
a value, just as a load does) and srcu_read_unlock() as a special type
of store event (again appropriate, since it takes as arguments a
memory location and a value). These loads and stores are annotated as
belonging to the "srcu-lock" and "srcu-unlock" event classes
respectively.

This approach allows the LKMM to tell whether two events are
associated with the same SRCU domain, simply by checking whether they
access the same memory location (i.e., they are linked by the loc
relation). It also gives a way to tell which unlock matches a
particular lock, by checking for the presence of a data dependency
from the load (srcu-lock) to the store (srcu-unlock). For example,
given the situation outlined earlier (with statement labels added):

        A: idx1 = srcu_read_lock(&s);
        B: idx2 = srcu_read_lock(&s);
        C: srcu_read_unlock(&s, idx1);
        D: srcu_read_unlock(&s, idx2);

the LKMM will treat A and B as loads from s yielding values saved in
idx1 and idx2 respectively. Similarly, it will treat C and D as
though they stored the values from idx1 and idx2 in s. The end result
is much as if we had written:

        A: idx1 = READ_ONCE(s);
        B: idx2 = READ_ONCE(s);
        C: WRITE_ONCE(s, idx1);
        D: WRITE_ONCE(s, idx2);

except for the presence of the special srcu-lock and srcu-unlock
annotations. You can see at once that we have A ->data C and
B ->data D. These dependencies tell the LKMM that C is the
srcu-unlock event matching srcu-lock event A, and D is the
srcu-unlock event matching srcu-lock event B.

This approach is admittedly a hack, and it has the potential to lead
to problems. For example, in:

        idx1 = srcu_read_lock(&s);
        srcu_read_unlock(&s, idx1);
        idx2 = srcu_read_lock(&s);
        srcu_read_unlock(&s, idx2);

the LKMM will believe that idx2 must have the same value as idx1,
since it reads from the immediately preceding store of idx1 in s.
Fortunately this won't matter, assuming that litmus tests never do
anything with SRCU index values other than pass them to
srcu_read_unlock() or srcu_up_read() calls.

However, sometimes it is necessary to store an index value in a
shared variable temporarily. In fact, this is the only way for
srcu_down_read() to pass the index it gets to an srcu_up_read() call
on a different CPU. In more detail, we might have something like:

        struct srcu_struct s;
        int x;

        P0()
        {
                int r0;

                A: r0 = srcu_down_read(&s);
                B: WRITE_ONCE(x, r0);
        }

        P1()
        {
                int r1;

                C: r1 = READ_ONCE(x);
                D: srcu_up_read(&s, r1);
        }

Assuming that P1 executes after P0 and does read the index value
stored in x, we can write this (using brackets to represent event
annotations) as:

        A[srcu-lock] ->data B[once] ->rf C[once] ->data D[srcu-unlock].

The LKMM defines a carry-srcu-data relation to express this pattern;
it permits an arbitrarily long sequence of

        data ; rf

pairs (that is, a data link followed by an rf link) to occur between
an srcu-lock event and the final data dependency leading to the
matching srcu-unlock event. carry-srcu-data is complicated by the
need to ensure that none of the intermediate store events in this
sequence are instances of srcu-unlock. This is necessary because in a
pattern like the one above:

        A: idx1 = srcu_read_lock(&s);
        B: srcu_read_unlock(&s, idx1);
        C: idx2 = srcu_read_lock(&s);
        D: srcu_read_unlock(&s, idx2);

the LKMM treats B as a store to the variable s and C as a load from
that variable, creating an undesirable rf link from B to C:

        A ->data B ->rf C ->data D.

This would cause carry-srcu-data to mistakenly extend a data
dependency from A to D, giving the impression that D was the
srcu-unlock event matching A's srcu-lock. To avoid such problems,
carry-srcu-data does not accept sequences in which the end of any of
the intermediate ->data links (B above) is an srcu-unlock event.


LOCKING
-------

The LKMM includes locking. In fact, there is special code for locking
in the formal model, added in order to make tools run faster.
However, this special code is intended to be more or less equivalent
to concepts we have already covered. A spinlock_t variable is treated
the same as an int, and spin_lock(&s) is treated almost the same as:

        while (cmpxchg_acquire(&s, 0, 1) != 0)
                cpu_relax();

This waits until s is equal to 0 and then atomically sets it to 1,
and the read part of the cmpxchg operation acts as an acquire fence.
An alternate way to express the same thing would be:

        r = xchg_acquire(&s, 1);

along with a requirement that at the end, r = 0. Similarly,
spin_trylock(&s) is treated almost the same as:

        return !cmpxchg_acquire(&s, 0, 1);

which atomically sets s to 1 if it is currently equal to 0 and returns
true if it succeeds (the read part of the cmpxchg operation acts as an
acquire fence only if the operation is successful). spin_unlock(&s)
is treated almost the same as:

        smp_store_release(&s, 0);

The "almost" qualifiers above need some explanation. In the LKMM, the
store-release in a spin_unlock() and the load-acquire which forms the
first half of the atomic rmw update in a spin_lock() or a successful
spin_trylock() -- we can call these things lock-releases and
lock-acquires -- have two properties beyond those of ordinary releases
and acquires.

First, when a lock-acquire reads from or is po-after a lock-release,
the LKMM requires that every instruction po-before the lock-release
must execute before any instruction po-after the lock-acquire. This
would naturally hold if the release and acquire operations were on
different CPUs and accessed the same lock variable, but the LKMM says
it also holds when they are on the same CPU, even if they access
different lock variables. For example:

        int x, y;
        spinlock_t s, t;

        P0()
        {
                int r1, r2;

                spin_lock(&s);
                r1 = READ_ONCE(x);
                spin_unlock(&s);
                spin_lock(&t);
                r2 = READ_ONCE(y);
                spin_unlock(&t);
        }

        P1()
        {
                WRITE_ONCE(y, 1);
                smp_wmb();
                WRITE_ONCE(x, 1);
        }

Here the second spin_lock() is po-after the first spin_unlock(), and
therefore the load of x must execute before the load of y, even though
the two locking operations use different locks. Thus we cannot have
r1 = 1 and r2 = 0 at the end (this is an instance of the MP pattern).

This requirement does not apply to ordinary release and acquire
fences, only to lock-related operations. For instance, suppose P0()
in the example had been written as:

        P0()
        {
                int r1, r2, r3;

                r1 = READ_ONCE(x);
                smp_store_release(&s, 1);
                r3 = smp_load_acquire(&s);
                r2 = READ_ONCE(y);
        }

Then the CPU would be allowed to forward the s = 1 value from the
smp_store_release() to the smp_load_acquire(), executing the
instructions in the following order:

        r3 = smp_load_acquire(&s);      // Obtains r3 = 1
        r2 = READ_ONCE(y);
        r1 = READ_ONCE(x);
        smp_store_release(&s, 1);       // Value is forwarded

and thus it could load y before x, obtaining r2 = 0 and r1 = 1.

Second, when a lock-acquire reads from or is po-after a lock-release,
and some other stores W and W' occur po-before the lock-release and
po-after the lock-acquire respectively, the LKMM requires that W must
propagate to each CPU before W' does. For example, consider:

        int x, y;
        spinlock_t s;

        P0()
        {
                spin_lock(&s);
                WRITE_ONCE(x, 1);
                spin_unlock(&s);
        }

        P1()
        {
                int r1;

                spin_lock(&s);
                r1 = READ_ONCE(x);
                WRITE_ONCE(y, 1);
                spin_unlock(&s);
        }

        P2()
        {
                int r2, r3;

                r2 = READ_ONCE(y);
                smp_rmb();
                r3 = READ_ONCE(x);
        }

If r1 = 1 at the end then the spin_lock() in P1 must have read from
the spin_unlock() in P0. Hence the store to x must propagate to P2
before the store to y does, so we cannot have r2 = 1 and r3 = 0. But
if P1 had used a lock variable different from s, the writes could have
propagated in either order. (On the other hand, if the code in P0 and
P1 had all executed on a single CPU, as in the example before this
one, then the writes would have propagated in order even if the two
critical sections used different lock variables.)

These two special requirements for lock-release and lock-acquire do
not arise from the operational model. Nevertheless, kernel developers
have come to expect and rely on them because they do hold on all
architectures supported by the Linux kernel, albeit for various
differing reasons.


PLAIN ACCESSES AND DATA RACES
-----------------------------

In the LKMM, memory accesses such as READ_ONCE(x), atomic_inc(&y),
smp_load_acquire(&z), and so on are collectively referred to as
"marked" accesses, because they are all annotated with special
operations of one kind or another. Ordinary C-language memory
accesses such as x or y = 0 are simply called "plain" accesses.

Early versions of the LKMM had nothing to say about plain accesses.
The C standard allows compilers to assume that the variables affected
by plain accesses are not concurrently read or written by any other
threads or CPUs. This leaves compilers free to implement all manner
of transformations or optimizations of code containing plain accesses,
making such code very difficult for a memory model to handle.

Here is just one example of a possible pitfall:

        int a = 6;
        int *x = &a;

        P0()
        {
                int *r1;
                int r2 = 0;

                r1 = x;
                if (r1 != NULL)
                        r2 = READ_ONCE(*r1);
        }

        P1()
        {
                WRITE_ONCE(x, NULL);
        }

On the face of it, one would expect that when this code runs, the only
possible final values for r2 are 6 and 0, depending on whether or not
P1's store to x propagates to P0 before P0's load from x executes.
But since P0's load from x is a plain access, the compiler may decide
to carry out the load twice (for the comparison against NULL, then again
for the READ_ONCE()) and eliminate the temporary variable r1. The
object code generated for P0 could therefore end up looking rather
like this:

        P0()
        {
                int r2 = 0;

                if (x != NULL)
                        r2 = READ_ONCE(*x);
        }

And now it is obvious that this code runs the risk of dereferencing a
NULL pointer, because P1's store to x might propagate to P0 after the
test against NULL has been made but before the READ_ONCE() executes.
If the original code had said "r1 = READ_ONCE(x)" instead of "r1 = x",
the compiler would not have performed this optimization and there
would be no possibility of a NULL-pointer dereference.

Given the possibility of transformations like this one, the LKMM
doesn't try to predict all possible outcomes of code containing plain
accesses. It is instead content to determine whether the code
violates the compiler's assumptions, which would render the ultimate
outcome undefined.

In technical terms, the compiler is allowed to assume that when the
program executes, there will not be any data races. A "data race"
occurs when there are two memory accesses such that:

1.      they access the same location,

2.      at least one of them is a store,

3.      at least one of them is plain,

4.      they occur on different CPUs (or in different threads on the
        same CPU), and

5.      they execute concurrently.

In the literature, two accesses are said to "conflict" if they satisfy
1 and 2 above. We'll go a little farther and say that two accesses
are "race candidates" if they satisfy 1 - 4. Thus, whether or not two
race candidates actually do race in a given execution depends on
whether they are concurrent.
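
Here is a minimal example of two accesses that are race candidates
and, since nothing orders them, do race (a sketch, not a proper
litmus test):

        int x;

        P0()
        {
                x = 1;          /* plain store */
        }

        P1()
        {
                int r1;

                r1 = x;         /* plain load */
        }

The two accesses share a location, one is a store, both are plain,
and they are on different CPUs; nothing prevents them from executing
concurrently, so this program contains a data race.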

The LKMM tries to determine whether a program contains race candidates
which may execute concurrently; if it does then the LKMM says there is
a potential data race and makes no predictions about the program's
outcome.

Determining whether two accesses are race candidates is easy; you can
see that all the concepts involved in the definition above are already
part of the memory model. The hard part is telling whether they may
execute concurrently. The LKMM takes a conservative attitude,
assuming that accesses may be concurrent unless it can prove they
are not.

If two memory accesses aren't concurrent then one must execute before
the other. Therefore the LKMM decides two accesses aren't concurrent
if they can be connected by a sequence of hb, pb, and rb links
(together referred to as xb, for "executes before"). However, there
are two complicating factors.

If X is a load and X executes before a store Y, then indeed there is
no danger of X and Y being concurrent. After all, Y can't have any
effect on the value obtained by X until the memory subsystem has
propagated Y from its own CPU to X's CPU, which won't happen until
some time after Y executes and thus after X executes. But if X is a
store, then even if X executes before Y it is still possible that X
will propagate to Y's CPU just as Y is executing. In such a case X
could very well interfere somehow with Y, and we would have to
consider X and Y to be concurrent.

Therefore when X is a store, for X and Y to be non-concurrent the LKMM
requires not only that X must execute before Y but also that X must
propagate to Y's CPU before Y executes. (Or vice versa, of course, if
Y executes before X -- then Y must propagate to X's CPU before X
executes if Y is a store.) This is expressed by the visibility
relation (vis), where X ->vis Y is defined to hold if there is an
intermediate event Z such that:

        X is connected to Z by a possibly empty sequence of
        cumul-fence links followed by an optional rfe link (if none of
        these links are present, X and Z are the same event),

and either:

        Z is connected to Y by a strong-fence link followed by a
        possibly empty sequence of xb links,

or:

        Z is on the same CPU as Y and is connected to Y by a possibly
        empty sequence of xb links (again, if the sequence is empty it
        means Z and Y are the same event).
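
Written out in the relational style used elsewhere in this document
(";" denoting composition, as in the sequences later in this section),
the two alternatives are roughly:

        X ->cumul-fence* ; rfe? Z ->strong-fence ; xb* Y, or
        X ->cumul-fence* ; rfe? Z ->xb* Y    (with Z on Y's CPU).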
2301
2302
The motivations behind this definition are straightforward:
2303
2304
cumul-fence memory barriers force stores that are po-before
2305
the barrier to propagate to other CPUs before stores that are
2306
po-after the barrier.
2307
2308
An rfe link from an event W to an event R says that R reads
2309
from W, which certainly means that W must have propagated to
2310
R's CPU before R executed.
2311
2312
strong-fence memory barriers force stores that are po-before
2313
the barrier, or that propagate to the barrier's CPU before the
2314
barrier executes, to propagate to all CPUs before any events
2315
po-after the barrier can execute.
2316
2317
To see how this works out in practice, consider our old friend, the MP
2318
pattern (with fences and statement labels, but without the conditional
2319
test):
2320
2321
int buf = 0, flag = 0;
2322
2323
P0()
2324
{
2325
X: WRITE_ONCE(buf, 1);
2326
smp_wmb();
2327
W: WRITE_ONCE(flag, 1);
2328
}
2329
2330
P1()
2331
{
2332
int r1;
2333
int r2 = 0;
2334
2335
Z: r1 = READ_ONCE(flag);
2336
smp_rmb();
2337
Y: r2 = READ_ONCE(buf);
2338
}
2339
2340
The smp_wmb() memory barrier gives a cumul-fence link from X to W, and
2341
assuming r1 = 1 at the end, there is an rfe link from W to Z. This
2342
means that the store to buf must propagate from P0 to P1 before Z
2343
executes. Next, Z and Y are on the same CPU and the smp_rmb() fence
2344
provides an xb link from Z to Y (i.e., it forces Z to execute before
2345
Y). Therefore we have X ->vis Y: X must propagate to Y's CPU before Y
2346
executes.
2347
2348
The second complicating factor mentioned above arises from the fact
2349
that when we are considering data races, some of the memory accesses
2350
are plain. Now, although we have not said so explicitly, up to this
2351
point most of the relations defined by the LKMM (ppo, hb, prop,
2352
cumul-fence, pb, and so on -- including vis) apply only to marked
2353
accesses.
2354
2355
There are good reasons for this restriction. The compiler is not
2356
allowed to apply fancy transformations to marked accesses, and
2357
consequently each such access in the source code corresponds more or
2358
less directly to a single machine instruction in the object code. But
2359
plain accesses are a different story; the compiler may combine them,
2360
split them up, duplicate them, eliminate them, invent new ones, and
2361
who knows what else. Seeing a plain access in the source code tells
2362
you almost nothing about what machine instructions will end up in the
2363
object code.
2364
2365
Fortunately, the compiler isn't completely free; it is subject to some
2366
limitations. For one, it is not allowed to introduce a data race into
2367
the object code if the source code does not already contain a data
2368
race (if it could, memory models would be useless and no multithreaded
2369
code would be safe!). For another, it cannot move a plain access past
2370
a compiler barrier.
2371
2372
A compiler barrier is a kind of fence, but as the name implies, it
2373
only affects the compiler; it does not necessarily have any effect on
2374
how instructions are executed by the CPU. In Linux kernel source
2375
code, the barrier() function is a compiler barrier. It doesn't give
2376
rise directly to any machine instructions in the object code; rather,
2377
it affects how the compiler generates the rest of the object code.
2378
Given source code like this:
2379
2380
... some memory accesses ...
2381
barrier();
2382
... some other memory accesses ...
2383
2384
the barrier() function ensures that the machine instructions
2385
corresponding to the first group of accesses will all end po-before
2386
any machine instructions corresponding to the second group of accesses
2387
-- even if some of the accesses are plain. (Of course, the CPU may
2388
then execute some of those accesses out of program order, but we
2389
already know how to deal with such issues.) Without the barrier()
2390
there would be no such guarantee; the two groups of accesses could be
2391
intermingled or even reversed in the object code.
2392
2393
The LKMM doesn't say much about the barrier() function, but it does
2394
require that all fences are also compiler barriers. In addition, it
2395
requires that the ordering properties of memory barriers such as
2396
smp_rmb() or smp_store_release() apply to plain accesses as well as to
2397
marked accesses.
2398
2399
This is the key to analyzing data races. Consider the MP pattern
2400
again, now using plain accesses for buf:
2401
2402
int buf = 0, flag = 0;
2403
2404
P0()
2405
{
2406
U: buf = 1;
2407
smp_wmb();
2408
X: WRITE_ONCE(flag, 1);
2409
}
2410
2411
P1()
2412
{
2413
int r1;
2414
int r2 = 0;
2415
2416
Y: r1 = READ_ONCE(flag);
2417
if (r1) {
2418
smp_rmb();
2419
V: r2 = buf;
2420
}
2421
}
2422
2423
This program does not contain a data race. Although the U and V
2424
accesses are race candidates, the LKMM can prove they are not
2425
concurrent as follows:
2426
2427
The smp_wmb() fence in P0 is both a compiler barrier and a
2428
cumul-fence. It guarantees that no matter what hash of
2429
machine instructions the compiler generates for the plain
2430
access U, all those instructions will be po-before the fence.
2431
Consequently U's store to buf, no matter how it is carried out
2432
at the machine level, must propagate to P1 before X's store to
2433
flag does.
2434
2435
X and Y are both marked accesses. Hence an rfe link from X to
2436
Y is a valid indicator that X propagated to P1 before Y
2437
executed, i.e., X ->vis Y. (And if there is no rfe link then
2438
r1 will be 0, so V will not be executed and ipso facto won't
2439
race with U.)
2440
2441
The smp_rmb() fence in P1 is a compiler barrier as well as a
2442
fence. It guarantees that all the machine-level instructions
2443
corresponding to the access V will be po-after the fence, and
2444
therefore any loads among those instructions will execute
2445
after the fence does and hence after Y does.
2446
2447
Thus U's store to buf is forced to propagate to P1 before V's load
2448
executes (assuming V does execute), ruling out the possibility of a
2449
data race between them.
2450
2451
This analysis illustrates how the LKMM deals with plain accesses in
2452
general. Suppose R is a plain load and we want to show that R
2453
executes before some marked access E. We can do this by finding a
2454
marked access X such that R and X are ordered by a suitable fence and
2455
X ->xb* E. If E was also a plain access, we would also look for a
2456
marked access Y such that X ->xb* Y, and Y and E are ordered by a
2457
fence. We describe this arrangement by saying that R is
2458
"post-bounded" by X and E is "pre-bounded" by Y.
2459
2460
In fact, we go one step further: Since R is a read, we say that R is
2461
"r-post-bounded" by X. Similarly, E would be "r-pre-bounded" or
2462
"w-pre-bounded" by Y, depending on whether E was a store or a load.
2463
This distinction is needed because some fences affect only loads
2464
(i.e., smp_rmb()) and some affect only stores (smp_wmb()); otherwise
2465
the two types of bounds are the same. And as a degenerate case, we
2466
say that a marked access pre-bounds and post-bounds itself (e.g., if R
2467
above were a marked load then X could simply be taken to be R itself.)
2468
2469
The need to distinguish between r- and w-bounding raises yet another
2470
issue. When the source code contains a plain store, the compiler is
2471
allowed to put plain loads of the same location into the object code.
2472
For example, given the source code:
2473
2474
x = 1;
2475
2476
the compiler is theoretically allowed to generate object code that
2477
looks like:
2478
2479
if (x != 1)
2480
x = 1;
2481
2482
thereby adding a load (and possibly replacing the store entirely).
2483
For this reason, whenever the LKMM requires a plain store to be
2484
w-pre-bounded or w-post-bounded by a marked access, it also requires
2485
the store to be r-pre-bounded or r-post-bounded, so as to handle cases
2486
where the compiler adds a load.
2487
2488
(This may be overly cautious. We don't know of any examples where a
2489
compiler has augmented a store with a load in this fashion, and the
2490
Linux kernel developers would probably fight pretty hard to change a
2491
compiler if it ever did this. Still, better safe than sorry.)

Incidentally, the other transformation -- augmenting a plain load by
adding in a store to the same location -- is not allowed. This is
because the compiler cannot know whether any other CPUs might perform
a concurrent load from that location. Two concurrent loads don't
constitute a race (they can't interfere with each other), but a store
does race with a concurrent load. Thus adding a store might create a
data race where one was not already present in the source code,
something the compiler is forbidden to do. Augmenting a store with a
load, on the other hand, is acceptable because doing so won't create a
data race unless one already existed.

The LKMM includes a second way to pre-bound plain accesses, in
addition to fences: an address dependency from a marked load. That
is, in the sequence:

        p = READ_ONCE(ptr);
        r = *p;

the LKMM says that the marked load of ptr pre-bounds the plain load of
*p; the marked load must execute before any of the machine
instructions corresponding to the plain load. This is a reasonable
stipulation, since after all, the CPU can't perform the load of *p
until it knows what value p will hold. Furthermore, without some
assumption like this one, some usages typical of RCU would count as
data races. For example:

        int a = 1, b;
        int *ptr = &a;

        P0()
        {
                b = 2;
                rcu_assign_pointer(ptr, &b);
        }

        P1()
        {
                int *p;
                int r;

                rcu_read_lock();
                p = rcu_dereference(ptr);
                r = *p;
                rcu_read_unlock();
        }

(In this example the rcu_read_lock() and rcu_read_unlock() calls don't
really do anything, because there aren't any grace periods. They are
included merely for the sake of good form; typically P0 would call
synchronize_rcu() somewhere after the rcu_assign_pointer().)

rcu_assign_pointer() performs a store-release, so the plain store to b
is definitely w-post-bounded before the store to ptr, and the two
stores will propagate to P1 in that order. However, rcu_dereference()
is only equivalent to READ_ONCE(). While it is a marked access, it is
not a fence or compiler barrier. Hence the only guarantee we have
that the load of ptr in P1 is r-pre-bounded before the load of *p
(thus avoiding a race) is the assumption about address dependencies.

This is a situation where the compiler can undermine the memory model,
and a certain amount of care is required when programming constructs
like this one. In particular, comparisons between the pointer and
other known addresses can cause trouble. If you have something like:

        p = rcu_dereference(ptr);
        if (p == &x)
                r = *p;

then the compiler just might generate object code resembling:

        p = rcu_dereference(ptr);
        if (p == &x)
                r = x;

or even:

        rtemp = x;
        p = rcu_dereference(ptr);
        if (p == &x)
                r = rtemp;

which would invalidate the memory model's assumption, since the CPU
could now perform the load of x before the load of ptr (there might be
a control dependency but no address dependency at the machine level).

Finally, it turns out there is a situation in which a plain write does
not need to be w-post-bounded: when it is separated from the other
race-candidate access by a fence. At first glance this may seem
impossible. After all, to be race candidates the two accesses must
be on different CPUs, and fences don't link events on different CPUs.
Well, normal fences don't -- but rcu-fence can! Here's an example:

        int x, y;

        P0()
        {
                WRITE_ONCE(x, 1);
                synchronize_rcu();
                y = 3;
        }

        P1()
        {
                rcu_read_lock();
                if (READ_ONCE(x) == 0)
                        y = 2;
                rcu_read_unlock();
        }

Do the plain stores to y race? Clearly not if P1 reads a non-zero
value for x, so let's assume the READ_ONCE(x) does obtain 0. This
means that the read-side critical section in P1 must finish executing
before the grace period in P0 does, because RCU's Grace-Period
Guarantee says that otherwise P0's store to x would have propagated to
P1 before the critical section started and so would have been visible
to the READ_ONCE(). (Another way of putting it is that the fre link
from the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link
between those two events.)

This means there is an rcu-fence link from P1's "y = 2" store to P0's
"y = 3" store, and consequently the first must propagate from P1 to P0
before the second can execute. Therefore the two stores cannot be
concurrent and there is no race, even though P1's plain store to y
isn't w-post-bounded by any marked accesses.

Putting all this material together yields the following picture. For
race-candidate stores W and W', where W ->co W', the LKMM says the
stores don't race if W can be linked to W' by a

        w-post-bounded ; vis ; w-pre-bounded

sequence. If W is plain then they also have to be linked by an

        r-post-bounded ; xb* ; w-pre-bounded

sequence, and if W' is plain then they also have to be linked by a

        w-post-bounded ; vis ; r-pre-bounded

sequence. For race-candidate load R and store W, the LKMM says the
two accesses don't race if R can be linked to W by an

        r-post-bounded ; xb* ; w-pre-bounded

sequence or if W can be linked to R by a

        w-post-bounded ; vis ; r-pre-bounded

sequence. For the cases involving a vis link, the LKMM also accepts
sequences in which W is linked to W' or R by a

        strong-fence ; xb* ; {w and/or r}-pre-bounded

sequence with no post-bounding, and in every case the LKMM also allows
the link simply to be a fence with no bounding at all. If no sequence
of the appropriate sort exists, the LKMM says that the accesses race.

There is one more part of the LKMM related to plain accesses (although
not to data races) we should discuss. Recall that many relations such
as hb are limited to marked accesses only. As a result, the
happens-before, propagates-before, and rcu axioms (which state that
various relations must not contain cycles) do not apply to plain
accesses. Nevertheless, we do want to rule out such cycles, because
they don't make sense even for plain accesses.
2657
2658
To this end, the LKMM imposes three extra restrictions, together
2659
called the "plain-coherence" axiom because of their resemblance to the
2660
rules used by the operational model to ensure cache coherence (that
2661
is, the rules governing the memory subsystem's choice of a store to
2662
satisfy a load request and its determination of where a store will
2663
fall in the coherence order):
2664
2665
If R and W are race candidates and it is possible to link R to
2666
W by one of the xb* sequences listed above, then W ->rfe R is
2667
not allowed (i.e., a load cannot read from a store that it
2668
executes before, even if one or both is plain).
2669
2670
If W and R are race candidates and it is possible to link W to
2671
R by one of the vis sequences listed above, then R ->fre W is
2672
not allowed (i.e., if a store is visible to a load then the
2673
load must read from that store or one coherence-after it).
2674
2675
If W and W' are race candidates and it is possible to link W
2676
to W' by one of the vis sequences listed above, then W' ->co W
2677
is not allowed (i.e., if one store is visible to a second then
2678
the second must come after the first in the coherence order).
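
As a sketch of the first restriction in action (again with made-up
variables, not a proper litmus test), consider:

	int x, y;

	P0()
	{
		int r0;

		r0 = x;				/* plain load R */
		smp_store_release(&y, 1);
	}

	P1()
	{
		if (smp_load_acquire(&y) == 1)
			x = 1;			/* plain store W */
	}

R should be linked to W by an r-post-bounded ; xb* ; w-pre-bounded
sequence (via the release, the rfe link to the acquire, and the
acquire itself), so the plain-coherence axiom forbids W ->rfe R:
P0's load of x cannot obtain the value 1 from a store that it
executes before.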

This is the extent to which the LKMM deals with plain accesses.
Perhaps it could say more (for example, plain accesses might
contribute to the ppo relation), but at the moment it seems that this
minimal, conservative approach is good enough.


ODDS AND ENDS
-------------

This section covers material that didn't quite fit anywhere in the
earlier sections.

The descriptions in this document don't always match the formal
version of the LKMM exactly.  For example, the actual formal
definition of the prop relation makes the initial coe or fre part
optional, and it doesn't require the events linked by the relation to
be on the same CPU.  These differences are very unimportant; indeed,
instances where the coe/fre part of prop is missing are of no interest
because all the other parts (fences and rfe) are already included in
hb anyway, and where the formal model adds prop into hb, it includes
an explicit requirement that the events being linked are on the same
CPU.

Another minor difference has to do with events that are both memory
accesses and fences, such as those corresponding to smp_load_acquire()
calls.  In the formal model, these events aren't actually both reads
and fences; rather, they are read events with an annotation marking
them as acquires.  (Or write events annotated as releases, in the
case of smp_store_release().)  The final effect is the same.

Although we didn't mention it above, the instruction execution
ordering provided by the smp_rmb() fence doesn't apply to read events
that are part of a non-value-returning atomic update.  For instance,
given:

	atomic_inc(&x);
	smp_rmb();
	r1 = READ_ONCE(y);

it is not guaranteed that the load from y will execute after the
update to x.  This is because the ARMv8 architecture allows
non-value-returning atomic operations effectively to be executed off
the CPU.  Basically, the CPU tells the memory subsystem to increment
x, and then the increment is carried out by the memory hardware with
no further involvement from the CPU.  Since the CPU doesn't ever read
the value of x, there is nothing for the smp_rmb() fence to act on.
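
If the ordering is needed, one fix (a sketch; smp_mb() would also
work) is to use the smp_mb__after_atomic() fence described near the
end of this section, which does order po-earlier atomic updates
against all po-later events:

	atomic_inc(&x);
	smp_mb__after_atomic();
	r1 = READ_ONCE(y);	/* now guaranteed to execute after the
				   update to x */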

The LKMM defines a few extra synchronization operations in terms of
things we have already covered.  In particular, rcu_dereference() is
treated as READ_ONCE() and rcu_assign_pointer() is treated as
smp_store_release() -- which is basically how the Linux kernel treats
them.
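
For example, here is a sketch of the usual pointer-publication
pattern (struct foo and its field a are made up for illustration):

	struct foo {
		int a;
	};
	struct foo f;
	struct foo *gp;

	P0()
	{
		f.a = 1;
		rcu_assign_pointer(gp, &f);	/* as smp_store_release() */
	}

	P1()
	{
		struct foo *q;
		int r1 = 0;

		rcu_read_lock();
		q = rcu_dereference(gp);	/* as READ_ONCE() */
		if (q)
			r1 = q->a;
		rcu_read_unlock();
	}

The release semantics of rcu_assign_pointer() plus the address
dependency from rcu_dereference() to the "q->a" load ensure that if
P1 sees the new pointer, it also sees f.a == 1.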

Although we said that plain accesses are not linked by the ppo
relation, they do contribute to it indirectly.  Firstly, when there is
an address dependency from a marked load R to a plain store W,
followed by smp_wmb() and then a marked store W', the LKMM creates a
ppo link from R to W'.  The reasoning behind this is perhaps a little
shaky, but essentially it says there is no way to generate object code
for this source code in which W' could execute before R.  Just as with
pre-bounding by address dependencies, it is possible for the compiler
to undermine this relation if sufficient care is not taken.
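
Schematically (a sketch, assuming ptr already points to valid
memory):

	int *r0;

	r0 = READ_ONCE(ptr);	/* marked load R */
	*r0 = 27;		/* plain store W, address-dependent on R */
	smp_wmb();
	WRITE_ONCE(z, 1);	/* marked store W'; the LKMM links R to
				   W' by ppo */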

Secondly, plain accesses can carry dependencies: If a data dependency
links a marked load R to a store W, and the store is read by a load R'
from the same thread, then the data loaded by R' depends on the data
loaded originally by R.  Thus, if R' is linked to any access X by a
dependency, R is also linked to access X by the same dependency, even
if W or R' (or both!) are plain.
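
For instance (a sketch with made-up variables, all in one thread):

	r1 = READ_ONCE(x);	/* marked load R */
	tmp = r1;		/* plain store W */
	r2 = tmp;		/* plain load R' reads from W */
	WRITE_ONCE(y, r2);	/* data dependency from R', and hence
				   from R */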

There are a few oddball fences which need special treatment:
smp_mb__before_atomic(), smp_mb__after_atomic(), and
smp_mb__after_spinlock().  The LKMM uses fence events with special
annotations for them; they act as strong fences just like smp_mb()
except for the sets of events that they order.  Instead of ordering
all po-earlier events against all po-later events, as smp_mb() does,
they behave as follows:

	smp_mb__before_atomic() orders all po-earlier events against
	po-later atomic updates and the events following them;

	smp_mb__after_atomic() orders po-earlier atomic updates and
	the events preceding them against all po-later events;

	smp_mb__after_spinlock() orders po-earlier lock acquisition
	events and the events preceding them against all po-later
	events.
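
For example (a sketch), in:

	WRITE_ONCE(x, 1);
	smp_mb__before_atomic();
	WRITE_ONCE(w, 2);
	atomic_inc(&y);
	r1 = READ_ONCE(z);

the fence orders the store to x against the atomic update of y and
against the load of z (which follows the update), but not against the
store to w, which is po-later than the fence yet neither an atomic
update nor an event following one.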

Interestingly, RCU and locking each introduce the possibility of
deadlock.  When faced with code sequences such as:

	spin_lock(&s);
	spin_lock(&s);
	spin_unlock(&s);
	spin_unlock(&s);

or:

	rcu_read_lock();
	synchronize_rcu();
	rcu_read_unlock();

what does the LKMM have to say?  Answer: It says there are no allowed
executions at all, which makes sense.  But this can also lead to
misleading results, because if a piece of code has multiple possible
executions, some of which deadlock, the model will report only on the
non-deadlocking executions.  For example:

	int x, y;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		rcu_read_lock();
		if (READ_ONCE(x) > 0) {
			WRITE_ONCE(y, 36);
			synchronize_rcu();
		}
		rcu_read_unlock();
	}

Is it possible to end up with r0 = 36 at the end?  The LKMM will tell
you it is not, but the model won't mention that this is because P1
will self-deadlock in the executions where it stores 36 in y.