[!] Note:
        This file expands on the "Locking" section of recipes.txt,
        focusing on locklessly accessing shared variables that are
        otherwise protected by a lock.

Locking
=======

Locking is well-known and the common use cases are straightforward: Any
CPU holding a given lock sees any changes previously seen or made by any
CPU before it previously released that same lock. This last sentence
is the only part of this document that most developers will need to read.

However, developers who would like to also access lock-protected shared
variables outside of their corresponding locks should continue reading.


Locking and Prior Accesses
--------------------------

The basic rule of locking is worth repeating:

        Any CPU holding a given lock sees any changes previously seen
        or made by any CPU before it previously released that same lock.

Note that this statement is a bit stronger than "Any CPU holding a
given lock sees all changes made by any CPU during the time that CPU was
previously holding this same lock". For example, consider the following
pair of code fragments:

        /* See MP+polocks.litmus. */
        void CPU0(void)
        {
                WRITE_ONCE(x, 1);
                spin_lock(&mylock);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(y);
                spin_unlock(&mylock);
                r1 = READ_ONCE(x);
        }

The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
then both r0 and r1 must be set to the value 1. This also has the
consequence that if the final value of r0 is equal to 1, then the final
value of r1 must also be equal to 1. In contrast, the weaker rule would
say nothing about the final value of r1.


Locking and Subsequent Accesses
-------------------------------

The converse to the basic rule also holds: Any CPU holding a given
lock will not see any changes that will be made by any CPU after it
subsequently acquires this same lock. This converse statement is
illustrated by the following litmus test:

        /* See MP+porevlocks.litmus. */
        void CPU0(void)
        {
                r0 = READ_ONCE(y);
                spin_lock(&mylock);
                r1 = READ_ONCE(x);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                spin_unlock(&mylock);
                WRITE_ONCE(y, 1);
        }

This converse to the basic rule guarantees that if CPU0() acquires
mylock before CPU1(), then both r0 and r1 must be set to the value 0.
This also has the consequence that if the final value of r1 is equal
to 0, then the final value of r0 must also be equal to 0. In contrast,
the weaker rule would say nothing about the final value of r0.

These examples show only a single pair of CPUs, but the effects of the
locking basic rule extend across multiple acquisitions of a given lock
across multiple CPUs.
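
For example, consider the following three-CPU fragment, written in the
same style as the examples above but purely as an illustrative sketch
(it is not one of the litmus tests shipped with the kernel):

        /* Hypothetical sketch, not a litmus test from the kernel tree. */
        void CPU0(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(x);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                spin_lock(&mylock);
                r1 = READ_ONCE(y);
                r2 = READ_ONCE(x);
                spin_unlock(&mylock);
        }

If the final values of r0 and r1 are both 1, then CPU0()'s critical
section preceded CPU1()'s and CPU1()'s preceded CPU2()'s, so applying
the basic rule twice guarantees that the final value of r2 is also 1.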


Double-Checked Locking
----------------------

It is well known that more than just a lock is required to make
double-checked locking work correctly. This litmus test illustrates
one incorrect approach:

        /* See Documentation/litmus-tests/locking/DCL-broken.litmus. */
        void CPU0(void)
        {
                r0 = READ_ONCE(flag);
                if (r0 == 0) {
                        spin_lock(&lck);
                        r1 = READ_ONCE(flag);
                        if (r1 == 0) {
                                WRITE_ONCE(data, 1);
                                WRITE_ONCE(flag, 1);
                        }
                        spin_unlock(&lck);
                }
                r2 = READ_ONCE(data);
        }
        /* CPU1() is exactly the same as CPU0(). */

There are two problems. First, there is no ordering between the first
READ_ONCE() of "flag" and the READ_ONCE() of "data". Second, there is
no ordering between the two WRITE_ONCE() calls. It should therefore be
no surprise that "r2" can be zero, and a quick herd7 run confirms this.
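
Such herd7 runs can be reproduced by hand. Assuming herd7 is installed
and set up as described in tools/memory-model/README, an invocation
along the following lines (a sketch; adjust paths to taste) exercises
the broken version from within a kernel source tree:

        $ cd tools/memory-model
        $ herd7 -conf linux-kernel.cfg \
                ../../Documentation/litmus-tests/locking/DCL-broken.litmus

The resulting output should confirm that the outcome in which "r2" is
zero is allowed.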

One way to fix this is to use smp_load_acquire() and smp_store_release()
as shown in this corrected version:

        /* See Documentation/litmus-tests/locking/DCL-fixed.litmus. */
        void CPU0(void)
        {
                r0 = smp_load_acquire(&flag);
                if (r0 == 0) {
                        spin_lock(&lck);
                        r1 = READ_ONCE(flag);
                        if (r1 == 0) {
                                WRITE_ONCE(data, 1);
                                smp_store_release(&flag, 1);
                        }
                        spin_unlock(&lck);
                }
                r2 = READ_ONCE(data);
        }
        /* CPU1() is exactly the same as CPU0(). */

The smp_load_acquire() guarantees that its load from "flag" will
be ordered before the READ_ONCE() from "data", thus solving the first
problem. The smp_store_release() guarantees that its store will be
ordered after the WRITE_ONCE() to "data", solving the second problem.
The smp_store_release() pairs with the smp_load_acquire(), thus ensuring
that the ordering provided by each actually takes effect. Again, a
quick herd7 run confirms this.

In short, if you access a lock-protected variable without holding the
corresponding lock, you will need to provide additional ordering, in
this case via the smp_load_acquire() and the smp_store_release().
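
To make this concrete, here is a sketch of the same acquire/release
pattern applied to one-time initialization, which is where double-checked
locking usually appears. It is illustrative only: "struct foo", my_obj,
my_obj_lock, my_obj_init(), and get_my_obj() are made-up names, although
DEFINE_SPINLOCK(), smp_load_acquire(), and smp_store_release() are the
real kernel primitives.

        /* Hypothetical one-time-initialization sketch, not kernel code. */
        static struct foo *my_obj;
        static DEFINE_SPINLOCK(my_obj_lock);

        struct foo *get_my_obj(void)
        {
                /* First (lockless) check, ordered before later uses of *p. */
                struct foo *p = smp_load_acquire(&my_obj);

                if (!p) {
                        spin_lock(&my_obj_lock);
                        /* Second check, now protected by the lock. */
                        p = my_obj;
                        if (!p) {
                                p = my_obj_init(); /* made-up initializer */
                                /* Publish only after initialization is done. */
                                smp_store_release(&my_obj, p);
                        }
                        spin_unlock(&my_obj_lock);
                }
                return p;
        }

Readers see either NULL or a fully initialized object: the
smp_store_release() orders the initialization before the pointer becomes
visible, and the smp_load_acquire() orders the pointer load before any
dereference of it.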


Ordering Provided by a Lock to CPUs Not Holding That Lock
---------------------------------------------------------

It is not necessarily the case that accesses ordered by locking will be
seen as ordered by CPUs not holding that lock. Consider this example:

        /* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
        void CPU0(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                r0 = READ_ONCE(y);
                WRITE_ONCE(z, 1);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                WRITE_ONCE(z, 2);
                smp_mb();
                r1 = READ_ONCE(x);
        }

Counter-intuitive though it might be, it is quite possible to have
the final value of r0 be 1, the final value of z be 2, and the final
value of r1 be 0. The reason for this surprising outcome is that CPU2()
never acquired the lock, and thus did not fully benefit from the lock's
ordering properties.

Ordering can be extended to CPUs not holding the lock by careful use
of smp_mb__after_spinlock():

        /* See Z6.0+pooncelock+poonceLock+pombonce.litmus. */
        void CPU0(void)
        {
                spin_lock(&mylock);
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                spin_unlock(&mylock);
        }

        void CPU1(void)
        {
                spin_lock(&mylock);
                smp_mb__after_spinlock();
                r0 = READ_ONCE(y);
                WRITE_ONCE(z, 1);
                spin_unlock(&mylock);
        }

        void CPU2(void)
        {
                WRITE_ONCE(z, 2);
                smp_mb();
                r1 = READ_ONCE(x);
        }

This addition of smp_mb__after_spinlock() strengthens the lock
acquisition sufficiently to rule out the counter-intuitive outcome.
In other words, the addition of the smp_mb__after_spinlock() prohibits
the counter-intuitive result where the final value of r0 is 1, the final
value of z is 2, and the final value of r1 is 0.
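
Expressed in litmus-test terms, the outcome being discussed corresponds
to an "exists" clause along the following lines (a sketch here; see the
two Z6.0 litmus tests named above for the exact clause they use):

        exists (1:r0=1 /\ z=2 /\ 2:r1=0)

The first version allows this outcome, while the version with
smp_mb__after_spinlock() forbids it, matching the description above.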


No Roach-Motel Locking!
-----------------------

This example requires familiarity with the herd7 "filter" clause, so
please read up on that topic in litmus-tests.txt.

It is tempting to allow memory-reference instructions to be pulled
into a critical section, but this cannot be allowed in the general case.
For example, consider a spin loop preceding a lock-based critical section.
Now, herd7 does not model spin loops, but we can emulate one with two
loads, with a "filter" clause to constrain the first to return the
initial value and the second to return the updated value, as shown below:

        /* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
        void CPU0(void)
        {
                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                WRITE_ONCE(x, 1);
                spin_unlock(&lck);
        }

        void CPU1(void)
        {
                r0 = READ_ONCE(x);
                r1 = READ_ONCE(x);
                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                spin_unlock(&lck);
        }

        filter (1:r0=0 /\ 1:r1=1)
        exists (1:r2=1)

The variable "x" is the control variable for the emulated spin loop.
CPU0() sets it to "1" while holding the lock, and CPU1() emulates the
spin loop by reading it twice, first into "1:r0" (which should get the
initial value "0") and then into "1:r1" (which should get the updated
value "1").
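
For comparison, the real code being modeled would spin rather than
perform just two loads. A sketch of what CPU1()'s spin loop might look
like in kernel code (not part of the litmus test; cpu_relax() is the
conventional spin-wait hint) is as follows:

        /* Sketch of the spin loop that the two loads emulate. */
        while (!READ_ONCE(x))
                cpu_relax();    /* wait for CPU0() to set "x" */
        spin_lock(&lck);
        r2 = atomic_inc_return(&y);
        spin_unlock(&lck);

The two READ_ONCE() calls in CPU1() above stand in for successive
iterations of this loop.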

The "filter" clause takes this into account, constraining "1:r0" to
equal "0" and "1:r1" to equal "1".

Then the "exists" clause checks to see if CPU1() acquired its lock first,
which should not happen given the filter clause because CPU0() updates
"x" while holding the lock. And herd7 confirms this.

But suppose that the compiler was permitted to reorder the spin loop
into CPU1()'s critical section, like this:

        /* See Documentation/litmus-tests/locking/RM-broken.litmus. */
        void CPU0(void)
        {
                int r2;

                spin_lock(&lck);
                r2 = atomic_inc_return(&y);
                WRITE_ONCE(x, 1);
                spin_unlock(&lck);
        }

        void CPU1(void)
        {
                spin_lock(&lck);
                r0 = READ_ONCE(x);
                r1 = READ_ONCE(x);
                r2 = atomic_inc_return(&y);
                spin_unlock(&lck);
        }

        filter (1:r0=0 /\ 1:r1=1)
        exists (1:r2=1)

If "1:r0" is equal to "0", "1:r1" can never equal "1" because CPU0()
cannot update "x" while CPU1() holds the lock. And herd7 confirms this,
showing zero executions matching the "filter" criteria.

And this is why Linux-kernel lock and unlock primitives must prevent
code from entering critical sections. It is not sufficient to only
prevent code from leaving them.