MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is, "normal" as in accesses that
do not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1] and Linux Foundation mentorship
session [2].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.	Plain C-language accesses (unmarked), for example, "a = b;"

2.	Data-race marking, for example, "data_race(a = b);"

3.	READ_ONCE(), for example, "a = READ_ONCE(b);"
	The various forms of atomic_read() also fit in here.

4.	WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
	The various forms of atomic_set() also fit in here.

5.	__data_racy, for example, "int __data_racy a;"

6.	KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
	and ASSERT_EXCLUSIVE_WRITER(), are described in the
	"ACCESS-DOCUMENTATION OPTIONS" section below.

These may be used in combination, as shown in this admittedly improbable
example:

	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [3].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.	Data-racy loads from shared variables whose values are used only
	for diagnostic purposes.

2.	Data-racy reads whose values are checked against marked reload.

3.	Reads whose values feed into error-tolerant heuristics.

4.	Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.
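
For example, here is a minimal sketch of such a diagnostic read, in
which the foo_stats counter, its lock, and both functions are purely
hypothetical:

	int foo_stats;			/* Protected by foo_stats_lock. */
	DEFINE_SPINLOCK(foo_stats_lock);

	void inc_foo_stats(void)
	{
		spin_lock(&foo_stats_lock);
		foo_stats++;	/* Plain access, so KCSAN checks it. */
		spin_unlock(&foo_stats_lock);
	}

	/* Diagnostics only:  The occasional bogus value is harmless. */
	void show_foo_stats(void)
	{
		pr_info("foo_stats: %d\n", data_race(foo_stats));
	}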

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
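
For example, here is a minimal sketch of such a retry loop, in which
add_to_foo() and the variable foo are hypothetical:

	int foo;

	void add_to_foo(int i)
	{
		int newold;
		int old;

		newold = data_race(foo);	/* Checked by cmpxchg(). */
		do {
			old = newold;
			/* On failure, reuse cmpxchg()'s return value. */
			newold = cmpxchg(&foo, old, old + i);
		} while (newold != old);
	}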

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
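
For example, here is a minimal sketch of such a heuristic read, in which
foo_queue_len, foo_looks_busy(), and the threshold are hypothetical:

	int foo_queue_len;	/* Updated with WRITE_ONCE() elsewhere. */

	bool foo_looks_busy(void)
	{
		/*
		 * Heuristic only:  A stale or mangled value at worst
		 * causes a needless skip or a needless attempt, and
		 * the caller's retry loop contains a barrier() that
		 * forces the occasional reload.
		 */
		return data_race(foo_queue_len) > 10;
	}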

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristic might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
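
For example, here is a minimal sketch of such a write, in which
foo_threshold and its sysfs store function are hypothetical:

	int foo_threshold;	/* Read via data_race() by the heuristic. */

	static ssize_t foo_threshold_store(struct kobject *kobj,
					   struct kobj_attribute *attr,
					   const char *buf, size_t count)
	{
		int val;

		if (kstrtoint(buf, 0, &val))
			return -EINVAL;
		/* The heuristic tolerates a racy store. */
		data_race(foo_threshold = val);
		return count;
	}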

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.	Accesses protected by mutual exclusion, including strict locking
	and sequence locking.

2.	Initialization-time and cleanup-time accesses.  This covers a
	wide variety of situations, including the uniprocessor phase of
	system boot, variables to be used by not-yet-spawned kthreads,
	structures not yet published to reference-counted or RCU-protected
	data structures, and the cleanup side of any of these situations.

3.	Per-CPU variables that are not accessed from other CPUs (see the
	sketch at the end of this section).

4.	Private per-task variables, including on-stack variables, some
	fields in the task_struct structure, and task-private heap data.

5.	Any other loads for which there is not supposed to be a concurrent
	store to that same variable.

6.	Any other stores for which there should be neither concurrent
	loads nor concurrent stores to that same variable.

	But note that KCSAN makes three explicit exceptions to this rule
	by default, refraining from flagging plain C-language stores:

	a.	No matter what.  You can override this default by building
		with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

	b.	When the store writes the value already contained in
		that variable.  You can override this default by building
		with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

	c.	When one of the stores is in an interrupt handler and
		the other in the interrupted code.  You can override this
		default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.
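
For example, case 3 above might look like the following minimal sketch,
in which each CPU updates only its own counter; foo_events and
count_foo_event() are hypothetical:

	DEFINE_PER_CPU(int, foo_events);

	void count_foo_event(void)
	{
		int *p;

		preempt_disable();	/* Stay on this CPU. */
		p = this_cpu_ptr(&foo_events);
		*p = *p + 1;	/* Plain access:  No other CPU uses it. */
		preempt_enable();
	}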


Use of __data_racy
------------------

Adding the __data_racy type qualifier to the declaration of a variable
causes KCSAN to treat all accesses to that variable as if they were
enclosed by data_race().  However, __data_racy does not affect the
compiler, though one could imagine hardened kernel builds treating the
__data_racy type qualifier as if it were the volatile keyword.

Note well that __data_racy is subject to the same pointer-declaration
rules as are other type qualifiers such as const and volatile.
For example:

	int __data_racy *p; // Pointer to data-racy data.
	int *__data_racy p; // Data-racy pointer to non-data-racy data.
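
As a further illustration, in this minimal sketch the hypothetical
diagnostics-only counter foo_error_count is declared __data_racy so
that accesses to it need not be individually marked:

	int __data_racy foo_error_count;	/* Diagnostics only. */

	void note_foo_error(void)
	{
		foo_error_count++;	/* Treated as if data_race(). */
	}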


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		do_something_else();
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
	}

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

	void __no_kcsan read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
	}

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell a hardware watchpoint to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent write, whether marked or not.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks.  One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value first read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, and if that recheck also finds
global_flag false, holding ->f_lock prevents any concurrent
begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() call allows KCSAN to flag buggy concurrent
unmarked reads, and further allows KCSAN to flag buggy concurrent
writes, even if:  (1) Those writes are marked or (2) The kernel was
built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.
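
For example, a minimal sketch of such an #ifdef defines a wrapper macro
that compiles to nothing in ordinary builds; the TEST_FOO_RACE_COVERAGE
symbol and the macro name are hypothetical:

	#ifdef TEST_FOO_RACE_COVERAGE
	#define ASSERT_FOO_EXCLUSIVE_ACCESS(var) ASSERT_EXCLUSIVE_ACCESS(var)
	#else
	#define ASSERT_FOO_EXCLUSIVE_ACCESS(var) do { } while (0)
	#endif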


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "The Kernel Concurrency Sanitizer"
    https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer

[3] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/