Path: tools/memory-model/Documentation/access-marking.txt
MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations. It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN). This discussion
builds on an earlier LWN article [1] and Linux Foundation mentorship
session [2].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.	Plain C-language accesses (unmarked), for example, "a = b;"

2.	Data-race marking, for example, "data_race(a = b);"

3.	READ_ONCE(), for example, "a = READ_ONCE(b);"
	The various forms of atomic_read() also fit in here.

4.	WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
	The various forms of atomic_set() also fit in here.

5.	__data_racy, for example "int __data_racy a;"

6.	KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
	and ASSERT_EXCLUSIVE_WRITER(), are described in the
	"ACCESS-DOCUMENTATION OPTIONS" section below.

These may be used in combination, as shown in this admittedly improbable
example:

	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [3].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses. It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.
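For example, consider the following sketch of a polling loop (the flag
name is hypothetical). Because nothing in the loop tells the compiler
that some other thread might change the flag, a plain C-language load
could legitimately be hoisted out of the loop (load fusing), turning the
wait into an infinite loop. READ_ONCE() forbids that transformation:

	/* Hypothetical flag, set by some other thread. */
	extern bool need_to_stop;

	static void wait_for_stop_request(void)
	{
		while (!READ_ONCE(need_to_stop))	/* A plain load here could be fused. */
			cpu_relax();
	}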
KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining. Of course, lack of KCSAN complaints does not
imply correct code. Therefore, please take a thoughtful approach
when responding to KCSAN complaints. Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.	Data-racy loads from shared variables whose values are used only
	for diagnostic purposes.

2.	Data-racy reads whose values are checked against marked reload.

3.	Reads whose values feed into error-tolerant heuristics.

4.	Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design. For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x. Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates. In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted. They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem. For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry. Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures. Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
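For example, a retry loop along the following lines (a sketch built
around a hypothetical bounded counter) uses data_race() only for the
initial guess and thereafter feeds each failing cmpxchg()'s return value
into the next iteration:

	int add_bounded(int *counter, int delta, int limit)
	{
		int new, old, newold;

		newold = data_race(*counter); /* Checked by cmpxchg() below. */
		do {
			old = newold;
			new = min(old + delta, limit);
			/* On failure, cmpxchg() returns the current value... */
			newold = cmpxchg(counter, old, new);
		} while (newold != old); /* ...which seeds the next attempt. */
		return old;
	}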
In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors. Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables. But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.

Note that this use case requires that the heuristic be able to handle
any possible error. In contrast, if the heuristics might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs. This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.

Plain C-language stores can also be used for this use case. However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.
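As a combined sketch of the last two cases (all names are hypothetical),
a heuristic tunable might be written from a sysfs handler and read from
a hot path, with data_race() on both sides because an occasional stale
or mangled value merely mistunes the heuristic:

	static int spin_threshold = 16; /* Heuristic tunable, set via sysfs. */

	/* Store side: reached from a sysfs ->store() callback. */
	void set_spin_threshold(int val)
	{
		data_race(spin_threshold = val);
	}

	/* Read side: a bogus value costs at most a little extra spinning. */
	bool should_keep_spinning(int nr_spins)
	{
		return nr_spins < data_race(spin_threshold);
	}

If the read instead sat in a tight loop, some other code (for example,
a barrier() call) would be needed to force the occasional reload, as
noted above.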
Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.	Accesses protected by mutual exclusion, including strict locking
	and sequence locking.

2.	Initialization-time and cleanup-time accesses. This covers a
	wide variety of situations, including the uniprocessor phase of
	system boot, variables to be used by not-yet-spawned kthreads,
	structures not yet published to reference-counted or RCU-protected
	data structures, and the cleanup side of any of these situations.

3.	Per-CPU variables that are not accessed from other CPUs.

4.	Private per-task variables, including on-stack variables, some
	fields in the task_struct structure, and task-private heap data.

5.	Any other loads for which there is not supposed to be a concurrent
	store to that same variable.

6.	Any other stores for which there should be neither concurrent
	loads nor concurrent stores to that same variable.

	But note that KCSAN makes three explicit exceptions to this rule
	by default, refraining from flagging plain C-language stores:

	a.	No matter what. You can override this default by building
		with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

	b.	When the store writes the value already contained in
		that variable. You can override this default by building
		with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

	c.	When one of the stores is in an interrupt handler and
		the other in the interrupted code. You can override this
		default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.


Use of __data_racy
------------------

Adding the __data_racy type qualifier to the declaration of a variable
causes KCSAN to treat all accesses to that variable as if they were
enclosed by data_race(). However, __data_racy does not affect the
compiler, though one could imagine hardened kernel builds treating the
__data_racy type qualifier as if it was the volatile keyword.

Note well that __data_racy is subject to the same pointer-declaration
rules as are other type qualifiers such as const and volatile.
For example:

	int __data_racy *p; // Pointer to data-racy data.
	int *__data_racy p; // Data-racy pointer to non-data-racy data.
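As a sketch (again with hypothetical names), a counter that exists only
for reporting might be declared __data_racy so that none of its accesses
needs individual marking:

	static unsigned long __data_racy total_retries; /* Diagnostics only. */

	void note_retry(void)
	{
		total_retries++; /* Racy increment: lost counts are acceptable here. */
	}

	void report_retries(void)
	{
		pr_info("approximately %lu retries\n", total_retries);
	}

Note that the increments really are data races, so occasional lost counts
must be acceptable to the design: __data_racy only silences KCSAN, it
does not make the accesses atomic.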
ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races. Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.
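For example, a comment along these lines (a hypothetical snippet) records
both why the access is marked and why a racy value is tolerable:

	/*
	 * Diagnostic-only racy read of the queue length. Updates are
	 * protected by q->lock and deliberately left unmarked so that
	 * KCSAN can still check them; a stale value here only makes
	 * the debugfs output approximate.
	 */
	seq_printf(m, "approximate queue length: %d\n", data_race(q->qlen));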
It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE(). In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design. This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes. The code might look as follows:

	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		do_something_else();
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses. One benefit of making them be
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo. The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored. This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
	}

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

	void __no_kcsan read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
	}

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n. If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y. With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell a hardware watchpoint to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly. The code might look
as follows:

	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is read locklessly, all accesses are marked. The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent write, whether marked or not.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required. These times are indicated via a global flag. The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks. One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is still false,
prevents begin_global() from completing. It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false. The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked(). The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly. The code might look as follows:

	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is accessed locklessly, all accesses are marked. It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers. KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
-------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo. Some snippets of this code
might look as follows:

	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization. The
ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
reads, and the ASSERT_EXCLUSIVE_ACCESS() call further allows KCSAN to
flag buggy concurrent writes, even if: (1) Those writes are marked or
(2) The kernel was built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur. For example, consider the following code
fragment:

	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen. KCSAN can evaluate the stress test when the above code
is modified to read as follows:

	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement. If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.
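One way to do that (a sketch; the Kconfig symbol is hypothetical) is to
hide the assertions behind a wrapper macro that compiles away outside of
stress-test builds:

	#ifdef CONFIG_FOO_STRESS_TEST
	#define STRESS_ASSERT_EXCLUSIVE_ACCESS(var) ASSERT_EXCLUSIVE_ACCESS(var)
	#else
	#define STRESS_ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
	#endif

The functions above would then invoke STRESS_ASSERT_EXCLUSIVE_ACCESS()
instead of ASSERT_EXCLUSIVE_ACCESS(), so that ordinary builds contain no
assertions and therefore produce no false positives.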
REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "The Kernel Concurrency Sanitizer"
    https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer

[3] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/