Using RCU to Protect Read-Mostly Linked Lists


One of the best applications of RCU is to protect read-mostly linked lists
("struct list_head" in list.h). One big advantage of this approach
is that all of the required memory barriers are included for you in
the list macros. This document describes several applications of RCU,
with the best fits first.


Example 1: Read-Side Action Taken Outside of Lock, No In-Place Updates

The best applications are cases where, if reader-writer locking were
used, the read-side lock would be dropped before taking any action
based on the results of the search. The most celebrated example is
the routing table. Because the routing table is tracking the state of
equipment outside of the computer, it will at times contain stale data.
Therefore, once the route has been computed, there is no need to hold
the routing table static during transmission of the packet. After all,
you can hold the routing table static all you want, but that won't keep
the external Internet from changing, and it is the state of the external
Internet that really matters. In addition, routing entries are typically
added or deleted, rather than being modified in place.

A straightforward example of this use of RCU may be found in the
system-call auditing support. For example, a reader-writer locked
implementation of audit_filter_task() might be as follows:

	static enum audit_state audit_filter_task(struct task_struct *tsk)
	{
		struct audit_entry *e;
		enum audit_state state;

		read_lock(&auditsc_lock);
		/* Note: audit_netlink_sem held by caller. */
		list_for_each_entry(e, &audit_tsklist, list) {
			if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
				read_unlock(&auditsc_lock);
				return state;
			}
		}
		read_unlock(&auditsc_lock);
		return AUDIT_BUILD_CONTEXT;
	}

Here the list is searched under the lock, but the lock is dropped before
the corresponding value is returned. By the time that this value is acted
on, the list may well have been modified. This makes sense, since if
you are turning auditing off, it is OK to audit a few extra system calls.

This means that RCU can be easily applied to the read side, as follows:

	static enum audit_state audit_filter_task(struct task_struct *tsk)
	{
		struct audit_entry *e;
		enum audit_state state;

		rcu_read_lock();
		/* Note: audit_netlink_sem held by caller. */
		list_for_each_entry_rcu(e, &audit_tsklist, list) {
			if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
				rcu_read_unlock();
				return state;
			}
		}
		rcu_read_unlock();
		return AUDIT_BUILD_CONTEXT;
	}

The read_lock() and read_unlock() calls have become rcu_read_lock()
and rcu_read_unlock(), respectively, and the list_for_each_entry() has
become list_for_each_entry_rcu(). The _rcu() list-traversal primitives
insert the read-side memory barriers that are required on DEC Alpha CPUs.
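
Conceptually, and ignoring the real macro's debugging checks,
list_for_each_entry_rcu() behaves roughly like the following
hand-rolled loop, in which rcu_dereference() supplies the needed
memory ordering (a sketch for illustration only, not the actual
macro definition):

	struct audit_entry *e;
	struct list_head *p;

	rcu_read_lock();
	for (p = rcu_dereference(audit_tsklist.next);
	     p != &audit_tsklist;
	     p = rcu_dereference(p->next)) {
		e = list_entry(p, struct audit_entry, list);
		/* e cannot be freed out from under us here, since
		 * the RCU read-side critical section is still active. */
	}
	rcu_read_unlock();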

The changes to the update side are also straightforward. A reader-writer
lock might be used as follows for deletion and insertion:

	static inline int audit_del_rule(struct audit_rule *rule,
					 struct list_head *list)
	{
		struct audit_entry *e;

		write_lock(&auditsc_lock);
		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				list_del(&e->list);
				write_unlock(&auditsc_lock);
				return 0;
			}
		}
		write_unlock(&auditsc_lock);
		return -EFAULT;	/* No matching rule */
	}

	static inline int audit_add_rule(struct audit_entry *entry,
					 struct list_head *list)
	{
		write_lock(&auditsc_lock);
		if (entry->rule.flags & AUDIT_PREPEND) {
			entry->rule.flags &= ~AUDIT_PREPEND;
			list_add(&entry->list, list);
		} else {
			list_add_tail(&entry->list, list);
		}
		write_unlock(&auditsc_lock);
		return 0;
	}

Following are the RCU equivalents for these two functions:

	static inline int audit_del_rule(struct audit_rule *rule,
					 struct list_head *list)
	{
		struct audit_entry *e;

		/* Do not use the _rcu iterator here, since this is the only
		 * deletion routine. */
		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				list_del_rcu(&e->list);
				call_rcu(&e->rcu, audit_free_rule);
				return 0;
			}
		}
		return -EFAULT;	/* No matching rule */
	}

	static inline int audit_add_rule(struct audit_entry *entry,
					 struct list_head *list)
	{
		if (entry->rule.flags & AUDIT_PREPEND) {
			entry->rule.flags &= ~AUDIT_PREPEND;
			list_add_rcu(&entry->list, list);
		} else {
			list_add_tail_rcu(&entry->list, list);
		}
		return 0;
	}

Normally, the write_lock() and write_unlock() calls would be replaced
by a spin_lock() and a spin_unlock(), but in this case, all callers
hold audit_netlink_sem, so no additional locking is required. The
auditsc_lock can therefore be eliminated, since use of RCU eliminates
the need for writers to exclude readers.

The list_del(), list_add(), and list_add_tail() primitives have been
replaced by list_del_rcu(), list_add_rcu(), and list_add_tail_rcu().
The _rcu() list-manipulation primitives add memory barriers that are
needed on weakly ordered CPUs (most of them!). The list_del_rcu()
primitive omits the pointer poisoning debug-assist code that would
otherwise cause concurrent readers to fail spectacularly.
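
The examples above pass audit_free_rule() to call_rcu() without showing
it. A minimal sketch, assuming that struct audit_entry embeds a
"struct rcu_head rcu" (as the call_rcu() invocation above implies) and
that an entry needs no cleanup beyond kfree(), might look as follows:

	static void audit_free_rule(struct rcu_head *head)
	{
		/* Recover the enclosing entry from its rcu_head. */
		struct audit_entry *e = container_of(head, struct audit_entry, rcu);

		kfree(e);
	}

Because call_rcu() defers this function until after a grace period,
no reader can still be referencing the entry when it is freed.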

So, when readers can tolerate stale data and when entries are either added
or deleted, without in-place modification, it is very easy to use RCU!


Example 2: Handling In-Place Updates

The system-call auditing code does not update auditing rules in place.
However, if it did, reader-writer-locked code to do so might look as
follows (presumably, the field_count is only permitted to decrease;
otherwise, the added fields would need to be filled in):

	static inline int audit_upd_rule(struct audit_rule *rule,
					 struct list_head *list,
					 __u32 newaction,
					 __u32 newfield_count)
	{
		struct audit_entry *e;

		write_lock(&auditsc_lock);
		/* Note: audit_netlink_sem held by caller. */
		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				e->rule.action = newaction;
				e->rule.field_count = newfield_count;
				write_unlock(&auditsc_lock);
				return 0;
			}
		}
		write_unlock(&auditsc_lock);
		return -EFAULT;	/* No matching rule */
	}

The RCU version creates a copy, updates the copy, then replaces the old
entry with the newly updated entry. This sequence of actions, allowing
concurrent reads while doing a copy to perform an update, is what gives
RCU ("read-copy update") its name. The RCU code is as follows:

	static inline int audit_upd_rule(struct audit_rule *rule,
					 struct list_head *list,
					 __u32 newaction,
					 __u32 newfield_count)
	{
		struct audit_entry *e;
		struct audit_entry *ne;

		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				ne = kmalloc(sizeof(*ne), GFP_ATOMIC);
				if (ne == NULL)
					return -ENOMEM;
				audit_copy_rule(&ne->rule, &e->rule);
				ne->rule.action = newaction;
				ne->rule.field_count = newfield_count;
				list_replace_rcu(&e->list, &ne->list);
				call_rcu(&e->rcu, audit_free_rule);
				return 0;
			}
		}
		return -EFAULT;	/* No matching rule */
	}

Again, this assumes that the caller holds audit_netlink_sem. Normally,
the reader-writer lock would become a spinlock in this sort of code.


Example 3: Eliminating Stale Data

The auditing examples above tolerate stale data, as do most algorithms
that are tracking external state. Because there is a delay between the
time the external state changes and the time Linux becomes aware of the
change, additional RCU-induced staleness is normally not a problem.

However, there are many examples where stale data cannot be tolerated.
One example in the Linux kernel is the System V IPC code (see the
ipc_lock() function in ipc/util.c). This code checks a "deleted" flag
under a per-entry spinlock, and, if the "deleted" flag is set, pretends
that the entry does not exist. For this to be helpful, the search
function must return holding the per-entry spinlock, as ipc_lock()
does in fact do.
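
Schematically, a search function using this technique might look as
follows (a sketch with hypothetical names, loosely modeled on
ipc_lock() rather than copied from it):

	/* Return the matching entry with its lock held, or NULL. */
	static struct my_entry *my_find(struct list_head *head, int key)
	{
		struct my_entry *e;

		rcu_read_lock();
		list_for_each_entry_rcu(e, head, list) {
			if (e->key == key) {
				spin_lock(&e->lock);
				if (!e->deleted) {
					rcu_read_unlock();
					return e;	/* Caller must drop e->lock. */
				}
				spin_unlock(&e->lock);
				break;
			}
		}
		rcu_read_unlock();
		return NULL;
	}

Holding e->lock on return is safe even after rcu_read_unlock(), because
in this scheme the deletion path must acquire that same lock before it
can mark the entry deleted and schedule its freeing.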

Quick Quiz: Why does the search function need to return holding the
	per-entry lock for this deleted-flag technique to be helpful?

If the system-call audit module were to ever need to reject stale data,
one way to accomplish this would be to add a "deleted" flag and a "lock"
spinlock to the audit_entry structure, and modify audit_filter_task()
as follows:

	static enum audit_state audit_filter_task(struct task_struct *tsk)
	{
		struct audit_entry *e;
		enum audit_state state;

		rcu_read_lock();
		list_for_each_entry_rcu(e, &audit_tsklist, list) {
			if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
				spin_lock(&e->lock);
				if (e->deleted) {
					spin_unlock(&e->lock);
					rcu_read_unlock();
					return AUDIT_BUILD_CONTEXT;
				}
				spin_unlock(&e->lock);
				rcu_read_unlock();
				return state;
			}
		}
		rcu_read_unlock();
		return AUDIT_BUILD_CONTEXT;
	}

Note that this example assumes that entries are only added and deleted.
An additional mechanism is required to deal correctly with the
update-in-place performed by audit_upd_rule(). For one thing,
audit_upd_rule() would need additional memory barriers to ensure
that the list_add_rcu() was really executed before the list_del_rcu().
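
For example, the replacement step in audit_upd_rule() might mark the
old entry deleted under its lock, so that a reader still examining the
old entry rejects it (a sketch of one possible approach, not code from
the original):

	spin_lock(&e->lock);
	list_replace_rcu(&e->list, &ne->list);
	e->deleted = 1;
	spin_unlock(&e->lock);
	call_rcu(&e->rcu, audit_free_rule);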

The audit_del_rule() function would need to set the "deleted"
flag under the spinlock as follows:

	static inline int audit_del_rule(struct audit_rule *rule,
					 struct list_head *list)
	{
		struct audit_entry *e;

		/* Do not need to use the _rcu iterator here, since this
		 * is the only deletion routine. */
		list_for_each_entry(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				spin_lock(&e->lock);
				list_del_rcu(&e->list);
				e->deleted = 1;
				spin_unlock(&e->lock);
				call_rcu(&e->rcu, audit_free_rule);
				return 0;
			}
		}
		return -EFAULT;	/* No matching rule */
	}


Summary

Read-mostly list-based data structures that can tolerate stale data are
the most amenable to use of RCU. The simplest case is where entries are
either added to or deleted from the data structure (or atomically
modified in place), but non-atomic in-place modifications can be handled
by making a copy, updating the copy, then replacing the original with
the copy. If stale data cannot be tolerated, then a "deleted" flag may
be used in conjunction with a per-entry spinlock in order to allow the
search function to reject newly deleted data.


Answer to Quick Quiz
	Why does the search function need to return holding the per-entry
	lock for this deleted-flag technique to be helpful?

	If the search function drops the per-entry lock before returning,
	then the caller will be processing stale data in any case. If it
	is really OK to be processing stale data, then you don't need a
	"deleted" flag. If processing stale data really is a problem,
	then you need to hold the per-entry lock across all of the code
	that uses the value that was returned.