GitHub Repository: awilliam/linux-vfio
Path: blob/master/Documentation/RCU/rcuref.txt
Reference-count design for elements of lists/arrays protected by RCU.

Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            read_lock(&list_lock);
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    write_lock(&list_lock);                 ...
    add_element                             read_unlock(&list_lock);
    ...                                     ...
    write_unlock(&list_lock);           }
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     write_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        kfree(el);                          delete_element
    ...                                     write_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                kfree(el);
                                            ...
                                        }

If this list/array is made lock-free using RCU, by changing the
write_lock() in add() and delete() to spin_lock() and the read_lock()
in search_and_reference() to rcu_read_lock(), the atomic_inc() in
search_and_reference() could potentially take a reference to an element
that has already been deleted from the list/array.  This is because
rcu_read_lock() no longer excludes the updater: search_and_reference()
can find an element after delete() has unlinked it and dropped the
list's reference.  Use atomic_inc_not_zero() in this scenario as
follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 if (!atomic_inc_not_zero(&el->rc)) {
    spin_lock(&list_lock);                      rcu_read_unlock();
                                                return FAIL;
    add_element                             }
    ...                                     ...
    spin_unlock(&list_lock);                rcu_read_unlock();
}                                       }

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       delete_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                call_rcu(&el->head, el_free);
                                            ...
                                        }
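
The correctness of this version hinges on atomic_inc_not_zero() failing
once the count has already reached zero, so a reader racing with the
final decrement cannot resurrect a dying element.  A user-space sketch
of its semantics, using a C11 compare-and-swap loop (the kernel's actual
implementation differs in detail):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Increment *rc unless it is zero.  Returns true if the increment
 * happened, false if the element was already on its way to being
 * freed.  The CAS loop guarantees the 0 -> 1 transition never occurs
 * here: only a nonzero value is ever incremented. */
static bool atomic_inc_not_zero(atomic_int *rc)
{
    int old = atomic_load(rc);
    while (old != 0) {
        if (atomic_compare_exchange_weak(rc, &old, old + 1))
            return true;
        /* CAS failed: old now holds the current value; retry. */
    }
    return false;
}
```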

Sometimes, a reference to the element needs to be obtained in the
update (write) stream.  In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock.  One might instead
use atomic_inc() in such cases.
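
A sketch of that update-side case, with hypothetical names and a
pthread mutex standing in for the update-side spinlock: while the lock
is held, delete() cannot run, so the element is still linked and its
count is at least the list's own reference.  A plain increment therefore
cannot race with the final decrement:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

struct element {
    atomic_int rc;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical update-side helper.  Caller must hold list_lock, which
 * guarantees rc >= 1 (the list's reference) for the whole call, so the
 * cheaper unconditional increment is safe here. */
static void get_element_locked(struct element *el)
{
    atomic_fetch_add(&el->rc, 1);
}
```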