GitHub Repository: freebsd/freebsd-src
Path: blob/main/sys/contrib/openzfs/module/zfs/arc.c
1
// SPDX-License-Identifier: CDDL-1.0
2
/*
3
* CDDL HEADER START
4
*
5
* The contents of this file are subject to the terms of the
6
* Common Development and Distribution License (the "License").
7
* You may not use this file except in compliance with the License.
8
*
9
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
10
* or https://opensource.org/licenses/CDDL-1.0.
11
* See the License for the specific language governing permissions
12
* and limitations under the License.
13
*
14
* When distributing Covered Code, include this CDDL HEADER in each
15
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
16
* If applicable, add the following below this CDDL HEADER, with the
17
* fields enclosed by brackets "[]" replaced with your own identifying
18
* information: Portions Copyright [yyyy] [name of copyright owner]
19
*
20
* CDDL HEADER END
21
*/
22
/*
23
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
24
* Copyright (c) 2018, Joyent, Inc.
25
* Copyright (c) 2011, 2020, Delphix. All rights reserved.
26
* Copyright (c) 2014, Saso Kiselkov. All rights reserved.
27
* Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
28
* Copyright (c) 2019, loli10K <[email protected]>. All rights reserved.
29
* Copyright (c) 2020, George Amanakis. All rights reserved.
30
* Copyright (c) 2019, 2024, 2025, Klara, Inc.
31
* Copyright (c) 2019, Allan Jude
32
* Copyright (c) 2020, The FreeBSD Foundation [1]
33
* Copyright (c) 2021, 2024 by George Melikov. All rights reserved.
34
*
35
* [1] Portions of this software were developed by Allan Jude
36
* under sponsorship from the FreeBSD Foundation.
37
*/
38
39
/*
40
* DVA-based Adjustable Replacement Cache
41
*
42
* While much of the theory of operation used here is
43
* based on the self-tuning, low overhead replacement cache
44
* presented by Megiddo and Modha at FAST 2003, there are some
45
* significant differences:
46
*
47
* 1. The Megiddo and Modha model assumes any page is evictable.
48
* Pages in its cache cannot be "locked" into memory. This makes
49
* the eviction algorithm simple: evict the last page in the list.
50
* This also makes the performance characteristics easy to reason
51
* about. Our cache is not so simple. At any given moment, some
52
* subset of the blocks in the cache are un-evictable because we
53
* have handed out a reference to them. Blocks are only evictable
54
* when there are no external references active. This makes
55
* eviction far more problematic: we choose to evict the evictable
56
* blocks that are the "lowest" in the list.
57
*
58
* There are times when it is not possible to evict the requested
59
* space. In these circumstances we are unable to adjust the cache
60
* size. To prevent the cache growing unbounded at these times we
61
* implement a "cache throttle" that slows the flow of new data
62
* into the cache until we can make space available.
63
*
64
* 2. The Megiddo and Modha model assumes a fixed cache size.
65
* Pages are evicted when the cache is full and there is a cache
66
* miss. Our model has a variable sized cache. It grows with
67
* high use, but also tries to react to memory pressure from the
68
* operating system: decreasing its size when system memory is
69
* tight.
70
*
71
* 3. The Megiddo and Modha model assumes a fixed page size. All
72
* elements of the cache are therefore exactly the same size. So
73
* when adjusting the cache size following a cache miss, it's simply
74
* a matter of choosing a single page to evict. In our model, we
75
* have variable sized cache blocks (ranging from 512 bytes to
76
* 128K bytes). We therefore choose a set of blocks to evict to make
77
* space for a cache miss that approximates as closely as possible
78
* the space used by the new block.
79
*
80
* See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
81
* by N. Megiddo & D. Modha, FAST 2003
82
*/
83
84
/*
85
* The locking model:
86
*
87
* A new reference to a cache buffer can be obtained in two
88
* ways: 1) via a hash table lookup using the DVA as a key,
89
* or 2) via one of the ARC lists. The arc_read() interface
90
* uses method 1, while the internal ARC algorithms for
91
* adjusting the cache use method 2. We therefore provide two
92
* types of locks: 1) the hash table lock array, and 2) the
93
* ARC list locks.
94
*
95
* Buffers do not have their own mutexes, rather they rely on the
96
* hash table mutexes for the bulk of their protection (i.e. most
97
* fields in the arc_buf_hdr_t are protected by these mutexes).
98
*
99
* buf_hash_find() returns the appropriate mutex (held) when it
100
* locates the requested buffer in the hash table. It returns
101
* NULL for the mutex if the buffer was not in the table.
102
*
103
* buf_hash_remove() expects the appropriate hash mutex to be
104
* already held before it is invoked.
105
*
106
* Each ARC state also has a mutex which is used to protect the
107
* buffer list associated with the state. When attempting to
108
* obtain a hash table lock while holding an ARC list lock you
109
* must use: mutex_tryenter() to avoid deadlock. Also note that
110
* the active state mutex must be held before the ghost state mutex.
111
*
112
* It is also possible to register a callback which is run when the
113
* metadata limit is reached and no buffers can be safely evicted. In
114
* this case the arc user should drop a reference on some arc buffers so
115
* they can be reclaimed. For example, when using the ZPL each dentry
116
* holds a reference on a znode. These dentries must be pruned before
117
* the arc buffer holding the znode can be safely evicted.
118
*
119
* Note that the majority of the performance stats are manipulated
120
* with atomic operations.
121
*
122
* The L2ARC uses the l2ad_mtx on each vdev for the following:
123
*
124
* - L2ARC buflist creation
125
* - L2ARC buflist eviction
126
* - L2ARC write completion, which walks L2ARC buflists
127
* - ARC header destruction, as it removes from L2ARC buflists
128
* - ARC header release, as it removes from L2ARC buflists
129
*/
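/*
 * Editorial illustration (not part of the original source): a minimal sketch
 * of the lock-ordering rule above, as it might look on an eviction path.  The
 * control flow here is hypothetical; mutex_tryenter(), mutex_exit(),
 * HDR_LOCK() and the arcstat_mutex_miss counter are the pieces taken from
 * this file.
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);	(ARC list lock already held)
 *	if (!mutex_tryenter(hash_lock)) {
 *		ARCSTAT_BUMP(arcstat_mutex_miss);
 *		continue;			(skip this hdr, never block)
 *	}
 *	... examine or evict hdr ...
 *	mutex_exit(hash_lock);
 */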
130
131
/*
132
* ARC operation:
133
*
134
* Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
135
* This structure can point either to a block that is still in the cache or to
136
* one that is only accessible in an L2 ARC device, or it can provide
137
* information about a block that was recently evicted. If a block is
138
* only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
139
* information to retrieve it from the L2ARC device. This information is
140
* stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
141
* that is in this state cannot access the data directly.
142
*
143
* Blocks that are actively being referenced or have not been evicted
144
* are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
145
* the arc_buf_hdr_t that will point to the data block in memory. A block can
146
* only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
147
* caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
148
* also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
149
*
150
* The L1ARC's data pointer may or may not be uncompressed. The ARC has the
151
* ability to store the physical data (b_pabd) associated with the DVA of the
152
* arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
153
* it will match its on-disk compression characteristics. This behavior can be
154
* disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
155
* compressed ARC functionality is disabled, the b_pabd will point to an
156
* uncompressed version of the on-disk data.
157
*
158
* Data in the L1ARC is not accessed by consumers of the ARC directly. Each
159
* arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
160
* Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
161
* consumer. The ARC will provide references to this data and will keep it
162
* cached until it is no longer in use. The ARC caches only the L1ARC's physical
163
* data block and will evict any arc_buf_t that is no longer referenced. The
164
* amount of memory consumed by the arc_buf_ts' data buffers can be seen via the
165
* "overhead_size" kstat.
166
*
167
* Depending on the consumer, an arc_buf_t can be requested in uncompressed or
168
* compressed form. The typical case is that consumers will want uncompressed
169
* data, and when that happens a new data buffer is allocated where the data is
170
* decompressed for them to use. Currently the only consumer who wants
171
* compressed arc_buf_t's is "zfs send", when it streams data exactly as it
172
* exists on disk. When this happens, the arc_buf_t's data buffer is shared
173
* with the arc_buf_hdr_t.
174
*
175
* Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The
176
* first one is owned by a compressed send consumer (and therefore references
177
* the same compressed data buffer as the arc_buf_hdr_t) and the second could be
178
* used by any other consumer (and has its own uncompressed copy of the data
179
* buffer).
180
*
181
* arc_buf_hdr_t
182
* +-----------+
183
* | fields |
184
* | common to |
185
* | L1- and |
186
* | L2ARC |
187
* +-----------+
188
* | l2arc_buf_hdr_t
189
* | |
190
* +-----------+
191
* | l1arc_buf_hdr_t
192
* | | arc_buf_t
193
* | b_buf +------------>+-----------+ arc_buf_t
194
* | b_pabd +-+ |b_next +---->+-----------+
195
* +-----------+ | |-----------| |b_next +-->NULL
196
* | |b_comp = T | +-----------+
197
* | |b_data +-+ |b_comp = F |
198
* | +-----------+ | |b_data +-+
199
* +->+------+ | +-----------+ |
200
* compressed | | | |
201
* data | |<--------------+ | uncompressed
202
* +------+ compressed, | data
203
* shared +-->+------+
204
* data | |
205
* | |
206
* +------+
207
*
208
* When a consumer reads a block, the ARC must first look to see if the
209
* arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
210
* arc_buf_t and either copies uncompressed data into a new data buffer from an
211
* existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
212
* new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
213
* hdr is compressed and the desired compression characteristics of the
214
* arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
215
* arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
216
* the last buffer in the hdr's b_buf list, however a shared compressed buf can
217
* be anywhere in the hdr's list.
218
*
219
* The diagram below shows an example of an uncompressed ARC hdr that is
220
* sharing its data with an arc_buf_t (note that the shared uncompressed buf is
221
* the last element in the buf list):
222
*
223
* arc_buf_hdr_t
224
* +-----------+
225
* | |
226
* | |
227
* | |
228
* +-----------+
229
* l2arc_buf_hdr_t| |
230
* | |
231
* +-----------+
232
* l1arc_buf_hdr_t| |
233
* | | arc_buf_t (shared)
234
* | b_buf +------------>+---------+ arc_buf_t
235
* | | |b_next +---->+---------+
236
* | b_pabd +-+ |---------| |b_next +-->NULL
237
* +-----------+ | | | +---------+
238
* | |b_data +-+ | |
239
* | +---------+ | |b_data +-+
240
* +->+------+ | +---------+ |
241
* | | | |
242
* uncompressed | | | |
243
* data +------+ | |
244
* ^ +->+------+ |
245
* | uncompressed | | |
246
* | data | | |
247
* | +------+ |
248
* +---------------------------------+
249
*
250
* Writing to the ARC requires that the ARC first discard the hdr's b_pabd
251
* since the physical block is about to be rewritten. The new data contents
252
* will be contained in the arc_buf_t. As the I/O pipeline performs the write,
253
* it may compress the data before writing it to disk. The ARC will be called
254
* with the transformed data and will memcpy the transformed on-disk block into
255
* a newly allocated b_pabd. Writes are always done into buffers which have
256
* either been loaned (and hence are new and don't have other readers) or
257
* buffers which have been released (and hence have their own hdr, if there
258
* were originally other readers of the buf's original hdr). This ensures that
259
* the ARC only needs to update a single buf and its hdr after a write occurs.
260
*
261
* When the L2ARC is in use, it will also take advantage of the b_pabd. The
262
* L2ARC will always write the contents of b_pabd to the L2ARC. This means
263
* that when compressed ARC is enabled, the L2ARC blocks are identical
264
* to the on-disk block in the main data pool. This provides a significant
265
* advantage since the ARC can leverage the bp's checksum when reading from the
266
* L2ARC to determine if the contents are valid. However, if the compressed
267
* ARC is disabled, then the L2ARC's block must be transformed to look
268
* like the physical block in the main data pool before comparing the
269
* checksum and determining its validity.
270
*
271
* The L1ARC has a slightly different system for storing encrypted data.
272
* Raw (encrypted + possibly compressed) data has a few subtle differences from
273
* data that is just compressed. The biggest difference is that it is not
274
* possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded.
275
* The other difference is that encryption cannot be treated as a suggestion.
276
* If a caller would prefer compressed data, but they actually wind up with
277
* uncompressed data, the worst thing that could happen is there might be a
278
* performance hit. If the caller requests encrypted data, however, we must be
279
* sure they actually get it or else secret information could be leaked. Raw
280
* data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore,
281
* may have both an encrypted version and a decrypted version of its data at
282
* once. When a caller needs a raw arc_buf_t, it is allocated and the data is
283
* copied out of this header. To avoid complications with b_pabd, raw buffers
284
* cannot be shared.
285
*/
286
287
#include <sys/spa.h>
288
#include <sys/zio.h>
289
#include <sys/spa_impl.h>
290
#include <sys/zio_compress.h>
291
#include <sys/zio_checksum.h>
292
#include <sys/zfs_context.h>
293
#include <sys/arc.h>
294
#include <sys/zfs_refcount.h>
295
#include <sys/vdev.h>
296
#include <sys/vdev_impl.h>
297
#include <sys/dsl_pool.h>
298
#include <sys/multilist.h>
299
#include <sys/abd.h>
300
#include <sys/dbuf.h>
301
#include <sys/zil.h>
302
#include <sys/fm/fs/zfs.h>
303
#include <sys/callb.h>
304
#include <sys/kstat.h>
305
#include <sys/zthr.h>
306
#include <zfs_fletcher.h>
307
#include <sys/arc_impl.h>
308
#include <sys/trace_zfs.h>
309
#include <sys/aggsum.h>
310
#include <sys/wmsum.h>
311
#include <cityhash.h>
312
#include <sys/vdev_trim.h>
313
#include <sys/zfs_racct.h>
314
#include <sys/zstd/zstd.h>
315
316
#ifndef _KERNEL
317
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
318
boolean_t arc_watch = B_FALSE;
319
#endif
320
321
/*
322
* This thread's job is to keep enough free memory in the system, by
323
* calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves
324
* arc_available_memory().
325
*/
326
static zthr_t *arc_reap_zthr;
327
328
/*
329
* This thread's job is to keep arc_size under arc_c, by calling
330
* arc_evict(), which improves arc_is_overflowing().
331
*/
332
static zthr_t *arc_evict_zthr;
333
static arc_buf_hdr_t **arc_state_evict_markers;
334
static int arc_state_evict_marker_count;
335
336
static kmutex_t arc_evict_lock;
337
static boolean_t arc_evict_needed = B_FALSE;
338
static clock_t arc_last_uncached_flush;
339
340
static taskq_t *arc_evict_taskq;
341
static struct evict_arg *arc_evict_arg;
342
343
/*
344
* Count of bytes evicted since boot.
345
*/
346
static uint64_t arc_evict_count;
347
348
/*
349
* List of arc_evict_waiter_t's, representing threads waiting for the
350
* arc_evict_count to reach specific values.
351
*/
352
static list_t arc_evict_waiters;
353
354
/*
355
* When arc_is_overflowing(), arc_get_data_impl() waits for this percent of
356
* the requested amount of data to be evicted. For example, by default for
357
* every 2KB that's evicted, 1KB of it may be "reused" by a new allocation.
358
* Since this is above 100%, it ensures that progress is made towards getting
359
* arc_size under arc_c. Since this is finite, it ensures that allocations
360
* can still happen, even during the potentially long time that arc_size is
361
* more than arc_c.
362
*/
363
static uint_t zfs_arc_eviction_pct = 200;
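/*
 * Editorial illustration (not part of the original source): the helper below
 * is hypothetical and only spells out the arithmetic implied above.  With the
 * default of 200%, a thread waiting on a 64KB allocation is not released
 * until roughly 128KB has been evicted.
 */
__maybe_unused
static uint64_t
arc_eviction_wait_goal_example(uint64_t requested)
{
	return (requested * zfs_arc_eviction_pct / 100);
}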
364
365
/*
366
* The number of headers to evict in arc_evict_state_impl() before
367
* dropping the sublist lock and evicting from another sublist. A lower
368
* value means we're more likely to evict the "correct" header (i.e. the
369
* oldest header in the arc state), but comes with higher overhead
370
* (i.e. more invocations of arc_evict_state_impl()).
371
*/
372
static uint_t zfs_arc_evict_batch_limit = 10;
373
374
/* number of seconds before growing cache again */
375
uint_t arc_grow_retry = 5;
376
377
/*
378
* Minimum time between calls to arc_kmem_reap_soon().
379
*/
380
static const int arc_kmem_cache_reap_retry_ms = 1000;
381
382
/* shift of arc_c for calculating overflow limit in arc_get_data_impl */
383
static int zfs_arc_overflow_shift = 8;
384
385
/* log2(fraction of arc to reclaim) */
386
uint_t arc_shrink_shift = 7;
387
388
#ifdef _KERNEL
389
/* percent of pagecache to reclaim arc to */
390
uint_t zfs_arc_pc_percent = 0;
391
#endif
392
393
/*
394
* log2(fraction of ARC which must be free to allow growing).
395
* I.e. If there is less than arc_c >> arc_no_grow_shift free memory,
396
* when reading a new block into the ARC, we will evict an equal-sized block
397
* from the ARC.
398
*
399
* This must be less than arc_shrink_shift, so that when we shrink the ARC,
400
* we will still not allow it to grow.
401
*/
402
uint_t arc_no_grow_shift = 5;
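/*
 * Editorial note (not part of the original source): a worked example of the
 * two shifts above.  With arc_c = 4GB and the defaults here, growth stops
 * once free memory falls below arc_c >> arc_no_grow_shift = 128MB, while a
 * single shrink step releases only arc_c >> arc_shrink_shift = 32MB, so one
 * shrink does not by itself free enough memory to re-enable growth.
 */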
403
404
405
/*
406
* minimum lifespan of a prefetch block in clock ticks
407
* (initialized in arc_init())
408
*/
409
static uint_t arc_min_prefetch_ms;
410
static uint_t arc_min_prescient_prefetch_ms;
411
412
/*
413
* If this percent of memory is free, don't throttle.
414
*/
415
uint_t arc_lotsfree_percent = 10;
416
417
/*
418
* The arc has filled available memory and has now warmed up.
419
*/
420
boolean_t arc_warm;
421
422
/*
423
* These tunables are for performance analysis.
424
*/
425
uint64_t zfs_arc_max = 0;
426
uint64_t zfs_arc_min = 0;
427
static uint64_t zfs_arc_dnode_limit = 0;
428
static uint_t zfs_arc_dnode_reduce_percent = 10;
429
static uint_t zfs_arc_grow_retry = 0;
430
static uint_t zfs_arc_shrink_shift = 0;
431
uint_t zfs_arc_average_blocksize = 8 * 1024; /* 8KB */
432
433
/*
434
* ARC dirty data constraints for arc_tempreserve_space() throttle:
435
* * total dirty data limit
436
* * anon block dirty limit
437
* * each pool's anon allowance
438
*/
439
static const unsigned long zfs_arc_dirty_limit_percent = 50;
440
static const unsigned long zfs_arc_anon_limit_percent = 25;
441
static const unsigned long zfs_arc_pool_dirty_percent = 20;
442
443
/*
444
* Enable or disable compressed arc buffers.
445
*/
446
int zfs_compressed_arc_enabled = B_TRUE;
447
448
/*
449
* Balance between metadata and data on ghost hits. Values above 100
450
* increase metadata caching by proportionally reducing effect of ghost
451
* data hits on target data/metadata rate.
452
*/
453
static uint_t zfs_arc_meta_balance = 500;
454
455
/*
456
* Percentage that can be consumed by dnodes of ARC meta buffers.
457
*/
458
static uint_t zfs_arc_dnode_limit_percent = 10;
459
460
/*
461
* These tunables are Linux-specific
462
*/
463
static uint64_t zfs_arc_sys_free = 0;
464
static uint_t zfs_arc_min_prefetch_ms = 0;
465
static uint_t zfs_arc_min_prescient_prefetch_ms = 0;
466
static uint_t zfs_arc_lotsfree_percent = 10;
467
468
/*
469
* Number of arc_prune threads
470
*/
471
static int zfs_arc_prune_task_threads = 1;
472
473
/* Used by spa_export/spa_destroy to flush the arc asynchronously */
474
static taskq_t *arc_flush_taskq;
475
476
/*
477
* Controls the number of ARC eviction threads to dispatch sublists to.
478
*
479
* Possible values:
480
* 0 (auto) compute the number of threads using a logarithmic formula.
481
* 1 (disabled) one thread - parallel eviction is disabled.
482
* 2+ (manual) set the number manually.
483
*
484
* See arc_evict_thread_init() for how "auto" is computed.
485
*/
486
static uint_t zfs_arc_evict_threads = 0;
487
488
/* The 7 states: */
489
arc_state_t ARC_anon;
490
arc_state_t ARC_mru;
491
arc_state_t ARC_mru_ghost;
492
arc_state_t ARC_mfu;
493
arc_state_t ARC_mfu_ghost;
494
arc_state_t ARC_l2c_only;
495
arc_state_t ARC_uncached;
496
497
arc_stats_t arc_stats = {
498
{ "hits", KSTAT_DATA_UINT64 },
499
{ "iohits", KSTAT_DATA_UINT64 },
500
{ "misses", KSTAT_DATA_UINT64 },
501
{ "demand_data_hits", KSTAT_DATA_UINT64 },
502
{ "demand_data_iohits", KSTAT_DATA_UINT64 },
503
{ "demand_data_misses", KSTAT_DATA_UINT64 },
504
{ "demand_metadata_hits", KSTAT_DATA_UINT64 },
505
{ "demand_metadata_iohits", KSTAT_DATA_UINT64 },
506
{ "demand_metadata_misses", KSTAT_DATA_UINT64 },
507
{ "prefetch_data_hits", KSTAT_DATA_UINT64 },
508
{ "prefetch_data_iohits", KSTAT_DATA_UINT64 },
509
{ "prefetch_data_misses", KSTAT_DATA_UINT64 },
510
{ "prefetch_metadata_hits", KSTAT_DATA_UINT64 },
511
{ "prefetch_metadata_iohits", KSTAT_DATA_UINT64 },
512
{ "prefetch_metadata_misses", KSTAT_DATA_UINT64 },
513
{ "mru_hits", KSTAT_DATA_UINT64 },
514
{ "mru_ghost_hits", KSTAT_DATA_UINT64 },
515
{ "mfu_hits", KSTAT_DATA_UINT64 },
516
{ "mfu_ghost_hits", KSTAT_DATA_UINT64 },
517
{ "uncached_hits", KSTAT_DATA_UINT64 },
518
{ "deleted", KSTAT_DATA_UINT64 },
519
{ "mutex_miss", KSTAT_DATA_UINT64 },
520
{ "access_skip", KSTAT_DATA_UINT64 },
521
{ "evict_skip", KSTAT_DATA_UINT64 },
522
{ "evict_not_enough", KSTAT_DATA_UINT64 },
523
{ "evict_l2_cached", KSTAT_DATA_UINT64 },
524
{ "evict_l2_eligible", KSTAT_DATA_UINT64 },
525
{ "evict_l2_eligible_mfu", KSTAT_DATA_UINT64 },
526
{ "evict_l2_eligible_mru", KSTAT_DATA_UINT64 },
527
{ "evict_l2_ineligible", KSTAT_DATA_UINT64 },
528
{ "evict_l2_skip", KSTAT_DATA_UINT64 },
529
{ "hash_elements", KSTAT_DATA_UINT64 },
530
{ "hash_elements_max", KSTAT_DATA_UINT64 },
531
{ "hash_collisions", KSTAT_DATA_UINT64 },
532
{ "hash_chains", KSTAT_DATA_UINT64 },
533
{ "hash_chain_max", KSTAT_DATA_UINT64 },
534
{ "meta", KSTAT_DATA_UINT64 },
535
{ "pd", KSTAT_DATA_UINT64 },
536
{ "pm", KSTAT_DATA_UINT64 },
537
{ "c", KSTAT_DATA_UINT64 },
538
{ "c_min", KSTAT_DATA_UINT64 },
539
{ "c_max", KSTAT_DATA_UINT64 },
540
{ "size", KSTAT_DATA_UINT64 },
541
{ "compressed_size", KSTAT_DATA_UINT64 },
542
{ "uncompressed_size", KSTAT_DATA_UINT64 },
543
{ "overhead_size", KSTAT_DATA_UINT64 },
544
{ "hdr_size", KSTAT_DATA_UINT64 },
545
{ "data_size", KSTAT_DATA_UINT64 },
546
{ "metadata_size", KSTAT_DATA_UINT64 },
547
{ "dbuf_size", KSTAT_DATA_UINT64 },
548
{ "dnode_size", KSTAT_DATA_UINT64 },
549
{ "bonus_size", KSTAT_DATA_UINT64 },
550
#if defined(COMPAT_FREEBSD11)
551
{ "other_size", KSTAT_DATA_UINT64 },
552
#endif
553
{ "anon_size", KSTAT_DATA_UINT64 },
554
{ "anon_data", KSTAT_DATA_UINT64 },
555
{ "anon_metadata", KSTAT_DATA_UINT64 },
556
{ "anon_evictable_data", KSTAT_DATA_UINT64 },
557
{ "anon_evictable_metadata", KSTAT_DATA_UINT64 },
558
{ "mru_size", KSTAT_DATA_UINT64 },
559
{ "mru_data", KSTAT_DATA_UINT64 },
560
{ "mru_metadata", KSTAT_DATA_UINT64 },
561
{ "mru_evictable_data", KSTAT_DATA_UINT64 },
562
{ "mru_evictable_metadata", KSTAT_DATA_UINT64 },
563
{ "mru_ghost_size", KSTAT_DATA_UINT64 },
564
{ "mru_ghost_data", KSTAT_DATA_UINT64 },
565
{ "mru_ghost_metadata", KSTAT_DATA_UINT64 },
566
{ "mru_ghost_evictable_data", KSTAT_DATA_UINT64 },
567
{ "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
568
{ "mfu_size", KSTAT_DATA_UINT64 },
569
{ "mfu_data", KSTAT_DATA_UINT64 },
570
{ "mfu_metadata", KSTAT_DATA_UINT64 },
571
{ "mfu_evictable_data", KSTAT_DATA_UINT64 },
572
{ "mfu_evictable_metadata", KSTAT_DATA_UINT64 },
573
{ "mfu_ghost_size", KSTAT_DATA_UINT64 },
574
{ "mfu_ghost_data", KSTAT_DATA_UINT64 },
575
{ "mfu_ghost_metadata", KSTAT_DATA_UINT64 },
576
{ "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 },
577
{ "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
578
{ "uncached_size", KSTAT_DATA_UINT64 },
579
{ "uncached_data", KSTAT_DATA_UINT64 },
580
{ "uncached_metadata", KSTAT_DATA_UINT64 },
581
{ "uncached_evictable_data", KSTAT_DATA_UINT64 },
582
{ "uncached_evictable_metadata", KSTAT_DATA_UINT64 },
583
{ "l2_hits", KSTAT_DATA_UINT64 },
584
{ "l2_misses", KSTAT_DATA_UINT64 },
585
{ "l2_prefetch_asize", KSTAT_DATA_UINT64 },
586
{ "l2_mru_asize", KSTAT_DATA_UINT64 },
587
{ "l2_mfu_asize", KSTAT_DATA_UINT64 },
588
{ "l2_bufc_data_asize", KSTAT_DATA_UINT64 },
589
{ "l2_bufc_metadata_asize", KSTAT_DATA_UINT64 },
590
{ "l2_feeds", KSTAT_DATA_UINT64 },
591
{ "l2_rw_clash", KSTAT_DATA_UINT64 },
592
{ "l2_read_bytes", KSTAT_DATA_UINT64 },
593
{ "l2_write_bytes", KSTAT_DATA_UINT64 },
594
{ "l2_writes_sent", KSTAT_DATA_UINT64 },
595
{ "l2_writes_done", KSTAT_DATA_UINT64 },
596
{ "l2_writes_error", KSTAT_DATA_UINT64 },
597
{ "l2_writes_lock_retry", KSTAT_DATA_UINT64 },
598
{ "l2_evict_lock_retry", KSTAT_DATA_UINT64 },
599
{ "l2_evict_reading", KSTAT_DATA_UINT64 },
600
{ "l2_evict_l1cached", KSTAT_DATA_UINT64 },
601
{ "l2_free_on_write", KSTAT_DATA_UINT64 },
602
{ "l2_abort_lowmem", KSTAT_DATA_UINT64 },
603
{ "l2_cksum_bad", KSTAT_DATA_UINT64 },
604
{ "l2_io_error", KSTAT_DATA_UINT64 },
605
{ "l2_size", KSTAT_DATA_UINT64 },
606
{ "l2_asize", KSTAT_DATA_UINT64 },
607
{ "l2_hdr_size", KSTAT_DATA_UINT64 },
608
{ "l2_log_blk_writes", KSTAT_DATA_UINT64 },
609
{ "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 },
610
{ "l2_log_blk_asize", KSTAT_DATA_UINT64 },
611
{ "l2_log_blk_count", KSTAT_DATA_UINT64 },
612
{ "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 },
613
{ "l2_rebuild_success", KSTAT_DATA_UINT64 },
614
{ "l2_rebuild_unsupported", KSTAT_DATA_UINT64 },
615
{ "l2_rebuild_io_errors", KSTAT_DATA_UINT64 },
616
{ "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 },
617
{ "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 },
618
{ "l2_rebuild_lowmem", KSTAT_DATA_UINT64 },
619
{ "l2_rebuild_size", KSTAT_DATA_UINT64 },
620
{ "l2_rebuild_asize", KSTAT_DATA_UINT64 },
621
{ "l2_rebuild_bufs", KSTAT_DATA_UINT64 },
622
{ "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 },
623
{ "l2_rebuild_log_blks", KSTAT_DATA_UINT64 },
624
{ "memory_throttle_count", KSTAT_DATA_UINT64 },
625
{ "memory_direct_count", KSTAT_DATA_UINT64 },
626
{ "memory_indirect_count", KSTAT_DATA_UINT64 },
627
{ "memory_all_bytes", KSTAT_DATA_UINT64 },
628
{ "memory_free_bytes", KSTAT_DATA_UINT64 },
629
{ "memory_available_bytes", KSTAT_DATA_INT64 },
630
{ "arc_no_grow", KSTAT_DATA_UINT64 },
631
{ "arc_tempreserve", KSTAT_DATA_UINT64 },
632
{ "arc_loaned_bytes", KSTAT_DATA_UINT64 },
633
{ "arc_prune", KSTAT_DATA_UINT64 },
634
{ "arc_meta_used", KSTAT_DATA_UINT64 },
635
{ "arc_dnode_limit", KSTAT_DATA_UINT64 },
636
{ "async_upgrade_sync", KSTAT_DATA_UINT64 },
637
{ "predictive_prefetch", KSTAT_DATA_UINT64 },
638
{ "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64 },
639
{ "demand_iohit_predictive_prefetch", KSTAT_DATA_UINT64 },
640
{ "prescient_prefetch", KSTAT_DATA_UINT64 },
641
{ "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 },
642
{ "demand_iohit_prescient_prefetch", KSTAT_DATA_UINT64 },
643
{ "arc_need_free", KSTAT_DATA_UINT64 },
644
{ "arc_sys_free", KSTAT_DATA_UINT64 },
645
{ "arc_raw_size", KSTAT_DATA_UINT64 },
646
{ "cached_only_in_progress", KSTAT_DATA_UINT64 },
647
{ "abd_chunk_waste_size", KSTAT_DATA_UINT64 },
648
};
649
650
arc_sums_t arc_sums;
651
652
#define ARCSTAT_MAX(stat, val) { \
653
uint64_t m; \
654
while ((val) > (m = arc_stats.stat.value.ui64) && \
655
(m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
656
continue; \
657
}
658
659
/*
660
* We define a macro to allow ARC hits/misses to be easily broken down by
661
* two separate conditions, giving a total of four different subtypes for
662
* each of hits and misses (so eight statistics total).
663
*/
664
#define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
665
if (cond1) { \
666
if (cond2) { \
667
ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
668
} else { \
669
ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
670
} \
671
} else { \
672
if (cond2) { \
673
ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
674
} else { \
675
ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
676
} \
677
}
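/*
 * Editorial illustration (not part of the original source): an invocation of
 * the form
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
 *
 * bumps exactly one of demand_data_hits, demand_metadata_hits,
 * prefetch_data_hits or prefetch_metadata_hits, depending on how the two
 * conditions evaluate.
 */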
678
679
/*
680
* This macro allows us to use kstats as floating averages. Each time we
681
* update this kstat, we first factor it and the update value by
682
* ARCSTAT_AVG_FACTOR to shrink the new value's contribution to the overall
683
* average. This macro assumes that integer loads and stores are atomic, but
684
* is not safe for multiple writers updating the kstat in parallel (only the
685
* last writer's update will remain).
686
*/
687
#define ARCSTAT_F_AVG_FACTOR 3
688
#define ARCSTAT_F_AVG(stat, value) \
689
do { \
690
uint64_t x = ARCSTAT(stat); \
691
x = x - x / ARCSTAT_F_AVG_FACTOR + \
692
(value) / ARCSTAT_F_AVG_FACTOR; \
693
ARCSTAT(stat) = x; \
694
} while (0)
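/*
 * Editorial note (not part of the original source): with ARCSTAT_F_AVG_FACTOR
 * set to 3, each update computes new = old - old/3 + value/3, i.e. an
 * exponential moving average that weights the incoming sample by 1/3.  For
 * example, old = 900 and value = 300 gives 900 - 300 + 100 = 700.
 */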
695
696
static kstat_t *arc_ksp;
697
698
/*
699
* There are several ARC variables that are critical to export as kstats --
700
* but we don't want to have to grovel around in the kstat whenever we wish to
701
* manipulate them. For these variables, we therefore define them to be in
702
* terms of the statistic variable. This assures that we are not introducing
703
* the possibility of inconsistency by having shadow copies of the variables,
704
* while still allowing the code to be readable.
705
*/
706
#define arc_tempreserve ARCSTAT(arcstat_tempreserve)
707
#define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes)
708
#define arc_dnode_limit ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
709
#define arc_need_free ARCSTAT(arcstat_need_free) /* waiting to be evicted */
710
711
hrtime_t arc_growtime;
712
list_t arc_prune_list;
713
kmutex_t arc_prune_mtx;
714
taskq_t *arc_prune_taskq;
715
716
#define GHOST_STATE(state) \
717
((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
718
(state) == arc_l2c_only)
719
720
#define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
721
#define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
722
#define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR)
723
#define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH)
724
#define HDR_PRESCIENT_PREFETCH(hdr) \
725
((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
726
#define HDR_COMPRESSION_ENABLED(hdr) \
727
((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)
728
729
#define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE)
730
#define HDR_UNCACHED(hdr) ((hdr)->b_flags & ARC_FLAG_UNCACHED)
731
#define HDR_L2_READING(hdr) \
732
(((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
733
((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
734
#define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING)
735
#define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
736
#define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
737
#define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED)
738
#define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH)
739
#define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA)
740
741
#define HDR_ISTYPE_METADATA(hdr) \
742
((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
743
#define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr))
744
745
#define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
746
#define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
747
#define HDR_HAS_RABD(hdr) \
748
(HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \
749
(hdr)->b_crypt_hdr.b_rabd != NULL)
750
#define HDR_ENCRYPTED(hdr) \
751
(HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
752
#define HDR_AUTHENTICATED(hdr) \
753
(HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
754
755
/* For storing compression mode in b_flags */
756
#define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1)
757
758
#define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \
759
HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
760
#define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \
761
HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));
762
763
#define ARC_BUF_LAST(buf) ((buf)->b_next == NULL)
764
#define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED)
765
#define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
766
#define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)
767
768
/*
769
* Other sizes
770
*/
771
772
#define HDR_FULL_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
773
#define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
774
775
/*
776
* Hash table routines
777
*/
778
779
#define BUF_LOCKS 2048
780
typedef struct buf_hash_table {
781
uint64_t ht_mask;
782
arc_buf_hdr_t **ht_table;
783
kmutex_t ht_locks[BUF_LOCKS] ____cacheline_aligned;
784
} buf_hash_table_t;
785
786
static buf_hash_table_t buf_hash_table;
787
788
#define BUF_HASH_INDEX(spa, dva, birth) \
789
(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
790
#define BUF_HASH_LOCK(idx) (&buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
791
#define HDR_LOCK(hdr) \
792
(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
793
794
uint64_t zfs_crc64_table[256];
795
796
/*
797
* Asynchronous ARC flush
798
*
799
* We track these in a list for arc_async_flush_guid_inuse().
800
* Used for both L1 and L2 async teardown.
801
*/
802
static list_t arc_async_flush_list;
803
static kmutex_t arc_async_flush_lock;
804
805
typedef struct arc_async_flush {
806
uint64_t af_spa_guid;
807
taskq_ent_t af_tqent;
808
uint_t af_cache_level; /* 1 or 2 to differentiate node */
809
list_node_t af_node;
810
} arc_async_flush_t;
811
812
813
/*
814
* Level 2 ARC
815
*/
816
817
#define L2ARC_WRITE_SIZE (32 * 1024 * 1024) /* initial write max */
818
#define L2ARC_HEADROOM 8 /* num of writes */
819
820
/*
821
* If we discover during ARC scan any buffers to be compressed, we boost
822
* our headroom for the next scanning cycle by this percentage multiple.
823
*/
824
#define L2ARC_HEADROOM_BOOST 200
825
#define L2ARC_FEED_SECS 1 /* caching interval secs */
826
#define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */
827
828
/*
829
* We can feed L2ARC from two states of ARC buffers, mru and mfu,
830
* and each of the states has two types: data and metadata.
831
*/
832
#define L2ARC_FEED_TYPES 4
833
834
/* L2ARC Performance Tunables */
835
uint64_t l2arc_write_max = L2ARC_WRITE_SIZE; /* def max write size */
836
uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra warmup write */
837
uint64_t l2arc_headroom = L2ARC_HEADROOM; /* # of dev writes */
838
uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
839
uint64_t l2arc_feed_secs = L2ARC_FEED_SECS; /* interval seconds */
840
uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval msecs */
841
int l2arc_noprefetch = B_TRUE; /* don't cache prefetch bufs */
842
int l2arc_feed_again = B_TRUE; /* turbo warmup */
843
int l2arc_norw = B_FALSE; /* no reads during writes */
844
static uint_t l2arc_meta_percent = 33; /* limit on headers size */
845
846
/*
847
* L2ARC Internals
848
*/
849
static list_t L2ARC_dev_list; /* device list */
850
static list_t *l2arc_dev_list; /* device list pointer */
851
static kmutex_t l2arc_dev_mtx; /* device list mutex */
852
static l2arc_dev_t *l2arc_dev_last; /* last device used */
853
static list_t L2ARC_free_on_write; /* free after write buf list */
854
static list_t *l2arc_free_on_write; /* free after write list ptr */
855
static kmutex_t l2arc_free_on_write_mtx; /* mutex for list */
856
static uint64_t l2arc_ndev; /* number of devices */
857
858
typedef struct l2arc_read_callback {
859
arc_buf_hdr_t *l2rcb_hdr; /* read header */
860
blkptr_t l2rcb_bp; /* original blkptr */
861
zbookmark_phys_t l2rcb_zb; /* original bookmark */
862
int l2rcb_flags; /* original flags */
863
abd_t *l2rcb_abd; /* temporary buffer */
864
} l2arc_read_callback_t;
865
866
typedef struct l2arc_data_free {
867
/* protected by l2arc_free_on_write_mtx */
868
abd_t *l2df_abd;
869
size_t l2df_size;
870
arc_buf_contents_t l2df_type;
871
list_node_t l2df_list_node;
872
} l2arc_data_free_t;
873
874
typedef enum arc_fill_flags {
875
ARC_FILL_LOCKED = 1 << 0, /* hdr lock is held */
876
ARC_FILL_COMPRESSED = 1 << 1, /* fill with compressed data */
877
ARC_FILL_ENCRYPTED = 1 << 2, /* fill with encrypted data */
878
ARC_FILL_NOAUTH = 1 << 3, /* don't attempt to authenticate */
879
ARC_FILL_IN_PLACE = 1 << 4 /* fill in place (special case) */
880
} arc_fill_flags_t;
881
882
typedef enum arc_ovf_level {
883
ARC_OVF_NONE, /* ARC within target size. */
884
ARC_OVF_SOME, /* ARC is slightly overflowed. */
885
ARC_OVF_SEVERE /* ARC is severely overflowed. */
886
} arc_ovf_level_t;
887
888
static kmutex_t l2arc_feed_thr_lock;
889
static kcondvar_t l2arc_feed_thr_cv;
890
static uint8_t l2arc_thread_exit;
891
892
static kmutex_t l2arc_rebuild_thr_lock;
893
static kcondvar_t l2arc_rebuild_thr_cv;
894
895
enum arc_hdr_alloc_flags {
896
ARC_HDR_ALLOC_RDATA = 0x1,
897
ARC_HDR_USE_RESERVE = 0x4,
898
ARC_HDR_ALLOC_LINEAR = 0x8,
899
};
900
901
902
static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, const void *, int);
903
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, const void *);
904
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, const void *, int);
905
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, const void *);
906
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, const void *);
907
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size,
908
const void *tag);
909
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
910
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int);
911
static void arc_hdr_destroy(arc_buf_hdr_t *);
912
static void arc_access(arc_buf_hdr_t *, arc_flags_t, boolean_t);
913
static void arc_buf_watch(arc_buf_t *);
914
static void arc_change_state(arc_state_t *, arc_buf_hdr_t *);
915
916
static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
917
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
918
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
919
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
920
921
static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
922
static void l2arc_read_done(zio_t *);
923
static void l2arc_do_free_on_write(void);
924
static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
925
boolean_t state_only);
926
927
static void arc_prune_async(uint64_t adjust);
928
929
#define l2arc_hdr_arcstats_increment(hdr) \
930
l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE)
931
#define l2arc_hdr_arcstats_decrement(hdr) \
932
l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE)
933
#define l2arc_hdr_arcstats_increment_state(hdr) \
934
l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE)
935
#define l2arc_hdr_arcstats_decrement_state(hdr) \
936
l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE)
937
938
/*
939
* l2arc_exclude_special : A zfs module parameter that controls whether buffers
940
* present on special vdevs are eligible for caching in L2ARC. If
941
* set to 1, exclude dbufs on special vdevs from being cached to
942
* L2ARC.
943
*/
944
int l2arc_exclude_special = 0;
945
946
/*
947
* l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
948
* metadata and data are cached from ARC into L2ARC.
949
*/
950
static int l2arc_mfuonly = 0;
951
952
/*
953
* L2ARC TRIM
954
* l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of
955
* the current write size (l2arc_write_max) we should TRIM if we
956
* have filled the device. It is defined as a percentage of the
957
* write size. If set to 100 we trim twice the space required to
958
* accommodate upcoming writes. A minimum of 64MB will be trimmed.
959
* It also enables TRIM of the whole L2ARC device upon creation or
960
* addition to an existing pool or if the header of the device is
961
* invalid upon importing a pool or onlining a cache device. The
962
* default is 0, which disables TRIM on L2ARC altogether as it can
963
* put significant stress on the underlying storage devices. This
964
* will vary depending on how well the specific device handles
965
* these commands.
966
*/
967
static uint64_t l2arc_trim_ahead = 0;
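/*
 * Editorial note (not part of the original source): a worked example of the
 * percentage above.  With l2arc_write_max at its 32MB default and
 * l2arc_trim_ahead = 100, roughly twice the upcoming write size, i.e. about
 * 64MB, is trimmed ahead of the write hand, which also matches the 64MB
 * minimum mentioned above.
 */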
968
969
/*
970
* Performance tuning of L2ARC persistence:
971
*
972
* l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding
973
* an L2ARC device (either at pool import or later) will attempt
974
* to rebuild L2ARC buffer contents.
975
* l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls
976
* whether log blocks are written to the L2ARC device. If the L2ARC
977
* device is less than 1GB, the amount of data l2arc_evict()
978
* evicts is significant compared to the amount of restored L2ARC
979
* data. In this case do not write log blocks in L2ARC in order
980
* not to waste space.
981
*/
982
static int l2arc_rebuild_enabled = B_TRUE;
983
static uint64_t l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;
984
985
/* L2ARC persistence rebuild control routines. */
986
void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
987
static __attribute__((noreturn)) void l2arc_dev_rebuild_thread(void *arg);
988
static int l2arc_rebuild(l2arc_dev_t *dev);
989
990
/* L2ARC persistence read I/O routines. */
991
static int l2arc_dev_hdr_read(l2arc_dev_t *dev);
992
static int l2arc_log_blk_read(l2arc_dev_t *dev,
993
const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp,
994
l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
995
zio_t *this_io, zio_t **next_io);
996
static zio_t *l2arc_log_blk_fetch(vdev_t *vd,
997
const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb);
998
static void l2arc_log_blk_fetch_abort(zio_t *zio);
999
1000
/* L2ARC persistence block restoration routines. */
1001
static void l2arc_log_blk_restore(l2arc_dev_t *dev,
1002
const l2arc_log_blk_phys_t *lb, uint64_t lb_asize);
1003
static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le,
1004
l2arc_dev_t *dev);
1005
1006
/* L2ARC persistence write I/O routines. */
1007
static uint64_t l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio,
1008
l2arc_write_callback_t *cb);
1009
1010
/* L2ARC persistence auxiliary routines. */
1011
boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev,
1012
const l2arc_log_blkptr_t *lbp);
1013
static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev,
1014
const arc_buf_hdr_t *ab);
1015
boolean_t l2arc_range_check_overlap(uint64_t bottom,
1016
uint64_t top, uint64_t check);
1017
static void l2arc_blk_fetch_done(zio_t *zio);
1018
static inline uint64_t
1019
l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev);
1020
1021
/*
1022
* We use Cityhash for this. It's fast, and has good hash properties without
1023
* requiring any large static buffers.
1024
*/
1025
static uint64_t
1026
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
1027
{
1028
return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
1029
}
1030
1031
#define HDR_EMPTY(hdr) \
1032
((hdr)->b_dva.dva_word[0] == 0 && \
1033
(hdr)->b_dva.dva_word[1] == 0)
1034
1035
#define HDR_EMPTY_OR_LOCKED(hdr) \
1036
(HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr)))
1037
1038
#define HDR_EQUAL(spa, dva, birth, hdr) \
1039
((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
1040
((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
1041
((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)
1042
1043
static void
1044
buf_discard_identity(arc_buf_hdr_t *hdr)
1045
{
1046
hdr->b_dva.dva_word[0] = 0;
1047
hdr->b_dva.dva_word[1] = 0;
1048
hdr->b_birth = 0;
1049
}
1050
1051
static arc_buf_hdr_t *
1052
buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
1053
{
1054
const dva_t *dva = BP_IDENTITY(bp);
1055
uint64_t birth = BP_GET_PHYSICAL_BIRTH(bp);
1056
uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
1057
kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
1058
arc_buf_hdr_t *hdr;
1059
1060
mutex_enter(hash_lock);
1061
for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
1062
hdr = hdr->b_hash_next) {
1063
if (HDR_EQUAL(spa, dva, birth, hdr)) {
1064
*lockp = hash_lock;
1065
return (hdr);
1066
}
1067
}
1068
mutex_exit(hash_lock);
1069
*lockp = NULL;
1070
return (NULL);
1071
}
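/*
 * Editorial illustration (not part of the original source): a hypothetical
 * caller showing the lookup contract described above -- on a hit the hash
 * mutex is returned held and must be dropped by the caller; on a miss no
 * lock is held.
 */
__maybe_unused
static boolean_t
arc_hdr_cached_example(uint64_t spa, const blkptr_t *bp)
{
	kmutex_t *hash_lock;
	arc_buf_hdr_t *hdr = buf_hash_find(spa, bp, &hash_lock);

	if (hdr == NULL)
		return (B_FALSE);

	mutex_exit(hash_lock);
	return (B_TRUE);
}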
1072
1073
/*
1074
* Insert an entry into the hash table. If there is already an element
1075
* equal to elem in the hash table, then the already existing element
1076
* will be returned and the new element will not be inserted.
1077
* Otherwise returns NULL.
1078
* If lockp == NULL, the caller is assumed to already hold the hash lock.
1079
*/
1080
static arc_buf_hdr_t *
1081
buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
1082
{
1083
uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
1084
kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
1085
arc_buf_hdr_t *fhdr;
1086
uint32_t i;
1087
1088
ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
1089
ASSERT(hdr->b_birth != 0);
1090
ASSERT(!HDR_IN_HASH_TABLE(hdr));
1091
1092
if (lockp != NULL) {
1093
*lockp = hash_lock;
1094
mutex_enter(hash_lock);
1095
} else {
1096
ASSERT(MUTEX_HELD(hash_lock));
1097
}
1098
1099
for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
1100
fhdr = fhdr->b_hash_next, i++) {
1101
if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
1102
return (fhdr);
1103
}
1104
1105
hdr->b_hash_next = buf_hash_table.ht_table[idx];
1106
buf_hash_table.ht_table[idx] = hdr;
1107
arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
1108
1109
/* collect some hash table performance data */
1110
if (i > 0) {
1111
ARCSTAT_BUMP(arcstat_hash_collisions);
1112
if (i == 1)
1113
ARCSTAT_BUMP(arcstat_hash_chains);
1114
ARCSTAT_MAX(arcstat_hash_chain_max, i);
1115
}
1116
ARCSTAT_BUMP(arcstat_hash_elements);
1117
1118
return (NULL);
1119
}
1120
1121
static void
1122
buf_hash_remove(arc_buf_hdr_t *hdr)
1123
{
1124
arc_buf_hdr_t *fhdr, **hdrp;
1125
uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
1126
1127
ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx)));
1128
ASSERT(HDR_IN_HASH_TABLE(hdr));
1129
1130
hdrp = &buf_hash_table.ht_table[idx];
1131
while ((fhdr = *hdrp) != hdr) {
1132
ASSERT3P(fhdr, !=, NULL);
1133
hdrp = &fhdr->b_hash_next;
1134
}
1135
*hdrp = hdr->b_hash_next;
1136
hdr->b_hash_next = NULL;
1137
arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
1138
1139
/* collect some hash table performance data */
1140
ARCSTAT_BUMPDOWN(arcstat_hash_elements);
1141
if (buf_hash_table.ht_table[idx] &&
1142
buf_hash_table.ht_table[idx]->b_hash_next == NULL)
1143
ARCSTAT_BUMPDOWN(arcstat_hash_chains);
1144
}
1145
1146
/*
1147
* Global data structures and functions for the buf kmem cache.
1148
*/
1149
1150
static kmem_cache_t *hdr_full_cache;
1151
static kmem_cache_t *hdr_l2only_cache;
1152
static kmem_cache_t *buf_cache;
1153
1154
static void
1155
buf_fini(void)
1156
{
1157
#if defined(_KERNEL)
1158
/*
1159
* Large allocations which do not require contiguous pages
1160
* should be using vmem_free() in the linux kernel.
1161
*/
1162
vmem_free(buf_hash_table.ht_table,
1163
(buf_hash_table.ht_mask + 1) * sizeof (void *));
1164
#else
1165
kmem_free(buf_hash_table.ht_table,
1166
(buf_hash_table.ht_mask + 1) * sizeof (void *));
1167
#endif
1168
for (int i = 0; i < BUF_LOCKS; i++)
1169
mutex_destroy(BUF_HASH_LOCK(i));
1170
kmem_cache_destroy(hdr_full_cache);
1171
kmem_cache_destroy(hdr_l2only_cache);
1172
kmem_cache_destroy(buf_cache);
1173
}
1174
1175
/*
1176
* Constructor callback - called when the cache is empty
1177
* and a new buf is requested.
1178
*/
1179
static int
1180
hdr_full_cons(void *vbuf, void *unused, int kmflag)
1181
{
1182
(void) unused, (void) kmflag;
1183
arc_buf_hdr_t *hdr = vbuf;
1184
1185
memset(hdr, 0, HDR_FULL_SIZE);
1186
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
1187
zfs_refcount_create(&hdr->b_l1hdr.b_refcnt);
1188
#ifdef ZFS_DEBUG
1189
mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
1190
#endif
1191
multilist_link_init(&hdr->b_l1hdr.b_arc_node);
1192
list_link_init(&hdr->b_l2hdr.b_l2node);
1193
arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS);
1194
1195
return (0);
1196
}
1197
1198
static int
1199
hdr_l2only_cons(void *vbuf, void *unused, int kmflag)
1200
{
1201
(void) unused, (void) kmflag;
1202
arc_buf_hdr_t *hdr = vbuf;
1203
1204
memset(hdr, 0, HDR_L2ONLY_SIZE);
1205
arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
1206
1207
return (0);
1208
}
1209
1210
static int
1211
buf_cons(void *vbuf, void *unused, int kmflag)
1212
{
1213
(void) unused, (void) kmflag;
1214
arc_buf_t *buf = vbuf;
1215
1216
memset(buf, 0, sizeof (arc_buf_t));
1217
arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS);
1218
1219
return (0);
1220
}
1221
1222
/*
1223
* Destructor callback - called when a cached buf is
1224
* no longer required.
1225
*/
1226
static void
1227
hdr_full_dest(void *vbuf, void *unused)
1228
{
1229
(void) unused;
1230
arc_buf_hdr_t *hdr = vbuf;
1231
1232
ASSERT(HDR_EMPTY(hdr));
1233
zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt);
1234
#ifdef ZFS_DEBUG
1235
mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
1236
#endif
1237
ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
1238
arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
1239
}
1240
1241
static void
1242
hdr_l2only_dest(void *vbuf, void *unused)
1243
{
1244
(void) unused;
1245
arc_buf_hdr_t *hdr = vbuf;
1246
1247
ASSERT(HDR_EMPTY(hdr));
1248
arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
1249
}
1250
1251
static void
1252
buf_dest(void *vbuf, void *unused)
1253
{
1254
(void) unused;
1255
(void) vbuf;
1256
1257
arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS);
1258
}
1259
1260
static void
1261
buf_init(void)
1262
{
1263
uint64_t *ct = NULL;
1264
uint64_t hsize = 1ULL << 12;
1265
int i, j;
1266
1267
/*
1268
* The hash table is big enough to fill all of physical memory
1269
* with an average block size of zfs_arc_average_blocksize (default 8K).
1270
* By default, the table will take up
1271
* totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
1272
*/
1273
while (hsize * zfs_arc_average_blocksize < arc_all_memory())
1274
hsize <<= 1;
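/*
 * Editorial note (not part of the original source): e.g. on a 16GB machine
 * with the default 8K average block size, the loop above settles on
 * hsize = 2^21 entries, so the table of 8-byte pointers occupies about 16MB,
 * matching the 1MB-per-GB estimate above.
 */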
1275
retry:
1276
buf_hash_table.ht_mask = hsize - 1;
1277
#if defined(_KERNEL)
1278
/*
1279
* Large allocations which do not require contiguous pages
1280
* should be using vmem_alloc() in the linux kernel
1281
*/
1282
buf_hash_table.ht_table =
1283
vmem_zalloc(hsize * sizeof (void*), KM_SLEEP);
1284
#else
1285
buf_hash_table.ht_table =
1286
kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP);
1287
#endif
1288
if (buf_hash_table.ht_table == NULL) {
1289
ASSERT(hsize > (1ULL << 8));
1290
hsize >>= 1;
1291
goto retry;
1292
}
1293
1294
hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE,
1295
0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, KMC_RECLAIMABLE);
1296
hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only",
1297
HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL,
1298
NULL, NULL, 0);
1299
buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t),
1300
0, buf_cons, buf_dest, NULL, NULL, NULL, 0);
1301
1302
for (i = 0; i < 256; i++)
1303
for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--)
1304
*ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY);
1305
1306
for (i = 0; i < BUF_LOCKS; i++)
1307
mutex_init(BUF_HASH_LOCK(i), NULL, MUTEX_DEFAULT, NULL);
1308
}
1309
1310
#define ARC_MINTIME (hz>>4) /* 62 ms */
1311
1312
/*
1313
* This is the size that the buf occupies in memory. If the buf is compressed,
1314
* it will correspond to the compressed size. You should use this method of
1315
* getting the buf size unless you explicitly need the logical size.
1316
*/
1317
uint64_t
1318
arc_buf_size(arc_buf_t *buf)
1319
{
1320
return (ARC_BUF_COMPRESSED(buf) ?
1321
HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr));
1322
}
1323
1324
uint64_t
1325
arc_buf_lsize(arc_buf_t *buf)
1326
{
1327
return (HDR_GET_LSIZE(buf->b_hdr));
1328
}
1329
1330
/*
1331
* This function will return B_TRUE if the buffer is encrypted in memory.
1332
* This buffer can be decrypted by calling arc_untransform().
1333
*/
1334
boolean_t
1335
arc_is_encrypted(arc_buf_t *buf)
1336
{
1337
return (ARC_BUF_ENCRYPTED(buf) != 0);
1338
}
1339
1340
/*
1341
* Returns B_TRUE if the buffer represents data that has not had its MAC
1342
* verified yet.
1343
*/
1344
boolean_t
1345
arc_is_unauthenticated(arc_buf_t *buf)
1346
{
1347
return (HDR_NOAUTH(buf->b_hdr) != 0);
1348
}
1349
1350
void
1351
arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt,
1352
uint8_t *iv, uint8_t *mac)
1353
{
1354
arc_buf_hdr_t *hdr = buf->b_hdr;
1355
1356
ASSERT(HDR_PROTECTED(hdr));
1357
1358
memcpy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
1359
memcpy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
1360
memcpy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
1361
*byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
1362
ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
1363
}
1364
1365
/*
1366
* Indicates how this buffer is compressed in memory. If it is not compressed
1367
* the value will be ZIO_COMPRESS_OFF. It can be made normally readable with
1368
* arc_untransform() as long as it is also unencrypted.
1369
*/
1370
enum zio_compress
1371
arc_get_compression(arc_buf_t *buf)
1372
{
1373
return (ARC_BUF_COMPRESSED(buf) ?
1374
HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF);
1375
}
1376
1377
/*
1378
* Return the compression algorithm used to store this data in the ARC. If ARC
1379
* compression is enabled or this is an encrypted block, this will be the same
1380
* as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF.
1381
*/
1382
static inline enum zio_compress
1383
arc_hdr_get_compress(arc_buf_hdr_t *hdr)
1384
{
1385
return (HDR_COMPRESSION_ENABLED(hdr) ?
1386
HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF);
1387
}
1388
1389
uint8_t
1390
arc_get_complevel(arc_buf_t *buf)
1391
{
1392
return (buf->b_hdr->b_complevel);
1393
}
1394
1395
__maybe_unused
1396
static inline boolean_t
1397
arc_buf_is_shared(arc_buf_t *buf)
1398
{
1399
boolean_t shared = (buf->b_data != NULL &&
1400
buf->b_hdr->b_l1hdr.b_pabd != NULL &&
1401
abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) &&
1402
buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd));
1403
IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr));
1404
EQUIV(shared, ARC_BUF_SHARED(buf));
1405
IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf));
1406
1407
/*
1408
* It would be nice to assert arc_can_share() too, but the "hdr isn't
1409
* already being shared" requirement prevents us from doing that.
1410
*/
1411
1412
return (shared);
1413
}
1414
1415
/*
1416
* Free the checksum associated with this header. If there is no checksum, this
1417
* is a no-op.
1418
*/
1419
static inline void
1420
arc_cksum_free(arc_buf_hdr_t *hdr)
1421
{
1422
#ifdef ZFS_DEBUG
1423
ASSERT(HDR_HAS_L1HDR(hdr));
1424
1425
mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
1426
if (hdr->b_l1hdr.b_freeze_cksum != NULL) {
1427
kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t));
1428
hdr->b_l1hdr.b_freeze_cksum = NULL;
1429
}
1430
mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1431
#endif
1432
}
1433
1434
/*
1435
* Return true iff at least one of the bufs on hdr is not compressed.
1436
* Encrypted buffers count as compressed.
1437
*/
1438
static boolean_t
1439
arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr)
1440
{
1441
ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr));
1442
1443
for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) {
1444
if (!ARC_BUF_COMPRESSED(b)) {
1445
return (B_TRUE);
1446
}
1447
}
1448
return (B_FALSE);
1449
}
1450
1451
1452
/*
1453
* If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
1454
* matches the checksum that is stored in the hdr. If there is no checksum,
1455
* or if the buf is compressed, this is a no-op.
1456
*/
1457
static void
1458
arc_cksum_verify(arc_buf_t *buf)
1459
{
1460
#ifdef ZFS_DEBUG
1461
arc_buf_hdr_t *hdr = buf->b_hdr;
1462
zio_cksum_t zc;
1463
1464
if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1465
return;
1466
1467
if (ARC_BUF_COMPRESSED(buf))
1468
return;
1469
1470
ASSERT(HDR_HAS_L1HDR(hdr));
1471
1472
mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
1473
1474
if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
1475
mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1476
return;
1477
}
1478
1479
fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
1480
if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
1481
panic("buffer modified while frozen!");
1482
mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1483
#endif
1484
}
1485
1486
/*
1487
* This function makes the assumption that data stored in the L2ARC
1488
* will be transformed exactly as it is in the main pool. Because of
1489
* this we can verify the checksum against the reading process's bp.
1490
*/
1491
static boolean_t
1492
arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
1493
{
1494
ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
1495
VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));
1496
1497
/*
1498
* Block pointers always store the checksum for the logical data.
1499
* If the block pointer has the gang bit set, then the checksum
1500
* it represents is for the reconstituted data and not for an
1501
* individual gang member. The zio pipeline, however, must be able to
1502
* determine the checksum of each of the gang constituents so it
1503
* treats the checksum comparison differently than what we need
1504
* for l2arc blocks. This prevents us from using the
1505
* zio_checksum_error() interface directly. Instead we must call the
1506
* zio_checksum_error_impl() so that we can ensure the checksum is
1507
* generated using the correct checksum algorithm and accounts for the
1508
* logical I/O size and not just a gang fragment.
1509
*/
1510
return (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
1511
BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
1512
zio->io_offset, NULL) == 0);
1513
}
1514
1515
/*
1516
* Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
1517
* checksum and attaches it to the buf's hdr so that we can ensure that the buf
1518
* isn't modified later on. If buf is compressed or there is already a checksum
1519
* on the hdr, this is a no-op (we only checksum uncompressed bufs).
1520
*/
1521
static void
1522
arc_cksum_compute(arc_buf_t *buf)
1523
{
1524
if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1525
return;
1526
1527
#ifdef ZFS_DEBUG
1528
arc_buf_hdr_t *hdr = buf->b_hdr;
1529
ASSERT(HDR_HAS_L1HDR(hdr));
1530
mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
1531
if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) {
1532
mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1533
return;
1534
}
1535
1536
ASSERT(!ARC_BUF_ENCRYPTED(buf));
1537
ASSERT(!ARC_BUF_COMPRESSED(buf));
1538
hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t),
1539
KM_SLEEP);
1540
fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL,
1541
hdr->b_l1hdr.b_freeze_cksum);
1542
mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
1543
#endif
1544
arc_buf_watch(buf);
1545
}
1546
1547
#ifndef _KERNEL
1548
void
1549
arc_buf_sigsegv(int sig, siginfo_t *si, void *unused)
1550
{
1551
(void) sig, (void) unused;
1552
panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr);
1553
}
1554
#endif
1555
1556
static void
1557
arc_buf_unwatch(arc_buf_t *buf)
1558
{
1559
#ifndef _KERNEL
1560
if (arc_watch) {
1561
ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
1562
PROT_READ | PROT_WRITE));
1563
}
1564
#else
1565
(void) buf;
1566
#endif
1567
}
1568
1569
static void
1570
arc_buf_watch(arc_buf_t *buf)
1571
{
1572
#ifndef _KERNEL
1573
if (arc_watch)
1574
ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
1575
PROT_READ));
1576
#else
1577
(void) buf;
1578
#endif
1579
}
1580
1581
static arc_buf_contents_t
1582
arc_buf_type(arc_buf_hdr_t *hdr)
1583
{
1584
arc_buf_contents_t type;
1585
if (HDR_ISTYPE_METADATA(hdr)) {
1586
type = ARC_BUFC_METADATA;
1587
} else {
1588
type = ARC_BUFC_DATA;
1589
}
1590
VERIFY3U(hdr->b_type, ==, type);
1591
return (type);
1592
}
1593
1594
boolean_t
1595
arc_is_metadata(arc_buf_t *buf)
1596
{
1597
return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0);
1598
}
1599
1600
static uint32_t
1601
arc_bufc_to_flags(arc_buf_contents_t type)
1602
{
1603
switch (type) {
1604
case ARC_BUFC_DATA:
1605
/* metadata field is 0 if buffer contains normal data */
1606
return (0);
1607
case ARC_BUFC_METADATA:
1608
return (ARC_FLAG_BUFC_METADATA);
1609
default:
1610
break;
1611
}
1612
panic("undefined ARC buffer type!");
1613
return ((uint32_t)-1);
1614
}
1615
1616
void
1617
arc_buf_thaw(arc_buf_t *buf)
1618
{
1619
arc_buf_hdr_t *hdr = buf->b_hdr;
1620
1621
ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
1622
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
1623
1624
arc_cksum_verify(buf);
1625
1626
/*
1627
* Compressed buffers do not manipulate the b_freeze_cksum.
1628
*/
1629
if (ARC_BUF_COMPRESSED(buf))
1630
return;
1631
1632
ASSERT(HDR_HAS_L1HDR(hdr));
1633
arc_cksum_free(hdr);
1634
arc_buf_unwatch(buf);
1635
}
1636
1637
void
1638
arc_buf_freeze(arc_buf_t *buf)
1639
{
1640
if (!(zfs_flags & ZFS_DEBUG_MODIFY))
1641
return;
1642
1643
if (ARC_BUF_COMPRESSED(buf))
1644
return;
1645
1646
ASSERT(HDR_HAS_L1HDR(buf->b_hdr));
1647
arc_cksum_compute(buf);
1648
}
1649
1650
/*
1651
* The arc_buf_hdr_t's b_flags should never be modified directly. Instead,
1652
* the following functions should be used to ensure that the flags are
1653
* updated in a thread-safe way. When manipulating the flags either
1654
* the hash_lock must be held or the hdr must be undiscoverable. This
1655
* ensures that we're not racing with any other threads when updating
1656
* the flags.
1657
*/
1658
static inline void
1659
arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
1660
{
1661
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1662
hdr->b_flags |= flags;
1663
}
1664
1665
static inline void
1666
arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
1667
{
1668
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1669
hdr->b_flags &= ~flags;
1670
}
1671
1672
/*
1673
* Setting the compression bits in the arc_buf_hdr_t's b_flags is
1674
* done in a special way since we have to clear and set bits
1675
* at the same time. Consumers that wish to set the compression bits
1676
* must use this function to ensure that the flags are updated in
1677
* a thread-safe manner.
1678
*/
1679
static void
1680
arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp)
1681
{
1682
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1683
1684
/*
* Holes and embedded blocks will always have a psize = 0, so
* we ignore the compression of the blkptr and mark them as
* uncompressed.
*/
1689
if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) {
1690
arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
1691
ASSERT(!HDR_COMPRESSION_ENABLED(hdr));
1692
} else {
1693
arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
1694
ASSERT(HDR_COMPRESSION_ENABLED(hdr));
1695
}
1696
1697
HDR_SET_COMPRESS(hdr, cmp);
1698
ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp);
1699
}
1700
1701
/*
1702
* Looks for another buf on the same hdr which has the data decompressed, copies
1703
* from it, and returns true. If no such buf exists, returns false.
1704
*/
1705
static boolean_t
1706
arc_buf_try_copy_decompressed_data(arc_buf_t *buf)
1707
{
1708
arc_buf_hdr_t *hdr = buf->b_hdr;
1709
boolean_t copied = B_FALSE;
1710
1711
ASSERT(HDR_HAS_L1HDR(hdr));
1712
ASSERT3P(buf->b_data, !=, NULL);
1713
ASSERT(!ARC_BUF_COMPRESSED(buf));
1714
1715
for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL;
1716
from = from->b_next) {
1717
/* can't use our own data buffer */
1718
if (from == buf) {
1719
continue;
1720
}
1721
1722
if (!ARC_BUF_COMPRESSED(from)) {
1723
memcpy(buf->b_data, from->b_data, arc_buf_size(buf));
1724
copied = B_TRUE;
1725
break;
1726
}
1727
}
1728
1729
#ifdef ZFS_DEBUG
1730
/*
1731
* There were no decompressed bufs, so there should not be a
1732
* checksum on the hdr either.
1733
*/
1734
if (zfs_flags & ZFS_DEBUG_MODIFY)
1735
EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL);
1736
#endif
1737
1738
return (copied);
1739
}
1740
1741
/*
1742
* Allocates an ARC buf header that's in an evicted & L2-cached state.
1743
* This is used during l2arc reconstruction to make empty ARC buffers
1744
* which circumvent the regular disk->arc->l2arc path and instead come
1745
* into being in the reverse order, i.e. l2arc->arc.
1746
*/
1747
static arc_buf_hdr_t *
1748
arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev,
1749
dva_t dva, uint64_t daddr, int32_t psize, uint64_t asize, uint64_t birth,
1750
enum zio_compress compress, uint8_t complevel, boolean_t protected,
1751
boolean_t prefetch, arc_state_type_t arcs_state)
1752
{
1753
arc_buf_hdr_t *hdr;
1754
1755
ASSERT(size != 0);
1756
ASSERT(dev->l2ad_vdev != NULL);
1757
1758
hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP);
1759
hdr->b_birth = birth;
1760
hdr->b_type = type;
1761
hdr->b_flags = 0;
1762
arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR);
1763
HDR_SET_LSIZE(hdr, size);
1764
HDR_SET_PSIZE(hdr, psize);
1765
HDR_SET_L2SIZE(hdr, asize);
1766
arc_hdr_set_compress(hdr, compress);
1767
hdr->b_complevel = complevel;
1768
if (protected)
1769
arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);
1770
if (prefetch)
1771
arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH);
1772
hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa);
1773
1774
hdr->b_dva = dva;
1775
1776
hdr->b_l2hdr.b_dev = dev;
1777
hdr->b_l2hdr.b_daddr = daddr;
1778
hdr->b_l2hdr.b_arcs_state = arcs_state;
1779
1780
return (hdr);
1781
}
1782
1783
/*
1784
* Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t.
1785
*/
1786
static uint64_t
1787
arc_hdr_size(arc_buf_hdr_t *hdr)
1788
{
1789
uint64_t size;
1790
1791
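/*
 * Editorial note: compressed headers store their data at the physical
 * (compressed) size; otherwise we fall back to the logical size.
 */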
if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
1792
HDR_GET_PSIZE(hdr) > 0) {
1793
size = HDR_GET_PSIZE(hdr);
1794
} else {
1795
ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0);
1796
size = HDR_GET_LSIZE(hdr);
1797
}
1798
return (size);
1799
}
1800
1801
static int
1802
arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj)
1803
{
1804
int ret;
1805
uint64_t csize;
1806
uint64_t lsize = HDR_GET_LSIZE(hdr);
1807
uint64_t psize = HDR_GET_PSIZE(hdr);
1808
abd_t *abd = hdr->b_l1hdr.b_pabd;
1809
boolean_t free_abd = B_FALSE;
1810
1811
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1812
ASSERT(HDR_AUTHENTICATED(hdr));
1813
ASSERT3P(abd, !=, NULL);
1814
1815
/*
1816
* The MAC is calculated on the compressed data that is stored on disk.
1817
* However, if compressed arc is disabled we will only have the
1818
* decompressed data available to us now. Compress it into a temporary
1819
* abd so we can verify the MAC. The performance overhead of this will
1820
* be relatively low, since most objects in an encrypted objset will
1821
* be encrypted (instead of authenticated) anyway.
1822
*/
1823
if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
1824
!HDR_COMPRESSION_ENABLED(hdr)) {
1825
abd = NULL;
1826
csize = zio_compress_data(HDR_GET_COMPRESS(hdr),
1827
hdr->b_l1hdr.b_pabd, &abd, lsize, MIN(lsize, psize),
1828
hdr->b_complevel);
1829
if (csize >= lsize || csize > psize) {
1830
ret = SET_ERROR(EIO);
1831
return (ret);
1832
}
1833
ASSERT3P(abd, !=, NULL);
1834
abd_zero_off(abd, csize, psize - csize);
1835
free_abd = B_TRUE;
1836
}
1837
1838
/*
1839
* Authentication is best effort. We authenticate whenever the key is
1840
* available. If we succeed we clear ARC_FLAG_NOAUTH.
1841
*/
1842
if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) {
1843
ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
1844
ASSERT3U(lsize, ==, psize);
1845
ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd,
1846
psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
1847
} else {
1848
ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize,
1849
hdr->b_crypt_hdr.b_mac);
1850
}
1851
1852
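/*
 * Editorial note: a missing key (ENOENT) is not treated as an error
 * here, since authentication is attempted only on a best-effort basis.
 */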
if (ret == 0)
1853
arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH);
1854
else if (ret == ENOENT)
1855
ret = 0;
1856
1857
if (free_abd)
1858
abd_free(abd);
1859
1860
return (ret);
1861
}
1862
1863
/*
1864
* This function will take a header that only has raw encrypted data in
1865
* b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in
1866
* b_l1hdr.b_pabd. If designated in the header flags, this function will
1867
* also decompress the data.
1868
*/
1869
static int
1870
arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb)
1871
{
1872
int ret;
1873
abd_t *cabd = NULL;
1874
boolean_t no_crypt = B_FALSE;
1875
boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
1876
1877
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1878
ASSERT(HDR_ENCRYPTED(hdr));
1879
1880
arc_hdr_alloc_abd(hdr, 0);
1881
1882
ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot,
1883
B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv,
1884
hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd,
1885
hdr->b_crypt_hdr.b_rabd, &no_crypt);
1886
if (ret != 0)
1887
goto error;
1888
1889
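/*
 * Editorial note: no_crypt means the block did not actually require
 * decryption, so the raw data is simply copied into b_pabd.
 */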
if (no_crypt) {
1890
abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd,
1891
HDR_GET_PSIZE(hdr));
1892
}
1893
1894
/*
1895
* If this header has disabled arc compression but the b_pabd is
1896
* compressed after decrypting it, we need to decompress the newly
1897
* decrypted data.
1898
*/
1899
if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
1900
!HDR_COMPRESSION_ENABLED(hdr)) {
1901
/*
1902
* We want to make sure that we are correctly honoring the
1903
* zfs_abd_scatter_enabled setting, so we allocate an abd here
1904
* and then loan a buffer from it, rather than allocating a
1905
* linear buffer and wrapping it in an abd later.
1906
*/
1907
cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 0);
1908
1909
ret = zio_decompress_data(HDR_GET_COMPRESS(hdr),
1910
hdr->b_l1hdr.b_pabd, cabd, HDR_GET_PSIZE(hdr),
1911
HDR_GET_LSIZE(hdr), &hdr->b_complevel);
1912
if (ret != 0) {
1913
goto error;
1914
}
1915
1916
arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
1917
arc_hdr_size(hdr), hdr);
1918
hdr->b_l1hdr.b_pabd = cabd;
1919
}
1920
1921
return (0);
1922
1923
error:
1924
arc_hdr_free_abd(hdr, B_FALSE);
1925
if (cabd != NULL)
1926
arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr);
1927
1928
return (ret);
1929
}
1930
1931
/*
1932
* This function is called during arc_buf_fill() to prepare the header's
1933
* abd plaintext pointer for use. This involves authenticating protected
1934
* data and decrypting encrypted data into the plaintext abd.
1935
*/
1936
static int
1937
arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa,
1938
const zbookmark_phys_t *zb, boolean_t noauth)
1939
{
1940
int ret;
1941
1942
ASSERT(HDR_PROTECTED(hdr));
1943
1944
if (hash_lock != NULL)
1945
mutex_enter(hash_lock);
1946
1947
if (HDR_NOAUTH(hdr) && !noauth) {
1948
/*
1949
* The caller requested authenticated data but our data has
1950
* not been authenticated yet. Verify the MAC now if we can.
1951
*/
1952
ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset);
1953
if (ret != 0)
1954
goto error;
1955
} else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) {
1956
/*
1957
* If we only have the encrypted version of the data, but the
1958
* unencrypted version was requested we take this opportunity
1959
* to store the decrypted version in the header for future use.
1960
*/
1961
ret = arc_hdr_decrypt(hdr, spa, zb);
1962
if (ret != 0)
1963
goto error;
1964
}
1965
1966
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
1967
1968
if (hash_lock != NULL)
1969
mutex_exit(hash_lock);
1970
1971
return (0);
1972
1973
error:
1974
if (hash_lock != NULL)
1975
mutex_exit(hash_lock);
1976
1977
return (ret);
1978
}
1979
1980
/*
1981
* This function is used by the dbuf code to decrypt bonus buffers in place.
1982
* The dbuf code itself doesn't have any locking for decrypting a shared dnode
1983
* block, so we use the hash lock here to protect against concurrent calls to
1984
* arc_buf_fill().
1985
*/
1986
static void
1987
arc_buf_untransform_in_place(arc_buf_t *buf)
1988
{
1989
arc_buf_hdr_t *hdr = buf->b_hdr;
1990
1991
ASSERT(HDR_ENCRYPTED(hdr));
1992
ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);
1993
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
1994
ASSERT3PF(hdr->b_l1hdr.b_pabd, !=, NULL, "hdr %px buf %px", hdr, buf);
1995
1996
zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data,
1997
arc_buf_size(buf));
1998
buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
1999
buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
2000
}
2001
2002
/*
2003
* Given a buf that has a data buffer attached to it, this function will
2004
* efficiently fill the buf with data of the specified compression setting from
2005
* the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr
2006
* are already sharing a data buf, no copy is performed.
2007
*
2008
* If the buf is marked as compressed but uncompressed data was requested, this
2009
* will allocate a new data buffer for the buf, remove that flag, and fill the
2010
* buf with uncompressed data. You can't request a compressed buf on a hdr with
2011
* uncompressed data, and (since we haven't added support for it yet) if you
2012
* want compressed data your buf must already be marked as compressed and have
2013
* the correct-sized data buffer.
2014
*/
2015
static int
2016
arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
2017
arc_fill_flags_t flags)
2018
{
2019
int error = 0;
2020
arc_buf_hdr_t *hdr = buf->b_hdr;
2021
boolean_t hdr_compressed =
2022
(arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
2023
boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0;
2024
boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0;
2025
dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap;
2026
kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? NULL : HDR_LOCK(hdr);
2027
2028
ASSERT3P(buf->b_data, !=, NULL);
2029
IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf));
2030
IMPLY(compressed, ARC_BUF_COMPRESSED(buf));
2031
IMPLY(encrypted, HDR_ENCRYPTED(hdr));
2032
IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf));
2033
IMPLY(encrypted, ARC_BUF_COMPRESSED(buf));
2034
IMPLY(encrypted, !arc_buf_is_shared(buf));
2035
2036
/*
2037
* If the caller wanted encrypted data we just need to copy it from
2038
* b_rabd and potentially byteswap it. We won't be able to do any
2039
* further transforms on it.
2040
*/
2041
if (encrypted) {
2042
ASSERT(HDR_HAS_RABD(hdr));
2043
abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd,
2044
HDR_GET_PSIZE(hdr));
2045
goto byteswap;
2046
}
2047
2048
/*
2049
* Adjust encrypted and authenticated headers to accommodate
2050
* the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are
2051
* allowed to fail decryption due to keys not being loaded
2052
* without being marked as an IO error.
2053
*/
2054
if (HDR_PROTECTED(hdr)) {
2055
error = arc_fill_hdr_crypt(hdr, hash_lock, spa,
2056
zb, !!(flags & ARC_FILL_NOAUTH));
2057
if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) {
2058
return (error);
2059
} else if (error != 0) {
2060
if (hash_lock != NULL)
2061
mutex_enter(hash_lock);
2062
arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
2063
if (hash_lock != NULL)
2064
mutex_exit(hash_lock);
2065
return (error);
2066
}
2067
}
2068
2069
/*
2070
* There is a special case here for dnode blocks which are
2071
* decrypting their bonus buffers. These blocks may request to
2072
* be decrypted in-place. This is necessary because there may
2073
* be many dnodes pointing into this buffer and there is
2074
* currently no method to synchronize replacing the backing
2075
* b_data buffer and updating all of the pointers. Here we use
2076
* the hash lock to ensure there are no races. If the need
2077
* arises for other types to be decrypted in-place, they must
2078
* add handling here as well.
2079
*/
2080
if ((flags & ARC_FILL_IN_PLACE) != 0) {
2081
ASSERT(!hdr_compressed);
2082
ASSERT(!compressed);
2083
ASSERT(!encrypted);
2084
2085
if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) {
2086
ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);
2087
2088
if (hash_lock != NULL)
2089
mutex_enter(hash_lock);
2090
arc_buf_untransform_in_place(buf);
2091
if (hash_lock != NULL)
2092
mutex_exit(hash_lock);
2093
2094
/* Compute the hdr's checksum if necessary */
2095
arc_cksum_compute(buf);
2096
}
2097
2098
return (0);
2099
}
2100
2101
if (hdr_compressed == compressed) {
2102
if (ARC_BUF_SHARED(buf)) {
2103
ASSERT(arc_buf_is_shared(buf));
2104
} else {
2105
abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd,
2106
arc_buf_size(buf));
2107
}
2108
} else {
2109
ASSERT(hdr_compressed);
2110
ASSERT(!compressed);
2111
2112
/*
2113
* If the buf is sharing its data with the hdr, unlink it and
2114
* allocate a new data buffer for the buf.
2115
*/
2116
if (ARC_BUF_SHARED(buf)) {
2117
ASSERTF(ARC_BUF_COMPRESSED(buf),
2118
"buf %p was uncompressed", buf);
2119
2120
/* We need to give the buf its own b_data */
2121
buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
2122
buf->b_data =
2123
arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
2124
arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
2125
2126
/* Previously overhead was 0; just add new overhead */
2127
ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr));
2128
} else if (ARC_BUF_COMPRESSED(buf)) {
2129
ASSERT(!arc_buf_is_shared(buf));
2130
2131
/* We need to reallocate the buf's b_data */
2132
arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr),
2133
buf);
2134
buf->b_data =
2135
arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
2136
2137
/* We increased the size of b_data; update overhead */
2138
ARCSTAT_INCR(arcstat_overhead_size,
2139
HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr));
2140
}
2141
2142
/*
2143
* Regardless of the buf's previous compression settings, it
2144
* should not be compressed at the end of this function.
2145
*/
2146
buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
2147
2148
/*
2149
* Try copying the data from another buf which already has a
2150
* decompressed version. If that's not possible, it's time to
2151
* bite the bullet and decompress the data from the hdr.
2152
*/
2153
if (arc_buf_try_copy_decompressed_data(buf)) {
2154
/* Skip byteswapping and checksumming (already done) */
2155
return (0);
2156
} else {
2157
abd_t dabd;
2158
abd_get_from_buf_struct(&dabd, buf->b_data,
2159
HDR_GET_LSIZE(hdr));
2160
error = zio_decompress_data(HDR_GET_COMPRESS(hdr),
2161
hdr->b_l1hdr.b_pabd, &dabd,
2162
HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr),
2163
&hdr->b_complevel);
2164
abd_free(&dabd);
2165
2166
/*
2167
* Absent hardware errors or software bugs, this should
2168
* be impossible, but log it anyway so we can debug it.
2169
*/
2170
if (error != 0) {
2171
zfs_dbgmsg(
2172
"hdr %px, compress %d, psize %d, lsize %d",
2173
hdr, arc_hdr_get_compress(hdr),
2174
HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));
2175
if (hash_lock != NULL)
2176
mutex_enter(hash_lock);
2177
arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
2178
if (hash_lock != NULL)
2179
mutex_exit(hash_lock);
2180
return (SET_ERROR(EIO));
2181
}
2182
}
2183
}
2184
2185
byteswap:
2186
/* Byteswap the buf's data if necessary */
2187
if (bswap != DMU_BSWAP_NUMFUNCS) {
2188
ASSERT(!HDR_SHARED_DATA(hdr));
2189
ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS);
2190
dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr));
2191
}
2192
2193
/* Compute the hdr's checksum if necessary */
2194
arc_cksum_compute(buf);
2195
2196
return (0);
2197
}
2198
2199
/*
2200
* If this function is being called to decrypt an encrypted buffer or verify an
2201
* authenticated one, the key must be loaded and a mapping must be made
2202
* available in the keystore via spa_keystore_create_mapping() or one of its
2203
* callers.
2204
*/
2205
int
2206
arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
2207
boolean_t in_place)
2208
{
2209
int ret;
2210
arc_fill_flags_t flags = 0;
2211
2212
if (in_place)
2213
flags |= ARC_FILL_IN_PLACE;
2214
2215
ret = arc_buf_fill(buf, spa, zb, flags);
2216
if (ret == ECKSUM) {
2217
/*
2218
* Convert authentication and decryption errors to EIO
2219
* (and generate an ereport) before leaving the ARC.
2220
*/
2221
ret = SET_ERROR(EIO);
2222
spa_log_error(spa, zb, buf->b_hdr->b_birth);
2223
(void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION,
2224
spa, NULL, zb, NULL, 0);
2225
}
2226
2227
return (ret);
2228
}
2229
2230
/*
2231
* Increment the amount of evictable space in the arc_state_t's refcount.
2232
* We account for the space used by the hdr and the arc buf individually
2233
* so that we can add and remove them from the refcount individually.
2234
*/
2235
static void
2236
arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state)
2237
{
2238
arc_buf_contents_t type = arc_buf_type(hdr);
2239
2240
ASSERT(HDR_HAS_L1HDR(hdr));
2241
2242
if (GHOST_STATE(state)) {
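/* Editorial note: ghost headers carry no data, so only the logical size is counted. */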
2243
ASSERT0P(hdr->b_l1hdr.b_buf);
2244
ASSERT0P(hdr->b_l1hdr.b_pabd);
2245
ASSERT(!HDR_HAS_RABD(hdr));
2246
(void) zfs_refcount_add_many(&state->arcs_esize[type],
2247
HDR_GET_LSIZE(hdr), hdr);
2248
return;
2249
}
2250
2251
if (hdr->b_l1hdr.b_pabd != NULL) {
2252
(void) zfs_refcount_add_many(&state->arcs_esize[type],
2253
arc_hdr_size(hdr), hdr);
2254
}
2255
if (HDR_HAS_RABD(hdr)) {
2256
(void) zfs_refcount_add_many(&state->arcs_esize[type],
2257
HDR_GET_PSIZE(hdr), hdr);
2258
}
2259
2260
for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
2261
buf = buf->b_next) {
2262
if (ARC_BUF_SHARED(buf))
2263
continue;
2264
(void) zfs_refcount_add_many(&state->arcs_esize[type],
2265
arc_buf_size(buf), buf);
2266
}
2267
}
2268
2269
/*
2270
* Decrement the amount of evictable space in the arc_state_t's refcount.
2271
* We account for the space used by the hdr and the arc buf individually
2272
* so that we can add and remove them from the refcount individually.
2273
*/
2274
static void
2275
arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state)
2276
{
2277
arc_buf_contents_t type = arc_buf_type(hdr);
2278
2279
ASSERT(HDR_HAS_L1HDR(hdr));
2280
2281
if (GHOST_STATE(state)) {
2282
ASSERT0P(hdr->b_l1hdr.b_buf);
2283
ASSERT0P(hdr->b_l1hdr.b_pabd);
2284
ASSERT(!HDR_HAS_RABD(hdr));
2285
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
2286
HDR_GET_LSIZE(hdr), hdr);
2287
return;
2288
}
2289
2290
if (hdr->b_l1hdr.b_pabd != NULL) {
2291
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
2292
arc_hdr_size(hdr), hdr);
2293
}
2294
if (HDR_HAS_RABD(hdr)) {
2295
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
2296
HDR_GET_PSIZE(hdr), hdr);
2297
}
2298
2299
for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
2300
buf = buf->b_next) {
2301
if (ARC_BUF_SHARED(buf))
2302
continue;
2303
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
2304
arc_buf_size(buf), buf);
2305
}
2306
}
2307
2308
/*
2309
* Add a reference to this hdr indicating that someone is actively
2310
* referencing that memory. When the refcount transitions from 0 to 1,
2311
* we remove it from the respective arc_state_t list to indicate that
2312
* it is not evictable.
2313
*/
2314
static void
2315
add_reference(arc_buf_hdr_t *hdr, const void *tag)
2316
{
2317
arc_state_t *state = hdr->b_l1hdr.b_state;
2318
2319
ASSERT(HDR_HAS_L1HDR(hdr));
2320
if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) {
2321
ASSERT(state == arc_anon);
2322
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
2323
ASSERT0P(hdr->b_l1hdr.b_buf);
2324
}
2325
2326
if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
2327
state != arc_anon && state != arc_l2c_only) {
2328
/* We don't use the L2-only state list. */
2329
multilist_remove(&state->arcs_list[arc_buf_type(hdr)], hdr);
2330
arc_evictable_space_decrement(hdr, state);
2331
}
2332
}
2333
2334
/*
2335
* Remove a reference from this hdr. When the reference transitions from
2336
* 1 to 0 and the state is not anonymous, we add this hdr to the
* arc_state_t's list, making it eligible for eviction.
2338
*/
2339
static int
2340
remove_reference(arc_buf_hdr_t *hdr, const void *tag)
2341
{
2342
int cnt;
2343
arc_state_t *state = hdr->b_l1hdr.b_state;
2344
2345
ASSERT(HDR_HAS_L1HDR(hdr));
2346
ASSERT(state == arc_anon || MUTEX_HELD(HDR_LOCK(hdr)));
2347
ASSERT(!GHOST_STATE(state)); /* arc_l2c_only counts as a ghost. */
2348
2349
if ((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) != 0)
2350
return (cnt);
2351
2352
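/* Editorial note: the last hold on an anonymous header destroys it immediately. */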
if (state == arc_anon) {
2353
arc_hdr_destroy(hdr);
2354
return (0);
2355
}
2356
if (state == arc_uncached && !HDR_PREFETCH(hdr)) {
2357
arc_change_state(arc_anon, hdr);
2358
arc_hdr_destroy(hdr);
2359
return (0);
2360
}
2361
multilist_insert(&state->arcs_list[arc_buf_type(hdr)], hdr);
2362
arc_evictable_space_increment(hdr, state);
2363
return (0);
2364
}
2365
2366
/*
2367
* Returns detailed information about a specific arc buffer. When the
2368
* state_index argument is set the function will calculate the arc header
2369
* list position for its arc state. Since this requires a linear traversal
2370
* callers are strongly encouraged not to do this. However, it can be helpful
2371
* for targeted analysis so the functionality is provided.
2372
*/
2373
void
2374
arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
2375
{
2376
(void) state_index;
2377
arc_buf_hdr_t *hdr = ab->b_hdr;
2378
l1arc_buf_hdr_t *l1hdr = NULL;
2379
l2arc_buf_hdr_t *l2hdr = NULL;
2380
arc_state_t *state = NULL;
2381
2382
memset(abi, 0, sizeof (arc_buf_info_t));
2383
2384
if (hdr == NULL)
2385
return;
2386
2387
abi->abi_flags = hdr->b_flags;
2388
2389
if (HDR_HAS_L1HDR(hdr)) {
2390
l1hdr = &hdr->b_l1hdr;
2391
state = l1hdr->b_state;
2392
}
2393
if (HDR_HAS_L2HDR(hdr))
2394
l2hdr = &hdr->b_l2hdr;
2395
2396
if (l1hdr) {
2397
abi->abi_bufcnt = 0;
2398
for (arc_buf_t *buf = l1hdr->b_buf; buf; buf = buf->b_next)
2399
abi->abi_bufcnt++;
2400
abi->abi_access = l1hdr->b_arc_access;
2401
abi->abi_mru_hits = l1hdr->b_mru_hits;
2402
abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
2403
abi->abi_mfu_hits = l1hdr->b_mfu_hits;
2404
abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
2405
abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt);
2406
}
2407
2408
if (l2hdr) {
2409
abi->abi_l2arc_dattr = l2hdr->b_daddr;
2410
abi->abi_l2arc_hits = l2hdr->b_hits;
2411
}
2412
2413
abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON;
2414
abi->abi_state_contents = arc_buf_type(hdr);
2415
abi->abi_size = arc_hdr_size(hdr);
2416
}
2417
2418
/*
2419
* Move the supplied buffer to the indicated state. The hash lock
2420
* for the buffer must be held by the caller.
2421
*/
2422
static void
2423
arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr)
2424
{
2425
arc_state_t *old_state;
2426
int64_t refcnt;
2427
boolean_t update_old, update_new;
2428
arc_buf_contents_t type = arc_buf_type(hdr);
2429
2430
/*
2431
* We almost always have an L1 hdr here, since we call arc_hdr_realloc()
2432
* in arc_read() when bringing a buffer out of the L2ARC. However, the
2433
* L1 hdr doesn't always exist when we change state to arc_anon before
2434
* destroying a header, in which case reallocating to add the L1 hdr is
2435
* pointless.
2436
*/
2437
if (HDR_HAS_L1HDR(hdr)) {
2438
old_state = hdr->b_l1hdr.b_state;
2439
refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt);
2440
update_old = (hdr->b_l1hdr.b_buf != NULL ||
2441
hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
2442
2443
IMPLY(GHOST_STATE(old_state), hdr->b_l1hdr.b_buf == NULL);
2444
IMPLY(GHOST_STATE(new_state), hdr->b_l1hdr.b_buf == NULL);
2445
IMPLY(old_state == arc_anon, hdr->b_l1hdr.b_buf == NULL ||
2446
ARC_BUF_LAST(hdr->b_l1hdr.b_buf));
2447
} else {
2448
old_state = arc_l2c_only;
2449
refcnt = 0;
2450
update_old = B_FALSE;
2451
}
2452
update_new = update_old;
2453
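/*
 * Editorial note: ghost lists account for headers by lsize even without
 * data buffers, so ghost transitions always update the size counts.
 */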
if (GHOST_STATE(old_state))
2454
update_old = B_TRUE;
2455
if (GHOST_STATE(new_state))
2456
update_new = B_TRUE;
2457
2458
ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
2459
ASSERT3P(new_state, !=, old_state);
2460
2461
/*
2462
* If this buffer is evictable, transfer it from the
2463
* old state list to the new state list.
2464
*/
2465
if (refcnt == 0) {
2466
if (old_state != arc_anon && old_state != arc_l2c_only) {
2467
ASSERT(HDR_HAS_L1HDR(hdr));
2468
/* remove_reference() saves on insert. */
2469
if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
2470
multilist_remove(&old_state->arcs_list[type],
2471
hdr);
2472
arc_evictable_space_decrement(hdr, old_state);
2473
}
2474
}
2475
if (new_state != arc_anon && new_state != arc_l2c_only) {
2476
/*
2477
* An L1 header always exists here, since if we're
2478
* moving to some L1-cached state (i.e. not l2c_only or
2479
* anonymous), we realloc the header to add an L1hdr
2480
* beforehand.
2481
*/
2482
ASSERT(HDR_HAS_L1HDR(hdr));
2483
multilist_insert(&new_state->arcs_list[type], hdr);
2484
arc_evictable_space_increment(hdr, new_state);
2485
}
2486
}
2487
2488
ASSERT(!HDR_EMPTY(hdr));
2489
if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr))
2490
buf_hash_remove(hdr);
2491
2492
/* adjust state sizes (ignore arc_l2c_only) */
2493
2494
if (update_new && new_state != arc_l2c_only) {
2495
ASSERT(HDR_HAS_L1HDR(hdr));
2496
if (GHOST_STATE(new_state)) {
2497
2498
/*
2499
* When moving a header to a ghost state, we first
2500
* remove all arc buffers. Thus, we'll have no arc
2501
* buffer to use for the reference. As a result, we
2502
* use the arc header pointer for the reference.
2503
*/
2504
(void) zfs_refcount_add_many(
2505
&new_state->arcs_size[type],
2506
HDR_GET_LSIZE(hdr), hdr);
2507
ASSERT0P(hdr->b_l1hdr.b_pabd);
2508
ASSERT(!HDR_HAS_RABD(hdr));
2509
} else {
2510
2511
/*
2512
* Each individual buffer holds a unique reference,
2513
* thus we must remove each of these references one
2514
* at a time.
2515
*/
2516
for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
2517
buf = buf->b_next) {
2518
2519
/*
2520
* When the arc_buf_t is sharing the data
2521
* block with the hdr, the owner of the
2522
* reference belongs to the hdr. Only
2523
* add to the refcount if the arc_buf_t is
2524
* not shared.
2525
*/
2526
if (ARC_BUF_SHARED(buf))
2527
continue;
2528
2529
(void) zfs_refcount_add_many(
2530
&new_state->arcs_size[type],
2531
arc_buf_size(buf), buf);
2532
}
2533
2534
if (hdr->b_l1hdr.b_pabd != NULL) {
2535
(void) zfs_refcount_add_many(
2536
&new_state->arcs_size[type],
2537
arc_hdr_size(hdr), hdr);
2538
}
2539
2540
if (HDR_HAS_RABD(hdr)) {
2541
(void) zfs_refcount_add_many(
2542
&new_state->arcs_size[type],
2543
HDR_GET_PSIZE(hdr), hdr);
2544
}
2545
}
2546
}
2547
2548
if (update_old && old_state != arc_l2c_only) {
2549
ASSERT(HDR_HAS_L1HDR(hdr));
2550
if (GHOST_STATE(old_state)) {
2551
ASSERT0P(hdr->b_l1hdr.b_pabd);
2552
ASSERT(!HDR_HAS_RABD(hdr));
2553
2554
/*
2555
* When moving a header off of a ghost state,
2556
* the header will not contain any arc buffers.
2557
* We use the arc header pointer for the reference
2558
* which is exactly what we did when we put the
2559
* header on the ghost state.
2560
*/
2561
2562
(void) zfs_refcount_remove_many(
2563
&old_state->arcs_size[type],
2564
HDR_GET_LSIZE(hdr), hdr);
2565
} else {
2566
2567
/*
2568
* Each individual buffer holds a unique reference,
2569
* thus we must remove each of these references one
2570
* at a time.
2571
*/
2572
for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
2573
buf = buf->b_next) {
2574
2575
/*
2576
* When the arc_buf_t is sharing the data
2577
* block with the hdr, the owner of the
2578
* reference belongs to the hdr. Only
2579
* add to the refcount if the arc_buf_t is
2580
* not shared.
2581
*/
2582
if (ARC_BUF_SHARED(buf))
2583
continue;
2584
2585
(void) zfs_refcount_remove_many(
2586
&old_state->arcs_size[type],
2587
arc_buf_size(buf), buf);
2588
}
2589
ASSERT(hdr->b_l1hdr.b_pabd != NULL ||
2590
HDR_HAS_RABD(hdr));
2591
2592
if (hdr->b_l1hdr.b_pabd != NULL) {
2593
(void) zfs_refcount_remove_many(
2594
&old_state->arcs_size[type],
2595
arc_hdr_size(hdr), hdr);
2596
}
2597
2598
if (HDR_HAS_RABD(hdr)) {
2599
(void) zfs_refcount_remove_many(
2600
&old_state->arcs_size[type],
2601
HDR_GET_PSIZE(hdr), hdr);
2602
}
2603
}
2604
}
2605
2606
if (HDR_HAS_L1HDR(hdr)) {
2607
hdr->b_l1hdr.b_state = new_state;
2608
2609
if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) {
2610
l2arc_hdr_arcstats_decrement_state(hdr);
2611
hdr->b_l2hdr.b_arcs_state = new_state->arcs_state;
2612
l2arc_hdr_arcstats_increment_state(hdr);
2613
}
2614
}
2615
}
2616
2617
void
2618
arc_space_consume(uint64_t space, arc_space_type_t type)
2619
{
2620
ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);
2621
2622
switch (type) {
2623
default:
2624
break;
2625
case ARC_SPACE_DATA:
2626
ARCSTAT_INCR(arcstat_data_size, space);
2627
break;
2628
case ARC_SPACE_META:
2629
ARCSTAT_INCR(arcstat_metadata_size, space);
2630
break;
2631
case ARC_SPACE_BONUS:
2632
ARCSTAT_INCR(arcstat_bonus_size, space);
2633
break;
2634
case ARC_SPACE_DNODE:
2635
aggsum_add(&arc_sums.arcstat_dnode_size, space);
2636
break;
2637
case ARC_SPACE_DBUF:
2638
ARCSTAT_INCR(arcstat_dbuf_size, space);
2639
break;
2640
case ARC_SPACE_HDRS:
2641
ARCSTAT_INCR(arcstat_hdr_size, space);
2642
break;
2643
case ARC_SPACE_L2HDRS:
2644
aggsum_add(&arc_sums.arcstat_l2_hdr_size, space);
2645
break;
2646
case ARC_SPACE_ABD_CHUNK_WASTE:
2647
/*
2648
* Note: this includes space wasted by all scatter ABD's, not
2649
* just those allocated by the ARC. But the vast majority of
2650
* scatter ABD's come from the ARC, because other users are
2651
* very short-lived.
2652
*/
2653
ARCSTAT_INCR(arcstat_abd_chunk_waste_size, space);
2654
break;
2655
}
2656
2657
if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE)
2658
ARCSTAT_INCR(arcstat_meta_used, space);
2659
2660
aggsum_add(&arc_sums.arcstat_size, space);
2661
}
2662
2663
void
2664
arc_space_return(uint64_t space, arc_space_type_t type)
2665
{
2666
ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);
2667
2668
switch (type) {
2669
default:
2670
break;
2671
case ARC_SPACE_DATA:
2672
ARCSTAT_INCR(arcstat_data_size, -space);
2673
break;
2674
case ARC_SPACE_META:
2675
ARCSTAT_INCR(arcstat_metadata_size, -space);
2676
break;
2677
case ARC_SPACE_BONUS:
2678
ARCSTAT_INCR(arcstat_bonus_size, -space);
2679
break;
2680
case ARC_SPACE_DNODE:
2681
aggsum_add(&arc_sums.arcstat_dnode_size, -space);
2682
break;
2683
case ARC_SPACE_DBUF:
2684
ARCSTAT_INCR(arcstat_dbuf_size, -space);
2685
break;
2686
case ARC_SPACE_HDRS:
2687
ARCSTAT_INCR(arcstat_hdr_size, -space);
2688
break;
2689
case ARC_SPACE_L2HDRS:
2690
aggsum_add(&arc_sums.arcstat_l2_hdr_size, -space);
2691
break;
2692
case ARC_SPACE_ABD_CHUNK_WASTE:
2693
ARCSTAT_INCR(arcstat_abd_chunk_waste_size, -space);
2694
break;
2695
}
2696
2697
if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE)
2698
ARCSTAT_INCR(arcstat_meta_used, -space);
2699
2700
ASSERT(aggsum_compare(&arc_sums.arcstat_size, space) >= 0);
2701
aggsum_add(&arc_sums.arcstat_size, -space);
2702
}
2703
2704
/*
2705
* Given a hdr and a buf, returns whether that buf can share its b_data buffer
2706
* with the hdr's b_pabd.
2707
*/
2708
static boolean_t
2709
arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2710
{
2711
/*
2712
* The criteria for sharing a hdr's data are:
2713
* 1. the buffer is not encrypted
2714
* 2. the hdr's compression matches the buf's compression
2715
* 3. the hdr doesn't need to be byteswapped
2716
* 4. the hdr isn't already being shared
2717
* 5. the buf is either compressed or it is the last buf in the hdr list
2718
*
2719
* Criterion #5 maintains the invariant that shared uncompressed
2720
* bufs must be the final buf in the hdr's b_buf list. Reading this, you
2721
* might ask, "if a compressed buf is allocated first, won't that be the
2722
* last thing in the list?", but in that case it's impossible to create
2723
* a shared uncompressed buf anyway (because the hdr must be compressed
2724
* to have the compressed buf). You might also think that #3 is
2725
* sufficient to make this guarantee, however it's possible
2726
* (specifically in the rare L2ARC write race mentioned in
2727
* arc_buf_alloc_impl()) there will be an existing uncompressed buf that
2728
* is shareable, but wasn't at the time of its allocation. Rather than
2729
* allow a new shared uncompressed buf to be created and then shuffle
2730
* the list around to make it the last element, this simply disallows
2731
* sharing if the new buf isn't the first to be added.
2732
*/
2733
ASSERT3P(buf->b_hdr, ==, hdr);
2734
boolean_t hdr_compressed =
2735
arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF;
2736
boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0;
2737
return (!ARC_BUF_ENCRYPTED(buf) &&
2738
buf_compressed == hdr_compressed &&
2739
hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS &&
2740
!HDR_SHARED_DATA(hdr) &&
2741
(ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf)));
2742
}
2743
2744
/*
2745
* Allocate a buf for this hdr. If you care about the data that's in the hdr,
2746
* or if you want a compressed buffer, pass those flags in. Returns 0 if the
2747
* copy was made successfully, or an error code otherwise.
2748
*/
2749
static int
2750
arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb,
2751
const void *tag, boolean_t encrypted, boolean_t compressed,
2752
boolean_t noauth, boolean_t fill, arc_buf_t **ret)
2753
{
2754
arc_buf_t *buf;
2755
arc_fill_flags_t flags = ARC_FILL_LOCKED;
2756
2757
ASSERT(HDR_HAS_L1HDR(hdr));
2758
ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
2759
VERIFY(hdr->b_type == ARC_BUFC_DATA ||
2760
hdr->b_type == ARC_BUFC_METADATA);
2761
ASSERT3P(ret, !=, NULL);
2762
ASSERT0P(*ret);
2763
IMPLY(encrypted, compressed);
2764
2765
buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
2766
buf->b_hdr = hdr;
2767
buf->b_data = NULL;
2768
buf->b_next = hdr->b_l1hdr.b_buf;
2769
buf->b_flags = 0;
2770
2771
add_reference(hdr, tag);
2772
2773
/*
2774
* We're about to change the hdr's b_flags. We must either
2775
* hold the hash_lock or be undiscoverable.
2776
*/
2777
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
2778
2779
/*
2780
* Only honor requests for compressed bufs if the hdr is actually
2781
* compressed. This must be overridden if the buffer is encrypted since
2782
* encrypted buffers cannot be decompressed.
2783
*/
2784
if (encrypted) {
2785
buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
2786
buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED;
2787
flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED;
2788
} else if (compressed &&
2789
arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) {
2790
buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
2791
flags |= ARC_FILL_COMPRESSED;
2792
}
2793
2794
if (noauth) {
2795
ASSERT0(encrypted);
2796
flags |= ARC_FILL_NOAUTH;
2797
}
2798
2799
/*
2800
* If the hdr's data can be shared then we share the data buffer and
2801
* set the appropriate bit in the hdr's b_flags to indicate the hdr is
2802
* sharing its b_pabd with the arc_buf_t. Otherwise, we allocate a new
2803
* buffer to store the buf's data.
2804
*
2805
* There are two additional restrictions here because we're sharing
2806
* hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be
2807
* actively involved in an L2ARC write, because if this buf is used by
2808
* an arc_write() then the hdr's data buffer will be released when the
2809
* write completes, even though the L2ARC write might still be using it.
2810
* Second, the hdr's ABD must be linear so that the buf's user doesn't
2811
* need to be ABD-aware. It must be allocated via
2812
* zio_[data_]buf_alloc(), not as a page, because we need to be able
2813
* to abd_release_ownership_of_buf(), which isn't allowed on "linear
2814
* page" buffers because the ABD code needs to handle freeing them
2815
* specially.
2816
*/
2817
boolean_t can_share = arc_can_share(hdr, buf) &&
2818
!HDR_L2_WRITING(hdr) &&
2819
hdr->b_l1hdr.b_pabd != NULL &&
2820
abd_is_linear(hdr->b_l1hdr.b_pabd) &&
2821
!abd_is_linear_page(hdr->b_l1hdr.b_pabd);
2822
2823
/* Set up b_data and sharing */
2824
if (can_share) {
2825
buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd);
2826
buf->b_flags |= ARC_BUF_FLAG_SHARED;
2827
arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
2828
} else {
2829
buf->b_data =
2830
arc_get_data_buf(hdr, arc_buf_size(buf), buf);
2831
ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
2832
}
2833
VERIFY3P(buf->b_data, !=, NULL);
2834
2835
hdr->b_l1hdr.b_buf = buf;
2836
2837
/*
2838
* If the user wants the data from the hdr, we need to either copy or
2839
* decompress the data.
2840
*/
2841
if (fill) {
2842
ASSERT3P(zb, !=, NULL);
2843
return (arc_buf_fill(buf, spa, zb, flags));
2844
}
2845
2846
return (0);
2847
}
2848
2849
static const char *arc_onloan_tag = "onloan";
2850
2851
static inline void
2852
arc_loaned_bytes_update(int64_t delta)
2853
{
2854
atomic_add_64(&arc_loaned_bytes, delta);
2855
2856
/* assert that it did not wrap around */
2857
ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
2858
}
2859
2860
/*
2861
* Loan out an anonymous arc buffer. Loaned buffers are not counted as in
2862
* flight data by arc_tempreserve_space() until they are "returned". Loaned
2863
* buffers must be returned to the arc before they can be used by the DMU or
2864
* freed.
2865
*/
2866
arc_buf_t *
2867
arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size)
2868
{
2869
arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag,
2870
is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size);
2871
2872
arc_loaned_bytes_update(arc_buf_size(buf));
2873
2874
return (buf);
2875
}
2876
2877
arc_buf_t *
2878
arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize,
2879
enum zio_compress compression_type, uint8_t complevel)
2880
{
2881
arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag,
2882
psize, lsize, compression_type, complevel);
2883
2884
arc_loaned_bytes_update(arc_buf_size(buf));
2885
2886
return (buf);
2887
}
2888
2889
arc_buf_t *
2890
arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder,
2891
const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
2892
dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
2893
enum zio_compress compression_type, uint8_t complevel)
2894
{
2895
arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj,
2896
byteorder, salt, iv, mac, ot, psize, lsize, compression_type,
2897
complevel);
2898
2899
atomic_add_64(&arc_loaned_bytes, psize);
2900
return (buf);
2901
}
2902
2903
2904
/*
2905
* Return a loaned arc buffer to the arc.
2906
*/
2907
void
2908
arc_return_buf(arc_buf_t *buf, const void *tag)
2909
{
2910
arc_buf_hdr_t *hdr = buf->b_hdr;
2911
2912
ASSERT3P(buf->b_data, !=, NULL);
2913
ASSERT(HDR_HAS_L1HDR(hdr));
2914
(void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
2915
(void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
2916
2917
arc_loaned_bytes_update(-arc_buf_size(buf));
2918
}
2919
2920
/* Detach an arc_buf from a dbuf (tag) */
2921
void
2922
arc_loan_inuse_buf(arc_buf_t *buf, const void *tag)
2923
{
2924
arc_buf_hdr_t *hdr = buf->b_hdr;
2925
2926
ASSERT3P(buf->b_data, !=, NULL);
2927
ASSERT(HDR_HAS_L1HDR(hdr));
2928
(void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
2929
(void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
2930
2931
arc_loaned_bytes_update(arc_buf_size(buf));
2932
}
2933
2934
static void
2935
l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type)
2936
{
2937
l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP);
2938
2939
df->l2df_abd = abd;
2940
df->l2df_size = size;
2941
df->l2df_type = type;
2942
mutex_enter(&l2arc_free_on_write_mtx);
2943
list_insert_head(l2arc_free_on_write, df);
2944
mutex_exit(&l2arc_free_on_write_mtx);
2945
}
2946
2947
static void
2948
arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata)
2949
{
2950
arc_state_t *state = hdr->b_l1hdr.b_state;
2951
arc_buf_contents_t type = arc_buf_type(hdr);
2952
uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);
2953
2954
/* protected by hash lock, if in the hash table */
2955
if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
2956
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
2957
ASSERT(state != arc_anon && state != arc_l2c_only);
2958
2959
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
2960
size, hdr);
2961
}
2962
(void) zfs_refcount_remove_many(&state->arcs_size[type], size, hdr);
2963
if (type == ARC_BUFC_METADATA) {
2964
arc_space_return(size, ARC_SPACE_META);
2965
} else {
2966
ASSERT(type == ARC_BUFC_DATA);
2967
arc_space_return(size, ARC_SPACE_DATA);
2968
}
2969
2970
if (free_rdata) {
2971
l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type);
2972
} else {
2973
l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type);
2974
}
2975
}
2976
2977
/*
2978
* Share the arc_buf_t's data with the hdr. Whenever we are sharing the
2979
* data buffer, we transfer the refcount ownership to the hdr and update
2980
* the appropriate kstats.
2981
*/
2982
static void
2983
arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
2984
{
2985
ASSERT(arc_can_share(hdr, buf));
2986
ASSERT0P(hdr->b_l1hdr.b_pabd);
2987
ASSERT(!ARC_BUF_ENCRYPTED(buf));
2988
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
2989
2990
/*
2991
* Start sharing the data buffer. We transfer the
2992
* refcount ownership to the hdr since it always owns
2993
* the refcount whenever an arc_buf_t is shared.
2994
*/
2995
zfs_refcount_transfer_ownership_many(
2996
&hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)],
2997
arc_hdr_size(hdr), buf, hdr);
2998
hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf));
2999
abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd,
3000
HDR_ISTYPE_METADATA(hdr));
3001
arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
3002
buf->b_flags |= ARC_BUF_FLAG_SHARED;
3003
3004
/*
3005
* Since we've transferred ownership to the hdr we need
3006
* to increment its compressed and uncompressed kstats and
3007
* decrement the overhead size.
3008
*/
3009
ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr));
3010
ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
3011
ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf));
3012
}
3013
3014
static void
3015
arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
3016
{
3017
ASSERT(arc_buf_is_shared(buf));
3018
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
3019
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
3020
3021
/*
3022
* We are no longer sharing this buffer so we need
3023
* to transfer its ownership to the rightful owner.
3024
*/
3025
zfs_refcount_transfer_ownership_many(
3026
&hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)],
3027
arc_hdr_size(hdr), hdr, buf);
3028
arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
3029
abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd);
3030
abd_free(hdr->b_l1hdr.b_pabd);
3031
hdr->b_l1hdr.b_pabd = NULL;
3032
buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
3033
3034
/*
3035
* Since the buffer is no longer shared between
3036
* the arc buf and the hdr, count it as overhead.
3037
*/
3038
ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr));
3039
ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
3040
ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
3041
}
3042
3043
/*
3044
* Remove an arc_buf_t from the hdr's buf list and return the last
3045
* arc_buf_t on the list. If no buffers remain on the list then return
3046
* NULL.
3047
*/
3048
static arc_buf_t *
3049
arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf)
3050
{
3051
ASSERT(HDR_HAS_L1HDR(hdr));
3052
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
3053
3054
arc_buf_t **bufp = &hdr->b_l1hdr.b_buf;
3055
arc_buf_t *lastbuf = NULL;
3056
3057
/*
3058
* Remove the buf from the hdr list and locate the last
3059
* remaining buffer on the list.
3060
*/
3061
while (*bufp != NULL) {
3062
if (*bufp == buf)
3063
*bufp = buf->b_next;
3064
3065
/*
3066
* If we've removed a buffer in the middle of
3067
* the list then update the lastbuf and update
3068
* bufp.
3069
*/
3070
if (*bufp != NULL) {
3071
lastbuf = *bufp;
3072
bufp = &(*bufp)->b_next;
3073
}
3074
}
3075
buf->b_next = NULL;
3076
ASSERT3P(lastbuf, !=, buf);
3077
IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf));
3078
3079
return (lastbuf);
3080
}
3081
3082
/*
3083
* Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's
3084
* list and free it.
3085
*/
3086
static void
3087
arc_buf_destroy_impl(arc_buf_t *buf)
3088
{
3089
arc_buf_hdr_t *hdr = buf->b_hdr;
3090
3091
/*
3092
* Free up the data associated with the buf but only if we're not
3093
* sharing this with the hdr. If we are sharing it with the hdr, the
3094
* hdr is responsible for doing the free.
3095
*/
3096
if (buf->b_data != NULL) {
3097
/*
3098
* We're about to change the hdr's b_flags. We must either
3099
* hold the hash_lock or be undiscoverable.
3100
*/
3101
ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
3102
3103
arc_cksum_verify(buf);
3104
arc_buf_unwatch(buf);
3105
3106
if (ARC_BUF_SHARED(buf)) {
3107
arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
3108
} else {
3109
ASSERT(!arc_buf_is_shared(buf));
3110
uint64_t size = arc_buf_size(buf);
3111
arc_free_data_buf(hdr, buf->b_data, size, buf);
3112
ARCSTAT_INCR(arcstat_overhead_size, -size);
3113
}
3114
buf->b_data = NULL;
3115
3116
/*
3117
* If we have no more encrypted buffers and we've already
3118
* gotten a copy of the decrypted data we can free b_rabd
3119
* to save some space.
3120
*/
3121
if (ARC_BUF_ENCRYPTED(buf) && HDR_HAS_RABD(hdr) &&
3122
hdr->b_l1hdr.b_pabd != NULL && !HDR_IO_IN_PROGRESS(hdr)) {
3123
arc_buf_t *b;
3124
for (b = hdr->b_l1hdr.b_buf; b; b = b->b_next) {
3125
if (b != buf && ARC_BUF_ENCRYPTED(b))
3126
break;
3127
}
3128
if (b == NULL)
3129
arc_hdr_free_abd(hdr, B_TRUE);
3130
}
3131
}
3132
3133
arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
3134
3135
if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) {
3136
/*
3137
* If the current arc_buf_t is sharing its data buffer with the
3138
* hdr, then reassign the hdr's b_pabd to share it with the new
3139
* buffer at the end of the list. The shared buffer is always
3140
* the last one on the hdr's buffer list.
3141
*
3142
* There is an equivalent case for compressed bufs, but since
3143
* they aren't guaranteed to be the last buf in the list and
3144
* that is an exceedingly rare case, we just allow that space to be
3145
* wasted temporarily. We must also be careful not to share
3146
* encrypted buffers, since they cannot be shared.
3147
*/
3148
if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) {
3149
/* Only one buf can be shared at once */
3150
ASSERT(!arc_buf_is_shared(lastbuf));
3151
/* hdr is uncompressed so can't have compressed buf */
3152
ASSERT(!ARC_BUF_COMPRESSED(lastbuf));
3153
3154
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
3155
arc_hdr_free_abd(hdr, B_FALSE);
3156
3157
/*
3158
* We must setup a new shared block between the
3159
* last buffer and the hdr. The data would have
3160
* been allocated by the arc buf so we need to transfer
3161
* ownership to the hdr since it's now being shared.
3162
*/
3163
arc_share_buf(hdr, lastbuf);
3164
}
3165
} else if (HDR_SHARED_DATA(hdr)) {
3166
/*
3167
* Uncompressed shared buffers are always at the end
3168
* of the list. Compressed buffers don't have the
3169
* same requirements. This makes it hard to
3170
* simply assert that the lastbuf is shared so
3171
* we rely on the hdr's compression flags to determine
3172
* if we have a compressed, shared buffer.
3173
*/
3174
ASSERT3P(lastbuf, !=, NULL);
3175
ASSERT(arc_buf_is_shared(lastbuf) ||
3176
arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
3177
}
3178
3179
/*
3180
* Free the checksum if we're removing the last uncompressed buf from
3181
* this hdr.
3182
*/
3183
if (!arc_hdr_has_uncompressed_buf(hdr)) {
3184
arc_cksum_free(hdr);
3185
}
3186
3187
/* clean up the buf */
3188
buf->b_hdr = NULL;
3189
kmem_cache_free(buf_cache, buf);
3190
}
3191
3192
static void
3193
arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags)
3194
{
3195
uint64_t size;
3196
boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0);
3197
3198
ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
3199
ASSERT(HDR_HAS_L1HDR(hdr));
3200
ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata);
3201
IMPLY(alloc_rdata, HDR_PROTECTED(hdr));
3202
3203
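/*
 * Editorial note: raw (encrypted) data is always sized to the on-disk
 * psize; otherwise the allocation follows the header's compression state.
 */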
if (alloc_rdata) {
3204
size = HDR_GET_PSIZE(hdr);
3205
ASSERT0P(hdr->b_crypt_hdr.b_rabd);
3206
hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr,
3207
alloc_flags);
3208
ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL);
3209
ARCSTAT_INCR(arcstat_raw_size, size);
3210
} else {
3211
size = arc_hdr_size(hdr);
3212
ASSERT0P(hdr->b_l1hdr.b_pabd);
3213
hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr,
3214
alloc_flags);
3215
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
3216
}
3217
3218
ARCSTAT_INCR(arcstat_compressed_size, size);
3219
ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
3220
}
3221
3222
static void
3223
arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata)
3224
{
3225
uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);
3226
3227
ASSERT(HDR_HAS_L1HDR(hdr));
3228
ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
3229
IMPLY(free_rdata, HDR_HAS_RABD(hdr));
3230
3231
/*
3232
* If the hdr is currently being written to the l2arc then
3233
* we defer freeing the data by adding it to the l2arc_free_on_write
3234
* list. The l2arc will free the data once it's finished
3235
* writing it to the l2arc device.
3236
*/
3237
if (HDR_L2_WRITING(hdr)) {
3238
arc_hdr_free_on_write(hdr, free_rdata);
3239
ARCSTAT_BUMP(arcstat_l2_free_on_write);
3240
} else if (free_rdata) {
3241
arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr);
3242
} else {
3243
arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr);
3244
}
3245
3246
if (free_rdata) {
3247
hdr->b_crypt_hdr.b_rabd = NULL;
3248
ARCSTAT_INCR(arcstat_raw_size, -size);
3249
} else {
3250
hdr->b_l1hdr.b_pabd = NULL;
3251
}
3252
3253
if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr))
3254
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
3255
3256
ARCSTAT_INCR(arcstat_compressed_size, -size);
3257
ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
3258
}
3259
3260
/*
* Allocate an empty anonymous ARC header. The header will get its identity
* assigned and buffers attached later as part of read or write operations.
*
* In the case of a read, arc_read() assigns the header its identity
* (b_dva + b_birth), inserts it into the ARC hash to become globally
* visible, and allocates a physical (b_pabd) or raw (b_rabd) ABD buffer to
* read into from disk. On disk read completion arc_read_done() allocates
* ARC buffer(s) as needed, potentially sharing one of them with the
* physical ABD buffer.
*
* In the case of a write, arc_alloc_buf() allocates an ARC buffer to be
* filled with data. Then, after compression and/or encryption,
* arc_write_ready() allocates and fills (or potentially shares) the
* physical (b_pabd) or raw (b_rabd) ABD buffer. On disk write completion
* arc_write_done() assigns the header its new identity (b_dva + b_birth)
* and inserts it into the ARC hash.
*
* In the case of a partial overwrite the old data is read first as
* described. Then arc_release() either allocates a new anonymous ARC
* header and moves the ARC buffer to it, or reuses the old ARC header by
* discarding its identity and removing it from the ARC hash. After buffer
* modification the normal write process follows as described.
*/
3282
static arc_buf_hdr_t *
3283
arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize,
3284
boolean_t protected, enum zio_compress compression_type, uint8_t complevel,
3285
arc_buf_contents_t type)
3286
{
3287
arc_buf_hdr_t *hdr;
3288
3289
VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA);
3290
hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
3291
3292
ASSERT(HDR_EMPTY(hdr));
3293
#ifdef ZFS_DEBUG
3294
ASSERT0P(hdr->b_l1hdr.b_freeze_cksum);
3295
#endif
3296
HDR_SET_PSIZE(hdr, psize);
3297
HDR_SET_LSIZE(hdr, lsize);
3298
hdr->b_spa = spa;
3299
hdr->b_type = type;
3300
hdr->b_flags = 0;
3301
arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR);
3302
arc_hdr_set_compress(hdr, compression_type);
3303
hdr->b_complevel = complevel;
3304
if (protected)
3305
arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);
3306
3307
hdr->b_l1hdr.b_state = arc_anon;
3308
hdr->b_l1hdr.b_arc_access = 0;
3309
hdr->b_l1hdr.b_mru_hits = 0;
3310
hdr->b_l1hdr.b_mru_ghost_hits = 0;
3311
hdr->b_l1hdr.b_mfu_hits = 0;
3312
hdr->b_l1hdr.b_mfu_ghost_hits = 0;
3313
hdr->b_l1hdr.b_buf = NULL;
3314
3315
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
3316
3317
return (hdr);
3318
}
3319
3320
/*
3321
* Transition between the two allocation states for the arc_buf_hdr struct.
3322
* The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
3323
* (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
3324
* version is used when a cache buffer is only in the L2ARC in order to reduce
3325
* memory usage.
3326
*/
3327
static arc_buf_hdr_t *
3328
arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
3329
{
3330
ASSERT(HDR_HAS_L2HDR(hdr));
3331
3332
arc_buf_hdr_t *nhdr;
3333
l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
3334
3335
ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) ||
3336
(old == hdr_l2only_cache && new == hdr_full_cache));
3337
3338
nhdr = kmem_cache_alloc(new, KM_PUSHPAGE);
3339
3340
ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
3341
buf_hash_remove(hdr);
3342
3343
memcpy(nhdr, hdr, HDR_L2ONLY_SIZE);
3344
3345
if (new == hdr_full_cache) {
3346
arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR);
3347
/*
3348
* arc_access and arc_change_state need to be aware that a
3349
* header has just come out of L2ARC, so we set its state to
3350
* l2c_only even though it's about to change.
3351
*/
3352
nhdr->b_l1hdr.b_state = arc_l2c_only;
3353
3354
/* Verify previous threads set to NULL before freeing */
3355
ASSERT0P(nhdr->b_l1hdr.b_pabd);
3356
ASSERT(!HDR_HAS_RABD(hdr));
3357
} else {
3358
ASSERT0P(hdr->b_l1hdr.b_buf);
3359
#ifdef ZFS_DEBUG
3360
ASSERT0P(hdr->b_l1hdr.b_freeze_cksum);
3361
#endif
3362
3363
/*
* If we've reached here, we must have been called from
* arc_evict_hdr(); as such, we should have already been
* removed from any ghost list we were previously on
* (which protects us from racing with arc_evict_state),
* thus no locking is needed during this check.
*/
3370
ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
3371
3372
/*
* A buffer must not be moved into the arc_l2c_only
* state if it's not finished being written out to the
* l2arc device. Otherwise, the b_l1hdr.b_pabd field
* might be accessed even though it has been removed.
*/
3378
VERIFY(!HDR_L2_WRITING(hdr));
3379
VERIFY0P(hdr->b_l1hdr.b_pabd);
3380
ASSERT(!HDR_HAS_RABD(hdr));
3381
3382
arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR);
3383
}
3384
/*
3385
* The header has been reallocated so we need to re-insert it into any
3386
* lists it was on.
3387
*/
3388
(void) buf_hash_insert(nhdr, NULL);
3389
3390
ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node));
3391
3392
mutex_enter(&dev->l2ad_mtx);
3393
3394
/*
3395
* We must place the realloc'ed header back into the list at
3396
* the same spot. Otherwise, if it's placed earlier in the list,
3397
* l2arc_write_buffers() could find it during the function's
3398
* write phase, and try to write it out to the l2arc.
3399
*/
3400
list_insert_after(&dev->l2ad_buflist, hdr, nhdr);
3401
list_remove(&dev->l2ad_buflist, hdr);
3402
3403
mutex_exit(&dev->l2ad_mtx);
3404
3405
/*
3406
* Since we're using the pointer address as the tag when
3407
* incrementing and decrementing the l2ad_alloc refcount, we
3408
* must remove the old pointer (that we're about to destroy) and
3409
* add the new pointer to the refcount. Otherwise we'd remove
3410
* the wrong pointer address when calling arc_hdr_destroy() later.
3411
*/
3412
3413
(void) zfs_refcount_remove_many(&dev->l2ad_alloc,
3414
arc_hdr_size(hdr), hdr);
3415
(void) zfs_refcount_add_many(&dev->l2ad_alloc,
3416
arc_hdr_size(nhdr), nhdr);
3417
3418
buf_discard_identity(hdr);
3419
kmem_cache_free(old, hdr);
3420
3421
return (nhdr);
3422
}
3423
3424
/*
3425
* This function is used by the send / receive code to convert a newly
3426
* allocated arc_buf_t to one that is suitable for a raw encrypted write. It
3427
* is also used to allow the root objset block to be updated without altering
3428
* its embedded MACs. Both block types will always be uncompressed so we do not
3429
* have to worry about compression type or psize.
3430
*/
3431
void
3432
arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder,
3433
dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv,
3434
const uint8_t *mac)
3435
{
3436
arc_buf_hdr_t *hdr = buf->b_hdr;
3437
3438
ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET);
3439
ASSERT(HDR_HAS_L1HDR(hdr));
3440
ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
3441
3442
buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED);
3443
arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);
3444
hdr->b_crypt_hdr.b_dsobj = dsobj;
3445
hdr->b_crypt_hdr.b_ot = ot;
3446
hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
3447
DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
3448
if (!arc_hdr_has_uncompressed_buf(hdr))
3449
arc_cksum_free(hdr);
3450
3451
if (salt != NULL)
3452
memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN);
3453
if (iv != NULL)
3454
memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN);
3455
if (mac != NULL)
3456
memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN);
3457
}
3458
3459
/*
3460
* Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller.
3461
* The buf is returned thawed since we expect the consumer to modify it.
3462
*/
3463
arc_buf_t *
3464
arc_alloc_buf(spa_t *spa, const void *tag, arc_buf_contents_t type,
3465
int32_t size)
3466
{
3467
arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size,
3468
B_FALSE, ZIO_COMPRESS_OFF, 0, type);
3469
3470
arc_buf_t *buf = NULL;
3471
VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE,
3472
B_FALSE, B_FALSE, &buf));
3473
arc_buf_thaw(buf);
3474
3475
return (buf);
3476
}
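/*
* Illustrative sketch of a possible consumer (the variables spa, tag and
* src are placeholders and 8192 is an arbitrary example size):
*
*	arc_buf_t *ab = arc_alloc_buf(spa, tag, ARC_BUFC_DATA, 8192);
*	memcpy(ab->b_data, src, 8192);	(fill the thawed buffer)
*	...
*	arc_buf_destroy(ab, tag);	(drop the reference when done)
*/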
3477
3478
/*
3479
* Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this
3480
* for bufs containing metadata.
3481
*/
3482
arc_buf_t *
3483
arc_alloc_compressed_buf(spa_t *spa, const void *tag, uint64_t psize,
3484
uint64_t lsize, enum zio_compress compression_type, uint8_t complevel)
3485
{
3486
ASSERT3U(lsize, >, 0);
3487
ASSERT3U(lsize, >=, psize);
3488
ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF);
3489
ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);
3490
3491
arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
3492
B_FALSE, compression_type, complevel, ARC_BUFC_DATA);
3493
3494
arc_buf_t *buf = NULL;
3495
VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE,
3496
B_TRUE, B_FALSE, B_FALSE, &buf));
3497
arc_buf_thaw(buf);
3498
3499
/*
3500
* To ensure that the hdr has the correct data in it if we call
3501
* arc_untransform() on this buf before it's been written to disk,
3502
* it's easiest if we just set up sharing between the buf and the hdr.
3503
*/
3504
arc_share_buf(hdr, buf);
3505
3506
return (buf);
3507
}
3508
3509
arc_buf_t *
3510
arc_alloc_raw_buf(spa_t *spa, const void *tag, uint64_t dsobj,
3511
boolean_t byteorder, const uint8_t *salt, const uint8_t *iv,
3512
const uint8_t *mac, dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
3513
enum zio_compress compression_type, uint8_t complevel)
3514
{
3515
arc_buf_hdr_t *hdr;
3516
arc_buf_t *buf;
3517
arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ?
3518
ARC_BUFC_METADATA : ARC_BUFC_DATA;
3519
3520
ASSERT3U(lsize, >, 0);
3521
ASSERT3U(lsize, >=, psize);
3522
ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF);
3523
ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);
3524
3525
hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE,
3526
compression_type, complevel, type);
3527
3528
hdr->b_crypt_hdr.b_dsobj = dsobj;
3529
hdr->b_crypt_hdr.b_ot = ot;
3530
hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
3531
DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
3532
memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN);
3533
memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN);
3534
memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN);
3535
3536
/*
3537
* This buffer will be considered encrypted even if the ot is not an
3538
* encrypted type. It will become authenticated instead in
3539
* arc_write_ready().
3540
*/
3541
buf = NULL;
3542
VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE,
3543
B_FALSE, B_FALSE, &buf));
3544
arc_buf_thaw(buf);
3545
3546
return (buf);
3547
}
3548
3549
static void
3550
l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
3551
boolean_t state_only)
3552
{
3553
uint64_t lsize = HDR_GET_LSIZE(hdr);
3554
uint64_t psize = HDR_GET_PSIZE(hdr);
3555
uint64_t asize = HDR_GET_L2SIZE(hdr);
3556
arc_buf_contents_t type = hdr->b_type;
3557
int64_t lsize_s;
3558
int64_t psize_s;
3559
int64_t asize_s;
3560
3561
/* For L2 we expect the header's b_l2size to be valid */
3562
ASSERT3U(asize, >=, psize);
3563
3564
if (incr) {
3565
lsize_s = lsize;
3566
psize_s = psize;
3567
asize_s = asize;
3568
} else {
3569
lsize_s = -lsize;
3570
psize_s = -psize;
3571
asize_s = -asize;
3572
}
3573
3574
/* If the buffer is a prefetch, count it as such. */
3575
if (HDR_PREFETCH(hdr)) {
3576
ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s);
3577
} else {
3578
/*
3579
* We use the value stored in the L2 header upon initial
3580
* caching in L2ARC. This value will be updated in case
3581
* an MRU/MRU_ghost buffer transitions to MFU but the L2ARC
3582
* metadata (log entry) cannot currently be updated. Having
3583
* the ARC state in the L2 header solves the problem of a
3584
* possibly absent L1 header (apparent in buffers restored
3585
* from persistent L2ARC).
3586
*/
3587
switch (hdr->b_l2hdr.b_arcs_state) {
3588
case ARC_STATE_MRU_GHOST:
3589
case ARC_STATE_MRU:
3590
ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s);
3591
break;
3592
case ARC_STATE_MFU_GHOST:
3593
case ARC_STATE_MFU:
3594
ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s);
3595
break;
3596
default:
3597
break;
3598
}
3599
}
3600
3601
if (state_only)
3602
return;
3603
3604
ARCSTAT_INCR(arcstat_l2_psize, psize_s);
3605
ARCSTAT_INCR(arcstat_l2_lsize, lsize_s);
3606
3607
switch (type) {
3608
case ARC_BUFC_DATA:
3609
ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s);
3610
break;
3611
case ARC_BUFC_METADATA:
3612
ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s);
3613
break;
3614
default:
3615
break;
3616
}
3617
}
3618
3619
3620
static void
3621
arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
3622
{
3623
l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr;
3624
l2arc_dev_t *dev = l2hdr->b_dev;
3625
3626
ASSERT(MUTEX_HELD(&dev->l2ad_mtx));
3627
ASSERT(HDR_HAS_L2HDR(hdr));
3628
3629
list_remove(&dev->l2ad_buflist, hdr);
3630
3631
l2arc_hdr_arcstats_decrement(hdr);
3632
if (dev->l2ad_vdev != NULL) {
3633
uint64_t asize = HDR_GET_L2SIZE(hdr);
3634
vdev_space_update(dev->l2ad_vdev, -asize, 0, 0);
3635
}
3636
3637
(void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr),
3638
hdr);
3639
arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
3640
}
3641
3642
static void
3643
arc_hdr_destroy(arc_buf_hdr_t *hdr)
3644
{
3645
if (HDR_HAS_L1HDR(hdr)) {
3646
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
3647
ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
3648
}
3649
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3650
ASSERT(!HDR_IN_HASH_TABLE(hdr));
3651
3652
if (HDR_HAS_L2HDR(hdr)) {
3653
l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
3654
boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx);
3655
3656
if (!buflist_held)
3657
mutex_enter(&dev->l2ad_mtx);
3658
3659
/*
3660
* Even though we checked this conditional above, we
3661
* need to check this again now that we have the
3662
* l2ad_mtx. This is because we could be racing with
3663
* another thread calling l2arc_evict() which might have
3664
* destroyed this header's L2 portion as we were waiting
3665
* to acquire the l2ad_mtx. If that happens, we don't
3666
* want to re-destroy the header's L2 portion.
3667
*/
3668
if (HDR_HAS_L2HDR(hdr)) {
3669
3670
if (!HDR_EMPTY(hdr))
3671
buf_discard_identity(hdr);
3672
3673
arc_hdr_l2hdr_destroy(hdr);
3674
}
3675
3676
if (!buflist_held)
3677
mutex_exit(&dev->l2ad_mtx);
3678
}
3679
3680
/*
* The header's identity can only be safely discarded once it is no
* longer discoverable. This requires removing it from the hash table
* and the l2arc header list. After this point the hash lock cannot
* be used to protect the header.
*/
3686
if (!HDR_EMPTY(hdr))
3687
buf_discard_identity(hdr);
3688
3689
if (HDR_HAS_L1HDR(hdr)) {
3690
arc_cksum_free(hdr);
3691
3692
while (hdr->b_l1hdr.b_buf != NULL)
3693
arc_buf_destroy_impl(hdr->b_l1hdr.b_buf);
3694
3695
if (hdr->b_l1hdr.b_pabd != NULL)
3696
arc_hdr_free_abd(hdr, B_FALSE);
3697
3698
if (HDR_HAS_RABD(hdr))
3699
arc_hdr_free_abd(hdr, B_TRUE);
3700
}
3701
3702
ASSERT0P(hdr->b_hash_next);
3703
if (HDR_HAS_L1HDR(hdr)) {
3704
ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
3705
ASSERT0P(hdr->b_l1hdr.b_acb);
3706
#ifdef ZFS_DEBUG
3707
ASSERT0P(hdr->b_l1hdr.b_freeze_cksum);
3708
#endif
3709
kmem_cache_free(hdr_full_cache, hdr);
3710
} else {
3711
kmem_cache_free(hdr_l2only_cache, hdr);
3712
}
3713
}
3714
3715
void
3716
arc_buf_destroy(arc_buf_t *buf, const void *tag)
3717
{
3718
arc_buf_hdr_t *hdr = buf->b_hdr;
3719
3720
if (hdr->b_l1hdr.b_state == arc_anon) {
3721
ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf);
3722
ASSERT(ARC_BUF_LAST(buf));
3723
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3724
VERIFY0(remove_reference(hdr, tag));
3725
return;
3726
}
3727
3728
kmutex_t *hash_lock = HDR_LOCK(hdr);
3729
mutex_enter(hash_lock);
3730
3731
ASSERT3P(hdr, ==, buf->b_hdr);
3732
ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL);
3733
ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
3734
ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon);
3735
ASSERT3P(buf->b_data, !=, NULL);
3736
3737
arc_buf_destroy_impl(buf);
3738
(void) remove_reference(hdr, tag);
3739
mutex_exit(hash_lock);
3740
}
3741
3742
/*
3743
* Evict the arc_buf_hdr that is provided as a parameter. The resultant
3744
* state of the header is dependent on its state prior to entering this
3745
* function. The following transitions are possible:
3746
*
3747
* - arc_mru -> arc_mru_ghost
3748
* - arc_mfu -> arc_mfu_ghost
3749
* - arc_mru_ghost -> arc_l2c_only
3750
* - arc_mru_ghost -> deleted
3751
* - arc_mfu_ghost -> arc_l2c_only
3752
* - arc_mfu_ghost -> deleted
3753
* - arc_uncached -> deleted
3754
*
3755
* Return total size of evicted data buffers for eviction progress tracking.
3756
* When evicting from ghost states return logical buffer size to make eviction
3757
* progress at the same (or at least comparable) rate as from non-ghost states.
3758
*
3759
* Return *real_evicted for actual ARC size reduction to wake up threads
3760
* waiting for it. For non-ghost states it includes size of evicted data
3761
* buffers (the headers are not freed there). For ghost states it includes
3762
* only the evicted headers size.
3763
*/
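/*
* Worked example of the two return values: evicting a 128 KiB header from
* a ghost state frees no data, so bytes_evicted is the logical 128 KiB
* (progress accounting) while *real_evicted is only the header memory
* actually released: HDR_FULL_SIZE if the header is destroyed, or
* HDR_FULL_SIZE - HDR_L2ONLY_SIZE if it is demoted to arc_l2c_only.
* Evicting from a non-ghost state counts arc_hdr_size(hdr) (the b_pabd
* bytes) in both values.
*/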
3764
static int64_t
3765
arc_evict_hdr(arc_buf_hdr_t *hdr, uint64_t *real_evicted)
3766
{
3767
arc_state_t *evicted_state, *state;
3768
int64_t bytes_evicted = 0;
3769
uint_t min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ?
3770
arc_min_prescient_prefetch_ms : arc_min_prefetch_ms;
3771
3772
ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
3773
ASSERT(HDR_HAS_L1HDR(hdr));
3774
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
3775
ASSERT0P(hdr->b_l1hdr.b_buf);
3776
ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt));
3777
3778
*real_evicted = 0;
3779
state = hdr->b_l1hdr.b_state;
3780
if (GHOST_STATE(state)) {
3781
3782
/*
* l2arc_write_buffers() relies on a header's L1 portion
* (i.e. its b_pabd field) during its write phase.
* Thus, we cannot push a header onto the arc_l2c_only
* state (removing its L1 piece) until the header is
* done being written to the l2arc.
*/
3789
if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) {
3790
ARCSTAT_BUMP(arcstat_evict_l2_skip);
3791
return (bytes_evicted);
3792
}
3793
3794
ARCSTAT_BUMP(arcstat_deleted);
3795
bytes_evicted += HDR_GET_LSIZE(hdr);
3796
3797
DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr);
3798
3799
if (HDR_HAS_L2HDR(hdr)) {
3800
ASSERT0P(hdr->b_l1hdr.b_pabd);
3801
ASSERT(!HDR_HAS_RABD(hdr));
3802
/*
3803
* This buffer is cached on the 2nd Level ARC;
3804
* don't destroy the header.
3805
*/
3806
arc_change_state(arc_l2c_only, hdr);
3807
/*
3808
* dropping from L1+L2 cached to L2-only,
3809
* realloc to remove the L1 header.
3810
*/
3811
(void) arc_hdr_realloc(hdr, hdr_full_cache,
3812
hdr_l2only_cache);
3813
*real_evicted += HDR_FULL_SIZE - HDR_L2ONLY_SIZE;
3814
} else {
3815
arc_change_state(arc_anon, hdr);
3816
arc_hdr_destroy(hdr);
3817
*real_evicted += HDR_FULL_SIZE;
3818
}
3819
return (bytes_evicted);
3820
}
3821
3822
ASSERT(state == arc_mru || state == arc_mfu || state == arc_uncached);
3823
evicted_state = (state == arc_uncached) ? arc_anon :
3824
((state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost);
3825
3826
/* prefetch buffers have a minimum lifespan */
3827
if ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) &&
3828
ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access <
3829
MSEC_TO_TICK(min_lifetime)) {
3830
ARCSTAT_BUMP(arcstat_evict_skip);
3831
return (bytes_evicted);
3832
}
3833
3834
if (HDR_HAS_L2HDR(hdr)) {
3835
ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr));
3836
} else {
3837
if (l2arc_write_eligible(hdr->b_spa, hdr)) {
3838
ARCSTAT_INCR(arcstat_evict_l2_eligible,
3839
HDR_GET_LSIZE(hdr));
3840
3841
switch (state->arcs_state) {
3842
case ARC_STATE_MRU:
3843
ARCSTAT_INCR(
3844
arcstat_evict_l2_eligible_mru,
3845
HDR_GET_LSIZE(hdr));
3846
break;
3847
case ARC_STATE_MFU:
3848
ARCSTAT_INCR(
3849
arcstat_evict_l2_eligible_mfu,
3850
HDR_GET_LSIZE(hdr));
3851
break;
3852
default:
3853
break;
3854
}
3855
} else {
3856
ARCSTAT_INCR(arcstat_evict_l2_ineligible,
3857
HDR_GET_LSIZE(hdr));
3858
}
3859
}
3860
3861
bytes_evicted += arc_hdr_size(hdr);
3862
*real_evicted += arc_hdr_size(hdr);
3863
3864
/*
3865
* If this hdr is being evicted and has a compressed buffer then we
3866
* discard it here before we change states. This ensures that the
3867
* accounting is updated correctly in arc_free_data_impl().
3868
*/
3869
if (hdr->b_l1hdr.b_pabd != NULL)
3870
arc_hdr_free_abd(hdr, B_FALSE);
3871
3872
if (HDR_HAS_RABD(hdr))
3873
arc_hdr_free_abd(hdr, B_TRUE);
3874
3875
arc_change_state(evicted_state, hdr);
3876
DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr);
3877
if (evicted_state == arc_anon) {
3878
arc_hdr_destroy(hdr);
3879
*real_evicted += HDR_FULL_SIZE;
3880
} else {
3881
ASSERT(HDR_IN_HASH_TABLE(hdr));
3882
}
3883
3884
return (bytes_evicted);
3885
}
3886
3887
static void
3888
arc_set_need_free(void)
3889
{
3890
ASSERT(MUTEX_HELD(&arc_evict_lock));
3891
int64_t remaining = arc_free_memory() - arc_sys_free / 2;
3892
arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters);
3893
if (aw == NULL) {
3894
arc_need_free = MAX(-remaining, 0);
3895
} else {
3896
arc_need_free =
3897
MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count));
3898
}
3899
}
3900
3901
static uint64_t
3902
arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
3903
uint64_t spa, uint64_t bytes)
3904
{
3905
multilist_sublist_t *mls;
3906
uint64_t bytes_evicted = 0, real_evicted = 0;
3907
arc_buf_hdr_t *hdr;
3908
kmutex_t *hash_lock;
3909
uint_t evict_count = zfs_arc_evict_batch_limit;
3910
3911
ASSERT3P(marker, !=, NULL);
3912
3913
mls = multilist_sublist_lock_idx(ml, idx);
3914
3915
for (hdr = multilist_sublist_prev(mls, marker); likely(hdr != NULL);
3916
hdr = multilist_sublist_prev(mls, marker)) {
3917
if ((evict_count == 0) || (bytes_evicted >= bytes))
3918
break;
3919
3920
/*
3921
* To keep our iteration location, move the marker
3922
* forward. Since we're not holding hdr's hash lock, we
3923
* must be very careful and not remove 'hdr' from the
3924
* sublist. Otherwise, other consumers might mistake the
3925
* 'hdr' as not being on a sublist when they call the
3926
* multilist_link_active() function (they all rely on
3927
* the hash lock protecting concurrent insertions and
3928
* removals). multilist_sublist_move_forward() was
3929
* specifically implemented to ensure this is the case
3930
* (only 'marker' will be removed and re-inserted).
3931
*/
3932
multilist_sublist_move_forward(mls, marker);
3933
3934
/*
* The only case where the b_spa field should ever be
* zero is for the marker headers inserted by
* arc_evict_state(). It's possible for multiple threads
* to be calling arc_evict_state() concurrently (e.g.
* dsl_pool_close() and zio_inject_fault()), so we must
* skip any markers we see from these other threads.
*/
3942
if (hdr->b_spa == 0)
3943
continue;
3944
3945
/* we're only interested in evicting buffers of a certain spa */
3946
if (spa != 0 && hdr->b_spa != spa) {
3947
ARCSTAT_BUMP(arcstat_evict_skip);
3948
continue;
3949
}
3950
3951
hash_lock = HDR_LOCK(hdr);
3952
3953
/*
3954
* We aren't calling this function from any code path
3955
* that would already be holding a hash lock, so we're
3956
* asserting on this assumption to be defensive in case
3957
* this ever changes. Without this check, it would be
3958
* possible to incorrectly increment arcstat_mutex_miss
3959
* below (e.g. if the code changed such that we called
3960
* this function with a hash lock held).
3961
*/
3962
ASSERT(!MUTEX_HELD(hash_lock));
3963
3964
if (mutex_tryenter(hash_lock)) {
3965
uint64_t revicted;
3966
uint64_t evicted = arc_evict_hdr(hdr, &revicted);
3967
mutex_exit(hash_lock);
3968
3969
bytes_evicted += evicted;
3970
real_evicted += revicted;
3971
3972
/*
3973
* If evicted is zero, arc_evict_hdr() must have
3974
* decided to skip this header, don't increment
3975
* evict_count in this case.
3976
*/
3977
if (evicted != 0)
3978
evict_count--;
3979
3980
} else {
3981
ARCSTAT_BUMP(arcstat_mutex_miss);
3982
}
3983
}
3984
3985
multilist_sublist_unlock(mls);
3986
3987
/*
3988
* Increment the count of evicted bytes, and wake up any threads that
3989
* are waiting for the count to reach this value. Since the list is
3990
* ordered by ascending aew_count, we pop off the beginning of the
3991
* list until we reach the end, or a waiter that's past the current
3992
* "count". Doing this outside the loop reduces the number of times
3993
* we need to acquire the global arc_evict_lock.
3994
*
3995
* Only wake when there's sufficient free memory in the system
3996
* (specifically, arc_sys_free/2, which by default is a bit more than
3997
* 1/64th of RAM). See the comments in arc_wait_for_eviction().
3998
*/
3999
mutex_enter(&arc_evict_lock);
4000
arc_evict_count += real_evicted;
4001
4002
if (arc_free_memory() > arc_sys_free / 2) {
4003
arc_evict_waiter_t *aw;
4004
while ((aw = list_head(&arc_evict_waiters)) != NULL &&
4005
aw->aew_count <= arc_evict_count) {
4006
list_remove(&arc_evict_waiters, aw);
4007
cv_broadcast(&aw->aew_cv);
4008
}
4009
}
4010
arc_set_need_free();
4011
mutex_exit(&arc_evict_lock);
4012
4013
/*
4014
* If the ARC size is reduced from arc_c_max to arc_c_min (especially
4015
* if the average cached block is small), eviction can be on-CPU for
4016
* many seconds. To ensure that other threads that may be bound to
4017
* this CPU are able to make progress, make a voluntary preemption
4018
* call here.
4019
*/
4020
kpreempt(KPREEMPT_SYNC);
4021
4022
return (bytes_evicted);
4023
}
4024
4025
static arc_buf_hdr_t *
4026
arc_state_alloc_marker(void)
4027
{
4028
arc_buf_hdr_t *marker = kmem_cache_alloc(hdr_full_cache, KM_SLEEP);
4029
4030
/*
4031
* A b_spa of 0 is used to indicate that this header is
4032
* a marker. This fact is used in arc_evict_state_impl().
4033
*/
4034
marker->b_spa = 0;
4035
4036
return (marker);
4037
}
4038
4039
static void
4040
arc_state_free_marker(arc_buf_hdr_t *marker)
4041
{
4042
kmem_cache_free(hdr_full_cache, marker);
4043
}
4044
4045
/*
4046
* Allocate an array of buffer headers used as placeholders during arc state
4047
* eviction.
4048
*/
4049
static arc_buf_hdr_t **
4050
arc_state_alloc_markers(int count)
4051
{
4052
arc_buf_hdr_t **markers;
4053
4054
markers = kmem_zalloc(sizeof (*markers) * count, KM_SLEEP);
4055
for (int i = 0; i < count; i++)
4056
markers[i] = arc_state_alloc_marker();
4057
return (markers);
4058
}
4059
4060
static void
4061
arc_state_free_markers(arc_buf_hdr_t **markers, int count)
4062
{
4063
for (int i = 0; i < count; i++)
4064
arc_state_free_marker(markers[i]);
4065
kmem_free(markers, sizeof (*markers) * count);
4066
}
4067
4068
typedef struct evict_arg {
4069
taskq_ent_t eva_tqent;
4070
multilist_t *eva_ml;
4071
arc_buf_hdr_t *eva_marker;
4072
int eva_idx;
4073
uint64_t eva_spa;
4074
uint64_t eva_bytes;
4075
uint64_t eva_evicted;
4076
} evict_arg_t;
4077
4078
static void
4079
arc_evict_task(void *arg)
4080
{
4081
evict_arg_t *eva = arg;
4082
eva->eva_evicted = arc_evict_state_impl(eva->eva_ml, eva->eva_idx,
4083
eva->eva_marker, eva->eva_spa, eva->eva_bytes);
4084
}
4085
4086
static void
4087
arc_evict_thread_init(void)
4088
{
4089
if (zfs_arc_evict_threads == 0) {
4090
/*
* Compute the number of threads we want to use for eviction.
*
* Normally, it's log2(ncpus) + ncpus/32, which gets us to the
* default max of 16 threads at ~256 CPUs.
*
* However, that formula goes to two threads at 4 CPUs, which
* is still rather too low to be really useful, so we just go
* with 1 thread at fewer than 6 cores.
*/
4100
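/*
* Worked examples of the formula above, assuming highbit64(n) is the
* 1-based index of the highest set bit (so highbit64(n) - 1 equals
* floor(log2(n)) for powers of two):
*
*	max_ncpus =   4  ->  1 thread (fewer than 6 cores special case)
*	max_ncpus =   8  ->  (4 - 1) +   8/32 =  3 threads
*	max_ncpus =  64  ->  (7 - 1) +  64/32 =  8 threads
*	max_ncpus = 256  ->  (9 - 1) + 256/32 = 16 threads
*/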
if (max_ncpus < 6)
4101
zfs_arc_evict_threads = 1;
4102
else
4103
zfs_arc_evict_threads =
4104
(highbit64(max_ncpus) - 1) + max_ncpus / 32;
4105
} else if (zfs_arc_evict_threads > max_ncpus)
4106
zfs_arc_evict_threads = max_ncpus;
4107
4108
if (zfs_arc_evict_threads > 1) {
4109
arc_evict_taskq = taskq_create("arc_evict",
4110
zfs_arc_evict_threads, defclsyspri, 0, INT_MAX,
4111
TASKQ_PREPOPULATE);
4112
arc_evict_arg = kmem_zalloc(
4113
sizeof (evict_arg_t) * zfs_arc_evict_threads, KM_SLEEP);
4114
}
4115
}
4116
4117
/*
* The minimum number of bytes we can evict at once is a block size.
* So, SPA_MAXBLOCKSIZE is a reasonable minimum value per eviction task.
* We use this value to compute a scaling factor for the eviction tasks.
*/
4122
#define MIN_EVICT_SIZE (SPA_MAXBLOCKSIZE)
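/*
* Worked example of how arc_evict_state() below splits a request across
* eviction tasks, assuming SPA_MAXBLOCKSIZE of 16 MB and
* zfs_arc_evict_threads of 8:
*
*	left = 256 MB:	256 MB > 8 * 16 MB, so 8 tasks of
*			DIV_ROUND_UP(256 MB, 8) = 32 MB each.
*	left =  48 MB:	48 MB <= 8 * 16 MB, so only
*			DIV_ROUND_UP(48 MB, 16 MB) = 3 tasks of 16 MB each.
*/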
4123
4124
/*
4125
* Evict buffers from the given arc state, until we've removed the
4126
* specified number of bytes. Move the removed buffers to the
4127
* appropriate evict state.
4128
*
4129
* This function makes a "best effort". It skips over any buffers
4130
* it can't get a hash_lock on, and so, may not catch all candidates.
4131
* It may also return without evicting as much space as requested.
4132
*
4133
* If bytes is specified using the special value ARC_EVICT_ALL, this
4134
* will evict all available (i.e. unlocked and evictable) buffers from
4135
* the given arc state; which is used by arc_flush().
4136
*/
4137
static uint64_t
4138
arc_evict_state(arc_state_t *state, arc_buf_contents_t type, uint64_t spa,
4139
uint64_t bytes)
4140
{
4141
uint64_t total_evicted = 0;
4142
multilist_t *ml = &state->arcs_list[type];
4143
int num_sublists;
4144
arc_buf_hdr_t **markers;
4145
evict_arg_t *eva = NULL;
4146
4147
num_sublists = multilist_get_num_sublists(ml);
4148
4149
boolean_t use_evcttq = zfs_arc_evict_threads > 1;
4150
4151
/*
4152
* If we've tried to evict from each sublist, made some
4153
* progress, but still have not hit the target number of bytes
4154
* to evict, we want to keep trying. The markers allow us to
4155
* pick up where we left off for each individual sublist, rather
4156
* than starting from the tail each time.
4157
*/
4158
if (zthr_iscurthread(arc_evict_zthr)) {
4159
markers = arc_state_evict_markers;
4160
ASSERT3S(num_sublists, <=, arc_state_evict_marker_count);
4161
} else {
4162
markers = arc_state_alloc_markers(num_sublists);
4163
}
4164
for (int i = 0; i < num_sublists; i++) {
4165
multilist_sublist_t *mls;
4166
4167
mls = multilist_sublist_lock_idx(ml, i);
4168
multilist_sublist_insert_tail(mls, markers[i]);
4169
multilist_sublist_unlock(mls);
4170
}
4171
4172
if (use_evcttq) {
4173
if (zthr_iscurthread(arc_evict_zthr))
4174
eva = arc_evict_arg;
4175
else
4176
eva = kmem_alloc(sizeof (evict_arg_t) *
4177
zfs_arc_evict_threads, KM_NOSLEEP);
4178
if (eva) {
4179
for (int i = 0; i < zfs_arc_evict_threads; i++) {
4180
taskq_init_ent(&eva[i].eva_tqent);
4181
eva[i].eva_ml = ml;
4182
eva[i].eva_spa = spa;
4183
}
4184
} else {
4185
/*
* Fall back to regular single-threaded eviction if it is
* not possible to allocate memory for the taskq entries.
*/
4189
use_evcttq = B_FALSE;
4190
}
4191
}
4192
4193
/*
4194
* Start eviction using a randomly selected sublist, this is to try and
4195
* evenly balance eviction across all sublists. Always starting at the
4196
* same sublist (e.g. index 0) would cause evictions to favor certain
4197
* sublists over others.
4198
*/
4199
uint64_t scan_evicted = 0;
4200
int sublists_left = num_sublists;
4201
int sublist_idx = multilist_get_random_index(ml);
4202
4203
/*
4204
* While we haven't hit our target number of bytes to evict, or
4205
* we're evicting all available buffers.
4206
*/
4207
while (total_evicted < bytes) {
4208
uint64_t evict = MIN_EVICT_SIZE;
4209
uint_t ntasks = zfs_arc_evict_threads;
4210
4211
if (use_evcttq) {
4212
if (sublists_left < ntasks)
4213
ntasks = sublists_left;
4214
4215
if (ntasks < 2)
4216
use_evcttq = B_FALSE;
4217
}
4218
4219
if (use_evcttq) {
4220
uint64_t left = bytes - total_evicted;
4221
4222
if (bytes == ARC_EVICT_ALL) {
4223
evict = bytes;
4224
} else if (left > ntasks * MIN_EVICT_SIZE) {
4225
evict = DIV_ROUND_UP(left, ntasks);
4226
} else {
4227
ntasks = DIV_ROUND_UP(left, MIN_EVICT_SIZE);
4228
if (ntasks == 1)
4229
use_evcttq = B_FALSE;
4230
}
4231
}
4232
4233
for (int i = 0; sublists_left > 0; i++, sublist_idx++,
4234
sublists_left--) {
4235
uint64_t bytes_remaining;
4236
uint64_t bytes_evicted;
4237
4238
/* we've reached the end, wrap to the beginning */
4239
if (sublist_idx >= num_sublists)
4240
sublist_idx = 0;
4241
4242
if (use_evcttq) {
4243
if (i == ntasks)
4244
break;
4245
4246
eva[i].eva_marker = markers[sublist_idx];
4247
eva[i].eva_idx = sublist_idx;
4248
eva[i].eva_bytes = evict;
4249
4250
taskq_dispatch_ent(arc_evict_taskq,
4251
arc_evict_task, &eva[i], 0,
4252
&eva[i].eva_tqent);
4253
4254
continue;
4255
}
4256
4257
if (total_evicted < bytes)
4258
bytes_remaining = bytes - total_evicted;
4259
else
4260
break;
4261
4262
bytes_evicted = arc_evict_state_impl(ml, sublist_idx,
4263
markers[sublist_idx], spa, bytes_remaining);
4264
4265
scan_evicted += bytes_evicted;
4266
total_evicted += bytes_evicted;
4267
}
4268
4269
if (use_evcttq) {
4270
taskq_wait(arc_evict_taskq);
4271
4272
for (int i = 0; i < ntasks; i++) {
4273
scan_evicted += eva[i].eva_evicted;
4274
total_evicted += eva[i].eva_evicted;
4275
}
4276
}
4277
4278
/*
4279
* If we scanned all sublists and didn't evict anything, we
4280
* have no reason to believe we'll evict more during another
4281
* scan, so break the loop.
4282
*/
4283
if (scan_evicted == 0 && sublists_left == 0) {
4284
/* This isn't possible, let's make that obvious */
4285
ASSERT3S(bytes, !=, 0);
4286
4287
/*
4288
* When bytes is ARC_EVICT_ALL, the only way to
4289
* break the loop is when scan_evicted is zero.
4290
* In that case, we actually have evicted enough,
4291
* so we don't want to increment the kstat.
4292
*/
4293
if (bytes != ARC_EVICT_ALL) {
4294
ASSERT3S(total_evicted, <, bytes);
4295
ARCSTAT_BUMP(arcstat_evict_not_enough);
4296
}
4297
4298
break;
4299
}
4300
4301
/*
4302
* If we scanned all sublists but still have more to do,
4303
* reset the counts so we can go around again.
4304
*/
4305
if (sublists_left == 0) {
4306
sublists_left = num_sublists;
4307
sublist_idx = multilist_get_random_index(ml);
4308
scan_evicted = 0;
4309
4310
/*
4311
* Since we're about to reconsider all sublists,
4312
* re-enable use of the evict threads if available.
4313
*/
4314
use_evcttq = (zfs_arc_evict_threads > 1 && eva != NULL);
4315
}
4316
}
4317
4318
if (eva != NULL && eva != arc_evict_arg)
4319
kmem_free(eva, sizeof (evict_arg_t) * zfs_arc_evict_threads);
4320
4321
for (int i = 0; i < num_sublists; i++) {
4322
multilist_sublist_t *mls = multilist_sublist_lock_idx(ml, i);
4323
multilist_sublist_remove(mls, markers[i]);
4324
multilist_sublist_unlock(mls);
4325
}
4326
4327
if (markers != arc_state_evict_markers)
4328
arc_state_free_markers(markers, num_sublists);
4329
4330
return (total_evicted);
4331
}
4332
4333
/*
4334
* Flush all "evictable" data of the given type from the arc state
4335
* specified. This will not evict any "active" buffers (i.e. referenced).
4336
*
4337
* When 'retry' is set to B_FALSE, the function will make a single pass
4338
* over the state and evict any buffers that it can. Since it doesn't
4339
* continually retry the eviction, it might end up leaving some buffers
4340
* in the ARC due to lock misses.
4341
*
4342
* When 'retry' is set to B_TRUE, the function will continually retry the
4343
* eviction until *all* evictable buffers have been removed from the
4344
* state. As a result, if concurrent insertions into the state are
4345
* allowed (e.g. if the ARC isn't shutting down), this function might
4346
* wind up in an infinite loop, continually trying to evict buffers.
4347
*/
4348
static uint64_t
4349
arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
4350
boolean_t retry)
4351
{
4352
uint64_t evicted = 0;
4353
4354
while (zfs_refcount_count(&state->arcs_esize[type]) != 0) {
4355
evicted += arc_evict_state(state, type, spa, ARC_EVICT_ALL);
4356
4357
if (!retry)
4358
break;
4359
}
4360
4361
return (evicted);
4362
}
4363
4364
/*
* Evict the specified number of bytes from the state specified. This
* function prevents us from trying to evict more from a state's list
* than is "evictable", and skips evicting altogether when passed a
* negative value for "bytes". In contrast, arc_evict_state() will
* evict everything it can, when passed a negative value for "bytes".
*/
4371
static uint64_t
4372
arc_evict_impl(arc_state_t *state, arc_buf_contents_t type, int64_t bytes)
4373
{
4374
uint64_t delta;
4375
4376
if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) {
4377
delta = MIN(zfs_refcount_count(&state->arcs_esize[type]),
4378
bytes);
4379
return (arc_evict_state(state, type, 0, delta));
4380
}
4381
4382
return (0);
4383
}
4384
4385
/*
4386
* Adjust specified fraction, taking into account initial ghost state(s) size,
4387
* ghost hit bytes towards increasing the fraction, ghost hit bytes towards
4388
* decreasing it, plus a balance factor, controlling the decrease rate, used
4389
* to balance metadata vs data.
4390
*/
4391
static uint64_t
4392
arc_evict_adj(uint64_t frac, uint64_t total, uint64_t up, uint64_t down,
4393
uint_t balance)
4394
{
4395
if (total < 32 || up + down == 0)
4396
return (frac);
4397
4398
/*
* We should not have more ghost hits than ghost size, but they may
* get close. To avoid overflows below, up/down should not be bigger
* than 1/5 of total. But to limit the maximum adjustment speed we
* restrict it some more.
*/
4404
if (up + down >= total / 16) {
4405
uint64_t scale = (up + down) / (total / 32);
4406
up /= scale;
4407
down /= scale;
4408
}
4409
4410
/* Get maximal dynamic range by choosing optimal shifts. */
4411
int s = highbit64(total);
4412
s = MIN(64 - s, 32);
4413
4414
ASSERT3U(frac, <=, 1ULL << 32);
4415
uint64_t ofrac = (1ULL << 32) - frac;
4416
4417
if (frac >= 4 * ofrac)
4418
up /= frac / (2 * ofrac + 1);
4419
up = (up << s) / (total >> (32 - s));
4420
if (ofrac >= 4 * frac)
4421
down /= ofrac / (2 * frac + 1);
4422
down = (down << s) / (total >> (32 - s));
4423
down = down * 100 / balance;
4424
4425
ASSERT3U(up, <=, (1ULL << 32) - frac);
4426
ASSERT3U(down, <=, frac);
4427
return (frac + up - down);
4428
}
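/*
* Roughly, ignoring the rescaling and damping clamps above, the update is
*
*	frac' = frac + up * 2^32 / total - (down * 2^32 / total) * 100 / balance
*
* For example, with total = 1 GB of ghost size, frac = 1ULL << 31 (50%),
* up = 16 MB of ghost hits, down = 0 and balance = 100, frac grows by
* about 2^32 / 64, i.e. from ~50% to ~51.6% of the 2^32 scale.
*/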
4429
4430
/*
* Calculate (x * multiplier / divisor) without unnecessary overflows.
*/
4433
static uint64_t
4434
arc_mf(uint64_t x, uint64_t multiplier, uint64_t divisor)
4435
{
4436
uint64_t q = (x / divisor);
4437
uint64_t r = (x % divisor);
4438
4439
return ((q * multiplier) + ((r * multiplier) / divisor));
4440
}
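/*
* Splitting x into quotient and remainder keeps the intermediate products
* small, so x * multiplier never has to be formed directly. For example,
* arc_mf(1000, 7, 3) computes 333 * 7 + (1 * 7) / 3 = 2331 + 2 = 2333,
* matching 1000 * 7 / 3 rounded down, without risking 64-bit overflow for
* byte-count sized inputs.
*/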
4441
4442
/*
4443
* Evict buffers from the cache, such that arcstat_size is capped by arc_c.
4444
*/
4445
static uint64_t
4446
arc_evict(void)
4447
{
4448
uint64_t bytes, total_evicted = 0;
4449
int64_t e, mrud, mrum, mfud, mfum, w;
4450
static uint64_t ogrd, ogrm, ogfd, ogfm;
4451
static uint64_t gsrd, gsrm, gsfd, gsfm;
4452
uint64_t ngrd, ngrm, ngfd, ngfm;
4453
4454
/* Get current size of ARC states we can evict from. */
4455
mrud = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_DATA]) +
4456
zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]);
4457
mrum = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) +
4458
zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]);
4459
mfud = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_DATA]);
4460
mfum = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]);
4461
uint64_t d = mrud + mfud;
4462
uint64_t m = mrum + mfum;
4463
uint64_t t = d + m;
4464
4465
/* Get ARC ghost hits since last eviction. */
4466
ngrd = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]);
4467
uint64_t grd = ngrd - ogrd;
4468
ogrd = ngrd;
4469
ngrm = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]);
4470
uint64_t grm = ngrm - ogrm;
4471
ogrm = ngrm;
4472
ngfd = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]);
4473
uint64_t gfd = ngfd - ogfd;
4474
ogfd = ngfd;
4475
ngfm = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]);
4476
uint64_t gfm = ngfm - ogfm;
4477
ogfm = ngfm;
4478
4479
/* Adjust ARC states balance based on ghost hits. */
4480
arc_meta = arc_evict_adj(arc_meta, gsrd + gsrm + gsfd + gsfm,
4481
grm + gfm, grd + gfd, zfs_arc_meta_balance);
4482
arc_pd = arc_evict_adj(arc_pd, gsrd + gsfd, grd, gfd, 100);
4483
arc_pm = arc_evict_adj(arc_pm, gsrm + gsfm, grm, gfm, 100);
4484
4485
uint64_t asize = aggsum_value(&arc_sums.arcstat_size);
4486
uint64_t ac = arc_c;
4487
int64_t wt = t - (asize - ac);
4488
4489
/*
4490
* Try to reduce pinned dnodes if more than 3/4 of wanted metadata
4491
* target is not evictable or if they go over arc_dnode_limit.
4492
*/
4493
int64_t prune = 0;
4494
int64_t dn = aggsum_value(&arc_sums.arcstat_dnode_size);
4495
int64_t nem = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA])
4496
+ zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA])
4497
- zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA])
4498
- zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
4499
w = wt * (int64_t)(arc_meta >> 16) >> 16;
4500
if (nem > w * 3 / 4) {
4501
prune = dn / sizeof (dnode_t) *
4502
zfs_arc_dnode_reduce_percent / 100;
4503
if (nem < w && w > 4)
4504
prune = arc_mf(prune, nem - w * 3 / 4, w / 4);
4505
}
4506
if (dn > arc_dnode_limit) {
4507
prune = MAX(prune, (dn - arc_dnode_limit) / sizeof (dnode_t) *
4508
zfs_arc_dnode_reduce_percent / 100);
4509
}
4510
if (prune > 0)
4511
arc_prune_async(prune);
4512
4513
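/*
* The weight (w) computations in arc_evict() treat arc_meta, arc_pd and
* arc_pm as 32-bit fixed-point fractions (full scale 1ULL << 32, see
* arc_evict_adj()), so the shift expressions evaluate to, approximately:
*
*	metadata target		w = wt * arc_meta / 2^32
*	MRU metadata target	w = wt * arc_meta * arc_pm / 2^64
*	MRU data target		w = wt * arc_pd / 2^32
*
* For example, with wt = 8 GB, arc_meta at 25% and arc_pm at 50%, the MRU
* metadata target is 8 GB * 0.25 * 0.5 = 1 GB.
*/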
/* Evict MRU metadata. */
4514
w = wt * (int64_t)(arc_meta * arc_pm >> 48) >> 16;
4515
e = MIN((int64_t)(asize - ac), (int64_t)(mrum - w));
4516
bytes = arc_evict_impl(arc_mru, ARC_BUFC_METADATA, e);
4517
total_evicted += bytes;
4518
mrum -= bytes;
4519
asize -= bytes;
4520
4521
/* Evict MFU metadata. */
4522
w = wt * (int64_t)(arc_meta >> 16) >> 16;
4523
e = MIN((int64_t)(asize - ac), (int64_t)(m - bytes - w));
4524
bytes = arc_evict_impl(arc_mfu, ARC_BUFC_METADATA, e);
4525
total_evicted += bytes;
4526
mfum -= bytes;
4527
asize -= bytes;
4528
4529
/* Evict MRU data. */
4530
wt -= m - total_evicted;
4531
w = wt * (int64_t)(arc_pd >> 16) >> 16;
4532
e = MIN((int64_t)(asize - ac), (int64_t)(mrud - w));
4533
bytes = arc_evict_impl(arc_mru, ARC_BUFC_DATA, e);
4534
total_evicted += bytes;
4535
mrud -= bytes;
4536
asize -= bytes;
4537
4538
/* Evict MFU data. */
4539
e = asize - ac;
4540
bytes = arc_evict_impl(arc_mfu, ARC_BUFC_DATA, e);
4541
mfud -= bytes;
4542
total_evicted += bytes;
4543
4544
/*
* Evict ghost lists
*
* The size of each state's ghost list represents how much that state
* may grow by shrinking the other states. Were it to need to shrink
* the other states to zero (which is unlikely), its ghost size would
* equal the sum of the other three state sizes. But an excessive
* ghost size may result in false ghost hits (too far back) that may
* never become real cache hits if several states are competing. So
* we choose the somewhat arbitrary point of 1/2 of the other state
* sizes.
*/
4555
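/*
* For example, with mrud = 2 GB, mrum = 1 GB, mfud = 4 GB and
* mfum = 1 GB, the MRU data ghost target is
* gsrd = (1 + 4 + 1) / 2 = 3 GB, so anything beyond 3 GB of MRU data
* ghost headers is trimmed; the other three ghost targets are computed
* the same way from the remaining states.
*/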
gsrd = (mrum + mfud + mfum) / 2;
4556
e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]) -
4557
gsrd;
4558
(void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_DATA, e);
4559
4560
gsrm = (mrud + mfud + mfum) / 2;
4561
e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]) -
4562
gsrm;
4563
(void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_METADATA, e);
4564
4565
gsfd = (mrud + mrum + mfum) / 2;
4566
e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]) -
4567
gsfd;
4568
(void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_DATA, e);
4569
4570
gsfm = (mrud + mrum + mfud) / 2;
4571
e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]) -
4572
gsfm;
4573
(void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_METADATA, e);
4574
4575
return (total_evicted);
4576
}
4577
4578
static void
4579
arc_flush_impl(uint64_t guid, boolean_t retry)
4580
{
4581
ASSERT(!retry || guid == 0);
4582
4583
(void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
4584
(void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);
4585
4586
(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
4587
(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);
4588
4589
(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
4590
(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);
4591
4592
(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
4593
(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);
4594
4595
(void) arc_flush_state(arc_uncached, guid, ARC_BUFC_DATA, retry);
4596
(void) arc_flush_state(arc_uncached, guid, ARC_BUFC_METADATA, retry);
4597
}
4598
4599
void
4600
arc_flush(spa_t *spa, boolean_t retry)
4601
{
4602
/*
4603
* If retry is B_TRUE, a spa must not be specified since we have
4604
* no good way to determine if all of a spa's buffers have been
4605
* evicted from an arc state.
4606
*/
4607
ASSERT(!retry || spa == NULL);
4608
4609
arc_flush_impl(spa != NULL ? spa_load_guid(spa) : 0, retry);
4610
}
4611
4612
static arc_async_flush_t *
4613
arc_async_flush_add(uint64_t spa_guid, uint_t level)
4614
{
4615
arc_async_flush_t *af = kmem_alloc(sizeof (*af), KM_SLEEP);
4616
af->af_spa_guid = spa_guid;
4617
af->af_cache_level = level;
4618
taskq_init_ent(&af->af_tqent);
4619
list_link_init(&af->af_node);
4620
4621
mutex_enter(&arc_async_flush_lock);
4622
list_insert_tail(&arc_async_flush_list, af);
4623
mutex_exit(&arc_async_flush_lock);
4624
4625
return (af);
4626
}
4627
4628
static void
4629
arc_async_flush_remove(uint64_t spa_guid, uint_t level)
4630
{
4631
mutex_enter(&arc_async_flush_lock);
4632
for (arc_async_flush_t *af = list_head(&arc_async_flush_list);
4633
af != NULL; af = list_next(&arc_async_flush_list, af)) {
4634
if (af->af_spa_guid == spa_guid &&
4635
af->af_cache_level == level) {
4636
list_remove(&arc_async_flush_list, af);
4637
kmem_free(af, sizeof (*af));
4638
break;
4639
}
4640
}
4641
mutex_exit(&arc_async_flush_lock);
4642
}
4643
4644
static void
4645
arc_flush_task(void *arg)
4646
{
4647
arc_async_flush_t *af = arg;
4648
hrtime_t start_time = gethrtime();
4649
uint64_t spa_guid = af->af_spa_guid;
4650
4651
arc_flush_impl(spa_guid, B_FALSE);
4652
arc_async_flush_remove(spa_guid, af->af_cache_level);
4653
4654
uint64_t elapsed = NSEC2MSEC(gethrtime() - start_time);
4655
if (elapsed > 0) {
4656
zfs_dbgmsg("spa %llu arc flushed in %llu ms",
4657
(u_longlong_t)spa_guid, (u_longlong_t)elapsed);
4658
}
4659
}
4660
4661
/*
4662
* ARC buffers use the spa's load guid and can continue to exist after
4663
* the spa_t is gone (exported). The blocks are orphaned since each
4664
* spa import has a different load guid.
4665
*
4666
* It's OK if the spa is re-imported while this asynchronous flush is
4667
* still in progress. The new spa_load_guid will be different.
4668
*
4669
* Also, arc_fini will wait for any arc_flush_task to finish.
4670
*/
4671
void
4672
arc_flush_async(spa_t *spa)
4673
{
4674
uint64_t spa_guid = spa_load_guid(spa);
4675
arc_async_flush_t *af = arc_async_flush_add(spa_guid, 1);
4676
4677
taskq_dispatch_ent(arc_flush_taskq, arc_flush_task,
4678
af, TQ_SLEEP, &af->af_tqent);
4679
}
4680
4681
/*
4682
* Check if a guid is still in-use as part of an async teardown task
4683
*/
4684
boolean_t
4685
arc_async_flush_guid_inuse(uint64_t spa_guid)
4686
{
4687
mutex_enter(&arc_async_flush_lock);
4688
for (arc_async_flush_t *af = list_head(&arc_async_flush_list);
4689
af != NULL; af = list_next(&arc_async_flush_list, af)) {
4690
if (af->af_spa_guid == spa_guid) {
4691
mutex_exit(&arc_async_flush_lock);
4692
return (B_TRUE);
4693
}
4694
}
4695
mutex_exit(&arc_async_flush_lock);
4696
return (B_FALSE);
4697
}
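/*
* Illustrative sketch of a possible caller (spa is a placeholder): a pool
* teardown path might hand the ARC scan to the flush taskq and later poll
* for completion by load guid:
*
*	uint64_t guid = spa_load_guid(spa);
*	arc_flush_async(spa);
*	...
*	if (arc_async_flush_guid_inuse(guid)) {
*		... the asynchronous flush is still running ...
*	}
*/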
4698
4699
uint64_t
4700
arc_reduce_target_size(uint64_t to_free)
4701
{
4702
/*
4703
* Get the actual arc size. Even if we don't need it, this updates
4704
* the aggsum lower bound estimate for arc_is_overflowing().
4705
*/
4706
uint64_t asize = aggsum_value(&arc_sums.arcstat_size);
4707
4708
/*
* All callers want the ARC to actually evict (at least) this much
* memory. Therefore we reduce from the lower of the current size and
* the target size. This way, even if arc_c is much higher than
* arc_size (as can be the case after many calls to arc_freed()), we
* will immediately have arc_c < arc_size and therefore the
* arc_evict_zthr will evict.
*/
4716
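/*
* Worked example: with arc_c = 10 GB, arc_c_min = 1 GB, asize = 4 GB and
* a 2 GB request, c is clamped to MAX(asize, arc_c_min) = 4 GB, to_free
* stays MIN(2 GB, 4 GB - 1 GB) = 2 GB, and arc_c becomes 2 GB. Since
* asize (4 GB) is now above arc_c, the eviction thread is woken below and
* the function returns 2 GB.
*/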
uint64_t c = arc_c;
4717
if (c > arc_c_min) {
4718
c = MIN(c, MAX(asize, arc_c_min));
4719
to_free = MIN(to_free, c - arc_c_min);
4720
arc_c = c - to_free;
4721
} else {
4722
to_free = 0;
4723
}
4724
4725
/*
4726
* Since dbuf cache size is a fraction of target ARC size, we should
4727
* notify dbuf about the reduction, which might be significant,
4728
* especially if current ARC size was much smaller than the target.
4729
*/
4730
dbuf_cache_reduce_target_size();
4731
4732
/*
4733
* Whether or not we reduced the target size, request eviction if the
4734
* current size is over it now, since caller obviously wants some RAM.
4735
*/
4736
if (asize > arc_c) {
4737
/* See comment in arc_evict_cb_check() on why lock+flag */
4738
mutex_enter(&arc_evict_lock);
4739
arc_evict_needed = B_TRUE;
4740
mutex_exit(&arc_evict_lock);
4741
zthr_wakeup(arc_evict_zthr);
4742
}
4743
4744
return (to_free);
4745
}
4746
4747
/*
4748
* Determine if the system is under memory pressure and is asking
4749
* to reclaim memory. A return value of B_TRUE indicates that the system
4750
* is under memory pressure and that the arc should adjust accordingly.
4751
*/
4752
boolean_t
4753
arc_reclaim_needed(void)
4754
{
4755
return (arc_available_memory() < 0);
4756
}
4757
4758
void
4759
arc_kmem_reap_soon(void)
4760
{
4761
size_t i;
4762
kmem_cache_t *prev_cache = NULL;
4763
kmem_cache_t *prev_data_cache = NULL;
4764
4765
#ifdef _KERNEL
4766
#if defined(_ILP32)
4767
/*
4768
* Reclaim unused memory from all kmem caches.
4769
*/
4770
kmem_reap();
4771
#endif
4772
#endif
4773
4774
for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
4775
#if defined(_ILP32)
4776
/* reach upper limit of cache size on 32-bit */
4777
if (zio_buf_cache[i] == NULL)
4778
break;
4779
#endif
4780
if (zio_buf_cache[i] != prev_cache) {
4781
prev_cache = zio_buf_cache[i];
4782
kmem_cache_reap_now(zio_buf_cache[i]);
4783
}
4784
if (zio_data_buf_cache[i] != prev_data_cache) {
4785
prev_data_cache = zio_data_buf_cache[i];
4786
kmem_cache_reap_now(zio_data_buf_cache[i]);
4787
}
4788
}
4789
kmem_cache_reap_now(buf_cache);
4790
kmem_cache_reap_now(hdr_full_cache);
4791
kmem_cache_reap_now(hdr_l2only_cache);
4792
kmem_cache_reap_now(zfs_btree_leaf_cache);
4793
abd_cache_reap_now();
4794
}
4795
4796
static boolean_t
4797
arc_evict_cb_check(void *arg, zthr_t *zthr)
4798
{
4799
(void) arg, (void) zthr;
4800
4801
#ifdef ZFS_DEBUG
4802
/*
4803
* This is necessary in order to keep the kstat information
4804
* up to date for tools that display kstat data such as the
4805
* mdb ::arc dcmd and the Linux crash utility. These tools
4806
* typically do not call kstat's update function, but simply
4807
* dump out stats from the most recent update. Without
4808
* this call, these commands may show stale stats for the
4809
* anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
4810
* with this call, the data might be out of date if the
4811
* evict thread hasn't been woken recently; but that should
4812
* suffice. The arc_state_t structures can be queried
4813
* directly if more accurate information is needed.
4814
*/
4815
if (arc_ksp != NULL)
4816
arc_ksp->ks_update(arc_ksp, KSTAT_READ);
4817
#endif
4818
4819
/*
4820
* We have to rely on arc_wait_for_eviction() to tell us when to
4821
* evict, rather than checking if we are overflowing here, so that we
4822
* are sure to not leave arc_wait_for_eviction() waiting on aew_cv.
4823
* If we have become "not overflowing" since arc_wait_for_eviction()
4824
* checked, we need to wake it up. We could broadcast the CV here,
4825
* but arc_wait_for_eviction() may have not yet gone to sleep. We
4826
* would need to use a mutex to ensure that this function doesn't
4827
* broadcast until arc_wait_for_eviction() has gone to sleep (e.g.
4828
* the arc_evict_lock). However, the lock ordering of such a lock
4829
* would necessarily be incorrect with respect to the zthr_lock,
4830
* which is held before this function is called, and is held by
4831
* arc_wait_for_eviction() when it calls zthr_wakeup().
4832
*/
4833
if (arc_evict_needed)
4834
return (B_TRUE);
4835
4836
/*
4837
* If we have buffers in uncached state, evict them periodically.
4838
*/
4839
return ((zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_DATA]) +
4840
zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]) &&
4841
ddi_get_lbolt() - arc_last_uncached_flush >
4842
MSEC_TO_TICK(arc_min_prefetch_ms / 2)));
4843
}
4844
4845
/*
4846
* Keep arc_size under arc_c by running arc_evict which evicts data
4847
* from the ARC.
4848
*/
4849
static void
4850
arc_evict_cb(void *arg, zthr_t *zthr)
4851
{
4852
(void) arg;
4853
4854
uint64_t evicted = 0;
4855
fstrans_cookie_t cookie = spl_fstrans_mark();
4856
4857
/* Always try to evict from uncached state. */
4858
arc_last_uncached_flush = ddi_get_lbolt();
4859
evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_DATA, B_FALSE);
4860
evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_METADATA, B_FALSE);
4861
4862
/* Evict from other states only if told to. */
4863
if (arc_evict_needed)
4864
evicted += arc_evict();
4865
4866
/*
4867
* If evicted is zero, we couldn't evict anything
4868
* via arc_evict(). This could be due to hash lock
4869
* collisions, but more likely due to the majority of
4870
* arc buffers being unevictable. Therefore, even if
4871
* arc_size is above arc_c, another pass is unlikely to
4872
* be helpful and could potentially cause us to enter an
4873
* infinite loop. Additionally, zthr_iscancelled() is
4874
* checked here so that if the arc is shutting down, the
4875
* broadcast will wake any remaining arc evict waiters.
4876
*
4877
* Note we cancel using zthr instead of arc_evict_zthr
4878
* because the latter may not yet be initialized when the
4879
* callback is first invoked.
4880
*/
4881
mutex_enter(&arc_evict_lock);
4882
arc_evict_needed = !zthr_iscancelled(zthr) &&
4883
evicted > 0 && aggsum_compare(&arc_sums.arcstat_size, arc_c) > 0;
4884
if (!arc_evict_needed) {
4885
/*
4886
* We're either no longer overflowing, or we
4887
* can't evict anything more, so we should wake
4888
* arc_get_data_impl() sooner.
4889
*/
4890
arc_evict_waiter_t *aw;
4891
while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) {
4892
cv_broadcast(&aw->aew_cv);
4893
}
4894
arc_set_need_free();
4895
}
4896
mutex_exit(&arc_evict_lock);
4897
spl_fstrans_unmark(cookie);
4898
}
4899
4900
static boolean_t
4901
arc_reap_cb_check(void *arg, zthr_t *zthr)
4902
{
4903
(void) arg, (void) zthr;
4904
4905
int64_t free_memory = arc_available_memory();
4906
static int reap_cb_check_counter = 0;
4907
4908
/*
4909
* If a kmem reap is already active, don't schedule more. We must
4910
* check for this because kmem_cache_reap_soon() won't actually
4911
* block on the cache being reaped (this is to prevent callers from
4912
* becoming implicitly blocked by a system-wide kmem reap -- which,
4913
* on a system with many, many full magazines, can take minutes).
4914
*/
4915
if (!kmem_cache_reap_active() && free_memory < 0) {
4916
4917
arc_no_grow = B_TRUE;
4918
arc_warm = B_TRUE;
4919
/*
4920
* Wait at least zfs_grow_retry (default 5) seconds
4921
* before considering growing.
4922
*/
4923
arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry);
4924
return (B_TRUE);
4925
} else if (free_memory < arc_c >> arc_no_grow_shift) {
4926
arc_no_grow = B_TRUE;
4927
} else if (gethrtime() >= arc_growtime) {
4928
arc_no_grow = B_FALSE;
4929
}
4930
4931
/*
4932
* Called unconditionally every 60 seconds to reclaim unused
4933
* zstd compression and decompression context. This is done
4934
* here to avoid the need for an independent thread.
4935
*/
4936
if (!((reap_cb_check_counter++) % 60))
4937
zfs_zstd_cache_reap_now();
4938
4939
return (B_FALSE);
4940
}
4941
4942
/*
4943
* Keep enough free memory in the system by reaping the ARC's kmem
4944
* caches. To cause more slabs to be reapable, we may reduce the
4945
* target size of the cache (arc_c), causing the arc_evict_cb()
4946
* to free more buffers.
4947
*/
4948
static void
4949
arc_reap_cb(void *arg, zthr_t *zthr)
4950
{
4951
int64_t can_free, free_memory, to_free;
4952
4953
(void) arg, (void) zthr;
4954
fstrans_cookie_t cookie = spl_fstrans_mark();
4955
4956
/*
4957
* Kick off asynchronous kmem_reap()'s of all our caches.
4958
*/
4959
arc_kmem_reap_soon();
4960
4961
/*
4962
* Wait at least arc_kmem_cache_reap_retry_ms between
4963
* arc_kmem_reap_soon() calls. Without this check it is possible to
4964
* end up in a situation where we spend lots of time reaping
4965
* caches, while we're near arc_c_min. Waiting here also gives the
4966
* subsequent free memory check a chance of finding that the
4967
* asynchronous reap has already freed enough memory, and we don't
4968
* need to call arc_reduce_target_size().
4969
*/
4970
delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000);
4971
4972
/*
4973
* Reduce the target size as needed to maintain the amount of free
4974
* memory in the system at a fraction of the arc_size (1/128th by
4975
* default). If oversubscribed (free_memory < 0) then reduce the
4976
* target arc_size by the deficit amount plus the fractional
4977
* amount. If free memory is positive but less than the fractional
4978
* amount, reduce by what is needed to hit the fractional amount.
4979
*/
4980
free_memory = arc_available_memory();
4981
can_free = arc_c - arc_c_min;
4982
to_free = (MAX(can_free, 0) >> arc_shrink_shift) - free_memory;
4983
if (to_free > 0)
4984
arc_reduce_target_size(to_free);
4985
spl_fstrans_unmark(cookie);
4986
}
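/*
 * Worked example of the target-size reduction in arc_reap_cb() above
 * (illustrative only; assumes the default 1/128th fraction, i.e. an
 * arc_shrink_shift of 7): with arc_c = 8 GiB, arc_c_min = 1 GiB and
 * free_memory = -32 MiB, can_free is 7 GiB, so
 * to_free = (7 GiB >> 7) - (-32 MiB) = 56 MiB + 32 MiB = 88 MiB, and the
 * target size is reduced by that amount.
 */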
4987
4988
#ifdef _KERNEL
4989
/*
4990
* Determine the amount of memory eligible for eviction contained in the
4991
* ARC. All clean data reported by the ghost lists can always be safely
4992
* evicted. Due to arc_c_min, the same does not hold for all clean data
4993
* contained by the regular mru and mfu lists.
4994
*
4995
* In the case of the regular mru and mfu lists, we need to report as
4996
* much clean data as possible, such that evicting that same reported
4997
* data will not bring arc_size below arc_c_min. Thus, in certain
4998
* circumstances, the total amount of clean data in the mru and mfu
4999
* lists might not actually be evictable.
5000
*
5001
* The following two distinct cases are accounted for:
5002
*
5003
* 1. The sum of the amount of dirty data contained by both the mru and
5004
* mfu lists, plus the ARC's other accounting (e.g. the anon list),
5005
* is greater than or equal to arc_c_min.
5006
* (i.e. amount of dirty data >= arc_c_min)
5007
*
5008
* This is the easy case; all clean data contained by the mru and mfu
5009
* lists is evictable. Evicting all clean data can only drop arc_size
5010
* to the amount of dirty data, which is greater than arc_c_min.
5011
*
5012
* 2. The sum of the amount of dirty data contained by both the mru and
5013
* mfu lists, plus the ARC's other accounting (e.g. the anon list),
5014
* is less than arc_c_min.
5015
* (i.e. arc_c_min > amount of dirty data)
5016
*
5017
* 2.1. arc_size is greater than or equal to arc_c_min.
5018
* (i.e. arc_size >= arc_c_min > amount of dirty data)
5019
*
5020
* In this case, not all clean data from the regular mru and mfu
5021
* lists is actually evictable; we must leave enough clean data
5022
* to keep arc_size above arc_c_min. Thus, the maximum amount of
5023
* evictable data from the two lists combined, is exactly the
5024
* difference between arc_size and arc_c_min.
5025
*
5026
* 2.2. arc_size is less than arc_c_min
5027
* (i.e. arc_c_min > arc_size > amount of dirty data)
5028
*
5029
* In this case, none of the data contained in the mru and mfu
5030
* lists is evictable, even if it's clean. Since arc_size is
5031
* already below arc_c_min, evicting any more would only
5032
* increase this negative difference.
5033
*/
5034
5035
#endif /* _KERNEL */
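/*
 * Illustrative sketch of the cases described above (hypothetical helper,
 * not part of the module): given the clean and dirty data accounted to the
 * regular mru/mfu lists (plus the ARC's other accounting), compute how much
 * clean data could be reported as evictable without letting eviction drop
 * arc_size below arc_c_min.
 */
static uint64_t
example_mru_mfu_evictable(uint64_t arc_size_now, uint64_t c_min,
    uint64_t dirty, uint64_t clean)
{
	/* Case 1: dirty data alone keeps arc_size at or above arc_c_min. */
	if (dirty >= c_min)
		return (clean);
	/* Case 2.1: clean data may be evicted only down to arc_c_min. */
	if (arc_size_now >= c_min)
		return (MIN(clean, arc_size_now - c_min));
	/* Case 2.2: already below arc_c_min; nothing is evictable. */
	return (0);
}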
5036
5037
/*
5038
* Adapt arc info given the number of bytes we are trying to add and
5039
* the state that we are coming from. This function is only called
5040
* when we are adding new content to the cache.
5041
*/
5042
static void
5043
arc_adapt(uint64_t bytes)
5044
{
5045
/*
5046
* Wake reap thread if we do not have any available memory
5047
*/
5048
if (arc_reclaim_needed()) {
5049
zthr_wakeup(arc_reap_zthr);
5050
return;
5051
}
5052
5053
if (arc_no_grow)
5054
return;
5055
5056
if (arc_c >= arc_c_max)
5057
return;
5058
5059
/*
5060
* If we're within (2 * maxblocksize) bytes of the target
5061
* cache size, increment the target cache size
5062
*/
5063
if (aggsum_upper_bound(&arc_sums.arcstat_size) +
5064
2 * SPA_MAXBLOCKSIZE >= arc_c) {
5065
uint64_t dc = MAX(bytes, SPA_OLD_MAXBLOCKSIZE);
5066
if (atomic_add_64_nv(&arc_c, dc) > arc_c_max)
5067
arc_c = arc_c_max;
5068
}
5069
}
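/*
 * Illustrative sketch of the growth rule in arc_adapt() above (hypothetical
 * helper, not part of the module; ignores the reclaim/no-grow early returns
 * and the atomic update): returns the new target size for one allocation.
 */
static uint64_t
example_arc_adapt_target(uint64_t cur_c, uint64_t c_max,
    uint64_t size_upper_bound, uint64_t bytes)
{
	/* Only grow when within 2 * maxblocksize of the current target. */
	if (size_upper_bound + 2 * SPA_MAXBLOCKSIZE < cur_c)
		return (cur_c);
	uint64_t dc = MAX(bytes, SPA_OLD_MAXBLOCKSIZE);
	return (MIN(cur_c + dc, c_max));	/* clamp at arc_c_max */
}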
5070
5071
/*
5072
* Check if ARC current size has grown past our upper thresholds.
5073
*/
5074
static arc_ovf_level_t
5075
arc_is_overflowing(boolean_t lax, boolean_t use_reserve)
5076
{
5077
/*
5078
* We just compare the lower bound here for performance reasons. Our
5079
* primary goals are to make sure that the arc never grows without
5080
* bound, and that it can reach its maximum size. This check
5081
* accomplishes both goals. The maximum amount we could run over by is
5082
* 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block
5083
* in the ARC. In practice, that's in the tens of MB, which is low
5084
* enough to be safe.
5085
*/
5086
int64_t arc_over = aggsum_lower_bound(&arc_sums.arcstat_size) - arc_c -
5087
zfs_max_recordsize;
5088
int64_t dn_over = aggsum_lower_bound(&arc_sums.arcstat_dnode_size) -
5089
arc_dnode_limit;
5090
5091
/* Always allow at least one block of overflow. */
5092
if (arc_over < 0 && dn_over <= 0)
5093
return (ARC_OVF_NONE);
5094
5095
/* If we are under memory pressure, report severe overflow. */
5096
if (!lax)
5097
return (ARC_OVF_SEVERE);
5098
5099
/* We are not under pressure, so be more or less relaxed. */
5100
int64_t overflow = (arc_c >> zfs_arc_overflow_shift) / 2;
5101
if (use_reserve)
5102
overflow *= 3;
5103
return (arc_over < overflow ? ARC_OVF_SOME : ARC_OVF_SEVERE);
5104
}
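/*
 * Illustrative numbers for the check above (assuming the default
 * zfs_arc_overflow_shift of 8 and a 16 MiB zfs_max_recordsize): with
 * arc_c = 4 GiB, up to one max-sized block (16 MiB) over the target is
 * ARC_OVF_NONE. Beyond that, a lax caller tolerates another
 * (4 GiB >> 8) / 2 = 8 MiB (24 MiB with use_reserve) as ARC_OVF_SOME
 * before the overflow is reported as ARC_OVF_SEVERE.
 */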
5105
5106
static abd_t *
5107
arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, const void *tag,
5108
int alloc_flags)
5109
{
5110
arc_buf_contents_t type = arc_buf_type(hdr);
5111
5112
arc_get_data_impl(hdr, size, tag, alloc_flags);
5113
if (alloc_flags & ARC_HDR_ALLOC_LINEAR)
5114
return (abd_alloc_linear(size, type == ARC_BUFC_METADATA));
5115
else
5116
return (abd_alloc(size, type == ARC_BUFC_METADATA));
5117
}
5118
5119
static void *
5120
arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, const void *tag)
5121
{
5122
arc_buf_contents_t type = arc_buf_type(hdr);
5123
5124
arc_get_data_impl(hdr, size, tag, 0);
5125
if (type == ARC_BUFC_METADATA) {
5126
return (zio_buf_alloc(size));
5127
} else {
5128
ASSERT(type == ARC_BUFC_DATA);
5129
return (zio_data_buf_alloc(size));
5130
}
5131
}
5132
5133
/*
5134
* Wait for the specified amount of data (in bytes) to be evicted from the
5135
* ARC, and for there to be sufficient free memory in the system.
5136
* The lax argument specifies that the caller has no specific reason to
5137
* wait and is not aware of any memory pressure. Low memory handlers,
5138
* however, should set it to B_FALSE to wait for all required evictions
5139
* to complete. The use_reserve argument allows some callers to wait less
5140
* than others, so as not to block critical code paths or other resources.
5141
*/
5142
void
5143
arc_wait_for_eviction(uint64_t amount, boolean_t lax, boolean_t use_reserve)
5144
{
5145
switch (arc_is_overflowing(lax, use_reserve)) {
5146
case ARC_OVF_NONE:
5147
return;
5148
case ARC_OVF_SOME:
5149
/*
5150
* This is a bit racy without taking arc_evict_lock, but the
5151
* worst that can happen is that we either call zthr_wakeup() an
5152
* extra time due to a race with another thread here, or the flag
5153
* we set gets cleared by arc_evict_cb(), which is unlikely due to
5154
* the big hysteresis, but also not important since at this level
5155
* of overflow the eviction is purely advisory. At the same time,
5156
* taking the global lock here every time without waiting for the
5157
* actual eviction would create significant lock contention.
5158
*/
5159
if (!arc_evict_needed) {
5160
arc_evict_needed = B_TRUE;
5161
zthr_wakeup(arc_evict_zthr);
5162
}
5163
return;
5164
case ARC_OVF_SEVERE:
5165
default:
5166
{
5167
arc_evict_waiter_t aw;
5168
list_link_init(&aw.aew_node);
5169
cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL);
5170
5171
uint64_t last_count = 0;
5172
mutex_enter(&arc_evict_lock);
5173
if (!list_is_empty(&arc_evict_waiters)) {
5174
arc_evict_waiter_t *last =
5175
list_tail(&arc_evict_waiters);
5176
last_count = last->aew_count;
5177
} else if (!arc_evict_needed) {
5178
arc_evict_needed = B_TRUE;
5179
zthr_wakeup(arc_evict_zthr);
5180
}
5181
/*
5182
* Note, the last waiter's count may be less than
5183
* arc_evict_count if we are low on memory in which
5184
* case arc_evict_state_impl() may have deferred
5185
* wakeups (but still incremented arc_evict_count).
5186
*/
5187
aw.aew_count = MAX(last_count, arc_evict_count) + amount;
5188
5189
list_insert_tail(&arc_evict_waiters, &aw);
5190
5191
arc_set_need_free();
5192
5193
DTRACE_PROBE3(arc__wait__for__eviction,
5194
uint64_t, amount,
5195
uint64_t, arc_evict_count,
5196
uint64_t, aw.aew_count);
5197
5198
/*
5199
* We will be woken up either when arc_evict_count reaches
5200
* aew_count, or when the ARC is no longer overflowing and
5201
* eviction completes.
5202
* In case of "false" wakeup, we will still be on the list.
5203
*/
5204
do {
5205
cv_wait(&aw.aew_cv, &arc_evict_lock);
5206
} while (list_link_active(&aw.aew_node));
5207
mutex_exit(&arc_evict_lock);
5208
5209
cv_destroy(&aw.aew_cv);
5210
}
5211
}
5212
}
5213
5214
/*
5215
* Allocate a block and return it to the caller. If we are hitting the
5216
* hard limit for the cache size, we must sleep, waiting for the eviction
5217
* thread to catch up. If we're past the target size but below the hard
5218
* limit, we'll only signal the reclaim thread and continue on.
5219
*/
5220
static void
5221
arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag,
5222
int alloc_flags)
5223
{
5224
arc_adapt(size);
5225
5226
/*
5227
* If arc_size is currently overflowing, we must be adding data
5228
* faster than we are evicting. To ensure we don't compound the
5229
* problem by adding more data and forcing arc_size to grow even
5230
* further past its target size, we wait for the eviction thread to
5231
* make some progress. We also wait for there to be sufficient free
5232
* memory in the system, as measured by arc_free_memory().
5233
*
5234
* Specifically, we wait for zfs_arc_eviction_pct percent of the
5235
* requested size to be evicted. This should be more than 100%, to
5236
* ensure that progress is also made towards getting arc_size
5237
* under arc_c. See the comment above zfs_arc_eviction_pct.
5238
*/
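/*
 * For example, with zfs_arc_eviction_pct at its assumed default of 200,
 * a 128 KiB allocation waits for up to 256 KiB to be evicted.
 */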
5239
arc_wait_for_eviction(size * zfs_arc_eviction_pct / 100,
5240
B_TRUE, alloc_flags & ARC_HDR_USE_RESERVE);
5241
5242
arc_buf_contents_t type = arc_buf_type(hdr);
5243
if (type == ARC_BUFC_METADATA) {
5244
arc_space_consume(size, ARC_SPACE_META);
5245
} else {
5246
arc_space_consume(size, ARC_SPACE_DATA);
5247
}
5248
5249
/*
5250
* Update the state size. Note that ghost states have a
5251
* "ghost size" and so don't need to be updated.
5252
*/
5253
arc_state_t *state = hdr->b_l1hdr.b_state;
5254
if (!GHOST_STATE(state)) {
5255
5256
(void) zfs_refcount_add_many(&state->arcs_size[type], size,
5257
tag);
5258
5259
/*
5260
* If this is reached via arc_read, the link is
5261
* protected by the hash lock. If reached via
5262
* arc_buf_alloc, the header should not be accessed by
5263
* any other thread. And, if reached via arc_read_done,
5264
* the hash lock will protect it if it's found in the
5265
* hash table; otherwise no other thread should be
5266
* trying to [add|remove]_reference it.
5267
*/
5268
if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
5269
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
5270
(void) zfs_refcount_add_many(&state->arcs_esize[type],
5271
size, tag);
5272
}
5273
}
5274
}
5275
5276
static void
5277
arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size,
5278
const void *tag)
5279
{
5280
arc_free_data_impl(hdr, size, tag);
5281
abd_free(abd);
5282
}
5283
5284
static void
5285
arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, const void *tag)
5286
{
5287
arc_buf_contents_t type = arc_buf_type(hdr);
5288
5289
arc_free_data_impl(hdr, size, tag);
5290
if (type == ARC_BUFC_METADATA) {
5291
zio_buf_free(buf, size);
5292
} else {
5293
ASSERT(type == ARC_BUFC_DATA);
5294
zio_data_buf_free(buf, size);
5295
}
5296
}
5297
5298
/*
5299
* Free the arc data buffer.
5300
*/
5301
static void
5302
arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag)
5303
{
5304
arc_state_t *state = hdr->b_l1hdr.b_state;
5305
arc_buf_contents_t type = arc_buf_type(hdr);
5306
5307
/* protected by hash lock, if in the hash table */
5308
if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
5309
ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
5310
ASSERT(state != arc_anon && state != arc_l2c_only);
5311
5312
(void) zfs_refcount_remove_many(&state->arcs_esize[type],
5313
size, tag);
5314
}
5315
(void) zfs_refcount_remove_many(&state->arcs_size[type], size, tag);
5316
5317
VERIFY3U(hdr->b_type, ==, type);
5318
if (type == ARC_BUFC_METADATA) {
5319
arc_space_return(size, ARC_SPACE_META);
5320
} else {
5321
ASSERT(type == ARC_BUFC_DATA);
5322
arc_space_return(size, ARC_SPACE_DATA);
5323
}
5324
}
5325
5326
/*
5327
* This routine is called whenever a buffer is accessed.
5328
*/
5329
static void
5330
arc_access(arc_buf_hdr_t *hdr, arc_flags_t arc_flags, boolean_t hit)
5331
{
5332
ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
5333
ASSERT(HDR_HAS_L1HDR(hdr));
5334
5335
/*
5336
* Update buffer prefetch status.
5337
*/
5338
boolean_t was_prefetch = HDR_PREFETCH(hdr);
5339
boolean_t now_prefetch = arc_flags & ARC_FLAG_PREFETCH;
5340
if (was_prefetch != now_prefetch) {
5341
if (was_prefetch) {
5342
ARCSTAT_CONDSTAT(hit, demand_hit, demand_iohit,
5343
HDR_PRESCIENT_PREFETCH(hdr), prescient, predictive,
5344
prefetch);
5345
}
5346
if (HDR_HAS_L2HDR(hdr))
5347
l2arc_hdr_arcstats_decrement_state(hdr);
5348
if (was_prefetch) {
5349
arc_hdr_clear_flags(hdr,
5350
ARC_FLAG_PREFETCH | ARC_FLAG_PRESCIENT_PREFETCH);
5351
} else {
5352
arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH);
5353
}
5354
if (HDR_HAS_L2HDR(hdr))
5355
l2arc_hdr_arcstats_increment_state(hdr);
5356
}
5357
if (now_prefetch) {
5358
if (arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) {
5359
arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH);
5360
ARCSTAT_BUMP(arcstat_prescient_prefetch);
5361
} else {
5362
ARCSTAT_BUMP(arcstat_predictive_prefetch);
5363
}
5364
}
5365
if (arc_flags & ARC_FLAG_L2CACHE)
5366
arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);
5367
5368
clock_t now = ddi_get_lbolt();
5369
if (hdr->b_l1hdr.b_state == arc_anon) {
5370
arc_state_t *new_state;
5371
/*
5372
* This buffer is not in the cache, and does not appear in
5373
* our "ghost" lists. Add it to the MRU or uncached state.
5374
*/
5375
ASSERT0(hdr->b_l1hdr.b_arc_access);
5376
hdr->b_l1hdr.b_arc_access = now;
5377
if (HDR_UNCACHED(hdr)) {
5378
new_state = arc_uncached;
5379
DTRACE_PROBE1(new_state__uncached, arc_buf_hdr_t *,
5380
hdr);
5381
} else {
5382
new_state = arc_mru;
5383
DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
5384
}
5385
arc_change_state(new_state, hdr);
5386
} else if (hdr->b_l1hdr.b_state == arc_mru) {
5387
/*
5388
* This buffer has been accessed once recently and either
5389
* its read is still in progress or it is in the cache.
5390
*/
5391
if (HDR_IO_IN_PROGRESS(hdr)) {
5392
hdr->b_l1hdr.b_arc_access = now;
5393
return;
5394
}
5395
hdr->b_l1hdr.b_mru_hits++;
5396
ARCSTAT_BUMP(arcstat_mru_hits);
5397
5398
/*
5399
* If the previous access was a prefetch, then it already
5400
* handled possible promotion, so nothing more to do for now.
5401
*/
5402
if (was_prefetch) {
5403
hdr->b_l1hdr.b_arc_access = now;
5404
return;
5405
}
5406
5407
/*
5408
* If more than ARC_MINTIME have passed from the previous
5409
* hit, promote the buffer to the MFU state.
5410
*/
5411
if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access +
5412
ARC_MINTIME)) {
5413
hdr->b_l1hdr.b_arc_access = now;
5414
DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
5415
arc_change_state(arc_mfu, hdr);
5416
}
5417
} else if (hdr->b_l1hdr.b_state == arc_mru_ghost) {
5418
arc_state_t *new_state;
5419
/*
5420
* This buffer has been accessed once recently, but was
5421
* evicted from the cache. Had the MRU been bigger, this
5423
* would have been an MRU hit, so handle it the same way, except
5423
* we don't need to check the previous access time.
5424
*/
5425
hdr->b_l1hdr.b_mru_ghost_hits++;
5426
ARCSTAT_BUMP(arcstat_mru_ghost_hits);
5427
hdr->b_l1hdr.b_arc_access = now;
5428
wmsum_add(&arc_mru_ghost->arcs_hits[arc_buf_type(hdr)],
5429
arc_hdr_size(hdr));
5430
if (was_prefetch) {
5431
new_state = arc_mru;
5432
DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
5433
} else {
5434
new_state = arc_mfu;
5435
DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
5436
}
5437
arc_change_state(new_state, hdr);
5438
} else if (hdr->b_l1hdr.b_state == arc_mfu) {
5439
/*
5440
* This buffer has been accessed more than once and either
5441
* is still in the cache or is being restored from a ghost state.
5442
*/
5443
if (!HDR_IO_IN_PROGRESS(hdr)) {
5444
hdr->b_l1hdr.b_mfu_hits++;
5445
ARCSTAT_BUMP(arcstat_mfu_hits);
5446
}
5447
hdr->b_l1hdr.b_arc_access = now;
5448
} else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) {
5449
/*
5450
* This buffer has been accessed more than once recently, but
5451
* has been evicted from the cache. Had the MFU been bigger,
5452
* it would have stayed in the cache, so move it back to the MFU state.
5453
*/
5454
hdr->b_l1hdr.b_mfu_ghost_hits++;
5455
ARCSTAT_BUMP(arcstat_mfu_ghost_hits);
5456
hdr->b_l1hdr.b_arc_access = now;
5457
wmsum_add(&arc_mfu_ghost->arcs_hits[arc_buf_type(hdr)],
5458
arc_hdr_size(hdr));
5459
DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
5460
arc_change_state(arc_mfu, hdr);
5461
} else if (hdr->b_l1hdr.b_state == arc_uncached) {
5462
/*
5463
* This buffer is uncacheable, but we got a hit. Probably
5464
* a demand read after prefetch. Nothing more to do here.
5465
*/
5466
if (!HDR_IO_IN_PROGRESS(hdr))
5467
ARCSTAT_BUMP(arcstat_uncached_hits);
5468
hdr->b_l1hdr.b_arc_access = now;
5469
} else if (hdr->b_l1hdr.b_state == arc_l2c_only) {
5470
/*
5471
* This buffer is on the 2nd Level ARC and was not accessed
5472
* for a long time, so treat it as new and put into MRU.
5473
*/
5474
hdr->b_l1hdr.b_arc_access = now;
5475
DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
5476
arc_change_state(arc_mru, hdr);
5477
} else {
5478
cmn_err(CE_PANIC, "invalid arc state 0x%p",
5479
hdr->b_l1hdr.b_state);
5480
}
5481
}
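/*
 * Summary of the transitions handled above: anon -> mru (or uncached for
 * HDR_UNCACHED buffers); mru -> mfu on a hit more than ARC_MINTIME after
 * the previous access (unless that access was a prefetch);
 * mru_ghost -> mfu (or back to mru for prefetches); mfu_ghost -> mfu;
 * l2c_only -> mru.
 */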
5482
5483
/*
5484
* This routine is called by dbuf_hold() to update the arc_access() state
5485
* which otherwise would be skipped for entries in the dbuf cache.
5486
*/
5487
void
5488
arc_buf_access(arc_buf_t *buf)
5489
{
5490
arc_buf_hdr_t *hdr = buf->b_hdr;
5491
5492
/*
5493
* Avoid taking the hash_lock when possible as an optimization.
5494
* The header must be checked again under the hash_lock in order
5495
* to handle the case where it is concurrently being released.
5496
*/
5497
if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr))
5498
return;
5499
5500
kmutex_t *hash_lock = HDR_LOCK(hdr);
5501
mutex_enter(hash_lock);
5502
5503
if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
5504
mutex_exit(hash_lock);
5505
ARCSTAT_BUMP(arcstat_access_skip);
5506
return;
5507
}
5508
5509
ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
5510
hdr->b_l1hdr.b_state == arc_mfu ||
5511
hdr->b_l1hdr.b_state == arc_uncached);
5512
5513
DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
5514
arc_access(hdr, 0, B_TRUE);
5515
mutex_exit(hash_lock);
5516
5517
ARCSTAT_BUMP(arcstat_hits);
5518
ARCSTAT_CONDSTAT(B_TRUE /* demand */, demand, prefetch,
5519
!HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
5520
}
5521
5522
/* a generic arc_read_done_func_t which you can use */
5523
void
5524
arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
5525
arc_buf_t *buf, void *arg)
5526
{
5527
(void) zio, (void) zb, (void) bp;
5528
5529
if (buf == NULL)
5530
return;
5531
5532
memcpy(arg, buf->b_data, arc_buf_size(buf));
5533
arc_buf_destroy(buf, arg);
5534
}
5535
5536
/* a generic arc_read_done_func_t */
5537
void
5538
arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
5539
arc_buf_t *buf, void *arg)
5540
{
5541
(void) zb, (void) bp;
5542
arc_buf_t **bufp = arg;
5543
5544
if (buf == NULL) {
5545
ASSERT(zio == NULL || zio->io_error != 0);
5546
*bufp = NULL;
5547
} else {
5548
ASSERT(zio == NULL || zio->io_error == 0);
5549
*bufp = buf;
5550
ASSERT(buf->b_data != NULL);
5551
}
5552
}
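/*
 * Illustrative sketch (hypothetical caller, not part of the module): a
 * synchronous read through the ARC using arc_getbuf_func() to hand back
 * the buffer. On success the caller owns *bufp and must release it with
 * arc_buf_destroy(*bufp, bufp).
 */
static int
example_read_block_sync(spa_t *spa, const blkptr_t *bp,
    const zbookmark_phys_t *zb, arc_buf_t **bufp)
{
	arc_flags_t aflags = ARC_FLAG_WAIT;

	return (arc_read(NULL, spa, bp, arc_getbuf_func, bufp,
	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb));
}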
5553
5554
static void
5555
arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp)
5556
{
5557
if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
5558
ASSERT0(HDR_GET_PSIZE(hdr));
5559
ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF);
5560
} else {
5561
if (HDR_COMPRESSION_ENABLED(hdr)) {
5562
ASSERT3U(arc_hdr_get_compress(hdr), ==,
5563
BP_GET_COMPRESS(bp));
5564
}
5565
ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
5566
ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp));
5567
ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp));
5568
}
5569
}
5570
5571
static void
5572
arc_read_done(zio_t *zio)
5573
{
5574
blkptr_t *bp = zio->io_bp;
5575
arc_buf_hdr_t *hdr = zio->io_private;
5576
kmutex_t *hash_lock = NULL;
5577
arc_callback_t *callback_list;
5578
arc_callback_t *acb;
5579
5580
/*
5581
* The hdr was inserted into the hash table and removed from lists
5582
* prior to starting I/O. We should find this header, since
5583
* it's in the hash table, and it should be legit since it's
5584
* not possible to evict it during the I/O. The only possible
5585
* reason for it not to be found is if we were freed during the
5586
* read.
5587
*/
5588
if (HDR_IN_HASH_TABLE(hdr)) {
5589
arc_buf_hdr_t *found;
5590
5591
ASSERT3U(hdr->b_birth, ==, BP_GET_PHYSICAL_BIRTH(zio->io_bp));
5592
ASSERT3U(hdr->b_dva.dva_word[0], ==,
5593
BP_IDENTITY(zio->io_bp)->dva_word[0]);
5594
ASSERT3U(hdr->b_dva.dva_word[1], ==,
5595
BP_IDENTITY(zio->io_bp)->dva_word[1]);
5596
5597
found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock);
5598
5599
ASSERT((found == hdr &&
5600
DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) ||
5601
(found == hdr && HDR_L2_READING(hdr)));
5602
ASSERT3P(hash_lock, !=, NULL);
5603
}
5604
5605
if (BP_IS_PROTECTED(bp)) {
5606
hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
5607
hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
5608
zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
5609
hdr->b_crypt_hdr.b_iv);
5610
5611
if (zio->io_error == 0) {
5612
if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) {
5613
void *tmpbuf;
5614
5615
tmpbuf = abd_borrow_buf_copy(zio->io_abd,
5616
sizeof (zil_chain_t));
5617
zio_crypt_decode_mac_zil(tmpbuf,
5618
hdr->b_crypt_hdr.b_mac);
5619
abd_return_buf(zio->io_abd, tmpbuf,
5620
sizeof (zil_chain_t));
5621
} else {
5622
zio_crypt_decode_mac_bp(bp,
5623
hdr->b_crypt_hdr.b_mac);
5624
}
5625
}
5626
}
5627
5628
if (zio->io_error == 0) {
5629
/* byteswap if necessary */
5630
if (BP_SHOULD_BYTESWAP(zio->io_bp)) {
5631
if (BP_GET_LEVEL(zio->io_bp) > 0) {
5632
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
5633
} else {
5634
hdr->b_l1hdr.b_byteswap =
5635
DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp));
5636
}
5637
} else {
5638
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
5639
}
5640
if (!HDR_L2_READING(hdr)) {
5641
hdr->b_complevel = zio->io_prop.zp_complevel;
5642
}
5643
}
5644
5645
arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED);
5646
if (l2arc_noprefetch && HDR_PREFETCH(hdr))
5647
arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE);
5648
5649
callback_list = hdr->b_l1hdr.b_acb;
5650
ASSERT3P(callback_list, !=, NULL);
5651
hdr->b_l1hdr.b_acb = NULL;
5652
5653
/*
5654
* If a read request has a callback (i.e. acb_done is not NULL), then we
5655
* make a buf containing the data according to the parameters which were
5656
* passed in. The implementation of arc_buf_alloc_impl() ensures that we
5657
* aren't needlessly decompressing the data multiple times.
5658
*/
5659
int callback_cnt = 0;
5660
for (acb = callback_list; acb != NULL; acb = acb->acb_next) {
5661
5662
/* We need the last one to call below in original order. */
5663
callback_list = acb;
5664
5665
if (!acb->acb_done || acb->acb_nobuf)
5666
continue;
5667
5668
callback_cnt++;
5669
5670
if (zio->io_error != 0)
5671
continue;
5672
5673
int error = arc_buf_alloc_impl(hdr, zio->io_spa,
5674
&acb->acb_zb, acb->acb_private, acb->acb_encrypted,
5675
acb->acb_compressed, acb->acb_noauth, B_TRUE,
5676
&acb->acb_buf);
5677
5678
/*
5679
* Assert non-speculative zios didn't fail because an
5680
* encryption key wasn't loaded
5681
*/
5682
ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) ||
5683
error != EACCES);
5684
5685
/*
5686
* If we failed to decrypt, report an error now (as the zio
5687
* layer would have done if it had done the transforms).
5688
*/
5689
if (error == ECKSUM) {
5690
ASSERT(BP_IS_PROTECTED(bp));
5691
error = SET_ERROR(EIO);
5692
if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) {
5693
spa_log_error(zio->io_spa, &acb->acb_zb,
5694
BP_GET_PHYSICAL_BIRTH(zio->io_bp));
5695
(void) zfs_ereport_post(
5696
FM_EREPORT_ZFS_AUTHENTICATION,
5697
zio->io_spa, NULL, &acb->acb_zb, zio, 0);
5698
}
5699
}
5700
5701
if (error != 0) {
5702
/*
5703
* Decompression or decryption failed. Set
5704
* io_error so that when we call acb_done
5705
* (below), we will indicate that the read
5706
* failed. Note that in the unusual case
5707
* where one callback is compressed and another
5708
* uncompressed, we will mark all of them
5709
* as failed, even though the uncompressed
5710
* one can't actually fail. In this case,
5711
* the hdr will not be anonymous, because
5712
* if there are multiple callbacks, it's
5713
* because multiple threads found the same
5714
* arc buf in the hash table.
5715
*/
5716
zio->io_error = error;
5717
}
5718
}
5719
5720
/*
5721
* If there are multiple callbacks, we must have the hash lock,
5722
* because the only way for multiple threads to find this hdr is
5723
* in the hash table. This ensures that if there are multiple
5724
* callbacks, the hdr is not anonymous. If it were anonymous,
5725
* we couldn't use arc_buf_destroy() in the error case below.
5726
*/
5727
ASSERT(callback_cnt < 2 || hash_lock != NULL);
5728
5729
if (zio->io_error == 0) {
5730
arc_hdr_verify(hdr, zio->io_bp);
5731
} else {
5732
arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
5733
if (hdr->b_l1hdr.b_state != arc_anon)
5734
arc_change_state(arc_anon, hdr);
5735
if (HDR_IN_HASH_TABLE(hdr))
5736
buf_hash_remove(hdr);
5737
}
5738
5739
arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
5740
(void) remove_reference(hdr, hdr);
5741
5742
if (hash_lock != NULL)
5743
mutex_exit(hash_lock);
5744
5745
/* execute each callback and free its structure */
5746
while ((acb = callback_list) != NULL) {
5747
if (acb->acb_done != NULL) {
5748
if (zio->io_error != 0 && acb->acb_buf != NULL) {
5749
/*
5750
* If arc_buf_alloc_impl() fails during
5751
* decompression, the buf will still be
5752
* allocated, and needs to be freed here.
5753
*/
5754
arc_buf_destroy(acb->acb_buf,
5755
acb->acb_private);
5756
acb->acb_buf = NULL;
5757
}
5758
acb->acb_done(zio, &zio->io_bookmark, zio->io_bp,
5759
acb->acb_buf, acb->acb_private);
5760
}
5761
5762
if (acb->acb_zio_dummy != NULL) {
5763
acb->acb_zio_dummy->io_error = zio->io_error;
5764
zio_nowait(acb->acb_zio_dummy);
5765
}
5766
5767
callback_list = acb->acb_prev;
5768
if (acb->acb_wait) {
5769
mutex_enter(&acb->acb_wait_lock);
5770
acb->acb_wait_error = zio->io_error;
5771
acb->acb_wait = B_FALSE;
5772
cv_signal(&acb->acb_wait_cv);
5773
mutex_exit(&acb->acb_wait_lock);
5774
/* acb will be freed by the waiting thread. */
5775
} else {
5776
kmem_free(acb, sizeof (arc_callback_t));
5777
}
5778
}
5779
}
5780
5781
/*
5782
* Lookup the block at the specified DVA (in bp), and return the manner in
5783
* which the block is cached. A zero return indicates not cached.
5784
*/
5785
int
5786
arc_cached(spa_t *spa, const blkptr_t *bp)
5787
{
5788
arc_buf_hdr_t *hdr = NULL;
5789
kmutex_t *hash_lock = NULL;
5790
uint64_t guid = spa_load_guid(spa);
5791
int flags = 0;
5792
5793
if (BP_IS_EMBEDDED(bp))
5794
return (ARC_CACHED_EMBEDDED);
5795
5796
hdr = buf_hash_find(guid, bp, &hash_lock);
5797
if (hdr == NULL)
5798
return (0);
5799
5800
if (HDR_HAS_L1HDR(hdr)) {
5801
arc_state_t *state = hdr->b_l1hdr.b_state;
5802
/*
5803
* We switch to ensure that any future arc_state_type_t
5804
* changes are handled. This is just a shift to promote
5805
* more compile-time checking.
5806
*/
5807
switch (state->arcs_state) {
5808
case ARC_STATE_ANON:
5809
break;
5810
case ARC_STATE_MRU:
5811
flags |= ARC_CACHED_IN_MRU | ARC_CACHED_IN_L1;
5812
break;
5813
case ARC_STATE_MFU:
5814
flags |= ARC_CACHED_IN_MFU | ARC_CACHED_IN_L1;
5815
break;
5816
case ARC_STATE_UNCACHED:
5817
/* The header is still in L1, probably not for long */
5818
flags |= ARC_CACHED_IN_L1;
5819
break;
5820
default:
5821
break;
5822
}
5823
}
5824
if (HDR_HAS_L2HDR(hdr))
5825
flags |= ARC_CACHED_IN_L2;
5826
5827
mutex_exit(hash_lock);
5828
5829
return (flags);
5830
}
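/*
 * Illustrative sketch (hypothetical helper, not part of the module):
 * interpreting the arc_cached() flags to ask whether a block could be
 * served without reading from the main pool devices.
 */
static boolean_t
example_cached_without_pool_io(spa_t *spa, const blkptr_t *bp)
{
	int flags = arc_cached(spa, bp);

	/* Embedded blocks carry their data in the bp itself. */
	if (flags & ARC_CACHED_EMBEDDED)
		return (B_TRUE);
	/* Otherwise we need a copy in the primary or secondary cache. */
	return ((flags & (ARC_CACHED_IN_L1 | ARC_CACHED_IN_L2)) != 0);
}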
5831
5832
/*
5833
* "Read" the block at the specified DVA (in bp) via the
5834
* cache. If the block is found in the cache, invoke the provided
5835
* callback immediately and return. Note that the `zio' parameter
5836
* in the callback will be NULL in this case, since no IO was
5837
* required. If the block is not in the cache pass the read request
5838
* on to the spa with a substitute callback function, so that the
5839
* requested block will be added to the cache.
5840
*
5841
* If a read request arrives for a block that has a read in-progress,
5842
* either wait for the in-progress read to complete (and return the
5843
* results); or, if this is a read with a "done" func, add a record
5844
* to the read to invoke the "done" func when the read completes,
5845
* and return; or just return.
5846
*
5847
* arc_read_done() will invoke all the requested "done" functions
5848
* for readers of this block.
5849
*/
5850
int
5851
arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp,
5852
arc_read_done_func_t *done, void *private, zio_priority_t priority,
5853
int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb)
5854
{
5855
arc_buf_hdr_t *hdr = NULL;
5856
kmutex_t *hash_lock = NULL;
5857
zio_t *rzio;
5858
uint64_t guid = spa_load_guid(spa);
5859
boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0;
5860
boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) &&
5861
(zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0;
5862
boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) &&
5863
(zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0;
5864
boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp);
5865
boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF;
5866
arc_buf_t *buf = NULL;
5867
int rc = 0;
5868
boolean_t bp_validation = B_FALSE;
5869
5870
ASSERT(!embedded_bp ||
5871
BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA);
5872
ASSERT(!BP_IS_HOLE(bp));
5873
ASSERT(!BP_IS_REDACTED(bp));
5874
5875
/*
5876
* Normally SPL_FSTRANS will already be set since kernel threads which
5877
* expect to call the DMU interfaces will set it when created. System
5878
* calls are similarly handled by setting/cleaning the bit in the
5879
* registered callback (module/os/.../zfs/zpl_*).
5880
*
5881
* External consumers such as Lustre which call the exported DMU
5882
* interfaces may not have set SPL_FSTRANS. To avoid a deadlock
5883
* on the hash_lock always set and clear the bit.
5884
*/
5885
fstrans_cookie_t cookie = spl_fstrans_mark();
5886
top:
5887
if (!embedded_bp) {
5888
/*
5889
* Embedded BP's have no DVA and require no I/O to "read".
5890
* Create an anonymous arc buf to back it.
5891
*/
5892
hdr = buf_hash_find(guid, bp, &hash_lock);
5893
}
5894
5895
/*
5896
* Determine if we have an L1 cache hit or a cache miss. For simplicity
5897
* we maintain encrypted data separately from compressed / uncompressed
5898
* data. If the user is requesting raw encrypted data and we don't have
5899
* that in the header we will read from disk to guarantee that we can
5900
* get it even if the encryption keys aren't loaded.
5901
*/
5902
if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) ||
5903
(hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) {
5904
boolean_t is_data = !HDR_ISTYPE_METADATA(hdr);
5905
5906
/*
5907
* Verify the block pointer contents are reasonable. This
5908
* should always be the case since the blkptr is protected by
5909
* a checksum.
5910
*/
5911
if (zfs_blkptr_verify(spa, bp, BLK_CONFIG_SKIP,
5912
BLK_VERIFY_LOG)) {
5913
mutex_exit(hash_lock);
5914
rc = SET_ERROR(ECKSUM);
5915
goto done;
5916
}
5917
5918
if (HDR_IO_IN_PROGRESS(hdr)) {
5919
if (*arc_flags & ARC_FLAG_CACHED_ONLY) {
5920
mutex_exit(hash_lock);
5921
ARCSTAT_BUMP(arcstat_cached_only_in_progress);
5922
rc = SET_ERROR(ENOENT);
5923
goto done;
5924
}
5925
5926
zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head;
5927
ASSERT3P(head_zio, !=, NULL);
5928
if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) &&
5929
priority == ZIO_PRIORITY_SYNC_READ) {
5930
/*
5931
* This is a sync read that needs to wait for
5932
* an in-flight async read. Request that the
5933
* zio have its priority upgraded.
5934
*/
5935
zio_change_priority(head_zio, priority);
5936
DTRACE_PROBE1(arc__async__upgrade__sync,
5937
arc_buf_hdr_t *, hdr);
5938
ARCSTAT_BUMP(arcstat_async_upgrade_sync);
5939
}
5940
5941
DTRACE_PROBE1(arc__iohit, arc_buf_hdr_t *, hdr);
5942
arc_access(hdr, *arc_flags, B_FALSE);
5943
5944
/*
5945
* If there are multiple threads reading the same block
5946
* and that block is not yet in the ARC, then only one
5947
* thread will do the physical I/O and all other
5948
* threads will wait until that I/O completes.
5949
* Synchronous reads use the acb_wait_cv whereas nowait
5950
* reads register a callback. Both are signalled/called
5951
* in arc_read_done.
5952
*
5953
* Errors of the physical I/O may need to be propagated.
5954
* Synchronous read errors are returned here from
5955
* arc_read_done via acb_wait_error. Nowait reads
5956
* attach the acb_zio_dummy zio to pio and
5957
* arc_read_done propagates the physical I/O's io_error
5958
* to acb_zio_dummy, and thereby to pio.
5959
*/
5960
arc_callback_t *acb = NULL;
5961
if (done || pio || *arc_flags & ARC_FLAG_WAIT) {
5962
acb = kmem_zalloc(sizeof (arc_callback_t),
5963
KM_SLEEP);
5964
acb->acb_done = done;
5965
acb->acb_private = private;
5966
acb->acb_compressed = compressed_read;
5967
acb->acb_encrypted = encrypted_read;
5968
acb->acb_noauth = noauth_read;
5969
acb->acb_nobuf = no_buf;
5970
if (*arc_flags & ARC_FLAG_WAIT) {
5971
acb->acb_wait = B_TRUE;
5972
mutex_init(&acb->acb_wait_lock, NULL,
5973
MUTEX_DEFAULT, NULL);
5974
cv_init(&acb->acb_wait_cv, NULL,
5975
CV_DEFAULT, NULL);
5976
}
5977
acb->acb_zb = *zb;
5978
if (pio != NULL) {
5979
acb->acb_zio_dummy = zio_null(pio,
5980
spa, NULL, NULL, NULL, zio_flags);
5981
}
5982
acb->acb_zio_head = head_zio;
5983
acb->acb_next = hdr->b_l1hdr.b_acb;
5984
hdr->b_l1hdr.b_acb->acb_prev = acb;
5985
hdr->b_l1hdr.b_acb = acb;
5986
}
5987
mutex_exit(hash_lock);
5988
5989
ARCSTAT_BUMP(arcstat_iohits);
5990
ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH),
5991
demand, prefetch, is_data, data, metadata, iohits);
5992
5993
if (*arc_flags & ARC_FLAG_WAIT) {
5994
mutex_enter(&acb->acb_wait_lock);
5995
while (acb->acb_wait) {
5996
cv_wait(&acb->acb_wait_cv,
5997
&acb->acb_wait_lock);
5998
}
5999
rc = acb->acb_wait_error;
6000
mutex_exit(&acb->acb_wait_lock);
6001
mutex_destroy(&acb->acb_wait_lock);
6002
cv_destroy(&acb->acb_wait_cv);
6003
kmem_free(acb, sizeof (arc_callback_t));
6004
}
6005
goto out;
6006
}
6007
6008
ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
6009
hdr->b_l1hdr.b_state == arc_mfu ||
6010
hdr->b_l1hdr.b_state == arc_uncached);
6011
6012
DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
6013
arc_access(hdr, *arc_flags, B_TRUE);
6014
6015
if (done && !no_buf) {
6016
ASSERT(!embedded_bp || !BP_IS_HOLE(bp));
6017
6018
/* Get a buf with the desired data in it. */
6019
rc = arc_buf_alloc_impl(hdr, spa, zb, private,
6020
encrypted_read, compressed_read, noauth_read,
6021
B_TRUE, &buf);
6022
if (rc == ECKSUM) {
6023
/*
6024
* Convert authentication and decryption errors
6025
* to EIO (and generate an ereport if needed)
6026
* before leaving the ARC.
6027
*/
6028
rc = SET_ERROR(EIO);
6029
if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) {
6030
spa_log_error(spa, zb, hdr->b_birth);
6031
(void) zfs_ereport_post(
6032
FM_EREPORT_ZFS_AUTHENTICATION,
6033
spa, NULL, zb, NULL, 0);
6034
}
6035
}
6036
if (rc != 0) {
6037
arc_buf_destroy_impl(buf);
6038
buf = NULL;
6039
(void) remove_reference(hdr, private);
6040
}
6041
6042
/* assert any errors weren't due to unloaded keys */
6043
ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) ||
6044
rc != EACCES);
6045
}
6046
mutex_exit(hash_lock);
6047
ARCSTAT_BUMP(arcstat_hits);
6048
ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH),
6049
demand, prefetch, is_data, data, metadata, hits);
6050
*arc_flags |= ARC_FLAG_CACHED;
6051
goto done;
6052
} else {
6053
uint64_t lsize = BP_GET_LSIZE(bp);
6054
uint64_t psize = BP_GET_PSIZE(bp);
6055
arc_callback_t *acb;
6056
vdev_t *vd = NULL;
6057
uint64_t addr = 0;
6058
boolean_t devw = B_FALSE;
6059
uint64_t size;
6060
abd_t *hdr_abd;
6061
int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0;
6062
arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp);
6063
int config_lock;
6064
int error;
6065
6066
if (*arc_flags & ARC_FLAG_CACHED_ONLY) {
6067
if (hash_lock != NULL)
6068
mutex_exit(hash_lock);
6069
rc = SET_ERROR(ENOENT);
6070
goto done;
6071
}
6072
6073
if (zio_flags & ZIO_FLAG_CONFIG_WRITER) {
6074
config_lock = BLK_CONFIG_HELD;
6075
} else if (hash_lock != NULL) {
6076
/*
6077
* Prevent lock order reversal
6078
*/
6079
config_lock = BLK_CONFIG_NEEDED_TRY;
6080
} else {
6081
config_lock = BLK_CONFIG_NEEDED;
6082
}
6083
6084
/*
6085
* Verify the block pointer contents are reasonable. This
6086
* should always be the case since the blkptr is protected by
6087
* a checksum.
6088
*/
6089
if (!bp_validation && (error = zfs_blkptr_verify(spa, bp,
6090
config_lock, BLK_VERIFY_LOG))) {
6091
if (hash_lock != NULL)
6092
mutex_exit(hash_lock);
6093
if (error == EBUSY && !zfs_blkptr_verify(spa, bp,
6094
BLK_CONFIG_NEEDED, BLK_VERIFY_LOG)) {
6095
bp_validation = B_TRUE;
6096
goto top;
6097
}
6098
rc = SET_ERROR(ECKSUM);
6099
goto done;
6100
}
6101
6102
if (hdr == NULL) {
6103
/*
6104
* This block is not in the cache or it has
6105
* embedded data.
6106
*/
6107
arc_buf_hdr_t *exists = NULL;
6108
hdr = arc_hdr_alloc(guid, psize, lsize,
6109
BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type);
6110
6111
if (!embedded_bp) {
6112
hdr->b_dva = *BP_IDENTITY(bp);
6113
hdr->b_birth = BP_GET_PHYSICAL_BIRTH(bp);
6114
exists = buf_hash_insert(hdr, &hash_lock);
6115
}
6116
if (exists != NULL) {
6117
/* somebody beat us to the hash insert */
6118
mutex_exit(hash_lock);
6119
buf_discard_identity(hdr);
6120
arc_hdr_destroy(hdr);
6121
goto top; /* restart the IO request */
6122
}
6123
} else {
6124
/*
6125
* This block is in the ghost cache or encrypted data
6126
* was requested and we didn't have it. If it was
6127
* L2-only (and thus didn't have an L1 hdr),
6128
* we realloc the header to add an L1 hdr.
6129
*/
6130
if (!HDR_HAS_L1HDR(hdr)) {
6131
hdr = arc_hdr_realloc(hdr, hdr_l2only_cache,
6132
hdr_full_cache);
6133
}
6134
6135
if (GHOST_STATE(hdr->b_l1hdr.b_state)) {
6136
ASSERT0P(hdr->b_l1hdr.b_pabd);
6137
ASSERT(!HDR_HAS_RABD(hdr));
6138
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
6139
ASSERT0(zfs_refcount_count(
6140
&hdr->b_l1hdr.b_refcnt));
6141
ASSERT0P(hdr->b_l1hdr.b_buf);
6142
#ifdef ZFS_DEBUG
6143
ASSERT0P(hdr->b_l1hdr.b_freeze_cksum);
6144
#endif
6145
} else if (HDR_IO_IN_PROGRESS(hdr)) {
6146
/*
6147
* If this header already had an IO in progress
6148
* and we are performing another IO to fetch
6149
* encrypted data we must wait until the first
6150
* IO completes so as not to confuse
6151
* arc_read_done(). This should be very rare
6152
* and so the performance impact shouldn't
6153
* matter.
6154
*/
6155
arc_callback_t *acb = kmem_zalloc(
6156
sizeof (arc_callback_t), KM_SLEEP);
6157
acb->acb_wait = B_TRUE;
6158
mutex_init(&acb->acb_wait_lock, NULL,
6159
MUTEX_DEFAULT, NULL);
6160
cv_init(&acb->acb_wait_cv, NULL, CV_DEFAULT,
6161
NULL);
6162
acb->acb_zio_head =
6163
hdr->b_l1hdr.b_acb->acb_zio_head;
6164
acb->acb_next = hdr->b_l1hdr.b_acb;
6165
hdr->b_l1hdr.b_acb->acb_prev = acb;
6166
hdr->b_l1hdr.b_acb = acb;
6167
mutex_exit(hash_lock);
6168
mutex_enter(&acb->acb_wait_lock);
6169
while (acb->acb_wait) {
6170
cv_wait(&acb->acb_wait_cv,
6171
&acb->acb_wait_lock);
6172
}
6173
mutex_exit(&acb->acb_wait_lock);
6174
mutex_destroy(&acb->acb_wait_lock);
6175
cv_destroy(&acb->acb_wait_cv);
6176
kmem_free(acb, sizeof (arc_callback_t));
6177
goto top;
6178
}
6179
}
6180
if (*arc_flags & ARC_FLAG_UNCACHED) {
6181
arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED);
6182
if (!encrypted_read)
6183
alloc_flags |= ARC_HDR_ALLOC_LINEAR;
6184
}
6185
6186
/*
6187
* Take additional reference for IO_IN_PROGRESS. It stops
6188
* arc_access() from putting this header without any buffers
6189
* and so other references but obviously nonevictable onto
6190
* the evictable list of MRU or MFU state.
6191
*/
6192
add_reference(hdr, hdr);
6193
if (!embedded_bp)
6194
arc_access(hdr, *arc_flags, B_FALSE);
6195
arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
6196
arc_hdr_alloc_abd(hdr, alloc_flags);
6197
if (encrypted_read) {
6198
ASSERT(HDR_HAS_RABD(hdr));
6199
size = HDR_GET_PSIZE(hdr);
6200
hdr_abd = hdr->b_crypt_hdr.b_rabd;
6201
zio_flags |= ZIO_FLAG_RAW;
6202
} else {
6203
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
6204
size = arc_hdr_size(hdr);
6205
hdr_abd = hdr->b_l1hdr.b_pabd;
6206
6207
if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) {
6208
zio_flags |= ZIO_FLAG_RAW_COMPRESS;
6209
}
6210
6211
/*
6212
* For authenticated bp's, we do not ask the ZIO layer
6213
* to authenticate them since this will cause the entire
6214
* IO to fail if the key isn't loaded. Instead, we
6215
* defer authentication until arc_buf_fill(), which will
6216
* verify the data when the key is available.
6217
*/
6218
if (BP_IS_AUTHENTICATED(bp))
6219
zio_flags |= ZIO_FLAG_RAW_ENCRYPT;
6220
}
6221
6222
if (BP_IS_AUTHENTICATED(bp))
6223
arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH);
6224
if (BP_GET_LEVEL(bp) > 0)
6225
arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT);
6226
ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state));
6227
6228
acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP);
6229
acb->acb_done = done;
6230
acb->acb_private = private;
6231
acb->acb_compressed = compressed_read;
6232
acb->acb_encrypted = encrypted_read;
6233
acb->acb_noauth = noauth_read;
6234
acb->acb_nobuf = no_buf;
6235
acb->acb_zb = *zb;
6236
6237
ASSERT0P(hdr->b_l1hdr.b_acb);
6238
hdr->b_l1hdr.b_acb = acb;
6239
6240
if (HDR_HAS_L2HDR(hdr) &&
6241
(vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) {
6242
devw = hdr->b_l2hdr.b_dev->l2ad_writing;
6243
addr = hdr->b_l2hdr.b_daddr;
6244
/*
6245
* Lock out L2ARC device removal.
6246
*/
6247
if (vdev_is_dead(vd) ||
6248
!spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER))
6249
vd = NULL;
6250
}
6251
6252
/*
6253
* We count both async reads and scrub IOs as asynchronous so
6254
* that both can be upgraded in the event of a cache hit while
6255
* the read IO is still in-flight.
6256
*/
6257
if (priority == ZIO_PRIORITY_ASYNC_READ ||
6258
priority == ZIO_PRIORITY_SCRUB)
6259
arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ);
6260
else
6261
arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ);
6262
6263
/*
6264
* At this point, we have a level 1 cache miss or a blkptr
6265
* with embedded data. Try again in L2ARC if possible.
6266
*/
6267
ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize);
6268
6269
/*
6270
* Skip ARC stat bump for block pointers with embedded
6271
* data. The data are read from the blkptr itself via
6272
* decode_embedded_bp_compressed().
6273
*/
6274
if (!embedded_bp) {
6275
DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr,
6276
blkptr_t *, bp, uint64_t, lsize,
6277
zbookmark_phys_t *, zb);
6278
ARCSTAT_BUMP(arcstat_misses);
6279
ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH),
6280
demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data,
6281
metadata, misses);
6282
zfs_racct_read(spa, size, 1,
6283
(*arc_flags & ARC_FLAG_UNCACHED) ?
6284
DMU_UNCACHEDIO : 0);
6285
}
6286
6287
/* Check if the spa even has l2 configured */
6288
const boolean_t spa_has_l2 = l2arc_ndev != 0 &&
6289
spa->spa_l2cache.sav_count > 0;
6290
6291
if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) {
6292
/*
6293
* Read from the L2ARC if the following are true:
6294
* 1. The L2ARC vdev was previously cached.
6295
* 2. This buffer still has L2ARC metadata.
6296
* 3. This buffer isn't currently writing to the L2ARC.
6297
* 4. The L2ARC entry wasn't evicted, which may
6298
* also have invalidated the vdev.
6299
*/
6300
if (HDR_HAS_L2HDR(hdr) &&
6301
!HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr)) {
6302
l2arc_read_callback_t *cb;
6303
abd_t *abd;
6304
uint64_t asize;
6305
6306
DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr);
6307
ARCSTAT_BUMP(arcstat_l2_hits);
6308
hdr->b_l2hdr.b_hits++;
6309
6310
cb = kmem_zalloc(sizeof (l2arc_read_callback_t),
6311
KM_SLEEP);
6312
cb->l2rcb_hdr = hdr;
6313
cb->l2rcb_bp = *bp;
6314
cb->l2rcb_zb = *zb;
6315
cb->l2rcb_flags = zio_flags;
6316
6317
/*
6318
* When Compressed ARC is disabled, but the
6319
* L2ARC block is compressed, arc_hdr_size()
6320
* will have returned LSIZE rather than PSIZE.
6321
*/
6322
if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
6323
!HDR_COMPRESSION_ENABLED(hdr) &&
6324
HDR_GET_PSIZE(hdr) != 0) {
6325
size = HDR_GET_PSIZE(hdr);
6326
}
6327
6328
asize = vdev_psize_to_asize(vd, size);
6329
if (asize != size) {
6330
abd = abd_alloc_for_io(asize,
6331
HDR_ISTYPE_METADATA(hdr));
6332
cb->l2rcb_abd = abd;
6333
} else {
6334
abd = hdr_abd;
6335
}
6336
6337
ASSERT(addr >= VDEV_LABEL_START_SIZE &&
6338
addr + asize <= vd->vdev_psize -
6339
VDEV_LABEL_END_SIZE);
6340
6341
/*
6342
* l2arc read. The SCL_L2ARC lock will be
6343
* released by l2arc_read_done().
6344
* Issue a null zio if the underlying buffer
6345
* was squashed to zero size by compression.
6346
*/
6347
ASSERT3U(arc_hdr_get_compress(hdr), !=,
6348
ZIO_COMPRESS_EMPTY);
6349
rzio = zio_read_phys(pio, vd, addr,
6350
asize, abd,
6351
ZIO_CHECKSUM_OFF,
6352
l2arc_read_done, cb, priority,
6353
zio_flags | ZIO_FLAG_CANFAIL |
6354
ZIO_FLAG_DONT_PROPAGATE |
6355
ZIO_FLAG_DONT_RETRY, B_FALSE);
6356
acb->acb_zio_head = rzio;
6357
6358
if (hash_lock != NULL)
6359
mutex_exit(hash_lock);
6360
6361
DTRACE_PROBE2(l2arc__read, vdev_t *, vd,
6362
zio_t *, rzio);
6363
ARCSTAT_INCR(arcstat_l2_read_bytes,
6364
HDR_GET_PSIZE(hdr));
6365
6366
if (*arc_flags & ARC_FLAG_NOWAIT) {
6367
zio_nowait(rzio);
6368
goto out;
6369
}
6370
6371
ASSERT(*arc_flags & ARC_FLAG_WAIT);
6372
if (zio_wait(rzio) == 0)
6373
goto out;
6374
6375
/* l2arc read error; goto zio_read() */
6376
if (hash_lock != NULL)
6377
mutex_enter(hash_lock);
6378
} else {
6379
DTRACE_PROBE1(l2arc__miss,
6380
arc_buf_hdr_t *, hdr);
6381
ARCSTAT_BUMP(arcstat_l2_misses);
6382
if (HDR_L2_WRITING(hdr))
6383
ARCSTAT_BUMP(arcstat_l2_rw_clash);
6384
spa_config_exit(spa, SCL_L2ARC, vd);
6385
}
6386
} else {
6387
if (vd != NULL)
6388
spa_config_exit(spa, SCL_L2ARC, vd);
6389
6390
/*
6391
* Only a spa with l2 should contribute to l2
6392
* miss stats. (Including the case of having a
6393
* faulted cache device - that's also a miss.)
6394
*/
6395
if (spa_has_l2) {
6396
/*
6397
* Skip ARC stat bump for block pointers with
6398
* embedded data. The data are read from the
6399
* blkptr itself via
6400
* decode_embedded_bp_compressed().
6401
*/
6402
if (!embedded_bp) {
6403
DTRACE_PROBE1(l2arc__miss,
6404
arc_buf_hdr_t *, hdr);
6405
ARCSTAT_BUMP(arcstat_l2_misses);
6406
}
6407
}
6408
}
6409
6410
rzio = zio_read(pio, spa, bp, hdr_abd, size,
6411
arc_read_done, hdr, priority, zio_flags, zb);
6412
acb->acb_zio_head = rzio;
6413
6414
if (hash_lock != NULL)
6415
mutex_exit(hash_lock);
6416
6417
if (*arc_flags & ARC_FLAG_WAIT) {
6418
rc = zio_wait(rzio);
6419
goto out;
6420
}
6421
6422
ASSERT(*arc_flags & ARC_FLAG_NOWAIT);
6423
zio_nowait(rzio);
6424
}
6425
6426
out:
6427
/* embedded bps don't actually go to disk */
6428
if (!embedded_bp)
6429
spa_read_history_add(spa, zb, *arc_flags);
6430
spl_fstrans_unmark(cookie);
6431
return (rc);
6432
6433
done:
6434
if (done)
6435
done(NULL, zb, bp, buf, private);
6436
if (pio && rc != 0) {
6437
zio_t *zio = zio_null(pio, spa, NULL, NULL, NULL, zio_flags);
6438
zio->io_error = rc;
6439
zio_nowait(zio);
6440
}
6441
goto out;
6442
}
6443
6444
arc_prune_t *
6445
arc_add_prune_callback(arc_prune_func_t *func, void *private)
6446
{
6447
arc_prune_t *p;
6448
6449
p = kmem_alloc(sizeof (*p), KM_SLEEP);
6450
p->p_pfunc = func;
6451
p->p_private = private;
6452
list_link_init(&p->p_node);
6453
zfs_refcount_create(&p->p_refcnt);
6454
6455
mutex_enter(&arc_prune_mtx);
6456
zfs_refcount_add(&p->p_refcnt, &arc_prune_list);
6457
list_insert_head(&arc_prune_list, p);
6458
mutex_exit(&arc_prune_mtx);
6459
6460
return (p);
6461
}
6462
6463
void
6464
arc_remove_prune_callback(arc_prune_t *p)
6465
{
6466
boolean_t wait = B_FALSE;
6467
mutex_enter(&arc_prune_mtx);
6468
list_remove(&arc_prune_list, p);
6469
if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
6470
wait = B_TRUE;
6471
mutex_exit(&arc_prune_mtx);
6472
6473
/* wait for arc_prune_task to finish */
6474
if (wait)
6475
taskq_wait_outstanding(arc_prune_taskq, 0);
6476
ASSERT0(zfs_refcount_count(&p->p_refcnt));
6477
zfs_refcount_destroy(&p->p_refcnt);
6478
kmem_free(p, sizeof (*p));
6479
}
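/*
 * Illustrative sketch (hypothetical consumer, not part of the module):
 * registering and later removing a prune callback. The callback signature
 * is assumed to match the invocation in arc_prune_task() below, i.e. the
 * adjust count followed by the registered private pointer.
 */
static void
example_prune_cb(uint64_t nr_to_prune, void *priv)
{
	/* Drop up to nr_to_prune holds on ARC buffers owned by priv. */
	(void) nr_to_prune, (void) priv;
}

static void
example_prune_lifecycle(void *consumer_state)
{
	arc_prune_t *p = arc_add_prune_callback(example_prune_cb,
	    consumer_state);
	/* ... the callback may now fire asynchronously via the taskq ... */
	arc_remove_prune_callback(p);	/* waits for an in-flight callback */
}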
6480
6481
/*
6482
* Helper function for arc_prune_async(); it is responsible for safely
6483
* handling the execution of a registered arc_prune_func_t.
6484
*/
6485
static void
6486
arc_prune_task(void *ptr)
6487
{
6488
arc_prune_t *ap = (arc_prune_t *)ptr;
6489
arc_prune_func_t *func = ap->p_pfunc;
6490
6491
if (func != NULL)
6492
func(ap->p_adjust, ap->p_private);
6493
6494
(void) zfs_refcount_remove(&ap->p_refcnt, func);
6495
}
6496
6497
/*
6498
* Notify registered consumers they must drop holds on a portion of the ARC
6499
* buffers they reference. This provides a mechanism to ensure the ARC can
6500
* honor the metadata limit and reclaim otherwise pinned ARC buffers.
6501
*
6502
* This operation is performed asynchronously so it may be safely called
6503
* in the context of the arc_reclaim_thread(). A reference is taken here
6504
* for each registered arc_prune_t and the arc_prune_task() is responsible
6505
* for releasing it once the registered arc_prune_func_t has completed.
6506
*/
6507
static void
6508
arc_prune_async(uint64_t adjust)
6509
{
6510
arc_prune_t *ap;
6511
6512
mutex_enter(&arc_prune_mtx);
6513
for (ap = list_head(&arc_prune_list); ap != NULL;
6514
ap = list_next(&arc_prune_list, ap)) {
6515
6516
if (zfs_refcount_count(&ap->p_refcnt) >= 2)
6517
continue;
6518
6519
zfs_refcount_add(&ap->p_refcnt, ap->p_pfunc);
6520
ap->p_adjust = adjust;
6521
if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
6522
ap, TQ_SLEEP) == TASKQID_INVALID) {
6523
(void) zfs_refcount_remove(&ap->p_refcnt, ap->p_pfunc);
6524
continue;
6525
}
6526
ARCSTAT_BUMP(arcstat_prune);
6527
}
6528
mutex_exit(&arc_prune_mtx);
6529
}
6530
6531
/*
6532
* Notify the arc that a block was freed, and thus will never be used again.
6533
*/
6534
void
6535
arc_freed(spa_t *spa, const blkptr_t *bp)
6536
{
6537
arc_buf_hdr_t *hdr;
6538
kmutex_t *hash_lock;
6539
uint64_t guid = spa_load_guid(spa);
6540
6541
ASSERT(!BP_IS_EMBEDDED(bp));
6542
6543
hdr = buf_hash_find(guid, bp, &hash_lock);
6544
if (hdr == NULL)
6545
return;
6546
6547
/*
6548
* We might be trying to free a block that is still doing I/O
6549
* (i.e. prefetch) or has some other reference (i.e. a dedup-ed,
6550
* dmu_sync-ed block). A block may also have a reference if it is
6551
* part of a dedup-ed, dmu_synced write. The dmu_sync() function would
6552
* have written the new block to its final resting place on disk but
6553
* without the dedup flag set. This would have left the hdr in the MRU
6554
* state and discoverable. When the txg finally syncs it detects that
6555
* the block was overridden in open context and issues an override I/O.
6556
* Since this is a dedup block, the override I/O will determine if the
6557
* block is already in the DDT. If so, then it will replace the io_bp
6558
* with the bp from the DDT and allow the I/O to finish. When the I/O
6559
* reaches the done callback, dbuf_write_override_done, it will
6560
* check to see if the io_bp and io_bp_override are identical.
6561
* If they are not, then it indicates that the bp was replaced with
6562
* the bp in the DDT and the override bp is freed. This allows
6563
* us to arrive here with a reference on a block that is being
6564
* freed. So if we have an I/O in progress, or a reference to
6565
* this hdr, then we don't destroy the hdr.
6566
*/
6567
if (!HDR_HAS_L1HDR(hdr) ||
6568
zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
6569
arc_change_state(arc_anon, hdr);
6570
arc_hdr_destroy(hdr);
6571
mutex_exit(hash_lock);
6572
} else {
6573
mutex_exit(hash_lock);
6574
}
6575
6576
}
6577
6578
/*
6579
* Release this buffer from the cache, making it an anonymous buffer. This
6580
* must be done after a read and prior to modifying the buffer contents.
6581
* If the buffer has more than one reference, we must make
6582
* a new hdr for the buffer.
6583
*/
6584
void
6585
arc_release(arc_buf_t *buf, const void *tag)
6586
{
6587
arc_buf_hdr_t *hdr = buf->b_hdr;
6588
6589
/*
6590
* It would be nice to assert that if it's DMU metadata (level >
6591
* 0 || it's the dnode file), then it must be syncing context.
6592
* But we don't know that information at this level.
6593
*/
6594
6595
ASSERT(HDR_HAS_L1HDR(hdr));
6596
6597
/*
6598
* We don't grab the hash lock prior to this check, because if
6599
* the buffer's header is in the arc_anon state, it won't be
6600
* linked into the hash table.
6601
*/
6602
if (hdr->b_l1hdr.b_state == arc_anon) {
6603
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
6604
ASSERT(!HDR_IN_HASH_TABLE(hdr));
6605
ASSERT(!HDR_HAS_L2HDR(hdr));
6606
6607
ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf);
6608
ASSERT(ARC_BUF_LAST(buf));
6609
ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
6610
ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
6611
6612
hdr->b_l1hdr.b_arc_access = 0;
6613
6614
/*
6615
* If the buf is being overridden then it may already
6616
* have a hdr that is not empty.
6617
*/
6618
buf_discard_identity(hdr);
6619
arc_buf_thaw(buf);
6620
6621
return;
6622
}
6623
6624
kmutex_t *hash_lock = HDR_LOCK(hdr);
6625
mutex_enter(hash_lock);
6626
6627
/*
6628
* This assignment is only valid as long as the hash_lock is
6629
* held, we must be careful not to reference state or the
6630
* b_state field after dropping the lock.
6631
*/
6632
arc_state_t *state = hdr->b_l1hdr.b_state;
6633
ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
6634
ASSERT3P(state, !=, arc_anon);
6635
ASSERT3P(state, !=, arc_l2c_only);
6636
6637
/* this buffer is not on any list */
6638
ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);
6639
6640
/*
6641
* Do we have more than one buf?
6642
*/
6643
if (hdr->b_l1hdr.b_buf != buf || !ARC_BUF_LAST(buf)) {
6644
arc_buf_hdr_t *nhdr;
6645
uint64_t spa = hdr->b_spa;
6646
uint64_t psize = HDR_GET_PSIZE(hdr);
6647
uint64_t lsize = HDR_GET_LSIZE(hdr);
6648
boolean_t protected = HDR_PROTECTED(hdr);
6649
enum zio_compress compress = arc_hdr_get_compress(hdr);
6650
arc_buf_contents_t type = arc_buf_type(hdr);
6651
6652
if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) {
6653
ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
6654
ASSERT(ARC_BUF_LAST(buf));
6655
}
6656
6657
/*
6658
* Pull the buffer off of this hdr and find the last buffer
6659
* in the hdr's buffer list.
6660
*/
6661
VERIFY3S(remove_reference(hdr, tag), >, 0);
6662
arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
6663
ASSERT3P(lastbuf, !=, NULL);
6664
6665
/*
6666
* If the current arc_buf_t and the hdr are sharing their data
6667
* buffer, then we must stop sharing that block.
6668
*/
6669
if (ARC_BUF_SHARED(buf)) {
6670
ASSERT(!arc_buf_is_shared(lastbuf));
6671
6672
/*
6673
* First, sever the block sharing relationship between
6674
* buf and the arc_buf_hdr_t.
6675
*/
6676
arc_unshare_buf(hdr, buf);
6677
6678
/*
6679
* Now we need to recreate the hdr's b_pabd. Since we
6680
* have lastbuf handy, we try to share with it, but if
6681
* we can't then we allocate a new b_pabd and copy the
6682
* data from buf into it.
6683
*/
6684
if (arc_can_share(hdr, lastbuf)) {
6685
arc_share_buf(hdr, lastbuf);
6686
} else {
6687
arc_hdr_alloc_abd(hdr, 0);
6688
abd_copy_from_buf(hdr->b_l1hdr.b_pabd,
6689
buf->b_data, psize);
6690
}
6691
} else if (HDR_SHARED_DATA(hdr)) {
6692
/*
6693
* Uncompressed shared buffers are always at the end
6694
* of the list. Compressed buffers don't have the
6695
* same requirements. This makes it hard to
6696
* simply assert that the lastbuf is shared so
6697
* we rely on the hdr's compression flags to determine
6698
* if we have a compressed, shared buffer.
6699
*/
6700
ASSERT(arc_buf_is_shared(lastbuf) ||
6701
arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
6702
ASSERT(!arc_buf_is_shared(buf));
6703
}
6704
6705
ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
6706
6707
(void) zfs_refcount_remove_many(&state->arcs_size[type],
6708
arc_buf_size(buf), buf);
6709
6710
arc_cksum_verify(buf);
6711
arc_buf_unwatch(buf);
6712
6713
/* if this is the last uncompressed buf free the checksum */
6714
if (!arc_hdr_has_uncompressed_buf(hdr))
6715
arc_cksum_free(hdr);
6716
6717
mutex_exit(hash_lock);
6718
6719
nhdr = arc_hdr_alloc(spa, psize, lsize, protected,
6720
compress, hdr->b_complevel, type);
6721
ASSERT0P(nhdr->b_l1hdr.b_buf);
6722
ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt));
6723
VERIFY3U(nhdr->b_type, ==, type);
6724
ASSERT(!HDR_SHARED_DATA(nhdr));
6725
6726
nhdr->b_l1hdr.b_buf = buf;
6727
(void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
6728
buf->b_hdr = nhdr;
6729
6730
(void) zfs_refcount_add_many(&arc_anon->arcs_size[type],
6731
arc_buf_size(buf), buf);
6732
} else {
6733
ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
6734
/* protected by hash lock, or hdr is on arc_anon */
6735
ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
6736
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
6737
6738
if (HDR_HAS_L2HDR(hdr)) {
6739
mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);
6740
/* Recheck to prevent race with l2arc_evict(). */
6741
if (HDR_HAS_L2HDR(hdr))
6742
arc_hdr_l2hdr_destroy(hdr);
6743
mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx);
6744
}
6745
6746
hdr->b_l1hdr.b_mru_hits = 0;
6747
hdr->b_l1hdr.b_mru_ghost_hits = 0;
6748
hdr->b_l1hdr.b_mfu_hits = 0;
6749
hdr->b_l1hdr.b_mfu_ghost_hits = 0;
6750
arc_change_state(arc_anon, hdr);
6751
hdr->b_l1hdr.b_arc_access = 0;
6752
6753
mutex_exit(hash_lock);
6754
buf_discard_identity(hdr);
6755
arc_buf_thaw(buf);
6756
}
6757
}
6758
6759
int
6760
arc_released(arc_buf_t *buf)
6761
{
6762
return (buf->b_data != NULL &&
6763
buf->b_hdr->b_l1hdr.b_state == arc_anon);
6764
}
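/*
 * Illustrative (hypothetical) caller pattern, not taken from this file:
 * before modifying a cached buffer in place, a consumer releases it from
 * the ARC via arc_release() above and can then confirm the buffer has
 * become anonymous:
 *
 *	if (!arc_released(buf))
 *		arc_release(buf, tag);
 *	ASSERT(arc_released(buf));
 */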
6765
6766
#ifdef ZFS_DEBUG
6767
int
6768
arc_referenced(arc_buf_t *buf)
6769
{
6770
return (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
6771
}
6772
#endif
6773
6774
static void
6775
arc_write_ready(zio_t *zio)
6776
{
6777
arc_write_callback_t *callback = zio->io_private;
6778
arc_buf_t *buf = callback->awcb_buf;
6779
arc_buf_hdr_t *hdr = buf->b_hdr;
6780
blkptr_t *bp = zio->io_bp;
6781
uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp);
6782
fstrans_cookie_t cookie = spl_fstrans_mark();
6783
6784
ASSERT(HDR_HAS_L1HDR(hdr));
6785
ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
6786
ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL);
6787
6788
/*
6789
* If we're reexecuting this zio because the pool suspended, then
6790
* cleanup any state that was previously set the first time the
6791
* callback was invoked.
6792
*/
6793
if (zio->io_flags & ZIO_FLAG_REEXECUTED) {
6794
arc_cksum_free(hdr);
6795
arc_buf_unwatch(buf);
6796
if (hdr->b_l1hdr.b_pabd != NULL) {
6797
if (ARC_BUF_SHARED(buf)) {
6798
arc_unshare_buf(hdr, buf);
6799
} else {
6800
ASSERT(!arc_buf_is_shared(buf));
6801
arc_hdr_free_abd(hdr, B_FALSE);
6802
}
6803
}
6804
6805
if (HDR_HAS_RABD(hdr))
6806
arc_hdr_free_abd(hdr, B_TRUE);
6807
}
6808
ASSERT0P(hdr->b_l1hdr.b_pabd);
6809
ASSERT(!HDR_HAS_RABD(hdr));
6810
ASSERT(!HDR_SHARED_DATA(hdr));
6811
ASSERT(!arc_buf_is_shared(buf));
6812
6813
callback->awcb_ready(zio, buf, callback->awcb_private);
6814
6815
if (HDR_IO_IN_PROGRESS(hdr)) {
6816
ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED);
6817
} else {
6818
arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
6819
add_reference(hdr, hdr); /* For IO_IN_PROGRESS. */
6820
}
6821
6822
if (BP_IS_PROTECTED(bp)) {
6823
/* ZIL blocks are written through zio_rewrite */
6824
ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG);
6825
6826
if (BP_SHOULD_BYTESWAP(bp)) {
6827
if (BP_GET_LEVEL(bp) > 0) {
6828
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
6829
} else {
6830
hdr->b_l1hdr.b_byteswap =
6831
DMU_OT_BYTESWAP(BP_GET_TYPE(bp));
6832
}
6833
} else {
6834
hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
6835
}
6836
6837
arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);
6838
hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
6839
hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
6840
zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
6841
hdr->b_crypt_hdr.b_iv);
6842
zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac);
6843
} else {
6844
arc_hdr_clear_flags(hdr, ARC_FLAG_PROTECTED);
6845
}
6846
6847
/*
6848
* If this block was written for raw encryption but the zio layer
6849
* ended up only authenticating it, adjust the buffer flags now.
6850
*/
6851
if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) {
6852
arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH);
6853
buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
6854
if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF)
6855
buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
6856
} else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) {
6857
buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
6858
buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
6859
}
6860
6861
/* this must be done after the buffer flags are adjusted */
6862
arc_cksum_compute(buf);
6863
6864
enum zio_compress compress;
6865
if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
6866
compress = ZIO_COMPRESS_OFF;
6867
} else {
6868
ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
6869
compress = BP_GET_COMPRESS(bp);
6870
}
6871
HDR_SET_PSIZE(hdr, psize);
6872
arc_hdr_set_compress(hdr, compress);
6873
hdr->b_complevel = zio->io_prop.zp_complevel;
6874
6875
if (zio->io_error != 0 || psize == 0)
6876
goto out;
6877
6878
/*
6879
* Fill the hdr with data. If the buffer is encrypted we have no choice
6880
* but to copy the data into b_rabd. If the hdr is compressed, the data
6881
* we want is available from the zio, otherwise we can take it from
6882
* the buf.
6883
*
6884
* We might be able to share the buf's data with the hdr here. However,
6885
* doing so would cause the ARC to be full of linear ABDs if we write a
6886
* lot of shareable data. As a compromise, we check whether scattered
6887
* ABDs are allowed, and assume that if they are then the user wants
6888
* the ARC to be primarily filled with them regardless of the data being
6889
* written. Therefore, if they're allowed then we allocate one and copy
6890
* the data into it; otherwise, we share the data directly if we can.
6891
*/
6892
if (ARC_BUF_ENCRYPTED(buf)) {
6893
ASSERT3U(psize, >, 0);
6894
ASSERT(ARC_BUF_COMPRESSED(buf));
6895
arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA |
6896
ARC_HDR_USE_RESERVE);
6897
abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
6898
} else if (!(HDR_UNCACHED(hdr) ||
6899
abd_size_alloc_linear(arc_buf_size(buf))) ||
6900
!arc_can_share(hdr, buf)) {
6901
/*
6902
* Ideally, we would always copy the io_abd into b_pabd, but the
6903
* user may have disabled compressed ARC, thus we must check the
6904
* hdr's compression setting rather than the io_bp's.
6905
*/
6906
if (BP_IS_ENCRYPTED(bp)) {
6907
ASSERT3U(psize, >, 0);
6908
arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA |
6909
ARC_HDR_USE_RESERVE);
6910
abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
6911
} else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
6912
!ARC_BUF_COMPRESSED(buf)) {
6913
ASSERT3U(psize, >, 0);
6914
arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE);
6915
abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize);
6916
} else {
6917
ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr));
6918
arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE);
6919
abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data,
6920
arc_buf_size(buf));
6921
}
6922
} else {
6923
ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd));
6924
ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf));
6925
ASSERT3P(hdr->b_l1hdr.b_buf, ==, buf);
6926
ASSERT(ARC_BUF_LAST(buf));
6927
6928
arc_share_buf(hdr, buf);
6929
}
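/*
 * An informal summary of the data-fill choices made above (sketch only;
 * the code above is authoritative):
 *
 *	encrypted buf                          -> copy io_abd into b_rabd
 *	scatter ABDs preferred or sharing not
 *	possible (e.g. compression mismatch)   -> allocate b_pabd and copy
 *	otherwise                              -> share buf->b_data with hdr
 */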
6930
6931
out:
6932
arc_hdr_verify(hdr, bp);
6933
spl_fstrans_unmark(cookie);
6934
}
6935
6936
static void
6937
arc_write_children_ready(zio_t *zio)
6938
{
6939
arc_write_callback_t *callback = zio->io_private;
6940
arc_buf_t *buf = callback->awcb_buf;
6941
6942
callback->awcb_children_ready(zio, buf, callback->awcb_private);
6943
}
6944
6945
static void
6946
arc_write_done(zio_t *zio)
6947
{
6948
arc_write_callback_t *callback = zio->io_private;
6949
arc_buf_t *buf = callback->awcb_buf;
6950
arc_buf_hdr_t *hdr = buf->b_hdr;
6951
6952
ASSERT0P(hdr->b_l1hdr.b_acb);
6953
6954
if (zio->io_error == 0) {
6955
arc_hdr_verify(hdr, zio->io_bp);
6956
6957
if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
6958
buf_discard_identity(hdr);
6959
} else {
6960
hdr->b_dva = *BP_IDENTITY(zio->io_bp);
6961
hdr->b_birth = BP_GET_PHYSICAL_BIRTH(zio->io_bp);
6962
}
6963
} else {
6964
ASSERT(HDR_EMPTY(hdr));
6965
}
6966
6967
/*
6968
* If the block to be written was all-zero or compressed enough to be
6969
* embedded in the BP, no write was performed, so there will be no
6970
* dva/birth/checksum. The buffer must therefore remain anonymous
6971
* (and uncached).
6972
*/
6973
if (!HDR_EMPTY(hdr)) {
6974
arc_buf_hdr_t *exists;
6975
kmutex_t *hash_lock;
6976
6977
ASSERT0(zio->io_error);
6978
6979
arc_cksum_verify(buf);
6980
6981
exists = buf_hash_insert(hdr, &hash_lock);
6982
if (exists != NULL) {
6983
/*
6984
* This can only happen if we overwrite for
6985
* sync-to-convergence, because we remove
6986
* buffers from the hash table when we arc_free().
6987
*/
6988
if (zio->io_flags & ZIO_FLAG_IO_REWRITE) {
6989
if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
6990
panic("bad overwrite, hdr=%p exists=%p",
6991
(void *)hdr, (void *)exists);
6992
ASSERT(zfs_refcount_is_zero(
6993
&exists->b_l1hdr.b_refcnt));
6994
arc_change_state(arc_anon, exists);
6995
arc_hdr_destroy(exists);
6996
mutex_exit(hash_lock);
6997
exists = buf_hash_insert(hdr, &hash_lock);
6998
ASSERT0P(exists);
6999
} else if (zio->io_flags & ZIO_FLAG_NOPWRITE) {
7000
/* nopwrite */
7001
ASSERT(zio->io_prop.zp_nopwrite);
7002
if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
7003
panic("bad nopwrite, hdr=%p exists=%p",
7004
(void *)hdr, (void *)exists);
7005
} else {
7006
/* Dedup */
7007
ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL);
7008
ASSERT(ARC_BUF_LAST(hdr->b_l1hdr.b_buf));
7009
ASSERT(hdr->b_l1hdr.b_state == arc_anon);
7010
ASSERT(BP_GET_DEDUP(zio->io_bp));
7011
ASSERT0(BP_GET_LEVEL(zio->io_bp));
7012
}
7013
}
7014
arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
7015
VERIFY3S(remove_reference(hdr, hdr), >, 0);
7016
/* if it's not anon, we are doing a scrub */
7017
if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon)
7018
arc_access(hdr, 0, B_FALSE);
7019
mutex_exit(hash_lock);
7020
} else {
7021
arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
7022
VERIFY3S(remove_reference(hdr, hdr), >, 0);
7023
}
7024
7025
callback->awcb_done(zio, buf, callback->awcb_private);
7026
7027
abd_free(zio->io_abd);
7028
kmem_free(callback, sizeof (arc_write_callback_t));
7029
}
7030
7031
zio_t *
7032
arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
7033
blkptr_t *bp, arc_buf_t *buf, boolean_t uncached, boolean_t l2arc,
7034
const zio_prop_t *zp, arc_write_done_func_t *ready,
7035
arc_write_done_func_t *children_ready, arc_write_done_func_t *done,
7036
void *private, zio_priority_t priority, int zio_flags,
7037
const zbookmark_phys_t *zb)
7038
{
7039
arc_buf_hdr_t *hdr = buf->b_hdr;
7040
arc_write_callback_t *callback;
7041
zio_t *zio;
7042
zio_prop_t localprop = *zp;
7043
7044
ASSERT3P(ready, !=, NULL);
7045
ASSERT3P(done, !=, NULL);
7046
ASSERT(!HDR_IO_ERROR(hdr));
7047
ASSERT(!HDR_IO_IN_PROGRESS(hdr));
7048
ASSERT0P(hdr->b_l1hdr.b_acb);
7049
ASSERT3P(hdr->b_l1hdr.b_buf, !=, NULL);
7050
if (uncached)
7051
arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED);
7052
else if (l2arc)
7053
arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);
7054
7055
if (ARC_BUF_ENCRYPTED(buf)) {
7056
ASSERT(ARC_BUF_COMPRESSED(buf));
7057
localprop.zp_encrypt = B_TRUE;
7058
localprop.zp_compress = HDR_GET_COMPRESS(hdr);
7059
localprop.zp_complevel = hdr->b_complevel;
7060
localprop.zp_byteorder =
7061
(hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
7062
ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
7063
memcpy(localprop.zp_salt, hdr->b_crypt_hdr.b_salt,
7064
ZIO_DATA_SALT_LEN);
7065
memcpy(localprop.zp_iv, hdr->b_crypt_hdr.b_iv,
7066
ZIO_DATA_IV_LEN);
7067
memcpy(localprop.zp_mac, hdr->b_crypt_hdr.b_mac,
7068
ZIO_DATA_MAC_LEN);
7069
if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) {
7070
localprop.zp_nopwrite = B_FALSE;
7071
localprop.zp_copies =
7072
MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1);
7073
localprop.zp_gang_copies =
7074
MIN(localprop.zp_gang_copies, SPA_DVAS_PER_BP - 1);
7075
}
7076
zio_flags |= ZIO_FLAG_RAW;
7077
} else if (ARC_BUF_COMPRESSED(buf)) {
7078
ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf));
7079
localprop.zp_compress = HDR_GET_COMPRESS(hdr);
7080
localprop.zp_complevel = hdr->b_complevel;
7081
zio_flags |= ZIO_FLAG_RAW_COMPRESS;
7082
}
7083
callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
7084
callback->awcb_ready = ready;
7085
callback->awcb_children_ready = children_ready;
7086
callback->awcb_done = done;
7087
callback->awcb_private = private;
7088
callback->awcb_buf = buf;
7089
7090
/*
7091
* The hdr's b_pabd is now stale, free it now. A new data block
7092
* will be allocated when the zio pipeline calls arc_write_ready().
7093
*/
7094
if (hdr->b_l1hdr.b_pabd != NULL) {
7095
/*
7096
* If the buf is currently sharing the data block with
7097
* the hdr, then we need to break that relationship here.
7098
* The hdr will remain with a NULL data pointer and the
7099
* buf will take sole ownership of the block.
7100
*/
7101
if (ARC_BUF_SHARED(buf)) {
7102
arc_unshare_buf(hdr, buf);
7103
} else {
7104
ASSERT(!arc_buf_is_shared(buf));
7105
arc_hdr_free_abd(hdr, B_FALSE);
7106
}
7107
VERIFY3P(buf->b_data, !=, NULL);
7108
}
7109
7110
if (HDR_HAS_RABD(hdr))
7111
arc_hdr_free_abd(hdr, B_TRUE);
7112
7113
if (!(zio_flags & ZIO_FLAG_RAW))
7114
arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF);
7115
7116
ASSERT(!arc_buf_is_shared(buf));
7117
ASSERT0P(hdr->b_l1hdr.b_pabd);
7118
7119
zio = zio_write(pio, spa, txg, bp,
7120
abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)),
7121
HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready,
7122
(children_ready != NULL) ? arc_write_children_ready : NULL,
7123
arc_write_done, callback, priority, zio_flags, zb);
7124
7125
return (zio);
7126
}
7127
7128
void
7129
arc_tempreserve_clear(uint64_t reserve)
7130
{
7131
atomic_add_64(&arc_tempreserve, -reserve);
7132
ASSERT((int64_t)arc_tempreserve >= 0);
7133
}
7134
7135
int
7136
arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg)
7137
{
7138
int error;
7139
uint64_t anon_size;
7140
7141
if (!arc_no_grow &&
7142
reserve > arc_c/4 &&
7143
reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
7144
arc_c = MIN(arc_c_max, reserve * 4);
7145
7146
/*
7147
* Throttle when the calculated memory footprint for the TXG
7148
* exceeds the target ARC size.
7149
*/
7150
if (reserve > arc_c) {
7151
DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
7152
return (SET_ERROR(ERESTART));
7153
}
7154
7155
/*
7156
* Don't count loaned bufs as in flight dirty data to prevent long
7157
* network delays from blocking transactions that are ready to be
7158
* assigned to a txg.
7159
*/
7160
7161
/* assert that it has not wrapped around */
7162
ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
7163
7164
anon_size = MAX((int64_t)
7165
(zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]) +
7166
zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]) -
7167
arc_loaned_bytes), 0);
7168
7169
/*
7170
* Writes will, almost always, require additional memory allocations
7171
* in order to compress/encrypt/etc the data. We therefore need to
7172
* make sure that there is sufficient available memory for this.
7173
*/
7174
error = arc_memory_throttle(spa, reserve, txg);
7175
if (error != 0)
7176
return (error);
7177
7178
/*
7179
* Throttle writes when the amount of dirty data in the cache
7180
* gets too large. We try to keep the cache less than half full
7181
* of dirty blocks so that our sync times don't grow too large.
7182
*
7183
* In the case of one pool being built on another pool, we want
7184
* to make sure we don't end up throttling the lower (backing)
7185
* pool when the upper pool is the majority contributor to dirty
7186
* data. To ensure we make forward progress during throttling, we
7187
* also check the current pool's net dirty data and only throttle
7188
* if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty
7189
* data in the cache.
7190
*
7191
* Note: if two requests come in concurrently, we might let them
7192
* both succeed, when one of them should fail. Not a huge deal.
7193
*/
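/*
 * Worked example, assuming the default tunables defined earlier in this
 * file (zfs_arc_dirty_limit_percent = 50, zfs_arc_anon_limit_percent = 25,
 * zfs_arc_pool_dirty_percent = 20): with a warm ARC and arc_c = 8 GiB,
 * a reservation is throttled only when total dirty data exceeds 4 GiB,
 * anonymous buffers alone exceed 2 GiB, and this pool accounts for more
 * than 20% of that anonymous data.
 */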
7194
uint64_t total_dirty = reserve + arc_tempreserve + anon_size;
7195
uint64_t spa_dirty_anon = spa_dirty_data(spa);
7196
uint64_t rarc_c = arc_warm ? arc_c : arc_c_max;
7197
if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 &&
7198
anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 &&
7199
spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) {
7200
#ifdef ZFS_DEBUG
7201
uint64_t meta_esize = zfs_refcount_count(
7202
&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
7203
uint64_t data_esize =
7204
zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
7205
dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
7206
"anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n",
7207
(u_longlong_t)arc_tempreserve >> 10,
7208
(u_longlong_t)meta_esize >> 10,
7209
(u_longlong_t)data_esize >> 10,
7210
(u_longlong_t)reserve >> 10,
7211
(u_longlong_t)rarc_c >> 10);
7212
#endif
7213
DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
7214
return (SET_ERROR(ERESTART));
7215
}
7216
atomic_add_64(&arc_tempreserve, reserve);
7217
return (0);
7218
}
7219
7220
static void
7221
arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
7222
kstat_named_t *data, kstat_named_t *metadata,
7223
kstat_named_t *evict_data, kstat_named_t *evict_metadata)
7224
{
7225
data->value.ui64 =
7226
zfs_refcount_count(&state->arcs_size[ARC_BUFC_DATA]);
7227
metadata->value.ui64 =
7228
zfs_refcount_count(&state->arcs_size[ARC_BUFC_METADATA]);
7229
size->value.ui64 = data->value.ui64 + metadata->value.ui64;
7230
evict_data->value.ui64 =
7231
zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
7232
evict_metadata->value.ui64 =
7233
zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
7234
}
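/*
 * These per-state sizes are exported through the "arcstats" kstat created
 * in arc_init(); on Linux they typically appear as the anon_size, mru_size,
 * mfu_size, etc. entries of /proc/spl/kstat/zfs/arcstats, with equivalent
 * kstat/sysctl interfaces on illumos and FreeBSD.
 */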
7235
7236
static int
7237
arc_kstat_update(kstat_t *ksp, int rw)
7238
{
7239
arc_stats_t *as = ksp->ks_data;
7240
7241
if (rw == KSTAT_WRITE)
7242
return (SET_ERROR(EACCES));
7243
7244
as->arcstat_hits.value.ui64 =
7245
wmsum_value(&arc_sums.arcstat_hits);
7246
as->arcstat_iohits.value.ui64 =
7247
wmsum_value(&arc_sums.arcstat_iohits);
7248
as->arcstat_misses.value.ui64 =
7249
wmsum_value(&arc_sums.arcstat_misses);
7250
as->arcstat_demand_data_hits.value.ui64 =
7251
wmsum_value(&arc_sums.arcstat_demand_data_hits);
7252
as->arcstat_demand_data_iohits.value.ui64 =
7253
wmsum_value(&arc_sums.arcstat_demand_data_iohits);
7254
as->arcstat_demand_data_misses.value.ui64 =
7255
wmsum_value(&arc_sums.arcstat_demand_data_misses);
7256
as->arcstat_demand_metadata_hits.value.ui64 =
7257
wmsum_value(&arc_sums.arcstat_demand_metadata_hits);
7258
as->arcstat_demand_metadata_iohits.value.ui64 =
7259
wmsum_value(&arc_sums.arcstat_demand_metadata_iohits);
7260
as->arcstat_demand_metadata_misses.value.ui64 =
7261
wmsum_value(&arc_sums.arcstat_demand_metadata_misses);
7262
as->arcstat_prefetch_data_hits.value.ui64 =
7263
wmsum_value(&arc_sums.arcstat_prefetch_data_hits);
7264
as->arcstat_prefetch_data_iohits.value.ui64 =
7265
wmsum_value(&arc_sums.arcstat_prefetch_data_iohits);
7266
as->arcstat_prefetch_data_misses.value.ui64 =
7267
wmsum_value(&arc_sums.arcstat_prefetch_data_misses);
7268
as->arcstat_prefetch_metadata_hits.value.ui64 =
7269
wmsum_value(&arc_sums.arcstat_prefetch_metadata_hits);
7270
as->arcstat_prefetch_metadata_iohits.value.ui64 =
7271
wmsum_value(&arc_sums.arcstat_prefetch_metadata_iohits);
7272
as->arcstat_prefetch_metadata_misses.value.ui64 =
7273
wmsum_value(&arc_sums.arcstat_prefetch_metadata_misses);
7274
as->arcstat_mru_hits.value.ui64 =
7275
wmsum_value(&arc_sums.arcstat_mru_hits);
7276
as->arcstat_mru_ghost_hits.value.ui64 =
7277
wmsum_value(&arc_sums.arcstat_mru_ghost_hits);
7278
as->arcstat_mfu_hits.value.ui64 =
7279
wmsum_value(&arc_sums.arcstat_mfu_hits);
7280
as->arcstat_mfu_ghost_hits.value.ui64 =
7281
wmsum_value(&arc_sums.arcstat_mfu_ghost_hits);
7282
as->arcstat_uncached_hits.value.ui64 =
7283
wmsum_value(&arc_sums.arcstat_uncached_hits);
7284
as->arcstat_deleted.value.ui64 =
7285
wmsum_value(&arc_sums.arcstat_deleted);
7286
as->arcstat_mutex_miss.value.ui64 =
7287
wmsum_value(&arc_sums.arcstat_mutex_miss);
7288
as->arcstat_access_skip.value.ui64 =
7289
wmsum_value(&arc_sums.arcstat_access_skip);
7290
as->arcstat_evict_skip.value.ui64 =
7291
wmsum_value(&arc_sums.arcstat_evict_skip);
7292
as->arcstat_evict_not_enough.value.ui64 =
7293
wmsum_value(&arc_sums.arcstat_evict_not_enough);
7294
as->arcstat_evict_l2_cached.value.ui64 =
7295
wmsum_value(&arc_sums.arcstat_evict_l2_cached);
7296
as->arcstat_evict_l2_eligible.value.ui64 =
7297
wmsum_value(&arc_sums.arcstat_evict_l2_eligible);
7298
as->arcstat_evict_l2_eligible_mfu.value.ui64 =
7299
wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mfu);
7300
as->arcstat_evict_l2_eligible_mru.value.ui64 =
7301
wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mru);
7302
as->arcstat_evict_l2_ineligible.value.ui64 =
7303
wmsum_value(&arc_sums.arcstat_evict_l2_ineligible);
7304
as->arcstat_evict_l2_skip.value.ui64 =
7305
wmsum_value(&arc_sums.arcstat_evict_l2_skip);
7306
as->arcstat_hash_elements.value.ui64 =
7307
as->arcstat_hash_elements_max.value.ui64 =
7308
wmsum_value(&arc_sums.arcstat_hash_elements);
7309
as->arcstat_hash_collisions.value.ui64 =
7310
wmsum_value(&arc_sums.arcstat_hash_collisions);
7311
as->arcstat_hash_chains.value.ui64 =
7312
wmsum_value(&arc_sums.arcstat_hash_chains);
7313
as->arcstat_size.value.ui64 =
7314
aggsum_value(&arc_sums.arcstat_size);
7315
as->arcstat_compressed_size.value.ui64 =
7316
wmsum_value(&arc_sums.arcstat_compressed_size);
7317
as->arcstat_uncompressed_size.value.ui64 =
7318
wmsum_value(&arc_sums.arcstat_uncompressed_size);
7319
as->arcstat_overhead_size.value.ui64 =
7320
wmsum_value(&arc_sums.arcstat_overhead_size);
7321
as->arcstat_hdr_size.value.ui64 =
7322
wmsum_value(&arc_sums.arcstat_hdr_size);
7323
as->arcstat_data_size.value.ui64 =
7324
wmsum_value(&arc_sums.arcstat_data_size);
7325
as->arcstat_metadata_size.value.ui64 =
7326
wmsum_value(&arc_sums.arcstat_metadata_size);
7327
as->arcstat_dbuf_size.value.ui64 =
7328
wmsum_value(&arc_sums.arcstat_dbuf_size);
7329
#if defined(COMPAT_FREEBSD11)
7330
as->arcstat_other_size.value.ui64 =
7331
wmsum_value(&arc_sums.arcstat_bonus_size) +
7332
aggsum_value(&arc_sums.arcstat_dnode_size) +
7333
wmsum_value(&arc_sums.arcstat_dbuf_size);
7334
#endif
7335
7336
arc_kstat_update_state(arc_anon,
7337
&as->arcstat_anon_size,
7338
&as->arcstat_anon_data,
7339
&as->arcstat_anon_metadata,
7340
&as->arcstat_anon_evictable_data,
7341
&as->arcstat_anon_evictable_metadata);
7342
arc_kstat_update_state(arc_mru,
7343
&as->arcstat_mru_size,
7344
&as->arcstat_mru_data,
7345
&as->arcstat_mru_metadata,
7346
&as->arcstat_mru_evictable_data,
7347
&as->arcstat_mru_evictable_metadata);
7348
arc_kstat_update_state(arc_mru_ghost,
7349
&as->arcstat_mru_ghost_size,
7350
&as->arcstat_mru_ghost_data,
7351
&as->arcstat_mru_ghost_metadata,
7352
&as->arcstat_mru_ghost_evictable_data,
7353
&as->arcstat_mru_ghost_evictable_metadata);
7354
arc_kstat_update_state(arc_mfu,
7355
&as->arcstat_mfu_size,
7356
&as->arcstat_mfu_data,
7357
&as->arcstat_mfu_metadata,
7358
&as->arcstat_mfu_evictable_data,
7359
&as->arcstat_mfu_evictable_metadata);
7360
arc_kstat_update_state(arc_mfu_ghost,
7361
&as->arcstat_mfu_ghost_size,
7362
&as->arcstat_mfu_ghost_data,
7363
&as->arcstat_mfu_ghost_metadata,
7364
&as->arcstat_mfu_ghost_evictable_data,
7365
&as->arcstat_mfu_ghost_evictable_metadata);
7366
arc_kstat_update_state(arc_uncached,
7367
&as->arcstat_uncached_size,
7368
&as->arcstat_uncached_data,
7369
&as->arcstat_uncached_metadata,
7370
&as->arcstat_uncached_evictable_data,
7371
&as->arcstat_uncached_evictable_metadata);
7372
7373
as->arcstat_dnode_size.value.ui64 =
7374
aggsum_value(&arc_sums.arcstat_dnode_size);
7375
as->arcstat_bonus_size.value.ui64 =
7376
wmsum_value(&arc_sums.arcstat_bonus_size);
7377
as->arcstat_l2_hits.value.ui64 =
7378
wmsum_value(&arc_sums.arcstat_l2_hits);
7379
as->arcstat_l2_misses.value.ui64 =
7380
wmsum_value(&arc_sums.arcstat_l2_misses);
7381
as->arcstat_l2_prefetch_asize.value.ui64 =
7382
wmsum_value(&arc_sums.arcstat_l2_prefetch_asize);
7383
as->arcstat_l2_mru_asize.value.ui64 =
7384
wmsum_value(&arc_sums.arcstat_l2_mru_asize);
7385
as->arcstat_l2_mfu_asize.value.ui64 =
7386
wmsum_value(&arc_sums.arcstat_l2_mfu_asize);
7387
as->arcstat_l2_bufc_data_asize.value.ui64 =
7388
wmsum_value(&arc_sums.arcstat_l2_bufc_data_asize);
7389
as->arcstat_l2_bufc_metadata_asize.value.ui64 =
7390
wmsum_value(&arc_sums.arcstat_l2_bufc_metadata_asize);
7391
as->arcstat_l2_feeds.value.ui64 =
7392
wmsum_value(&arc_sums.arcstat_l2_feeds);
7393
as->arcstat_l2_rw_clash.value.ui64 =
7394
wmsum_value(&arc_sums.arcstat_l2_rw_clash);
7395
as->arcstat_l2_read_bytes.value.ui64 =
7396
wmsum_value(&arc_sums.arcstat_l2_read_bytes);
7397
as->arcstat_l2_write_bytes.value.ui64 =
7398
wmsum_value(&arc_sums.arcstat_l2_write_bytes);
7399
as->arcstat_l2_writes_sent.value.ui64 =
7400
wmsum_value(&arc_sums.arcstat_l2_writes_sent);
7401
as->arcstat_l2_writes_done.value.ui64 =
7402
wmsum_value(&arc_sums.arcstat_l2_writes_done);
7403
as->arcstat_l2_writes_error.value.ui64 =
7404
wmsum_value(&arc_sums.arcstat_l2_writes_error);
7405
as->arcstat_l2_writes_lock_retry.value.ui64 =
7406
wmsum_value(&arc_sums.arcstat_l2_writes_lock_retry);
7407
as->arcstat_l2_evict_lock_retry.value.ui64 =
7408
wmsum_value(&arc_sums.arcstat_l2_evict_lock_retry);
7409
as->arcstat_l2_evict_reading.value.ui64 =
7410
wmsum_value(&arc_sums.arcstat_l2_evict_reading);
7411
as->arcstat_l2_evict_l1cached.value.ui64 =
7412
wmsum_value(&arc_sums.arcstat_l2_evict_l1cached);
7413
as->arcstat_l2_free_on_write.value.ui64 =
7414
wmsum_value(&arc_sums.arcstat_l2_free_on_write);
7415
as->arcstat_l2_abort_lowmem.value.ui64 =
7416
wmsum_value(&arc_sums.arcstat_l2_abort_lowmem);
7417
as->arcstat_l2_cksum_bad.value.ui64 =
7418
wmsum_value(&arc_sums.arcstat_l2_cksum_bad);
7419
as->arcstat_l2_io_error.value.ui64 =
7420
wmsum_value(&arc_sums.arcstat_l2_io_error);
7421
as->arcstat_l2_lsize.value.ui64 =
7422
wmsum_value(&arc_sums.arcstat_l2_lsize);
7423
as->arcstat_l2_psize.value.ui64 =
7424
wmsum_value(&arc_sums.arcstat_l2_psize);
7425
as->arcstat_l2_hdr_size.value.ui64 =
7426
aggsum_value(&arc_sums.arcstat_l2_hdr_size);
7427
as->arcstat_l2_log_blk_writes.value.ui64 =
7428
wmsum_value(&arc_sums.arcstat_l2_log_blk_writes);
7429
as->arcstat_l2_log_blk_asize.value.ui64 =
7430
wmsum_value(&arc_sums.arcstat_l2_log_blk_asize);
7431
as->arcstat_l2_log_blk_count.value.ui64 =
7432
wmsum_value(&arc_sums.arcstat_l2_log_blk_count);
7433
as->arcstat_l2_rebuild_success.value.ui64 =
7434
wmsum_value(&arc_sums.arcstat_l2_rebuild_success);
7435
as->arcstat_l2_rebuild_abort_unsupported.value.ui64 =
7436
wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_unsupported);
7437
as->arcstat_l2_rebuild_abort_io_errors.value.ui64 =
7438
wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_io_errors);
7439
as->arcstat_l2_rebuild_abort_dh_errors.value.ui64 =
7440
wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_dh_errors);
7441
as->arcstat_l2_rebuild_abort_cksum_lb_errors.value.ui64 =
7442
wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors);
7443
as->arcstat_l2_rebuild_abort_lowmem.value.ui64 =
7444
wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_lowmem);
7445
as->arcstat_l2_rebuild_size.value.ui64 =
7446
wmsum_value(&arc_sums.arcstat_l2_rebuild_size);
7447
as->arcstat_l2_rebuild_asize.value.ui64 =
7448
wmsum_value(&arc_sums.arcstat_l2_rebuild_asize);
7449
as->arcstat_l2_rebuild_bufs.value.ui64 =
7450
wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs);
7451
as->arcstat_l2_rebuild_bufs_precached.value.ui64 =
7452
wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs_precached);
7453
as->arcstat_l2_rebuild_log_blks.value.ui64 =
7454
wmsum_value(&arc_sums.arcstat_l2_rebuild_log_blks);
7455
as->arcstat_memory_throttle_count.value.ui64 =
7456
wmsum_value(&arc_sums.arcstat_memory_throttle_count);
7457
as->arcstat_memory_direct_count.value.ui64 =
7458
wmsum_value(&arc_sums.arcstat_memory_direct_count);
7459
as->arcstat_memory_indirect_count.value.ui64 =
7460
wmsum_value(&arc_sums.arcstat_memory_indirect_count);
7461
7462
as->arcstat_memory_all_bytes.value.ui64 =
7463
arc_all_memory();
7464
as->arcstat_memory_free_bytes.value.ui64 =
7465
arc_free_memory();
7466
as->arcstat_memory_available_bytes.value.i64 =
7467
arc_available_memory();
7468
7469
as->arcstat_prune.value.ui64 =
7470
wmsum_value(&arc_sums.arcstat_prune);
7471
as->arcstat_meta_used.value.ui64 =
7472
wmsum_value(&arc_sums.arcstat_meta_used);
7473
as->arcstat_async_upgrade_sync.value.ui64 =
7474
wmsum_value(&arc_sums.arcstat_async_upgrade_sync);
7475
as->arcstat_predictive_prefetch.value.ui64 =
7476
wmsum_value(&arc_sums.arcstat_predictive_prefetch);
7477
as->arcstat_demand_hit_predictive_prefetch.value.ui64 =
7478
wmsum_value(&arc_sums.arcstat_demand_hit_predictive_prefetch);
7479
as->arcstat_demand_iohit_predictive_prefetch.value.ui64 =
7480
wmsum_value(&arc_sums.arcstat_demand_iohit_predictive_prefetch);
7481
as->arcstat_prescient_prefetch.value.ui64 =
7482
wmsum_value(&arc_sums.arcstat_prescient_prefetch);
7483
as->arcstat_demand_hit_prescient_prefetch.value.ui64 =
7484
wmsum_value(&arc_sums.arcstat_demand_hit_prescient_prefetch);
7485
as->arcstat_demand_iohit_prescient_prefetch.value.ui64 =
7486
wmsum_value(&arc_sums.arcstat_demand_iohit_prescient_prefetch);
7487
as->arcstat_raw_size.value.ui64 =
7488
wmsum_value(&arc_sums.arcstat_raw_size);
7489
as->arcstat_cached_only_in_progress.value.ui64 =
7490
wmsum_value(&arc_sums.arcstat_cached_only_in_progress);
7491
as->arcstat_abd_chunk_waste_size.value.ui64 =
7492
wmsum_value(&arc_sums.arcstat_abd_chunk_waste_size);
7493
7494
return (0);
7495
}
7496
7497
/*
7498
* This function *must* return indices evenly distributed between all
7499
* sublists of the multilist. This is needed due to how the ARC eviction
7500
* code is laid out; arc_evict_state() assumes ARC buffers are evenly
7501
* distributed between all sublists and uses this assumption when
7502
* deciding which sublist to evict from and how much to evict from it.
7503
*/
7504
static unsigned int
7505
arc_state_multilist_index_func(multilist_t *ml, void *obj)
7506
{
7507
arc_buf_hdr_t *hdr = obj;
7508
7509
/*
7510
* We rely on b_dva to generate evenly distributed index
7511
* numbers using buf_hash below. So, as an added precaution,
7512
* let's make sure we never add empty buffers to the arc lists.
7513
*/
7514
ASSERT(!HDR_EMPTY(hdr));
7515
7516
/*
7517
* The assumption here is that the hash value for a given
7518
* arc_buf_hdr_t will remain constant throughout its lifetime
7519
* (i.e. its b_spa, b_dva, and b_birth fields don't change).
7520
* Thus, we don't need to store the header's sublist index
7521
* on insertion, as this index can be recalculated on removal.
7522
*
7523
* Also, the low order bits of the hash value are thought to be
7524
* distributed evenly. Otherwise, in the case that the multilist
7525
* has a power of two number of sublists, each sublists' usage
7526
* would not be evenly distributed. In this context full 64bit
7527
* division would be a waste of time, so limit it to 32 bits.
7528
*/
7529
return ((unsigned int)buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
7530
multilist_get_num_sublists(ml));
7531
}
7532
7533
static unsigned int
7534
arc_state_l2c_multilist_index_func(multilist_t *ml, void *obj)
7535
{
7536
panic("Header %p insert into arc_l2c_only %p", obj, ml);
7537
}
7538
7539
#define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \
7540
if ((do_warn) && (tuning) && ((tuning) != (value))) { \
7541
cmn_err(CE_WARN, \
7542
"ignoring tunable %s (using %llu instead)", \
7543
(#tuning), (u_longlong_t)(value)); \
7544
} \
7545
} while (0)
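/*
 * For example (illustrative only), when a requested zfs_arc_min could not
 * be applied, the call
 *
 *	WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose);
 *
 * logs "ignoring tunable zfs_arc_min (using <arc_c_min> instead)" if
 * verbose is set and the requested value differs from the one in effect.
 */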
7546
7547
/*
7548
* Called during module initialization and periodically thereafter to
7549
* apply reasonable changes to the exposed performance tunings. Can also be
7550
* called explicitly by param_set_arc_*() functions when ARC tunables are
7551
* updated manually. Non-zero zfs_* values which differ from the currently set
7552
* values will be applied.
7553
*/
7554
void
7555
arc_tuning_update(boolean_t verbose)
7556
{
7557
uint64_t allmem = arc_all_memory();
7558
7559
/* Valid range: 32M - <arc_c_max> */
7560
if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) &&
7561
(zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) &&
7562
(zfs_arc_min <= arc_c_max)) {
7563
arc_c_min = zfs_arc_min;
7564
arc_c = MAX(arc_c, arc_c_min);
7565
}
7566
WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose);
7567
7568
/* Valid range: 64M - <all physical memory> */
7569
if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) &&
7570
(zfs_arc_max >= MIN_ARC_MAX) && (zfs_arc_max < allmem) &&
7571
(zfs_arc_max > arc_c_min)) {
7572
arc_c_max = zfs_arc_max;
7573
arc_c = MIN(arc_c, arc_c_max);
7574
if (arc_dnode_limit > arc_c_max)
7575
arc_dnode_limit = arc_c_max;
7576
}
7577
WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose);
7578
7579
/* Valid range: 0 - <all physical memory> */
7580
arc_dnode_limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit :
7581
MIN(zfs_arc_dnode_limit_percent, 100) * arc_c_max / 100;
7582
WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_limit, verbose);
7583
7584
/* Valid range: 1 - N */
7585
if (zfs_arc_grow_retry)
7586
arc_grow_retry = zfs_arc_grow_retry;
7587
7588
/* Valid range: 1 - N */
7589
if (zfs_arc_shrink_shift) {
7590
arc_shrink_shift = zfs_arc_shrink_shift;
7591
arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1);
7592
}
7593
7594
/* Valid range: 1 - N ms */
7595
if (zfs_arc_min_prefetch_ms)
7596
arc_min_prefetch_ms = zfs_arc_min_prefetch_ms;
7597
7598
/* Valid range: 1 - N ms */
7599
if (zfs_arc_min_prescient_prefetch_ms) {
7600
arc_min_prescient_prefetch_ms =
7601
zfs_arc_min_prescient_prefetch_ms;
7602
}
7603
7604
/* Valid range: 0 - 100 */
7605
if (zfs_arc_lotsfree_percent <= 100)
7606
arc_lotsfree_percent = zfs_arc_lotsfree_percent;
7607
WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent,
7608
verbose);
7609
7610
/* Valid range: 0 - <all physical memory> */
7611
if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free))
7612
arc_sys_free = MIN(zfs_arc_sys_free, allmem);
7613
WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose);
7614
}
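/*
 * As an illustration (commands assumed, not defined in this file), an
 * administrator changing an ARC tunable at runtime reaches this function
 * through the param_set_arc_*() handlers, e.g. on Linux:
 *
 *	echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
 *
 * or on FreeBSD:
 *
 *	sysctl vfs.zfs.arc_max=8589934592
 */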
7615
7616
static void
7617
arc_state_multilist_init(multilist_t *ml,
7618
multilist_sublist_index_func_t *index_func, int *maxcountp)
7619
{
7620
multilist_create(ml, sizeof (arc_buf_hdr_t),
7621
offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), index_func);
7622
*maxcountp = MAX(*maxcountp, multilist_get_num_sublists(ml));
7623
}
7624
7625
static void
7626
arc_state_init(void)
7627
{
7628
int num_sublists = 0;
7629
7630
arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_METADATA],
7631
arc_state_multilist_index_func, &num_sublists);
7632
arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_DATA],
7633
arc_state_multilist_index_func, &num_sublists);
7634
arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA],
7635
arc_state_multilist_index_func, &num_sublists);
7636
arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA],
7637
arc_state_multilist_index_func, &num_sublists);
7638
arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_METADATA],
7639
arc_state_multilist_index_func, &num_sublists);
7640
arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_DATA],
7641
arc_state_multilist_index_func, &num_sublists);
7642
arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA],
7643
arc_state_multilist_index_func, &num_sublists);
7644
arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA],
7645
arc_state_multilist_index_func, &num_sublists);
7646
arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_METADATA],
7647
arc_state_multilist_index_func, &num_sublists);
7648
arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_DATA],
7649
arc_state_multilist_index_func, &num_sublists);
7650
7651
/*
7652
* L2-only headers should never be on the L2 state lists since they don't
7653
* have L1 headers allocated. Special index function asserts that.
7654
*/
7655
arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA],
7656
arc_state_l2c_multilist_index_func, &num_sublists);
7657
arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_DATA],
7658
arc_state_l2c_multilist_index_func, &num_sublists);
7659
7660
/*
7661
* Keep track of the number of markers needed to reclaim buffers from
7662
* any ARC state. The markers will be pre-allocated so as to minimize
7663
* the number of memory allocations performed by the eviction thread.
7664
*/
7665
arc_state_evict_marker_count = num_sublists;
7666
7667
zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
7668
zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
7669
zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
7670
zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
7671
zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
7672
zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
7673
zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
7674
zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
7675
zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
7676
zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
7677
zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
7678
zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
7679
zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]);
7680
zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_DATA]);
7681
7682
zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_DATA]);
7683
zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_METADATA]);
7684
zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_DATA]);
7685
zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_METADATA]);
7686
zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]);
7687
zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]);
7688
zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_DATA]);
7689
zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_METADATA]);
7690
zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]);
7691
zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]);
7692
zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]);
7693
zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]);
7694
zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_DATA]);
7695
zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_METADATA]);
7696
7697
wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA], 0);
7698
wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA], 0);
7699
wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA], 0);
7700
wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA], 0);
7701
7702
wmsum_init(&arc_sums.arcstat_hits, 0);
7703
wmsum_init(&arc_sums.arcstat_iohits, 0);
7704
wmsum_init(&arc_sums.arcstat_misses, 0);
7705
wmsum_init(&arc_sums.arcstat_demand_data_hits, 0);
7706
wmsum_init(&arc_sums.arcstat_demand_data_iohits, 0);
7707
wmsum_init(&arc_sums.arcstat_demand_data_misses, 0);
7708
wmsum_init(&arc_sums.arcstat_demand_metadata_hits, 0);
7709
wmsum_init(&arc_sums.arcstat_demand_metadata_iohits, 0);
7710
wmsum_init(&arc_sums.arcstat_demand_metadata_misses, 0);
7711
wmsum_init(&arc_sums.arcstat_prefetch_data_hits, 0);
7712
wmsum_init(&arc_sums.arcstat_prefetch_data_iohits, 0);
7713
wmsum_init(&arc_sums.arcstat_prefetch_data_misses, 0);
7714
wmsum_init(&arc_sums.arcstat_prefetch_metadata_hits, 0);
7715
wmsum_init(&arc_sums.arcstat_prefetch_metadata_iohits, 0);
7716
wmsum_init(&arc_sums.arcstat_prefetch_metadata_misses, 0);
7717
wmsum_init(&arc_sums.arcstat_mru_hits, 0);
7718
wmsum_init(&arc_sums.arcstat_mru_ghost_hits, 0);
7719
wmsum_init(&arc_sums.arcstat_mfu_hits, 0);
7720
wmsum_init(&arc_sums.arcstat_mfu_ghost_hits, 0);
7721
wmsum_init(&arc_sums.arcstat_uncached_hits, 0);
7722
wmsum_init(&arc_sums.arcstat_deleted, 0);
7723
wmsum_init(&arc_sums.arcstat_mutex_miss, 0);
7724
wmsum_init(&arc_sums.arcstat_access_skip, 0);
7725
wmsum_init(&arc_sums.arcstat_evict_skip, 0);
7726
wmsum_init(&arc_sums.arcstat_evict_not_enough, 0);
7727
wmsum_init(&arc_sums.arcstat_evict_l2_cached, 0);
7728
wmsum_init(&arc_sums.arcstat_evict_l2_eligible, 0);
7729
wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mfu, 0);
7730
wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0);
7731
wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0);
7732
wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0);
7733
wmsum_init(&arc_sums.arcstat_hash_elements, 0);
7734
wmsum_init(&arc_sums.arcstat_hash_collisions, 0);
7735
wmsum_init(&arc_sums.arcstat_hash_chains, 0);
7736
aggsum_init(&arc_sums.arcstat_size, 0);
7737
wmsum_init(&arc_sums.arcstat_compressed_size, 0);
7738
wmsum_init(&arc_sums.arcstat_uncompressed_size, 0);
7739
wmsum_init(&arc_sums.arcstat_overhead_size, 0);
7740
wmsum_init(&arc_sums.arcstat_hdr_size, 0);
7741
wmsum_init(&arc_sums.arcstat_data_size, 0);
7742
wmsum_init(&arc_sums.arcstat_metadata_size, 0);
7743
wmsum_init(&arc_sums.arcstat_dbuf_size, 0);
7744
aggsum_init(&arc_sums.arcstat_dnode_size, 0);
7745
wmsum_init(&arc_sums.arcstat_bonus_size, 0);
7746
wmsum_init(&arc_sums.arcstat_l2_hits, 0);
7747
wmsum_init(&arc_sums.arcstat_l2_misses, 0);
7748
wmsum_init(&arc_sums.arcstat_l2_prefetch_asize, 0);
7749
wmsum_init(&arc_sums.arcstat_l2_mru_asize, 0);
7750
wmsum_init(&arc_sums.arcstat_l2_mfu_asize, 0);
7751
wmsum_init(&arc_sums.arcstat_l2_bufc_data_asize, 0);
7752
wmsum_init(&arc_sums.arcstat_l2_bufc_metadata_asize, 0);
7753
wmsum_init(&arc_sums.arcstat_l2_feeds, 0);
7754
wmsum_init(&arc_sums.arcstat_l2_rw_clash, 0);
7755
wmsum_init(&arc_sums.arcstat_l2_read_bytes, 0);
7756
wmsum_init(&arc_sums.arcstat_l2_write_bytes, 0);
7757
wmsum_init(&arc_sums.arcstat_l2_writes_sent, 0);
7758
wmsum_init(&arc_sums.arcstat_l2_writes_done, 0);
7759
wmsum_init(&arc_sums.arcstat_l2_writes_error, 0);
7760
wmsum_init(&arc_sums.arcstat_l2_writes_lock_retry, 0);
7761
wmsum_init(&arc_sums.arcstat_l2_evict_lock_retry, 0);
7762
wmsum_init(&arc_sums.arcstat_l2_evict_reading, 0);
7763
wmsum_init(&arc_sums.arcstat_l2_evict_l1cached, 0);
7764
wmsum_init(&arc_sums.arcstat_l2_free_on_write, 0);
7765
wmsum_init(&arc_sums.arcstat_l2_abort_lowmem, 0);
7766
wmsum_init(&arc_sums.arcstat_l2_cksum_bad, 0);
7767
wmsum_init(&arc_sums.arcstat_l2_io_error, 0);
7768
wmsum_init(&arc_sums.arcstat_l2_lsize, 0);
7769
wmsum_init(&arc_sums.arcstat_l2_psize, 0);
7770
aggsum_init(&arc_sums.arcstat_l2_hdr_size, 0);
7771
wmsum_init(&arc_sums.arcstat_l2_log_blk_writes, 0);
7772
wmsum_init(&arc_sums.arcstat_l2_log_blk_asize, 0);
7773
wmsum_init(&arc_sums.arcstat_l2_log_blk_count, 0);
7774
wmsum_init(&arc_sums.arcstat_l2_rebuild_success, 0);
7775
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_unsupported, 0);
7776
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_io_errors, 0);
7777
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_dh_errors, 0);
7778
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors, 0);
7779
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_lowmem, 0);
7780
wmsum_init(&arc_sums.arcstat_l2_rebuild_size, 0);
7781
wmsum_init(&arc_sums.arcstat_l2_rebuild_asize, 0);
7782
wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs, 0);
7783
wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs_precached, 0);
7784
wmsum_init(&arc_sums.arcstat_l2_rebuild_log_blks, 0);
7785
wmsum_init(&arc_sums.arcstat_memory_throttle_count, 0);
7786
wmsum_init(&arc_sums.arcstat_memory_direct_count, 0);
7787
wmsum_init(&arc_sums.arcstat_memory_indirect_count, 0);
7788
wmsum_init(&arc_sums.arcstat_prune, 0);
7789
wmsum_init(&arc_sums.arcstat_meta_used, 0);
7790
wmsum_init(&arc_sums.arcstat_async_upgrade_sync, 0);
7791
wmsum_init(&arc_sums.arcstat_predictive_prefetch, 0);
7792
wmsum_init(&arc_sums.arcstat_demand_hit_predictive_prefetch, 0);
7793
wmsum_init(&arc_sums.arcstat_demand_iohit_predictive_prefetch, 0);
7794
wmsum_init(&arc_sums.arcstat_prescient_prefetch, 0);
7795
wmsum_init(&arc_sums.arcstat_demand_hit_prescient_prefetch, 0);
7796
wmsum_init(&arc_sums.arcstat_demand_iohit_prescient_prefetch, 0);
7797
wmsum_init(&arc_sums.arcstat_raw_size, 0);
7798
wmsum_init(&arc_sums.arcstat_cached_only_in_progress, 0);
7799
wmsum_init(&arc_sums.arcstat_abd_chunk_waste_size, 0);
7800
7801
arc_anon->arcs_state = ARC_STATE_ANON;
7802
arc_mru->arcs_state = ARC_STATE_MRU;
7803
arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST;
7804
arc_mfu->arcs_state = ARC_STATE_MFU;
7805
arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST;
7806
arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY;
7807
arc_uncached->arcs_state = ARC_STATE_UNCACHED;
7808
}
7809
7810
static void
7811
arc_state_fini(void)
7812
{
7813
zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
7814
zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
7815
zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]);
7816
zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]);
7817
zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]);
7818
zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]);
7819
zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
7820
zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]);
7821
zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]);
7822
zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]);
7823
zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]);
7824
zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]);
7825
zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]);
7826
zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_DATA]);
7827
7828
zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_DATA]);
7829
zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_METADATA]);
7830
zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_DATA]);
7831
zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_METADATA]);
7832
zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]);
7833
zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]);
7834
zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_DATA]);
7835
zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_METADATA]);
7836
zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]);
7837
zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]);
7838
zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]);
7839
zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]);
7840
zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_DATA]);
7841
zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_METADATA]);
7842
7843
multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]);
7844
multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]);
7845
multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]);
7846
multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]);
7847
multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]);
7848
multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]);
7849
multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]);
7850
multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]);
7851
multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]);
7852
multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]);
7853
multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_METADATA]);
7854
multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_DATA]);
7855
7856
wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]);
7857
wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]);
7858
wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]);
7859
wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]);
7860
7861
wmsum_fini(&arc_sums.arcstat_hits);
7862
wmsum_fini(&arc_sums.arcstat_iohits);
7863
wmsum_fini(&arc_sums.arcstat_misses);
7864
wmsum_fini(&arc_sums.arcstat_demand_data_hits);
7865
wmsum_fini(&arc_sums.arcstat_demand_data_iohits);
7866
wmsum_fini(&arc_sums.arcstat_demand_data_misses);
7867
wmsum_fini(&arc_sums.arcstat_demand_metadata_hits);
7868
wmsum_fini(&arc_sums.arcstat_demand_metadata_iohits);
7869
wmsum_fini(&arc_sums.arcstat_demand_metadata_misses);
7870
wmsum_fini(&arc_sums.arcstat_prefetch_data_hits);
7871
wmsum_fini(&arc_sums.arcstat_prefetch_data_iohits);
7872
wmsum_fini(&arc_sums.arcstat_prefetch_data_misses);
7873
wmsum_fini(&arc_sums.arcstat_prefetch_metadata_hits);
7874
wmsum_fini(&arc_sums.arcstat_prefetch_metadata_iohits);
7875
wmsum_fini(&arc_sums.arcstat_prefetch_metadata_misses);
7876
wmsum_fini(&arc_sums.arcstat_mru_hits);
7877
wmsum_fini(&arc_sums.arcstat_mru_ghost_hits);
7878
wmsum_fini(&arc_sums.arcstat_mfu_hits);
7879
wmsum_fini(&arc_sums.arcstat_mfu_ghost_hits);
7880
wmsum_fini(&arc_sums.arcstat_uncached_hits);
7881
wmsum_fini(&arc_sums.arcstat_deleted);
7882
wmsum_fini(&arc_sums.arcstat_mutex_miss);
7883
wmsum_fini(&arc_sums.arcstat_access_skip);
7884
wmsum_fini(&arc_sums.arcstat_evict_skip);
7885
wmsum_fini(&arc_sums.arcstat_evict_not_enough);
7886
wmsum_fini(&arc_sums.arcstat_evict_l2_cached);
7887
wmsum_fini(&arc_sums.arcstat_evict_l2_eligible);
7888
wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mfu);
7889
wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru);
7890
wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible);
7891
wmsum_fini(&arc_sums.arcstat_evict_l2_skip);
7892
wmsum_fini(&arc_sums.arcstat_hash_elements);
7893
wmsum_fini(&arc_sums.arcstat_hash_collisions);
7894
wmsum_fini(&arc_sums.arcstat_hash_chains);
7895
aggsum_fini(&arc_sums.arcstat_size);
7896
wmsum_fini(&arc_sums.arcstat_compressed_size);
7897
wmsum_fini(&arc_sums.arcstat_uncompressed_size);
7898
wmsum_fini(&arc_sums.arcstat_overhead_size);
7899
wmsum_fini(&arc_sums.arcstat_hdr_size);
7900
wmsum_fini(&arc_sums.arcstat_data_size);
7901
wmsum_fini(&arc_sums.arcstat_metadata_size);
7902
wmsum_fini(&arc_sums.arcstat_dbuf_size);
7903
aggsum_fini(&arc_sums.arcstat_dnode_size);
7904
wmsum_fini(&arc_sums.arcstat_bonus_size);
7905
wmsum_fini(&arc_sums.arcstat_l2_hits);
7906
wmsum_fini(&arc_sums.arcstat_l2_misses);
7907
wmsum_fini(&arc_sums.arcstat_l2_prefetch_asize);
7908
wmsum_fini(&arc_sums.arcstat_l2_mru_asize);
7909
wmsum_fini(&arc_sums.arcstat_l2_mfu_asize);
7910
wmsum_fini(&arc_sums.arcstat_l2_bufc_data_asize);
7911
wmsum_fini(&arc_sums.arcstat_l2_bufc_metadata_asize);
7912
wmsum_fini(&arc_sums.arcstat_l2_feeds);
7913
wmsum_fini(&arc_sums.arcstat_l2_rw_clash);
7914
wmsum_fini(&arc_sums.arcstat_l2_read_bytes);
7915
wmsum_fini(&arc_sums.arcstat_l2_write_bytes);
7916
wmsum_fini(&arc_sums.arcstat_l2_writes_sent);
7917
wmsum_fini(&arc_sums.arcstat_l2_writes_done);
7918
wmsum_fini(&arc_sums.arcstat_l2_writes_error);
7919
wmsum_fini(&arc_sums.arcstat_l2_writes_lock_retry);
7920
wmsum_fini(&arc_sums.arcstat_l2_evict_lock_retry);
7921
wmsum_fini(&arc_sums.arcstat_l2_evict_reading);
7922
wmsum_fini(&arc_sums.arcstat_l2_evict_l1cached);
7923
wmsum_fini(&arc_sums.arcstat_l2_free_on_write);
7924
wmsum_fini(&arc_sums.arcstat_l2_abort_lowmem);
7925
wmsum_fini(&arc_sums.arcstat_l2_cksum_bad);
7926
wmsum_fini(&arc_sums.arcstat_l2_io_error);
7927
wmsum_fini(&arc_sums.arcstat_l2_lsize);
7928
wmsum_fini(&arc_sums.arcstat_l2_psize);
7929
aggsum_fini(&arc_sums.arcstat_l2_hdr_size);
7930
wmsum_fini(&arc_sums.arcstat_l2_log_blk_writes);
7931
wmsum_fini(&arc_sums.arcstat_l2_log_blk_asize);
7932
wmsum_fini(&arc_sums.arcstat_l2_log_blk_count);
7933
wmsum_fini(&arc_sums.arcstat_l2_rebuild_success);
7934
wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_unsupported);
7935
wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_io_errors);
7936
wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_dh_errors);
7937
wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors);
7938
wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_lowmem);
7939
wmsum_fini(&arc_sums.arcstat_l2_rebuild_size);
7940
wmsum_fini(&arc_sums.arcstat_l2_rebuild_asize);
7941
wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs);
7942
wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs_precached);
7943
wmsum_fini(&arc_sums.arcstat_l2_rebuild_log_blks);
7944
wmsum_fini(&arc_sums.arcstat_memory_throttle_count);
7945
wmsum_fini(&arc_sums.arcstat_memory_direct_count);
7946
wmsum_fini(&arc_sums.arcstat_memory_indirect_count);
7947
wmsum_fini(&arc_sums.arcstat_prune);
7948
wmsum_fini(&arc_sums.arcstat_meta_used);
7949
wmsum_fini(&arc_sums.arcstat_async_upgrade_sync);
7950
wmsum_fini(&arc_sums.arcstat_predictive_prefetch);
7951
wmsum_fini(&arc_sums.arcstat_demand_hit_predictive_prefetch);
7952
wmsum_fini(&arc_sums.arcstat_demand_iohit_predictive_prefetch);
7953
wmsum_fini(&arc_sums.arcstat_prescient_prefetch);
7954
wmsum_fini(&arc_sums.arcstat_demand_hit_prescient_prefetch);
7955
wmsum_fini(&arc_sums.arcstat_demand_iohit_prescient_prefetch);
7956
wmsum_fini(&arc_sums.arcstat_raw_size);
7957
wmsum_fini(&arc_sums.arcstat_cached_only_in_progress);
7958
wmsum_fini(&arc_sums.arcstat_abd_chunk_waste_size);
7959
}
7960
7961
uint64_t
7962
arc_target_bytes(void)
7963
{
7964
return (arc_c);
7965
}
7966
7967
void
7968
arc_set_limits(uint64_t allmem)
7969
{
7970
/* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */
7971
arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT);
7972
7973
/* How to set default max varies by platform. */
7974
arc_c_max = arc_default_max(arc_c_min, allmem);
7975
}
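/*
 * For example, on a system with 16 GiB of memory this yields
 * arc_c_min = MAX(16 GiB / 32, 32 MiB) = 512 MiB, while the default
 * arc_c_max depends on the platform-specific arc_default_max().
 */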
7976
7977
void
7978
arc_init(void)
7979
{
7980
uint64_t percent, allmem = arc_all_memory();
7981
mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL);
7982
list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t),
7983
offsetof(arc_evict_waiter_t, aew_node));
7984
7985
arc_min_prefetch_ms = 1000;
7986
arc_min_prescient_prefetch_ms = 6000;
7987
7988
#if defined(_KERNEL)
7989
arc_lowmem_init();
7990
#endif
7991
7992
arc_set_limits(allmem);
7993
7994
#ifdef _KERNEL
7995
/*
7996
* If zfs_arc_max is non-zero at init, meaning it was set in the kernel
7997
* environment before the module was loaded, don't block setting the
7998
* maximum because it is less than arc_c_min, instead, reset arc_c_min
7999
* to a lower value.
8000
* zfs_arc_min will be handled by arc_tuning_update().
8001
*/
8002
if (zfs_arc_max != 0 && zfs_arc_max >= MIN_ARC_MAX &&
8003
zfs_arc_max < allmem) {
8004
arc_c_max = zfs_arc_max;
8005
if (arc_c_min >= arc_c_max) {
8006
arc_c_min = MAX(zfs_arc_max / 2,
8007
2ULL << SPA_MAXBLOCKSHIFT);
8008
}
8009
}
8010
#else
8011
/*
8012
* In userland, there's only the memory pressure that we artificially
8013
* create (see arc_available_memory()). Don't let arc_c get too
8014
* small, because it can cause transactions to be larger than
8015
* arc_c, causing arc_tempreserve_space() to fail.
8016
*/
8017
arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT);
8018
#endif
8019
8020
arc_c = arc_c_min;
8021
/*
8022
* 32-bit fixed point fractions of metadata from total ARC size,
8023
* MRU data from all data and MRU metadata from all metadata.
8024
*/
8025
arc_meta = (1ULL << 32) / 4; /* Metadata is 25% of arc_c. */
8026
arc_pd = (1ULL << 32) / 2; /* Data MRU is 50% of data. */
8027
arc_pm = (1ULL << 32) / 2; /* Metadata MRU is 50% of metadata. */
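/*
 * These are 32.32 fixed-point fractions: multiplying a byte count by one
 * of them and shifting right by 32 scales it by that fraction. For
 * example, with arc_meta = (1ULL << 32) / 4, the metadata target for an
 * 8 GiB arc_c works out to (8 GiB * arc_meta) >> 32 = 2 GiB.
 */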
8028
8029
percent = MIN(zfs_arc_dnode_limit_percent, 100);
8030
arc_dnode_limit = arc_c_max * percent / 100;
8031
8032
/* Apply user specified tunings */
8033
arc_tuning_update(B_TRUE);
8034
8035
/* if kmem_flags are set, lets try to use less memory */
8036
if (kmem_debugging())
8037
arc_c = arc_c / 2;
8038
if (arc_c < arc_c_min)
8039
arc_c = arc_c_min;
8040
8041
arc_register_hotplug();
8042
8043
arc_state_init();
8044
8045
buf_init();
8046
8047
list_create(&arc_prune_list, sizeof (arc_prune_t),
8048
offsetof(arc_prune_t, p_node));
8049
mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL);
8050
8051
arc_prune_taskq = taskq_create("arc_prune", zfs_arc_prune_task_threads,
8052
defclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC);
8053
8054
arc_evict_thread_init();
8055
8056
list_create(&arc_async_flush_list, sizeof (arc_async_flush_t),
8057
offsetof(arc_async_flush_t, af_node));
8058
mutex_init(&arc_async_flush_lock, NULL, MUTEX_DEFAULT, NULL);
8059
arc_flush_taskq = taskq_create("arc_flush", MIN(boot_ncpus, 4),
8060
defclsyspri, 1, INT_MAX, TASKQ_DYNAMIC);
8061
8062
arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED,
8063
sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);
8064
8065
if (arc_ksp != NULL) {
8066
arc_ksp->ks_data = &arc_stats;
8067
arc_ksp->ks_update = arc_kstat_update;
8068
kstat_install(arc_ksp);
8069
}
8070
8071
arc_state_evict_markers =
8072
arc_state_alloc_markers(arc_state_evict_marker_count);
8073
arc_evict_zthr = zthr_create_timer("arc_evict",
8074
arc_evict_cb_check, arc_evict_cb, NULL, SEC2NSEC(1), defclsyspri);
8075
arc_reap_zthr = zthr_create_timer("arc_reap",
8076
arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1), minclsyspri);
8077
8078
arc_warm = B_FALSE;
8079
8080
/*
8081
* Calculate maximum amount of dirty data per pool.
8082
*
8083
* If it has been set by a module parameter, take that.
8084
* Otherwise, use a percentage of physical memory defined by
8085
* zfs_dirty_data_max_percent (default 10%) with a cap at
8086
* zfs_dirty_data_max_max (default 4G or 25% of physical memory).
8087
*/
8088
#ifdef __LP64__
8089
if (zfs_dirty_data_max_max == 0)
8090
zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024,
8091
allmem * zfs_dirty_data_max_max_percent / 100);
8092
#else
8093
if (zfs_dirty_data_max_max == 0)
8094
zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024,
8095
allmem * zfs_dirty_data_max_max_percent / 100);
8096
#endif
8097
8098
if (zfs_dirty_data_max == 0) {
8099
zfs_dirty_data_max = allmem *
8100
zfs_dirty_data_max_percent / 100;
8101
zfs_dirty_data_max = MIN(zfs_dirty_data_max,
8102
zfs_dirty_data_max_max);
8103
}
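/*
 * Worked example of the defaults described above (sizes illustrative,
 * 64-bit kernel): with 32 GiB of memory, 10% gives about 3.2 GiB, which
 * is under the MIN(4 GiB, 25% of memory) = 4 GiB cap, so the dirty limit
 * is about 3.2 GiB. With 64 GiB of memory, 10% would be 6.4 GiB, so the
 * 4 GiB cap applies instead. On 32-bit kernels the hard cap is 1 GiB.
 */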
8104
8105
if (zfs_wrlog_data_max == 0) {
8106
8107
/*
8108
* dp_wrlog_total is reduced for each txg at the end of
8109
* spa_sync(). However, dp_dirty_total is reduced every time
8110
* a block is written out. Thus under normal operation,
8111
* dp_wrlog_total could grow 2 times as big as
8112
* zfs_dirty_data_max.
8113
*/
8114
zfs_wrlog_data_max = zfs_dirty_data_max * 2;
8115
}
8116
}
8117
8118
void
8119
arc_fini(void)
8120
{
8121
arc_prune_t *p;
8122
8123
#ifdef _KERNEL
8124
arc_lowmem_fini();
8125
#endif /* _KERNEL */
8126
8127
/* Wait for any background flushes */
8128
taskq_wait(arc_flush_taskq);
8129
taskq_destroy(arc_flush_taskq);
8130
8131
/* Use B_TRUE to ensure *all* buffers are evicted */
8132
arc_flush(NULL, B_TRUE);
8133
8134
if (arc_ksp != NULL) {
8135
kstat_delete(arc_ksp);
8136
arc_ksp = NULL;
8137
}
8138
8139
taskq_wait(arc_prune_taskq);
8140
taskq_destroy(arc_prune_taskq);
8141
8142
list_destroy(&arc_async_flush_list);
8143
mutex_destroy(&arc_async_flush_lock);
8144
8145
mutex_enter(&arc_prune_mtx);
8146
while ((p = list_remove_head(&arc_prune_list)) != NULL) {
8147
(void) zfs_refcount_remove(&p->p_refcnt, &arc_prune_list);
8148
zfs_refcount_destroy(&p->p_refcnt);
8149
kmem_free(p, sizeof (*p));
8150
}
8151
mutex_exit(&arc_prune_mtx);
8152
8153
list_destroy(&arc_prune_list);
8154
mutex_destroy(&arc_prune_mtx);
8155
8156
if (arc_evict_taskq != NULL)
8157
taskq_wait(arc_evict_taskq);
8158
8159
(void) zthr_cancel(arc_evict_zthr);
8160
(void) zthr_cancel(arc_reap_zthr);
8161
arc_state_free_markers(arc_state_evict_markers,
8162
arc_state_evict_marker_count);
8163
8164
if (arc_evict_taskq != NULL) {
8165
taskq_destroy(arc_evict_taskq);
8166
kmem_free(arc_evict_arg,
8167
sizeof (evict_arg_t) * zfs_arc_evict_threads);
8168
}
8169
8170
mutex_destroy(&arc_evict_lock);
8171
list_destroy(&arc_evict_waiters);
8172
8173
/*
8174
* Free any buffers that were tagged for destruction. This needs
8175
* to occur before arc_state_fini() runs and destroys the aggsum
8176
* values which are updated when freeing scatter ABDs.
8177
*/
8178
l2arc_do_free_on_write();
8179
8180
/*
8181
* buf_fini() must precede arc_state_fini() because buf_fini() may
8182
* trigger the release of kmem magazines, which can callback to
8183
* arc_space_return(), which accesses aggsums freed in arc_state_fini().
8184
*/
8185
buf_fini();
8186
arc_state_fini();
8187
8188
arc_unregister_hotplug();
8189
8190
/*
8191
* We destroy the zthrs after all the ARC state has been
8192
* torn down to avoid the case of them receiving any
8193
* wakeup() signals after they are destroyed.
8194
*/
8195
zthr_destroy(arc_evict_zthr);
8196
zthr_destroy(arc_reap_zthr);
8197
8198
ASSERT0(arc_loaned_bytes);
8199
}
8200
8201
/*
8202
* Level 2 ARC
8203
*
8204
* The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
8205
* It uses dedicated storage devices to hold cached data, which are populated
8206
* using large infrequent writes. The main role of this cache is to boost
8207
* the performance of random read workloads. The intended L2ARC devices
8208
* include short-stroked disks, solid state disks, and other media with
8209
* substantially faster read latency than disk.
8210
*
8211
* +-----------------------+
8212
* | ARC |
8213
* +-----------------------+
8214
* | ^ ^
8215
* | | |
8216
* l2arc_feed_thread() arc_read()
8217
* | | |
8218
* | l2arc read |
8219
* V | |
8220
* +---------------+ |
8221
* | L2ARC | |
8222
* +---------------+ |
8223
* | ^ |
8224
* l2arc_write() | |
8225
* | | |
8226
* V | |
8227
* +-------+ +-------+
8228
* | vdev | | vdev |
8229
* | cache | | cache |
8230
* +-------+ +-------+
8231
* +=========+ .-----.
8232
* : L2ARC : |-_____-|
8233
* : devices : | Disks |
8234
* +=========+ `-_____-'
8235
*
8236
* Read requests are satisfied from the following sources, in order:
8237
*
8238
* 1) ARC
8239
* 2) vdev cache of L2ARC devices
8240
* 3) L2ARC devices
8241
* 4) vdev cache of disks
8242
* 5) disks
8243
*
8244
* Some L2ARC device types exhibit extremely slow write performance.
8245
* To accommodate this, there are some significant differences between
8246
* the L2ARC and traditional cache design:
8247
*
8248
* 1. There is no eviction path from the ARC to the L2ARC. Evictions from
8249
* the ARC behave as usual, freeing buffers and placing headers on ghost
8250
* lists. The ARC does not send buffers to the L2ARC during eviction as
8251
* this would add inflated write latencies for all ARC memory pressure.
8252
*
8253
* 2. The L2ARC attempts to cache data from the ARC before it is evicted.
8254
* It does this by periodically scanning buffers from the eviction-end of
8255
* the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
8256
* not already there. It scans until a headroom of buffers is satisfied,
8257
* which itself is a buffer for ARC eviction. If a compressible buffer is
8258
* found during scanning and selected for writing to an L2ARC device, we
8259
* temporarily boost scanning headroom during the next scan cycle to make
8260
* sure we adapt to compression effects (which might significantly reduce
8261
* the data volume we write to L2ARC). The thread that does this is
8262
* l2arc_feed_thread(), illustrated below; example sizes are included to
8263
* provide a better sense of ratio than this diagram:
8264
*
8265
* head --> tail
8266
* +---------------------+----------+
8267
* ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->. # already on L2ARC
8268
* +---------------------+----------+ | o L2ARC eligible
8269
* ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer
8270
* +---------------------+----------+ |
8271
* 15.9 Gbytes ^ 32 Mbytes |
8272
* headroom |
8273
* l2arc_feed_thread()
8274
* |
8275
* l2arc write hand <--[oooo]--'
8276
* | 8 Mbyte
8277
* | write max
8278
* V
8279
* +==============================+
8280
* L2ARC dev |####|#|###|###| |####| ... |
8281
* +==============================+
8282
* 32 Gbytes
8283
*
8284
* 3. If an ARC buffer is copied to the L2ARC but then hit instead of
8285
* evicted, then the L2ARC has cached a buffer much sooner than it probably
8286
* needed to, potentially wasting L2ARC device bandwidth and storage. It is
8287
* safe to say that this is an uncommon case, since buffers at the end of
8288
* the ARC lists have moved there due to inactivity.
8289
*
8290
* 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
8291
* then the L2ARC simply misses copying some buffers. This serves as a
8292
* pressure valve to prevent heavy read workloads from both stalling the ARC
8293
* with waits and clogging the L2ARC with writes. This also helps prevent
8294
* the potential for the L2ARC to churn if it attempts to cache content too
8295
* quickly, such as during backups of the entire pool.
8296
*
8297
* 5. After system boot and before the ARC has filled main memory, there are
8298
* no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
8299
* lists can remain mostly static. Instead of searching from tail of these
8300
* lists as pictured, the l2arc_feed_thread() will search from the list heads
8301
* for eligible buffers, greatly increasing its chance of finding them.
8302
*
8303
* The L2ARC device write speed is also boosted during this time so that
8304
* the L2ARC warms up faster. Since there have been no ARC evictions yet,
8305
* there are no L2ARC reads, and no fear of degrading read performance
8306
* through increased writes.
8307
*
8308
* 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
8309
* the vdev queue can aggregate them into larger and fewer writes. Each
8310
* device is written to in a rotor fashion, sweeping writes through
8311
* available space then repeating.
8312
*
8313
* 7. The L2ARC does not store dirty content. It never needs to flush
8314
* write buffers back to disk based storage.
8315
*
8316
* 8. If an ARC buffer is written (and dirtied) which also exists in the
8317
* L2ARC, the now stale L2ARC buffer is immediately dropped.
8318
*
8319
* The performance of the L2ARC can be tweaked by a number of tunables, which
8320
* may be necessary for different workloads:
8321
*
8322
* l2arc_write_max max write bytes per interval
8323
* l2arc_write_boost extra write bytes during device warmup
8324
* l2arc_noprefetch skip caching prefetched buffers
8325
* l2arc_headroom number of max device writes to precache
8326
* l2arc_headroom_boost when we find compressed buffers during ARC
8327
* scanning, we multiply headroom by this
8328
* percentage factor for the next scan cycle,
8329
* since more compressed buffers are likely to
8330
* be present
8331
* l2arc_feed_secs seconds between L2ARC writing
8332
*
8333
* Tunables may be removed or added as future performance improvements are
8334
* integrated, and also may become zpool properties.
8335
*
8336
* There are three key functions that control how the L2ARC warms up:
8337
*
8338
* l2arc_write_eligible() check if a buffer is eligible to cache
8339
* l2arc_write_size() calculate how much to write
8340
* l2arc_write_interval() calculate sleep delay between writes
8341
*
8342
* These three functions determine what to write, how much, and how quickly
8343
* to send writes.
8344
*
8345
* L2ARC persistence:
8346
*
8347
* When writing buffers to L2ARC, we periodically add some metadata to
8348
* make sure we can pick them up after reboot, thus dramatically reducing
8349
* the impact that any downtime has on the performance of storage systems
8350
* with large caches.
8351
*
8352
* The implementation works fairly simply by integrating the following two
8353
* modifications:
8354
*
8355
* *) When writing to the L2ARC, we occasionally write a "l2arc log block",
8356
* which is an additional piece of metadata which describes what's been
8357
* written. This allows us to rebuild the arc_buf_hdr_t structures of the
8358
* main ARC buffers. There are 2 linked-lists of log blocks headed by
8359
* dh_start_lbps[2]. We alternate which chain we append to, so they are
8360
* time-wise and offset-wise interleaved, but that is an optimization rather
8361
* than for correctness. The log block also includes a pointer to the
8362
* previous block in its chain.
8363
*
8364
* *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device
8365
* for our header bookkeeping purposes. This contains a device header,
8366
* which contains our top-level reference structures. We update it each
8367
* time we write a new log block, so that we're able to locate it in the
8368
* L2ARC device. If this write results in an inconsistent device header
8369
* (e.g. due to power failure), we detect this by verifying the header's
8370
* checksum and simply fail to reconstruct the L2ARC after reboot.
8371
*
8372
* Implementation diagram:
8373
*
8374
* +=== L2ARC device (not to scale) ======================================+
8375
* | ___two newest log block pointers__.__________ |
8376
* | / \dh_start_lbps[1] |
8377
* | / \ \dh_start_lbps[0]|
8378
* |.___/__. V V |
8379
* ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---|
8380
* || hdr| ^ /^ /^ / / |
8381
* |+------+ ...--\-------/ \-----/--\------/ / |
8382
* | \--------------/ \--------------/ |
8383
* +======================================================================+
8384
*
8385
* As can be seen in the diagram, rather than using a simple linked list,
* we use a pair of linked lists with alternating elements. This is a
* performance enhancement: the address of the next log block is only
* known once the current block has been completely read in, so a single
* list would keep the device's I/O queue only one operation deep and
* incur a large amount of I/O round-trip latency. Having two lists
* allows us to fetch two log blocks ahead of where we are currently
* rebuilding L2ARC buffers.
8394
*
8395
* On-device data structures:
8396
*
8397
* L2ARC device header: l2arc_dev_hdr_phys_t
8398
* L2ARC log block: l2arc_log_blk_phys_t
8399
*
8400
* L2ARC reconstruction:
8401
*
8402
* When writing data, we simply write in the standard rotary fashion,
8403
* evicting buffers as we go and simply writing new data over them (writing
8404
* a new log block every now and then). This obviously means that once we
8405
* loop around the end of the device, we will start cutting into an already
8406
* committed log block (and its referenced data buffers), like so:
8407
*
8408
* current write head__ __old tail
8409
* \ /
8410
* V V
8411
* <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |-->
8412
* ^ ^^^^^^^^^___________________________________
8413
* | \
8414
* <<nextwrite>> may overwrite this blk and/or its bufs --'
8415
*
8416
* When importing the pool, we detect this situation and use it to stop
8417
* our scanning process (see l2arc_rebuild).
8418
*
8419
* There is one significant caveat to consider when rebuilding ARC contents
8420
* from an L2ARC device: what about invalidated buffers? Given the above
8421
* construction, we cannot update blocks which we've already written to amend
8422
* them to remove buffers which were invalidated. Thus, during reconstruction,
8423
* we might be populating the cache with buffers for data that's not on the
8424
* main pool anymore, or may have been overwritten!
8425
*
8426
* As it turns out, this isn't a problem. Every arc_read request includes
8427
* both the DVA and, crucially, the birth TXG of the BP the caller is
8428
* looking for. So even if the cache were populated by completely rotten
8429
* blocks for data that had been long deleted and/or overwritten, we'll
8430
* never actually return bad data from the cache, since the DVA with the
8431
* birth TXG uniquely identify a block in space and time - once created,
8432
* a block is immutable on disk. The worst thing we have done is wasted
8433
* some time and memory at l2arc rebuild to reconstruct outdated ARC
8434
* entries that will get dropped from the l2arc as it is being updated
8435
* with new blocks.
8436
*
8437
* L2ARC buffers that have been evicted by l2arc_evict() ahead of the write
8438
* hand are not restored. This is done by saving the offset (in bytes)
8439
* l2arc_evict() has evicted to in the L2ARC device header and taking it
8440
* into account when restoring buffers.
8441
*/
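/*
 * An illustrative walk of the two log block chains described above
 * (block names are hypothetical): if log blocks were committed in the
 * order B1, B2, B3, B4, the device header's dh_start_lbps[] holds B4 and
 * B3 (the two newest). B4's previous-block pointer leads to B2 and B3's
 * to B1, so a rebuild can keep the reads for both chains in flight and
 * always be fetching two log blocks ahead of the one being decoded.
 */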
8442
8443
static boolean_t
8444
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
8445
{
8446
/*
8447
* A buffer is *not* eligible for the L2ARC if it:
8448
* 1. belongs to a different spa.
8449
* 2. is already cached on the L2ARC.
8450
* 3. has an I/O in progress (it may be an incomplete read).
8451
* 4. is flagged not eligible (zfs property).
8452
*/
8453
if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
8454
HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
8455
return (B_FALSE);
8456
8457
return (B_TRUE);
8458
}
8459
8460
static uint64_t
8461
l2arc_write_size(l2arc_dev_t *dev)
8462
{
8463
uint64_t size;
8464
8465
/*
8466
* Make sure our globals have meaningful values in case the user
8467
* altered them.
8468
*/
8469
size = l2arc_write_max;
8470
if (size == 0) {
8471
cmn_err(CE_NOTE, "l2arc_write_max must be greater than zero, "
8472
"resetting it to the default (%d)", L2ARC_WRITE_SIZE);
8473
size = l2arc_write_max = L2ARC_WRITE_SIZE;
8474
}
8475
8476
if (arc_warm == B_FALSE)
8477
size += l2arc_write_boost;
8478
8479
/* We need to add in the worst case scenario of log block overhead. */
8480
size += l2arc_log_blk_overhead(size, dev);
8481
if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) {
8482
/*
8483
* Trim ahead of the write size by 64MB or (l2arc_trim_ahead/100)
* times the write size, whichever is greater.
8485
*/
8486
size += MAX(64 * 1024 * 1024,
8487
(size * l2arc_trim_ahead) / 100);
8488
}
8489
8490
/*
8491
* Make sure the write size does not exceed the size of the cache
8492
* device. This is important in l2arc_evict(), otherwise infinite
8493
* iteration can occur.
8494
*/
8495
size = MIN(size, (dev->l2ad_end - dev->l2ad_start) / 4);
8496
8497
size = P2ROUNDUP(size, 1ULL << dev->l2ad_vdev->vdev_ashift);
8498
8499
return (size);
8500
8501
}
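/*
 * Illustrative tally of the computation above (sizes are assumptions,
 * not the tunable defaults): with an 8 MiB l2arc_write_max and an equal
 * l2arc_write_boost while the ARC is still cold, the base is 16 MiB.
 * The worst-case log block overhead for 16 MiB of 512-byte buffers is
 * (16 MiB >> SPA_MINBLOCKSHIFT) / 1022 entries, rounded up to 33 log
 * blocks. If trimming ahead is enabled, at least another 64 MiB is
 * added. The total is then clamped to a quarter of the device and
 * rounded up to the vdev's ashift.
 */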
8502
8503
static clock_t
8504
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
8505
{
8506
clock_t interval, next, now;
8507
8508
/*
8509
* If the ARC lists are busy, increase our write rate; if the
8510
* lists are stale, idle back. This is achieved by checking
8511
* how much we previously wrote - if it was more than half of
8512
* what we wanted, schedule the next write much sooner.
8513
*/
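/*
 * For example (values illustrative, not necessarily the defaults): with
 * hz = 1000, l2arc_feed_secs = 1 and l2arc_feed_min_ms = 200, a feed
 * that wrote more than half of what it wanted is rescheduled 200 ticks
 * (200 ms) after the previous feed began, instead of a full second, or
 * immediately if that time has already passed.
 */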
8514
if (l2arc_feed_again && wrote > (wanted / 2))
8515
interval = (hz * l2arc_feed_min_ms) / 1000;
8516
else
8517
interval = hz * l2arc_feed_secs;
8518
8519
now = ddi_get_lbolt();
8520
next = MAX(now, MIN(now + interval, began + interval));
8521
8522
return (next);
8523
}
8524
8525
static boolean_t
8526
l2arc_dev_invalid(const l2arc_dev_t *dev)
8527
{
8528
/*
8529
* We want to skip devices that are being rebuilt, trimmed,
8530
* removed, or belong to a spa that is being exported.
8531
*/
8532
return (dev->l2ad_vdev == NULL || vdev_is_dead(dev->l2ad_vdev) ||
8533
dev->l2ad_rebuild || dev->l2ad_trim_all ||
8534
dev->l2ad_spa == NULL || dev->l2ad_spa->spa_is_exporting);
8535
}
8536
8537
/*
8538
* Cycle through L2ARC devices. This is how L2ARC load balances.
8539
* If a device is returned, this also returns holding the spa config lock.
8540
*/
8541
static l2arc_dev_t *
8542
l2arc_dev_get_next(void)
8543
{
8544
l2arc_dev_t *first, *next = NULL;
8545
8546
/*
8547
* Lock out the removal of spas (spa_namespace_lock), then removal
8548
* of cache devices (l2arc_dev_mtx). Once a device has been selected,
8549
* both locks will be dropped and a spa config lock held instead.
8550
*/
8551
mutex_enter(&spa_namespace_lock);
8552
mutex_enter(&l2arc_dev_mtx);
8553
8554
/* if there are no vdevs, there is nothing to do */
8555
if (l2arc_ndev == 0)
8556
goto out;
8557
8558
first = NULL;
8559
next = l2arc_dev_last;
8560
do {
8561
/* loop around the list looking for a non-faulted vdev */
8562
if (next == NULL) {
8563
next = list_head(l2arc_dev_list);
8564
} else {
8565
next = list_next(l2arc_dev_list, next);
8566
if (next == NULL)
8567
next = list_head(l2arc_dev_list);
8568
}
8569
8570
/* if we have come back to the start, bail out */
8571
if (first == NULL)
8572
first = next;
8573
else if (next == first)
8574
break;
8575
8576
ASSERT3P(next, !=, NULL);
8577
} while (l2arc_dev_invalid(next));
8578
8579
/* if we were unable to find any usable vdevs, return NULL */
8580
if (l2arc_dev_invalid(next))
8581
next = NULL;
8582
8583
l2arc_dev_last = next;
8584
8585
out:
8586
mutex_exit(&l2arc_dev_mtx);
8587
8588
/*
8589
* Grab the config lock to prevent the 'next' device from being
8590
* removed while we are writing to it.
8591
*/
8592
if (next != NULL)
8593
spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER);
8594
mutex_exit(&spa_namespace_lock);
8595
8596
return (next);
8597
}
8598
8599
/*
8600
* Free buffers that were tagged for destruction.
8601
*/
8602
static void
8603
l2arc_do_free_on_write(void)
8604
{
8605
l2arc_data_free_t *df;
8606
8607
mutex_enter(&l2arc_free_on_write_mtx);
8608
while ((df = list_remove_head(l2arc_free_on_write)) != NULL) {
8609
ASSERT3P(df->l2df_abd, !=, NULL);
8610
abd_free(df->l2df_abd);
8611
kmem_free(df, sizeof (l2arc_data_free_t));
8612
}
8613
mutex_exit(&l2arc_free_on_write_mtx);
8614
}
8615
8616
/*
8617
* A write to a cache device has completed. Update all headers to allow
8618
* reads from these buffers to begin.
8619
*/
8620
static void
8621
l2arc_write_done(zio_t *zio)
8622
{
8623
l2arc_write_callback_t *cb;
8624
l2arc_lb_abd_buf_t *abd_buf;
8625
l2arc_lb_ptr_buf_t *lb_ptr_buf;
8626
l2arc_dev_t *dev;
8627
l2arc_dev_hdr_phys_t *l2dhdr;
8628
list_t *buflist;
8629
arc_buf_hdr_t *head, *hdr, *hdr_prev;
8630
kmutex_t *hash_lock;
8631
int64_t bytes_dropped = 0;
8632
8633
cb = zio->io_private;
8634
ASSERT3P(cb, !=, NULL);
8635
dev = cb->l2wcb_dev;
8636
l2dhdr = dev->l2ad_dev_hdr;
8637
ASSERT3P(dev, !=, NULL);
8638
head = cb->l2wcb_head;
8639
ASSERT3P(head, !=, NULL);
8640
buflist = &dev->l2ad_buflist;
8641
ASSERT3P(buflist, !=, NULL);
8642
DTRACE_PROBE2(l2arc__iodone, zio_t *, zio,
8643
l2arc_write_callback_t *, cb);
8644
8645
/*
8646
* All writes completed, or an error was hit.
8647
*/
8648
top:
8649
mutex_enter(&dev->l2ad_mtx);
8650
for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) {
8651
hdr_prev = list_prev(buflist, hdr);
8652
8653
hash_lock = HDR_LOCK(hdr);
8654
8655
/*
8656
* We cannot use mutex_enter or else we can deadlock
8657
* with l2arc_write_buffers (due to swapping the order
8658
* the hash lock and l2ad_mtx are taken).
8659
*/
8660
if (!mutex_tryenter(hash_lock)) {
8661
/*
8662
* Missed the hash lock. We must retry so we
8663
* don't leave the ARC_FLAG_L2_WRITING bit set.
8664
*/
8665
ARCSTAT_BUMP(arcstat_l2_writes_lock_retry);
8666
8667
/*
8668
* We don't want to rescan the headers we've
8669
* already marked as having been written out, so
8670
* we reinsert the head node so we can pick up
8671
* where we left off.
8672
*/
8673
list_remove(buflist, head);
8674
list_insert_after(buflist, hdr, head);
8675
8676
mutex_exit(&dev->l2ad_mtx);
8677
8678
/*
8679
* We wait for the hash lock to become available
8680
* to try and prevent busy waiting, and increase
8681
* the chance we'll be able to acquire the lock
8682
* the next time around.
8683
*/
8684
mutex_enter(hash_lock);
8685
mutex_exit(hash_lock);
8686
goto top;
8687
}
8688
8689
/*
8690
* We could not have been moved into the arc_l2c_only
8691
* state while in-flight due to our ARC_FLAG_L2_WRITING
8692
* bit being set. Let's just ensure that's being enforced.
8693
*/
8694
ASSERT(HDR_HAS_L1HDR(hdr));
8695
8696
/*
8697
* Skipped - drop L2ARC entry and mark the header as no
8698
* longer L2 eligible.
8699
*/
8700
if (zio->io_error != 0) {
8701
/*
8702
* Error - drop L2ARC entry.
8703
*/
8704
list_remove(buflist, hdr);
8705
arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
8706
8707
uint64_t psize = HDR_GET_PSIZE(hdr);
8708
l2arc_hdr_arcstats_decrement(hdr);
8709
8710
ASSERT(dev->l2ad_vdev != NULL);
8711
8712
bytes_dropped +=
8713
vdev_psize_to_asize(dev->l2ad_vdev, psize);
8714
(void) zfs_refcount_remove_many(&dev->l2ad_alloc,
8715
arc_hdr_size(hdr), hdr);
8716
}
8717
8718
/*
8719
* Allow ARC to begin reads and ghost list evictions to
8720
* this L2ARC entry.
8721
*/
8722
arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING);
8723
8724
mutex_exit(hash_lock);
8725
}
8726
8727
/*
8728
* Free the allocated abd buffers for writing the log blocks.
8729
* If the zio failed reclaim the allocated space and remove the
8730
* pointers to these log blocks from the log block pointer list
8731
* of the L2ARC device.
8732
*/
8733
while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) {
8734
abd_free(abd_buf->abd);
8735
zio_buf_free(abd_buf, sizeof (*abd_buf));
8736
if (zio->io_error != 0) {
8737
lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list);
8738
/*
8739
* L2BLK_GET_PSIZE returns aligned size for log
8740
* blocks.
8741
*/
8742
uint64_t asize =
8743
L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop);
8744
bytes_dropped += asize;
8745
ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize);
8746
ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count);
8747
zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize,
8748
lb_ptr_buf);
8749
(void) zfs_refcount_remove(&dev->l2ad_lb_count,
8750
lb_ptr_buf);
8751
kmem_free(lb_ptr_buf->lb_ptr,
8752
sizeof (l2arc_log_blkptr_t));
8753
kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t));
8754
}
8755
}
8756
list_destroy(&cb->l2wcb_abd_list);
8757
8758
if (zio->io_error != 0) {
8759
ARCSTAT_BUMP(arcstat_l2_writes_error);
8760
8761
/*
8762
* Restore the lbps array in the header to its previous state.
8763
* If the list of log block pointers is empty, zero out the
8764
* log block pointers in the device header.
8765
*/
8766
lb_ptr_buf = list_head(&dev->l2ad_lbptr_list);
8767
for (int i = 0; i < 2; i++) {
8768
if (lb_ptr_buf == NULL) {
8769
/*
8770
* If the list is empty zero out the device
8771
* header. Otherwise zero out the second log
8772
* block pointer in the header.
8773
*/
8774
if (i == 0) {
8775
memset(l2dhdr, 0,
8776
dev->l2ad_dev_hdr_asize);
8777
} else {
8778
memset(&l2dhdr->dh_start_lbps[i], 0,
8779
sizeof (l2arc_log_blkptr_t));
8780
}
8781
break;
8782
}
8783
memcpy(&l2dhdr->dh_start_lbps[i], lb_ptr_buf->lb_ptr,
8784
sizeof (l2arc_log_blkptr_t));
8785
lb_ptr_buf = list_next(&dev->l2ad_lbptr_list,
8786
lb_ptr_buf);
8787
}
8788
}
8789
8790
ARCSTAT_BUMP(arcstat_l2_writes_done);
8791
list_remove(buflist, head);
8792
ASSERT(!HDR_HAS_L1HDR(head));
8793
kmem_cache_free(hdr_l2only_cache, head);
8794
mutex_exit(&dev->l2ad_mtx);
8795
8796
ASSERT(dev->l2ad_vdev != NULL);
8797
vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0);
8798
8799
l2arc_do_free_on_write();
8800
8801
kmem_free(cb, sizeof (l2arc_write_callback_t));
8802
}
8803
8804
static int
8805
l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb)
8806
{
8807
int ret;
8808
spa_t *spa = zio->io_spa;
8809
arc_buf_hdr_t *hdr = cb->l2rcb_hdr;
8810
blkptr_t *bp = zio->io_bp;
8811
uint8_t salt[ZIO_DATA_SALT_LEN];
8812
uint8_t iv[ZIO_DATA_IV_LEN];
8813
uint8_t mac[ZIO_DATA_MAC_LEN];
8814
boolean_t no_crypt = B_FALSE;
8815
8816
/*
8817
* ZIL data is never written to the L2ARC, so we don't need
8818
* special handling for its unique MAC storage.
8819
*/
8820
ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG);
8821
ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
8822
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
8823
8824
/*
8825
* If the data was encrypted, decrypt it now. Note that
8826
* we must check the bp here and not the hdr, since the
8827
* hdr does not have its encryption parameters updated
8828
* until arc_read_done().
8829
*/
8830
if (BP_IS_ENCRYPTED(bp)) {
8831
abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr,
8832
ARC_HDR_USE_RESERVE);
8833
8834
zio_crypt_decode_params_bp(bp, salt, iv);
8835
zio_crypt_decode_mac_bp(bp, mac);
8836
8837
ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb,
8838
BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp),
8839
salt, iv, mac, HDR_GET_PSIZE(hdr), eabd,
8840
hdr->b_l1hdr.b_pabd, &no_crypt);
8841
if (ret != 0) {
8842
arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr);
8843
goto error;
8844
}
8845
8846
/*
8847
* If we actually performed decryption, replace b_pabd
8848
* with the decrypted data. Otherwise we can just throw
8849
* our decryption buffer away.
8850
*/
8851
if (!no_crypt) {
8852
arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
8853
arc_hdr_size(hdr), hdr);
8854
hdr->b_l1hdr.b_pabd = eabd;
8855
zio->io_abd = eabd;
8856
} else {
8857
arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr);
8858
}
8859
}
8860
8861
/*
8862
* If the L2ARC block was compressed, but ARC compression
8863
* is disabled we decompress the data into a new buffer and
8864
* replace the existing data.
8865
*/
8866
if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
8867
!HDR_COMPRESSION_ENABLED(hdr)) {
8868
abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr,
8869
ARC_HDR_USE_RESERVE);
8870
8871
ret = zio_decompress_data(HDR_GET_COMPRESS(hdr),
8872
hdr->b_l1hdr.b_pabd, cabd, HDR_GET_PSIZE(hdr),
8873
HDR_GET_LSIZE(hdr), &hdr->b_complevel);
8874
if (ret != 0) {
8875
arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr);
8876
goto error;
8877
}
8878
8879
arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
8880
arc_hdr_size(hdr), hdr);
8881
hdr->b_l1hdr.b_pabd = cabd;
8882
zio->io_abd = cabd;
8883
zio->io_size = HDR_GET_LSIZE(hdr);
8884
}
8885
8886
return (0);
8887
8888
error:
8889
return (ret);
8890
}
8891
8892
8893
/*
8894
* A read to a cache device completed. Validate buffer contents before
8895
* handing over to the regular ARC routines.
8896
*/
8897
static void
8898
l2arc_read_done(zio_t *zio)
8899
{
8900
int tfm_error = 0;
8901
l2arc_read_callback_t *cb = zio->io_private;
8902
arc_buf_hdr_t *hdr;
8903
kmutex_t *hash_lock;
8904
boolean_t valid_cksum;
8905
boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) &&
8906
(cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT));
8907
8908
ASSERT3P(zio->io_vd, !=, NULL);
8909
ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE);
8910
8911
spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd);
8912
8913
ASSERT3P(cb, !=, NULL);
8914
hdr = cb->l2rcb_hdr;
8915
ASSERT3P(hdr, !=, NULL);
8916
8917
hash_lock = HDR_LOCK(hdr);
8918
mutex_enter(hash_lock);
8919
ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
8920
8921
/*
8922
* If the data was read into a temporary buffer,
8923
* move it and free the buffer.
8924
*/
8925
if (cb->l2rcb_abd != NULL) {
8926
ASSERT3U(arc_hdr_size(hdr), <, zio->io_size);
8927
if (zio->io_error == 0) {
8928
if (using_rdata) {
8929
abd_copy(hdr->b_crypt_hdr.b_rabd,
8930
cb->l2rcb_abd, arc_hdr_size(hdr));
8931
} else {
8932
abd_copy(hdr->b_l1hdr.b_pabd,
8933
cb->l2rcb_abd, arc_hdr_size(hdr));
8934
}
8935
}
8936
8937
/*
8938
* The following must be done regardless of whether
8939
* there was an error:
8940
* - free the temporary buffer
8941
* - point zio to the real ARC buffer
8942
* - set zio size accordingly
8943
* These are required because zio is either re-used for
8944
* an I/O of the block in the case of the error
8945
* or the zio is passed to arc_read_done() and it
8946
* needs real data.
8947
*/
8948
abd_free(cb->l2rcb_abd);
8949
zio->io_size = zio->io_orig_size = arc_hdr_size(hdr);
8950
8951
if (using_rdata) {
8952
ASSERT(HDR_HAS_RABD(hdr));
8953
zio->io_abd = zio->io_orig_abd =
8954
hdr->b_crypt_hdr.b_rabd;
8955
} else {
8956
ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
8957
zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd;
8958
}
8959
}
8960
8961
ASSERT3P(zio->io_abd, !=, NULL);
8962
8963
/*
8964
* Check this survived the L2ARC journey.
8965
*/
8966
ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd ||
8967
(HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd));
8968
zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */
8969
zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */
8970
zio->io_prop.zp_complevel = hdr->b_complevel;
8971
8972
valid_cksum = arc_cksum_is_equal(hdr, zio);
8973
8974
/*
8975
* b_rabd will always match the data as it exists on disk if it is
8976
* being used. Therefore if we are reading into b_rabd we do not
8977
* attempt to untransform the data.
8978
*/
8979
if (valid_cksum && !using_rdata)
8980
tfm_error = l2arc_untransform(zio, cb);
8981
8982
if (valid_cksum && tfm_error == 0 && zio->io_error == 0 &&
8983
!HDR_L2_EVICTED(hdr)) {
8984
mutex_exit(hash_lock);
8985
zio->io_private = hdr;
8986
arc_read_done(zio);
8987
} else {
8988
/*
8989
* Buffer didn't survive caching. Increment stats and
8990
* reissue to the original storage device.
8991
*/
8992
if (zio->io_error != 0) {
8993
ARCSTAT_BUMP(arcstat_l2_io_error);
8994
} else {
8995
zio->io_error = SET_ERROR(EIO);
8996
}
8997
if (!valid_cksum || tfm_error != 0)
8998
ARCSTAT_BUMP(arcstat_l2_cksum_bad);
8999
9000
/*
9001
* If there's no waiter, issue an async i/o to the primary
9002
* storage now. If there *is* a waiter, the caller must
9003
* issue the i/o in a context where it's OK to block.
9004
*/
9005
if (zio->io_waiter == NULL) {
9006
zio_t *pio = zio_unique_parent(zio);
9007
void *abd = (using_rdata) ?
9008
hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd;
9009
9010
ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL);
9011
9012
zio = zio_read(pio, zio->io_spa, zio->io_bp,
9013
abd, zio->io_size, arc_read_done,
9014
hdr, zio->io_priority, cb->l2rcb_flags,
9015
&cb->l2rcb_zb);
9016
9017
/*
9018
* Original ZIO will be freed, so we need to update
9019
* ARC header with the new ZIO pointer to be used
9020
* by zio_change_priority() in arc_read().
9021
*/
9022
for (struct arc_callback *acb = hdr->b_l1hdr.b_acb;
9023
acb != NULL; acb = acb->acb_next)
9024
acb->acb_zio_head = zio;
9025
9026
mutex_exit(hash_lock);
9027
zio_nowait(zio);
9028
} else {
9029
mutex_exit(hash_lock);
9030
}
9031
}
9032
9033
kmem_free(cb, sizeof (l2arc_read_callback_t));
9034
}
9035
9036
/*
9037
* This is the list priority from which the L2ARC will search for pages to
9038
* cache. This is used within loops (0..3) to cycle through lists in the
9039
* desired order. This order can have a significant effect on cache
9040
* performance.
9041
*
9042
* Currently the metadata lists are hit first, MFU then MRU, followed by
9043
* the data lists. This function returns a locked list, and also returns
9044
* the lock pointer.
9045
*/
9046
static multilist_sublist_t *
9047
l2arc_sublist_lock(int list_num)
9048
{
9049
multilist_t *ml = NULL;
9050
unsigned int idx;
9051
9052
ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES);
9053
9054
switch (list_num) {
9055
case 0:
9056
ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA];
9057
break;
9058
case 1:
9059
ml = &arc_mru->arcs_list[ARC_BUFC_METADATA];
9060
break;
9061
case 2:
9062
ml = &arc_mfu->arcs_list[ARC_BUFC_DATA];
9063
break;
9064
case 3:
9065
ml = &arc_mru->arcs_list[ARC_BUFC_DATA];
9066
break;
9067
default:
9068
return (NULL);
9069
}
9070
9071
/*
9072
* Return a randomly-selected sublist. This is acceptable
9073
* because the caller feeds only a little bit of data for each
9074
* call (8MB). Subsequent calls will result in different
9075
* sublists being selected.
9076
*/
9077
idx = multilist_get_random_index(ml);
9078
return (multilist_sublist_lock_idx(ml, idx));
9079
}
9080
9081
/*
9082
* Calculates the maximum overhead of L2ARC metadata log blocks for a given
9083
* L2ARC write size. l2arc_evict and l2arc_write_size need to include this
9084
* overhead in processing to make sure there is enough headroom available
9085
* when writing buffers.
9086
*/
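/*
 * For example (sizes illustrative): a 32 MiB write can reference at most
 * 32 MiB >> SPA_MINBLOCKSHIFT = 65536 minimum-sized (512-byte) buffers.
 * With the full 1022 entries per log block that is 65536 / 1022, rounded
 * up to 65 log blocks, so the worst-case overhead is 65 times the
 * allocated size of one l2arc_log_blk_phys_t.
 */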
9087
static inline uint64_t
9088
l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev)
9089
{
9090
if (dev->l2ad_log_entries == 0) {
9091
return (0);
9092
} else {
9093
ASSERT(dev->l2ad_vdev != NULL);
9094
9095
uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT;
9096
9097
uint64_t log_blocks = (log_entries +
9098
dev->l2ad_log_entries - 1) /
9099
dev->l2ad_log_entries;
9100
9101
return (vdev_psize_to_asize(dev->l2ad_vdev,
9102
sizeof (l2arc_log_blk_phys_t)) * log_blocks);
9103
}
9104
}
9105
9106
/*
9107
* Evict buffers from the device write hand to the distance specified in
9108
* bytes. This distance may span populated buffers, it may span nothing.
9109
* This is clearing a region on the L2ARC device ready for writing.
9110
* If the 'all' boolean is set, every buffer is evicted.
9111
*/
9112
static void
9113
l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all)
9114
{
9115
list_t *buflist;
9116
arc_buf_hdr_t *hdr, *hdr_prev;
9117
kmutex_t *hash_lock;
9118
uint64_t taddr;
9119
l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev;
9120
vdev_t *vd = dev->l2ad_vdev;
9121
boolean_t rerun;
9122
9123
ASSERT(vd != NULL || all);
9124
ASSERT(dev->l2ad_spa != NULL || all);
9125
9126
buflist = &dev->l2ad_buflist;
9127
9128
top:
9129
rerun = B_FALSE;
9130
if (dev->l2ad_hand + distance > dev->l2ad_end) {
9131
/*
9132
* When there is no space to accommodate upcoming writes,
9133
* evict to the end. Then bump the write and evict hands
9134
* to the start and iterate. This iteration does not
9135
* happen indefinitely as we make sure in
9136
* l2arc_write_size() that when the write hand is reset,
9137
* the write size does not exceed the end of the device.
9138
*/
9139
rerun = B_TRUE;
9140
taddr = dev->l2ad_end;
9141
} else {
9142
taddr = dev->l2ad_hand + distance;
9143
}
9144
DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist,
9145
uint64_t, taddr, boolean_t, all);
9146
9147
if (!all) {
9148
/*
9149
* This check has to be placed after deciding whether to
9150
* iterate (rerun).
9151
*/
9152
if (dev->l2ad_first) {
9153
/*
9154
* This is the first sweep through the device. There is
9155
* nothing to evict. We have already trimmed the
9156
* whole device.
9157
*/
9158
goto out;
9159
} else {
9160
/*
9161
* Trim the space to be evicted.
9162
*/
9163
if (vd->vdev_has_trim && dev->l2ad_evict < taddr &&
9164
l2arc_trim_ahead > 0) {
9165
/*
9166
* We have to drop the spa_config lock because
9167
* vdev_trim_range() will acquire it.
9168
* l2ad_evict already accounts for the label
9169
* size. To prevent vdev_trim_ranges() from
9170
* adding it again, we subtract it from
9171
* l2ad_evict.
9172
*/
9173
spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev);
9174
vdev_trim_simple(vd,
9175
dev->l2ad_evict - VDEV_LABEL_START_SIZE,
9176
taddr - dev->l2ad_evict);
9177
spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev,
9178
RW_READER);
9179
}
9180
9181
/*
9182
* When rebuilding L2ARC we retrieve the evict hand
9183
* from the header of the device. Of note, l2arc_evict()
9184
* does not actually delete buffers from the cache
9185
* device, but trimming may do so depending on the
9186
* hardware implementation. Thus keeping track of the
9187
* evict hand is useful.
9188
*/
9189
dev->l2ad_evict = MAX(dev->l2ad_evict, taddr);
9190
}
9191
}
9192
9193
retry:
9194
mutex_enter(&dev->l2ad_mtx);
9195
/*
9196
* We have to account for evicted log blocks. Run vdev_space_update()
9197
* on log blocks whose offset (in bytes) is before the evicted offset
9198
* (in bytes) by searching in the list of pointers to log blocks
9199
* present in the L2ARC device.
9200
*/
9201
for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf;
9202
lb_ptr_buf = lb_ptr_buf_prev) {
9203
9204
lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf);
9205
9206
/* L2BLK_GET_PSIZE returns aligned size for log blocks */
9207
uint64_t asize = L2BLK_GET_PSIZE(
9208
(lb_ptr_buf->lb_ptr)->lbp_prop);
9209
9210
/*
9211
* We don't worry about log blocks left behind (ie
9212
* lbp_payload_start < l2ad_hand) because l2arc_write_buffers()
9213
* will never write more than l2arc_evict() evicts.
9214
*/
9215
if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) {
9216
break;
9217
} else {
9218
if (vd != NULL)
9219
vdev_space_update(vd, -asize, 0, 0);
9220
ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize);
9221
ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count);
9222
zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize,
9223
lb_ptr_buf);
9224
(void) zfs_refcount_remove(&dev->l2ad_lb_count,
9225
lb_ptr_buf);
9226
list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf);
9227
kmem_free(lb_ptr_buf->lb_ptr,
9228
sizeof (l2arc_log_blkptr_t));
9229
kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t));
9230
}
9231
}
9232
9233
for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) {
9234
hdr_prev = list_prev(buflist, hdr);
9235
9236
ASSERT(!HDR_EMPTY(hdr));
9237
hash_lock = HDR_LOCK(hdr);
9238
9239
/*
9240
* We cannot use mutex_enter or else we can deadlock
9241
* with l2arc_write_buffers (due to swapping the order
9242
* the hash lock and l2ad_mtx are taken).
9243
*/
9244
if (!mutex_tryenter(hash_lock)) {
9245
/*
9246
* Missed the hash lock. Retry.
9247
*/
9248
ARCSTAT_BUMP(arcstat_l2_evict_lock_retry);
9249
mutex_exit(&dev->l2ad_mtx);
9250
mutex_enter(hash_lock);
9251
mutex_exit(hash_lock);
9252
goto retry;
9253
}
9254
9255
/*
9256
* A header can't be on this list if it doesn't have an L2 header.
9257
*/
9258
ASSERT(HDR_HAS_L2HDR(hdr));
9259
9260
/* Ensure this header has finished being written. */
9261
ASSERT(!HDR_L2_WRITING(hdr));
9262
ASSERT(!HDR_L2_WRITE_HEAD(hdr));
9263
9264
if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict ||
9265
hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) {
9266
/*
9267
* We've evicted to the target address,
9268
* or the end of the device.
9269
*/
9270
mutex_exit(hash_lock);
9271
break;
9272
}
9273
9274
if (!HDR_HAS_L1HDR(hdr)) {
9275
ASSERT(!HDR_L2_READING(hdr));
9276
/*
9277
* This doesn't exist in the ARC. Destroy.
9278
* arc_hdr_destroy() will call list_remove()
9279
* and decrement arcstat_l2_lsize.
9280
*/
9281
arc_change_state(arc_anon, hdr);
9282
arc_hdr_destroy(hdr);
9283
} else {
9284
ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only);
9285
ARCSTAT_BUMP(arcstat_l2_evict_l1cached);
9286
/*
9287
* Invalidate issued or about to be issued
9288
* reads, since we may be about to write
9289
* over this location.
9290
*/
9291
if (HDR_L2_READING(hdr)) {
9292
ARCSTAT_BUMP(arcstat_l2_evict_reading);
9293
arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED);
9294
}
9295
9296
arc_hdr_l2hdr_destroy(hdr);
9297
}
9298
mutex_exit(hash_lock);
9299
}
9300
mutex_exit(&dev->l2ad_mtx);
9301
9302
out:
9303
/*
9304
* We need to check if we evict all buffers, otherwise we may iterate
9305
* unnecessarily.
9306
*/
9307
if (!all && rerun) {
9308
/*
9309
* Bump device hand to the device start if it is approaching the
9310
* end. l2arc_evict() has already evicted ahead for this case.
9311
*/
9312
dev->l2ad_hand = dev->l2ad_start;
9313
dev->l2ad_evict = dev->l2ad_start;
9314
dev->l2ad_first = B_FALSE;
9315
goto top;
9316
}
9317
9318
if (!all) {
9319
/*
9320
* In case of cache device removal (all) the following
9321
* assertions may be violated without functional consequences
9322
* as the device is about to be removed.
9323
*/
9324
ASSERT3U(dev->l2ad_hand + distance, <=, dev->l2ad_end);
9325
if (!dev->l2ad_first)
9326
ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict);
9327
}
9328
}
9329
9330
/*
9331
* Handle any abd transforms that might be required for writing to the L2ARC.
9332
* If successful, this function will always return an abd with the data
9333
* transformed as it is on disk in a new abd of asize bytes.
9334
*/
9335
static int
9336
l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize,
9337
abd_t **abd_out)
9338
{
9339
int ret;
9340
abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd;
9341
enum zio_compress compress = HDR_GET_COMPRESS(hdr);
9342
uint64_t psize = HDR_GET_PSIZE(hdr);
9343
uint64_t size = arc_hdr_size(hdr);
9344
boolean_t ismd = HDR_ISTYPE_METADATA(hdr);
9345
boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
9346
dsl_crypto_key_t *dck = NULL;
9347
uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 };
9348
boolean_t no_crypt = B_FALSE;
9349
9350
ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
9351
!HDR_COMPRESSION_ENABLED(hdr)) ||
9352
HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize);
9353
ASSERT3U(psize, <=, asize);
9354
9355
/*
9356
* If this data simply needs its own buffer, we simply allocate it
9357
* and copy the data. This may be done to eliminate a dependency on a
9358
* shared buffer or to reallocate the buffer to match asize.
9359
*/
9360
if (HDR_HAS_RABD(hdr)) {
9361
ASSERT3U(asize, >, psize);
9362
to_write = abd_alloc_for_io(asize, ismd);
9363
abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize);
9364
abd_zero_off(to_write, psize, asize - psize);
9365
goto out;
9366
}
9367
9368
if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) &&
9369
!HDR_ENCRYPTED(hdr)) {
9370
ASSERT3U(size, ==, psize);
9371
to_write = abd_alloc_for_io(asize, ismd);
9372
abd_copy(to_write, hdr->b_l1hdr.b_pabd, size);
9373
if (asize > size)
9374
abd_zero_off(to_write, size, asize - size);
9375
goto out;
9376
}
9377
9378
if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) {
9379
cabd = abd_alloc_for_io(MAX(size, asize), ismd);
9380
uint64_t csize = zio_compress_data(compress, to_write, &cabd,
9381
size, MIN(size, psize), hdr->b_complevel);
9382
if (csize >= size || csize > psize) {
9383
/*
9384
* We can't re-compress the block into the original
9385
* psize. Even if it fits into asize, it does not
9386
* matter, since checksum will never match on read.
9387
*/
9388
abd_free(cabd);
9389
return (SET_ERROR(EIO));
9390
}
9391
if (asize > csize)
9392
abd_zero_off(cabd, csize, asize - csize);
9393
to_write = cabd;
9394
}
9395
9396
if (HDR_ENCRYPTED(hdr)) {
9397
eabd = abd_alloc_for_io(asize, ismd);
9398
9399
/*
9400
* If the dataset was disowned before the buffer
9401
* made it to this point, the key to re-encrypt
9402
* it won't be available. In this case we simply
9403
* won't write the buffer to the L2ARC.
9404
*/
9405
ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj,
9406
FTAG, &dck);
9407
if (ret != 0)
9408
goto error;
9409
9410
ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key,
9411
hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt,
9412
hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd,
9413
&no_crypt);
9414
if (ret != 0)
9415
goto error;
9416
9417
if (no_crypt)
9418
abd_copy(eabd, to_write, psize);
9419
9420
if (psize != asize)
9421
abd_zero_off(eabd, psize, asize - psize);
9422
9423
/* assert that the MAC we got here matches the one we saved */
9424
ASSERT0(memcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN));
9425
spa_keystore_dsl_key_rele(spa, dck, FTAG);
9426
9427
if (to_write == cabd)
9428
abd_free(cabd);
9429
9430
to_write = eabd;
9431
}
9432
9433
out:
9434
ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd);
9435
*abd_out = to_write;
9436
return (0);
9437
9438
error:
9439
if (dck != NULL)
9440
spa_keystore_dsl_key_rele(spa, dck, FTAG);
9441
if (cabd != NULL)
9442
abd_free(cabd);
9443
if (eabd != NULL)
9444
abd_free(eabd);
9445
9446
*abd_out = NULL;
9447
return (ret);
9448
}
9449
9450
static void
9451
l2arc_blk_fetch_done(zio_t *zio)
9452
{
9453
l2arc_read_callback_t *cb;
9454
9455
cb = zio->io_private;
9456
if (cb->l2rcb_abd != NULL)
9457
abd_free(cb->l2rcb_abd);
9458
kmem_free(cb, sizeof (l2arc_read_callback_t));
9459
}
9460
9461
/*
9462
* Find and write ARC buffers to the L2ARC device.
9463
*
9464
* An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
9465
* for reading until they have completed writing.
9466
* The headroom_boost is an in-out parameter used to maintain headroom boost
9467
* state between calls to this function.
9468
*
9469
* Returns the number of bytes actually written (which may be smaller than
9470
* the delta by which the device hand has changed due to alignment and the
9471
* writing of log blocks).
9472
*/
9473
static uint64_t
9474
l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz)
9475
{
9476
arc_buf_hdr_t *hdr, *head, *marker;
9477
uint64_t write_asize, write_psize, headroom;
9478
boolean_t full, from_head = !arc_warm;
9479
l2arc_write_callback_t *cb = NULL;
9480
zio_t *pio, *wzio;
9481
uint64_t guid = spa_load_guid(spa);
9482
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
9483
9484
ASSERT3P(dev->l2ad_vdev, !=, NULL);
9485
9486
pio = NULL;
9487
write_asize = write_psize = 0;
9488
full = B_FALSE;
9489
head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE);
9490
arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR);
9491
marker = arc_state_alloc_marker();
9492
9493
/*
9494
* Copy buffers for L2ARC writing.
9495
*/
9496
for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) {
9497
/*
9498
* pass == 0: MFU meta
9499
* pass == 1: MRU meta
9500
* pass == 2: MFU data
9501
* pass == 3: MRU data
9502
*/
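/*
 * l2arc_mfuonly == 1 skips both MRU passes below; any larger value
 * still feeds MRU metadata but skips MRU data.
 */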
9503
if (l2arc_mfuonly == 1) {
9504
if (pass == 1 || pass == 3)
9505
continue;
9506
} else if (l2arc_mfuonly > 1) {
9507
if (pass == 3)
9508
continue;
9509
}
9510
9511
uint64_t passed_sz = 0;
9512
headroom = target_sz * l2arc_headroom;
9513
if (zfs_compressed_arc_enabled)
9514
headroom = (headroom * l2arc_headroom_boost) / 100;
9515
9516
/*
9517
* Until the ARC is warm and starts to evict, read from the
9518
* head of the ARC lists rather than the tail.
9519
*/
9520
multilist_sublist_t *mls = l2arc_sublist_lock(pass);
9521
ASSERT3P(mls, !=, NULL);
9522
if (from_head)
9523
hdr = multilist_sublist_head(mls);
9524
else
9525
hdr = multilist_sublist_tail(mls);
9526
9527
while (hdr != NULL) {
9528
kmutex_t *hash_lock;
9529
abd_t *to_write = NULL;
9530
9531
hash_lock = HDR_LOCK(hdr);
9532
if (!mutex_tryenter(hash_lock)) {
9533
skip:
9534
/* Skip this buffer rather than waiting. */
9535
if (from_head)
9536
hdr = multilist_sublist_next(mls, hdr);
9537
else
9538
hdr = multilist_sublist_prev(mls, hdr);
9539
continue;
9540
}
9541
9542
passed_sz += HDR_GET_LSIZE(hdr);
9543
if (l2arc_headroom != 0 && passed_sz > headroom) {
9544
/*
9545
* Searched too far.
9546
*/
9547
mutex_exit(hash_lock);
9548
break;
9549
}
9550
9551
if (!l2arc_write_eligible(guid, hdr)) {
9552
mutex_exit(hash_lock);
9553
goto skip;
9554
}
9555
9556
ASSERT(HDR_HAS_L1HDR(hdr));
9557
ASSERT3U(HDR_GET_PSIZE(hdr), >, 0);
9558
ASSERT3U(arc_hdr_size(hdr), >, 0);
9559
ASSERT(hdr->b_l1hdr.b_pabd != NULL ||
9560
HDR_HAS_RABD(hdr));
9561
uint64_t psize = HDR_GET_PSIZE(hdr);
9562
uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev,
9563
psize);
9564
9565
/*
9566
* If the allocated size of this buffer plus the max
9567
* size for the pending log block exceeds the evicted
9568
* target size, terminate writing buffers for this run.
9569
*/
9570
if (write_asize + asize +
9571
sizeof (l2arc_log_blk_phys_t) > target_sz) {
9572
full = B_TRUE;
9573
mutex_exit(hash_lock);
9574
break;
9575
}
9576
9577
/*
9578
* We should not sleep with sublist lock held or it
9579
* may block ARC eviction. Insert a marker to save
9580
* the position and drop the lock.
9581
*/
9582
if (from_head) {
9583
multilist_sublist_insert_after(mls, hdr,
9584
marker);
9585
} else {
9586
multilist_sublist_insert_before(mls, hdr,
9587
marker);
9588
}
9589
multilist_sublist_unlock(mls);
9590
9591
/*
9592
* If this header has b_rabd, we can use this since it
9593
* must always match the data exactly as it exists on
9594
* disk. Otherwise, the L2ARC can normally use the
9595
* hdr's data, but if we're sharing data between the
9596
* hdr and one of its bufs, L2ARC needs its own copy of
9597
* the data so that the ZIO below can't race with the
9598
* buf consumer. To ensure that this copy will be
9599
* available for the lifetime of the ZIO and be cleaned
9600
* up afterwards, we add it to the l2arc_free_on_write
9601
* queue. If we need to apply any transforms to the
9602
* data (compression, encryption) we will also need the
9603
* extra buffer.
9604
*/
9605
if (HDR_HAS_RABD(hdr) && psize == asize) {
9606
to_write = hdr->b_crypt_hdr.b_rabd;
9607
} else if ((HDR_COMPRESSION_ENABLED(hdr) ||
9608
HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) &&
9609
!HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) &&
9610
psize == asize) {
9611
to_write = hdr->b_l1hdr.b_pabd;
9612
} else {
9613
int ret;
9614
arc_buf_contents_t type = arc_buf_type(hdr);
9615
9616
ret = l2arc_apply_transforms(spa, hdr, asize,
9617
&to_write);
9618
if (ret != 0) {
9619
arc_hdr_clear_flags(hdr,
9620
ARC_FLAG_L2CACHE);
9621
mutex_exit(hash_lock);
9622
goto next;
9623
}
9624
9625
l2arc_free_abd_on_write(to_write, asize, type);
9626
}
9627
9628
hdr->b_l2hdr.b_dev = dev;
9629
hdr->b_l2hdr.b_daddr = dev->l2ad_hand;
9630
hdr->b_l2hdr.b_hits = 0;
9631
hdr->b_l2hdr.b_arcs_state =
9632
hdr->b_l1hdr.b_state->arcs_state;
9633
/* l2arc_hdr_arcstats_update() expects a valid asize */
9634
HDR_SET_L2SIZE(hdr, asize);
9635
arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR |
9636
ARC_FLAG_L2_WRITING);
9637
9638
(void) zfs_refcount_add_many(&dev->l2ad_alloc,
9639
arc_hdr_size(hdr), hdr);
9640
l2arc_hdr_arcstats_increment(hdr);
9641
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
9642
9643
mutex_enter(&dev->l2ad_mtx);
9644
if (pio == NULL) {
9645
/*
9646
* Insert a dummy header on the buflist so
9647
* l2arc_write_done() can find where the
9648
* write buffers begin without searching.
9649
*/
9650
list_insert_head(&dev->l2ad_buflist, head);
9651
}
9652
list_insert_head(&dev->l2ad_buflist, hdr);
9653
mutex_exit(&dev->l2ad_mtx);
9654
9655
boolean_t commit = l2arc_log_blk_insert(dev, hdr);
9656
mutex_exit(hash_lock);
9657
9658
if (pio == NULL) {
9659
cb = kmem_alloc(
9660
sizeof (l2arc_write_callback_t), KM_SLEEP);
9661
cb->l2wcb_dev = dev;
9662
cb->l2wcb_head = head;
9663
list_create(&cb->l2wcb_abd_list,
9664
sizeof (l2arc_lb_abd_buf_t),
9665
offsetof(l2arc_lb_abd_buf_t, node));
9666
pio = zio_root(spa, l2arc_write_done, cb,
9667
ZIO_FLAG_CANFAIL);
9668
}
9669
9670
wzio = zio_write_phys(pio, dev->l2ad_vdev,
9671
dev->l2ad_hand, asize, to_write,
9672
ZIO_CHECKSUM_OFF, NULL, hdr,
9673
ZIO_PRIORITY_ASYNC_WRITE,
9674
ZIO_FLAG_CANFAIL, B_FALSE);
9675
9676
DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev,
9677
zio_t *, wzio);
9678
zio_nowait(wzio);
9679
9680
write_psize += psize;
9681
write_asize += asize;
9682
dev->l2ad_hand += asize;
9683
9684
if (commit) {
9685
/* l2ad_hand will be adjusted inside. */
9686
write_asize +=
9687
l2arc_log_blk_commit(dev, pio, cb);
9688
}
9689
9690
next:
9691
multilist_sublist_lock(mls);
9692
if (from_head)
9693
hdr = multilist_sublist_next(mls, marker);
9694
else
9695
hdr = multilist_sublist_prev(mls, marker);
9696
multilist_sublist_remove(mls, marker);
9697
}
9698
9699
multilist_sublist_unlock(mls);
9700
9701
if (full == B_TRUE)
9702
break;
9703
}
9704
9705
arc_state_free_marker(marker);
9706
9707
/* No buffers selected for writing? */
9708
if (pio == NULL) {
9709
ASSERT0(write_psize);
9710
ASSERT(!HDR_HAS_L1HDR(head));
9711
kmem_cache_free(hdr_l2only_cache, head);
9712
9713
/*
9714
* Although we did not write any buffers l2ad_evict may
9715
* have advanced.
9716
*/
9717
if (dev->l2ad_evict != l2dhdr->dh_evict)
9718
l2arc_dev_hdr_update(dev);
9719
9720
return (0);
9721
}
9722
9723
if (!dev->l2ad_first)
9724
ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict);
9725
9726
ASSERT3U(write_asize, <=, target_sz);
9727
ARCSTAT_BUMP(arcstat_l2_writes_sent);
9728
ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize);
9729
9730
dev->l2ad_writing = B_TRUE;
9731
(void) zio_wait(pio);
9732
dev->l2ad_writing = B_FALSE;
9733
9734
/*
9735
* Update the device header after the zio completes as
9736
* l2arc_write_done() may have updated the memory holding the log block
9737
* pointers in the device header.
9738
*/
9739
l2arc_dev_hdr_update(dev);
9740
9741
return (write_asize);
9742
}
9743
9744
static boolean_t
9745
l2arc_hdr_limit_reached(void)
9746
{
9747
int64_t s = aggsum_upper_bound(&arc_sums.arcstat_l2_hdr_size);
9748
9749
return (arc_reclaim_needed() ||
9750
(s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100));
9751
}
9752
9753
/*
9754
* This thread feeds the L2ARC at regular intervals. This is the beating
9755
* heart of the L2ARC.
9756
*/
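/*
 * Sketch of one iteration of the loop below: pick the next cache device
 * with l2arc_dev_get_next() (which returns holding the spa config lock),
 * skip read-only pools and back off under memory pressure, compute the
 * target write size with l2arc_write_size(), evict at least that much
 * space ahead of the write hand with l2arc_evict(), copy eligible
 * buffers with l2arc_write_buffers(), and let l2arc_write_interval()
 * decide how long to sleep before the next pass.
 */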
9757
static __attribute__((noreturn)) void
9758
l2arc_feed_thread(void *unused)
9759
{
9760
(void) unused;
9761
callb_cpr_t cpr;
9762
l2arc_dev_t *dev;
9763
spa_t *spa;
9764
uint64_t size, wrote;
9765
clock_t begin, next = ddi_get_lbolt();
9766
fstrans_cookie_t cookie;
9767
9768
CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG);
9769
9770
mutex_enter(&l2arc_feed_thr_lock);
9771
9772
cookie = spl_fstrans_mark();
9773
while (l2arc_thread_exit == 0) {
9774
CALLB_CPR_SAFE_BEGIN(&cpr);
9775
(void) cv_timedwait_idle(&l2arc_feed_thr_cv,
9776
&l2arc_feed_thr_lock, next);
9777
CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock);
9778
next = ddi_get_lbolt() + hz;
9779
9780
/*
9781
* Quick check for L2ARC devices.
9782
*/
9783
mutex_enter(&l2arc_dev_mtx);
9784
if (l2arc_ndev == 0) {
9785
mutex_exit(&l2arc_dev_mtx);
9786
continue;
9787
}
9788
mutex_exit(&l2arc_dev_mtx);
9789
begin = ddi_get_lbolt();
9790
9791
/*
9792
* This selects the next l2arc device to write to, and in
9793
* doing so the next spa to feed from: dev->l2ad_spa. This
9794
* will return NULL if there are now no l2arc devices or if
9795
* they are all faulted.
9796
*
9797
* If a device is returned, its spa's config lock is also
9798
* held to prevent device removal. l2arc_dev_get_next()
9799
* will grab and release l2arc_dev_mtx.
9800
*/
9801
if ((dev = l2arc_dev_get_next()) == NULL)
9802
continue;
9803
9804
spa = dev->l2ad_spa;
9805
ASSERT3P(spa, !=, NULL);
9806
9807
/*
9808
* If the pool is read-only then force the feed thread to
9809
* sleep a little longer.
9810
*/
9811
if (!spa_writeable(spa)) {
9812
next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz;
9813
spa_config_exit(spa, SCL_L2ARC, dev);
9814
continue;
9815
}
9816
9817
/*
9818
* Avoid contributing to memory pressure.
9819
*/
9820
if (l2arc_hdr_limit_reached()) {
9821
ARCSTAT_BUMP(arcstat_l2_abort_lowmem);
9822
spa_config_exit(spa, SCL_L2ARC, dev);
9823
continue;
9824
}
9825
9826
ARCSTAT_BUMP(arcstat_l2_feeds);
9827
9828
size = l2arc_write_size(dev);
9829
9830
/*
9831
* Evict L2ARC buffers that will be overwritten.
9832
*/
9833
l2arc_evict(dev, size, B_FALSE);
9834
9835
/*
9836
* Write ARC buffers.
9837
*/
9838
wrote = l2arc_write_buffers(spa, dev, size);
9839
9840
/*
9841
* Calculate interval between writes.
9842
*/
9843
next = l2arc_write_interval(begin, size, wrote);
9844
spa_config_exit(spa, SCL_L2ARC, dev);
9845
}
9846
spl_fstrans_unmark(cookie);
9847
9848
l2arc_thread_exit = 0;
9849
cv_broadcast(&l2arc_feed_thr_cv);
9850
CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */
9851
thread_exit();
9852
}
9853
9854
boolean_t
9855
l2arc_vdev_present(vdev_t *vd)
9856
{
9857
return (l2arc_vdev_get(vd) != NULL);
9858
}
9859
9860
/*
9861
* Returns the l2arc_dev_t associated with a particular vdev_t or NULL if
9862
* the vdev_t isn't an L2ARC device.
9863
*/
9864
l2arc_dev_t *
9865
l2arc_vdev_get(vdev_t *vd)
9866
{
9867
l2arc_dev_t *dev;
9868
9869
mutex_enter(&l2arc_dev_mtx);
9870
for (dev = list_head(l2arc_dev_list); dev != NULL;
9871
dev = list_next(l2arc_dev_list, dev)) {
9872
if (dev->l2ad_vdev == vd)
9873
break;
9874
}
9875
mutex_exit(&l2arc_dev_mtx);
9876
9877
return (dev);
9878
}
9879
9880
static void
9881
l2arc_rebuild_dev(l2arc_dev_t *dev, boolean_t reopen)
9882
{
9883
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
9884
uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
9885
spa_t *spa = dev->l2ad_spa;
9886
9887
/*
9888
* After a l2arc_remove_vdev(), the spa_t will no longer be valid
9889
*/
9890
if (spa == NULL)
9891
return;
9892
9893
/*
9894
* The L2ARC has to hold at least the payload of one log block for
9895
* them to be restored (persistent L2ARC). The payload of a log block
9896
* depends on the number of its log entries. We always write log blocks
9897
* with 1022 entries. How many of them are committed or restored depends
9898
* on the size of the L2ARC device. Thus the maximum payload of
9899
* one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device
9900
* is less than that, we reduce the number of committed and restored
9901
* log entries per block so as to enable persistence.
9902
*/
9903
if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) {
9904
dev->l2ad_log_entries = 0;
9905
} else {
9906
dev->l2ad_log_entries = MIN((dev->l2ad_end -
9907
dev->l2ad_start) >> SPA_MAXBLOCKSHIFT,
9908
L2ARC_LOG_BLK_MAX_ENTRIES);
9909
}
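/*
 * Illustrative arithmetic (hypothetical device size): with
 * SPA_MAXBLOCKSIZE = 16 MiB, a usable region (l2ad_end - l2ad_start)
 * of 4 GiB yields
 *     MIN(4 GiB >> SPA_MAXBLOCKSHIFT, L2ARC_LOG_BLK_MAX_ENTRIES) =
 *     MIN(256, 1022) = 256
 * log entries per block, while any device with roughly 16 GiB or more
 * of usable space gets the full 1022 entries.
 */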
9910
9911
/*
9912
* Read the device header. If an error is returned, do not rebuild L2ARC.
9913
*/
9914
if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) {
9915
/*
9916
* If we are onlining a cache device (vdev_reopen) that was
9917
* still present (l2arc_vdev_present()) and rebuild is enabled,
9918
* we should evict all ARC buffers and pointers to log blocks
9919
* and reclaim their space before restoring its contents to
9920
* L2ARC.
9921
*/
9922
if (reopen) {
9923
if (!l2arc_rebuild_enabled) {
9924
return;
9925
} else {
9926
l2arc_evict(dev, 0, B_TRUE);
9927
/* start a new log block */
9928
dev->l2ad_log_ent_idx = 0;
9929
dev->l2ad_log_blk_payload_asize = 0;
9930
dev->l2ad_log_blk_payload_start = 0;
9931
}
9932
}
9933
/*
9934
* Just mark the device as pending for a rebuild. We won't
9935
* be starting a rebuild in line here as it would block pool
9936
* import. Instead spa_load_impl will hand that off to an
9937
* async task which will call l2arc_spa_rebuild_start.
9938
*/
9939
dev->l2ad_rebuild = B_TRUE;
9940
} else if (spa_writeable(spa)) {
9941
/*
9942
* In this case TRIM the whole device if l2arc_trim_ahead > 0,
9943
* otherwise create a new header. We zero out the memory holding
9944
* the header to reset dh_start_lbps. If we TRIM the whole
9945
* device the new header will be written by
9946
* vdev_trim_l2arc_thread() at the end of the TRIM to update the
9947
* trim_state in the header too. When reading the header, if
9948
* trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0
9949
* we opt to TRIM the whole device again.
9950
*/
9951
if (l2arc_trim_ahead > 0) {
9952
dev->l2ad_trim_all = B_TRUE;
9953
} else {
9954
memset(l2dhdr, 0, l2dhdr_asize);
9955
l2arc_dev_hdr_update(dev);
9956
}
9957
}
9958
}
9959
9960
/*
9961
* Add a vdev for use by the L2ARC. By this point the spa has already
9962
* validated the vdev and opened it.
9963
*/
9964
void
9965
l2arc_add_vdev(spa_t *spa, vdev_t *vd)
9966
{
9967
l2arc_dev_t *adddev;
9968
uint64_t l2dhdr_asize;
9969
9970
ASSERT(!l2arc_vdev_present(vd));
9971
9972
/*
9973
* Create a new l2arc device entry.
9974
*/
9975
adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
9976
adddev->l2ad_spa = spa;
9977
adddev->l2ad_vdev = vd;
9978
/* leave extra size for an l2arc device header */
9979
l2dhdr_asize = adddev->l2ad_dev_hdr_asize =
9980
MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift);
9981
adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize;
9982
adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
9983
ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end);
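/*
 * Illustrative layout (assuming an ashift of 12 and a device header
 * that fits in one 4 KiB block): l2dhdr_asize becomes 4 KiB, so the
 * writable ring spans
 *     [VDEV_LABEL_START_SIZE + 4 KiB, VDEV_LABEL_START_SIZE + min_asize)
 * and everything in front of l2ad_start is reserved for the vdev
 * labels plus the persistent device header, which is written at offset
 * VDEV_LABEL_START_SIZE by l2arc_dev_hdr_update().
 */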
9984
adddev->l2ad_hand = adddev->l2ad_start;
9985
adddev->l2ad_evict = adddev->l2ad_start;
9986
adddev->l2ad_first = B_TRUE;
9987
adddev->l2ad_writing = B_FALSE;
9988
adddev->l2ad_trim_all = B_FALSE;
9989
list_link_init(&adddev->l2ad_node);
9990
adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP);
9991
9992
mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL);
9993
/*
9994
* This is a list of all ARC buffers that are still valid on the
9995
* device.
9996
*/
9997
list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
9998
offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));
9999
10000
/*
10001
* This is a list of pointers to log blocks that are still present
10002
* on the device.
10003
*/
10004
list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t),
10005
offsetof(l2arc_lb_ptr_buf_t, node));
10006
10007
vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
10008
zfs_refcount_create(&adddev->l2ad_alloc);
10009
zfs_refcount_create(&adddev->l2ad_lb_asize);
10010
zfs_refcount_create(&adddev->l2ad_lb_count);
10011
10012
/*
10013
* Decide if dev is eligible for L2ARC rebuild or whole device
10014
* trimming. This has to happen before the device is added to the
10015
* cache device list and l2arc_dev_mtx is released. Otherwise
10016
* l2arc_feed_thread() might already start writing on the
10017
* device.
10018
*/
10019
l2arc_rebuild_dev(adddev, B_FALSE);
10020
10021
/*
10022
* Add device to global list
10023
*/
10024
mutex_enter(&l2arc_dev_mtx);
10025
list_insert_head(l2arc_dev_list, adddev);
10026
atomic_inc_64(&l2arc_ndev);
10027
mutex_exit(&l2arc_dev_mtx);
10028
}
10029
10030
/*
10031
* Decide if a vdev is eligible for L2ARC rebuild, called from vdev_reopen()
10032
* in case of onlining a cache device.
10033
*/
10034
void
10035
l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen)
10036
{
10037
l2arc_dev_t *dev = NULL;
10038
10039
dev = l2arc_vdev_get(vd);
10040
ASSERT3P(dev, !=, NULL);
10041
10042
/*
10043
* In contrast to l2arc_add_vdev() we do not have to worry about
10044
* l2arc_feed_thread() invalidating previous content when onlining a
10045
* cache device. The device parameters (l2ad*) are not cleared when
10046
* offlining the device and writing new buffers will not invalidate
10047
* all previous content. In the worst case, only buffers that have not had
10048
* their log block written to the device will be lost.
10049
* When onlining the cache device (i.e. offline->online without exporting
10050
* the pool in between) this happens:
10051
* vdev_reopen() -> vdev_open() -> l2arc_rebuild_vdev()
10052
* | |
10053
* vdev_is_dead() = B_FALSE l2ad_rebuild = B_TRUE
10054
* During the time where vdev_is_dead = B_FALSE and until l2ad_rebuild
10055
* is set to B_TRUE we might write additional buffers to the device.
10056
*/
10057
l2arc_rebuild_dev(dev, reopen);
10058
}
10059
10060
typedef struct {
10061
l2arc_dev_t *rva_l2arc_dev;
10062
uint64_t rva_spa_gid;
10063
uint64_t rva_vdev_gid;
10064
boolean_t rva_async;
10065
10066
} remove_vdev_args_t;
10067
10068
static void
10069
l2arc_device_teardown(void *arg)
10070
{
10071
remove_vdev_args_t *rva = arg;
10072
l2arc_dev_t *remdev = rva->rva_l2arc_dev;
10073
hrtime_t start_time = gethrtime();
10074
10075
/*
10076
* Clear all buflists and ARC references. L2ARC device flush.
10077
*/
10078
l2arc_evict(remdev, 0, B_TRUE);
10079
list_destroy(&remdev->l2ad_buflist);
10080
ASSERT(list_is_empty(&remdev->l2ad_lbptr_list));
10081
list_destroy(&remdev->l2ad_lbptr_list);
10082
mutex_destroy(&remdev->l2ad_mtx);
10083
zfs_refcount_destroy(&remdev->l2ad_alloc);
10084
zfs_refcount_destroy(&remdev->l2ad_lb_asize);
10085
zfs_refcount_destroy(&remdev->l2ad_lb_count);
10086
kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize);
10087
vmem_free(remdev, sizeof (l2arc_dev_t));
10088
10089
uint64_t elapsed = NSEC2MSEC(gethrtime() - start_time);
10090
if (elapsed > 0) {
10091
zfs_dbgmsg("spa %llu, vdev %llu removed in %llu ms",
10092
(u_longlong_t)rva->rva_spa_gid,
10093
(u_longlong_t)rva->rva_vdev_gid,
10094
(u_longlong_t)elapsed);
10095
}
10096
10097
if (rva->rva_async)
10098
arc_async_flush_remove(rva->rva_spa_gid, 2);
10099
kmem_free(rva, sizeof (remove_vdev_args_t));
10100
}
10101
10102
/*
10103
* Remove a vdev from the L2ARC.
10104
*/
10105
void
10106
l2arc_remove_vdev(vdev_t *vd)
10107
{
10108
spa_t *spa = vd->vdev_spa;
10109
boolean_t asynchronous = spa->spa_state == POOL_STATE_EXPORTED ||
10110
spa->spa_state == POOL_STATE_DESTROYED;
10111
10112
/*
10113
* Find the device by vdev
10114
*/
10115
l2arc_dev_t *remdev = l2arc_vdev_get(vd);
10116
ASSERT3P(remdev, !=, NULL);
10117
10118
/*
10119
* Save info for final teardown
10120
*/
10121
remove_vdev_args_t *rva = kmem_alloc(sizeof (remove_vdev_args_t),
10122
KM_SLEEP);
10123
rva->rva_l2arc_dev = remdev;
10124
rva->rva_spa_gid = spa_load_guid(spa);
10125
rva->rva_vdev_gid = remdev->l2ad_vdev->vdev_guid;
10126
10127
/*
10128
* Cancel any ongoing or scheduled rebuild.
10129
*/
10130
mutex_enter(&l2arc_rebuild_thr_lock);
10131
remdev->l2ad_rebuild_cancel = B_TRUE;
10132
if (remdev->l2ad_rebuild_began == B_TRUE) {
10133
while (remdev->l2ad_rebuild == B_TRUE)
10134
cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock);
10135
}
10136
mutex_exit(&l2arc_rebuild_thr_lock);
10137
rva->rva_async = asynchronous;
10138
10139
/*
10140
* Remove device from global list
10141
*/
10142
ASSERT(spa_config_held(spa, SCL_L2ARC, RW_WRITER) & SCL_L2ARC);
10143
mutex_enter(&l2arc_dev_mtx);
10144
list_remove(l2arc_dev_list, remdev);
10145
l2arc_dev_last = NULL; /* may have been invalidated */
10146
atomic_dec_64(&l2arc_ndev);
10147
10148
/* During a pool export spa & vdev will no longer be valid */
10149
if (asynchronous) {
10150
remdev->l2ad_spa = NULL;
10151
remdev->l2ad_vdev = NULL;
10152
}
10153
mutex_exit(&l2arc_dev_mtx);
10154
10155
if (!asynchronous) {
10156
l2arc_device_teardown(rva);
10157
return;
10158
}
10159
10160
arc_async_flush_t *af = arc_async_flush_add(rva->rva_spa_gid, 2);
10161
10162
taskq_dispatch_ent(arc_flush_taskq, l2arc_device_teardown, rva,
10163
TQ_SLEEP, &af->af_tqent);
10164
}
10165
10166
void
10167
l2arc_init(void)
10168
{
10169
l2arc_thread_exit = 0;
10170
l2arc_ndev = 0;
10171
10172
mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL);
10173
cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL);
10174
mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL);
10175
cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL);
10176
mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL);
10177
mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL);
10178
10179
l2arc_dev_list = &L2ARC_dev_list;
10180
l2arc_free_on_write = &L2ARC_free_on_write;
10181
list_create(l2arc_dev_list, sizeof (l2arc_dev_t),
10182
offsetof(l2arc_dev_t, l2ad_node));
10183
list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t),
10184
offsetof(l2arc_data_free_t, l2df_list_node));
10185
}
10186
10187
void
10188
l2arc_fini(void)
10189
{
10190
mutex_destroy(&l2arc_feed_thr_lock);
10191
cv_destroy(&l2arc_feed_thr_cv);
10192
mutex_destroy(&l2arc_rebuild_thr_lock);
10193
cv_destroy(&l2arc_rebuild_thr_cv);
10194
mutex_destroy(&l2arc_dev_mtx);
10195
mutex_destroy(&l2arc_free_on_write_mtx);
10196
10197
list_destroy(l2arc_dev_list);
10198
list_destroy(l2arc_free_on_write);
10199
}
10200
10201
void
10202
l2arc_start(void)
10203
{
10204
if (!(spa_mode_global & SPA_MODE_WRITE))
10205
return;
10206
10207
(void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0,
10208
TS_RUN, defclsyspri);
10209
}
10210
10211
void
10212
l2arc_stop(void)
10213
{
10214
if (!(spa_mode_global & SPA_MODE_WRITE))
10215
return;
10216
10217
mutex_enter(&l2arc_feed_thr_lock);
10218
cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */
10219
l2arc_thread_exit = 1;
10220
while (l2arc_thread_exit != 0)
10221
cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
10222
mutex_exit(&l2arc_feed_thr_lock);
10223
}
10224
10225
/*
10226
* Punches out rebuild threads for the L2ARC devices in a spa. This should
10227
* be called after pool import from the spa async thread, since starting
10228
* these threads directly from spa_import() will make them part of the
10229
* "zpool import" context and delay process exit (and thus pool import).
10230
*/
10231
void
10232
l2arc_spa_rebuild_start(spa_t *spa)
10233
{
10234
ASSERT(MUTEX_HELD(&spa_namespace_lock));
10235
10236
/*
10237
* Locate the spa's l2arc devices and kick off rebuild threads.
10238
*/
10239
for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
10240
l2arc_dev_t *dev =
10241
l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
10242
if (dev == NULL) {
10243
/* Don't attempt a rebuild if the vdev is UNAVAIL */
10244
continue;
10245
}
10246
mutex_enter(&l2arc_rebuild_thr_lock);
10247
if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) {
10248
dev->l2ad_rebuild_began = B_TRUE;
10249
(void) thread_create(NULL, 0, l2arc_dev_rebuild_thread,
10250
dev, 0, &p0, TS_RUN, minclsyspri);
10251
}
10252
mutex_exit(&l2arc_rebuild_thr_lock);
10253
}
10254
}
10255
10256
void
10257
l2arc_spa_rebuild_stop(spa_t *spa)
10258
{
10259
ASSERT(MUTEX_HELD(&spa_namespace_lock) ||
10260
spa->spa_export_thread == curthread);
10261
10262
for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
10263
l2arc_dev_t *dev =
10264
l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
10265
if (dev == NULL)
10266
continue;
10267
mutex_enter(&l2arc_rebuild_thr_lock);
10268
dev->l2ad_rebuild_cancel = B_TRUE;
10269
mutex_exit(&l2arc_rebuild_thr_lock);
10270
}
10271
for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
10272
l2arc_dev_t *dev =
10273
l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
10274
if (dev == NULL)
10275
continue;
10276
mutex_enter(&l2arc_rebuild_thr_lock);
10277
if (dev->l2ad_rebuild_began == B_TRUE) {
10278
while (dev->l2ad_rebuild == B_TRUE) {
10279
cv_wait(&l2arc_rebuild_thr_cv,
10280
&l2arc_rebuild_thr_lock);
10281
}
10282
}
10283
mutex_exit(&l2arc_rebuild_thr_lock);
10284
}
10285
}
10286
10287
/*
10288
* Main entry point for L2ARC rebuilding.
10289
*/
10290
static __attribute__((noreturn)) void
10291
l2arc_dev_rebuild_thread(void *arg)
10292
{
10293
l2arc_dev_t *dev = arg;
10294
10295
VERIFY(dev->l2ad_rebuild);
10296
(void) l2arc_rebuild(dev);
10297
mutex_enter(&l2arc_rebuild_thr_lock);
10298
dev->l2ad_rebuild_began = B_FALSE;
10299
dev->l2ad_rebuild = B_FALSE;
10300
cv_signal(&l2arc_rebuild_thr_cv);
10301
mutex_exit(&l2arc_rebuild_thr_lock);
10302
10303
thread_exit();
10304
}
10305
10306
/*
10307
* This function implements the actual L2ARC metadata rebuild. It
10308
* starts reading the log block chain and restores each block's contents
10309
* to memory (reconstructing arc_buf_hdr_t's).
10310
*
10311
* Operation stops under any of the following conditions:
10312
*
10313
* 1) We reach the end of the log block chain.
10314
* 2) We encounter *any* error condition (cksum errors, io errors)
10315
*/
10316
static int
10317
l2arc_rebuild(l2arc_dev_t *dev)
10318
{
10319
vdev_t *vd = dev->l2ad_vdev;
10320
spa_t *spa = vd->vdev_spa;
10321
int err = 0;
10322
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
10323
l2arc_log_blk_phys_t *this_lb, *next_lb;
10324
zio_t *this_io = NULL, *next_io = NULL;
10325
l2arc_log_blkptr_t lbps[2];
10326
l2arc_lb_ptr_buf_t *lb_ptr_buf;
10327
boolean_t lock_held;
10328
10329
this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP);
10330
next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP);
10331
10332
/*
10333
* We prevent device removal while issuing reads to the device,
10334
* then during the rebuilding phases we drop this lock again so
10335
* that a spa_unload or device remove can be initiated - this is
10336
* safe, because the spa will signal us to stop before removing
10337
* our device and wait for us to stop.
10338
*/
10339
spa_config_enter(spa, SCL_L2ARC, vd, RW_READER);
10340
lock_held = B_TRUE;
10341
10342
/*
10343
* Retrieve the persistent L2ARC device state.
10344
* L2BLK_GET_PSIZE returns aligned size for log blocks.
10345
*/
10346
dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start);
10347
dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr +
10348
L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop),
10349
dev->l2ad_start);
10350
dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST);
10351
10352
vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time;
10353
vd->vdev_trim_state = l2dhdr->dh_trim_state;
10354
10355
/*
10356
* In case the zfs module parameter l2arc_rebuild_enabled is false
10357
* we do not start the rebuild process.
10358
*/
10359
if (!l2arc_rebuild_enabled)
10360
goto out;
10361
10362
/* Prepare the rebuild process */
10363
memcpy(lbps, l2dhdr->dh_start_lbps, sizeof (lbps));
10364
10365
/* Start the rebuild process */
10366
for (;;) {
10367
if (!l2arc_log_blkptr_valid(dev, &lbps[0]))
10368
break;
10369
10370
if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
10371
this_lb, next_lb, this_io, &next_io)) != 0)
10372
goto out;
10373
10374
/*
10375
* Our memory pressure valve. If the system is running low
10376
* on memory, rather than swamping memory with new ARC buf
10377
* hdrs, we opt not to rebuild the L2ARC. At this point,
10378
* however, we have already set up our L2ARC dev to chain in
10379
* new metadata log blocks, so the user may choose to offline/
10380
* online the L2ARC dev at a later time (or re-import the pool)
10381
* to reconstruct it (when there's less memory pressure).
10382
*/
10383
if (l2arc_hdr_limit_reached()) {
10384
ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem);
10385
cmn_err(CE_NOTE, "System running low on memory, "
10386
"aborting L2ARC rebuild.");
10387
err = SET_ERROR(ENOMEM);
10388
goto out;
10389
}
10390
10391
spa_config_exit(spa, SCL_L2ARC, vd);
10392
lock_held = B_FALSE;
10393
10394
/*
10395
* Now that we know that the next_lb checks out alright, we
10396
* can start reconstruction from this log block.
10397
* L2BLK_GET_PSIZE returns aligned size for log blocks.
10398
*/
10399
uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop);
10400
l2arc_log_blk_restore(dev, this_lb, asize);
10401
10402
/*
10403
* log block restored, include its pointer in the list of
10404
* pointers to log blocks present in the L2ARC device.
10405
*/
10406
lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP);
10407
lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t),
10408
KM_SLEEP);
10409
memcpy(lb_ptr_buf->lb_ptr, &lbps[0],
10410
sizeof (l2arc_log_blkptr_t));
10411
mutex_enter(&dev->l2ad_mtx);
10412
list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf);
10413
ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize);
10414
ARCSTAT_BUMP(arcstat_l2_log_blk_count);
10415
zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf);
10416
zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf);
10417
mutex_exit(&dev->l2ad_mtx);
10418
vdev_space_update(vd, asize, 0, 0);
10419
10420
/*
10421
* Protection against loops of log blocks:
10422
*
10423
* l2ad_hand l2ad_evict
10424
* V V
10425
* l2ad_start |=======================================| l2ad_end
10426
* -----|||----|||---|||----|||
10427
* (3) (2) (1) (0)
10428
* ---|||---|||----|||---|||
10429
* (7) (6) (5) (4)
10430
*
10431
* In this situation the pointer of log block (4) passes
10432
* l2arc_log_blkptr_valid() but the log block should not be
10433
* restored as it is overwritten by the payload of log block
10434
* (0). Only log blocks (0)-(3) should be restored. We check
10435
* whether l2ad_evict lies in between the payload starting
10436
* offset of the next log block (lbps[1].lbp_payload_start)
10437
* and the payload starting offset of the present log block
10438
* (lbps[0].lbp_payload_start). If true and this isn't the
10439
* first pass, we are looping from the beginning and we should
10440
* stop.
10441
*/
10442
if (l2arc_range_check_overlap(lbps[1].lbp_payload_start,
10443
lbps[0].lbp_payload_start, dev->l2ad_evict) &&
10444
!dev->l2ad_first)
10445
goto out;
10446
10447
kpreempt(KPREEMPT_SYNC);
10448
for (;;) {
10449
mutex_enter(&l2arc_rebuild_thr_lock);
10450
if (dev->l2ad_rebuild_cancel) {
10451
mutex_exit(&l2arc_rebuild_thr_lock);
10452
err = SET_ERROR(ECANCELED);
10453
goto out;
10454
}
10455
mutex_exit(&l2arc_rebuild_thr_lock);
10456
if (spa_config_tryenter(spa, SCL_L2ARC, vd,
10457
RW_READER)) {
10458
lock_held = B_TRUE;
10459
break;
10460
}
10461
/*
10462
* L2ARC config lock held by somebody as writer,
10463
* possibly due to them trying to remove us. They'll
10464
* likely want us to shut down, so after a little
10465
* delay, we check l2ad_rebuild_cancel and retry
10466
* the lock.
10467
*/
10468
delay(1);
10469
}
10470
10471
/*
10472
* Continue with the next log block.
10473
*/
10474
lbps[0] = lbps[1];
10475
lbps[1] = this_lb->lb_prev_lbp;
10476
PTR_SWAP(this_lb, next_lb);
10477
this_io = next_io;
10478
next_io = NULL;
10479
}
10480
10481
if (this_io != NULL)
10482
l2arc_log_blk_fetch_abort(this_io);
10483
out:
10484
if (next_io != NULL)
10485
l2arc_log_blk_fetch_abort(next_io);
10486
vmem_free(this_lb, sizeof (*this_lb));
10487
vmem_free(next_lb, sizeof (*next_lb));
10488
10489
if (err == ECANCELED) {
10490
/*
10491
* In case the rebuild was canceled do not log to spa history
10492
* log as the pool may be in the process of being removed.
10493
*/
10494
zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks",
10495
(u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
10496
return (err);
10497
} else if (!l2arc_rebuild_enabled) {
10498
spa_history_log_internal(spa, "L2ARC rebuild", NULL,
10499
"disabled");
10500
} else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) {
10501
ARCSTAT_BUMP(arcstat_l2_rebuild_success);
10502
spa_history_log_internal(spa, "L2ARC rebuild", NULL,
10503
"successful, restored %llu blocks",
10504
(u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
10505
} else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) {
10506
/*
10507
* No error but also nothing restored, meaning the lbps array
10508
* in the device header points to invalid/non-present log
10509
* blocks. Reset the header.
10510
*/
10511
spa_history_log_internal(spa, "L2ARC rebuild", NULL,
10512
"no valid log blocks");
10513
memset(l2dhdr, 0, dev->l2ad_dev_hdr_asize);
10514
l2arc_dev_hdr_update(dev);
10515
} else if (err != 0) {
10516
spa_history_log_internal(spa, "L2ARC rebuild", NULL,
10517
"aborted, restored %llu blocks",
10518
(u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
10519
}
10520
10521
if (lock_held)
10522
spa_config_exit(spa, SCL_L2ARC, vd);
10523
10524
return (err);
10525
}
10526
10527
/*
10528
* Attempts to read the device header on the provided L2ARC device and writes
10529
* it to `hdr'. On success, this function returns 0, otherwise the appropriate
10530
* error code is returned.
10531
*/
10532
static int
10533
l2arc_dev_hdr_read(l2arc_dev_t *dev)
10534
{
10535
int err;
10536
uint64_t guid;
10537
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
10538
const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
10539
abd_t *abd;
10540
10541
guid = spa_guid(dev->l2ad_vdev->vdev_spa);
10542
10543
abd = abd_get_from_buf(l2dhdr, l2dhdr_asize);
10544
10545
err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev,
10546
VDEV_LABEL_START_SIZE, l2dhdr_asize, abd,
10547
ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ,
10548
ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
10549
ZIO_FLAG_SPECULATIVE, B_FALSE));
10550
10551
abd_free(abd);
10552
10553
if (err != 0) {
10554
ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors);
10555
zfs_dbgmsg("L2ARC IO error (%d) while reading device header, "
10556
"vdev guid: %llu", err,
10557
(u_longlong_t)dev->l2ad_vdev->vdev_guid);
10558
return (err);
10559
}
10560
10561
if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC))
10562
byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr));
10563
10564
if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC ||
10565
l2dhdr->dh_spa_guid != guid ||
10566
l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid ||
10567
l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION ||
10568
l2dhdr->dh_log_entries != dev->l2ad_log_entries ||
10569
l2dhdr->dh_end != dev->l2ad_end ||
10570
!l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end,
10571
l2dhdr->dh_evict) ||
10572
(l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE &&
10573
l2arc_trim_ahead > 0)) {
10574
/*
10575
* Attempt to rebuild a device containing no actual dev hdr
10576
* or containing a header from some other pool or from another
10577
* version of persistent L2ARC.
10578
*/
10579
ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported);
10580
return (SET_ERROR(ENOTSUP));
10581
}
10582
10583
return (0);
10584
}
10585
10586
/*
10587
* Reads L2ARC log blocks from storage and validates their contents.
10588
*
10589
* This function implements a simple fetcher to make sure that while
10590
* we're processing one buffer the L2ARC is already fetching the next
10591
* one in the chain.
10592
*
10593
* The arguments this_lp and next_lp point to the current and next log block
10594
* address in the block chain. Similarly, this_lb and next_lb hold the
10595
* l2arc_log_blk_phys_t's of the current and next L2ARC blk.
10596
*
10597
* The `this_io' and `next_io' arguments are used for block fetching.
10598
* When issuing the first blk IO during rebuild, you should pass NULL for
10599
* `this_io'. This function will then issue a sync IO to read the block and
10600
* also issue an async IO to fetch the next block in the block chain. The
10601
* fetched IO is returned in `next_io'. On subsequent calls to this
10602
* function, pass the value returned in `next_io' from the previous call
10603
* as `this_io' and a fresh `next_io' pointer to hold the next fetch IO.
10604
* Prior to the call, you should initialize your `next_io' pointer to be
10605
* NULL. If no fetch IO was issued, the pointer is left set at NULL.
10606
*
10607
* On success, this function returns 0, otherwise it returns an appropriate
10608
* error code. On error the fetching IO is aborted and cleared before
10609
* returning from this function. Therefore, if we return `success', the
10610
* caller can assume that we have taken care of cleanup of fetch IOs.
10611
*/
10612
static int
10613
l2arc_log_blk_read(l2arc_dev_t *dev,
10614
const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp,
10615
l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
10616
zio_t *this_io, zio_t **next_io)
10617
{
10618
int err = 0;
10619
zio_cksum_t cksum;
10620
uint64_t asize;
10621
10622
ASSERT(this_lbp != NULL && next_lbp != NULL);
10623
ASSERT(this_lb != NULL && next_lb != NULL);
10624
ASSERT(next_io != NULL && *next_io == NULL);
10625
ASSERT(l2arc_log_blkptr_valid(dev, this_lbp));
10626
10627
/*
10628
* Check to see if we have issued the IO for this log block in a
10629
* previous run. If not, this is the first call, so issue it now.
10630
*/
10631
if (this_io == NULL) {
10632
this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp,
10633
this_lb);
10634
}
10635
10636
/*
10637
* Peek to see if we can start issuing the next IO immediately.
10638
*/
10639
if (l2arc_log_blkptr_valid(dev, next_lbp)) {
10640
/*
10641
* Start issuing IO for the next log block early - this
10642
* should help keep the L2ARC device busy while we
10643
* decompress and restore this log block.
10644
*/
10645
*next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp,
10646
next_lb);
10647
}
10648
10649
/* Wait for the IO to read this log block to complete */
10650
if ((err = zio_wait(this_io)) != 0) {
10651
ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors);
10652
zfs_dbgmsg("L2ARC IO error (%d) while reading log block, "
10653
"offset: %llu, vdev guid: %llu", err,
10654
(u_longlong_t)this_lbp->lbp_daddr,
10655
(u_longlong_t)dev->l2ad_vdev->vdev_guid);
10656
goto cleanup;
10657
}
10658
10659
/*
10660
* Make sure the buffer checks out.
10661
* L2BLK_GET_PSIZE returns aligned size for log blocks.
10662
*/
10663
asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop);
10664
fletcher_4_native(this_lb, asize, NULL, &cksum);
10665
if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) {
10666
ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors);
10667
zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, "
10668
"vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu",
10669
(u_longlong_t)this_lbp->lbp_daddr,
10670
(u_longlong_t)dev->l2ad_vdev->vdev_guid,
10671
(u_longlong_t)dev->l2ad_hand,
10672
(u_longlong_t)dev->l2ad_evict);
10673
err = SET_ERROR(ECKSUM);
10674
goto cleanup;
10675
}
10676
10677
/* Now we can take our time decoding this buffer */
10678
switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) {
10679
case ZIO_COMPRESS_OFF:
10680
break;
10681
case ZIO_COMPRESS_LZ4: {
10682
abd_t *abd = abd_alloc_linear(asize, B_TRUE);
10683
abd_copy_from_buf_off(abd, this_lb, 0, asize);
10684
abd_t dabd;
10685
abd_get_from_buf_struct(&dabd, this_lb, sizeof (*this_lb));
10686
err = zio_decompress_data(
10687
L2BLK_GET_COMPRESS((this_lbp)->lbp_prop),
10688
abd, &dabd, asize, sizeof (*this_lb), NULL);
10689
abd_free(&dabd);
10690
abd_free(abd);
10691
if (err != 0) {
10692
err = SET_ERROR(EINVAL);
10693
goto cleanup;
10694
}
10695
break;
10696
}
10697
default:
10698
err = SET_ERROR(EINVAL);
10699
goto cleanup;
10700
}
10701
if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC))
10702
byteswap_uint64_array(this_lb, sizeof (*this_lb));
10703
if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) {
10704
err = SET_ERROR(EINVAL);
10705
goto cleanup;
10706
}
10707
cleanup:
10708
/* Abort an in-flight fetch I/O in case of error */
10709
if (err != 0 && *next_io != NULL) {
10710
l2arc_log_blk_fetch_abort(*next_io);
10711
*next_io = NULL;
10712
}
10713
return (err);
10714
}
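/*
 * Illustrative caller sketch (a simplified restatement of the loop in
 * l2arc_rebuild() above, not an additional interface):
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *
 *	while (l2arc_log_blkptr_valid(dev, &lbps[0])) {
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
 *		    this_lb, next_lb, this_io, &next_io) != 0)
 *			break;	(fetch IOs are already cleaned up)
 *		...restore this_lb into ARC...
 *		lbps[0] = lbps[1];
 *		lbps[1] = this_lb->lb_prev_lbp;
 *		PTR_SWAP(this_lb, next_lb);
 *		this_io = next_io;
 *		next_io = NULL;
 *	}
 *
 * If the chain ends while a prefetch is still outstanding, the caller
 * aborts it with l2arc_log_blk_fetch_abort(), exactly as
 * l2arc_rebuild() does.
 */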
10715
10716
/*
10717
* Restores the payload of a log block to ARC. This creates empty ARC hdr
10718
* entries which only contain an l2arc hdr, essentially restoring the
10719
* buffers to their L2ARC evicted state. This function also updates space
10720
* usage on the L2ARC vdev to make sure it tracks restored buffers.
10721
*/
10722
static void
10723
l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb,
10724
uint64_t lb_asize)
10725
{
10726
uint64_t size = 0, asize = 0;
10727
uint64_t log_entries = dev->l2ad_log_entries;
10728
10729
/*
10730
* Usually arc_adapt() is called only for data, not headers, but
10731
* since we may allocate a significant amount of memory here, let ARC
10732
* grow its arc_c.
10733
*/
10734
arc_adapt(log_entries * HDR_L2ONLY_SIZE);
10735
10736
for (int i = log_entries - 1; i >= 0; i--) {
10737
/*
10738
* Restore goes in the reverse temporal direction to preserve
10739
* correct temporal ordering of buffers in the l2ad_buflist.
10740
* l2arc_hdr_restore also does a list_insert_tail instead of
10741
* list_insert_head on the l2ad_buflist:
10742
*
10743
* LIST l2ad_buflist LIST
10744
* HEAD <------ (time) ------ TAIL
10745
* direction +-----+-----+-----+-----+-----+ direction
10746
* of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild
10747
* fill +-----+-----+-----+-----+-----+
10748
* ^ ^
10749
* | |
10750
* | |
10751
* l2arc_feed_thread l2arc_rebuild
10752
* will place new bufs here restores bufs here
10753
*
10754
* During l2arc_rebuild() the device is not used by
10755
* l2arc_feed_thread() as dev->l2ad_rebuild is set to true.
10756
*/
10757
size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop);
10758
asize += vdev_psize_to_asize(dev->l2ad_vdev,
10759
L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop));
10760
l2arc_hdr_restore(&lb->lb_entries[i], dev);
10761
}
10762
10763
/*
10764
* Record rebuild stats:
10765
* size Logical size of restored buffers in the L2ARC
10766
* asize Aligned size of restored buffers in the L2ARC
10767
*/
10768
ARCSTAT_INCR(arcstat_l2_rebuild_size, size);
10769
ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize);
10770
ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries);
10771
ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize);
10772
ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize);
10773
ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks);
10774
}
10775
10776
/*
10777
* Restores a single ARC buf hdr from a log entry. The ARC buffer is put
10778
* into a state indicating that it has been evicted to L2ARC.
10779
*/
10780
static void
10781
l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev)
10782
{
10783
arc_buf_hdr_t *hdr, *exists;
10784
kmutex_t *hash_lock;
10785
arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop);
10786
uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev,
10787
L2BLK_GET_PSIZE((le)->le_prop));
10788
10789
/*
10790
* Do all the allocation before grabbing any locks, this lets us
10791
* sleep if memory is full and we don't have to deal with failed
10792
* allocations.
10793
*/
10794
hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type,
10795
dev, le->le_dva, le->le_daddr,
10796
L2BLK_GET_PSIZE((le)->le_prop), asize, le->le_birth,
10797
L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel,
10798
L2BLK_GET_PROTECTED((le)->le_prop),
10799
L2BLK_GET_PREFETCH((le)->le_prop),
10800
L2BLK_GET_STATE((le)->le_prop));
10801
10802
/*
10803
* vdev_space_update() has to be called before arc_hdr_destroy() to
10804
* avoid underflow since the latter also calls vdev_space_update().
10805
*/
10806
l2arc_hdr_arcstats_increment(hdr);
10807
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
10808
10809
mutex_enter(&dev->l2ad_mtx);
10810
list_insert_tail(&dev->l2ad_buflist, hdr);
10811
(void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
10812
mutex_exit(&dev->l2ad_mtx);
10813
10814
exists = buf_hash_insert(hdr, &hash_lock);
10815
if (exists) {
10816
/* Buffer was already cached, no need to restore it. */
10817
arc_hdr_destroy(hdr);
10818
/*
10819
* If the buffer is already cached, check whether it has
10820
* L2ARC metadata. If not, fill it in and update the flag.
10821
* This is important in case of onlining a cache device, since
10822
* we previously evicted all L2ARC metadata from ARC.
10823
*/
10824
if (!HDR_HAS_L2HDR(exists)) {
10825
arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR);
10826
exists->b_l2hdr.b_dev = dev;
10827
exists->b_l2hdr.b_daddr = le->le_daddr;
10828
exists->b_l2hdr.b_arcs_state =
10829
L2BLK_GET_STATE((le)->le_prop);
10830
/* l2arc_hdr_arcstats_update() expects a valid asize */
10831
HDR_SET_L2SIZE(exists, asize);
10832
mutex_enter(&dev->l2ad_mtx);
10833
list_insert_tail(&dev->l2ad_buflist, exists);
10834
(void) zfs_refcount_add_many(&dev->l2ad_alloc,
10835
arc_hdr_size(exists), exists);
10836
mutex_exit(&dev->l2ad_mtx);
10837
l2arc_hdr_arcstats_increment(exists);
10838
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
10839
}
10840
ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached);
10841
}
10842
10843
mutex_exit(hash_lock);
10844
}
10845
10846
/*
10847
* Starts an asynchronous read IO to read a log block. This is used in log
10848
* block reconstruction to start reading the next block before we are done
10849
* decoding and reconstructing the current block, to keep the l2arc device
10850
* nice and hot with read IO to process.
10851
* The returned zio will contain newly allocated memory buffers for the IO
10852
* data which should then be freed by the caller once the zio is no longer
10853
* needed (i.e. due to it having completed). If you wish to abort this
10854
* zio, you should do so using l2arc_log_blk_fetch_abort, which takes
10855
* care of disposing of the allocated buffers correctly.
10856
*/
10857
static zio_t *
10858
l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp,
10859
l2arc_log_blk_phys_t *lb)
10860
{
10861
uint32_t asize;
10862
zio_t *pio;
10863
l2arc_read_callback_t *cb;
10864
10865
/* L2BLK_GET_PSIZE returns aligned size for log blocks */
10866
asize = L2BLK_GET_PSIZE((lbp)->lbp_prop);
10867
ASSERT(asize <= sizeof (l2arc_log_blk_phys_t));
10868
10869
cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP);
10870
cb->l2rcb_abd = abd_get_from_buf(lb, asize);
10871
pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb,
10872
ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY);
10873
(void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize,
10874
cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL,
10875
ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL |
10876
ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE));
10877
10878
return (pio);
10879
}
10880
10881
/*
10882
* Aborts a zio returned from l2arc_log_blk_fetch and frees the data
10883
* buffers allocated for it.
10884
*/
10885
static void
10886
l2arc_log_blk_fetch_abort(zio_t *zio)
10887
{
10888
(void) zio_wait(zio);
10889
}
10890
10891
/*
10892
* Creates a zio to update the device header on an l2arc device.
10893
*/
10894
void
10895
l2arc_dev_hdr_update(l2arc_dev_t *dev)
10896
{
10897
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
10898
const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
10899
abd_t *abd;
10900
int err;
10901
10902
VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER));
10903
10904
l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC;
10905
l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION;
10906
l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa);
10907
l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid;
10908
l2dhdr->dh_log_entries = dev->l2ad_log_entries;
10909
l2dhdr->dh_evict = dev->l2ad_evict;
10910
l2dhdr->dh_start = dev->l2ad_start;
10911
l2dhdr->dh_end = dev->l2ad_end;
10912
l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize);
10913
l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count);
10914
l2dhdr->dh_flags = 0;
10915
l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time;
10916
l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state;
10917
if (dev->l2ad_first)
10918
l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST;
10919
10920
abd = abd_get_from_buf(l2dhdr, l2dhdr_asize);
10921
10922
err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev,
10923
VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL,
10924
NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE));
10925
10926
abd_free(abd);
10927
10928
if (err != 0) {
10929
zfs_dbgmsg("L2ARC IO error (%d) while writing device header, "
10930
"vdev guid: %llu", err,
10931
(u_longlong_t)dev->l2ad_vdev->vdev_guid);
10932
}
10933
}
10934
10935
/*
10936
* Commits a log block to the L2ARC device. This routine is invoked from
10937
* l2arc_write_buffers when the log block fills up.
10938
* This function allocates some memory to temporarily hold the serialized
10939
* buffer to be written. This is then released in l2arc_write_done.
10940
*/
10941
static uint64_t
10942
l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb)
10943
{
10944
l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk;
10945
l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
10946
uint64_t psize, asize;
10947
zio_t *wzio;
10948
l2arc_lb_abd_buf_t *abd_buf;
10949
abd_t *abd = NULL;
10950
l2arc_lb_ptr_buf_t *lb_ptr_buf;
10951
10952
VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries);
10953
10954
abd_buf = zio_buf_alloc(sizeof (*abd_buf));
10955
abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb));
10956
lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP);
10957
lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP);
10958
10959
/* link the buffer into the block chain */
10960
lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1];
10961
lb->lb_magic = L2ARC_LOG_BLK_MAGIC;
10962
10963
/*
10964
* l2arc_log_blk_commit() may be called multiple times during a single
10965
* l2arc_write_buffers() call. Save the allocated abd buffers in a list
10966
* so we can free them in l2arc_write_done() later on.
10967
*/
10968
list_insert_tail(&cb->l2wcb_abd_list, abd_buf);
10969
10970
/* try to compress the buffer; it must save at least one sector to be used */
10971
psize = zio_compress_data(ZIO_COMPRESS_LZ4,
10972
abd_buf->abd, &abd, sizeof (*lb),
10973
zio_get_compression_max_size(ZIO_COMPRESS_LZ4,
10974
dev->l2ad_vdev->vdev_ashift,
10975
dev->l2ad_vdev->vdev_ashift, sizeof (*lb)), 0);
10976
10977
/* a log block is never entirely zero */
10978
ASSERT(psize != 0);
10979
asize = vdev_psize_to_asize(dev->l2ad_vdev, psize);
10980
ASSERT(asize <= sizeof (*lb));
10981
10982
/*
10983
* Update the start log block pointer in the device header to point
10984
* to the log block we're about to write.
10985
*/
10986
l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0];
10987
l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand;
10988
l2dhdr->dh_start_lbps[0].lbp_payload_asize =
10989
dev->l2ad_log_blk_payload_asize;
10990
l2dhdr->dh_start_lbps[0].lbp_payload_start =
10991
dev->l2ad_log_blk_payload_start;
10992
L2BLK_SET_LSIZE(
10993
(&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb));
10994
L2BLK_SET_PSIZE(
10995
(&l2dhdr->dh_start_lbps[0])->lbp_prop, asize);
10996
L2BLK_SET_CHECKSUM(
10997
(&l2dhdr->dh_start_lbps[0])->lbp_prop,
10998
ZIO_CHECKSUM_FLETCHER_4);
10999
if (asize < sizeof (*lb)) {
11000
/* compression succeeded */
11001
abd_zero_off(abd, psize, asize - psize);
11002
L2BLK_SET_COMPRESS(
11003
(&l2dhdr->dh_start_lbps[0])->lbp_prop,
11004
ZIO_COMPRESS_LZ4);
11005
} else {
11006
/* compression failed */
11007
abd_copy_from_buf_off(abd, lb, 0, sizeof (*lb));
11008
L2BLK_SET_COMPRESS(
11009
(&l2dhdr->dh_start_lbps[0])->lbp_prop,
11010
ZIO_COMPRESS_OFF);
11011
}
11012
11013
/* checksum what we're about to write */
11014
abd_fletcher_4_native(abd, asize, NULL,
11015
&l2dhdr->dh_start_lbps[0].lbp_cksum);
11016
11017
abd_free(abd_buf->abd);
11018
11019
/* perform the write itself */
11020
abd_buf->abd = abd;
11021
wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand,
11022
asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL,
11023
ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE);
11024
DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio);
11025
(void) zio_nowait(wzio);
11026
11027
dev->l2ad_hand += asize;
11028
vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
11029
11030
/*
11031
* Include the committed log block's pointer in the list of pointers
11032
* to log blocks present in the L2ARC device.
11033
*/
11034
memcpy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[0],
11035
sizeof (l2arc_log_blkptr_t));
11036
mutex_enter(&dev->l2ad_mtx);
11037
list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf);
11038
ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize);
11039
ARCSTAT_BUMP(arcstat_l2_log_blk_count);
11040
zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf);
11041
zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf);
11042
mutex_exit(&dev->l2ad_mtx);
11043
11044
/* bump the kstats */
11045
ARCSTAT_INCR(arcstat_l2_write_bytes, asize);
11046
ARCSTAT_BUMP(arcstat_l2_log_blk_writes);
11047
ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize);
11048
ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio,
11049
dev->l2ad_log_blk_payload_asize / asize);
11050
11051
/* start a new log block */
11052
dev->l2ad_log_ent_idx = 0;
11053
dev->l2ad_log_blk_payload_asize = 0;
11054
dev->l2ad_log_blk_payload_start = 0;
11055
11056
return (asize);
11057
}
11058
11059
/*
11060
* Validates an L2ARC log block address to make sure that it can be read
11061
* from the provided L2ARC device.
11062
*/
11063
boolean_t
11064
l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp)
11065
{
11066
/* L2BLK_GET_PSIZE returns aligned size for log blocks */
11067
uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop);
11068
uint64_t end = lbp->lbp_daddr + asize - 1;
11069
uint64_t start = lbp->lbp_payload_start;
11070
boolean_t evicted = B_FALSE;
11071
11072
/*
11073
* A log block is valid if all of the following conditions are true:
11074
* - it fits entirely (including its payload) between l2ad_start and
11075
* l2ad_end
11076
* - it has a valid size
11077
* - neither the log block itself nor part of its payload was evicted
11078
* by l2arc_evict():
11079
*
11080
* l2ad_hand l2ad_evict
11081
* | | lbp_daddr
11082
* | start | | end
11083
* | | | | |
11084
* V V V V V
11085
* l2ad_start ============================================ l2ad_end
11086
* --------------------------||||
11087
* ^ ^
11088
* | log block
11089
* payload
11090
*/
11091
11092
evicted =
11093
l2arc_range_check_overlap(start, end, dev->l2ad_hand) ||
11094
l2arc_range_check_overlap(start, end, dev->l2ad_evict) ||
11095
l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) ||
11096
l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end);
11097
11098
return (start >= dev->l2ad_start && end <= dev->l2ad_end &&
11099
asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) &&
11100
(!evicted || dev->l2ad_first));
11101
}
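/*
 * Illustrative rejection (hypothetical offsets): with l2ad_hand at
 * 600 MiB and l2ad_evict at 700 MiB on a non-first pass around the
 * device, a log block whose payload starts at 500 MiB and whose own
 * data ends at 650 MiB is rejected: [500 MiB, 650 MiB] overlaps the
 * [l2ad_hand, l2ad_evict] window that l2arc_evict() has already
 * reclaimed ahead of the write hand, so part of its payload is gone.
 */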
11102
11103
/*
11104
* Inserts ARC buffer header `hdr' into the current L2ARC log block on
11105
* the device. The buffer being inserted must be present in L2ARC.
11106
* Returns B_TRUE if the L2ARC log block is full and needs to be committed
11107
* to L2ARC, or B_FALSE if it still has room for more ARC buffers.
11108
*/
11109
static boolean_t
11110
l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr)
11111
{
11112
l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk;
11113
l2arc_log_ent_phys_t *le;
11114
11115
if (dev->l2ad_log_entries == 0)
11116
return (B_FALSE);
11117
11118
int index = dev->l2ad_log_ent_idx++;
11119
11120
ASSERT3S(index, <, dev->l2ad_log_entries);
11121
ASSERT(HDR_HAS_L2HDR(hdr));
11122
11123
le = &lb->lb_entries[index];
11124
memset(le, 0, sizeof (*le));
11125
le->le_dva = hdr->b_dva;
11126
le->le_birth = hdr->b_birth;
11127
le->le_daddr = hdr->b_l2hdr.b_daddr;
11128
if (index == 0)
11129
dev->l2ad_log_blk_payload_start = le->le_daddr;
11130
L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr));
11131
L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr));
11132
L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr));
11133
le->le_complevel = hdr->b_complevel;
11134
L2BLK_SET_TYPE((le)->le_prop, hdr->b_type);
11135
L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr)));
11136
L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr)));
11137
L2BLK_SET_STATE((le)->le_prop, hdr->b_l2hdr.b_arcs_state);
11138
11139
dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev,
11140
HDR_GET_PSIZE(hdr));
11141
11142
return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries);
11143
}
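/*
 * Illustrative usage (a simplified restatement of what
 * l2arc_write_buffers() does, not an additional interface): every
 * header queued for the device is also appended to the open log
 * block, and the block is committed as soon as the insert reports it
 * full:
 *
 *	boolean_t commit = l2arc_log_blk_insert(dev, hdr);
 *	...
 *	if (commit)
 *		write_asize += l2arc_log_blk_commit(dev, pio, cb);
 */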
11144
11145
/*
11146
* Checks whether a given L2ARC device address sits in a time-sequential
11147
* range. The trick here is that the L2ARC is a rotary buffer, so we can't
11148
* just do a range comparison, we need to handle the situation in which the
11149
* range wraps around the end of the L2ARC device. Arguments:
11150
* bottom -- Lower end of the range to check (written to earlier).
11151
* top -- Upper end of the range to check (written to later).
11152
* check -- The address for which we want to determine if it sits in
11153
* between the top and bottom.
11154
*
11155
* The 3-way conditional below represents the following cases:
11156
*
11157
* bottom < top : Sequentially ordered case:
11158
* <check>--------+-------------------+
11159
* | (overlap here?) |
11160
* L2ARC dev V V
11161
* |---------------<bottom>============<top>--------------|
11162
*
11163
* bottom > top: Looped-around case:
11164
* <check>--------+------------------+
11165
* | (overlap here?) |
11166
* L2ARC dev V V
11167
* |===============<top>---------------<bottom>===========|
11168
* ^ ^
11169
* | (or here?) |
11170
* +---------------+---------<check>
11171
*
11172
* top == bottom : Just a single address comparison.
11173
*/
11174
boolean_t
11175
l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check)
11176
{
11177
if (bottom < top)
11178
return (bottom <= check && check <= top);
11179
else if (bottom > top)
11180
return (check <= top || bottom <= check);
11181
else
11182
return (check == top);
11183
}
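/*
 * Illustrative wrap-around case (hypothetical offsets): with
 * bottom = 900 and top = 100 on a device whose write hand has wrapped,
 * check = 950 and check = 50 both overlap (check <= top or
 * bottom <= check holds), while check = 500 lies in the untouched
 * middle of the device and does not.
 */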
11184
11185
EXPORT_SYMBOL(arc_buf_size);
11186
EXPORT_SYMBOL(arc_write);
11187
EXPORT_SYMBOL(arc_read);
11188
EXPORT_SYMBOL(arc_buf_info);
11189
EXPORT_SYMBOL(arc_getbuf_func);
11190
EXPORT_SYMBOL(arc_add_prune_callback);
11191
EXPORT_SYMBOL(arc_remove_prune_callback);
11192
11193
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_min,
11194
spl_param_get_u64, ZMOD_RW, "Minimum ARC size in bytes");
11195
11196
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_max,
11197
spl_param_get_u64, ZMOD_RW, "Maximum ARC size in bytes");
11198
11199
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_balance, UINT, ZMOD_RW,
11200
"Balance between metadata and data on ghost hits.");
11201
11202
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int,
11203
param_get_uint, ZMOD_RW, "Seconds before growing ARC size");
11204
11205
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int,
11206
param_get_uint, ZMOD_RW, "log2(fraction of ARC to reclaim)");
11207
11208
#ifdef _KERNEL
11209
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW,
11210
"Percent of pagecache to reclaim ARC to");
11211
#endif
11212
11213
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, UINT, ZMOD_RD,
11214
"Target average block size");
11215
11216
ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW,
11217
"Disable compressed ARC buffers");
11218
11219
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int,
11220
param_get_uint, ZMOD_RW, "Min life of prefetch block in ms");
11221
11222
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms,
11223
param_set_arc_int, param_get_uint, ZMOD_RW,
11224
"Min life of prescient prefetched block in ms");
11225
11226
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, U64, ZMOD_RW,
11227
"Max write bytes per interval");
11228
11229
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, U64, ZMOD_RW,
11230
"Extra write bytes during device warmup");
11231
11232
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, U64, ZMOD_RW,
11233
"Number of max device writes to precache");
11234
11235
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, U64, ZMOD_RW,
11236
"Compressed l2arc_headroom multiplier");
11237
11238
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, U64, ZMOD_RW,
11239
"TRIM ahead L2ARC write size multiplier");
11240
11241
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, U64, ZMOD_RW,
11242
"Seconds between L2ARC writing");
11243
11244
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, U64, ZMOD_RW,
11245
"Min feed interval in milliseconds");
11246
11247
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW,
11248
"Skip caching prefetched buffers");
11249
11250
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW,
11251
"Turbo L2ARC warmup");
11252
11253
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW,
11254
"No reads during writes");
11255
11256
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, UINT, ZMOD_RW,
11257
"Percent of ARC size allowed for L2ARC-only headers");
11258
11259
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW,
11260
"Rebuild the L2ARC when importing a pool");
11261
11262
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, U64, ZMOD_RW,
11263
"Min size in bytes to write rebuild log blocks in L2ARC");
11264
11265
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW,
11266
"Cache only MFU data from ARC into L2ARC");
11267
11268
ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, exclude_special, INT, ZMOD_RW,
11269
"Exclude dbufs on special vdevs from being cached to L2ARC if set.");
11270
11271
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int,
11272
param_get_uint, ZMOD_RW, "System free memory I/O throttle in bytes");
11273
11274
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_u64,
11275
spl_param_get_u64, ZMOD_RW, "System free memory target size in bytes");
11276
11277
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_u64,
11278
spl_param_get_u64, ZMOD_RW, "Minimum bytes of dnodes in ARC");
11279
11280
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent,
11281
param_set_arc_int, param_get_uint, ZMOD_RW,
11282
"Percent of ARC meta buffers for dnodes");
11283
11284
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, UINT, ZMOD_RW,
11285
"Percentage of excess dnodes to try to unpin");
11286
11287
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, UINT, ZMOD_RW,
11288
"When full, ARC allocation waits for eviction of this % of alloc size");
11289
11290
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, UINT, ZMOD_RW,
11291
"The number of headers to evict per sublist before moving to the next");
11292
11293
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, prune_task_threads, INT, ZMOD_RW,
11294
"Number of arc_prune threads");
11295
11296
ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_threads, UINT, ZMOD_RD,
11297
"Number of threads to use for ARC eviction.");
11298
11299