Dynamic DMA mapping using the generic device
============================================

James E.J. Bottomley <[email protected]>

This document describes the DMA API. For a gentler introduction to
the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the API. Part II
describes the extensions to the API for supporting non-consistent
memory machines. Unless you know that your driver absolutely has to
support non-consistent platforms (usually only legacy platforms), you
should use only the API described in Part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>

Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may, however,
need to make sure to flush the processor's write buffers before
telling devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into the
consistent allocation. cpu_addr must be the virtual address returned
by the consistent allocation.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.

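For illustration, here is a minimal sketch of the allocation
lifecycle; the function name, the one-page size and the error handling
are hypothetical, not part of the API:

    /* Sketch: allocate, use and free one coherent region. */
    static int my_setup_ring(struct device *dev)
    {
        void *vaddr;
        dma_addr_t handle;

        /* GFP_KERNEL: called from a sleepable context (e.g. probe) */
        vaddr = dma_alloc_coherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
        if (!vaddr)
            return -ENOMEM;

        /* ... program 'handle' into the device; the CPU uses 'vaddr' ... */

        dma_free_coherent(dev, PAGE_SIZE, vaddr, handle);
        return 0;
    }
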
Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.

struct dma_pool *
dma_pool_create(const char *name, struct device *dev,
		size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of dma-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and
size are like what you'd pass to dma_alloc_coherent(). The device's
hardware alignment requirement for this type of data is "align" (which
is expressed in bytes, and must be a power of two). If your device has
no boundary crossing restrictions, pass 0 for alloc; passing 4096 says
memory allocated from this pool must not cross 4KByte boundaries.

void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or, if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the cpu, and the dma address usable by the pool's
device.

void dma_pool_free(struct dma_pool *pool, void *vaddr,
		   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the cpu (vaddr) and dma addresses are what were
returned when that routine allocated the memory being freed.

void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be called
in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.

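A sketch of the full pool lifecycle; the pool name, the alignment and
boundary values, and struct my_desc are hypothetical:

    /* Sketch: 16-byte aligned descriptors that never cross 4K. */
    struct dma_pool *pool;
    struct my_desc *desc;
    dma_addr_t handle;

    pool = dma_pool_create("my_desc", dev, sizeof(*desc), 16, 4096);
    if (!pool)
        return -ENOMEM;

    desc = dma_pool_alloc(pool, GFP_KERNEL, &handle);
    if (!desc) {
        dma_pool_destroy(pool);
        return -ENOMEM;
    }

    /* ... hand 'handle' to the device, fill in the descriptor ... */

    dma_pool_free(pool, desc, handle);
    dma_pool_destroy(pool);
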
Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible. It
won't change the current mask settings. It is intended more as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

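For example, a probe routine might negotiate addressing like this
sketch; the 64-to-32-bit fallback policy is the driver's choice, not
mandated by the API:

    /* Sketch: prefer 64-bit DMA addressing, fall back to 32-bit. */
    if (dma_set_mask(dev, DMA_BIT_MASK(64)) &&
        dma_set_mask(dev, DMA_BIT_MASK(32)))
        return -EIO;    /* neither mask is usable */

    /* coherent allocations may be limited separately */
    if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32)))
        return -EIO;
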
Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this
API. Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory. Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically
contiguous (like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the addressable
region for the device, i.e., if the physical address of the memory
ANDed with the dma_mask is still equal to the physical address, then
the device can perform DMA to the memory). To ensure that the memory
allocated by kmalloc is within the dma_mask, the driver may specify
various platform-dependent flags to restrict the physical memory range
of the allocation (e.g. on x86, GFP_DMA guarantees to be within the
first 16MB of available physical memory, as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O bus address to a physical memory address). However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)

void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping of pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to
create a mapping. A driver can check for these errors by testing the
returned dma address with dma_mapping_error(). A non-zero return
value means the mapping could not be created and the driver should
take appropriate action (e.g. reduce current DMA mapping usage or
delay and try again later).

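Putting the pieces together, a transmit path might look like this
sketch; the buffer handling and the length are illustrative:

    /* Sketch: map a kmalloc'ed buffer for device reads, with check. */
    dma_addr_t handle;

    handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -ENOMEM;    /* or reduce usage / retry later */

    /* ... hand 'handle' to the device and wait for the transfer ... */

    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
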
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again once it has been
mapped. The mapping process is allowed to destroy information in the
sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents
times) and use the sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length, as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nhwentries, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.

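A sketch tying the mapping, failure check and unmap together; 'sglist'
and 'nents' are assumed to have been set up by the caller:

    int count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
    if (count == 0)
        return -ENOMEM;    /* mapping failed; abort the request */

    /* ... program 'count' segments into the device ... */

    dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE); /* nents, not count */
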
void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
			size_t size,
			enum dma_data_direction direction)

void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
			   size_t size,
			   enum dma_data_direction direction)

void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
		    int nelems, enum dma_data_direction direction)

void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
		       int nelems, enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the cpu
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the
sync_single API, you can use dma_handle and size parameters that
aren't identical to those passed into the single mapping API to do a
partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().

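As an illustration, a receive buffer that is mapped once and reused
might be handled like this sketch; 'handle', 'buf' and 'size' come
from an earlier dma_map_single(), and more_packets and process_data()
are hypothetical:

    /* Sketch: reuse one DMA_FROM_DEVICE mapping across transfers. */
    while (more_packets) {
        /* ... wait for the device to fill the buffer ... */

        /* make the device's writes visible to the CPU */
        dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);
        process_data(buf, size);

        /* hand the buffer back to the device for the next transfer */
        dma_sync_single_for_device(dev, handle, size, DMA_FROM_DEVICE);
    }
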
dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in
Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

int whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			    int nents, enum dma_data_direction dir,
			    struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}

Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will choose
to return either consistent or non-consistent memory as it sees fit.
By using this API, you are guaranteeing to the platform that you have
all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in (and returned) by
dma_alloc_noncoherent().

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.

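A sketch of the extra sync points non-consistent memory requires;
fill_buffer() and the one-page size are hypothetical:

    void *buf;
    dma_addr_t handle;

    buf = dma_alloc_noncoherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    fill_buffer(buf, PAGE_SIZE);
    /* flush CPU writes before the device reads the buffer */
    dma_cache_sync(dev, buf, PAGE_SIZE, DMA_TO_DEVICE);

    /* ... start the device, wait for completion ... */

    dma_free_noncoherent(dev, PAGE_SIZE, buf, handle);
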
int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)

Declare a region of memory to be handed out by dma_alloc_coherent()
when it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent() of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed-in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success, or zero for
failure.

Note: for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.

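For example, a sketch of declaring a device-local memory window; the
bus address, device address and window size are entirely hypothetical
and board-specific:

    int rc;

    rc = dma_declare_coherent_memory(dev,
                                     0xf0000000,    /* bus_addr */
                                     0x00000000,    /* device_addr */
                                     4 * PAGE_SIZE,
                                     DMA_MEMORY_MAP);
    if (rc != DMA_MEMORY_MAP)
        return -ENXIO;    /* declaration failed */

    /* dma_alloc_coherent() for this device now draws from the window */
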
void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints. DMA addresses
must be released with the corresponding function with the same size,
for example. With the advent of hardware IOMMUs it becomes more and
more important that drivers do not violate those constraints. In the
worst case such a violation can result in data corruption, up to and
including destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking
code can be compiled into the kernel which will tell the developer
about those violations. If your architecture supports it, you can
select the "Enable debugging of DMA-API usage" option in your kernel
configuration. Enabling this option has a performance impact. Do not
enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If
this code detects an error it prints a warning message with some
details into your kernel log. An example warning message may look
like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
function [device address=0x00000000640444be] [size=66 bytes] [mapped as
single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
 [<ffffffff80252f96>] queue_work+0x56/0x60
 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a
stacktrace of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All
other errors will only be counted silently. This limitation exists to
prevent the code from flooding your kernel log. To support debugging
a device driver, this can be disabled via debugfs. See the debugfs
interface documentation below for details.

The debugfs directory for the DMA-API debugging code is called
dma-api/. In this directory the following files can currently be
found:

dma-api/all_errors	This file contains a numeric value. If this
			value is not equal to zero the debugging code
			will print a warning for every error it finds
			into the kernel log. Be careful with this
			option, as it can easily flood your logs.

dma-api/disabled	This read-only file contains the character 'Y'
			if the debugging code is disabled. This can
			happen when it runs out of memory or if it was
			disabled at boot time.

dma-api/error_count	This file is read-only and shows the total
			number of errors found.

dma-api/num_errors	The number in this file shows how many
			warnings will be printed to the kernel log
			before it stops. This number is initialized to
			one at system boot and can be set by writing
			into this file.

dma-api/min_free_entries
			This read-only file can be read to get the
			minimum number of free dma_debug_entries the
			allocator has ever seen. If this value goes
			down to zero the code will disable itself
			because it is no longer reliable.

dma-api/num_free_entries
			The current number of free dma_debug_entries
			in the allocator.

dma-api/driver-filter
			You can write a name of a driver into this file
			to limit the debug output to requests from that
			particular driver. Write an empty string to
			that file to disable the filter and see
			all errors again.

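For example, assuming debugfs is mounted at /sys/kernel/debug, the
filter could be driven like this (the driver name e1000e is just an
example):

    echo e1000e > /sys/kernel/debug/dma-api/driver-filter
    echo ""     > /sys/kernel/debug/dma-api/driver-filter

The first command limits warnings to the named driver; the second
clears the filter so all errors are reported again.
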
If you have this code compiled into your kernel it will be enabled by
default. If you want to boot without the bookkeeping anyway you can
provide 'dma_debug=off' as a boot parameter. This will disable
DMA-API debugging. Notice that you cannot enable it again at runtime.
You have to reboot to do so.

If you want to see debug messages only for a special device driver,
you can specify the dma_debug_driver=<drivername> parameter. This
will enable the driver filter at boot time. The debug code will only
print errors for that driver afterwards. This filter can be disabled
or changed later using debugfs.

When the code disables itself at runtime, this is most likely because
it ran out of dma_debug_entries. These entries are preallocated at
boot. The number of preallocated entries is defined per architecture.
If it is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the
architectural default.