PCI Error Recovery
------------------
February 2, 2006

Current document maintainer:
Linas Vepstas <[email protected]>
updated by Richard Lary <[email protected]>
and Mike Mason <[email protected]> on 27-Jul-2009

Many PCI bus controllers are able to detect a variety of hardware
PCI errors on the bus, such as parity errors on the data and address
busses, as well as SERR and PERR errors. Some of the more advanced
chipsets are able to deal with these errors; these include PCI-E chipsets,
and the PCI-host bridges found on IBM Power4, Power5 and Power6-based
pSeries boxes. A typical action taken is to disconnect the affected device,
halting all I/O to it. The goal of a disconnection is to avoid system
corruption; for example, to halt system memory corruption due to DMAs
to "wild" addresses. Typically, a reconnection mechanism is also
offered, so that the affected PCI device(s) are reset and put back
into working condition. The reset phase requires coordination
between the affected device drivers and the PCI controller chip.
This document describes a generic API for notifying device drivers
of a bus disconnection, and then performing error recovery.
This API is currently implemented in the 2.6.16 and later kernels.

Reporting and recovery are performed in several steps. First, when
a PCI hardware error has resulted in a bus disconnect, that event
is reported as soon as possible to all affected device drivers,
including multiple instances of a device driver on multi-function
cards. This allows device drivers to avoid deadlocking in spinloops,
waiting for some I/O-space register to change, when it never will.
It also gives the drivers a chance to defer incoming I/O as
needed.

Next, recovery is performed in several stages. Most of the complexity
is forced by the need to handle multi-function devices, that is,
devices that have multiple device drivers associated with them.
In the first stage, each driver is allowed to indicate what type
of reset it desires, the choices being a simple re-enabling of I/O
or requesting a slot reset.

If any driver requests a slot reset, that is what will be done.

After a reset and/or a re-enabling of I/O, all drivers are
again notified, so that they may then perform any device setup/config
that may be required. After these have all completed, a final
"resume normal operations" event is sent out.

The biggest reason for choosing a kernel-based implementation rather
than a user-space implementation was the need to deal with bus
disconnects of PCI devices attached to storage media, and, in particular,
disconnects from devices holding the root file system. If the root
file system is disconnected, a user-space mechanism would have to go
through a large number of contortions to complete recovery. Almost all
of the current Linux file systems are not tolerant of disconnection
from/reconnection to their underlying block device. By contrast,
bus errors are easy to manage in the device driver. Indeed, most
device drivers already handle very similar recovery procedures;
for example, the SCSI-generic layer already provides significant
mechanisms for dealing with SCSI bus errors and SCSI bus resets.


Detailed Design
---------------
Design and implementation details below, based on a chain of
public email discussions with Ben Herrenschmidt, circa 5 April 2005.

The error recovery API support is exposed to the driver in the form of
a structure of function pointers pointed to by a new field in struct
pci_driver. A driver that fails to provide the structure is "non-aware",
and the actual recovery steps taken are platform dependent. The
arch/powerpc implementation will simulate a PCI hotplug remove/add.

This structure has the form:
struct pci_error_handlers
{
	int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
	int (*mmio_enabled)(struct pci_dev *dev);
	int (*link_reset)(struct pci_dev *dev);
	int (*slot_reset)(struct pci_dev *dev);
	void (*resume)(struct pci_dev *dev);
};

The possible channel states are:
enum pci_channel_state {
	pci_channel_io_normal,       /* I/O channel is in normal state */
	pci_channel_io_frozen,       /* I/O to channel is blocked */
	pci_channel_io_perm_failure, /* PCI card is dead */
};

Possible return values are:
enum pci_ers_result {
	PCI_ERS_RESULT_NONE,        /* no result/none/not supported in device driver */
	PCI_ERS_RESULT_CAN_RECOVER, /* Device driver can recover without slot reset */
	PCI_ERS_RESULT_NEED_RESET,  /* Device driver wants slot to be reset. */
	PCI_ERS_RESULT_DISCONNECT,  /* Device has completely failed, is unrecoverable */
	PCI_ERS_RESULT_RECOVERED,   /* Device driver is fully recovered and operational */
};

A driver does not have to implement all of these callbacks; however,
if it implements any, it must implement error_detected(). If a callback
is not implemented, the corresponding feature is considered unsupported.
For example, if mmio_enabled() and resume() aren't there, then it
is assumed that the driver is not doing any direct recovery and requires
a slot reset. If link_reset() is not implemented, the card is assumed to
not care about link resets. Typically a driver will want to know about
a slot_reset().

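To make the registration concrete, here is a minimal sketch for a
hypothetical driver "foo" (all foo_* names are invented for
illustration) showing how the handlers are wired into struct
pci_driver through its err_handler field; the individual callbacks
are sketched under the corresponding steps below.

/* Sketch only: a hypothetical driver "foo".  The callbacks are
 * declared here and sketched under the matching STEPs below. */
static int foo_error_detected(struct pci_dev *dev,
			      enum pci_channel_state state);
static int foo_mmio_enabled(struct pci_dev *dev);
static int foo_slot_reset(struct pci_dev *dev);
static void foo_resume(struct pci_dev *dev);

static struct pci_error_handlers foo_err_handler = {
	.error_detected = foo_error_detected,
	.mmio_enabled   = foo_mmio_enabled,
	/* no link_reset: this driver doesn't care about link resets */
	.slot_reset     = foo_slot_reset,
	.resume         = foo_resume,
};

static struct pci_driver foo_driver = {
	.name        = "foo",
	/* ... the usual .id_table, .probe, .remove fields ... */
	.err_handler = &foo_err_handler,
};
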
The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.

STEP 0: Error Event
-------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
all writes are ignored.

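As a practical aside (not part of the API), a driver on such a
platform can cheaply suspect a frozen slot whenever a register read
returns all ones. A sketch, using the hypothetical "foo" driver
(struct foo_device, its regs mapping, and the FOO_STATUS register are
all invented):

static bool foo_slot_looks_frozen(struct foo_device *foo)
{
	/* On an isolated slot every MMIO read returns 0xffffffff, so a
	 * register that can never legitimately be all ones makes a
	 * cheap freeze check. */
	u32 status = readl(foo->regs + FOO_STATUS);

	/* All ones: likely isolated (or the card is gone).  Stop
	 * issuing new I/O and wait for error_detected(). */
	return status == 0xffffffff;
}
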
STEP 1: Notification
--------------------
Platform calls the error_detected() callback on every instance of
every driver affected by the error.

At this point, the device might not be accessible anymore, depending
on the platform (the slot will be isolated on powerpc). The driver
may already have "noticed" the error because of a failing I/O, but
this is the proper "synchronization point"; that is, it gives the
driver a chance to clean up, waiting for pending work (timers, etc.)
to complete; it can take semaphores, schedule, and so on, everything
but touch the device. Within this function and after it returns, the
driver shouldn't do any new I/O. This callback is made in task
context. This is sort of a "quiesce" point. See the note about
interrupts at the end of this document.

All drivers participating in this system must implement this call.
The driver must return one of the following result codes (a sketch of
such a callback follows the list):
	- PCI_ERS_RESULT_CAN_RECOVER:
	  Driver returns this if it thinks it might be able to recover
	  the HW by just banging IOs or if it wants to be given
	  a chance to extract some diagnostic information (see
	  mmio_enabled(), below).
	- PCI_ERS_RESULT_NEED_RESET:
	  Driver returns this if it can't recover without a
	  slot reset.
	- PCI_ERS_RESULT_DISCONNECT:
	  Driver returns this if it doesn't want to recover at all.
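
For illustration, here is a sketch of an error_detected()
implementation for the hypothetical "foo" driver introduced earlier;
the quiesce helpers are invented, but the control flow (stop new I/O
without touching the device, then pick a result code) follows the
rules above.

static int foo_error_detected(struct pci_dev *dev,
			      enum pci_channel_state state)
{
	struct foo_device *foo = pci_get_drvdata(dev);

	if (state == pci_channel_io_perm_failure)
		return PCI_ERS_RESULT_DISCONNECT;	/* see STEP 6 */

	/* Quiesce: block new requests and let pending work drain.
	 * Sleeping is allowed (task context), but the device must
	 * not be touched from here on. */
	foo_stop_queues(foo);			/* invented helper */
	foo_wait_for_pending_io(foo);		/* invented helper */

	/* This device can't be recovered by register pokes alone. */
	return PCI_ERS_RESULT_NEED_RESET;
}
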
The next step taken will depend on the result codes returned by the
drivers.

If all drivers on the segment/slot return PCI_ERS_RESULT_CAN_RECOVER,
then the platform should re-enable IOs on the slot (or do nothing in
particular, if the platform doesn't isolate slots), and recovery
proceeds to STEP 2 (MMIO Enabled).

If any driver requested a slot reset (by returning PCI_ERS_RESULT_NEED_RESET),
then recovery proceeds to STEP 4 (Slot Reset).

If the platform is unable to recover the slot, the next step
is STEP 6 (Permanent Failure).

>>> The current powerpc implementation assumes that a device driver will
>>> *not* schedule or semaphore in this routine; the current powerpc
>>> implementation uses one kernel thread to notify all devices;
>>> thus, if one device sleeps/schedules, all devices are affected.
>>> Doing better requires complex multi-threaded logic in the error
>>> recovery implementation (e.g. waiting for all notification threads
>>> to "join" before proceeding with recovery.) This seems excessively
>>> complex and not worth implementing.

>>> The current powerpc implementation doesn't much care if the device
>>> attempts I/O at this point, or not. I/Os will fail, returning
>>> a value of 0xff on read, and writes will be dropped. If more than
>>> EEH_MAX_FAILS I/Os are attempted to a frozen adapter, EEH
>>> assumes that the device driver has gone into an infinite loop
>>> and prints an error to syslog. A reboot is then required to
>>> get the device working again.

STEP 2: MMIO Enabled
--------------------
The platform re-enables MMIO to the device (but typically not the
DMA), and then calls the mmio_enabled() callback on all affected
device drivers.

This is the "early recovery" call. IOs are allowed again, but DMA is
not, with some restrictions. This is NOT a callback for the driver to
start operations again, only to peek/poke at the device, extract diagnostic
information, if any, and eventually do things like trigger a device local
reset or some such, but not restart operations. This callback is made if
all drivers on a segment agree that they can try to recover and if no automatic
link reset was performed by the HW. If the platform can't just re-enable IOs
without a slot reset or a link reset, it will not call this callback, and
instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).

>>> The following is proposed; no platform implements this yet:
>>> Proposal: All I/Os should be done _synchronously_ from within
>>> this callback, errors triggered by them will be returned via
>>> the normal pci_check_whatever() API, no new error_detected()
>>> callback will be issued due to an error happening here. However,
>>> such an error might cause IOs to be re-blocked for the whole
>>> segment, and thus invalidate the recovery that other devices
>>> on the same segment might have done, forcing the whole segment
>>> into one of the next states, that is, link reset or slot reset.

The driver should return one of the following result codes (a sketch
of such a callback follows the list):
	- PCI_ERS_RESULT_RECOVERED
	  Driver returns this if it thinks the device is fully
	  functional and thinks it is ready to start
	  normal driver operations again. There is no
	  guarantee that the driver will actually be
	  allowed to proceed, as another driver on the
	  same segment might have failed and thus triggered a
	  slot reset on platforms that support it.

	- PCI_ERS_RESULT_NEED_RESET
	  Driver returns this if it thinks the device is not
	  recoverable in its current state and it needs a slot
	  reset to proceed.

	- PCI_ERS_RESULT_DISCONNECT
	  Same as above. Total failure, no recovery even after
	  reset; the driver considers the device dead. (To be
	  defined more precisely.)
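
Continuing the hypothetical "foo" sketch, a diagnostics-only
mmio_enabled() might look like this (the FOO_FAULT register and the
saved_fault_status field are invented):

static int foo_mmio_enabled(struct pci_dev *dev)
{
	struct foo_device *foo = pci_get_drvdata(dev);

	/* MMIO works again, DMA does not.  Peek at the device to
	 * capture diagnostic state, but do not restart operations. */
	foo->saved_fault_status = readl(foo->regs + FOO_FAULT);

	/* This device still wants a full slot reset after any error. */
	return PCI_ERS_RESULT_NEED_RESET;
}
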
The next step taken depends on the results returned by the drivers.
If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).

If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset).

STEP 3: Link Reset
------------------
The platform resets the link, and then calls the link_reset() callback
on all affected device drivers. This is a PCI-Express specific state
and is done whenever a non-fatal error has been detected that can be
"solved" by resetting the link. This call informs the driver of the
reset and the driver should check to see if the device appears to be
in working condition.

The driver is not supposed to restart normal driver I/O operations
at this point. It should limit itself to "probing" the device to
check its recoverability status. If all is right, then the platform
will call resume() once all drivers have ack'd link_reset().

Result codes:
	(identical to STEP 2 (MMIO Enabled))

The platform then proceeds to either STEP 4 (Slot Reset) or STEP 5
(Resume Operations).

>>> The current powerpc implementation does not implement this callback.

STEP 4: Slot Reset
------------------

In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
platform will perform a slot reset on the requesting PCI device(s).
The actual steps taken by a platform to perform a slot reset
will be platform-dependent. Upon completion of slot reset, the
platform will call the device slot_reset() callback.

Powerpc platforms implement two levels of slot reset:
soft reset (default) and fundamental reset (optional).

Powerpc soft reset consists of asserting the adapter #RST line and then
restoring the PCI BARs and PCI configuration header to a state
that is equivalent to what it would be after a fresh system
power-on followed by power-on BIOS/system firmware initialization.
Soft reset is also known as hot-reset.

Powerpc fundamental reset is supported by PCI Express cards only
and causes the device's state machines, hardware logic, port states
and configuration registers to be initialized to their default
conditions.

For most PCI devices, a soft reset will be sufficient for recovery.
An optional fundamental reset is provided to support a limited number
of PCI Express devices for which a soft reset is not sufficient
for recovery.

If the platform supports PCI hotplug, then the reset might be
performed by toggling the slot electrical power off/on.

It is important for the platform to restore the PCI config space
to the "fresh poweron" state, rather than the "last state". After
a slot reset, the device driver will almost always use its standard
device initialization routines, and an unusual config space setup
may result in hung devices, kernel panics, or silent data corruption.

This call gives drivers the chance to re-initialize the hardware
(re-download firmware, etc.). At this point, the driver may assume
that the card is in a fresh state and is fully functional. The slot
is unfrozen and the driver has full access to PCI config space,
memory mapped I/O space and DMA. Interrupts (Legacy, MSI, or MSI-X)
will also be available.

Drivers should not restart normal I/O processing operations
at this point. If all device drivers report success on this
callback, the platform will call resume() to complete the sequence,
and let the driver restart normal I/O processing. A sketch of such
a slot_reset() callback follows.
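
A sketch of slot_reset() for the hypothetical "foo" driver;
foo_init_hw() (re-download firmware, reprogram registers) is
invented:

static int foo_slot_reset(struct pci_dev *dev)
{
	struct foo_device *foo = pci_get_drvdata(dev);

	/* The slot is unfrozen and config space has been restored to
	 * its fresh power-on state; bring the device back up, but do
	 * not restart I/O here (that happens in resume()). */
	if (pci_enable_device(dev))
		return PCI_ERS_RESULT_DISCONNECT;
	pci_set_master(dev);

	if (foo_init_hw(foo))			/* invented helper */
		return PCI_ERS_RESULT_DISCONNECT;

	return PCI_ERS_RESULT_RECOVERED;
}
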
302
303
A driver can still return a critical failure for this function if
304
it can't get the device operational after reset. If the platform
305
previously tried a soft reset, it might now try a hard reset (power
306
cycle) and then call slot_reset() again. It the device still can't
307
be recovered, there is nothing more that can be done; the platform
308
will typically report a "permanent failure" in such a case. The
309
device will be considered "dead" in this case.
310
311
Drivers for multi-function cards will need to coordinate among
themselves as to which driver instance will perform any "one-shot"
or global device initialization. For example, the Symbios sym53cxx2
driver performs device init only from PCI function 0:

+	if (PCI_FUNC(pdev->devfn) == 0)
+		sym_reset_scsi_bus(np, 0);

Result codes:
	- PCI_ERS_RESULT_DISCONNECT
	  Same as above.

Drivers for PCI Express cards that require a fundamental reset must
set the needs_freset bit in the pci_dev structure in their probe function.
For example, the QLogic qla2xxx driver sets the needs_freset bit for certain
PCI card types:

+	/* Set EEH reset type to fundamental if required by hba */
+	if (IS_QLA24XX(ha) || IS_QLA25XX(ha) || IS_QLA81XX(ha))
+		pdev->needs_freset = 1;
+

The platform proceeds either to STEP 5 (Resume Operations) or STEP 6
(Permanent Failure).

>>> The current powerpc implementation does not try a power-cycle
>>> reset if the driver returned PCI_ERS_RESULT_DISCONNECT.
>>> However, it probably should.


STEP 5: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
PCI_ERS_RESULT_RECOVERED from one of the 3 previous callbacks.
The goal of this callback is to tell the driver to restart activity,
now that everything is back and running. This callback does not
return a result code.

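The matching resume() for the hypothetical "foo" driver is
correspondingly simple:

static void foo_resume(struct pci_dev *dev)
{
	struct foo_device *foo = pci_get_drvdata(dev);

	/* Undo the quiesce from error_detected(): restart the request
	 * queues and normal I/O processing.  No result code. */
	foo_start_queues(foo);			/* invented helper */
}
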
At this point, if a new error happens, the platform will restart
a new error recovery sequence.

STEP 6: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
pci_channel_state value of pci_channel_io_perm_failure.

The device driver should, at this point, assume the worst. It should
cancel all pending I/O and refuse all new I/O, returning -EIO to
higher layers. The device driver should then clean up all of its
memory and remove itself from kernel operations, much as it would
during system shutdown.
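
In the hypothetical "foo" sketch from STEP 1, this corresponds to
expanding the permanent-failure branch of foo_error_detected(), with
both cleanup helpers invented:

	if (state == pci_channel_io_perm_failure) {
		foo_fail_pending_io(foo, -EIO);	/* invented: cancel with -EIO */
		foo_refuse_new_io(foo);		/* invented: -EIO from now on */
		return PCI_ERS_RESULT_DISCONNECT;
	}
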
The platform will typically notify the system operator of the
permanent failure in some way. If the device is hotplug-capable,
the operator will probably want to remove and replace the device.
Note, however, that not all failures are truly "permanent". Some are
caused by over-heating, some by a poorly seated card. Many
PCI error events are caused by software bugs, e.g. DMAs to
wild addresses or bogus split transactions due to programming
errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.


Conclusion; General Remarks
---------------------------
The way the callbacks are called is platform policy. A platform with
no slot reset capability may want to just "ignore" drivers that can't
recover (disconnect them) and try to let other cards on the same segment
recover. Keep in mind that in most real life cases, though, there will
be only one driver per segment.

Now, a note about interrupts. If you get an interrupt and your
device is dead or has been isolated, there is a problem :)
The current policy is to turn this into a platform policy.
That is, the recovery API only requires that:

 - There is no guarantee that interrupt delivery can proceed from any
device on the segment starting from the error detection and until the
slot_reset callback is called, at which point interrupts are expected
to be fully operational.

 - There is no guarantee that interrupt delivery is stopped. That is,
a driver that gets an interrupt after detecting an error, or that
detects an error within the interrupt handler such that it prevents
proper ack'ing of the interrupt (and thus removal of the source),
should just return IRQ_NONE. It's up to the platform to deal with
that condition, typically by masking the IRQ source for the duration
of the error handling. It is expected that the platform "knows" which
interrupts are routed to error-management capable slots and can deal
with temporarily disabling that IRQ number during error processing
(this isn't terribly complex). That means some IRQ latency for other
devices sharing the interrupt, but there is simply no other way. High
end platforms aren't supposed to share interrupts between many devices
anyway :)
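
One concrete helper for this situation is pci_channel_offline(),
which tests the pci_dev error_state set by the platform. A minimal
sketch, again using the hypothetical "foo" driver (foo_device and
its pdev field are invented):

static irqreturn_t foo_interrupt(int irq, void *data)
{
	struct foo_device *foo = data;

	/* If the slot is isolated, registers read as all ones and the
	 * interrupt source can't be ack'd; return IRQ_NONE and let
	 * the platform mask the line for the duration of recovery. */
	if (pci_channel_offline(foo->pdev))
		return IRQ_NONE;

	/* ... normal interrupt handling and ack'ing goes here ... */
	return IRQ_HANDLED;
}
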
>>> Implementation details for the powerpc platform are discussed in
>>> the file Documentation/powerpc/eeh-pci-error-recovery.txt

>>> As of this writing, there is a growing list of device drivers with
>>> patches implementing error recovery. Not all of these patches are in
>>> mainline yet. These may be used as "examples":
>>>
>>> drivers/scsi/ipr
>>> drivers/scsi/sym53c8xx_2
>>> drivers/scsi/qla2xxx
>>> drivers/scsi/lpfc
>>> drivers/net/bnx2.c
>>> drivers/net/e100.c
>>> drivers/net/e1000
>>> drivers/net/e1000e
>>> drivers/net/ixgb
>>> drivers/net/ixgbe
>>> drivers/net/cxgb3
>>> drivers/net/s2io.c
>>> drivers/net/qlge

The End
-------