An improvement in weight-fault tolerance of feedforward neural networks
Main Authors:
Format: Conference Paper
Language: English
Subjects:
Summary: This paper proposes feedforward neural networks (NNs) that tolerate stuck-at faults of weights. To cope with faults whose false values have small absolute values, the potential calculation of the neuron is modified and the gradient of the activation function is steepened. To cope with faults whose false values have large absolute values, a function acting as a filter limits the products of inputs and faulty weights to allowable values. The experimental results show that the proposed NN is superior to other NNs in fault tolerance, learning cycles, and learning time.
ISSN: 1081-7735, 2377-5386
DOI: 10.1109/ATS.2001.990309
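
The summary above describes two fault-masking ideas: limiting the per-connection products of inputs and weights to an allowable range (so a weight stuck at a large false value cannot dominate the neuron potential) and using a steeper activation function. The sketch below is only an illustration of that general idea under assumed details; the paper's exact potential modification, filter function, and learning procedure are not reproduced, and the names `filtered_potential`, `steep_sigmoid`, `clip_limit`, and `gain` are invented for this example.

```python
# Illustrative sketch, not the paper's implementation: assumes the "filter"
# clips each input*weight product to [-clip_limit, clip_limit] and the
# "steepened" activation is a sigmoid with a larger gain.
import numpy as np

def steep_sigmoid(u, gain=4.0):
    """Sigmoid with an increased gain (steeper slope), an assumed stand-in
    for the steepened activation function mentioned in the abstract."""
    return 1.0 / (1.0 + np.exp(-gain * u))

def filtered_potential(x, W, clip_limit=1.0):
    """Neuron potential computed with each input*weight product limited to
    an allowable range, so a single stuck-at weight with a large false
    value cannot dominate the sum."""
    products = x[np.newaxis, :] * W              # shape: (n_out, n_in)
    products = np.clip(products, -clip_limit, clip_limit)
    return products.sum(axis=1)

def forward(x, W, clip_limit=1.0, gain=4.0):
    """One layer of a feedforward pass with the fault-masking filter."""
    return steep_sigmoid(filtered_potential(x, W, clip_limit), gain)

# Example: a weight stuck at a large false value is bounded by the filter.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=4)
W = rng.uniform(-0.5, 0.5, size=(3, 4))
W_faulty = W.copy()
W_faulty[0, 0] = 100.0                           # stuck-at fault, large value
print(forward(x, W))                             # fault-free output
print(forward(x, W_faulty))                      # output stays bounded despite the fault
```

With the clipping filter in place, the faulty connection contributes at most `clip_limit` to the potential, so the layer output changes only slightly; without it, the product `100.0 * x[0]` would saturate the neuron. How the actual paper chooses the allowable range and trains through the filter is not covered by this sketch.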