Logical activation functions for training arbitrary probabilistic Boolean operations


Detailed Description

Bibliographic Details
Published in: Information Sciences, 2024-04, Vol. 664, p. 120304, Article 120304
Main Authors: Duersch, Jed A.; Catanach, Tommie A.; Das, Niladri
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this work, we introduce a family of novel activation functions for deep neural networks that approximate n-ary, or n-argument, probabilistic logic. Logic has long been used to encode complex relationships between claims that are either true or false, so these activation functions provide a step towards models that can encode information efficiently. Unfortunately, typical feedforward networks with elementwise activation functions cannot capture certain relationships succinctly, such as the exclusive disjunction (p xor q) and the conditioned disjunction (if c then p else q). Our n-ary activation functions address this challenge by approximating belief functions (probabilistic Boolean logic) with logit representations of probability, and experiments demonstrate the ability to learn arbitrary logical ground truths in a single layer. Further, by representing belief tables in a basis that associates the number of nonzero parameters with the effective arity of each belief function, we forge a concrete relationship between logical complexity and sparsity, opening new optimization approaches that suppress logical complexity during training. We provide a computationally efficient PyTorch implementation and test our activation functions against other logic-approximating activation functions on traditional machine learning tasks as well as on reproducing known logical relationships.
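To make the idea concrete, the sketch below shows one way such an n-ary logical activation could look in PyTorch: each unit reads n input logits, treats them as independent probabilities that n claims are true, and mixes a learned belief table over all 2^n joint truth assignments before mapping the result back to a logit. This is a minimal illustration only; the class name NaryLogicActivation, the independence assumption, and the sigmoid parameterization of the belief table are assumptions of this sketch and do not reproduce the paper's sparsity-inducing belief-table basis.

```python
import itertools
import torch
import torch.nn as nn


class NaryLogicActivation(nn.Module):
    """Minimal sketch of an n-ary probabilistic-logic activation (illustrative, not the paper's exact formulation)."""

    def __init__(self, arity: int):
        super().__init__()
        self.arity = arity
        # One learnable belief logit per joint truth assignment of the n arguments.
        self.belief_logits = nn.Parameter(torch.zeros(2 ** arity))
        # Enumerate all 2**n truth assignments as a (2**n, n) tensor of 0/1 values.
        table = torch.tensor(list(itertools.product([0.0, 1.0], repeat=arity)))
        self.register_buffer("assignments", table)

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # logits: (..., arity) log-odds that each argument is true.
        p = torch.sigmoid(logits).unsqueeze(-2)          # (..., 1, n)
        # Probability of each joint assignment, assuming the arguments are independent.
        factors = torch.where(self.assignments.bool(), p, 1.0 - p)  # (..., 2**n, n)
        joint = factors.prod(dim=-1)                      # (..., 2**n)
        # Mix the learned belief table: probability that the learned relation holds.
        belief = torch.sigmoid(self.belief_logits)        # (2**n,)
        p_true = (joint * belief).sum(dim=-1).clamp(1e-6, 1 - 1e-6)
        return torch.logit(p_true)                        # return to logit space


# Usage: eight pairs of input logits -> eight output logits.
unit = NaryLogicActivation(arity=2)
out = unit(torch.randn(8, 2))
```

For example, with arity 2, a belief table near (0, 1, 1, 0) over the assignments (F,F), (F,T), (T,F), (T,T) reproduces the exclusive disjunction p xor q that elementwise activations struggle to express in a single layer.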
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2024.120304