Transformer with Tree-order Encoding for Neural Program Generation
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: While a considerable number of semantic parsing approaches have employed RNN architectures for code generation tasks, there have been only a few attempts to investigate the applicability of Transformers for this task. Including hierarchical information about the underlying programming language syntax has proven to be effective for code generation. Since the positional encoding of the Transformer can only represent positions in a flat sequence, we have extended the encoding scheme to allow the attention mechanism to also attend over hierarchical positions in the input. Furthermore, we have realized a decoder based on a restrictive grammar graph model to improve generation accuracy and ensure the well-formedness of the generated code. While we did not surpass the state of the art, our findings suggest that employing a tree-based positional encoding in combination with a shared natural-language subword vocabulary improves generation performance over sequential positional encodings.
DOI: 10.48550/arxiv.2206.13354
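
To picture the tree-order idea mentioned in the abstract, the sketch below shows one common way to encode hierarchical positions: describing each AST node by its root-to-node path of child indices and embedding that path where a flat positional encoding would normally be added. This is only an illustration of the general idea, not the scheme used in the paper; the dimensions, the path representation, and the embedding tables are assumptions made for this example.

```python
# Illustrative sketch only (not the paper's exact scheme): encode each AST
# node by its root-to-node path of child indices, pad the path to a fixed
# depth, and embed it in place of a sequential positional encoding.
# All names, dimensions, and the initialization below are assumptions.
import numpy as np

MAX_DEPTH = 8        # assumed maximum tree depth that is encoded
MAX_CHILDREN = 16    # assumed maximum branching factor
D_MODEL = 64         # assumed model dimension

rng = np.random.default_rng(0)
# One embedding table per depth level; each maps a child index (plus one
# padding slot) to a d_model-sized vector. In a real model these would be
# learned parameters.
level_embeddings = rng.normal(size=(MAX_DEPTH, MAX_CHILDREN + 1, D_MODEL))
level_embeddings[:, MAX_CHILDREN, :] = 0.0  # padding slot contributes nothing


def tree_positional_encoding(path):
    """Encode a root-to-node path (sequence of child indices) as a vector.

    E.g. path=(0, 2) means "first child of the root, then that node's third
    child". The encoding sums one embedding per occupied tree level.
    """
    padded = list(path)[:MAX_DEPTH] + [MAX_CHILDREN] * (MAX_DEPTH - len(path))
    return sum(level_embeddings[d, c] for d, c in enumerate(padded))


# Example: encodings for a few AST node positions; each would be added to the
# corresponding token embedding before the attention layers.
print(tree_positional_encoding(()).shape)      # root        -> (64,)
print(tree_positional_encoding((0,)).shape)    # first child -> (64,)
print(tree_positional_encoding((0, 2)).shape)  # grandchild  -> (64,)
```

A path-based encoding of this kind lets attention distinguish siblings, ancestors, and descendants, which a single flat sequence index cannot express.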