Generative power of a protein language model trained on multiple sequence alignments
Format: Article
Language: English
Abstract:
Computational models starting from large ensembles of evolutionarily related protein sequences capture a representation of protein families and learn constraints associated with protein structure and function. They thus open the possibility of generating novel sequences belonging to protein families. Protein language models trained on multiple sequence alignments, such as MSA Transformer, are highly attractive candidates to this end. We propose and test an iterative method that directly employs the masked language modeling objective to generate sequences using MSA Transformer. We demonstrate that the resulting sequences score as well as natural sequences on homology, coevolution, and structure-based measures. For large protein families, our synthetic sequences have similar or better properties than sequences generated by Potts models, including experimentally validated ones. Moreover, for small protein families, our generation method based on MSA Transformer outperforms Potts models. Our method also reproduces the higher-order statistics and the distribution of sequences in sequence space of natural data more accurately than Potts models do. MSA Transformer is thus a strong candidate for protein sequence generation and protein design.
DOI: 10.48550/arxiv.2204.07110
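
The abstract describes generating sequences by iteratively applying MSA Transformer's masked language modeling objective to an input alignment. The sketch below illustrates that general idea using the publicly available fair-esm implementation of MSA Transformer; the toy MSA, masking fraction, iteration count, and plain softmax sampling are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import esm

# Load MSA Transformer and its alphabet (fair-esm package).
model, alphabet = esm.pretrained.esm_msa1b_t12_100M_UR50S()
model.eval()
batch_converter = alphabet.get_batch_converter()

# Hypothetical toy MSA: (name, aligned sequence) pairs of equal length.
msa = [
    ("seq1", "MKTAYIAKQR-QISFVKSHFSRQLEERLGLIEVQ"),
    ("seq2", "MKSAYIAKQR-QISFVKSHFSRQLEERLGLIEVP"),
    ("seq3", "MKTAYLAKQR-QLSFVKSHFSRQLEERLGLIEVQ"),
]
labels, strs, tokens = batch_converter([msa])  # tokens: (1, depth, length+1)

mask_idx = alphabet.mask_idx
p_mask = 0.1        # assumed masking fraction per iteration
n_iterations = 20   # assumed number of masking/resampling rounds

with torch.no_grad():
    for _ in range(n_iterations):
        # Randomly mask a fraction of residue positions (column 0 is the BOS token).
        maskable = torch.zeros_like(tokens, dtype=torch.bool)
        maskable[:, :, 1:] = torch.rand(tokens[:, :, 1:].shape) < p_mask
        masked = tokens.clone()
        masked[maskable] = mask_idx

        # Predict the masked positions and sample replacement tokens from the softmax.
        logits = model(masked)["logits"]
        probs = torch.softmax(logits, dim=-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()
        tokens[maskable] = sampled[maskable]

# Decode the generated alignment rows back to amino-acid strings.
generated = ["".join(alphabet.get_tok(t) for t in row[1:]) for row in tokens[0].tolist()]
print(generated[0])
```

In practice the masking schedule, number of iterations, and sampling strategy (greedy versus temperature sampling) strongly affect the diversity of the generated sequences, so those choices would need to follow the paper rather than the placeholder values used here.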