ProLLaMA: A Protein Language Model for Multi-Task Protein Language Processing
Main authors: , , , , , , ,
Format: Article
Language: eng
Online access: Order full text
Abstract: Large Language Models (LLMs) have achieved remarkable performance in multiple
Natural Language Processing (NLP) tasks. Under the premise that protein
sequences constitute the protein language, Protein Language Models (PLMs) have
advanced the field of protein engineering. However, as of now, unlike LLMs in
NLP, PLMs cannot handle the protein understanding task and the protein
generation task simultaneously in the Protein Language Processing (PLP) field.
This prompts us to delineate the inherent limitations in current PLMs: (i) the
lack of natural language capabilities, (ii) insufficient instruction
understanding, and (iii) high training resource demands. To address these
challenges, we introduce a training framework to transform any general LLM into
a PLM capable of handling multiple PLP tasks. To improve training efficiency,
we propose Protein Vocabulary Pruning (PVP) for general LLMs. We construct a
multi-task instruction dataset containing 13 million samples with superfamily
information, facilitating better modeling of protein sequence-function
landscapes. Through these methods, we develop the ProLLaMA model, the first
known PLM to handle multiple PLP tasks simultaneously. Experiments show that
ProLLaMA achieves state-of-the-art results in the unconditional protein
sequence generation task. In the controllable protein sequence generation task,
ProLLaMA can design novel proteins with desired functionalities. As for the
protein understanding task, ProLLaMA achieves a 62% exact match rate in
superfamily prediction. Code, model weights, and datasets are available at
https://github.com/PKU-YuanGroup/ProLLaMA and
https://huggingface.co/GreatCaptainNemo.
DOI: 10.48550/arxiv.2402.16445
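The abstract names Protein Vocabulary Pruning (PVP) but does not spell out its mechanics. Below is a minimal, illustrative sketch of the general idea behind vocabulary pruning: keep only the tokenizer entries a protein corpus can actually use (amino-acid letters plus special tokens) and gather the matching rows of the embedding matrix. The function names (`prune_vocab`, `prune_embeddings`) and the toy vocabulary are hypothetical; this is not ProLLaMA's actual implementation.

```python
import torch

# The 20 standard amino acids; protein sequences are strings over this alphabet.
AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def prune_vocab(vocab: dict, specials: set) -> dict:
    """Keep special tokens plus tokens composed purely of amino-acid letters,
    then re-index compactly so the pruned embedding matrix stays dense."""
    kept = [t for t in vocab if t in specials or (t and set(t) <= AMINO_ACIDS)]
    kept.sort(key=vocab.get)  # preserve the original id ordering
    return {t: i for i, t in enumerate(kept)}

def prune_embeddings(emb: torch.Tensor, old_vocab: dict, new_vocab: dict) -> torch.Tensor:
    """Gather the surviving rows of the old embedding matrix in the new order."""
    rows = torch.tensor([old_vocab[t] for t in new_vocab])
    return emb[rows]

# Toy example: a 6-token vocabulary with 4-dimensional embeddings.
old_vocab = {"<s>": 0, "the": 1, "A": 2, "C": 3, "ing": 4, "W": 5}
emb = torch.randn(len(old_vocab), 4)

new_vocab = prune_vocab(old_vocab, specials={"<s>"})
new_emb = prune_embeddings(emb, old_vocab, new_vocab)
print(new_vocab)      # {'<s>': 0, 'A': 1, 'C': 2, 'W': 3}
print(new_emb.shape)  # torch.Size([4, 4])
```

In a real tokenizer, multi-character merges and subword prefix markers would need handling, and the input embeddings, output head, and tokenizer files must all be rewritten consistently. The point of the sketch is only that shrinking the vocabulary shrinks the embedding parameters, which is one plausible way a pruning step reduces training cost.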