DrugLLM: Open Large Language Model for Few-shot Molecule Generation
Format: Article
Language: English
Abstract: Large Language Models (LLMs) have made great strides in areas such as
language processing and computer vision. Despite the emergence of diverse
techniques to improve few-shot learning capacity, current LLMs fall short in
handling the languages of biology and chemistry. For example, they struggle to
capture the relationship between molecular structure and pharmacochemical
properties. Consequently, few-shot learning for small-molecule drug
modification remains limited. In this work, we introduce DrugLLM, an LLM
tailored for drug design. During training, we employ Group-based Molecular
Representation (GMR) to represent molecules, arranging them in sequences that
reflect modifications aimed at enhancing specific molecular properties. DrugLLM
learns how to modify molecules in drug discovery by predicting the next
molecule based on past modifications. Extensive computational experiments
demonstrate that DrugLLM can generate new molecules with expected properties
from limited examples, exhibiting a powerful few-shot molecule generation
capacity.
DOI: 10.48550/arxiv.2405.06690
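The abstract describes the core idea as next-molecule prediction over a sequence of property-improving modifications. The sketch below is only an illustration of that prompting pattern under stated assumptions: the paper's Group-based Molecular Representation (GMR) is not specified here, so plain SMILES strings stand in for it; the molecules, the ">>" separator, and the model path "DrugLLM/checkpoint" are hypothetical.

```python
# Minimal sketch of few-shot molecule generation via next-molecule prediction.
# Assumptions: SMILES strings stand in for GMR, the ">>" separator and the
# model path "DrugLLM/checkpoint" are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

# A few-shot context: molecules arranged as a sequence of modifications that
# progressively improve one property (e.g., solubility). Values are illustrative.
few_shot_sequence = [
    "CCO",        # example starting molecule
    "CC(O)CO",    # modified toward higher solubility
    "OCC(O)CO",   # further modification
]

# Serialize the modification history as a prompt; the model is asked to
# continue the sequence with the next, further-improved molecule.
prompt = " >> ".join(few_shot_sequence) + " >> "

tokenizer = AutoTokenizer.from_pretrained("DrugLLM/checkpoint")   # hypothetical path
model = AutoModelForCausalLM.from_pretrained("DrugLLM/checkpoint")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)

# The continuation after the prompt is the proposed next molecule.
generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated[len(prompt):].strip())
```

In this reading, few-shot capability comes from conditioning on a short in-context sequence of modifications rather than fine-tuning on the target property.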