MediaGPT: A Large Language Model For Chinese Media
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Large language models (LLMs) have shown remarkable capabilities in
generating high-quality text and making predictions based on large amounts of
data, including in the media domain. However, in practical applications, the
differences between the media's use cases and the general-purpose applications
of LLMs have become increasingly apparent, especially in Chinese. This paper
examines the unique characteristics of media-domain-specific LLMs compared to
general LLMs, designs a diverse set of task instruction types to cater to the
specific requirements of the domain, and constructs unique datasets tailored to
the media domain. Based on these, we propose MediaGPT, a domain-specific LLM
for the Chinese media domain, trained on domain-specific data and expert SFT
data. By performing human expert evaluation and strong-model evaluation on a
validation set, this paper demonstrates that MediaGPT outperforms mainstream
models on various Chinese media domain tasks and verifies the importance of
domain data and domain-defined prompt types for building an effective
domain-specific LLM. |
---|---|
DOI: | 10.48550/arxiv.2307.10930 |
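
The abstract describes designing media-domain task instruction types and building SFT data from them. As a minimal sketch of what such instruction-tuning records might look like, the Python below formats a few hypothetical media-domain examples into prompt/response pairs; the task types, field names, and prompt layout are illustrative assumptions, not MediaGPT's published schema.

```python
# Hypothetical sketch of media-domain SFT data construction. The paper does
# not publish its record schema here, so every task type, field name, and
# example below is an invented illustration of the general technique.
import json

# A few invented media-domain instruction types, standing in for the paper's
# "diverse set of task instruction types".
SFT_EXAMPLES = [
    {
        "task_type": "headline_generation",
        "instruction": "Write a concise news headline for the article below.",
        "input": "The city opened its first driverless bus line on Monday, "
                 "carrying commuters along a 12 km downtown route.",
        "output": "City Launches First Driverless Bus Line Downtown",
    },
    {
        "task_type": "article_summarization",
        "instruction": "Summarize the article in two sentences for a news brief.",
        "input": "A regional broadcaster announced a joint newsroom with three "
                 "local papers to share reporting resources...",
        "output": "A regional broadcaster is merging newsroom operations with "
                  "three local papers. The partners will share reporters and "
                  "editing staff to cut costs.",
    },
]

def to_prompt_response(record: dict) -> dict:
    """Flatten an instruction record into a single prompt/response pair,
    the common input format for supervised fine-tuning pipelines."""
    prompt = (
        f"[{record['task_type']}]\n"
        f"{record['instruction']}\n\n"
        f"{record['input']}"
    )
    return {"prompt": prompt, "response": record["output"]}

if __name__ == "__main__":
    # Emit JSON Lines, one training pair per line.
    for record in SFT_EXAMPLES:
        print(json.dumps(to_prompt_response(record), ensure_ascii=False))
```

Tagging each record with a task type, as above, is one plausible way a pipeline could balance sampling across the domain's instruction types during fine-tuning.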