PROMPT GENERATION SIMULATING FINE-TUNING FOR A MACHINE LEARNING MODEL

Bibliographic Details
Main Authors: SCHILLACE, Samuel Edward, MADAN, Umesh, LUCATO, Devis
Format: Patent
Language: English
Description
Summary: Aspects of the present disclosure relate to systems and methods for generating one or more prompts based on an input and the semantic context associated with the input. In examples, the prompts may be provided as input to one or more general ML models to provide a semantic context around the input and/or output of the model. The prompt simulates the training and fine-tuned specialization of the general ML model without actually training the general ML model into a fine-tuned state through a fine-tuning process. Additionally, the model output may be evaluated for responsiveness to the input before being returned to the user. An advantage of the present disclosure is that it allows a general ML model to be applied to a plurality of applications without the expensive and time-consuming training required to fine-tune the ML model.
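To make the described flow concrete, the following Python sketch shows one way a prompt could wrap an input with semantic context so that a general model answers as if it were specialized, and how the model output could be screened for responsiveness before being returned to the user. The function names (build_prompt, is_responsive, answer), the model interface, and the keyword-overlap responsiveness check are illustrative assumptions, not the implementation claimed in the disclosure.

    # Hypothetical sketch of the flow described in the summary: wrap an input
    # with semantic context so a general model answers as if it were fine-tuned,
    # then check the output for responsiveness before returning it. The model
    # interface and the responsiveness heuristic are illustrative assumptions.

    from typing import Callable, Sequence


    def build_prompt(user_input: str, semantic_context: Sequence[str]) -> str:
        """Combine the raw input with semantic context to simulate a
        fine-tuned specialization of a general model."""
        context_block = "\n".join(f"- {item}" for item in semantic_context)
        return (
            "Act as a specialist for the task described by the context below.\n"
            f"Context:\n{context_block}\n\n"
            f"Input: {user_input}\n"
            "Response:"
        )


    def is_responsive(user_input: str, output: str) -> bool:
        """Toy responsiveness check: a non-empty answer sharing at least one
        content word with the input. A real system would evaluate more deeply."""
        if not output.strip():
            return False
        terms = {w.strip(".,;:!?").lower() for w in user_input.split() if len(w) > 4}
        return not terms or any(term in output.lower() for term in terms)


    def answer(user_input: str,
               semantic_context: Sequence[str],
               general_model: Callable[[str], str],
               max_attempts: int = 2) -> str:
        """Generate a prompt, query the general model, and return only output
        that passes the responsiveness check."""
        prompt = build_prompt(user_input, semantic_context)
        for _ in range(max_attempts):
            output = general_model(prompt)
            if is_responsive(user_input, output):
                return output
        return "No responsive output could be produced."


    if __name__ == "__main__":
        def fake_model(prompt: str) -> str:
            # Stand-in for any general-purpose language model endpoint.
            return "Summary: the contract renews automatically each year."

        context = ["Domain: legal contract review", "Audience: non-lawyers"]
        print(answer("Summarize the renewal terms of this contract.", context, fake_model))

In this sketch, the responsiveness check acts as a lightweight gate between the model output and the user; a more substantive evaluation step, as contemplated in the disclosure, could replace the keyword heuristic without changing the overall structure.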