TOKENIZATION OF TEXT DATA TO FACILITATE AUTOMATED DISCOVERY OF SPEECH DISFLUENCIES
Format: Patent
Language: English
Abstract: Introduced here are computer programs and associated computer-implemented techniques for discovering the presence of filler words through tokenization of a transcript derived from audio content. When audio content is obtained by a media production platform, the audio content can be converted into text content as part of a speech-to-text operation. The text content can then be tokenized and labeled using a Natural Language Processing (NLP) library. Tokenizing/labeling may be performed in accordance with a series of rules associated with filler words. At a high level, these rules may examine the text content (and associated tokens/labels) to determine whether patterns, relationships, verbatim matches, and context indicate that a term is a filler word. Any filler words that are discovered in the text content can be identified as such so that appropriate action(s) can be taken.
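The abstract describes a pipeline: speech-to-text conversion, tokenization and labeling with an NLP library, then rule matching over the tokens and labels. Below is a minimal sketch of that rule-matching stage, assuming spaCy as the NLP library; the filler lexicons, the two rules, and the helper name find_filler_words are illustrative assumptions, since the abstract names no specific library or rule set.

```python
# Sketch of rule-based filler-word discovery over a tokenized transcript.
# spaCy and the lexicons below are assumptions; the abstract does not name
# a specific NLP library or rule set.
import spacy

nlp = spacy.load("en_core_web_sm")

# Verbatim rule: these terms are fillers wherever they occur.
VERBATIM_FILLERS = {"um", "uh", "er", "hmm"}
# Context rule: these terms are fillers only when their part-of-speech
# label gives them no grammatical role in the sentence.
CONTEXTUAL_FILLERS = {"like", "so", "basically", "actually"}

def find_filler_words(text: str) -> list[tuple[str, int]]:
    """Tokenize/label the transcript and return (term, char offset) pairs
    for every token that a filler rule flags."""
    doc = nlp(text)  # tokenization + part-of-speech labeling
    hits = []
    for token in doc:
        term = token.lower_
        if term in VERBATIM_FILLERS:
            hits.append((token.text, token.idx))
        # "like" used as an interjection is a filler; "like" used as a
        # verb or preposition is not, so the label decides.
        elif term in CONTEXTUAL_FILLERS and token.pos_ == "INTJ":
            hits.append((token.text, token.idx))
    return hits

if __name__ == "__main__":
    transcript = "Um, so I was, like, thinking we should, uh, trim the intro."
    for term, offset in find_filler_words(transcript):
        print(f"filler {term!r} at character {offset}")
```

Returning character offsets would let a media production platform map each flagged term back to its position in the transcript (and, via alignment, the audio) for whatever "appropriate action(s)" follow, such as highlighting or removing the filler.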