Parallel metadata generation based on a window of overlapped frames
Format: Patent
Language: English
Abstract: One embodiment provides a method comprising segmenting an input video into a first video chunk and one or more subsequent video chunks. The method further comprises, for each subsequent video chunk, generating a corresponding window of overlapped frames by selecting a subsequence of frames from a different video chunk immediately preceding the subsequent video chunk. The method further comprises generating metadata corresponding to each video chunk by processing each video chunk in parallel. Each subsequent video chunk is processed based in part on a corresponding window of overlapped frames. The method further comprises, for each subsequent video chunk, discarding a portion of metadata corresponding to the subsequent video chunk, where the portion discarded is specific to a corresponding window of overlapped frames. The method further comprises merging each video chunk into a single output video. Each video chunk merged is associated with any remaining corresponding metadata.
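The chunk/overlap/merge flow described in the abstract can be illustrated with a short sketch. The Python example below is a minimal, hypothetical rendering of that flow, not the patented implementation: the chunk size, window size, and the placeholder `analyze` metadata function are assumptions introduced for demonstration only.

```python
# Hypothetical sketch of the claimed flow: segment, add overlap windows,
# generate metadata in parallel, discard window-specific metadata, merge.
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 100   # frames per chunk (assumed value)
WINDOW_SIZE = 10   # overlapped frames taken from the end of the preceding chunk (assumed value)

def segment(frames):
    """Split the input video (a list of frames) into a first chunk and subsequent chunks."""
    return [frames[i:i + CHUNK_SIZE] for i in range(0, len(frames), CHUNK_SIZE)]

def analyze(chunk_with_overlap):
    """Stand-in metadata generator: emits one metadata record per frame.
    A real implementation might compute scene statistics, tone-mapping data, etc."""
    return [{"frame": f, "meta": hash(f) & 0xFF} for f in chunk_with_overlap]

def process_video(frames):
    chunks = segment(frames)

    # For each subsequent chunk, prepend a window of overlapped frames selected
    # from the tail of the immediately preceding chunk.
    jobs = []
    for i, chunk in enumerate(chunks):
        window = chunks[i - 1][-WINDOW_SIZE:] if i > 0 else []
        jobs.append(window + chunk)

    # Generate metadata for every chunk in parallel; each subsequent chunk is
    # processed based in part on its window of overlapped frames.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze, jobs))

    # Discard the portion of metadata specific to the overlap window, then merge
    # the chunks (with their remaining metadata) into a single output sequence.
    output = []
    for i, metadata in enumerate(results):
        trimmed = metadata if i == 0 else metadata[WINDOW_SIZE:]
        output.extend(trimmed)
    return output

if __name__ == "__main__":
    video = [f"frame{i}" for i in range(350)]  # toy "video" of 350 frames
    merged = process_video(video)
    assert len(merged) == len(video)  # each original frame keeps exactly one metadata record
```

In this sketch the overlap window gives each parallel worker the context of the preceding chunk's final frames, while trimming the window-specific metadata before merging prevents those frames from being described twice in the output.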