Learning to predict perceptual visibility of rendering deterioration in computer games

Bibliographic Details
Published in: Scientific Reports, 2024-11, Vol. 14 (1), p. 27830-11, Article 27830
Main authors: Piórkowski, Rafał; Mantiuk, Radosław; Wernikowski, Marek
Format: Article
Language: English
Description
Abstract: Contemporary computer games let players manually adjust rendering settings, a capability crucial for matching computational demands to the performance of their hardware. In particular, tuning texture resolution, shadow map detail, and anti-aliasing quality makes smooth animation achievable even on systems with budget graphics units. Nonetheless, the complexity of these rendering parameters and their extensive interdependencies make selecting an optimal configuration a multifaceted challenge. Our approach trains a dedicated convolutional neural network (CNN) to streamline this process. The CNN compares reference images rendered at peak quality with counterparts rendered at reduced quality; it identifies and classifies artifacts in the altered images and evaluates their perceptibility to human observers. The network is trained on an extensive dataset derived from scenes built in popular game engines, with ground-truth classification data established by manually annotating the image regions affected by the quality reduction. As a demonstration of the methodology, we implement a prototype forward renderer based on OpenGL. Applying the trained network within this application, we evaluate image quality across diverse anti-aliasing settings and identify the setting that minimizes artifact visibility. Empirical validation via user studies substantiates the network's efficacy, showing that it discerns artifact visibility better than established image quality metrics.
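To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the pairwise-comparison idea: a small CNN that takes a reference patch and its degraded counterpart and predicts artifact visibility, plus a loop that picks the cheapest anti-aliasing setting whose predicted visibility stays below a threshold. The class VisibilityCNN, the helper pick_aa_setting, and all architectural details (channel counts, pooling, the 0.5 threshold) are illustrative assumptions, not the authors' published network.

```python
# Hypothetical sketch of the visibility-prediction idea; architecture
# details are assumptions, not the paper's exact CNN.
import torch
import torch.nn as nn

class VisibilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Reference and degraded patches are stacked along the channel
        # axis (3 + 3 = 6 input channels); a per-pixel difference image
        # would be an equally plausible input encoding.
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Single logit: probability that the artifact is visible.
        self.classifier = nn.Linear(64, 1)

    def forward(self, reference: torch.Tensor, degraded: torch.Tensor) -> torch.Tensor:
        x = torch.cat([reference, degraded], dim=1)  # (N, 6, H, W)
        h = self.features(x).flatten(1)              # (N, 64)
        return torch.sigmoid(self.classifier(h))     # visibility in [0, 1]

def pick_aa_setting(model, reference, renders_by_setting, threshold=0.5):
    """Return the first (cheapest) anti-aliasing setting whose mean
    predicted artifact visibility falls below the threshold.
    renders_by_setting: list of (setting, degraded_tensor), cheapest first."""
    model.eval()
    with torch.no_grad():
        for setting, degraded in renders_by_setting:
            if model(reference, degraded).mean().item() < threshold:
                return setting
    return renders_by_setting[-1][0]  # fall back to the highest-quality setting
```

Channel stacking lets the network learn its own comparison rather than relying on a fixed difference metric, which is one plausible reason such a learned predictor could outperform classical image quality metrics in the reported user studies.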
ISSN: 2045-2322
DOI: 10.1038/s41598-024-78254-0