Semi-automatic user interface generation considering pointing granularity


Bibliographic Details
Main authors: Kavaldjian, S., Raneburger, D., Falb, J., Kaindl, H., Ertl, D.
Format: Conference paper
Language: English
Description
Summary: Development of GUIs (graphical user interfaces) for multiple devices is still a time-consuming and error-prone task. Each class of physical devices - and, in addition, each application-tailored set of physical devices - has different properties and thus needs a specifically tailored GUI. Current model-driven GUI generation approaches take only a few properties into account, such as screen resolution. Additional device properties, especially pointing granularity, allow generating GUIs suited to certain classes of devices, such as touch screens. This paper builds on a model-driven UI development approach for multiple devices based on a discourse model that provides an interaction design. Our approach generates UIs using an extended device specification and applies model transformation rules that take these properties into account. In particular, we show how to semi-automatically generate finger-based touch screen UIs and compare them with usual mouse-operated UIs that have also been generated semi-automatically.
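The abstract describes selecting UI variants from an extended device specification that includes pointing granularity. The paper itself gives no code, so the following is only a hypothetical sketch of that idea: every name, the sample device specs, and the 6 mm threshold are invented for illustration, not taken from the authors' transformation rules.

```python
# Hypothetical sketch (not from the paper): a model transformation step
# that picks a widget variant based on a device's pointing granularity.

# Illustrative device specifications; the granularity values are invented.
DEVICE_SPECS = {
    "desktop_mouse": {"resolution": (1920, 1080), "pointing_granularity_mm": 1.0},
    "finger_touchscreen": {"resolution": (1024, 768), "pointing_granularity_mm": 9.0},
}

def min_target_size_mm(granularity_mm: float) -> float:
    """Minimum widget target size: the pointing granularity plus a safety margin."""
    return granularity_mm * 1.5

def choose_widget(granularity_mm: float) -> str:
    """Select a widget variant; the 6 mm threshold is an assumed cutoff."""
    if min_target_size_mm(granularity_mm) > 6.0:
        return "large_touch_button"
    return "standard_button"

for name, spec in DEVICE_SPECS.items():
    print(name, "->", choose_widget(spec["pointing_granularity_mm"]))
```

A rule engine in the paper's approach would presumably apply many such device-conditioned decisions (spacing, widget type, layout) during GUI generation; this sketch shows only the single granularity-to-widget decision in isolation.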
ISSN: 1062-922X, 2577-1655
DOI: 10.1109/ICSMC.2009.5346356