R-Metric: Evaluating the Performance of Preference-Based Evolutionary Multiobjective Optimization Using Reference Points

Bibliographic Details
Published in: IEEE Transactions on Evolutionary Computation, 2018-12, Vol. 22 (6), p. 821-835
Main authors: Li, Ke; Deb, Kalyanmoy; Yao, Xin
Format: Article
Language: English
Online access: Full text
Description
Abstract: Measuring the performance of an algorithm for solving multiobjective optimization problems has always been challenging, simply due to two conflicting goals: convergence and diversity of the obtained tradeoff solutions. A number of metrics exist for evaluating the performance of a multiobjective optimizer that approximates the whole Pareto-optimal front. However, the existing metrics are inadequate for evaluating the quality of a preferred subset of the whole front. In this paper, we suggest a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multiobjective optimization algorithm using reference points. The basic idea is to preprocess the preferred solution set according to a multicriterion decision-making approach before using a regular metric for performance assessment. Extensive experiments on several artificial scenarios and benchmark problems fully demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.
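The abstract's "preprocess, then measure" idea can be illustrated with a minimal sketch. This is not the paper's exact R-metric (which transfers solutions along an achievement-scalarizing direction); it is a simplified stand-in under stated assumptions: the representative solution is taken as the point nearest the decision maker's reference point, the region of interest is a fixed-radius neighbourhood (`radius` is an illustrative parameter, not from the paper), the transfer step is approximated by a plain translation, and the regular metric applied afterwards is inverted generational distance (IGD). All function names here (`igd`, `r_igd`) are hypothetical.

```python
import math

def igd(ref_front, approx):
    """Inverted generational distance: mean Euclidean distance from each
    reference-front point to its nearest point in the approximation set."""
    return sum(min(math.dist(r, s) for s in approx) for r in ref_front) / len(ref_front)

def r_igd(approx, ref_point, ref_front, radius=0.3):
    """Sketch of the preprocess-then-measure idea:
    1. pick a representative solution closest to the reference point;
    2. trim solutions outside a `radius` neighbourhood of it (a crude
       region of interest -- the paper defines this differently);
    3. translate the trimmed set so the representative coincides with
       the reference point (a simplified stand-in for the transfer step);
    4. score the preprocessed set with a regular metric (here, IGD)."""
    rep = min(approx, key=lambda s: math.dist(s, ref_point))
    roi = [s for s in approx if math.dist(s, rep) <= radius]
    shift = [r - p for r, p in zip(ref_point, rep)]
    moved = [[x + d for x, d in zip(s, shift)] for s in roi]
    return igd(ref_front, moved)

# Toy example: linear Pareto front, a small biased approximation set,
# and a reference point supplied by the decision maker.
front = [[i / 10, 1 - i / 10] for i in range(11)]
approx = [[0.2, 0.82], [0.3, 0.72], [0.4, 0.62]]
z_ref = [0.3, 0.7]
score = r_igd(approx, z_ref, front)
```

Because the approximation set is scored only inside the region of interest and after alignment with the reference point, two sets with identical plain-IGD values can receive different scores, which is the point of a preference-aware assessment.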
ISSN: 1089-778X, 1941-0026
DOI: 10.1109/TEVC.2017.2737781