High Throughput Computing for Massive Scenario Analysis and Optimization to Minimize Cascading Blackout Risk

Bibliographic Details
Published in: IEEE Transactions on Smart Grid, 2017-05, Vol. 8, No. 3, pp. 1427-1435
Main Authors: Anderson, Eric James; Linderoth, Jeff
Format: Article
Language: English
Description
Summary: We describe a simulation-based optimization method that allocates additional capacity to transmission lines in order to minimize the expected load shed due to a cascading blackout. The load-shed distribution is estimated via the ORNL-PSerc-Alaska simulation model, which solves a sequence of linear programs. Key to achieving an effective algorithm is the use of a high-throughput computing environment that shares the computational resources of a platform of more than 14,000 cores among several users simultaneously. We also discuss the implementation details necessary to run effectively in this massive-scale computing environment. Finally, we demonstrate a prototype computation that reduces the expected load shed by 76% while allocating only 1.1% of the installed capacity. The massive-scale computation is made possible by the computing platform provided through HTCondor, effectively obtaining over five months of CPU time in just over one day.
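To make the estimation step concrete, the sketch below builds a toy analogue of the approach the summary describes: for a candidate capacity allocation, it draws random line-outage scenarios and solves a minimum-load-shed linear program for each, averaging the results to estimate the expected load shed. The three-bus network, the independent outage draws, the failure probability, and the transport-style flow model (which omits Kirchhoff's voltage law) are illustrative assumptions only; this is not the ORNL-PSerc-Alaska cascading-failure model or the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy 3-bus network: lines given as (from_bus, to_bus, capacity).
# All data are illustrative placeholders, not taken from the paper.
lines   = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
gen_cap = np.array([2.0, 0.0, 0.0])   # generation limit at each bus
demand  = np.array([0.0, 1.0, 1.0])   # nominal demand at each bus

def load_shed(line_caps, outages):
    """Minimum load shed for one outage scenario, via a transport-style LP.
    Variables are [g (generation), d (served demand), f (line flows)];
    Kirchhoff's voltage law is ignored, unlike a real DC power-flow model."""
    n, m = len(demand), len(lines)
    caps = np.where(outages, 0.0, line_caps)      # failed lines carry no flow
    # Minimize -sum(d), i.e. maximize the load actually served.
    c = np.concatenate([np.zeros(n), -np.ones(n), np.zeros(m)])
    # Nodal balance at each bus: g_i - d_i + (inflow_i - outflow_i) = 0
    A_eq = np.zeros((n, 2 * n + m))
    A_eq[:, :n] = np.eye(n)
    A_eq[:, n:2 * n] = -np.eye(n)
    for k, (i, j, _) in enumerate(lines):
        A_eq[i, 2 * n + k] -= 1.0   # positive flow leaves bus i
        A_eq[j, 2 * n + k] += 1.0   # and enters bus j
    bounds = ([(0.0, gc) for gc in gen_cap] +
              [(0.0, d) for d in demand] +
              [(-cp, cp) for cp in caps])
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(n), bounds=bounds, method="highs")
    served = res.x[n:2 * n].sum()
    return demand.sum() - served

def expected_load_shed(extra_capacity, n_scenarios=1000, p_fail=0.05):
    """Monte Carlo estimate of E[load shed]; independent random line outages
    stand in here for the cascading-failure simulation used in the paper."""
    caps = np.array([cap for _, _, cap in lines]) + extra_capacity
    sheds = [load_shed(caps, rng.random(len(lines)) < p_fail)
             for _ in range(n_scenarios)]
    return float(np.mean(sheds))

# Compare a base allocation against one hypothetical capacity upgrade.
print(expected_load_shed(np.zeros(len(lines))))
print(expected_load_shed(np.array([0.0, 0.5, 0.0])))
```

In a high-throughput setting such as the HTCondor pool mentioned in the summary, each batch of scenarios would naturally run as an independent job, since the scenario LPs share no state and can be evaluated in parallel.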
ISSN: 1949-3053, 1949-3061
DOI: 10.1109/TSG.2016.2646640