Abstract
When properly designed, distributed energy systems (DES) can reduce the need
for costly network upgrades while increasing the proportion of renewable
energy generation in the electrical grid. In contrast, poorly designed DES can
accelerate the degradation of existing network infrastructure. Most
optimisation models used to design grid-connected DES oversimplify or exclude
the constraints associated with alternating current (AC) power flow, since AC
power flow has traditionally been studied in a standalone class of models
known as Optimal Power Flow (OPF). A small subset of models, labelled DES-OPF
models, has
attempted to combine these independent frameworks. However, the impacts of
using a DES-OPF framework on the resulting designs, as opposed to a
conventional DES framework without AC power flow, have not been studied in
previous work. This study sheds light on these impacts by proposing a bi-level
method to solve the computationally expensive DES-OPF framework, and by
comparing its results against a baseline mixed-integer linear programming
(MILP) model that uses direct current (DC) approximations in place of AC OPF,
as is common in the literature. Two test cases of different scales are
employed to test the frameworks
and compare the resulting designs, whose practical feasibility is assessed by
whether they can mitigate network violations and energy wastage during
standard operation. Results demonstrate that the baseline
MILP underestimates total costs because it cannot detect current and voltage
violations; for one case study, total costs rise by 37% when the design is
re-evaluated with the DES-OPF framework. Substantial differences in battery
capacity are also observed, driven primarily by the need to manage higher
levels of renewable energy curtailment. These findings emphasise the need to
use DES-OPF frameworks when designing grid-connected DES.