Model Predictive Control (MPC), variously known as rolling-horizon
control, receding-horizon control, etc., is a powerful technique employed
in diverse engineering applications and valued for its inherent
ability to handle constraints while minimizing some cost (or maximizing
some reward). The underlying idea of MPC is to approximate an infinite-horizon
constrained optimization problem by a finite-horizon one: an optimal
control law is computed at every time step and applied
in a rolling-horizon fashion. Most applications involve uncertainty in the
model, either due to external disturbances or due to
imprecise modeling of the controlled process. Uncertainty is commonly
addressed in the literature by formulating a robust MPC problem, assuming
that the uncertainty is bounded and adopting a worst-case approach.
Although there have been major advances in this field, the approach
is often too pessimistic.
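To make the rolling-horizon idea concrete, here is a minimal sketch for a scalar linear system with quadratic stage cost: at each step a finite-horizon problem is solved (here by a backward Riccati recursion, since the unconstrained linear-quadratic case admits that closed form) and only the first input is applied. The system, cost weights, and numbers are illustrative assumptions, not taken from the text above.

```python
# Receding-horizon sketch for the scalar system x_{k+1} = a*x_k + b*u_k
# with stage cost q*x^2 + r*u^2. At every time step the finite-horizon
# problem is re-solved and only the first input is applied.

def mpc_input(x, a, b, q, r, horizon):
    """First input of the finite-horizon optimal sequence from state x."""
    p = q                     # terminal cost weight P_N = q (an assumption)
    k = 0.0
    for _ in range(horizon):  # backward Riccati sweep
        k = a * b * p / (r + b * b * p)
        p = q + a * a * p - k * a * b * p
    return -k * x             # apply only the first feedback gain

# Closed-loop simulation in rolling-horizon fashion.
a, b, q, r, horizon = 1.2, 1.0, 1.0, 0.1, 10  # open loop unstable (a > 1)
x = 5.0
for _ in range(30):
    u = mpc_input(x, a, b, q, r, horizon)
    x = a * x + b * u
# x is driven close to the origin despite the unstable open-loop dynamics
```

In a constrained or nonlinear setting the inner problem would be a numerical optimization rather than this closed-form recursion, but the loop structure — solve, apply the first input, shift the horizon — is the same.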
But what happens if the uncertainty is not bounded? Or if it is not
uniformly distributed? A less conservative approach would clearly be to
take these possibilities into account, identifying appropriate
distributions of the uncertainties and formulating a stochastic MPC
problem instead. In this context, there are several points one should
take into consideration when formulating the problem:
- Probabilistic Constraints: With unbounded stochastic disturbances, satisfying all constraints with probability one is impossible, since at any time the system may encounter a very large disturbance that forces it to violate its constraints. It is thus important to replace the hard constraints with soft, probabilistic ones, ensuring that they are respected with a desired probability.
- Expected Value Constraints: An alternative way of dealing with constraints is to ensure that they are respected on average for the optimization problem considered. Depending on the nature of the problem, such expected-value constraints may be more meaningful than probabilistic ones.
- Cost (Reward): The simplest cost (reward) is perhaps an expected cost (reward), formulated as the expected value of the sum of discounted cost-per-stage functions. Depending on the application, alternative and more challenging formulations consider the long-run expected average cost (reward) or the long-run pathwise cost (reward).
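The probabilistic-constraint idea above can be sketched for the simplest case, additive Gaussian noise: requiring `P(x + w <= x_max) >= 1 - eps` with `w ~ N(0, sigma^2)` is equivalent to the deterministic tightened constraint `x <= x_max - sigma * z_{1-eps}`. The function name and all numbers below are illustrative assumptions.

```python
# Chance-constraint tightening for Gaussian noise (illustrative sketch).
# P(x + w <= x_max) >= 1 - eps  <=>  x <= x_max - sigma * z_{1-eps}.
import random
from statistics import NormalDist

def tightened_bound(x_max, sigma, eps):
    """Largest x satisfying the chance constraint exactly (assumed helper)."""
    z = NormalDist().inv_cdf(1.0 - eps)   # standard normal quantile
    return x_max - sigma * z

# Monte Carlo check: at the tightened bound, the empirical violation
# frequency should be close to eps.
random.seed(0)
x = tightened_bound(10.0, 1.0, 0.05)
n = 100_000
violations = sum(x + random.gauss(0.0, 1.0) > 10.0 for _ in range(n)) / n
# violations is close to eps = 0.05
```

For non-Gaussian or state-dependent uncertainty no such closed-form tightening exists in general, which is one reason chance-constrained MPC is an active research area.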
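An expected-value constraint, by contrast, can be checked by a simple sample average. The sketch below tests `E[x_next] <= x_max` for a one-step model `x_next = a*x0 + b*u + w` with zero-mean Gaussian `w`; the helper name and all numbers are illustrative assumptions.

```python
# Sample-average check of an expected-value constraint (illustrative).
# Require E[x_next] <= x_max where x_next = a*x0 + b*u + w, E[w] = 0.
import random

def satisfies_expected_constraint(x0, u, a, b, x_max, n=50_000, sigma=1.0):
    """Monte Carlo check that the next state satisfies the bound on average."""
    total = 0.0
    for _ in range(n):
        total += a * x0 + b * u + random.gauss(0.0, sigma)
    return total / n <= x_max

random.seed(1)
ok = satisfies_expected_constraint(x0=0.0, u=0.5, a=1.0, b=1.0, x_max=1.0)
bad = satisfies_expected_constraint(x0=0.0, u=1.5, a=1.0, b=1.0, x_max=1.0)
# ok holds (sample mean near 0.5 <= 1); bad fails (sample mean near 1.5 > 1)
```

Note the contrast with the probabilistic version: an expected-value constraint tolerates individual large violations as long as the average stays within bounds.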
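The expected discounted cost mentioned above can likewise be approximated by Monte Carlo simulation. This sketch estimates `J = E[sum_k gamma^k * x_k^2]` for a noisy scalar system over a finite truncation of the horizon; the dynamics, stage cost, and numbers are illustrative assumptions.

```python
# Monte Carlo estimate of an expected discounted cost (illustrative).
# J = E[ sum_{k=0}^{T-1} gamma^k * x_k^2 ] for x_{k+1} = a*x_k + w_k,
# with w_k ~ N(0, sigma^2); T truncates the infinite horizon.
import random

def discounted_cost(a, gamma, sigma, T, x0):
    """One sample path of the discounted stage-cost sum."""
    x, total = x0, 0.0
    for k in range(T):
        total += (gamma ** k) * x * x    # stage cost c(x) = x^2
        x = a * x + random.gauss(0.0, sigma)
    return total

random.seed(2)
trials = [discounted_cost(0.5, 0.9, 0.1, 50, 1.0) for _ in range(2000)]
estimate = sum(trials) / len(trials)  # sample mean approximates J
```

The long-run average-cost and pathwise formulations mentioned above replace the discounted sum with a time average, which changes both the analysis and the numerical treatment considerably.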