[BP20b]
Edoardo Bacci and David Parker.
Probabilistic Guarantees for Safe Deep Reinforcement Learning.
In Proc. 18th International Conference on Formal Modeling and Analysis of Timed Systems (FORMATS'20), volume 12288 of LNCS, pages 231-248, Springer.
September 2020.
[pdf]
[bib]
[Proposes techniques for probabilistic verification of deep reinforcement learning policies, using PRISM as an underlying model checker.]
|
Notes:
An extended version of the paper, with proofs, is available at https://arxiv.org/abs/2005.07073.
The original publication is available at link.springer.com.
|
Abstract.
Deep reinforcement learning has been successfully applied to many control tasks,
but the deployment of the resulting controllers in safety-critical scenarios
has been limited by concerns about their safety.
Rigorous testing of these controllers is challenging,
particularly when they operate in probabilistic environments
due to, for example, hardware faults or noisy sensors.
We propose MOSAIC, an algorithm for measuring the safety of
deep reinforcement learning controllers in stochastic settings.
Our approach is based on the iterative construction of a formal abstraction
of a controller's execution in an environment,
and leverages probabilistic model checking of Markov decision processes to
produce probabilistic guarantees on safe behaviour over a finite time horizon.
It produces bounds on the probability of safe operation of the controller
for different initial configurations and identifies regions where
correct behaviour can be guaranteed.
We implement and evaluate our approach on controllers trained for
several benchmark control problems.
|
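To make the flavour of the guarantee concrete, below is a minimal Python sketch of the kind of finite-horizon computation the abstract alludes to. It is not the authors' MOSAIC implementation: the toy MDP abstraction, its states and probabilities, and the function name safety_bounds are all hypothetical placeholders. The sketch resolves the nondeterminism introduced by abstraction both adversarially and optimistically via backward induction, yielding per-state lower and upper bounds on the probability of remaining safe for a given number of steps; in the paper, the corresponding computation over an abstraction of the learned policy is discharged to the PRISM model checker.

# Minimal sketch (assumptions: hand-written toy abstraction, not MOSAIC).
# An MDP abstraction of controller + stochastic environment: each abstract
# state groups concrete states, and nondeterministic choice between actions
# over-approximates the behaviours the abstraction cannot distinguish.

from typing import Dict, List, Tuple

Dist = List[Tuple[str, float]]   # distribution: (successor state, probability)
MDP = Dict[str, List[Dist]]      # abstract state -> available actions

def safety_bounds(mdp: MDP, unsafe: set,
                  horizon: int) -> Dict[str, Tuple[float, float]]:
    """Backward induction for P(stay safe for `horizon` steps).

    Returns per-state (lower, upper) bounds: the lower bound resolves the
    abstraction's nondeterminism adversarially (min over actions), the
    upper bound optimistically (max over actions).
    """
    lo = {s: 0.0 if s in unsafe else 1.0 for s in mdp}
    hi = dict(lo)
    for _ in range(horizon):
        new_lo, new_hi = {}, {}
        for s in mdp:
            if s in unsafe:
                new_lo[s] = new_hi[s] = 0.0
                continue
            new_lo[s] = min(sum(p * lo[t] for t, p in d) for d in mdp[s])
            new_hi[s] = max(sum(p * hi[t] for t, p in d) for d in mdp[s])
        lo, hi = new_lo, new_hi
    return {s: (lo[s], hi[s]) for s in mdp}

# Hypothetical 3-state abstraction: from "near_goal" the abstraction cannot
# tell which concrete action the policy takes, so both actions are kept.
toy = {
    "start":     [[("near_goal", 0.9), ("unsafe", 0.1)]],
    "near_goal": [[("near_goal", 1.0)],
                  [("near_goal", 0.8), ("unsafe", 0.2)]],
    "unsafe":    [[("unsafe", 1.0)]],
}

print(safety_bounds(toy, unsafe={"unsafe"}, horizon=5))

For each initial abstract state this prints an interval, e.g. a pessimistic and an optimistic probability of 5-step safety, mirroring how the paper's bounds identify initial regions where correct behaviour can be guaranteed and regions where the abstraction must be refined.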