[BHK+20] Carlos E. Budde, Arnd Hartmanns, Michaela Klauck, Jan Kretinsky, David Parker, Tim Quatmann, Andrea Turrini and Zhen Zhang. On Correctness, Precision, and Performance in Quantitative Verification: QComp 2020 Competition Report. In Proc. 9th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation (ISoLA'20), volume 12479 of LNCS, pages 216–241, Springer, October 2020. [pdf] [bib] [Summarises the 2020 Comparison of Tools for the Analysis of Quantitative Formal Models (QComp'20), featuring PRISM and 8 other tools.]
Downloads: pdf (661 KB) · bib
Notes: The original publication is available at link.springer.com.
Abstract. Quantitative verification tools compute probabilities, expected rewards, or steady-state values for formal models of stochastic and timed systems. Exact results often cannot be obtained efficiently, so most tools use floating-point arithmetic in iterative algorithms that approximate the quantity of interest. Correctness is thus defined by the desired precision, and the desired precision in turn determines performance. In this paper, we report on the experimental evaluation of these trade-offs performed in QComp 2020: the second friendly competition of tools for the analysis of quantitative formal models. We survey the precision guarantees, ranging from exact rational results to statistical confidence statements, offered by the nine participating tools. These guarantees gave rise to a performance evaluation using five tracks with varying correctness criteria, whose results we present.