The PRISM benchmark suite is a collection of PRISM models and associated properties that can be used for benchmarking or testing new probabilistic model checking implementations and techniques. They are already widely used in the verification community for this purpose.
Models (and properties) are grouped by type:
The benchmarks themselves are hosted on GitHub:
Check out the repository, or download it as a ZIP file.
PRISM includes useful scripts for automating benchmarking, notably prism-auto and prism-log-extract. These can, for example, run PRISM on all DTMC benchmark instances with fewer than 10,000 states using three different methods, and then collate the run times into a CSV file for easy analysis.
See the benchmarking section of the PRISM Developer Resources for more details.
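Once the run times are collated into a CSV file, they can be analysed with standard tools. As a sketch only: the column names (`model`, `method`, `time`) and layout below are assumptions made for illustration, not the actual output format of prism-log-extract. The snippet finds the fastest method for each benchmark model:

```python
import csv
import io

# Hypothetical collated CSV of benchmark run times; one row per
# (model, method) pair. This layout is an assumption for illustration.
sample = """model,method,time
brp,explicit,1.2
brp,sparse,0.4
brp,hybrid,0.7
crowds,explicit,5.1
crowds,sparse,2.3
crowds,hybrid,1.9
"""

# For each model, keep the (method, time) pair with the smallest time.
fastest = {}
for row in csv.DictReader(io.StringIO(sample)):
    model, t = row["model"], float(row["time"])
    if model not in fastest or t < fastest[model][1]:
        fastest[model] = (row["method"], t)

for model, (method, t) in sorted(fastest.items()):
    print(f"{model}: fastest method is {method} ({t}s)")
```

With real data, the same loop would read the CSV produced by prism-log-extract from disk instead of an in-memory string.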
If you need the benchmarks in another format, don't forget that PRISM can export models in various formats.
Comments or suggestions regarding this benchmark set, as well as contributed models, are welcome.
The contents of the PRISM benchmark suite are distributed under the CC-BY 4.0 license: you are free to copy, edit, and redistribute them, provided you give appropriate credit.