This directory contains benchmark scripts for LTL-to-Büchi translators.

====================
 QUICK INSTRUCTIONS
====================

Running 'make run' should run three benchmarks (you may want to use
'make run -j3' if you have three cores or more), then summarize the
results in a LaTeX document that is compiled to 'result.pdf'.
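
Concretely, using only the commands and file names already mentioned:

  make -j3 run    # runs the three benchmarks, up to three at a time
  ls result.pdf   # the summary compiled at the end of the run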

The summary script requires python3 to be installed, and the LaTeX
compilation obviously needs some LaTeX distribution.
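
A quick way to check both requirements ('pdflatex' is only an example;
any LaTeX engine shipped by your distribution will do):

  python3 --version
  pdflatex --version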

The three benchmarks feature respectively 200, 200, and 184 formulae,
to be translated (when these tools exist) by spin, ltl2ba, ltl3ba (4
configurations) and ltl2tgba (2 configurations). Each translation has
a timeout set to 10 minutes, but with a total of 4672 translations to
perform it can take a long time. If you want to speed things up, you
may edit the file 'tools' to remove tools or lower the timeout.


==========
 CONTENTS
==========

Here are the different scripts used, in case you want to customize
this benchmark.

* tools

The configuration of all the translators. This is merely a script
that builds the command-line of ltlcross, to be run by the next
three scripts. Most of the variables (like $SPIN, $LTL2BA, etc.) are
defined by the 'defs' file, which is output by 'configure' after
checking for the presence of these tools.

If you want to add your own tool to the mix, simply modify this
'tools' file.

The timeout value, common to the three benchmarks, is also set here.
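
If you would rather keep 'tools' untouched, you can benchmark a
modified copy instead, using the TOOLS variable described at the end
of this README ('tools.mine' is just a hypothetical file name):

  cp tools tools.mine
  # edit tools.mine, mimicking the syntax of the existing entries
  TOOLS=./tools.mine make -j3 run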

* small
* big
* known

Three scripts that run ltlcross on, respectively:
  100 small formulae (size 10, 4 propositions) and their negations
  100 big formulae (size 12..15, 8 propositions) and their negations
  92 known formulae (from formulae.ltl) and their negations

Each script generates 3 files:
  xxxx.log: the log of ltlcross' execution, updated as the script goes
  xxxx.csv: the results in CSV format
  xxxx.json: the results in JSON format

The last two files are only output when ltlcross terminates, so if
you kill a script before it terminates, only the xxxx.log file will
have been overwritten.
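
To run a single benchmark and follow its progress, you can invoke the
corresponding script directly (assuming 'xxxx' above stands for the
script name, so the 'small' script writes small.log; if the script is
not executable, run it through 'sh'):

  ./small &
  tail -f small.log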

* formulae.ltl

A list of LTL formulae used by the 'known' check. They come
from three sources:

  @InProceedings{ dwyer.98.fmsp,
    author     = {Matthew B. Dwyer and George S. Avrunin and James C.
                  Corbett},
    title      = {Property Specification Patterns for Finite-state
                  Verification},
    booktitle  = {Proceedings of the 2nd Workshop on Formal Methods in
                  Software Practice (FMSP'98)},
    publisher  = {ACM Press},
    address    = {New York},
    editor     = {Mark Ardis},
    month      = mar,
    year       = {1998},
    pages      = {7--15}
  }

  @InProceedings{ etessami.00.concur,
    author     = {Kousha Etessami and Gerard J. Holzmann},
    title      = {Optimizing {B\"u}chi Automata},
    booktitle  = {Proceedings of the 11th International Conference on
                  Concurrency Theory (Concur'00)},
    pages      = {153--167},
    year       = {2000},
    editor     = {C. Palamidessi},
    volume     = {1877},
    series     = {Lecture Notes in Computer Science},
    address    = {Pennsylvania, USA},
    publisher  = {Springer-Verlag}
  }

  @InProceedings{ somenzi.00.cav,
    author     = {Fabio Somenzi and Roderick Bloem},
    title      = {Efficient {B\"u}chi Automata for {LTL} Formul{\ae}},
    booktitle  = {Proceedings of the 12th International Conference on
                  Computer Aided Verification (CAV'00)},
    pages      = {247--263},
    year       = {2000},
    volume     = {1855},
    series     = {Lecture Notes in Computer Science},
    address    = {Chicago, Illinois, USA},
    publisher  = {Springer-Verlag}
  }

In the known benchmark, we use both positive and negated versions
of these formulae.

* sum.py

This script reads all the *.json files, and writes out a LaTeX file
with summary tables.


=======================
 Reading the summaries
=======================

The various outputs (CSV, JSON, and LaTeX) use column headers
that are described in doc/userdoc/ltlcross.html and also
in the ltlcross man page.
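
For a quick look at which columns your own run produced, you can list
the header of one of the CSV files (assuming the column names are on
the first line; 'small.csv' is just an example file name):

  head -1 small.csv | tr ',' '\n'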

The summary tables produced by sum.py accumulate all these results for
all formulae, tool by tool. They display an additional column, called
'count', giving the number of formulae successfully translated (the
missing formulae correspond to timeouts).

For all these values (except count), smaller numbers are better.


More details about ltlcross (used to produce these outputs) can be
found in its man page, and at http://spot.lrde.epita.fr/tools.html


==============================================
 Running different configurations / toolsets
==============================================

Instead of modifying the 'tools' file, you can also set the TOOLS
environment variable to point to another file.

For instance try

  TOOLS=./tools.sim make -j3 run

to benchmark several simulation-related options.