This directory contains benchmark scripts for LTL-to-Büchi translators.

====================
 QUICK INSTRUCTIONS
====================

Running 'make run' should run three benchmarks (you may want to use
'make run -j3' if you have three cores or more), and then summarize
the results in a LaTeX document that is compiled into 'result.pdf'.

The summary script requires python3 to be installed, and the LaTeX
compilation obviously needs some LaTeX distribution.

The three benchmarks feature respectively 200, 200, and 184 formulas,
to be translated (when the corresponding tools were found by
'configure') by spin, ltl2ba, ltl3ba (4 configurations), and ltl2tgba
(2 configurations).  Each translation is subject to a 10-minute
timeout, but with a total of 4672 translations to perform, the whole
run can take a long time.  If you want to speed things up, you may
edit the 'tools' file (described below) to remove tools or lower the
timeout.
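
For reference, the whole flow boils down to the following (assuming
Spot has already been configured and built):

  make run -j3   # run the 'small', 'big', and 'known' benchmarks in
                 # parallel, then build result.pdf from sum.py's tables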


==========
 CONTENTS
==========

Here are the different scripts used, in case you want to customize
this benchmark.

* tools

  The configuration of all the translators.  This is merely a script
  that builds the ltlcross command line to be run by the next three
  scripts.  Most of the variables (like $SPIN, $LTL2BA, etc.) are
  defined by the 'defs' file, which is output by 'configure' after
  checking for the presence of these tools.

  If you want to add your own tool to the mix, simply modify this
  'tools' file.

  The timeout value, common to the three benchmarks, is also set here.
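
  As a rough illustration only (this is not the actual content of the
  'tools' file), a tool entry handed to ltlcross is a quoted command
  template using ltlcross' substitution sequences; the $TIMEOUT
  variable here is hypothetical:

    # %s stands for the formula in Spin's syntax, %f for the formula
    # in Spot's syntax; %N (resp. %T) names the file in which the tool
    # must write a never claim (resp. an automaton in LBTT's format).
    ltlcross --timeout=$TIMEOUT \
      "$SPIN -f %s >%N" \
      "$LTL2BA -f %s >%N" \
      "$LTL2TGBA --lbtt %f >%T"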

* small
* big
* known

  Three scripts that run ltlcross on, respectively:
    100 small formulae (size 10, 4 propositions) and their negations
    100 big formulae (size 12..15, 8 propositions) and their negations
    92 known formulae (produced by genltl, see below) and their negations

  Each script generates 3 files:
    xxxx.log: the log of ltlcross' execution, updated as the script goes
    xxxx.csv: the results in CSV format
    xxxx.json: the results in JSON format

  The last two files are only output when ltlcross terminates, so if
  you kill a script before it terminates, only the xxxx.log file will
  have been overwritten.
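
  In spirit, each of these scripts boils down to a pipeline of the
  following shape (a simplified sketch, not the scripts' actual text;
  in reality the tool list comes from the 'tools' file):

    # Sketch of 'small': generate 100 random formulas over 4 atomic
    # propositions, add their negations (randltl is deterministic by
    # default, so both calls produce the same formulas), and run
    # ltlcross on the result.
    {
      randltl -n 100 --tree-size=10 a b c d
      randltl -n 100 --tree-size=10 a b c d | ltlfilt --negate
    } |
    ltlcross -F - --timeout=600 \
      --csv=small.csv --json=small.json \
      "ltl2tgba --lbtt %f >%T"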


  The known LTL formulas are generated by genltl, and come from the
  following three papers:

    @InProceedings{   dwyer.98.fmsp,
      author        = {Matthew B. Dwyer and George S. Avrunin and
                       James C. Corbett},
      title         = {Property Specification Patterns for Finite-state
                       Verification},
      booktitle     = {Proceedings of the 2nd Workshop on Formal Methods
                       in Software Practice (FMSP'98)},
      publisher     = {ACM Press},
      address       = {New York},
      editor        = {Mark Ardis},
      month         = mar,
      year          = {1998},
      pages         = {7--15}
    }

    @InProceedings{   etessami.00.concur,
      author        = {Kousha Etessami and Gerard J. Holzmann},
      title         = {Optimizing {B\"u}chi Automata},
      booktitle     = {Proceedings of the 11th International Conference
                       on Concurrency Theory (Concur'00)},
      pages         = {153--167},
      year          = {2000},
      editor        = {C. Palamidessi},
      volume        = {1877},
      series        = {Lecture Notes in Computer Science},
      address       = {Pennsylvania, USA},
      publisher     = {Springer-Verlag}
    }

    @InProceedings{   somenzi.00.cav,
      author        = {Fabio Somenzi and Roderick Bloem},
      title         = {Efficient {B\"u}chi Automata for {LTL} Formul{\ae}},
      booktitle     = {Proceedings of the 12th International Conference
                       on Computer Aided Verification (CAV'00)},
      pages         = {247--263},
      year          = {2000},
      volume        = {1855},
      series        = {Lecture Notes in Computer Science},
      address       = {Chicago, Illinois, USA},
      publisher     = {Springer-Verlag}
    }

  In the 'known' benchmark, we use both the positive and negated
  versions of these formulas, yielding 178 unique formulas.
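
  genltl has options selecting pattern families from each of these
  papers, so a command in the following spirit can regenerate such a
  list (a sketch; the exact selection used by 'known' may differ):

    # Emit the patterns from the three papers above, append their
    # negations, and keep a single copy of each distinct formula.
    genltl --dac-patterns --eh-patterns --sb-patterns > pos.ltl
    ltlfilt --negate pos.ltl > neg.ltl
    cat pos.ltl neg.ltl | ltlfilt --unique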

* sum.py

  This script reads all the *.json files, and writes out a LaTeX file
  with summary tables.
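
  If you ever need to rebuild the summary by hand (the Makefile's
  'run' target normally takes care of this), something along these
  lines should work; whether sum.py writes to stdout is an assumption,
  so check the Makefile for the exact command:

    python3 sum.py > result.tex   # assumed interface, see the Makefile
    pdflatex result.tex           # produces result.pdf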


=======================
 Reading the summaries
=======================

The various outputs (CSV, JSON, or LaTeX) use column headers that
are described in doc/userdoc/ltlcross.html and also in the ltlcross
man page.

The summary tables produced by sum.py accumulate all these results for
all formulae, tool by tool.  They display an additional column, called
'count', giving the number of formulae successfully translated (the
missing formulae correspond to timeouts).

For all these values (except count), smaller numbers are better.
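
For a quick look at the raw data without going through LaTeX, the CSV
files can be inspected with standard shell tools, for instance:

  # list the columns available in one of the CSV files
  head -n 1 small.csv | tr ',' '\n'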


More details about ltlcross (used to produce these outputs) can be
found in its man page, and at http://spot.lrde.epita.fr/tools.html


=============================================
 Running different configurations / toolsets
=============================================

Instead of modifying the 'tools' file, you can also set the TOOLS
environment variable to point to another file.

For instance, try

  TOOLS=./tools.sim make -j3 run

to benchmark several simulation-related options.