This benchmark was used in the FORTE'14 paper "Mechanizing the
Minimization of Deterministic Generalized Büchi Automata", by
Souheib Baarir and Alexandre Duret-Lutz.  The generated data
used in the paper are at

  https://www.lrde.epita.fr/~adl/forte14/

Note that the encoding used in the SAT-based minimization has evolved
since the paper, so if you want to reproduce the benchmark from the
paper exactly, you should download a copy of Spot 1.2.3.


To reproduce, follow these instructions:

1) Compile and install Spot, if this is not already done.
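
   If you build Spot from a source tree, a typical autotools sequence
   (the configure options and installation prefix are up to you) looks
   like:

   % ./configure && make && sudo make install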

2) Compile Glucose 3 from http://www.labri.fr/perso/lsimon/glucose/
   and make sure the 'glucose' binary is in your PATH.

3) Compile ltl2dstar from http://www.ltl2dstar.de/ and
   make sure the 'ltl2dstar' binary is in your PATH.

4) Compile DBAminimizer from
   http://www.react.uni-saarland.de/tools/dbaminimizer

   and define the path to minimize.py with

   % export DBA_MINIMIZER=$HOME/dbaminimizer/minimize.py
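
   As a quick sanity check, you can verify that the three external
   tools from steps 2-4 are reachable before going further (these are
   plain shell commands, nothing benchmark-specific):

   % command -v glucose
   % command -v ltl2dstar
   % test -f "$DBA_MINIMIZER" && echo DBA_MINIMIZER looks fine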

5) Then make sure you are in this directory:

   % cd bench/dtgbasat/

6) Classify the set of formulas with

   % ./prepare.sh

   This will read the formulas from the file "formulas" and output a
   file "info.ltl", in which each formula is associated with a class
   (the ①, ②, ③, and ④ of the paper) and with a number of acceptance
   conditions (the "m" of the paper).

7) Build a makefile to run all experiments

   % ./stats.sh

8) You may optionally change the directory that contains temporary
   files with

   % export TMPDIR=someplace

   (These temporary files can be several GB large.)

   Note that the runtime will be limited to 2 hours, but memory is not
   bounded.  You should set such a limit with "ulimit" if you like.
   For instance

   % ulimit -v 41943040

   limits the virtual memory to 40 GiB (with most shells, ulimit -v
   takes a value in KiB).

9) Actually run all experiments

   % make -j4 -f stat.mk

   This will build a CSV file called "all.log".
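
   Since the whole run can take a long time, it may be convenient to
   run make in the background and keep a log of its output; one
   possible way (the log file name is arbitrary) is:

   % nohup make -j4 -f stat.mk > stat-mk.log 2>&1 &

   If the run is interrupted, re-running the same make command should
   skip the targets that were already completed, as usual with make.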

10) You may generate LaTeX code for the tables with the scripts:
   - tabl.pl: Full data.
   - tabl1.pl, tabl2.pl, tabl3.pl, tabl4.pl: Partial tables as shown
     in the paper.

   All these scripts take the CSV file all.log as their first
   argument and output LaTeX code to their standard output.
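
   For instance, to typeset the full table (the output file name is
   arbitrary):

   % ./tabl.pl all.log > tabl.tex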


For more instructions about how to use ltl2tgba and dstar2tgba to
compute minimal DTGBA or DTBA, please read doc/userdoc/satmin.html.
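
For instance, with a recent enough Spot, a single formula can be turned
into a SAT-minimized deterministic automaton with a command along these
lines (check satmin.html for the exact options supported by your
version):

  % ltl2tgba -D -x sat-minimize 'GFa & GFb'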