This benchmark was used in the FORTE'14 paper "Mechanizing the
Minimization of Deterministic Generalized Büchi Automata", by
Souheib Baarir and Alexandre Duret-Lutz.  The generated data
used in the paper are at

  https://www.lrde.epita.fr/~adl/forte14/

Note that the encoding used in the SAT-based minimization has evolved
since the paper, so if you want to reproduce exactly the benchmark
from the paper, you should download a copy of Spot 1.2.3.

This benchmark has grown since FORTE'14: several new SAT-based methods
have been implemented since then, and the benchmark now measures all of
them and identifies the best one.

To reproduce, follow these instructions:

1) Compile and install Spot, if that is not already done.
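
   If Spot is not installed yet, the usual Autotools sequence should
   work (this is only a sketch; adjust the configure options and the
   installation prefix to your setup):

   % ./configure && make && sudo make install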

2) Compile Glucose 3 from http://www.labri.fr/perso/lsimon/glucose/
   and make sure the 'glucose' binary is in your PATH.
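
   For instance, if the compiled 'glucose' binary ended up in
   $HOME/glucose/simp (the exact location depends on how you built it),
   you could run:

   % export PATH=$HOME/glucose/simp:$PATH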

3) Compile ltl2dstar from http://www.ltl2dstar.de/ and
   make sure the 'ltl2dstar' binary is in your PATH.

4) Compile DBAminimizer from
   http://www.react.uni-saarland.de/tools/dbaminimizer

   and define the path to minimize.py with

   % export DBA_MINIMIZER=$HOME/dbaminimizer/minimize.py

5) Then make sure you are in this directory:

   % cd bench/dtgbasat/

6) Classify the set of formulas with

   % ./prepare.sh

   This will read the formulas from the file "formulas" and output a file
   "info.ltl", in which each formula is associated with a class (the ①,
   ②, ③, and ④ of the paper) and a number of acceptance conditions
   (the "m" of the paper).

7) Build a makefile to run all experiments

   % ./stats.sh

8) You may optionally change the directory that contains temporary
   files with

   % export TMPDIR=someplace

   (These temporary files can be several GB large.)

   Note that the runtime will be limited to 2h, but memory is not
   bounded.  You should set such a limit with "ulimit" if you like.
   For instance, the following limits the virtual memory to 40GB
   (ulimit -v takes a value in kilobytes):

   % ulimit -v 41943040

9) Since Spot now implements several SAT-based minimization methods, you
   may want to restrict the set of methods to be benchmarked before
   running the experiments.  Have a look at the 'config.bench' file: by
   default it lists all the possible methods.  Leave it unchanged if you
   want to compare all of them.  If you change it, you need to regenerate
   the stat-gen.sh file by running:

   % ./gen.py script --timeout <int> --unit <h|m|s>
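
   For example, to keep the 2-hour timeout mentioned above (these option
   values are only an illustration):

   % ./gen.py script --timeout 2 --unit h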

10) Actually run all experiments

    % make -j4 -f stat.mk

    This will build a CSV file called "all.csv".
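
    Once the run is finished, you can check that the file was produced
    and peek at its first lines with standard shell tools, e.g.:

    % head all.csv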

11) You may generate LaTeX code for the tables with 'gen.py'.  This
    script is quite helpful as it can also work from partial results.
    For instance, if after running all benchmarks you only want to
    compare two of the methods, just comment out the others in the
    configuration file 'config.bench' and run this script:

    % ./gen.py results

    This script reads the all.csv file generated at the end of the
    benchmark and outputs two PDF documents:
    - results.pdf, which contains all statistics about each formula for
      each method;
    - resume.pdf, which presents two tables counting how many times a
      method is better than another.


For more instructions about how to use ltl2tgba and dstar2tgba to
compute a minimal DTGBA or DTBA, please read doc/userdoc/satmin.html.