ltlcross: report exit_status and exit_code columns in CSV and JSON

* src/bin/ltlcross.cc: Report exit_status and exit_code columns in CSV
and JSON files.  Also output lines for failed translations, and add
a --omit-missing option to disable that.  Move the time column right
after exit_status and exit_code.
* src/bin/man/ltlcross.x: Document each column of the output.
* bench/ltl2tgba/tools: Use the "{name}cmd" notation.
* bench/ltl2tgba/sum.py: Adjust to the new columns.
* bench/ltl2tgba/README: Update to point to the man page for a
description of the columns.
* bench/ltl2tgba/Makefile.am: Build results.pdf as announced in the
README.
* bench/spin13/html.bottom: Update code to ignore these two new
columns and lines with null values.
* src/tgbatest/ltlcross3.test: Add tests.
* doc/org/ltlcross.org: Adjust examples.
* NEWS: Mention this.
This commit is contained in:
Alexandre Duret-Lutz 2013-11-19 23:22:29 +01:00
parent 686a45484d
commit f65c621a55
10 changed files with 430 additions and 202 deletions

NEWS

@ -7,7 +7,13 @@ New in spot 1.2a (not released)
ltlcross '{small} ltl2tgba -s --small %f >%N' ...
will run the command "ltl2tgba -s --small %f >%N", but only
print "small" in output files.
- ltlcross' CSV and JSON output now contains two additional
columns: exit_status and exit_code, used to report failures of
the translator. If the translation failed, only the time is
reported, and the rest of the statistics, which are missing,
are left empty (in CSV) or null (in JSON). A new option,
--omit-missing can be used to remove lines for failed
translations, and remove these two columns.
* Bug fixes:
- ltlcross' CSV output now strictly follows RFC 4180.
- ltlcross failed to report missing input or output escape sequences


@ -34,9 +34,11 @@ OUTLOG = $(OUTPUTS:=.log)
CLEANFILES = $(OUTCSV) $(OUTJSON) $(OUTLOG) \
results.pdf results.aux results.log results.tex
.PHONY = run
.PHONY = run json
run: $(OUTJSON)
run: results.pdf
json: $(OUTJSON)
deps = $(srcdir)/tools \
$(top_srcdir)/configure.ac \


@ -117,40 +117,16 @@ this benchmark.
Reading the summaries
=======================
The various outputs (CSV, JSON, our LaTeX) may use the following
column headers:
* input: formula translated (as a string in the CSV output, and as
an index into the input table in the JSON output)
* tool: tool used to translate this formula (idem)
* states: number of states of the resulting automaton
* edges: number of physical arcs between these states
* transitions: number of logical transitions from one state to the other
(for instance if the atomic propositions are 'a' and 'b', an
edge labeled by 'a' represents two transitions labeled by
'a&b' and 'a&!b')
* acc: number of acceptance sets used; it should always be one
in this automaton since we are producing (degeneralized)
Büchi automata.
* scc: number of strongly connected components in the produced automaton
* nondetstates: number of nondeterministic states
* nondeterministic: 0 if the automaton is deterministic
(no nondeterministic state), 1 otherwise
* time: time required by the translation (although this is measured with
the highest-resolution clock available, it is "wall time", so it
can be affected by the machine's load).
* product_states: number of states in a product if the automaton with some
random Kripke structure (one unique Kripke structure is generated
for each formula and used with automata produced by all tools)
* product_transitions: number of transitions in that product
* product_scc: number of strongly connected components in that product
The various outputs (CSV, JSON, and LaTeX) use column headers
that are described in doc/userdoc/ltlcross.html and also
in the ltlcross man page.
The summary tables produced by sum.py accumulate all these results for
all formulae, tool by tool. They display an additional column, called
'count', giving the number of formulae successfully translated (the
missing formulae correspond to timeouts).
For all these values (except count), the sammler number the better.
For all these values (except count), smaller numbers are better.
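The 'count' column can be recomputed directly from a JSON file.  The
sketch below uses a hypothetical miniature of such a file (the real
one has more columns), relying only on the fact that failed
translations carry null statistics:

```python
import json

# Hypothetical miniature of an ltlcross JSON file.  Each result row is
# [formula_index, tool_index, exit_status, exit_code, time, states, ...];
# "states" is null (None in Python) when the translation failed.
data = json.loads('''{
  "tool": ["toolA", "toolB"],
  "results": [[0, 0, "ok", 0, 0.1, 3],
              [1, 0, "ok", 0, 0.2, 5],
              [0, 1, "timeout", -1, 10.0, null]]
}''')

# Count, for each tool, the formulas that were successfully translated.
count = {t: 0 for t in range(len(data["tool"]))}
for row in data["results"]:
    if row[5] is not None:       # statistics present => translation succeeded
        count[row[1]] += 1
print(count)
```

This is essentially what sum.py does before averaging the remaining
columns.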
More details about ltlcross (used to produce these outputs) can be


@ -47,23 +47,26 @@ def process_file(filename):
ncols = len(data['fields'])
ntools = len(data['tool'])
datacols = range(2, ncols)
datacols = range(4, ncols)
fields = { name:index for index,name in enumerate(data["fields"]) }
toolcol = fields['tool']
inputcol = fields['formula']
statescol = fields['states']
inputs = data["inputs"]
# Index results by tool, then input
results = { t:{} for t in range(0, ntools) }
for l in data["results"]:
if l[statescol] == None:
continue
results[l[toolcol]][l[inputcol]] = l
for i in range(0, ntools):
# Remove any leading directory, and trailing %...
name = data["tool"][i]
name = name[name.rfind('/', 0, name.find(' ')) + 1:]
data["tool"][i] = latex_escape(name[0:name.find('%')])
data["tool"][i] = latex_escape(name[0:(name+'%').find('%')])
timecol = fields['time']
@ -72,7 +75,7 @@ def process_file(filename):
\subsection*{Cumulative summary}''' % latex_escape(filename))
print('\\noindent\\begin{tabular}{l' + ('r' * (ncols - 1)) + '}\n',
" & ".join(rot(latex_escape(["tool", "count"] + data["fields"][2:]))),
" & ".join(rot(latex_escape(["tool", "count"] + data["fields"][4:]))),
"\\\\")
for i in range(0, ntools):
# Compute sums over each column.
@ -96,7 +99,6 @@ states and more transitions.
header += 'c'
header += '}'
statescol = fields['states']
transcol = fields['transitions']
print(header)


@ -25,17 +25,17 @@
set dummy
# Add third-party tools if they are available.
test -n "$SPIN" && set "$@" "$SPIN -f %s >%N"
test -n "$LTL2BA" && set "$@" "$LTL2BA -f %s >%N"
test -n "$LTL3BA" && set "$@" "$LTL3BA -f %s >%N" \
"$LTL3BA -M -f %s >%N" \
"$LTL3BA -S -f %s >%N" \
"$LTL3BA -S -M -f %s >%N"
test -n "$SPIN" && set "$@" "{spin} $SPIN -f %s >%N"
test -n "$LTL2BA" && set "$@" "{ltl2ba} $LTL2BA -f %s >%N"
test -n "$LTL3BA" && set "$@" "{ltl3ba} $LTL3BA -f %s >%N" \
"{ltl3ba-M} $LTL3BA -M -f %s >%N" \
"{ltl3ba-S} $LTL3BA -S -f %s >%N" \
"{ltl3ba-SM} $LTL3BA -S -M -f %s >%N"
# Use -s to output a neverclaim, like the other tools.
set "$@" \
"$LTL2TGBA --det -s %s >%N" \
"$LTL2TGBA --small -s %s >%N"
"{ltl2tgba-D} $LTL2TGBA --det -s %s >%N" \
"{ltl2tgba} $LTL2TGBA --small -s %s >%N"
# If you want to add your own tool, you can add it here.
# See 'man ltlcross' for the list of %-escapes you may use


@ -303,7 +303,7 @@ var tfield;
for (var m = 0; m < nfields; ++m)
{
if (m == ifield || m == tfield)
if (!is_datacol(m))
continue;
var tmp = [];
for (var t = 0; t < ntools; ++t)
@ -352,16 +352,29 @@ var tfield;
for (var c = 0; c < nfields; ++c)
{
var name = rawdata.fields[c];
if (name == 'exit_status')
results.addColumn('string', name, name);
else
results.addColumn('number', name, name);
fields[name] = c;
}
// FIXME: we should use rawdata.inputs to set ifield and tfield
tfield = fields['tool'];
ifield = fields['formula'];
sfield = fields['states'];
esfield = fields['exit_status'];
ecfield = fields['exit_code'];
is_datacol = function(c) {
return (c != ifield && c != tfield && c != esfield && c != ecfield);
}
nresults = rawdata.results.length;
for (var r = 0; r < nresults; ++r)
{
var row = rawdata.results[r];
if (row[sfield] == null)
continue;
results.addRow(row);
hashres[[row[tfield],row[ifield]]] = row;
}
@ -412,7 +425,7 @@ var tfield;
var col = 2;
for (var c = 0; c < nfields; ++c)
{
if (c != ifield && c != tfield)
if (is_datacol(c))
{
aggreg.push({column:c, aggregation: google.visualization.data.sum, type: 'number'})
aggreg.push({column:c, aggregation: google.visualization.data.avg, type: 'number'})
@ -469,7 +482,7 @@ var tfield;
var pos = 3;
for (var m = 0; m < nfields; ++m)
{
if (m != ifield && m != tfield)
if (is_datacol(m))
{
var row = [rawdata.fields[m]];
var max = sumresults.getColumnRange(pos + 2)['max'];
@ -498,7 +511,7 @@ var tfield;
sel3.append($("<option>").attr('value', -1).text('(nothing)'));
for (var c = 0; c < nfields; ++c)
{
if (c != ifield && c != tfield)
if (is_datacol(c))
{
sel.append($("<option>").attr('value', c).text(rawdata.fields[c]));
sel2.append($("<option>").attr('value', c).text(rawdata.fields[c]));
@ -525,7 +538,7 @@ var tfield;
var first = true;
for (var c = 0; c < nfields; ++c)
{
if (c != ifield && c != tfield)
if (is_datacol(c))
{
sel.append($("<option>").attr('value', c).text(rawdata.fields[c]));
if (first)


@ -180,22 +180,22 @@ ltlcross --csv=results.csv --json=results.json \
#+END_SRC
#+RESULTS:
#+begin_example
-:1: (G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))
Running [P0]: ltl2tgba -s '(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))' >'lck-o0-HcRzrd'
Running [P1]: spin -f '([]((((p0) && (<>(p1))) U ((p1) U ((p1) && ((!(p2)) V (p0))))) V ((((p0) && (<>(p1))) U ((p1) U ((p1) && ((!(p2)) V (p0))))) || (X(p1)))))' >'lck-o1-Sir9YC'
Running [P2]: lbt < 'lck-i0-W7LdjO' >'lck-o2-ZACV3b'
Running [N0]: ltl2tgba -s '(!(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1))))))' >'lck-o0-KoveKk'
Running [N1]: spin -f '(!([]((((p0) && (<>(p1))) U ((p1) U ((p1) && ((!(p2)) V (p0))))) V ((((p0) && (<>(p1))) U ((p1) U ((p1) && ((!(p2)) V (p0))))) || (X(p1))))))' >'lck-o1-xxXdfU'
Running [N2]: lbt < 'lck-i0-tcO4oL' >'lck-o2-QQUs0t'
-:1: (G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))
Running [P0]: ltl2tgba -s '(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))' >'lcr-o0-I7yemj'
Running [P1]: spin -f '([](((p0) U ((p0) && ([](p1)))) V (([](p1)) || ((p0) U ((p0) && ([](p1)))))))' >'lcr-o1-VFo8q2'
Running [P2]: lbt < 'lcr-i0-SYp5lA' >'lcr-o2-GIQcxL'
Running [N0]: ltl2tgba -s '(!(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1))))))))' >'lcr-o0-jppaKd'
Running [N1]: spin -f '(!([](((p0) U ((p0) && ([](p1)))) V (([](p1)) || ((p0) U ((p0) && ([](p1))))))))' >'lcr-o1-7v46UW'
Running [N2]: lbt < 'lcr-i0-8AoGDu' >'lcr-o2-PzOH6F'
Performing sanity checks and gathering statistics...
-:2: (!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))
Running [P0]: ltl2tgba -s '(!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))' >'lck-o0-qlcvic'
Running [P1]: spin -f '(!((!(p1)) || (!(((!([]((p0) || (<>(p1))))) && ((p0) || (X(p1)))) || (([]((p0) || (<>(p1)))) && (!((p0) || (X(p1)))))))))' >'lck-o1-fEBqz3'
Running [P2]: lbt < 'lck-i1-sint9k' >'lck-o2-6oY4RU'
Running [N0]: ltl2tgba -s '((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))' >'lck-o0-6PQGuD'
Running [N1]: spin -f '(!(p1)) || (!(((!([]((p0) || (<>(p1))))) && ((p0) || (X(p1)))) || (([]((p0) || (<>(p1)))) && (!((p0) || (X(p1)))))))' >'lck-o1-1l4NVu'
Running [N2]: lbt < 'lck-i1-iEEnbM' >'lck-o2-a2Toum'
-:2: (!((G(F(p0))) -> (F(p1))))
Running [P0]: ltl2tgba -s '(!((G(F(p0))) -> (F(p1))))' >'lcr-o0-J9i0Ac'
Running [P1]: spin -f '(!((<>(p1)) || (!([](<>(p0))))))' >'lcr-o1-N2NUTX'
Running [P2]: lbt < 'lcr-i1-T8OQlr' >'lcr-o2-xaj9cJ'
Running [N0]: ltl2tgba -s '(G(F(p0))) -> (F(p1))' >'lcr-o0-YfWgQf'
Running [N1]: spin -f '(<>(p1)) || (!([](<>(p0))))' >'lcr-o1-cpcHd1'
Running [N2]: lbt < 'lcr-i1-vqYHwu' >'lcr-o2-WCqqBM'
Performing sanity checks and gathering statistics...
No problem detected.
@ -208,24 +208,26 @@ cat results.csv
#+END_SRC
#+RESULTS:
#+begin_example
"formula", "tool", "states", "edges", "transitions", "acc", "scc", "nonacc_scc", "terminal_scc", "weak_scc", "strong_scc", "nondetstates", "nondeterministic", "terminal_aut", "weak_aut", "strong_aut", "time", "product_states", "product_transitions", "product_scc"
"(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))", "ltl2tgba -s %f >%N", 7, 27, 42, 1, 1, 0, 0, 0, 1, 5, 1, 0, 0, 1, 0.162927, 1333, 20565, 3
"(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))", "spin -f %s >%N", 55, 957, 1723, 1, 1, 0, 0, 0, 1, 55, 1, 0, 0, 1, 3.83261, 10791, 866615, 37
"(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))", "lbt < %L >%T", 167, 5656, 10744, 3, 2, 1, 0, 0, 1, 167, 1, 0, 0, 1, 0.0365079, 32258, 5318535, 96
"(!(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1))))))", "ltl2tgba -s %f >%N", 11, 28, 72, 1, 10, 6, 1, 2, 1, 1, 1, 0, 0, 1, 0.0628941, 2163, 36722, 594
"(!(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1))))))", "spin -f %s >%N", 23, 113, 331, 1, 14, 9, 1, 1, 3, 20, 1, 0, 0, 1, 0.101343, 4567, 171114, 1193
"(!(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1))))))", "lbt < %L >%T", 157, 2414, 5957, 3, 109, 103, 1, 1, 4, 133, 1, 0, 0, 1, 0.0197828, 30811, 3020266, 19147
"(!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))", "ltl2tgba -s %f >%N", 6, 12, 21, 1, 5, 3, 0, 1, 1, 1, 1, 0, 0, 1, 0.0509422, 806, 15638, 9
"(!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))", "spin -f %s >%N", 11, 21, 47, 1, 8, 6, 0, 1, 1, 7, 1, 0, 0, 1, 0.0102468, 1217, 36416, 20
"(!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))", "lbt < %L >%T", 17, 45, 100, 2, 13, 11, 0, 1, 1, 14, 1, 0, 0, 1, 0.00346881, 1744, 57783, 347
"((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))", "ltl2tgba -s %f >%N", 7, 14, 28, 1, 6, 3, 1, 1, 1, 2, 1, 0, 0, 1, 0.0503676, 1006, 19822, 10
"((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))", "spin -f %s >%N", 17, 43, 102, 1, 13, 10, 1, 1, 1, 12, 1, 0, 0, 1, 0.0474604, 2449, 70190, 256
"((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))", "lbt < %L >%T", 23, 68, 154, 2, 19, 16, 1, 1, 1, 18, 1, 0, 0, 1, 0.0037305, 2236, 73111, 640
"formula","tool","exit_status","exit_code","time","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","product_states","product_transitions","product_scc"
"(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))","ltl2tgba -s %f >%N","ok",0,0.0315577,3,5,9,1,3,2,0,1,0,2,1,0,1,0,401,5168,3
"(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))","spin -f %s >%N","ok",0,0.00750061,6,13,18,1,3,2,0,0,1,6,1,0,0,1,999,14414,5
"(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))","lbt < %L >%T","ok",0,0.0022915,8,41,51,1,3,2,0,0,1,8,1,0,0,1,1397,43175,5
"(!(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1))))))))","ltl2tgba -s %f >%N","ok",0,0.0296025,4,10,16,1,3,1,1,0,1,0,0,0,0,1,797,16411,3
"(!(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1))))))))","spin -f %s >%N","ok",0,0.00388934,7,24,63,1,4,2,1,0,1,6,1,0,0,1,1400,64822,4
"(!(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1))))))))","lbt < %L >%T","ok",0,0.00268385,39,286,614,3,28,26,1,0,1,33,1,0,0,1,7583,600472,4394
"(!((G(F(p0))) -> (F(p1))))","ltl2tgba -s %f >%N","ok",0,0.0248804,2,4,4,1,1,0,0,0,1,0,0,0,0,1,399,4130,1
"(!((G(F(p0))) -> (F(p1))))","spin -f %s >%N","ok",0,0.0019908,2,3,5,1,1,0,0,0,1,1,1,0,0,1,399,5174,1
"(!((G(F(p0))) -> (F(p1))))","lbt < %L >%T","ok",0,0.00197792,5,10,15,1,4,3,0,0,1,5,1,0,0,1,407,6333,9
"(G(F(p0))) -> (F(p1))","ltl2tgba -s %f >%N","ok",0,0.0256492,3,5,11,1,3,1,1,1,0,1,1,0,1,0,600,11305,3
"(G(F(p0))) -> (F(p1))","spin -f %s >%N","ok",0,0.00185036,3,5,14,1,3,1,1,1,0,1,1,0,1,0,600,14397,3
"(G(F(p0))) -> (F(p1))","lbt < %L >%T","ok",0,0.00209399,11,18,54,2,11,9,1,1,0,5,1,0,1,0,1245,25838,449
#+end_example
This can be loaded in any spreadsheet application. Although we only
supplied 2 random generated formulas, the output contains 4 formulas because
=ltlcross= had to translate the positive and negative version of each.
This file can be loaded in any spreadsheet or statistical application.
Although we only supplied 2 randomly generated formulas, the output
contains 4 formulas because =ltlcross= had to translate the positive
and negative version of each.
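The CSV output can also be read back with Python's standard =csv=
module.  The sketch below uses a shortened, hypothetical excerpt
(only the first six columns) rather than a real results.csv:

```python
import csv
import io

# Shortened, hypothetical excerpt of a results.csv: only the first six
# columns, with one failed translation whose statistics are left empty.
sample = '''"formula","tool","exit_status","exit_code","time","states"
"(a)","small","ok",0,0.031,2
"(!(a))","small","timeout",-1,10.002,
'''

rows = list(csv.DictReader(io.StringIO(sample)))
# Missing statistics show up as empty strings; keep complete rows only.
complete = [r for r in rows if r["exit_status"] == "ok"]
print(len(rows), len(complete))   # prints: 2 1
```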
If we had used the option =--json=results.json= instead of (or in
addition to) =--csv=results.csv=, the file =results.json= would have
@ -243,28 +245,28 @@ cat results.json
"lbt < %L >%T"
],
"formula": [
"(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1)))))",
"(!(G((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) R ((((p0) & (F(p1))) U ((p1) U ((p1) & ((!(p2)) R (p0))))) | (X(p1))))))",
"(!(((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))))",
"((!(G((p0) | (F(p1))))) <-> ((p0) | (X(p1)))) -> (!(p1))"
"(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1)))))))",
"(!(G(((p0) U ((p0) & (G(p1)))) R ((G(p1)) | ((p0) U ((p0) & (G(p1))))))))",
"(!((G(F(p0))) -> (F(p1))))",
"(G(F(p0))) -> (F(p1))"
],
"fields": [
"formula", "tool", "states", "edges", "transitions", "acc", "scc", "nonacc_scc", "terminal_scc", "weak_scc", "strong_scc", "nondetstates", "nondeterministic", "terminal_aut", "weak_aut", "strong_aut", "time", "product_states", "product_transitions", "product_scc"
"formula","tool","exit_status","exit_code","time","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","product_states","product_transitions","product_scc"
],
"inputs": [ 0, 1 ],
"results": [
[ 0, 0, 7, 27, 42, 1, 1, 0, 0, 0, 1, 5, 1, 0, 0, 1, 0.162927, 1333, 20565, 3 ],
[ 0, 1, 55, 957, 1723, 1, 1, 0, 0, 0, 1, 55, 1, 0, 0, 1, 3.83261, 10791, 866615, 37 ],
[ 0, 2, 167, 5656, 10744, 3, 2, 1, 0, 0, 1, 167, 1, 0, 0, 1, 0.0365079, 32258, 5318535, 96 ],
[ 1, 0, 11, 28, 72, 1, 10, 6, 1, 2, 1, 1, 1, 0, 0, 1, 0.0628941, 2163, 36722, 594 ],
[ 1, 1, 23, 113, 331, 1, 14, 9, 1, 1, 3, 20, 1, 0, 0, 1, 0.101343, 4567, 171114, 1193 ],
[ 1, 2, 157, 2414, 5957, 3, 109, 103, 1, 1, 4, 133, 1, 0, 0, 1, 0.0197828, 30811, 3020266, 19147 ],
[ 2, 0, 6, 12, 21, 1, 5, 3, 0, 1, 1, 1, 1, 0, 0, 1, 0.0509422, 806, 15638, 9 ],
[ 2, 1, 11, 21, 47, 1, 8, 6, 0, 1, 1, 7, 1, 0, 0, 1, 0.0102468, 1217, 36416, 20 ],
[ 2, 2, 17, 45, 100, 2, 13, 11, 0, 1, 1, 14, 1, 0, 0, 1, 0.00346881, 1744, 57783, 347 ],
[ 3, 0, 7, 14, 28, 1, 6, 3, 1, 1, 1, 2, 1, 0, 0, 1, 0.0503676, 1006, 19822, 10 ],
[ 3, 1, 17, 43, 102, 1, 13, 10, 1, 1, 1, 12, 1, 0, 0, 1, 0.0474604, 2449, 70190, 256 ],
[ 3, 2, 23, 68, 154, 2, 19, 16, 1, 1, 1, 18, 1, 0, 0, 1, 0.0037305, 2236, 73111, 640 ]
[ 0,0,"ok",0,0.0315577,3,5,9,1,3,2,0,1,0,2,1,0,1,0,401,5168,3 ],
[ 0,1,"ok",0,0.00750061,6,13,18,1,3,2,0,0,1,6,1,0,0,1,999,14414,5 ],
[ 0,2,"ok",0,0.0022915,8,41,51,1,3,2,0,0,1,8,1,0,0,1,1397,43175,5 ],
[ 1,0,"ok",0,0.0296025,4,10,16,1,3,1,1,0,1,0,0,0,0,1,797,16411,3 ],
[ 1,1,"ok",0,0.00388934,7,24,63,1,4,2,1,0,1,6,1,0,0,1,1400,64822,4 ],
[ 1,2,"ok",0,0.00268385,39,286,614,3,28,26,1,0,1,33,1,0,0,1,7583,600472,4394 ],
[ 2,0,"ok",0,0.0248804,2,4,4,1,1,0,0,0,1,0,0,0,0,1,399,4130,1 ],
[ 2,1,"ok",0,0.0019908,2,3,5,1,1,0,0,0,1,1,1,0,0,1,399,5174,1 ],
[ 2,2,"ok",0,0.00197792,5,10,15,1,4,3,0,0,1,5,1,0,0,1,407,6333,9 ],
[ 3,0,"ok",0,0.0256492,3,5,11,1,3,1,1,1,0,1,1,0,1,0,600,11305,3 ],
[ 3,1,"ok",0,0.00185036,3,5,14,1,3,1,1,1,0,1,1,0,1,0,600,14397,3 ],
[ 3,2,"ok",0,0.00209399,11,18,54,2,11,9,1,1,0,5,1,0,1,0,1245,25838,449 ]
]
}
#+end_example
@ -278,23 +280,23 @@ of translators and formulas, because they don't have to be repeated on
each line as in the CSV version.
JSON data can be easily processed in any language. For instance the
following Python3 script averages each column for each tool, and
presents the results in a form that can almost be copied into a LaTeX
table (the =%= in the tool names have to be taken care of). Note that
for simplicity we assume that the first two columns are inputs,
instead of reading the =inputs= field.
following Python3 script averages each column (except the first four)
for each tool, and presents the results in a form that can almost be
copied into a LaTeX table (the =%= in the tool names have to be taken
care of). Note that for simplicity we assume that the first two
columns are inputs, instead of reading the =inputs= field.
#+BEGIN_SRC python :results output :exports both
#!/usr/bin/python3
import json
data = json.load(open('results.json'))
datacols = range(2, len(data["fields"]))
datacols = range(4, len(data["fields"]))
# Index results by tool
results = { t:[] for t in range(0, len(data["tool"])) }
for l in data["results"]:
results[l[1]].append(l)
# Average columns for each tool, and display them as a table
print("%-18s & count & %s \\\\" % ("tool", " & ".join(data["fields"][2:])))
print("%-18s & count & %s \\\\" % ("tool", " & ".join(data["fields"][4:])))
for i in range(0, len(data["tool"])):
c = len(results[i])
sums = ["%6.1f" % (sum([x[j] for x in results[i]])/c)
@ -303,10 +305,10 @@ for i in range(0, len(data["tool"])):
" & ".join(sums)))
#+END_SRC
#+RESULTS:
: tool & count & states & edges & transitions & acc & scc & nonacc_scc & terminal_scc & weak_scc & strong_scc & nondetstates & nondeterministic & terminal_aut & weak_aut & strong_aut & time & product_states & product_transitions & product_scc \\
: ltl2tgba -s %f >%N & 4 & 7.0 & 20.0 & 40.0 & 1.0 & 5.0 & 3.0 & 0.0 & 1.0 & 1.0 & 2.0 & 1.0 & 0.0 & 0.0 & 1.0 & 0.1 & 1327.0 & 23186.0 & 154.0 \\
: spin -f %s >%N & 4 & 26.0 & 283.0 & 550.0 & 1.0 & 9.0 & 6.0 & 0.0 & 0.0 & 1.0 & 23.0 & 1.0 & 0.0 & 0.0 & 1.0 & 1.0 & 4756.0 & 286083.0 & 376.0 \\
: lbt < %L >%T & 4 & 91.0 & 2045.0 & 4238.0 & 2.0 & 35.0 & 32.0 & 0.0 & 0.0 & 1.0 & 83.0 & 1.0 & 0.0 & 0.0 & 1.0 & 0.0 & 16762.0 & 2117423.0 & 5057.0 \\
: tool & count & time & states & edges & transitions & acc & scc & nonacc_scc & terminal_scc & weak_scc & strong_scc & nondet_states & nondet_aut & terminal_aut & weak_aut & strong_aut & product_states & product_transitions & product_scc \\
: ltl2tgba -s %f >%N & 4 & 0.0 & 3.0 & 6.0 & 10.0 & 1.0 & 2.5 & 1.0 & 0.5 & 0.5 & 0.5 & 0.8 & 0.5 & 0.0 & 0.5 & 0.5 & 549.2 & 9253.5 & 2.5 \\
: spin -f %s >%N & 4 & 0.0 & 4.5 & 11.2 & 25.0 & 1.0 & 2.8 & 1.2 & 0.5 & 0.2 & 0.8 & 3.5 & 1.0 & 0.0 & 0.2 & 0.8 & 849.5 & 24701.8 & 3.2 \\
: lbt < %L >%T & 4 & 0.0 & 15.8 & 88.8 & 183.5 & 1.8 & 11.5 & 10.0 & 0.5 & 0.2 & 0.8 & 12.8 & 1.0 & 0.0 & 0.2 & 0.8 & 2658.0 & 168954.5 & 1214.2 \\
The script =bench/ltl2tgba/sum.py= is a more evolved version of the
above script that generates two kinds of LaTeX tables.
@ -314,10 +316,14 @@ above script that generates two kinds of LaTeX tables.
When computing such statistics, you should be aware that inputs for
which a tool failed to generate an automaton (e.g. it crashed, or it
was killed if you used =ltlcross='s =--timeout= option to limit run
time) are not represented in the CSV or JSON files. However data for
bogus automata are still included: as shown below =ltlcross= will
report inconsistencies between automata as errors, but it does not try
to guess who is incorrect.
time) will appear as mostly empty lines in the CSV or JSON files,
since most statistics cannot be computed without an automaton...
Those lines with missing data can be omitted with the =--omit-missing=
option (this used to be the default up to Spot 1.2).
However data for bogus automata are still included: as shown below
=ltlcross= will report inconsistencies between automata as errors, but
it does not try to guess who is incorrect.
** Description of the columns
@ -326,6 +332,25 @@ run to translate it. In the CSV, these columns contain the actual
text.  In the JSON output, these columns contain an index into the
=formula= and =tool= tables declared separately.
=exit_status= and =exit_code= are used to indicate if the translator
successfully produced an automaton, or if it failed. On successful
translation, =exit_status= is equal to "=ok=" and =exit_code= is 0.
If the translation took more time than allowed with the =--timeout=
option, =exit_status= will contain "=timeout=" and =exit_code= will be
set to -1.  Other values are used to diagnose various issues: please
check the =ltlcross= man page for a list of them.
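A consumer of the CSV can dispatch on these two columns; for
instance, tallying the kinds of failures.  The rows below are
hypothetical, using the "=ok=" and "=timeout=" values documented
above:

```python
from collections import Counter

# Hypothetical (tool, exit_status, exit_code) triples as they could
# appear in the CSV; "ok"/0 and "timeout"/-1 are the documented values.
rows = [("ltl2tgba", "ok", 0),
        ("spin", "timeout", -1),
        ("lbt", "exit code", 1),
        ("ltl2tgba", "ok", 0)]

# Count each non-"ok" exit_status.
failures = Counter(status for _, status, _ in rows if status != "ok")
print(failures["timeout"], failures["exit code"])   # prints: 1 1
```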
=time= obviously contains the time used by the translation. Time is
measured with some high-resolution clock when available (that's
nanosecond accuracy under Linux), but because translator commands are
executed through a shell, it also includes the time to start a shell.
(This extra cost applies identically to all translators, so it is not unfair.)
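This measurement scheme can be approximated with the standard
library: run the command through a shell and take wall-clock
timestamps around it.  This is only a sketch of the idea, not
=ltlcross='s actual implementation, and the =sleep= command is just a
stand-in for a translator:

```python
import subprocess
import time

cmd = "sleep 0.1"                 # stand-in for a translator command line
before = time.perf_counter()      # high-resolution wall clock
status = subprocess.call(cmd, shell=True)   # runs through a shell
elapsed = time.perf_counter() - before

# elapsed includes the time to start the shell, as described above.
print(status, elapsed >= 0.1)
```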
All the values that follow will be missing if =exit_status= is not
equal to "=ok=".  (You can instruct =ltlcross= not to output lines with
such missing data by using the =--omit-missing= option.)
=states=, =edges=, =transitions=, =acc= are size measures for the
automaton that was translated. =acc= counts the number of acceptance
sets. When building (degeneralized) Büchi automata, it will always be
@ -400,12 +425,6 @@ showing two automata for =a U b=, the first automaton is deterministic
a nondeterministic state (state A2 has two possible successors for the
assignment $ab$) and is therefore not deterministic.
=time= obviously contains the time used by the translation. Time is
measured with some high-resolution clock when available (that's
nanosecond accuracy under Linux), but because translator commands are
executed through a shell, it also includes the time to start a shell.
(This extra cost apply identically to all translators, so it is not unfair.)
Finally, =product_states=, =product_transitions=, and =product_scc=
count the number of states, transitions, and strongly-connected components
in the product that has been built between the translated automaton
@ -438,15 +457,15 @@ configurations, and the strings =ltl2tgba -s --small %f >%N= and
ltlcross -f a -f Ga 'ltl2tgba -s --small %f >%N' 'ltl2tgba -s --deter %f >%N' --csv
#+END_SRC
#+RESULTS:
: "formula","tool","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","time","product_states","product_transitions","product_scc"
: "(a)","ltl2tgba -s --small %f >%N",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0309309,201,4092,2
: "(a)","ltl2tgba -s --deter %f >%N",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0280637,201,4092,2
: "(!(a))","ltl2tgba -s --small %f >%N",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.026676,201,4098,2
: "(!(a))","ltl2tgba -s --deter %f >%N",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0250905,201,4098,2
: "(G(a))","ltl2tgba -s --small %f >%N",1,1,1,1,1,0,0,1,0,0,0,0,1,0,0.0244696,200,2023,1
: "(G(a))","ltl2tgba -s --deter %f >%N",1,1,1,1,1,0,0,1,0,0,0,0,1,0,0.0240073,200,2023,1
: "(!(G(a)))","ltl2tgba -s --small %f >%N",2,3,4,1,2,1,1,0,0,0,0,1,0,0,0.0229872,400,8166,2
: "(!(G(a)))","ltl2tgba -s --deter %f >%N",2,3,4,1,2,1,1,0,0,0,0,1,0,0,0.0229391,400,8166,2
: "formula","tool","exit_status","exit_code","time","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","product_states","product_transitions","product_scc"
: "(a)","ltl2tgba -s --small %f >%N","ok",0,0.0274533,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4092,2
: "(a)","ltl2tgba -s --deter %f >%N","ok",0,0.0274613,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4092,2
: "(!(a))","ltl2tgba -s --small %f >%N","ok",0,0.0271082,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4098,2
: "(!(a))","ltl2tgba -s --deter %f >%N","ok",0,0.0263196,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4098,2
: "(G(a))","ltl2tgba -s --small %f >%N","ok",0,0.0232094,1,1,1,1,1,0,0,1,0,0,0,0,1,0,200,2023,1
: "(G(a))","ltl2tgba -s --deter %f >%N","ok",0,0.0253393,1,1,1,1,1,0,0,1,0,0,0,0,1,0,200,2023,1
: "(!(G(a)))","ltl2tgba -s --small %f >%N","ok",0,0.023919,2,3,4,1,2,1,1,0,0,0,0,1,0,0,400,8166,2
: "(!(G(a)))","ltl2tgba -s --deter %f >%N","ok",0,0.0235093,2,3,4,1,2,1,1,0,0,0,0,1,0,0,400,8166,2
To present these results graphically, or even when analyzing these
data, it might be convenient to give each configured tool a shorter
@ -459,15 +478,15 @@ For instance:
ltlcross -f a -f Ga '{small} ltl2tgba -s --small %f >%N' '{deter} ltl2tgba -s --deter %f >%N' --csv
#+END_SRC
#+RESULTS:
: "formula","tool","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","time","product_states","product_transitions","product_scc"
: "(a)","small",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0272853,201,4092,2
: "(a)","deter",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.02555,201,4092,2
: "(!(a))","small",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0242986,201,4098,2
: "(!(a))","deter",2,2,3,1,2,1,1,0,0,0,0,1,0,0,0.0239094,201,4098,2
: "(G(a))","small",1,1,1,1,1,0,0,1,0,0,0,0,1,0,0.0232251,200,2023,1
: "(G(a))","deter",1,1,1,1,1,0,0,1,0,0,0,0,1,0,0.0230073,200,2023,1
: "(!(G(a)))","small",2,3,4,1,2,1,1,0,0,0,0,1,0,0,0.0226587,400,8166,2
: "(!(G(a)))","deter",2,3,4,1,2,1,1,0,0,0,0,1,0,0,0.0229191,400,8166,2
: "formula","tool","exit_status","exit_code","time","states","edges","transitions","acc","scc","nonacc_scc","terminal_scc","weak_scc","strong_scc","nondet_states","nondet_aut","terminal_aut","weak_aut","strong_aut","product_states","product_transitions","product_scc"
: "(a)","small","ok",0,0.0310046,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4092,2
: "(a)","deter","ok",0,0.0292434,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4092,2
: "(!(a))","small","ok",0,0.027431,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4098,2
: "(!(a))","deter","ok",0,0.0259028,2,2,3,1,2,1,1,0,0,0,0,1,0,0,201,4098,2
: "(G(a))","small","ok",0,0.0245275,1,1,1,1,1,0,0,1,0,0,0,0,1,0,200,2023,1
: "(G(a))","deter","ok",0,0.0245139,1,1,1,1,1,0,0,1,0,0,0,0,1,0,200,2023,1
: "(!(G(a)))","small","ok",0,0.0241781,2,3,4,1,2,1,1,0,0,0,0,1,0,0,400,8166,2
: "(!(G(a)))","deter","ok",0,0.0235059,2,3,4,1,2,1,1,0,0,0,0,1,0,0,400,8166,2
* Detecting problems


@ -93,6 +93,7 @@ Exit status:\n\
#define OPT_PRODUCTS 9
#define OPT_COLOR 10
#define OPT_NOCOMP 11
#define OPT_OMIT 12
static const argp_option options[] =
{
@ -150,6 +151,8 @@ static const argp_option options[] =
"output statistics as JSON in FILENAME or on standard output", 0 },
{ "csv", OPT_CSV, "FILENAME", OPTION_ARG_OPTIONAL,
"output statistics as CSV in FILENAME or on standard output", 0 },
{ "omit-missing", OPT_OMIT, 0, 0,
"do not output statistics for timeouts or failed translations", 0 },
/**************************************************/
{ 0, 0, 0, 0, "Miscellaneous options:", -1 },
{ "color", OPT_COLOR, "WHEN", OPTION_ARG_OPTIONAL,
@ -166,7 +169,6 @@ const struct argp_child children[] =
{ 0, 0, 0, 0 }
};
enum color_type { color_never, color_always, color_if_tty };
static char const *const color_args[] =
@ -201,7 +203,7 @@ bool no_complement = false;
bool stop_on_error = false;
int seed = 0;
unsigned products = 1;
bool opt_omit = false;
struct translator_spec
{
@ -289,6 +291,9 @@ struct statistics
{
statistics()
: ok(false),
status_str(0),
status_code(0),
time(0),
states(0),
transitions(0),
acc(0),
@ -302,14 +307,18 @@ struct statistics
terminal_aut(false),
weak_aut(false),
strong_aut(false),
time(0),
product_states(0),
product_transitions(0),
product_scc(0)
{
}
// If OK is false, only the status_str, status_code, and time fields
// should be valid.
bool ok;
const char* status_str;
int status_code;
double time;
unsigned states;
unsigned edges;
unsigned transitions;
@ -324,15 +333,17 @@ struct statistics
bool terminal_aut;
bool weak_aut;
bool strong_aut;
double time;
double product_states;
double product_transitions;
double product_scc;
static void
fields(std::ostream& os)
fields(std::ostream& os, bool all)
{
os << ("\"states\","
if (all)
os << "\"exit_status\",\"exit_code\",";
os << ("\"time\","
"\"states\","
"\"edges\","
"\"transitions\","
"\"acc\","
@ -346,15 +357,18 @@ struct statistics
"\"terminal_aut\","
"\"weak_aut\","
"\"strong_aut\","
"\"time\","
"\"product_states\","
"\"product_transitions\","
"\"product_scc\"");
}
void
to_csv(std::ostream& os)
to_csv(std::ostream& os, bool all, const char* na = "")
{
if (all)
os << '"' << status_str << "\"," << status_code << ',';
os << time << ',';
if (ok)
os << states << ','
<< edges << ','
<< transitions << ','
@@ -369,10 +383,27 @@ struct statistics
<< terminal_aut << ','
<< weak_aut << ','
<< strong_aut << ','
<< time << ','
<< product_states << ','
<< product_transitions << ','
<< product_scc;
else
os << na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na << ','
<< na;
}
};
@@ -470,6 +501,9 @@ parse_opt(int key, char* arg, struct argp_state*)
case OPT_NOCOMP:
no_complement = true;
break;
case OPT_OMIT:
opt_omit = true;
break;
case OPT_SEED:
seed = to_pos_int(arg);
break;
@@ -784,27 +818,37 @@ namespace
int es = exec_with_timeout(cmd.c_str());
xtime_t after = gethrxtime();
const char* status_str = 0;
const spot::tgba* res = 0;
if (timed_out)
{
// This is not considered to be a global error.
std::cerr << "warning: timeout during execution of command\n";
++timeout_count;
status_str = "timeout";
es = -1;
}
else if (WIFSIGNALED(es))
{
status_str = "signal";
es = WTERMSIG(es);
global_error() << "error: execution terminated by signal "
<< WTERMSIG(es) << ".\n";
<< es << ".\n";
end_error();
}
else if (WIFEXITED(es) && WEXITSTATUS(es) != 0)
{
es = WEXITSTATUS(es);
status_str = "exit code";
global_error() << "error: execution returned exit code "
<< WEXITSTATUS(es) << ".\n";
<< es << ".\n";
end_error();
}
else
{
status_str = "ok";
es = 0;
switch (output.format)
{
case printable_result_filename::Spin:
@@ -814,6 +858,8 @@ namespace
res = spot::neverclaim_parse(filename, pel, &dict);
if (!pel.empty())
{
status_str = "parse error";
es = -1;
std::ostream& err = global_error();
err << "error: failed to parse the produced neverclaim.\n";
spot::format_neverclaim_parse_errors(err, filename, pel);
@@ -829,6 +875,8 @@ namespace
std::ifstream f(output.val()->name());
if (!f)
{
status_str = "no output";
es = -1;
global_error() << "Cannot open " << output.val()
<< std::endl;
end_error();
@@ -838,6 +886,8 @@ namespace
res = spot::lbtt_parse(f, error, &dict);
if (!res)
{
status_str = "parse error";
es = -1;
global_error() << ("error: failed to parse output in "
"LBTT format: ")
<< error << std::endl;
@@ -854,6 +904,8 @@ namespace
aut = spot::dstar_parse(filename, pel, &dict);
if (!pel.empty())
{
status_str = "parse error";
es = -1;
std::ostream& err = global_error();
err << "error: failed to parse the produced DSTAR"
" output.\n";
@@ -873,10 +925,18 @@ namespace
assert(!"unreachable code");
}
}
// Compute statistics.
if (res && want_stats)
if (want_stats)
{
statistics* st = &(*fstats)[translator_num];
st->status_str = status_str;
st->status_code = es;
double prec = XTIME_PRECISION;
st->time = (after - before) / prec;
// Compute statistics.
if (res)
{
st->ok = true;
spot::tgba_sub_statistics s = sub_stats_reachable(res);
st->states = s.states;
@@ -906,8 +966,7 @@ namespace
st->weak_aut = true;
else
st->terminal_aut = true;
double prec = XTIME_PRECISION;
st->time = (after - before) / prec;
}
}
output.cleanup();
return res;
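The hunk above classifies a raw POSIX wait status into the new exit_status/exit_code pair before the statistics are filled in. A rough Python sketch of the same decision logic (the `classify` helper is hypothetical; ltlcross does this in C++):

```python
import os

def classify(status):
    # Hypothetical helper mirroring the C++ logic above: turn a raw
    # POSIX wait status into ltlcross's (exit_status, exit_code) pair.
    if os.WIFSIGNALED(status):
        return ("signal", os.WTERMSIG(status))
    if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
        return ("exit code", os.WEXITSTATUS(status))
    return ("ok", 0)

# os.system() returns the raw wait status on POSIX systems.
print(classify(os.system("exit 3")))   # ('exit code', 3)
print(classify(os.system("true")))     # ('ok', 0)
```

The "timeout", "parse error", and "no output" cases have no wait-status equivalent; ltlcross assigns them exit_code -1 explicitly.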
@@ -1401,18 +1460,18 @@ print_stats_csv(const char* filename)
assert(rounds == formulas.size());
*out << "\"formula\",\"tool\",";
statistics::fields(*out);
statistics::fields(*out, !opt_omit);
*out << "\r\n";
for (unsigned r = 0; r < rounds; ++r)
for (unsigned t = 0; t < ntrans; ++t)
if (vstats[r][t].ok)
if (!opt_omit || vstats[r][t].ok)
{
*out << "\"";
spot::escape_rfc4180(*out, formulas[r]);
*out << "\",\"";
spot::escape_rfc4180(*out, translators[t].name);
*out << "\",";
vstats[r][t].to_csv(*out);
vstats[r][t].to_csv(*out, !opt_omit);
*out << "\r\n";
}
delete outfile;
@@ -1453,19 +1512,19 @@ print_stats_json(const char* filename)
spot::escape_str(*out, formulas[r]);
}
*out << ("\"\n ],\n \"fields\": [\n \"formula\",\"tool\",");
statistics::fields(*out);
statistics::fields(*out, !opt_omit);
*out << "\n ],\n \"inputs\": [ 0, 1 ],";
*out << "\n \"results\": [";
bool notfirst = false;
for (unsigned r = 0; r < rounds; ++r)
for (unsigned t = 0; t < ntrans; ++t)
if (vstats[r][t].ok)
if (!opt_omit || vstats[r][t].ok)
{
if (notfirst)
*out << ',';
notfirst = true;
*out << "\n [ " << r << ',' << t << ',';
vstats[r][t].to_csv(*out);
vstats[r][t].to_csv(*out, !opt_omit, "null");
*out << " ]";
}
*out << "\n ]\n}\n";

View file

@@ -68,6 +68,110 @@ When this variable is defined, temporary files are not removed.
This is mostly useful for debugging.
[OUTPUT DATA]
The following columns are output in the CSV or JSON files.
.TP 7
\fBformula\fR
The formula translated.
.TP
\fBtool\fR
The tool used to translate this formula. This is either the value of the
full \fICOMMANDFMT\fR string specified on the command-line, or,
if \fICOMMANDFMT\fR has the form \f(CW{\fISHORTNAME\fR\f(CW}\fR\fICMD\fR,
the value of \fISHORTNAME\fR.
.TP
\fBexit_status\fR, \fBexit_code\fR
Information about how the execution of the translator went. If the
option \fB\-\-omit\-missing\fR is given, these two columns are omitted
and only the lines corresponding to successful translations are output.
Otherwise, \fBexit_status\fR is a string that can take the following
values:
.RS
.TP
\f(CW"ok"\fR
The translator ran successfully (this does not imply that the produced
automaton is correct) and ltlcross could parse the resulting
automaton. In this case \fBexit_code\fR is always 0.
.TP
\f(CW"timeout"\fR
The translator ran for more than the number of seconds
specified with the \fB\-\-timeout\fR option. In this
case \fBexit_code\fR is always -1.
.TP
\f(CW"exit code"\fR
The translator terminated with a non-zero exit code.
\fBexit_code\fR contains that value.
.TP
\f(CW"signal"\fR
The translator terminated with a signal.
\fBexit_code\fR contains that signal's number.
.TP
\f(CW"parse error"\fR
The translator terminated normally, but ltlcross could not
parse its output. In this case \fBexit_code\fR is always -1.
.TP
\f(CW"no output"\fR
The translator terminated normally, but without creating the specified
output file. In this case \fBexit_code\fR is always -1.
.RE
.TP
\fBtime\fR
A floating point number giving the run time of the translator in seconds.
This is reported for all executions, even failing ones.
.PP
Unless the \fB\-\-omit\-missing\fR option is used, data for all the
following columns might be missing.
.TP
\fBstates\fR, \fBedges\fR, \fBtransitions\fR, \fBacc\fR
The number of states, edges, transitions, and acceptance sets in the
translated automaton. Column \fBedges\fR counts the number of edges
(labeled by Boolean formulas) in the automaton seen as a graph, while
\fBtransitions\fR counts the number of assignment-labeled transitions
that might have been merged into a formula-labeled edge. For instance
an edge labeled by \f(CWtrue\fR will be counted as 2^3=8 transitions if
the automaton mentions 3 atomic propositions.
.TP
\fBscc\fR, \fBnonacc_scc\fR, \fBterminal_scc\fR, \fBweak_scc\fR, \fBstrong_scc\fR
The number of strongly connected components in the automaton. The
\fBscc\fR column gives the total number, while the other columns only
count the SCCs that are non-accepting (a.k.a. transient), terminal
(recognize and accept all words), weak (do not recognize all words,
but accept all recognized words), or strong (accept some words, but
reject some recognized words).
.TP
\fBnondet_states\fR, \fBnondet_aut\fR
The number of nondeterministic states, and a Boolean indicating whether the
automaton is nondeterministic.
.TP
\fBterminal_aut\fR, \fBweak_aut\fR, \fBstrong_aut\fR
Three Booleans used to indicate whether the automaton is terminal (no
weak or strong SCCs), weak (some weak SCCs but no strong SCCs), or strong
(some strong SCCs).
.TP
\fBproduct_states\fR, \fBproduct_transitions\fR, \fBproduct_scc\fR
Size of the product between the translated automaton and a randomly
generated state-space. For a given formula, the same state-space is
of course used for the result of each translator. When the
\fB\-\-products\fR=\fIN\fR option is used, these values are averaged
over the \fIN\fR products performed.
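A script consuming this CSV must therefore treat the statistics columns as optional unless \-\-omit\-missing was used. A small Python sketch over a made-up two-line sample restricted to a few of the columns described above:

```python
import csv
import io

# Made-up sample of the CSV layout; the statistics columns are empty
# when exit_status is not "ok".  RFC 4180 mandates CRLF line endings.
SAMPLE = (
    '"formula","tool","exit_status","exit_code","time","states"\r\n'
    '"a","t1","ok",0,0.01,3\r\n'
    '"a","t2","timeout",-1,2.0,\r\n'
)

# Keep only rows whose statistics are actually present.
ok_rows = [r for r in csv.DictReader(io.StringIO(SAMPLE))
           if r["exit_status"] == "ok"]
print(len(ok_rows), ok_rows[0]["states"])   # 1 3
```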
[SEE ALSO]
.BR randltl (1),
.BR genltl (1),

View file

@@ -21,5 +21,52 @@
. ./defs
set -e
run 2 ../../bin/ltlcross 'ltl2tgba -s %f >%N' 'foo bar' 2>stderr -f a
check_csv()
{
# Make sure all lines in $1 have the same number of commas
sed 's/[^,]//g' < "$1" |
( read first
while read l; do
test "x$first" = "x$l" || exit 1
done)
}
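The check_csv helper above compares comma counts per line. The same consistency test can be sketched in Python (hypothetical helper, CSV-aware rather than comma-counting, so quoted commas would not trip it up):

```python
import csv
import io

def check_csv(text):
    # Same idea as the shell helper: every record must have the same
    # number of fields as the header row.
    rows = list(csv.reader(io.StringIO(text)))
    return all(len(r) == len(rows[0]) for r in rows)

print(check_csv('"a","b"\n"c","d"\n'))      # True
print(check_csv('"a","b"\n"c","d","e"\n'))  # False
```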
ltl2tgba=../../bin/ltl2tgba
run 2 ../../bin/ltlcross "$ltl2tgba -s %f >%N" 'foo bar' 2>stderr -f a
grep 'ltlcross.*no input.*in.*foo bar' stderr
# Make sure non-zero exit codes are reported...
run 1 ../../bin/ltlcross "$ltl2tgba -s %f >%N" 'false %f >%N' \
-f a --csv=out.csv 2>stderr
grep '"exit_status"' out.csv
grep '"exit_code"' out.csv
test `grep 'error:.*returned exit code 1' stderr | wc -l` -eq 2
test `grep '"exit code",1' out.csv | wc -l` -eq 2
check_csv out.csv
# ... unless --omit-missing is supplied.
run 1 ../../bin/ltlcross "$ltl2tgba -s %f >%N" 'false %f >%N' \
-f a --csv=out.csv --omit-missing 2>stderr
grep '"exit_status"' out.csv && exit 1
grep '"exit_code"' out.csv && exit 1
test `grep 'error:.*returned exit code 1' stderr | wc -l` -eq 2
test `grep '"exit code",1' out.csv | wc -l` -eq 0
check_csv out.csv
# Likewise for timeouts
run 0 ../../bin/ltlcross 'sleep 5; false %f >%N' \
--timeout 2 -f a --csv=out.csv 2>stderr
grep '"exit_status"' out.csv
grep '"exit_code"' out.csv
test `grep 'warning:.*timeout' stderr | wc -l` -eq 2
test `grep '"timeout",-1' out.csv | wc -l` -eq 2
check_csv out.csv
run 0 ../../bin/ltlcross 'sleep 5; false %f >%N' \
--timeout 2 --omit-missing -f a --csv=out.csv 2>stderr
grep '"exit_status"' out.csv && exit 1
grep '"exit_code"' out.csv && exit 1
test `grep 'warning:.*timeout' stderr | wc -l` -eq 2
test `wc -l < out.csv` -eq 1
check_csv out.csv