org: fix many errors
Most of those errors were pointed out by the language-check tool.
However, while fixing those I found a few other issues that I fixed.
In particular I updated the bibliographic reference for ltlsynt, added
some DOI links for some cited papers that had no link, and fixed the
broken introduction of ltlgrind.

* doc/org/autcross.org, doc/org/autfilt.org, doc/org/citing.org,
doc/org/compile.org, doc/org/concepts.org, doc/org/csv.org,
doc/org/dstar2tgba.org, doc/org/genaut.org, doc/org/hierarchy.org,
doc/org/install.org, doc/org/ioltl.org, doc/org/ltl2tgba.org,
doc/org/ltl2tgta.org, doc/org/ltlcross.org, doc/org/ltldo.org,
doc/org/ltlfilt.org, doc/org/ltlgrind.org, doc/org/ltlsynt.org,
doc/org/oaut.org, doc/org/randaut.org, doc/org/randltl.org,
doc/org/satmin.org, doc/org/tut01.org, doc/org/tut02.org,
doc/org/tut03.org, doc/org/tut10.org, doc/org/tut11.org,
doc/org/tut12.org, doc/org/tut20.org, doc/org/tut22.org,
doc/org/tut24.org, doc/org/tut30.org, doc/org/tut40.org,
doc/org/tut50.org, doc/org/tut51.org, doc/org/tut52.org,
doc/org/tut90.org, doc/org/upgrade2.org: Fix errors.
* bin/autfilt.cc, bin/common_aoutput.cc, bin/genaut.cc: Fix some typos
in --help text that appeared in the above org files.
parent a6f79c6211
commit 4cf7503fff
41 changed files with 393 additions and 325 deletions
@@ -230,7 +230,7 @@ static const argp_option options[] =
     { "is-alternating", OPT_IS_ALTERNATING, nullptr, 0,
       "keep only automata using universal branching", 0 },
     { "intersect", OPT_INTERSECT, "FILENAME", 0,
-      "keep automata whose languages have an non-empty intersection with"
+      "keep automata whose languages have a non-empty intersection with"
       " the automaton from FILENAME", 0 },
     { "included-in", OPT_INCLUDED_IN, "FILENAME", 0,
       "keep automata whose languages are included in that of the "

@@ -228,7 +228,7 @@ static const argp_option io_options[] =
       "(iw) inherently weak. Use uppercase letters to negate them.", 0 },
     { "%R, %[LETTERS]R", 0, nullptr,
       OPTION_DOC | OPTION_NO_USAGE,
-      "CPU time (excluding parsing), in seconds; Add LETTERS to restrict to "
+      "CPU time (excluding parsing), in seconds; add LETTERS to restrict to "
       "(u) user time, (s) system time, (p) parent process, "
       "or (c) children processes.", 0 },
     { "%N, %n", 0, nullptr, OPTION_DOC | OPTION_NO_USAGE,

@@ -297,7 +297,7 @@ static const argp_option o_options[] =
       "(iw) inherently weak. Use uppercase letters to negate them.", 0 },
     { "%R, %[LETTERS]R", 0, nullptr,
       OPTION_DOC | OPTION_NO_USAGE,
-      "CPU time (excluding parsing), in seconds; Add LETTERS to restrict to"
+      "CPU time (excluding parsing), in seconds; add LETTERS to restrict to"
       "(u) user time, (s) system time, (p) parent process, "
       "or (c) children processes.", 0 },
     { "%n", 0, nullptr, OPTION_DOC | OPTION_NO_USAGE,

@@ -61,7 +61,7 @@ static const argp_option options[] =
       "equivalent deterministic Rabin automaton of less than N! states.", 0},
     { "m-nba", gen::AUT_M_NBA, "RANGE", 0,
       "An NBA with N+1 states whose determinization needs at least "
-      "N! states", 0},
+      "N! states.", 0},
     { "cyclist-trace-nba", gen::AUT_CYCLIST_TRACE_NBA, "RANGE", 0,
       "An NBA with N+2 states that should include cyclist-proof-dba=B.", 0},
     { "cyclist-proof-dba", gen::AUT_CYCLIST_PROOF_DBA, "RANGE", 0,

@@ -16,7 +16,7 @@ The core of =autcross= is a loop that does the following steps:
    will be named =A0=, =A1=, and =A2=.
 - Ensure that all produced automata are equivalent.
 
-Statistics about the results of each tools can optionally be saved in
+Statistics about the results of each tool can optionally be saved in
 a CSV file. And in case only those statistics matters, it is also
 possible to disable the equivalence checks.
 

@@ -46,7 +46,7 @@ following character sequences:
 : %O   filename for the automaton output in HOA, never
 :      claim, LBTT, or ltl2dstar's format
 
-For instance we can use =autfilt --complement %H >%O= to indicate that
+For instance, we can use =autfilt --complement %H >%O= to indicate that
 =autfilt= reads one file (=%H=) in the HOA format, and to redirect the
 output in file so that =autcross= can find it. The output format is
 automatically detected, so a generic =%O= is used for the output file

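The =%H=/=%O= substitution described in this hunk can be sketched as plain string replacement. This is a hypothetical helper for illustration only; the real =autcross= manages its temporary files and formats itself:

```python
# Hypothetical sketch of how a tool specification such as
# "autfilt --complement %H >%O" could be instantiated with concrete
# filenames; not autcross's actual implementation.
def instantiate(template, input_file, output_file):
    """Replace the %H (input) and %O (output) placeholders."""
    return template.replace("%H", input_file).replace("%O", output_file)

cmd = instantiate("autfilt --complement %H >%O", "in.hoa", "out.hoa")
print(cmd)  # autfilt --complement in.hoa >out.hoa
```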
@@ -203,7 +203,7 @@ time) will appear with empty columns at the end of the CSV line.
 Those lines with missing data can be omitted with the =--omit-missing=
 option.
 
-However data for bogus automata are still included: as shown below
+However, data for bogus automata are still included: as shown below
 =autcross= will report inconsistencies between automata as errors, but
 it does not try to guess who is incorrect.
 

@@ -251,9 +251,9 @@ EOF
 
 * Transformation that preserve or complement languages
 
-By default =autcross= assumes that for a given input the automata
-produced by all tools should be equivalent. However it does not
-assume that those language should be equivalent to the input (it is
+By default, =autcross= assumes that for a given input the automata
+produced by all tools should be equivalent. However, it does not
+assume that those languages should be equivalent to the input (it is
 clearly not the case in our complementation test above).
 
 If the transformation being tested does preserve the language of an

@@ -277,19 +277,19 @@ If a translator exits with a non-zero status code, or fails to output
 an automaton =autcross= can read, and error will be displayed and the
 result of the tool will be discarded.
 
-Otherwise =autcross= performs equivalence checks between each pair of
+Otherwise, =autcross= performs equivalence checks between each pair of
 automata. This is done in two steps. First, all produced automata
 =A0=, =A1=, etc. are complemented: the complement automata are
 named =Comp(A0)=, =Comp(A1)= etc. Second, =autcross= ensures
 that =Ai*Comp(Aj)= is empty for all =i= and =j=.
 
 If the =--language-preserved= option is passed, the =input= automaton
-also participate to these equivalence checks.
+also participates to these equivalence checks.
 
 
-To simulate a problem, let's compare pretend we want verify that
-=autfilt --complement= preserves the input language (clearly it does
-not, since it actually complement the language of the automaton).
+To simulate a problem, let's pretend we want to verify that =autfilt
+--complement= preserves the input language (clearly it does not, since
+it actually complements the language of the automaton).
 
 #+BEGIN_SRC sh :prologue "exec 2>&1" :epilogue true
 randaut -B -n 3 a b --name="automaton %L" |

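The pairwise check this hunk describes — =Ai*Comp(Aj)= must be empty for all =i= and =j= — can be illustrated on toy finite languages, with sets of words standing in for ω-languages. This is a conceptual sketch, not Spot's API:

```python
# Toy model: each "automaton" is represented by the finite set of
# words it accepts, drawn from a fixed universe.  Equivalence of all
# languages holds iff A_i ∩ Comp(A_j) is empty for every pair (i, j).
UNIVERSE = frozenset({"w1", "w2", "w3", "w4"})

def complement(lang):
    """Complement a language with respect to the universe."""
    return UNIVERSE - lang

def all_equivalent(langs):
    # Check the product-with-complement emptiness for every ordered pair.
    return all(not (a & complement(b)) for a in langs for b in langs)

assert all_equivalent([{"w1", "w2"}, {"w1", "w2"}])
assert not all_equivalent([{"w1"}, {"w1", "w2"}])
```

Checking both ordered pairs is what makes this an equivalence test rather than a one-sided inclusion test.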
@@ -334,15 +334,15 @@ examples would not exit if the language was really preserved by the
 tool.
 
 Incoherence between the output of several tools (even with
-=--language-preserved=) are reported in a similar way.
+=--language-preserved=) are reported similarly.
 
 * Miscellaneous options
 
 ** =--stop-on-error=
 
 The =--stop-on-error= option will cause =autcross= to abort on the
-first detected error. This include failure to start some tool,
-read its output, or failure to passe the sanity checks. Timeouts are
+first detected error. This includes failure to start some tool,
+read its output, or failure to pass the sanity checks. Timeouts are
 allowed unless =--fail-on-timeout= is also given.
 
 One use for this option is when =autcross= is used in combination with

@@ -472,7 +472,7 @@ Performing sanity checks and gathering statistics...
 No problem detected.
 #+end_example
 
-However in practice you could also use the =name:= field of the input
+However, in practice you could also use the =name:= field of the input
 automaton, combined with =%M= in the tool specification, to designate
 an alternate filename to load, or some key to look up somewhere.
 

@@ -29,7 +29,7 @@ process them in batch. (The only restriction is that inside a file an
 automaton in LBTT's format may not follow an automaton in
 =ltl2dstar='s format.)
 
-By default the output uses the HOA format. This can be changed using
+By default, the output uses the HOA format. This can be changed using
 [[file:oaut.org][the common output options]] like =--spin=, =--lbtt=, =--dot=,
 =--stats=...
 

@@ -196,7 +196,7 @@ autfilt --help | sed -n '/ for output):/,/^$/p' | sed '1d;$d'
 #+end_example
 
 When a letter is available both as uppercase and lowercase, the
-uppercase version refer to the input automaton, while the lowercase
+uppercase version refers to the input automaton, while the lowercase
 refer to the output automaton. Of course this distinction makes sense
 only if =autfilt= was instructed to perform an operation on the input
 automaton.

@@ -237,7 +237,7 @@ autfilt --help | sed -n '/Filtering options.*:/,/^$/p' | sed '1d;$d'
                              inherently-weak SCCs is in RANGE. An accepting
                              SCC is inherently weak if it does not have a
                              rejecting cycle.
-      --intersect=FILENAME   keep automata whose languages have an non-empty
+      --intersect=FILENAME   keep automata whose languages have a non-empty
                              intersection with the automaton from FILENAME
       --is-alternating       keep only automata using universal branching
       --is-colored           keep colored automata (i.e., exactly one

@@ -349,7 +349,7 @@ autfilt --help | sed -n '/Simplification level:/,/^$/p' | sed '1d;$d'
 
 
 By default, =--any --low= is used, which cause all simplifications to
-be skipped. However if any goal is given, than the simplification level
+be skipped. However, if any goal is given, then the simplification level
 defaults to =--high= (unless specified otherwise). If a simplification
 level is given without specifying any goal, then the goal default to =--small=.
 

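The defaulting rule documented in this hunk can be sketched as a small resolution function. The helper name and tuple encoding are hypothetical; the real logic lives in =autfilt='s option handling:

```python
def effective_options(goal=None, level=None):
    """Resolve simplification defaults as documented: with neither
    given, behave as --any --low (no simplification); a goal without
    a level implies --high; a level without a goal implies --small."""
    if goal is None and level is None:
        return ("any", "low")
    if level is None:
        return (goal, "high")
    if goal is None:
        return ("small", level)
    return (goal, level)

assert effective_options() == ("any", "low")
assert effective_options(goal="deterministic") == ("deterministic", "high")
assert effective_options(level="medium") == ("small", "medium")
```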
@@ -370,7 +370,7 @@ depending on the constraints on the acceptance conditions:
   in the output, but it may not always succeed and may output
   non-deterministic automata. Note that if =autfilt --deterministic
   --tgba= fails to output a deterministic automaton, it does not
-  necessarily implies that a deterministic TGBA does not exist: it
+  necessarily imply that a deterministic TGBA does not exist: it
   just implies that =autfilt= could not find it.
 
 

@@ -442,19 +442,27 @@ autfilt --help | sed -n '/Transformations:/,/^$/p' | sed '1d;$d'
                              generalized Rabin definition from the HOA format;
                              the "share-inf" option allows clauses to share Inf
                              sets, therefore reducing the number of sets
-      --generalized-streett[=unique-fin|share-fin], --gsa[=unique-fin|share-fin] rewrite the
-                             acceptance condition as generalized Streett; the
-                             "share-fin" option allows clauses to share Fin
-                             sets, therefore reducing the number of sets; the
-                             default "unique-fin" does not
+      --generalized-streett[=unique-fin|share-fin], --gsa[=unique-fin|share-fin]
+                             rewrite the acceptance condition as generalized
+                             Streett; the "share-fin" option allows clauses to
+                             share Fin sets, therefore reducing the number of
+                             sets; the default "unique-fin" does not
       --instut[=1|2]         allow more stuttering (two possible algorithms)
       --keep-states=NUM[,NUM...] only keep specified states. The first state
                              will be the new initial state. Implies
                              --remove-unreachable-states.
       --kill-states=NUM[,NUM...] mark the specified states as dead (no
                              successor), and remove them. Implies
                              --remove-dead-states.
       --mask-acc=NUM[,NUM...] remove all transitions in specified acceptance
                              sets
       --merge-transitions    merge transitions with same destination and
                              acceptance
+      --partial-degeneralize[=NUM1,NUM2,...]
+                             Degeneralize automata according to sets
+                             NUM1,NUM2,... If no sets are given, partial
+                             degeneralization is performed for all conjunctions
+                             of Inf and disjunctions of Fin.
+      --product=FILENAME, --product-and=FILENAME
+                             build the product with the automaton in FILENAME
+                             to intersect languages

@@ -467,8 +475,8 @@ autfilt --help | sed -n '/Transformations:/,/^$/p' | sed '1d;$d'
                              quantification, or by assigning them 0 or 1
       --remove-dead-states   remove states that are unreachable, or that cannot
                              belong to an infinite path
-      --remove-fin           rewrite the automaton without using Fin acceptance
-
+      --remove-fin           rewrite the automaton without using Fin
+                             acceptance
       --remove-unreachable-states
                              remove states that are unreachable from the
                              initial state

@@ -505,6 +513,10 @@ autfilt --help | sed -n '/Transformations:/,/^$/p' | sed '1d;$d'
                              sum languages
       --sum-and=FILENAME     build the sum with the automaton in FILENAME to
                              intersect languages
+      --to-finite[=alive]    Convert an automaton with "alive" and "!alive"
+                             propositions into a Büchi automaton interpretable
+                             as a finite automaton. States with a outgoing
+                             "!alive" edge are marked as accepting.
 #+end_example
 
 * Decorations

@@ -71,12 +71,6 @@ be more specific about a particular aspect of Spot.
   Presents the automaton format [[file:hoa.org][supported by Spot]] and [[http://adl.github.io/hoaf/support.html][several other
   tools]].
 
-- *Reactive Synthesis from LTL Specification with Spot*,
-  /Thibaud Michaud/, /Maximilien Colange/.
-  In Proc. of SYNT@CAV'18. ([[https://www.lrde.epita.fr/~max/bibtexbrowser.php?key=michaud.18.synt&bib=perso.bib][bib]] | [[https://www.lrde.epita.fr/dload/papers/michaud.18.synt.pdf][pdf]])
-
-  Presents the tool [[file:ltlsynt.org][=ltlsynt=]].
-
 - *Generic Emptiness Check for Fun and Profit*,
   /Christel Baier/, /František Blahoudek/, /Alexandre Duret-Lutz/,
   /Joachim Klein/, /David Müller/, and /Jan Strejček/.

@@ -91,6 +85,15 @@ be more specific about a particular aspect of Spot.
   In. Proc. of TACAS'22, LNCS 13244, pp. 99--117, Apr 2022. ([[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#casares.22.tacas][bib]] | [[https://www.lrde.epita.fr/~adl/dl/adl/casares.22.tacas.pdf][pdf]] |
   [[https://www.lrde.epita.fr/~adl/dl/adl/casares.22.tacas.slides.pdf][slides1]] | [[https://www.lrde.epita.fr/~adl/dl/adl/casares.22.tacas.slides2.pdf][slides2]])
 
   Shows applications of the ACD implemented in Spot.
 
+- *Dissecting ltlsynt*,
+  /Florian Renkin/, /Philipp Schlehuber-Caissier/, /Alexandre Duret-Lutz/,
+  and Adrien Pommellet.
+  In Formal Methods in System Design, 2023. ([[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#renkin.23.scp][bib]] | [[https://www.lrde.epita.fr/~adl/dl/adl/renkin.23.scp.pdf][pdf]])
+
+  Discuss the implementation of [[file:ltlsynt.org][=ltlsynt=]].
+
 * Obsolete references

@@ -111,6 +114,14 @@ be more specific about a particular aspect of Spot.
   For a while, this used to be the only paper presenting Spot as a
   model-checking library.
 
+- *Reactive Synthesis from LTL Specification with Spot*,
+  /Thibaud Michaud/, /Maximilien Colange/.
+  In Proc. of SYNT@CAV'18. ([[https://www.lrde.epita.fr/~max/bibtexbrowser.php?key=michaud.18.synt&bib=perso.bib][bib]] | [[https://www.lrde.epita.fr/dload/papers/michaud.18.synt.pdf][pdf]])
+
+  Was the first presentation of the tool [[file:ltlsynt.org][=ltlsynt=]]. The paper
+  *Dissecting ltlsynt*, mentioned earlier, is more up-to-date.
+
+
 # LocalWords: utf html Alexandre Duret Lutz Lewkowicz Amaury Xu pdf
 # LocalWords: Fauchille Thibaud Michaud Etienne Proc ATVA LNCS TGBA
 # LocalWords: ltlfilt randltl ltlcross tgba Eddine Fabrice Kordon

@@ -49,7 +49,7 @@ might need to tell the compiler several things:
    search path with =-I location=.
 3. The linker should be able to find the Spot library (on Linux it would
    be called =libspot.so=, unless you forced a static compilation, in which
-   case it would be =libspot.a=). This might require appending another
+   case it would be =libspot.a=). This might require appending another
    directory to the library search path with =-L location= in addition to
    passing the =-lspot= option.
 

@@ -120,7 +120,7 @@ to tell the dynamic loader about this location.
 
 * Case 3: You compiled Spot yourself, and installed it in a custom directory
 
-For instance you might have used
+For instance, you might have used
 #+BEGIN_SRC sh
 ./configure --prefix ~/usr
 make

@@ -147,7 +147,7 @@ it every time you want to run a binary that depends on Spot.
 * Case 4: You compiled Spot yourself, but did not install it
 
 We do not recommend this, but it is possible to compile programs
-that uses an uninstalled version of Spot.
+that use an uninstalled version of Spot.
 
 So you would just compile Spot in some directory (let's call it
 =/dir/spot-X.Y/=) with

@@ -164,7 +164,7 @@ There are at least two traps with this scenario:
    =/usr/local/include/spot/= using the same layout, but it also
    includes some private, internal headers. These headers are
    normally not installed, so in the other scenarios you cannot use
-   them. In this setup however, you might use them by mistake. Also
+   them. In this setup however, you might use them by mistake. Also,
    that directory contains =*.cc= files implementing all the features
    of the library. Clearly those file should be considered private as
    well.

@@ -192,7 +192,7 @@ Using =libtool link g++= instead of =g++= will cause =libtool= to
 edit the =g++= command line, and replace
 =/dir/spot-X.Y/spot/libspot.la= by whatever options are
 needed to link against the library represented by this /Libtool
-archive/. Furthermore the resulting =hello= executable will not be a
+archive/. Furthermore, the resulting =hello= executable will not be a
 binary, but a shell script that defines some necessary environment
 variables (like =LD_LIBRARY_PATH= to make sure the Spot library is
 found) before running the actual binary.

@@ -215,7 +215,7 @@ will need to add =-pthread= to the compiler flags.
 
 In the fourth case where =libtool= is used to link against
 =libspot.la= linking against =libbddx.la= should not be necessary because
-Libtool already handles such dependencies. However the version of =libtool=
+Libtool already handles such dependencies. However, the version of =libtool=
 distributed with Debian is patched to ignore those dependencies, so in this
 case you have to list all dependencies.
 

@@ -237,8 +237,8 @@ will turn on assertions, and debugging options, while
 #+END_SRC
 will disable assertions and enable more optimizations.
 
-If you are writing programs against Spot, we recommend to compile Spot
-with =--enable-devel= while your are developing your programs (the
+If you are writing programs against Spot, we recommend compiling Spot
+with =--enable-devel= while you are developing your programs (the
 assertions in Spot can be useful to diagnose problems in your program,
 or in Spot), and then use =--disable-devel= once you are confident and
 desire speed.

@@ -93,7 +93,7 @@ to write that word using [[https://en.wikipedia.org/wiki/Canonical_normal_form#M
 An ω-automaton is used to represent sets of ω-word.
 
 Those look like the classical [[https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton][Nondeterministic Finite Automata]] in the
-sense that they also have states and transitions. However ω-automata
+sense that they also have states and transitions. However, ω-automata
 recognize [[#word][ω-words]] instead of finite words. In this context, the
 notion of /final state/ makes no sense, and is replaced by the notion
 of [[#acceptance-condition][acceptance condition]]: a run of the automaton (i.e., an infinite

@@ -108,7 +108,7 @@ are compatible with the minterms used as letters in the word.
 
 The /language/ of an ω-automaton is the set of ω-words it accepts.
 
-There are many kinds of ω-Automata and they mostly differ by their
+There are many kinds of ω-Automata, and they mostly differ by their
 [[#acceptance-condition][acceptance condition]]. The different types of acceptance condition,
 and whether the automata are deterministic or not can affect their
 expressive power.

@@ -144,7 +144,6 @@ $txt
 The above automaton would accept the [[#word][ω-word we used previously as an
 example]].
 
-
 As a more concrete example, here is a (complete) Büchi automaton for
 the [[#ltl][LTL formula]] =G(door_open -> light_on)= that specifies that
 =light_on= should be true whenever =door_open= is true.

@@ -260,15 +259,15 @@ and as many transitions.
 
 Spot has some function to merge those "parallel transitions" into
 larger edges. Limiting the number of edges helps most of the
-algorithms that have to explore automata, since they have less
+algorithms that have to explore automata, since they have fewer
 successors to consider.
 
 The distinction between *edge* and *transition* is something we try
-maintain in the various interfaces of Spot. For instance the
+to maintain in the various interfaces of Spot. For instance the
 [[file:oaut.org::#stats][=--stats= option]] has =%e= or =%t= to count either edges or
 transitions. The method used to add new edge into an automaton is
 called =new_edge(...)=, not =new_transition(...)=, because it takes a
-[[#bdd][BDD]] (representing a Boolean formula) as label. However that naming
+[[#bdd][BDD]] (representing a Boolean formula) as label. However, that naming
 convention is recent in the history of Spot. Spot versions up to
 1.2.6 used to call everything /transition/ (and what we now call
 /transition/ was sometime called /sub-transition/), and traces of this

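The edge-merging idea this hunk mentions can be sketched with letters represented as sets instead of BDDs. This is an illustration only, and it ignores acceptance marks, which Spot's actual merging also takes into account:

```python
from collections import defaultdict

def merge_parallel_transitions(transitions):
    """Merge transitions that share source and destination into one
    edge whose label is the union (logical OR) of their letters.
    transitions: iterable of (src, letters, dst), letters a set."""
    merged = defaultdict(set)
    for src, letters, dst in transitions:
        merged[(src, dst)] |= set(letters)
    return {(src, frozenset(letters), dst)
            for (src, dst), letters in merged.items()}

# Two parallel transitions from state 0 to state 1 become one edge.
edges = merge_parallel_transitions([(0, {"a"}, 1), (0, {"b"}, 1),
                                    (1, {"a"}, 0)])
assert edges == {(0, frozenset({"a", "b"}), 1), (1, frozenset({"a"}), 0)}
```

After merging, an exploration algorithm visiting state 0 considers one successor edge instead of two, which is the efficiency argument made in the text.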
@@ -396,7 +395,7 @@ $txt
 #+RESULTS:
 [[file:concept-tgba1.svg]]
 
-This automaton accept all ω-words that infinitely often match the
+This automaton accepts all ω-words that infinitely often match the
 pattern $a^+;b$ (that is: a positive number of letters where $a$ is
 true are followed by one letter where $b$ is true).
 

@@ -477,8 +476,8 @@ following formula: =(Fin(0)&Inf(1)) | (Fin(2)&Inf(3)) |
 
 The following table gives an overview of how some classical acceptance
 condition are encoded. The first column gives a name that is more
-human readable (those names are defined in the [[#hoa][HOA]] format and are also
-recognized by Spot). The second column give the encoding as a
+human-readable (those names are defined in the [[#hoa][HOA]] format and are also
+recognized by Spot). The second column gives the encoding as a
 formula. Everything here is case-sensitive.
 
 #+BEGIN_SRC python :results verbatim raw :exports results

@@ -880,9 +879,9 @@ $txt
 
 
 Since this file format is the only one able to represent the range of
-ω-automata supported by Spot, and it its default output format.
+ω-automata supported by Spot, and it is its default output format.
 
-However note that Spot does not support all automata that can be
+However, note that Spot does not support all automata that can be
 expressed using the HOA format. The present support for the HOA
 format in Spot, is discussed on [[file:hoa.org][a separate page]], with a section
 dedicated to the [[file:hoa.org::#restrictions][restrictions]].

@@ -961,7 +960,7 @@ For example the formula ={(1;1)[*]}[]->a= can be interpreted as follows:
 - the part =...[]->a= requests that =a= should be true at the end of each
   matched prefix.
 
-Therefore this formula ensures that =a= is true at every even instant
+Therefore, this formula ensures that =a= is true at every even instant
 (if we consider the first instant to be odd). This is the canonical
 example of formula that can be expressed in PSL but not in LTL.
 

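The even-instant reading of ={(1;1)[*]}[]->a= discussed in this hunk can be checked on a finite prefix of a word. This is only an illustration of the informal reading; real PSL semantics are over infinite words:

```python
def a_holds_at_even_instants(word):
    """word: list of sets of true propositions, one set per instant.
    With 1-based instants (the first instant is odd), 'a' must hold at
    instants 2, 4, 6, ... — the ends of the prefixes matched by
    (1;1)[*]."""
    return all("a" in step
               for i, step in enumerate(word, start=1) if i % 2 == 0)

# 'a' holds at instants 2 and 4, so the finite prefix is fine.
assert a_holds_at_even_instants([set(), {"a"}, {"b"}, {"a", "b"}])
# 'a' missing at instant 2: violation.
assert not a_holds_at_even_instants([{"a"}, set()])
```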
@@ -1018,7 +1017,7 @@ layers.
   generate families of automata, useful for benchmarking and testing
 - all the supplied [[file:tools.org][command-line tools]] distributed with Spot are
   built upon the =libspot= or =libspotgen= libraries
-- =libspotltsmin= is a library that helps interfacing Spot with
+- =libspotltsmin= is a library that helps to interface Spot with
   dynamic libraries that [[http://fmt.cs.utwente.nl/tools/ltsmin/][LTSmin]] uses to represent state-spaces. It
   currently supports libraries generated from Promela models using
   SpinS or a patched version of DiVinE, but you have to install

@@ -1067,7 +1066,7 @@ automaton, and that can be queried or set by algorithms:
 | =complete=           | for any letter ℓ, each state has is at least one outgoing transition compatible with ℓ |
 | =universal=          | there is at most one run *recognizing* each word, but not necessarily accepting it |
 | =semi_deterministic= | any nondeterminism occurs before entering an accepting SCC |
-| =unambiguous=        | there is at most one run *accepting* each word (but it might be recognized several time) |
+| =unambiguous=        | there is at most one run *accepting* each word (but it might be recognized several times) |
 | =stutter_invariant=  | the property recognized by the automaton is [[https://www.lrde.epita.fr/~adl/dl/adl/michaud.15.spin.pdf][stutter-invariant]] |
 
 For each flag =flagname=, the =twa= class has a method

@@ -1097,7 +1096,7 @@ existential.
 
 These automata properties are encoded into the [[file:hoa.org::#property-bits][HOA format]], so they can
 be preserved when building a processing pipeline using the shell.
-However the HOA format has support for more properties that do not
+However, the HOA format has support for more properties that do not
 correspond to any =twa= flag.
 
 * Named properties for automata

@@ -1125,7 +1124,7 @@ Here is a list of named properties currently used inside Spot:
 | ~degen-levels~      | ~std::vector<unsigned>~        | level associated to each state by the degeneralization algorithm |
 | ~highlight-edges~   | ~std::map<unsigned, unsigned>~ | map of (edge number, color number) for highlighting the output |
 | ~highlight-states~  | ~std::map<unsigned, unsigned>~ | map of (state number, color number) for highlighting the output |
-| ~incomplete-states~ | ~std::set<unsigned>~           | set of states numbers that should be displayed as incomplete (used internally by ~print_dot()~ when truncating large automata) |
+| ~incomplete-states~ | ~std::set<unsigned>~           | set of states numbers that should be displayed as incomplete (used internally by ~print_dot()~ when truncating large automata) |
 | ~original-classes~  | ~std::vector<unsigned>~        | class number associated to each state of a construction (used by some algorithms like =tgba_deternize()=) |
 | ~original-clauses~  | ~std::vector<unsigned>~        | original DNF clause associated to each state in automata created by =dnf_to_streett()= |
 | ~original-states~   | ~std::vector<unsigned>~        | original state number before transformation (used by some algorithms like =degeneralize()=) |

@@ -6,7 +6,7 @@
 #+PROPERTY: header-args:sh :results verbatim :exports both
 
 This page discusses features available in Spot's command-line
-tools to produce an consume CSV files.
+tools to produce and consume CSV files.
 
 * Producing CSV files
 

@@ -176,7 +176,7 @@ Note that if the =--format= option is not specified, the default
 format is one of: =%f=, =%<,%f=, =%f,%>=, or =%<,%f,%>= depending on
 whether the input CSV had column before and after the selected one.
 Furthermore, the formula field is automatically double-quoted if the
-formula actually use double quotes, and the input CSV file had more
+formula actually uses double quotes, and the input CSV file had more
 than one column.
 
 Typical uses of =ltlfilt= on CSV file include:

@@ -251,7 +251,7 @@ cat csv-aut.csv
 
 Note that when producing CSV files, it is important to surround =%h=
 with double quotes to indicate that double quotes from the HOA format
-(output by =%h=) should be escaped. Otherwise the result would not be
+(output by =%h=) should be escaped. Otherwise, the result would not be
 a valid CSV file.
 
 [[file:autfilt.org][=autfilt=]] can process a column of such a CSV file using the same

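The quoting rule this hunk touches can be demonstrated with Python's standard =csv= module, independently of Spot. The HOA header below is a shortened, hypothetical sample:

```python
import csv
import io

# HOA output contains double quotes (e.g. around atomic proposition
# names), so a CSV field holding it must be quoted and the embedded
# quotes doubled, or the line is not valid CSV.
hoa_field = 'HOA: v1 AP: 1 "a" --BODY-- --END--'  # shortened sample

buf = io.StringIO()
csv.writer(buf).writerow(["formula", hoa_field])
line = buf.getvalue().strip()

# The csv module quotes the field and doubles the embedded quotes.
assert '"HOA: v1 AP: 1 ""a"" --BODY-- --END--"' in line
```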
@@ -7,19 +7,18 @@
 
 This tool converts automata into transition-based generalized Büchi
 automata, a.k.a., TGBA. It can also produce Büchi automata on request
-(=-B=). It's usage is almost similar to [[file:ltl2tgba.org][=ltl2tgba=]] except that
+(=-B=). Its usage is almost similar to [[file:ltl2tgba.org][=ltl2tgba=]] except that
 instead of supplying a formula to translate, you should specify a
 filename containing the automaton to convert.
 
 In earlier version (before Spot 1.99.4) =dstar2tgba= was only able to
-read automata written in [[http://www.ltl2dstar.de/docs/ltl2dstar.html][the format output by =ltl2dstar=]]. However
-nowadays it can read automata in any of the supported formats ([[file:hoa.org][HOA]],
-LBTT's format, ltl2dstar's format, and never claims). Also
-=dstar2tgba= used to be the only tool being able to read ltl2dstar's
-format, but today this format can also be read by any of the tool that
-read automata. So in practice, running =dstar2tgba some files...=
-produces the same result as running =autfilt --tgba --high --small
-some files...=.
+read automata written in [[http://www.ltl2dstar.de/docs/ltl2dstar.html][the format output by =ltl2dstar=]]. Nowadays,
+it can read automata in any of the supported formats ([[file:hoa.org][HOA]], LBTT's
+format, ltl2dstar's format, and never claims). Also, =dstar2tgba= used
+to be the only tool being able to read ltl2dstar's format, but today
+this format can also be read by any of the tool that read automata.
+So in practice, running =dstar2tgba some files...= produces the same
+result as running =autfilt --tgba --high --small some files...=.
 
 
 * Two quick examples

@@ -33,7 +32,7 @@ The following command instructs =ltl2dstar= to:
 1. run =ltl2tgba -Ds= to build a Büchi automaton for =(a U b) & GFb=, and then
 2. convert that Büchi automaton into a deterministic Rabin automaton
    (DRA) stored in =fagfb=.
-Additionally we use =ltlfilt= to convert our formula to the
+Additionally, we use =ltlfilt= to convert our formula to the
 prefix format used by =ltl2dstar=.
 
 #+BEGIN_SRC sh :results silent

@@ -247,23 +246,33 @@ dstar2tgba --help | sed -n '/Output format:/,/^$/p' | sed '1d;$d'
 --lbtt or --spin)
 --check[=PROP] test for the additional property PROP and output
 the result in the HOA format (implies -H). PROP
-may be any prefix of 'all' (default),
-'unambiguous', 'stutter-invariant', or 'strength'.
-
--d, --dot[=1|a|b|B|c|e|f(FONT)|h|n|N|o|r|R|s|t|v|+INT]
+may be some prefix of 'all' (default),
+'unambiguous', 'stutter-invariant',
+'stutter-sensitive-example', 'semi-determinism',
+or 'strength'.
+-d, --dot[=1|a|A|b|B|c|C(COLOR)|e|E|f(FONT)|h|i(ID)|k|K|n|N|o|r|R|s|t|u|v|y|+INT|<INT|#]
 GraphViz's format. Add letters for (1) force
-numbered states, (a) acceptance display, (b)
+numbered states, (a) show acceptance condition
+(default), (A) hide acceptance condition, (b)
 acceptance sets as bullets, (B) bullets except for
 Büchi/co-Büchi automata, (c) force circular
-nodes, (e) force elliptic nodes, (f(FONT)) use
-FONT, (h) horizontal layout, (v) vertical layout,
-(n) with name, (N) without name, (o) ordered
-transitions, (r) rainbow colors for acceptance
-sets, (R) color acceptance sets by Inf/Fin, (s)
-with SCCs, (t) force transition-based acceptance,
-(+INT) add INT to all set numbers
--H, --hoaf[=i|l|m|s|t|v] Output the automaton in HOA format (default). Add
-letters to select (i) use implicit labels for
+nodes, (C) color nodes with COLOR, (d) show
+origins when known, (e) force elliptic nodes, (E)
+force rEctangular nodes, (f(FONT)) use FONT, (g)
+hide edge labels, (h) horizontal layout, (i) or
+(i(GRAPHID)) add IDs, (k) use state labels when
+possible, (K) use transition labels (default), (n)
+show name, (N) hide name, (o) ordered transitions,
+(r) rainbow colors for acceptance sets, (R) color
+acceptance sets by Inf/Fin, (s) with SCCs, (t)
+force transition-based acceptance, (u) hide true
+states, (v) vertical layout, (y) split universal
+edges by color, (+INT) add INT to all set numbers,
+(<INT) display at most INT states, (#) show
+internal edge numbers
+-H, --hoaf[=1.1|i|k|l|m|s|t|v] Output the automaton in HOA format
+(default). Add letters to select (1.1) version
+1.1 of the format, (i) use implicit labels for
 complete deterministic automata, (s) prefer
 state-based acceptance when possible [default],
 (t) force transition-based acceptance, (m) mix
@@ -280,7 +289,8 @@ dstar2tgba --help | sed -n '/Output format:/,/^$/p' | sed '1d;$d'
 -s, --spin[=6|c] Spin neverclaim (implies --ba). Add letters to
 select (6) Spin's 6.2.4 style, (c) comments on
 states
---stats=FORMAT output statistics about the automaton
+--stats=FORMAT, --format=FORMAT
+output statistics about the automaton
 #+end_example
 
 The =--stats= options can output statistics about the input and the
@@ -292,9 +302,9 @@ For instance here is a complex command that will
 1. generate an infinite stream of random LTL formulas with [[file:randltl.org][=randltl=]],
 2. use [[file:ltlfilt.org][=ltlfilt=]] to rewrite the W and M operators away (=--remove-wm=),
    simplify the formulas (=-r=), remove duplicates (=u=) as well as
-   formulas that have a size less then 3 (=--size-min=3=), and
+   formulas that have a size less than 3 (=--size-min=3=), and
    keep only the 10 first formulas (=-n 10=)
-3. loop to process each of these formula:
+3. loop to process each of these formulas:
    - print it
    - then convert the formula into =ltl2dstar='s input format, process
      it with =ltl2dstar= (using =ltl2tgba= as the actual LTL->BA
@@ -346,7 +356,7 @@ An important point you should be aware of when comparing these numbers
 of states is that the deterministic automata produced by =ltl2dstar=
 are complete, while the automata produced by =dstar2tgba=
 (deterministic or not) are not complete by default. This can explain
-a difference of one state (the so called "sink" state).
+a difference of one state (the so-called "sink" state).
 
 You can instruct =dstar2tgba= to output a complete automaton using the
 =--complete= option (or =-C= for short).
@@ -380,7 +390,7 @@ create one new Fin-accepting set for each conjunct of the CNF. The
 combination of these two algorithms is implemented by the
 =to_generalized_buchi()= function in Spot.
 
-Finally a TGBA can easily be converted into a BA with classical
+Finally, a TGBA can easily be converted into a BA with classical
 degeneralization algorithms (our version of that includes several
 SCC-based optimizations described in our [[https://www.lrde.epita.fr/~adl/dl/adl/babiak.13.spin.pdf][SPIN'13 paper]]).
 
@@ -396,7 +406,7 @@ to Rabin by adding some extra Fin or Inf terms to the acceptance
 conditions and ensuring that those terms are always true.
 
 The conversion implemented is a variation of Krishnan et al.'s
-"Deterministic ω-Automata vis-a-vis Deterministic Büchi Automata"
+[[https://doi.org/10.1007/3-540-58325-4_202]["Deterministic ω-Automata vis-a-vis Deterministic Büchi Automata"]]
 (ISAAC'94) paper. They explain how to convert a deterministic Rabin
 automaton (DRA) into a deterministic Büchi automaton (DBA) when such
 an automaton exist. The surprising result is that when a DRA is
@@ -408,8 +418,8 @@ SCC-wise: any DRA will be converted into a BA, and the determinism
 will be conserved only for strongly connected components where
 determinism can be conserved. (If some SCC is not DBA-realizable, it
 will be cloned into several deterministic SCC, but the jumps between
-these SCCs will be nondeterministic.) Our implementation also work on
-automata with transition-based acceptance.
+these SCCs will be nondeterministic.) Our implementation also works
+on automata with transition-based acceptance.
 
 This specialized conversion is built in the =remove_fin()= procedure
 described above.
@@ -16,6 +16,10 @@ genaut --help | sed -n '/Pattern selection:/,/^$/p' | sed '1d;$d'
 
 #+RESULTS:
 #+begin_example
+--cyclist-proof-dba=RANGE A DBA with N+2 states that should be included
+in cyclist-trace-nba=B.
+--cyclist-trace-nba=RANGE An NBA with N+2 states that should include
+cyclist-proof-dba=B.
 --ks-nca=RANGE A co-Büchi automaton with 2N+1 states for which
 any equivalent deterministic co-Büchi automaton
 has at least 2^N/(2N+1) states.
@@ -26,7 +30,7 @@ genaut --help | sed -n '/Pattern selection:/,/^$/p' | sed '1d;$d'
 complementary Streett automaton needs at least N!
 states.
 --m-nba=RANGE An NBA with N+1 states whose determinization needs
-at least N! states
+at least N! states.
 #+end_example
 
 
@@ -73,7 +73,7 @@ makes it possible to express constraints on finite prefixes.
 and /guarantee/ properties, while /reactivity/ properties are Boolean
 combinations of /recurrence/ and /persistence/ properties. The
 negation of a /safety/ property is a /guarantee/ property (and
-vice-versa), and the same duality hold for /persistence/ and
+vice versa), and the same duality hold for /persistence/ and
 /recurrence/ properties.
 
 The red letters in each of these seven classes are keys used in
@@ -176,13 +176,13 @@ G({[*]}[]-> Fa)
 #+end_example
 
 Note that the order of the =ltlfilt= filters could be changed provided
-the =-n10= stays at the end. For instance we could first keep all
+the =-n10= stays at the end. For instance, we could first keep all
 recurrence before removing obligations and then removing LTL formulas.
 The order given above actually starts with the easier checks first and
-keep the most complex tests at the end of the pipeline so they are
+keep the most complex tests at the end of the pipeline, so they are
 applied to fewer formulas. Testing whether a formula is an LTL
 formula is very cheap, testing if a formula is an obligation is harder
-(it may involves a translation to automata and a powerset
+(it may involve a translation to automata and a powerset
 construction), and testing whether a formula is a recurrence is the
 most costly procedure (it involves a translation as well, plus
 conversion to deterministic Rabin automata, and an attempt to convert
@@ -225,10 +225,10 @@ is not in class syntactic-C (we just know that some equivalent formula
 is in class syntactic-C).
 
 =ltlfilt= has options like =--syntactic-guarantee=,
-=--syntactic-persistence=, etc. to match formulas from this classes.
+=--syntactic-persistence=, etc. to match formulas from these classes.
 
 Here is how to generate 10 random LTL formulas that describe safety
-properties but that are not in the syntactic-safety class:
+properties, but that are not in the syntactic-safety class:
 
 #+BEGIN_SRC sh
 randltl -n-1 a b |
@@ -262,7 +262,7 @@ ltlfilt --simplify -f 'b M Gb'
 #+RESULTS:
 : Gb
 
-However in the general case Spot is not able to provide the equivalent
+However, in the general case Spot is not able to provide the equivalent
 formula from the appropriate syntactic class.
 
 * What to do with each class?
@@ -318,9 +318,9 @@ $txt
 Note that the default translation used by =ltl2tgba= will turn any
 syntactic persistence formulas (this includes obligations formulas)
 into a weak automaton. In a weak automaton, the acceptance condition
-could be defined in term of SCCs, i.e., the cycles of some SCCs are
-either all accepting, or all rejecting. As a consequence, it there is
-no incentive to use transition-based acceptance; instead, state-based
+could be defined in terms of SCCs, i.e., the cycles of some SCCs are
+either all accepting, or all rejecting. As a consequence, if there is
+no incentive to use transition-based acceptance, state-based
 acceptance is output by default.
 
 With =ltl2tgba -D= we get a (minimal) deterministic weak Büchi
@@ -437,7 +437,7 @@ already be =t= (meaning that all runs are accepting). However since
 the translator does not do anything particular about safety formulas,
 it is possible to find some pathological formulas for which the
 translator outputs a non-deterministic Büchi automaton where not all
-run are accepting.
+runs are accepting.
 
 Here is an example:
 
@@ -622,7 +622,7 @@ requirement.
 
 ** Persistence
 
-Since /persistence/ properties are outside of the /recurrence/ class,
+Since /persistence/ properties are outside the /recurrence/ class,
 they cannot be represented by deterministic Büchi automata. The typical
 persistence formula is =FGa=, and using =-D= on this is hopeless.
 
@@ -665,7 +665,7 @@ return an automaton whose acceptance is one of =Fin(0)=, =t=, or =f=.
 The translator is aware of that, so when it detects that the input
 formula is a syntactic-persistence, it simplifies its translation
 slightly to ensure that the output will use at most one acceptance
-set. (It is possible to define a persistence properties using an LTL
+set. (It is possible to define a persistence property using an LTL
 formula that is not a syntactic-persistence, in that case this
 optimization is simply not applied.)
 
@@ -133,14 +133,14 @@ sudo dnf install spot python3-spot spot-devel spot-doc
 or a subset of those packages. The package =spot= contains the
 command-line tools, =python3-spot= contains the Python bindings,
 =spot-devel= contains the C++ header files, and =spot-doc= the
-documentation that you can also find online. Those packages depends
+documentation that you can also find online. Those packages depend
 on =libspot= that contains the shared libraries.
 
 * Installing as a Conda package
 
 Spot is available as a [[https://anaconda.org/conda-forge/spot][Conda-forge package]] for Linux and OS X.
 
-A typical installation would go as follow:
+A typical installation would go as follows:
 
 #+BEGIN_SRC sh
 conda create --name myenv python=3.8 # adjust as desired
@@ -150,7 +150,7 @@ on =libspot= that contains the shared libraries.
 
 Note that this package is built automatically by the conda-forge
 infrastructure, but this requires some manual trigger after each
-release. Therefore there might be a delay between the moment a
+release. Therefore, there might be a delay between the moment a
 release of Spot is announced, and the availability of the Conda
 package.
 
@@ -165,7 +165,7 @@ The following operators are supported:
 As an extension to LBT's syntax, alphanumeric atomic propositions that
 follow the "=p= + number" rule will be accepted if they do not
 conflict with one of the operators (e.g., =i=, the /implies/ operator,
-cannot be used as an atomic proposition). Also any atomic proposition
+cannot be used as an atomic proposition). Also, any atomic proposition
 may be double-quoted. These extensions are compatible with the syntax
 used by [[http://www.ltl2dstar.de][ltl2dstar]].
 
@@ -224,7 +224,7 @@ that case discussing associativity and parentheses makes no sense.
 The =--csv= causes the formulas to be double-quoted (with inner
 double-quotes doubled, as per RFC 4180), regardless of the selected
 format. This is needed if the formula should appear in a CSV file,
-and you want to be robust to formulas that contains commas or
+and you want to be robust to formulas that contain commas or
 double-quotes. We have [[file:csv.org][examples of reading or writing CSV files on a
 separate page]].
 
@@ -244,7 +244,7 @@ Other =%=-sequences are supported by these tools, and documented in
 the output of =--help=. For instance =%s= can be used to compute the
 size of a formula.
 
-By default everything is output to standard output, so that you can
+By default, everything is output to standard output, so that you can
 redirect the output to a file, and pipe it to another tool. The
 =--output= (or =-o=) allows you to construct a filename using some of
 the above =%=-sequences.
@@ -267,7 +267,7 @@ wc -l example-*.ltl
 Option =-0= is useful if the list of formulas is passed to =xargs=.
 =xargs= normally splits its input on white space (which are frequent
 in LTL formulas), but you can use =xargs -0= to split the input on
-null characters. So for instance the following two invocations have
+null characters. So for instance the following two invocations have
 nearly the same output:
 
 #+BEGIN_SRC sh
@@ -32,7 +32,7 @@ less frequent.
 Except for =-B= which forces state-based acceptance, these options
 build transition-based automata by default. Internally, Spot only
 supports transition-based automata. However, while transition-based
-automata can be smaller then their state-based counterpart, there are
+automata can be smaller than their state-based counterpart, there are
 many cases where transition-based acceptance does not bring any benefits.
 
 In case where it is detected that the transition-based automaton looks
@@ -105,7 +105,7 @@ $txt
 
 Characters like ⓿, ❶, etc. denotes the acceptance sets a transition
 belongs to. In this case, there is only one acceptance set, called
-=0=, containing a single transition. An acceptance set can contains
+=0=, containing a single transition. An acceptance set can contain
 multiple transitions, and a transition may also belong to multiple
 acceptance sets. An infinite path through this automaton is accepting
 iff it visits each acceptance set infinitely often. Therefore, in the
@@ -238,7 +238,7 @@ ltl2tgba --help | sed -n '/Output format:/,/^$/p' | sed '1d;$d'
 'unambiguous', 'stutter-invariant',
 'stutter-sensitive-example', 'semi-determinism',
 or 'strength'.
--d, --dot[=1|a|A|b|B|c|C(COLOR)|e|E|f(FONT)|h|k|K|n|N|o|r|R|s|t|u|v|y|+INT|<INT|#]
+-d, --dot[=1|a|A|b|B|c|C(COLOR)|e|E|f(FONT)|h|i(ID)|k|K|n|N|o|r|R|s|t|u|v|y|+INT|<INT|#]
 GraphViz's format. Add letters for (1) force
 numbered states, (a) show acceptance condition
 (default), (A) hide acceptance condition, (b)
@@ -247,16 +247,17 @@ ltl2tgba --help | sed -n '/Output format:/,/^$/p' | sed '1d;$d'
 nodes, (C) color nodes with COLOR, (d) show
 origins when known, (e) force elliptic nodes, (E)
 force rEctangular nodes, (f(FONT)) use FONT, (g)
-hide edge labels, (h) horizontal layout, (k) use
-state labels when possible, (K) use transition
-labels (default), (n) show name, (N) hide name,
-(o) ordered transitions, (r) rainbow colors for
-acceptance sets, (R) color acceptance sets by
-Inf/Fin, (s) with SCCs, (t) force transition-based
-acceptance, (u) hide true states, (v) vertical
-layout, (y) split universal edges by color, (+INT)
-add INT to all set numbers, (<INT) display at most
-INT states, (#) show internal edge numbers
+hide edge labels, (h) horizontal layout, (i) or
+(i(GRAPHID)) add IDs, (k) use state labels when
+possible, (K) use transition labels (default), (n)
+show name, (N) hide name, (o) ordered transitions,
+(r) rainbow colors for acceptance sets, (R) color
+acceptance sets by Inf/Fin, (s) with SCCs, (t)
+force transition-based acceptance, (u) hide true
+states, (v) vertical layout, (y) split universal
+edges by color, (+INT) add INT to all set numbers,
+(<INT) display at most INT states, (#) show
+internal edge numbers
 -H, --hoaf[=1.1|i|k|l|m|s|t|v] Output the automaton in HOA format
 (default). Add letters to select (1.1) version
 1.1 of the format, (i) use implicit labels for
@@ -327,8 +328,8 @@ T0_S3:
 #+end_example
 
 Since Spin 6 extended its syntax to support arbitrary atomic
-propositions, you may also need put the parser in =--lenient= mode to
-support these:
+propositions, you may also need to put the parser in =--lenient= mode
+to support these:
 
 #+BEGIN_SRC sh
 ltl2tgba -s --lenient '(a < b) U (process[2]@ok)'
@@ -363,9 +364,9 @@ ltl2tgba --help | sed -n '/Simplification goal:/,/^$/p' | sed '1d;$d'
 : --small prefer small automata (default)
 
 The =--any= option tells the translator that it should attempt to
-reduce or produce a deterministic result result: any automaton
-denoting the given formula is OK. This effectively disables
-post-processings and speeds up the translation.
+reduce or produce a deterministic result: any automaton denoting the
+given formula is OK. This effectively disables post-processings and
+speeds up the translation.
 
 With the =-D= or =--deterministic= option, the translator will
 /attempt/ to produce a deterministic automaton, even if this requires
@@ -488,9 +489,9 @@ recognized by at most one accepting run of the automaton (however a
 word can be rejected by multiple runs, so unambiguous automata can be
 non-deterministic).
 
-The following example is an ambiguous Büchi automaton, because the are
-two ways to accept a run that repeats continuously the configuration
-$\bar ab$.
+The following example is an ambiguous Büchi automaton, because there
+are two ways to accept a run that repeats continuously the
+configuration $\bar ab$.
 
 #+NAME: ambig1
 #+BEGIN_SRC sh :exports code
@@ -628,13 +629,13 @@ automaton is overkill (you only need an accepted run).
 
 Finally, it should be noted that the default optimization options
 (=--small --high=) are usually overkill. =--low= will produce good
-automata most of the time. Most of pattern formulas of [[file:genltl.org][=genltl=]] will
-be efficiently translated in this configuration (meaning that =--small
---high= will not produce a better automaton). If you are planning to
-generate automata for large family of pattern formulas, it makes sense
-to experiment with the different settings on a small version of the
-pattern, and select the lowest setting that satisfies your
-expectations.
+automata most of the time. Most of the pattern formulas of [[file:genltl.org][=genltl=]]
+will be efficiently translated in this configuration (meaning that
+=--small --high= will not produce a better automaton). If you are
+planning to generate automata for large family of pattern formulas, it
+makes sense to experiment with the different settings on a small
+version of the pattern, and select the lowest setting that satisfies
+your expectations.
 
 * Deterministic automata with =--generic --deterministic=
 :PROPERTIES:
@@ -698,7 +699,7 @@ ltl2tgba "Fb|Gc" -G -D -d
 [[file:ltl2tgba-fbgc-D.svg]]
 
 
-Finally if we translate the conjunction of these two subformulas, a
+Finally, if we translate the conjunction of these two subformulas, a
 product of these two automata will be made, producing:
 
 #+NAME: ltl2tgba-fbgcfga-D
@@ -777,7 +778,7 @@ understood in the sense of the HOA format, where:
 sets), and
 - the parity acceptance is only a type of acceptance condition, i.e.,
 a formula expressed in terms of acceptance sets, and does not have
-additional constraints on these sets. In particular it is not
+additional constraints on these sets. In particular, it is not
 necessary the case that each transition or state belongs to exactly
 one acceptance set (this is the "colored" property, see below).
 
@@ -917,7 +918,8 @@ ltl2tgba --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
 (iw) inherently weak. Use uppercase letters to
 negate them.
 %d 1 if the output is deterministic, 0 otherwise
-%e number of reachable edges
+%e, %[LETTER]e number of edges (add one LETTER to select (r)
+reachable [default], (u) unreachable, (a) all).
 %f the formula, in Spot's syntax
 %F name of the input file
 %g, %[LETTERS]g acceptance condition (in HOA syntax); add brackets
@@ -933,6 +935,7 @@ ltl2tgba --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
 %[opt]h to specify additional options as in
 --hoa=opt)
 %L location in the input file
+%l serial number of the output automaton (0-based)
 %m name of the automaton
 %n number of nondeterministic states in output
 %p 1 if the output is complete, 0 otherwise
@@ -942,8 +945,16 @@ ltl2tgba --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
 LETTERS to restrict to(u) user time, (s) system
 time, (p) parent process, or (c) children
 processes.
-%s number of reachable states
-%t number of reachable transitions
+%s, %[LETTER]s number of states (add one LETTER to select (r)
+reachable [default], (u) unreachable, (a) all).
+%t, %[LETTER]t number of transitions (add one LETTER to select
+(r) reachable [default], (u) unreachable, (a)
+all).
-%u, %[e]u number of states (or [e]dges) with universal
-branching
+%u, %[LETTER]u 1 if the automaton contains some universal
+branching (or a number of [s]tates or [e]dges with
+universal branching)
 %w one word accepted by the output automaton
 %x, %[LETTERS]x number of atomic propositions declared in the
 automaton; add LETTERS to list atomic
@@ -955,7 +966,7 @@ ltl2tgba --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
 will be used to separate propositions
 #+end_example
 
-For instance we can study the size of the automata generated for the
+For instance, we can study the size of the automata generated for the
 right-nested =U= formulas as follows:
 
 #+BEGIN_SRC sh
@@ -5,7 +5,7 @@
 #+HTML_LINK_UP: tools.html
 #+PROPERTY: header-args:sh :results verbatim :exports code
 
-This tool generates various form of Testing Automata, i.e., automata
+This tool generates various forms of Testing Automata, i.e., automata
 that observe the /changes/ of atomic propositions, not their values.
 
 Three types of automata can be output. The only output format
@@ -31,7 +31,7 @@ The main differences with LBTT are:
 Although =ltlcross= performs similar sanity checks as LBTT, it does
 not implement any of the interactive features of LBTT. In our almost
 10-year usage of LBTT, we never had to use its interactive features to
-understand bugs in our translation. Therefore =ltlcross= will report
+understand bugs in our translation. Therefore, =ltlcross= will report
 problems, maybe with a counterexample, but you will be on your own to
 investigate and fix them (the =--grind= option may help you reduce the
 problem to a shorter formula).
@@ -243,7 +243,7 @@ If a translator exits with a non-zero status code, or fails to output
 an automaton =ltlcross= can read, and error will be displayed and the
 result of the translation will be discarded.
 
-Otherwise =ltlcross= performs the following checks on all translated
+Otherwise, =ltlcross= performs the following checks on all translated
 formulas ($P_i$ and $N_i$ designate respectively the translation of
 positive and negative formulas by the ith translator).
 
@@ -302,7 +302,7 @@ positive and negative formulas by the ith translator).
 likewise if it's $N_i$ that is deterministic.
 
 When validating a translator with =ltlcross= without using the
-=--determinize= option we highly recommend to include a translator
+=--determinize= option we highly recommend including a translator
 with good deterministic output to augment test coverage. Using
 '=ltl2tgba -D %f >%O=' will produce deterministic automata for all
 obligation properties and many recurrence properties. Using
@@ -312,7 +312,7 @@ positive and negative formulas by the ith translator).
 
 - Cross-comparison checks: for some state-space $S$,
 all $P_i\otimes S$ are either all empty, or all non-empty.
-Similarly all $N_i\otimes S$ are either all empty, or all non-empty.
+Similarly, all $N_i\otimes S$ are either all empty, or all non-empty.
 
 A cross-comparison failure could be displayed as:
 
@@ -328,7 +328,7 @@ positive and negative formulas by the ith translator).
 
 These products tests may sometime catch errors that were not
 captured by the first two tests if one non-deterministic automaton
-recognize less words than what it should. If the input automata
+recognize fewer words than what it should. If the input automata
 are all deterministic or the =--determinize= option is used, this test
 is redundant and can be disabled. (In fact, the =--determinize=
 option implies option =--product=0= to do so.)
@@ -349,7 +349,7 @@ positive and negative formulas by the ith translator).
 printed.
 
 This test may catch errors that were not captured by the first two
-tests if one non-deterministic automaton recognize less words than
+tests if one non-deterministic automaton recognize fewer words than
 what it should. If the input automata are deterministic or the
 =--determinize= option is used, this test is redundant and can be
 disabled. (In fact, the =--determinize= option implies option
@@ -569,7 +569,7 @@ since most statistics cannot be computed without an automaton...
 Those lines with missing data can be omitted with the =--omit-missing=
 option (this used to be the default up to Spot 1.2).
 
-However data for bogus automata are still included: as shown below
+However, data for bogus automata are still included: as shown below
 =ltlcross= will report inconsistencies between automata as errors, but
 it does not try to guess who is incorrect.
 
@@ -579,9 +579,9 @@ The number of column output in the CSV or JSON outputs depend on the
 options passed to =ltlcross=. Additional columns will be output if
 =--strength=, =--ambiguous=, =--automata=, or =--product=+N= are used.
 
-Columns =formula= and =tool= contain the formula translated and the
+Columns =formula= and =tool= contain the formula translated, and the
 command run to translate it. In the CSV, these columns contain the
-actual text. In the JSON output, these column contains an index into
+actual text. In the JSON output, these columns contain an index into
 the =formula= and =tool= table declared separately.
 
 =exit_status= and =exit_code= are used to indicate if the translator
@@ -634,7 +634,7 @@ These SCC strengths can be used to compute the strength of the
 automaton as a whole:
 - an automaton is terminal if it contains only non-accepting or
 terminal SCCs,
-- an automaton is weak if it it contains only non-accepting,
+- an automaton is weak if it contains only non-accepting,
 terminal, or weak SCCs,
 - an automaton is strong if it contains at least one strong SCC.
 
@@ -645,7 +645,7 @@ usually prefer terminal automata over weak automata, and weak automata
 over strong automata, because the emptiness check of terminal (and
 weak) automata is easier. When working with alternating automata, all
 those strength-related columns will be empty, because the routines
-used to compute those statistic do not yet support universal edges.
+used to compute those statistics do not yet support universal edges.
 
 =nondetstates= counts the number of non-deterministic states in the
 automaton. =nondeterministic= is a Boolean value indicating if the
@@ -669,7 +669,7 @@ count the number of state, transitions and strongly-connect components
 in the product that has been built between the translated automaton
 and a random model. For a given formula, the same random model is of
 course used against the automata translated by all tools. Comparing
-the size of these product might give another indication of the
+the size of these products might give another indication of the
 "conciseness" of a translated automaton.
 
 There is of course a certain "luck factor" in the size of the product.
@@ -868,7 +868,7 @@ Classes ‘data.table’ and 'data.frame': 20 obs. of 16 variables:
 - attr(*, ".internal.selfref")=<externalptr>
 #+end_example
 
-Currently the data frame shows one line per couple (formula, tool).
+Currently, the data frame shows one line per couple (formula, tool).
 This makes comparing tools quite difficult, as their results are on
 different lines.
 
@@ -952,8 +952,8 @@ ggplot(dt2, aes(x=states.small, y=states.deter)) +
 ** =--stop-on-error=
 
 The =--stop-on-error= option will cause =ltlcross= to abort on the
-first detected error. This include failure to start some translator,
-read its output, or failure to passe the sanity checks. Timeouts are
+first detected error. This includes failure to start some translator,
+read its output, or failure to pass the sanity checks. Timeouts are
 allowed unless =--fail-on-time= is also given.
 
 One use for this option is when =ltlcross= is used in combination with
@@ -1003,7 +1003,7 @@ Here is the procedure used:
   them by length (as [[file:ltlgrind.org][=ltlgrind --sort=]] would do)
 - process every mutation until one is found that exhibit the bug
 - repeat the process with this new formula, and again until a formula
-  is found for which no mutation exhibit the bug
+  is found for which no mutation exhibits the bug
 - output that last formula in =FILENAME=
 
 If =--save-bogus=OTHERFILENAME= is provided, every bogus formula found
@@ -1172,7 +1172,7 @@ The =--no-check= option disables all sanity checks, and only use the supplied
 formulas in their positive form.
 
 When checks are enabled, the negated formulas are intermixed with the
-positives ones in the results. Therefore the =--no-check= option can
+positives ones in the results. Therefore, the =--no-check= option can
 be used to gather statistics about a specific set of formulas.
 
 ** =--verbose=
@@ -1183,7 +1183,7 @@ be used to gather statistics about a specific set of formulas.
 The verbose option can be useful to troubleshoot problems or simply
 follow the list of transformations and tests performed by =ltlcross=.
 
-For instance here is what happens if we try to cross check =ltl2tgba=
+For instance here is what happens if we try to cross-check =ltl2tgba=
 and =ltl3ba -H1= on the formula =FGa=. Note that =ltl2tgba= will
 produce transition-based generalized Büchi automata, while =ltl3ba
 -H1= produces co-Büchi alternating automata.
@@ -1231,8 +1231,8 @@ First =FGa= and its negations =!FGa= are translated with the two
 tools, resulting in four automata: two positive automata =P0= and =P1=
 for =FGa=, and two negative automata =N0= and =N1=.
 
-Some basic information about the collected automata are displayed.
-For instance we can see that although =ltl3ba -H1= outputs co-Büchi
+Some basic information about the collected automata is displayed.
+For instance, we can see that although =ltl3ba -H1= outputs co-Büchi
 alternating automata, only automaton =N1= uses universal edges: the
 automaton =P1= can be used like a non-alternating co-Büchi automaton.
 
@@ -1250,9 +1250,9 @@ rewriting them to get rid of any =Fin= acceptance.
 
 After this preparatory work, it is time to actually compare these
 automata. Together, the tests =P0*N0= and =Comp(N0)*Comp(P0)= ensure
-that the automaton =N0= is really the complement of =P0=. Similarly
+that the automaton =N0= is really the complement of =P0=. Similarly,
 =P1*N1= and =Comp(N1)*Comp(P1)= ensure that =N1= is the complement of
-=P1=. Finally =P0*N1= and =P1*N0= ensure that =P1= is equivalent to
+=P1=. Finally, =P0*N1= and =P1*N0= ensure that =P1= is equivalent to
 =P0= and =N1= is equivalent to =N0=.
 
 Note that if we reduce =ltlcross='s ability to determinize
@@ -1377,7 +1377,7 @@ No problem detected.
 :END:
 
 The =ltlcross= command itself has no built-in support for
-parallelization (patches welcome). However its interface makes it
+parallelization (patches welcome). However, its interface makes it
 rather easy to parallelize =ltlcross= runs with third-party tools
 such as:
 
@@ -1405,7 +1405,7 @@ No problem detected.
 with 8 processes in parallel. Here =ltlcross= is called with
 option =-q= to silence most its regular output as the 8 instances
 of =ltlcross= would be otherwise writing to the same terminal.
-With =-q=, only errors are displayed. Additionally =--save-bogus=
+With =-q=, only errors are displayed. Additionally, =--save-bogus=
 is used to keep track of all formulas causing errors. The =>>bugs.ltl=
 syntax means to open =bugs.ltl= in append mode, so that =bugs.ltl= does
 not get overwritten each time a new =ltlcross= instance finds a bug.
@@ -22,7 +22,7 @@ any other tool.
 
 As a motivating example, consider a scenario where we want to run
 [[https://sourceforge.net/projects/ltl3ba/][=ltl3ba=]] on a set of 10 formulas stored in a file. For each formula
-we would like to compute compute the number of states and edges in the
+we would like to compute the number of states and edges in the
 Büchi automaton produced by =ltl3ba=.
 
 Here is the input file:
@@ -192,7 +192,7 @@ ltldo --help | sed -n '/character sequences:/,/^$/p' | sed '1d;$d'
 :                            ltl2dstar's format
 
 Contrarily to =ltlcross=, it this not mandatory to specify an output
-filename using one of the sequence for that last line. For instance
+filename using one of the sequence for that last line. For instance,
 we could simply run a formula though =echo= to compare different
 output syntaxes:
 
@@ -328,7 +328,7 @@ will be changed into
 '{DRA} ~/mytools/ltl2dstar-0.5.2 --output-format=hoa %[MW]L %O'
 #+end_example
 
-Therefore you can type the following to obtain a Dot output (as
+Therefore, you can type the following to obtain a Dot output (as
 requested with =-d=) for the neverclaim produced by =ltl2ba -f a=.
 
 #+BEGIN_SRC sh :prologue export SPOT_DOTEXTRA= SPOT_DOTDEFAULT=
@@ -354,7 +354,7 @@ The =ltl2ba= argument passed to =ltldo= was interpreted as if you had
 typed ={ltl2ba}ltl2ba -f %s>%O=.
 
 Those shorthand patterns are only tested if the command string does
-not contains any =%= character. They should always patch a prefix of
+not contain any =%= character. They should always patch a prefix of
 the command, ignoring any leading directory. This makes it possible
 to add options:
 
@@ -415,7 +415,7 @@ syntax, but cannot cope with double-quoted atomic propositions).
 There are some cases where the renaming is not completely transparent.
 For instance if a translator tool outputs some HOA file named after
 the formula translated, the name will be output unmodified (since this
-can be any text string, there is not way for =ltldo= to assume it is
+can be any text string, there is no way for =ltldo= to assume it is
 an LTL formula). In the following example, you can see that the
 automaton uses the atomic proposition =Error=, but its name contains a
 reference to =p0=.
@@ -518,9 +518,9 @@ The sorting criterion can be specified using =--smallest= or
 =--greatest=, optionally followed by a format string with
 =%=-sequences. The default criterion is =%s,%e=, so the number of
 states will be compared first, and in case of equality the number of
-edges. If we desire the automaton that has the fewest states, and in
-case of equality the smallest number of non-deterministic states, we
-can use the following command instead.
+edges. If we desire the automaton that has the fewest states (=%s=),
+and in case of equality the smallest number of non-deterministic
+states (=%n=), we can use the following command instead.
 
 #+BEGIN_SRC sh
 ltldo ltl2ba ltl3ba 'ltl2tgba -s' -f 'F(a & Xa | FGa)' --smallest=%s,%n
@@ -549,20 +549,19 @@ State: 2 {0}
 --END--
 #+end_example
 
-We can of course apply this on a large number of formulas. For
-instance here is a more complex pipeline, where we take 11 patterns
-from Dwyer et al. (FMSP'98), and print which translator among
-=ltl2ba=, =ltl3ba=, and =ltl2tgba -s= would produce the smallest
-automaton.
+We can of course apply this to a stream of formulas. For instance
+here is a more complex pipeline, where we take 11 patterns from [[https://doi.org/10.1145/302405.302672][Dwyer
+et al. (FMSP'98)]], and print which translator among =ltl2ba=,
+=ltl3ba=, and =ltl2tgba -s= would produce the smallest automaton.
 
 #+BEGIN_SRC sh
-genltl --dac=10..20 --format=%F:%L,%f |
+genltl --dac=10..20 --format=%F:%L,%f |
 ltldo -F-/2 ltl2ba ltl3ba 'ltl2tgba -s' --smallest --stats='%<,%T'
 #+END_SRC
 #+RESULTS:
 #+begin_example
 dac-patterns:10,ltl2ba
-dac-patterns:11,ltl3ba
+dac-patterns:11,ltl2ba
 dac-patterns:12,ltl2tgba -s
 dac-patterns:13,ltl2tgba -s
 dac-patterns:14,ltl2tgba -s
@@ -603,22 +602,22 @@ dac-patterns:20,G((p0 & !p1) -> (p2 W p1))
 This is a two-column CSV file where each line is a description of the
 origin of the formula (=%F:%L=), followed by the formula itself
 (=%f=). The =ltldo= from the previous pipeline simply takes its input
-from the second column of its standard input (=-F-/2=), run that
-formula through the three translator, pick the smallest automaton
-(=--smallest=), and for this automaton, it display the translator that
+from the second column of its standard input (=-F-/2=), runs that
+formula through the three translators, picks the smallest automaton
+(=--smallest=), and for this automaton, it displays the translator that
 was used (=%T=) along with the portion of the CSV file that was before
 the input column (=%<=).
 
 
-If you are curious about the actually size of the automata produced by
+If you are curious about the actual size of the automata produced by
 =ltl2ba=, =ltl3ba=, and =ltl2tgba -s= in the above example, you can
 quickly build a CSV file using the following pipeline where each
-command append a new column. We wrap =ltl2ba= and =ltl3ba= with
+command appends a new column. We wrap =ltl2ba= and =ltl3ba= with
 =ltldo= so that they can process one column of the CSV that is input,
 and output statistics in CSV as output. =ltl2tgba= does not need
 that, as it already supports those features. In the resulting CSV
 file, displayed as a table below, entries like =2s 4e 0d= represent an
-automaton with 2 states, 4 edges, and that is not deterministic. .
+automaton with 2 states, 4 edges, and that is not deterministic.
 (We have a [[file:csv.org][separate page]] with more examples of reading and writing CSV
 files.)
 
@@ -679,8 +678,8 @@ When a timeout occurs a warning is printed on stderr, and no automaton
 command/formula. The processing then continue with other formulas and
 tools. Timeouts are not considered as errors, so they have no effect
 on the exit status of =ltldo=. This behavior can be changed with
-option =--fail-on-timeout=, in which case timeouts are considered
-as errors.
+option =--fail-on-timeout=, in which case timeouts are considered as
+errors.
 
 For each command (that does not terminate with a timeout) the runtime
 can be printed using the =%r= escape sequence. This makes =ltldo= an
|
|||
|
|
@@ -79,14 +79,22 @@ ltlfilt --help | sed -n '/Transformation options.*:/,/^$/p' | sed '1d;$d'
       --nnf                  rewrite formulas in negative normal form
       --relabel[=abc|pnn]    relabel all atomic propositions, alphabetically
                              unless specified otherwise
-      --relabel-bool[=abc|pnn]   relabel Boolean subexpressions, alphabetically
+      --relabel-bool[=abc|pnn]   relabel Boolean subexpressions that do not
+                             share atomic propositions, relabel alphabetically
                              unless specified otherwise
+      --relabel-overlapping-bool[=abc|pnn]
+                             relabel Boolean subexpressions even if they share
+                             atomic propositions, relabel alphabetically unless
+                             specified otherwise
       --remove-wm            rewrite operators W and M using U and R (this is
                              an alias for --unabbreviate=WM)
       --remove-x             remove X operators (valid only for
                              stutter-insensitive properties)
   -r, --simplify[=LEVEL]     simplify formulas according to LEVEL (see below);
                              LEVEL is set to 3 if omitted
+      --sonf[=PREFIX]        rewrite formulas in suffix operator normal form
+      --sonf-aps[=FILENAME]  when used with --sonf, output the newly introduced
+                             atomic propositions
       --unabbreviate[=STR]   remove all occurrences of the operators specified
                              by STR, which must be a substring of "eFGiMRW^",
                              where 'e', 'i', and '^' stand respectively for
@@ -110,7 +118,7 @@ be reordered by =ltlfilt= even when the formula is not changed
 otherwise. This is because Spot internally order all operands of
 commutative and associative operators, and that this order depends on
 the order in which the subformulas are first encountered. Adding
-transformation options such as =-r= may alter this order. However
+transformation options such as =-r= may alter this order. However,
 this difference is semantically insignificant.
 
 Formulas can be easily negated using the =-n= option, rewritten into
@@ -161,14 +169,14 @@ ltlfilt -f '(a & !b) & GF(a & !b) & FG(!c & a)' --relabel-bool=pnn
 
 In the first formula, the independent =a & !b= and =!c= subformulas
 were respectively renamed =p0= and =p1=. In the second formula, =a &
-!b= and =!c & a= are dependent so they could not be renamed; instead
+!b= and =!c & a= are dependent, so they could not be renamed; instead
 =a=, =!b= and =c= were renamed as =p0=, =p1= and =p2=.
 
 This option was originally developed to remove superfluous formulas
 from benchmarks of LTL translators. For instance the automata
 generated for =GF(a|b)= and =GF(p0)= should be structurally
 equivalent: replacing =p0= by =a|b= in the second automaton should
-turn in into the first automaton, and vice-versa. (However algorithms
+turn in into the first automaton, and vice versa. (However algorithms
 dealing with =GF(a|b)= might be slower because they have to deal with
 more atomic propositions.) So given a long list of LTL formulas, we
 can combine =--relabel-bool= and =-u= to keep only one instance of
@@ -284,7 +292,7 @@ ltldo ltl3ba -f '"proc@loc1" U "proc@loc2"' --spin
 : }
 
 This case also relabels the formula before calling =ltl3ba=, and it
-then rename all the atomic propositions in the output.
+then renames all the atomic propositions in the output.
 
 An example showing how to use the =--from-ltlf= option is on [[file:tut12.org][a
 separate page]].
@@ -308,13 +316,19 @@ ltlfilt --help | sed -n '/Filtering options.*:/,/^$/p' | sed '1d;$d'
       --guarantee            match guarantee formulas (even pathological)
       --implied-by=FORMULA   match formulas implied by FORMULA
       --imply=FORMULA        match formulas implying FORMULA
+      --liveness             match liveness properties
       --ltl                  match only LTL formulas (no PSL operator)
+  -N, --nth=RANGE            assuming input formulas are numbered from 1, keep
+                             only those in RANGE
       --obligation           match obligation formulas (even pathological)
+      --persistence          match persistence formulas (even pathological)
+      --recurrence           match recurrence formulas (even pathological)
       --reject-word=WORD     keep formulas that reject WORD
       --safety               match safety formulas (even pathological)
+      --size=RANGE           match formulas with size in RANGE
       --stutter-insensitive, --stutter-invariant
                              match stutter-insensitive LTL formulas
       --suspendable          synonym for --universal --eventual
       --syntactic-guarantee  match syntactic-guarantee formulas
       --syntactic-obligation match syntactic-obligation formulas
       --syntactic-persistence match syntactic-persistence formulas
@@ -471,8 +485,8 @@ size. So =F(a & b & c)= would have Boolean-size 2. This type of size
 is probably a better way to classify formulas that are going to be
 translated as automata, since transitions are labeled by Boolean
 formulas: the complexity of the Boolean subformulas has little
-influence on the overall translation. Here are 10 random formula with
-Boolean-size 5:
+influence on the overall translation. Here are 10 random formulas
+with Boolean-size 5:
 
 #+BEGIN_SRC sh
 randltl -n -1 --tree-size=12 a b | ltlfilt --bsize=5 -n 10
@@ -513,6 +527,7 @@ ltlfilt --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
   %h, %[vw]h                 the class of the formula is the Manna-Pnueli
                              hierarchy ([v] replaces abbreviations by class
                              names, [w] for all compatible classes)
+  %l                         the serial number of the output formula
   %L                         the original line number in the input file
   %[OP]n                     the nesting depth of operator OP. OP should be a
                              single letter denoting the operator to count, or
@@ -522,7 +537,7 @@ ltlfilt --help | sed -n '/ sequences:/,/^$/p' | sed '1d;$d'
   %r                         wall-clock time elapsed in seconds (excluding
                              parsing)
   %R, %[LETTERS]R            CPU time (excluding parsing), in seconds; Add
-                             LETTERS to restrict to(u) user time, (s) system
+                             LETTERS to restrict to (u) user time, (s) system
                              time, (p) parent process, or (c) children
                              processes.
   %s                         the length (or size) of the formula
@@ -540,7 +555,7 @@ As a trivial example, use
 #+HTML: <code>--latex --format='$%f$'</code>
 to enclose formula in LaTeX format with =$...$=.
 
-But =--format= can be useful in more complex scenarios. For instance
+But =--format= can be useful in more complex scenarios. For instance,
 you could print only the line numbers containing formulas matching
 some criterion. In the following, we print only the numbers of the
 lines of =scheck.ltl= that contain guarantee formulas:
@@ -5,9 +5,8 @@
 #+HTML_LINK_UP: tools.html
 #+PROPERTY: header-args:sh :results verbatim :exports both
 
-:results scalar: Is the same as :results verbatim.
-
-:results table: Interprets the results as an Org This tool lists
-formulas that are similar to but simpler than a given formula by
-applying simple mutations to it, like removing operands or
+This tool lists formulas that are similar to but simpler than a given
+formula by applying simple mutations to it, like removing operands or
 operators. This is meant to be used with ltlcross to simplify a
@@ -66,7 +65,7 @@ The idea behind this tool is that when a bogus algorithm is found with
 =ltlcross=, you probably want to debug it using a smaller formula than
 the one found by =ltlcross=. So you would give the formula found by
 =ltlcross= as an argument to =ltlgrind= and then use the resulting
-mutations as an new input for =ltlcross=. It might report an error on
+mutations as a new input for =ltlcross=. It might report an error on
 one of the mutation, which is guaranteed to be simpler than the
 initial formula. The process can then be repeated until no error is
 reported by =ltlcross=.
@@ -27,7 +27,7 @@ controller, the acceptance condition is irrelevant and trivially true.
 - =--formula= or =--file=: a specification in LTL or PSL.
 
 One of =--ins= or =--outs= may be omitted, as any atomic proposition not listed
-as input can be assumed to be output and vice-versa.
+as input can be assumed to be output and vice versa.
 
 The following example illustrates the synthesis of a controller
 ensuring that input =i1= and =i2= are both true initially if and only
@@ -272,11 +272,20 @@ Further improvements are described in the following paper:
   /Alexandre Duret-Lutz/, and /Adrien Pommellet/. Presented at the
   SYNT'21 workshop. ([[https://www.lrde.epita.fr/~adl/dl/adl/renkin.21.synt.pdf][pdf]] | [[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#renkin.21.synt][bib]])
 
-Simplification of Mealy machines is discussed in:
+Simplification of Mealy machines is discussed in the following papers:
 
 - *Effective reductions of Mealy machines*, /Florian Renkin/,
   /Philipp Schlehuber-Caissier/, /Alexandre Duret-Lutz/, and /Adrien Pommellet/.
   Presented at FORTE'22. ([[https://www.lrde.epita.fr/~adl/dl/adl/renkin.22.forte.pdf][pdf]] | [[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#renkin.22.forte][bib]])
+- *The Mealy-machine reduction functions of Spot*, /Florian Renkin/,
+  /Philipp Schlehuber-Caissier/, /Alexandre Duret-Lutz/, and /Adrien Pommellet/.
+  Science of Computer Programming, 230(102995), August 2023. ([[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#renkin.23.fmsd][bib]] | [[https://www.lrde.epita.fr/~adl/dl/adl/renkin.23.fmsd.pdf][pdf]])
+
+A more recent paper covering many aspects of =ltlsynt= is the following
+
+- *Dissecting ltlsynt*, /Florian Renkin/, /Philipp
+  Schlehuber-Caissier/, /Alexandre Duret-Lutz/, and Adrien Pommellet.
+  In Formal Methods in System Design, 2023. ([[https://www.lrde.epita.fr/~adl/dl/adl_bib.html#renkin.23.scp][bib]] | [[https://www.lrde.epita.fr/~adl/dl/adl/renkin.23.scp.pdf][pdf]])
 
 # LocalWords: utf ltlsynt AIGER html args mapsto SRC acc aiger TLSF
 # LocalWords: UNREALIZABLE unrealizable SYNTCOMP realizability Proc
@@ -198,7 +198,7 @@ State: 1
 #+END_SRC
 
 Option =m= uses mixed acceptance, i.e, some states might use
-state-based acceptance while other will not:
+state-based acceptance while others will not:
 
 #+BEGIN_SRC sh :wrap SRC hoa
 ltl2tgba -Hm '(Ga -> Gb) W c'
@@ -569,16 +569,16 @@ The dot output can also be customized via two environment variables:
 - =SPOT_DOTDEFAULT= contains default arguments for the =--dot= option
   (for when it is used implicitly, or used as just =--dot= without
   argument). For instance after =export SPOT_DOTDEFAULT=vcsn=, using
-  =--dot= is equivalent to =--dot=vcsn=. However using =--dot=xyz=
+  =--dot= is equivalent to =--dot=vcsn=. However, using =--dot=xyz=
   (for any value of =xyz=, even empty) will ignore the
   =SPOT_DOTDEFAULT= variable. If the argument of =--dot= contains
   a dot character, then this dot is replaced by the contents of
   =SPOT_DOTDEFAULT=. So ~--dot=.A~ would be equivalent to =--dot=vcsnA=
   with our example definition of =SPOT_DOTDEFAULT=.
-- =SPOT_DOTEXTRA= may contains an arbitrary string that will be emitted
+- =SPOT_DOTEXTRA= may contain an arbitrary string that will be emitted
   in the dot output before the first state. This can be used to modify
   any attribute. For instance (except for this page, where we had
-  do demonstrate the various options of =--dot=, and a few pages where
+  to demonstrate the various options of =--dot=, and a few pages where
   we show the =--dot= output verbatim) all the automata displayed in
   this documentation are generated with the following environment
   variables set:
@@ -598,7 +598,7 @@ passing option =i= to the dot printer, this unique number will be used
 to form a unique =id= attribute for these elements: a prefix =S= (for
 state), =E= (for edge), or "SCC=" is simply added to the unique
 number. Additionally, using =i(graphid)= will define =graphid= as
-that =id= of the automaton. GraphViz will keep these identifier in
+that =id= of the automaton. GraphViz will keep these identifiers in
 the generated SVG, so this makes it possible to modify rendering
 of the automaton using CSS or javascript.
 
@@ -683,7 +683,7 @@ ltl2tgba -G -D '!a & FGa' --dot=sbarx | dot2tex --autosize --nominsize > out.tex
 
 Caveats:
 - =dot2tex= should be called with option =--autosize= in order to
-  compute the size of each label before calling GraphViz to layout the
+  compute the size of each label before calling GraphViz to lay out the
   graph. This is because GraphViz cannot compute the correct size of
   mathematical formulas. Make sure you use =dot2tex= version 2.11 or
   later, as earlier releases had a bug where sizes were interpreted
@@ -836,7 +836,7 @@ current process and any of its children: adding =p= (parent) and =c=
 (children) will show only the selected time. Note that few tools
 actually execute other processes: [[file:autfilt.org][=autfilt=]] and [[file:ltl2tgba.org][=ltl2tgba=]] can do so
 when calling a SAT solver for [[file:satmin.org][SAT-based minimization]], and [[file:ltldo.org][=ltldo=]] will
-obviously call any listed tool. However in the case of =ltldo= the
+obviously call any listed tool. However, in the case of =ltldo= the
 measured time is that of executing the other tools, so the result of
 =%[p]R= is likely to be always 0.
 
@@ -8,7 +8,7 @@
 The =randaut= tool generates random (connected) automata.
 
 By default, it will generate a random automaton with 10 states, no
-acceptance sets, and using a set of atomic propositions you have to
+acceptance set, and using a set of atomic propositions you have to
 supply.
 
 #+NAME: randaut1
@@ -37,7 +37,7 @@ In an automaton with $Q$ states and density $e$, the degree of each
 state will follow a normal distribution with mean $1+(Q-1)d$ and
 variance $(Q-1)e(1-e)$.
 
-In particular =-e0= will cause all states to have 1 successors, and
+In particular =-e0= will cause all states to have 1 successor, and
 =-e1= will cause all states to be interconnected.
 
 #+NAME: randaut2
@@ -212,7 +212,7 @@ The output format can be controlled using [[file:oaut.org][the common output options]]
 like =--hoaf=, =--dot==, =--lbtt=, and =--spin=. Note that =--spin=
 automatically implies =--ba=.
 
-Automata are send to standard output by default, by you can use =-o=
+Automata are sent to standard output by default, by you can use =-o=
 to give a filename, or even a pattern for filenames. For instance the
 following generates 20 automata, but store them in different files
 according to the acceptance condition. The format =%g= represent the
@@ -108,13 +108,13 @@ them. Rather than running =randltl= several times with different
 seeds, we can use the =-n= option to specify a number of formulas to
 produce as seen in the very first example of this page.
 
-By default =randltl= will never output the same formula twice (this
+By default, =randltl= will never output the same formula twice (this
 can be changed with the =--allow-dups= option), so it may generate
 more formulas internally than it eventually prints. To ensure
 termination, for each output formula the number of ignored (because
 duplicated) random formulas that are generated is limited to 100000.
-Therefore in some situations, most likely when generating small
-formulas, with few atomic proposition, you may see =randltl= stop
+Therefore, in some situations, most likely when generating small
+formulas with few atomic propositions, you may see =randltl= stop
 before the requested number of formulas has been output with an error
 message.
 
@@ -142,7 +142,7 @@ randltl -n 5 a b c --tree-size=22..30
 : 1
 
 The tree size is just the number of nodes in the syntax tree of the
-formula during its construction. However because Spot automatically
+formula during its construction. However, because Spot automatically
 applies some /trivial simplifications/ during the construction of its
 formulas (e.g., =F(F(a)= is reduced to =F(a)=, =a&0= to =0=, etc.),
 the actual size of the formula output may be smaller than the
@@ -150,7 +150,7 @@ tree size specified.
 
 It is pretty common to obtain the formulas =0= or =1= among the first
 formulas output, since many random formulas trivially simplify to
-these. However because duplicate formulas are suppressed by default,
+these. However, because duplicate formulas are suppressed by default,
 they shall only occur once.
 
 Stronger simplifications may be requested using the =-r= option, that
@@ -257,7 +257,7 @@ randltl -B -n 5 a b c
 
 In that case, priorities should be set with =--boolean-priorities=.
 
-Finally, PSL formulas may be output using the =-P= option. However
+Finally, PSL formulas may be output using the =-P= option. However,
 keep in mind that since LTL formulas are PSL formulas, generating
 random PSL formula may produce many LTL formulas that do not use any
 PSL operator (this is even more so the case when simplifications are
@@ -329,7 +329,7 @@ or 1
 The =--ltl-priorities= option we have seen previously now recognize
 some new PSL-specific operators: =Closure= is the ={sere}= operator,
 =EConcat= is the ={sere}<>->f= operator, and =UConcat= is the
-={sere}[]->f= operator. When these operator are selected, they
+={sere}[]->f= operator. When these operators are selected, they
 require a SERE argument which is generated according to the priorities
 set by =--sere-priorities=: =eword= is the empty word, =boolform= is a
 Boolean formula (generated using the priorities set by
@@ -51,7 +51,7 @@ Let us first state a few facts about this minimization procedure.
 
 * How to change the SAT solver used
 
-By default Spot uses PicoSAT call_version()[:results raw], this SAT-solver
+By default, Spot uses PicoSAT call_version()[:results raw], this SAT-solver
 is built into the Spot library, so that no temporary files are used to
 store the problem.
 
@@ -79,10 +79,10 @@ post-processing routine used by both tools to prefer a
 deterministic automaton over a smaller equivalent nondeterministic
 automaton.
 
-However =-D= is not a guarantee to obtain a deterministic automaton,
+However, =-D= is not a guarantee to obtain a deterministic automaton,
 even if one exists. For instance, =-D= fails to produce a
-deterministic automaton for =a U X(b | GF!b)=. Instead we get a 4-state
-non-deterministic automaton.
+deterministic automaton for =a U X(b | GF!b)=. Instead, we get a
+4-state non-deterministic automaton.
 
 #+BEGIN_SRC sh
 ltl2tgba -D 'a U X(b | GF!b)' --stats='states=%s, det=%d'
@@ -161,7 +161,7 @@ some help from [[http://www.ltl2dstar.de/][=ltl2dstar=]].

 The first is purely syntactic. If a formula belongs to the class of
 "syntactic recurrence formulas", it expresses a syntactic property.
-(Of course there are formulas that expresses a syntactic properties
+(Of course there are formulas that expresses syntactic properties
 without being syntactic recurrences.) [[file:ltlfilt.org][=ltlfilt=]] can be instructed to
 print only formulas that are syntactic recurrences:

@@ -220,9 +220,9 @@ minimized into an even smaller automaton if we use multiple acceptance
 sets.

 Unfortunately because =dstar2tgba= does not know the formula being
-translated, and it always convert a DRA into a DBA (with a single
+translated, and it always converts a DRA into a DBA (with a single
 acceptance set) before further processing, it does not know if using
 more acceptance sets could be useful to further minimize it. This
 number of acceptance sets can however be specified on the command-line
 with option =-x sat-acc=M=. For instance:

@@ -257,10 +257,10 @@ options).
 The picture is slightly inaccurate in the sense that both =ltl2tgba=
 and =dstar2tgba= are actually using the same post-processing chain:
 only the initial translation to TGBA or conversion to DBA differs, the
-rest is the same. However in the case of =dstar2tgba=, no
+rest is the same. However, in the case of =dstar2tgba=, no
 degeneration or determinization are needed.

-Also the picture does not show what happens when =-B= is used: any
+Also, the picture does not show what happens when =-B= is used: any
 DTBA is degeneralized into a DBA, before being sent to "DTBA SAT
 minimization", with a special option to request state-based output.

@@ -285,7 +285,7 @@ The following options can be used to fine-tune this procedure:
 resulting DTBA is equivalent to the input.
 - =-x sat-minimize= :: enable SAT-based minimization. It is the same as
 =-x sat-minimize=1= (which is the default value). It performs a dichotomy
-to find the correct automaton size.This option implies =-x tba-det=.
+to find the correct automaton size. This option implies =-x tba-det=.
 - =-x sat-minimize=[2|3]= :: enable SAT-based
 minimization. Let us consider each intermediate automaton as a =step=
 towards the minimal automaton and assume =N= as the size of the starting
@@ -311,7 +311,7 @@ The following options can be used to fine-tune this procedure:
 - =-x sat-incr-steps=N= :: set the value of =sat-incr-steps= to N. It does not
 make sense to use it without =-x sat-minimize=2= or =-x sat-minimize=3=.
 - =-x sat-acc=$m$= :: attempt to build a minimal DTGBA with $m$ acceptance sets.
-This options implies =-x sat-minimize=.
+This option implies =-x sat-minimize=.
 - =-x sat-states=$n$= :: attempt to build an equivalent DTGBA with $n$
 states. This also implies =-x sat-minimize= but won't perform
 any loop to lower the number of states. Note that $n$ should be

@@ -319,7 +319,7 @@ The following options can be used to fine-tune this procedure:
 and =dstar2tgba= both remove sink states in their output by
 default (use option =--complete= to output a complete automaton).
 Also note that even with the =--complete= option, the output
-automaton may have appear to have less states because the other
+automaton may appear to have fewer states because the other
 are unreachable.
 - =-x state-based= :: for all outgoing transition of each state
 to belong to the same acceptance sets.
@@ -332,7 +332,7 @@ is implied.

 * Using =autfilt --sat-minimize= to minimize any deterministic ω-automaton

-This interface is new in Spot 1.99 and allows to minimize any
+This interface is new in Spot 1.99 and allows minimizing any
 deterministic ω-automaton, regardless of the acceptance condition
 used. By default, the procedure will try to use the same acceptance
 condition (or any inferior one) and produce transition-based

@@ -389,7 +389,7 @@ $txt

 This is clearly smaller than the input automaton. In this example the
 acceptance condition did not change. The SAT-based minimization only
-tries to minimize the number of states, but sometime the
+tries to minimize the number of states, but sometimes the
 simplifications algorithms that are run before we attempt SAT-solving
 will simplify the acceptance, because even removing a single
 acceptance set can halve the run time.
@@ -411,7 +411,7 @@ $txt


 Note that instead of naming the acceptance condition, you can actually
-give an acceptance formula in the [[http://adl.github.io/hoaf/#acceptance][HOA syntax]]. For example we can
+give an acceptance formula in the [[http://adl.github.io/hoaf/#acceptance][HOA syntax]]. For example, we can
 attempt to create a co-Büchi automaton with

 #+NAME: autfiltsm5

@@ -444,7 +444,7 @@ obtain an upper bound on the number of states if you haven't specified
 specify by hand.

 Here is an example demonstrating the case where the input automaton is
 smaller than the output. Let's take this small TGBA as input:

 #+NAME: autfiltsm6
 #+BEGIN_SRC sh :exports code
@@ -472,7 +472,7 @@ echo $?
 #+RESULTS: autfiltsm7
 : 1

-However if we allow more states, it will work:
+However, if we allow more states, it will work:

 #+NAME: autfiltsm8
 #+BEGIN_SRC sh :exports code
@@ -491,7 +491,7 @@ By default, the SAT-based minimization tries to find a smaller automaton by
 performing a binary search starting from =N/2= (N being the size of the
 starting automaton). After various benchmarks, this algorithm proves to be the
 best. However, in some cases, other rather similar methods might be better. The
-algorithm to execute and some other parameters can be set thanks to the
+algorithm to execute, and some other parameters can be set thanks to the
 =--sat-minimize= option.

 The =--sat-minimize= option takes a comma separated list of arguments
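As an aside, the dichotomy that this hunk documents can be sketched independently of any SAT solver. The following is a library-free illustration, not Spot's implementation: =feasible= is a hypothetical stand-in for one "does an equivalent automaton with k states exist?" query, assumed monotone.

```python
def minimize_size(n, feasible):
    """Smallest k in [1, n] with feasible(k), assuming feasible(n)
    holds and feasible is monotone (feasible(k) implies feasible(k+1)).
    Mirrors the binary search performed by sat-minimize=1."""
    lo, hi = 1, n              # invariant: hi is always feasible
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid           # a smaller automaton exists
        else:
            lo = mid + 1       # mid states are not enough
    return hi

# toy predicate: pretend the minimal automaton has 5 states
print(minimize_size(16, lambda k: k >= 5))   # 5
```

The naive strategy mentioned below (=sat-naive=) would instead test N-1, N-2, ... linearly, trading more solver calls for simpler incremental problems.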
@@ -530,9 +530,9 @@ that can be any of the following:
 - =sat-naive= :: use the =naive= algorithm to find a smaller automaton. It starts
 from =N= and then checks =N-1=, =N-2=, etc. until the last successful
 check.
 - =sat-langmap= :: Find the lower bound of default sat-minimize procedure. This
 relies on the fact that the size of the minimal automaton is at least equal
 to the total number of different languages recognized by the automaton's
 states.
 - =colored= :: force all transitions (or all states if =-S= is used)
 to belong to exactly one acceptance condition.

@@ -559,7 +559,7 @@ $txt
 [[file:autfiltsm9.svg]]

 ... to the following, where the automaton is colored, i.e., each state
-belong to exactly one acceptance set:
+belongs to exactly one acceptance set:

 #+NAME: autfiltsm10
 #+BEGIN_SRC sh :exports code
@@ -589,7 +589,7 @@ dstar2tgba -D -x sat-minimize,sat-acc=2 --stats='input(states=%S) output(states=
 #+RESULTS:
 : input(states=11) output(states=5, acc-sets=2, det=1)

-Here is the contents of the =stats.csv= file:
+Here are the contents of the =stats.csv= file:
 #+begin_src sh :exports results :results output raw
 sed '1a\
 |-|
@@ -623,8 +623,8 @@ file follows RFC4180 in escaping double-quote by doubling them.

 In the above example, the DRA produced by =ltl2dstar= had 11 states.
 In the first line of the =stats.csv= file, you can see the
-minimization function had a 8-state input, which means that
-=dstar2tgba= first reduced the 11-state (complete) DRA into a 8-state
+minimization function had an 8-state input, which means that
+=dstar2tgba= first reduced the 11-state (complete) DRA into an 8-state
 (complete) DBA before calling the SAT-based minimization (the fact
 that the input was reduced to a *DBA* is not very obvious from this
 trace), This first line shows the SAT-based minimization for a
@@ -12,7 +12,7 @@ Our first task is to read formulas and print them in another syntax.
 * Shell command

 Using =ltlfilt=, you can easily read an LTL formula in one syntax, and
-output it in another syntax. By default the parser will accept a
+output it in another syntax. By default, the parser will accept a
 formula in [[file:ioltl.org][any infix syntax]], but if the input is in the prefix syntax
 of LBT, you should use [[file:ioltl.org][=--lbt-input=]]. The output syntax is controlled
 using different options such as (=--spin=, =--lbt=, =--latex=, etc.).

@@ -364,7 +364,7 @@ atomically output in a way that Spin can parse.
 This Spin syntax is not accepted by default by the infix parser, but
 it has an option for that. This is called /lenient parsing/: when the
 parser finds a parenthetical block it does not understand, it simply
-assume that this block represents an atomic proposition.
+assumes that this block represents an atomic proposition.

 #+BEGIN_SRC sh
 ltlfilt --lenient -f '(a > 4) U (b < 5)'
@@ -141,7 +141,7 @@ ltlfilt -ps --relabel-bool=pnn --define -f '"Proc@Here" U ("var > 10" | "var < 4

 For instance =a U (a & b)= will not be relabeled into =(p0) U (p1)=
 because that would hide the fact that both =p0= and =p1= check for
-=a=. Instead we get this:
+=a=. Instead, we get this:

 #+BEGIN_SRC sh
 ltlfilt -ps --relabel-bool=pnn --define -f 'a U (a & b)'
@@ -10,7 +10,7 @@ This page explains how to build formulas and how to iterate over their
 syntax trees.

 We will first describe how to build a formula from scratch, by using
-the constructors associated to each operators, and show the basic
+the constructors associated to each operator, and show the basic
 accessor methods for formulas. We will do that for C++ first, and
 then Python. Once these basics are covered, we will show examples for
 traversing and transforming formulas (again in C++ then Python).

@@ -166,7 +166,7 @@ The Python equivalent is similar:
 for child in f:
     print(" *", child)
 # the type of the operator can be accessed with kind(), which returns
-# an op_XXX constant (corresponding the the spot::op enum of C++)
+# an op_XXX constant (corresponding to the spot::op enum of C++)
 print(f[1][0], "is F" if f[1][0].kind() == spot.op_F else "is not F")
 # "is" is keyword in Python, the so shortcut is called _is:
 print(f[1][1], "is G" if f[1][1]._is(spot.op_G) else "is not G")
@@ -191,8 +191,8 @@ formulas (for instance the [[file:tut02.org][relabeling function]]) actually rec
 traverse the input formula to construct the output formula.

 Using the operators described in the previous section is enough to
-write algorithms on formulas. However there are two special methods
-that makes it a lot easier: =traverse= and =map=.
+write algorithms on formulas. However, there are two special methods
+that make it a lot easier: =traverse= and =map=.

 =traverse= takes a function =fun=, and applies it to each subformulas
 of a given formula, including that starting formula itself. The

@@ -206,7 +206,7 @@ in the formula. We also print each subformula to show the recursion,
 and stop the recursion as soon as we encounter a subformula without
 sugar (the =is_sugar_free_ltl()= method is a constant-time operation
 that tells whether a formula contains a =F= or =G= operator) to save
-time time by not exploring further.
+time by not exploring further.

 #+NAME: gcount_cpp
 #+BEGIN_SRC C++
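The =traverse=-with-early-cut pattern these hunks describe can be mimicked without Spot. Below is a hypothetical, library-free sketch: formulas are plain nested tuples rather than =spot::formula= objects, and the callback returns =True= to prune a subtree, just like the documented =traverse=.

```python
# Formulas as nested tuples: ("G", sub), ("U", left, right), or a leaf string.
def traverse(f, fun):
    """Apply fun to f; unless fun returns True, recurse into subformulas."""
    if fun(f):
        return                    # fun asked to cut the recursion here
    if isinstance(f, tuple):
        for child in f[1:]:
            traverse(child, fun)

def count_g(f):
    """Count G operators in f, pruning at leaves (which cannot contain G)."""
    hits = [0]
    def fun(g):
        if isinstance(g, str):
            return True           # leaf: stop exploring this branch
        if g[0] == "G":
            hits[0] += 1
        return False
    traverse(f, fun)
    return hits[0]

print(count_g(("U", ("G", "a"), ("G", ("G", "b")))))   # 3
```

In Spot itself the pruning test would be something cheap like =is_sugar_free_ltl()=; the leaf check above merely plays that role in the toy model.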
@@ -375,11 +375,11 @@ without lambda:
 : exchanges: 6

 Now let's pretend that we want to define =xchg_fg= as a lambda, and
-=count= to by captured by reference. In order to call pass the lambda
-recursively to =map=, the lambda needs to know its address.
+that we want =count= to be captured by reference. In order to pass
+the lambda recursively to =map=, the lambda needs to know its address.
 Unfortunately, if the lambda is stored with type =auto=, it cannot
 capture itself. A solution is to use =std::function= but that has a
-large penalty cost. We can work around that by assuming that that
+large penalty cost. We can work around that by assuming that the
 address will be passed as an argument (=self=) to the lambda:

 #+BEGIN_SRC C++
@@ -124,7 +124,7 @@ All the translation pipeline (this includes simplifying the formula,
 translating the simplified formula into an automaton, and simplifying
 the resulting automaton) is handled by the =spot::translator= class.
 An instance of this class can configured by calling =set_type()= to
-chose the type of automaton to output, =set_level()= to set the level
+choose the type of automaton to output, =set_level()= to set the level
 of optimization (it's high by default), and =set_pref()= to set
 various preferences (like small or deterministic) or characteristic
 (complete, unambiguous, state-based acceptance) for the resulting
@@ -145,9 +145,9 @@ State: 1
 If you drop the =-D= option from =ltl2tgba=, or the =det= argument
 from =spot.translate()=, or the
 =set_pref(spot::postprocessor::Deterministic)= in C++, then a
-non-deterministic monitor can be output. By default Spot will build
-both a deterministic and a non-deterministic monitor, it will output
-the smallest one.
+non-deterministic monitor can be output. By default, Spot will try to
+build both a deterministic and a non-deterministic monitor, then it will
+keep the smallest one.

 * Details

@@ -198,7 +198,7 @@ ltl2tgba -D -M 'G(press -> red U green)' -d

 This monitor will report violations if both *red* and *green* are off
 when the button is pressed, and also if *red* goes off without *green*
-going on. However note that in the original formula, =red U green=
+going on. However, note that in the original formula, =red U green=
 implies that *green* will eventually become true, and the monitor
 cannot ensure that: a system where *red* is continuously on, and
 *green* is continuously off would not trigger any violation. The
@@ -280,7 +280,7 @@ State: 1

 * Further reading

-If your application requires monitors and you plan to build them with
+If your application requires monitors, and you plan to build them with
 Spot, it is very likely that you will want to convert the resulting
 automata to your own data structure. See [[file:tut21.org][how to print an automaton in
 a custom format]] to learn all you need to iterate over Spot's automata.
@@ -16,7 +16,7 @@ automata over infinite words.
 ltlfilt --from-ltlf -f "$f"
 #+end_src

-However there is a trick we can use in case we want to use Spot to
+However, there is a trick we can use in case we want to use Spot to
 build a finite automaton that recognize some LTLf (i.e. LTL with
 finite semantics) property. The plan is as follows:

@@ -233,8 +233,8 @@ you could replace =alive= by =!dead= by using ~ltlfilt

 When working with LTLf, there are two different semantics for the next
 operator:
-- The weak next: =X a= is true if =a= hold in the next step or if
-  there are no next step. In particular, =X(0)= is true iff there are
+- The weak next: =X a= is true if =a= hold in the next step, or if
+  there is no next step. In particular, =X(0)= is true iff there is
   no successor. (By the way, you cannot write =X0= because that is an
   atomic proposition: use =X(0)= or =X 0=.)
 - The strong next: =X[!] a= is true if =a= hold in the next step *and*
@@ -7,7 +7,8 @@
 #+PROPERTY: header-args:python :results output :exports both
 #+PROPERTY: header-args:C+++ :results verbatim :exports both

-The goal is to start from a never claim, as produced by Spin, e.g.:
+Our goal convert never claim produced by Spin into an automaton in [[file:hoa.org][the
+HOA format]]. We will use the following never claim as input:

 #+BEGIN_SRC sh
 spin -f '[]<>foo U bar' > tut20.never

@@ -42,11 +43,10 @@ accept_all:
 }
 #+end_example

-and convert this into an automaton in [[file:hoa.org][the HOA format]].
-
 Note that the automaton parser of Spot can read automata written
 either as never claims, in LBTT's format, in ltl2dstar's format or in
-the HOA format, and there is no need to specify which format you
+the HOA format, and there is no need to specify which format it should
 expect. Even if our example uses a never claim as input, the code we
 write will read any of those formats.
@@ -203,7 +203,7 @@ existing atomic propositions will reuse the existing variable.

 In the example for [[file:tut10.org][translating LTL into BA]], we did not specify any
 =bdd_dict=, because the =translator= object will create a new one by
-default. However it is possible to supply such a =bdd_dict= to the
+default. However, it is possible to supply such a =bdd_dict= to the
 translator as well. Similarly, in the Python bindings, there is a
 global =bdd_dict= that is implicitly used for all operations, but it
 can be specified if needed.
@@ -158,7 +158,7 @@ faked as follows:
 - additionally, we set =prop_state_acc(true)= to indicate that the
   automaton should output as if it were state-based.

-Some algorithm recognize the =prop_state_acc()= properties and trigger
+Some algorithms recognize the =prop_state_acc()= properties and trigger
 some special handling of the automaton, maybe to preserve its "fake
 state-based nature".

@@ -123,7 +123,7 @@ The "universality" of an edge can be tested using the
 =twa_graph::is_univ_dest()= method: it takes a destination state as
 input, as in =aut->is_univ_dest(t.dst)= or
 =aut->is_univ_dest(aut->get_init_state_number())=. For convenience
-this method can also be called on a edge, as in =aut->is_univ_dest(t)=.
+this method can also be called on an edge, as in =aut->is_univ_dest(t)=.

 The set of destination states of a universal edge can be iterated over
 via the =twa_graph::univ_dests()= method. This takes either a
@@ -26,7 +26,7 @@ $txt
 [[file:tut30in.svg]]

 Our goal is to generate an equivalent Büchi automaton, preserving
-determinism if possible. However nothing of what we will write is
+determinism if possible. However, nothing of what we will write is
 specific to Rabin acceptance: the same code will convert automata with
 any acceptance to Büchi acceptance.

@@ -82,9 +82,9 @@ $txt
 #+RESULTS:
 [[file:tut30out.svg]]

-In the general case transforming an automaton with a complex
+In the general case, transforming an automaton with a complex
 acceptance condition into a Büchi automaton can make the output
-bigger. However the post-processing routines may manage to simplify
+bigger. However, the post-processing routines may manage to simplify
 the result further.

@@ -71,7 +71,7 @@ autfilt --dot='.#' tut40.hoa
 Whether two states are in simulation can be decided as a game between
 two players. If the game is in state $(q,q')$, spoiler (player 0)
 first selects a transition from state $q$, and duplicator (player 1)
-then has to chose a compatible transition from state $q'$. Duplicator
+then has to choose a compatible transition from state $q'$. Duplicator
 of course wins if it always manages to select compatibles transitions,
 otherwise spoiler wins.

@@ -136,7 +136,7 @@ $txt
 Since player 1 is winning from state $(4,0)$, we know that state 4
 simulates state 0. Also since player 1 would also win from state
 $(5,1)$, we can tell that state 5 simulates state 1. We also learn
-that state 5 does not simulates states 2 and 3. We could build other
+that state 5 does not simulate states 2 and 3. We could build other
 games, or add more state to this game, to learn about other pairs of
 states.
@@ -225,13 +225,13 @@ To solve a safety game =g= that has been created by the above method,
 it is enough to just call =solve_safety_game(g)=. The function
 =solve_game(g)= used below is a more generic interface that looks at
 the acceptance condition of the game to dispatch to the more specific
-game solver. These functions returns the player winning in the
-initial state. However, as a side-effect they define additional
+game solver. These functions return the player winning in the
+initial state. However, as a side effect they define additional
 automaton properties that indicate the winner of each state, and the
 associated strategy.

-Therefore to list all simulation pairs we learned from a game starting
-in state $(i,j)$, we could proceed as follow:
+Therefore, to list all simulation pairs we learned from a game starting
+in state $(i,j)$, we could proceed as follows:

 #+NAME: computesim_tut40
 #+BEGIN_SRC python :exports code
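The safety-game solving discussed in this hunk can be illustrated with the classic attractor fixpoint, here in a library-free sketch (not Spot's =solve_safety_game=; the graph encoding is hypothetical). Player 0 wins a state if it can force the play into the =bad= set; everything outside the attractor is winning for the safety player.

```python
def attractor(states, edges, owner, bad):
    """Player-0 attractor of `bad`. edges maps state -> successor list,
    owner[s] in {0, 1}.  Player-0 dead ends lose (any() over an empty
    list is False); player-1 dead ends are attracted (all() is True)."""
    attr = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = edges[s]
            if owner[s] == 0:
                win = any(t in attr for t in succs)  # player 0 picks one
            else:
                win = all(t in attr for t in succs)  # player 1 is forced
            if win:
                attr.add(s)
                changed = True
    return attr

states = [0, 1, 2, 3]
edges = {0: [1], 1: [0, 2], 2: [3], 3: [3]}
owner = {0: 0, 1: 1, 2: 1, 3: 0}
print(sorted(attractor(states, edges, owner, {3})))  # [2, 3]
```

In the example, the safety player wins from states 0 and 1 by looping between them, never giving player 0 a move into state 2.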
@@ -12,7 +12,7 @@ there are two different interfaces that can be used:
 2. the *explicit* =twa_graph= interface.

 To demonstrate the difference between the two interfaces, we will
-write an small depth-first search that prints all states accessible
+write a small depth-first search that prints all states accessible
 from the initial state of an automaton.

 * The explicit interface
@@ -543,12 +543,12 @@ which returns a pointer to a =state=. Then, calling
 that allows iterating over all successors.

 Different subclasses of =twa= will instantiate different subclasses of
-=state= and =twa_succ_iterator= . In the case of =twa_graph=, the
+=state= and ~twa_succ_iterator~. In the case of =twa_graph=, the
 subclasses used are =twa_graph_succ_iterator= and =twa_graph_state=,
 but you can ignore that until you have to write your own =twa=
 subclass.

-The interface puts few requirement on memory management: we want to be
+The interface puts few requirements on memory management: we want to be
 able to write automata that can forget about their states (and
 recompute them), so there is no guarantee that reaching the same state
 twice will return the same pointer twice. Even calling

@@ -625,7 +625,7 @@ are $n$ successors, there will be $1$ call to =first()=, $n$ calls to
 =next()=, and $n+1$ calls to =done()=, so a total of $2n+2$ virtual
 method calls.

-However =first()= and =next()= also return a Boolean stating whether
+However, =first()= and =next()= also return a Boolean stating whether
 the loop could continue. This allows rewriting the above code as
 follows:

@@ -688,13 +688,13 @@ following equivalent code:
 : 0->1
 : 0->2

-This works in a similar way as =out(s)= in the explicit interface.
+This works similarly to =out(s)= in the explicit interface.
 Calling =aut->succ(s)= creates a fake container
 (=internal::succ_iterable=) with =begin()= and =end()= methods that
 return STL-like iterators (=internal::succ_iterator=). Incrementing
 the =internal::succ_iterator= will actually increment the
 =twa_succ_iterator= they hold. Upon completion of the loop, the
-temporary =internal::succ_iterable= is destroyed and its destructor
+temporary =internal::succ_iterable= is destroyed, and its destructor
 passes the iterator back to =aut->release_iter()= for recycling.

 ** Recursive DFS (v1)

@@ -823,7 +823,7 @@ They are performed in =state_unicity_table::is_new()= and in
 ** Iterative DFS

 For a non-recursive version, let us use a stack of
-=twa_succ_iterator=. However these iterators do not know their
+=twa_succ_iterator=. However, these iterators do not know their
 source, so we better store that in the stack as well if we want to
 print it.

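The stack-of-iterators idea this hunk refers to transfers directly to any graph representation. Here is a minimal, library-free sketch using a plain dict instead of a =twa= (so none of Spot's iterator recycling applies); each stack entry pairs a source state with its live successor iterator, exactly as the documentation suggests.

```python
def dfs_order(graph, init):
    """Iterative DFS over graph (dict: state -> successor list),
    keeping a stack of (source, iterator) pairs.  Returns the states
    in the order they are first discovered."""
    seen = {init}
    order = [init]
    stack = [(init, iter(graph[init]))]
    while stack:
        src, it = stack[-1]
        dst = next(it, None)
        if dst is None:
            stack.pop()            # iterator exhausted: backtrack
        elif dst not in seen:
            seen.add(dst)
            order.append(dst)
            stack.append((dst, iter(graph[dst])))
    return order

g = {0: [1, 2], 1: [0, 2], 2: []}
print(dfs_order(g, 0))   # [0, 1, 2]
```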

@@ -37,7 +37,7 @@ often holds as well.
 ** What needs to be done

 In Spot, Kripke structures are implemented as subclass of =twa=, but
-some operations have specialized versions that takes advantages of the
+some operations have specialized versions that take advantage of the
 state-labeled nature of Kripke structure. For instance the on-the-fly
 product of a Kripke structure with a =twa= is slightly more efficient
 than the on-the-fly product of two =twa=.

@@ -202,7 +202,7 @@ implement =compare()= using =hash()=.

 #+RESULTS: demo-state

 Note that a state does not know how to print itself, this
-a job for the automaton.
+is a job for the automaton.

 ** Implementing the =kripke_succ_iterator= subclass
@@ -575,7 +575,7 @@ It is perfectly possible to write a =kripke= (or even =twa=) subclass
 that returns pointers to preallocated states. In that case
 =state::destroy()= would have to be overridden with an empty body so
 that no deallocation occurs, and the automaton would have to get rid
-of the allocated states in its destructor. Also the =state::clone()=
+of the allocated states in its destructor. Also, the =state::clone()=
 methods is overridden by a function that returns the identity. An
 example of class following this convention is =twa_graph=, were states
 returned by the on-the-fly interface are just pointers into the actual
@@ -18,8 +18,8 @@ put for a toy example.

 This document shows how to create a Kripke structure that is stored as
 an explicit graph. The class for those is =spot::kripke_graph= and
-works in a similar way as the class =spot::twa_graph= used for
-automata. The main difference between those two classes is that
+works similarly to the class =spot::twa_graph= (used for
+automata). The main difference between those two classes is that
 Kripke structures labels the states instead of the transitions. Using
 =spot::kripke_graph= instead of =spot::twa_graph= saves a bit of
 memory.
@@ -188,8 +188,7 @@ int main()

 Note that this main function is similar to the main function we used
 for [[file:tut51.org::#check-prop][the on-the-fly version]] except for [[(ck)][the line that creates the Kripke
-structure]]. You can modify it to display the counterexample in a
-similar way.
+structure]]. You can modify it to display the counterexample similarly.

 * Python implementation
 ** Building the state space
@@ -19,7 +19,7 @@ There are other algorithms where BDDs are used from different tasks.
 For instance, our simulation-based reduction function computes a
 *signature* of each state as a BDD that is essentially the disjunction
 of all outgoing edges, represented by their guard, their acceptance
-sets, and their destination *classes*. Also the translation of LTL
+sets, and their destination *classes*. Also, the translation of LTL
 formulas to transition-based generalized Büchi automata is using an
 intermediate representation of states that is similar to the
 aforementioned signatures, excepts that classes are replaced by
@@ -125,7 +125,7 @@ is implicitly used in both cases. Similarly, when we call
 =spot.translate()= the same global =bdd_dict= is used by default.

 What really confuses people, is that the association between an atomic
 proposition (=a=, =b=, ...) and a BDD variable (=0=, =1=, ...) will
 only be held by the =bdd_dict= for the lifetime of the objects (here the
 automata) that registered this association to the =bdd_dict=.

@@ -232,9 +232,9 @@ interface:
                               const void* for_me);
 #+END_SRC

-The last function may be bit tricky to use, because we need to be sure
-that another object has registered some variables. You can rely on
-the fact that each =twa= automaton register its variables this way.
+The last function may be a bit tricky to use, because we need to be
+sure that another object has registered some variables. You can rely
+on the fact that each =twa= automaton register its variables this way.

 Now, in most cases, there is no need to worry about the =bdd_dict=.
 Automata will register and unregister variables as needed. Other
@@ -290,7 +290,7 @@ The above code has two definitions.
 2. The =accepting_set= function iterates over an automaton, and saves
    all transitions that belong to a given acceptance set number.

-For instance we can now translate an automaton, compute its acceptance
+For instance, we can now translate an automaton, compute its acceptance
 set 0, and print it as follows:

 #+begin_src python :noweb strip-export
@ -325,15 +325,15 @@ In this case, the temporary automaton constructed by
=spot.translate()= and passed to the =accepting_set()= function is
destroyed right after the =ts= object has been constructed. When the
automaton is destroyed, it removes all its associations from the
=bdd_dict=. This means that before the =print(ts)= the dictionary
that was used by the automaton, and that is still stored in the =ts=
objects is now empty: calling =bdd_format_formula()= raises an
exception.
=bdd_dict=. This means that before the =print(ts)=, the dictionary
that was used by the automaton and that is still stored in the =ts=
objects is now empty. Consequently, calling =bdd_format_formula()=
raises an exception.

This can be fixed in a couple of ways. The easy way is to store the
automaton inside the =trans_set= object, to ensure that it will live
at least as long as the =trans_set= object. But maybe the automaton
is too big and we really want to get rid of it? In this case
is too big, and we really want to get rid of it? In this case
=trans_set= should tell the =bdd_dict= that it want to retain the
associations. The easiest way in this case is to call the
=register_all_variables_of()= method, because we know that each

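The ownership rule discussed in this hunk can be sketched with a toy registry. Everything below is a plain-Python stand-in with invented names (a real =bdd_dict= tracks BDD variable numbers and C++ object addresses, not strings); it only illustrates why =register_all_variables_of()= keeps an association alive after its original owner is destroyed.

```python
# Toy registry: each variable remembers the set of owners that
# registered it; a variable disappears when its last owner goes away.

class ToyDict:
    def __init__(self):
        self.owners = {}  # var -> set of owners

    def register_variable(self, var, owner):
        self.owners.setdefault(var, set()).add(owner)

    def register_all_variables_of(self, other, owner):
        # `owner` adopts every association currently held by `other`.
        for owners in self.owners.values():
            if other in owners:
                owners.add(owner)

    def unregister_all_my_variables(self, owner):
        for var in list(self.owners):
            self.owners[var].discard(owner)
            if not self.owners[var]:  # last owner gone: drop the var
                del self.owners[var]

d = ToyDict()
d.register_variable("a", "automaton")
d.register_all_variables_of("automaton", "trans_set")  # keep "a" alive
d.unregister_all_my_variables("automaton")             # automaton destroyed
print("a" in d.owners)
```

Without the =register_all_variables_of= call, destroying the automaton would remove the last owner of ="a"= and the lookup would print =False=, which is the toy analogue of =bdd_format_formula()= raising an exception.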
@ -398,15 +398,15 @@ int bdd_dict::register_anonymous_variables(int n, const void* for_me);
A range of =n= variables will be allocated starting at the returned
index.

For instance, let's say the our =trans_set= should now store a
symbolic representation of a transition relation. For simplicity we
For instance, let's say that our =trans_set= should now store a
symbolic representation of a transition relation. For simplicity, we
assume we just want to store set of pairs =(src,dst)=: each pair will
be a conjunction $v_{src}\land v'_{dst}$ between two BDD variables
taken from two ranges ($v_i$ representing a source state $i$ and $v'_i$
representing a destination state $i$), and the entire set will be a
disjunction of all these pairs. If the automaton has $n$ states, we
want to allocate $2n$ BDD variables for this purpose. We call these
variables *anonymous* because their meaning is unknown the the
variables *anonymous* because their meaning is unknown to the
=bdd_dict=.

#+begin_src python

@ -35,7 +35,7 @@ experience of updating a couple of projects that are using Spot.

4. [[#formulas][The implementation of LTL formulas has been rewritten]].

   They are no longer pointers but plain objects that performs their
   They are no longer pointers but plain objects that perform their
   own reference counting, freeing the programmer from this tedious
   and error-prone task. They could be handled as if they were
   shared pointer, with the small difference that they are not using

@ -72,7 +72,7 @@ experience of updating a couple of projects that are using Spot.
   using preconditions: the acceptance condition does not appear in
   the type of the C++ object representing the automaton.

7. [[*Various renamings][Several class, functions, and methods, have been renamed]]. Some
7. [[*Various renamings][Several classes, functions, and methods, have been renamed]]. Some
   have been completely reimplemented, with different interfaces.
   In particular the =tgba_explicit_*= family of classes
   (=tgba_explicit_formula=, =tgba_explicit_number=,

@ -110,7 +110,7 @@ experience of updating a couple of projects that are using Spot.

If Spot 1.2.6 was installed in =/usr/local=, its headers are
in =/usr/local/include/spot=. One would to write include statements
in =/usr/local/include/spot=. One would have to write include statements
such as
#+BEGIN_SRC C++
#include <tgba/tgba.hh>

@ -370,7 +370,7 @@ removed, ~8200 lines added), and brings some nice benefits:
friendly, and several algorithms that spanned a few pages have been
reduced to a few lines. [[file:tut03.org][This page]] illustrates the new interface.

Also the =spot::ltl= namespace has been removed: everything is
Also, the =spot::ltl= namespace has been removed: everything is
directly in =spot= now.

In code where formulas are just parsed from input string, and then

@ -455,7 +455,7 @@ name:
#+BEGIN_SRC C++
if (!input->acc().is_generalized_buchi())
  throw std::runtime_error
    ("myalgorithm() can only works with generalized Büchi acceptance");
    ("myalgorithm() can only work with generalized Büchi acceptance");
#+END_SRC

- Some methods of the =tgba= class have been removed, include some

@ -518,7 +518,7 @@ name:
So not only do we save the calls to =new= and =delete=, but we also
save the time it takes to construct the objects (including setting
up the virtual table), and via a =recycle()= method that has to be
added to the iterator, we update only the attributes that needs to
added to the iterator, we update only the attributes that need to
be updated (for instance if the iterator contains a pointer back to
the automaton, this pointer requires no update when the iterator is
recycled).

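The recycling scheme described in this hunk amounts to a one-slot object pool. Here is a minimal sketch with invented names (Spot's real successor iterators are C++ objects and the spare slot lives in the automaton); it shows that fields independent of the query, like the back-pointer to the automaton, are set once at allocation and left untouched by =recycle()=.

```python
# One-slot iterator pool: allocate once, then recycle the spare
# iterator instead of constructing a new one for every succ() call.

class SuccIterator:
    def __init__(self, automaton, state):
        self.automaton = automaton  # never changes once allocated
        self.state = state
        self.pos = 0

    def recycle(self, state):
        # Reset only the state-dependent attributes; the pointer back
        # to the automaton needs no update.
        self.state = state
        self.pos = 0

class Automaton:
    def __init__(self):
        self.spare = None
        self.allocations = 0

    def succ_iter(self, state):
        if self.spare is not None:
            it, self.spare = self.spare, None
            it.recycle(state)
            return it
        self.allocations += 1
        return SuccIterator(self, state)

    def release_iter(self, it):
        self.spare = it  # keep it around for the next succ_iter()

aut = Automaton()
for s in range(100):
    it = aut.succ_iter(s)
    aut.release_iter(it)
print(aut.allocations)
```

A hundred successor queries end up performing a single allocation; every subsequent query only pays for the cheap =recycle()= call.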
@ -586,7 +586,7 @@ for (auto i: aut->succ(s))

- Each =twa= now has a BDD dictionary, so the =get_dict()= method is
  implemented once for all in =twa=, and should not be implemented
  anymore in sub-classes.
  anymore in subclasses.

- There should now be very few cases where it is necessary to call
  methods of the BDD dictionary attached to a =twa=. Registering