#+TITLE: Parsing and Printing LTL Formulas
#+SETUPFILE: setup.org
#+HTML_LINK_UP: tut.html
Our first task is to read formulas and print them in another syntax.
* Shell command
Using =ltlfilt=, you can easily read an LTL formula in one syntax and
output it in another syntax.  By default the parser accepts a formula
in [[file:ioltl.org][any infix syntax]], but if the input uses the prefix syntax of LBT,
you should pass [[file:ioltl.org][=--lbt-input=]].  The output syntax is selected with
options such as =--spin=, =--lbt=, or =--latex=.
Full parentheses can also be requested with =-p=.
#+BEGIN_SRC sh :results verbatim :exports both
formula='[]<>p0 || <>[]p1'
ltlfilt -f "$formula" --lbt
ltlfilt -f "$formula" --spin -p
ltlfilt --lbt-input -f '& & G p0 p1 p2' --latex
#+END_SRC
#+RESULTS:
: | G F p0 F G p1
: ([](<>(p0))) || (<>([](p1)))
: p_{1} \land p_{2} \land \G p_{0}
The LBT parser has to be enabled explicitly because a few corner cases
have different meanings in the two syntaxes.  (For instance =t= and =f=
denote the true and false constants in LBT's syntax, but they are
considered as atomic propositions in all the other syntaxes.)
* Python bindings
Here are the same operations in Python:
#+BEGIN_SRC python :results output :exports both
import spot
f = spot.formula('[]<>p0 || <>[]p1')
print(f.to_str('lbt'))
print(f.to_str('spin', parenth=True))
print(spot.formula('& & G p0 p1 p2').to_str('latex'))
#+END_SRC
#+RESULTS:
: | G F p0 F G p1
: ([](<>(p0))) || (<>([](p1)))
: p_{1} \land p_{2} \land \G p_{0}
The =spot.formula= function wraps the calls to the two formula parsers
of Spot.  It first tries to parse the formula using the infix syntaxes,
and if that fails, it tries the prefix parser.  (So this might fail to
correctly interpret =t= or =f= if you are processing a list of LBT
formulas.)  With =spot.formula=, parse errors are reported as
exceptions.
* C++
** Simple wrapper for the two parsers
We start with the simple parser interface, similar to the one used
above in the Python bindings.  Here parse errors are reported as
exceptions.
#+BEGIN_SRC C++ :results verbatim :exports both
#include <iostream>
#include "ltlparse/public.hh"
#include "ltlvisit/print.hh"
int main()
{
  const spot::ltl::formula* f = spot::ltl::parse_formula("[]<>p0 || <>[]p1");
  print_lbt_ltl(std::cout, f) << '\n';
  print_spin_ltl(std::cout, f, true) << '\n';
  const spot::ltl::formula* g = spot::ltl::parse_formula("& & G p0 p1 p2");
  print_latex_psl(std::cout, g);
  f->destroy();
  g->destroy();
}
#+END_SRC
#+RESULTS:
: | G F p0 F G p1
: ([](<>(p0))) || (<>([](p1)))
: p_{1} \land p_{2} \land \G p_{0}
Notice that the different output routines encode in their names both
the syntax they output and the type of formula they expect.  Here we
are only using LTL formulas for demonstration, so all three functions
are fine with that (LTL is a subset of PSL as far as Spot is
concerned).  However, if we were to input a PSL formula, the above
code would clearly be a problem.  If you want an additional check, you
can easily test whether =f->is_ltl_formula()= returns true.
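For instance, here is a minimal sketch of such a guard, assuming the
generic =print_psl()= printer from =ltlvisit/print.hh= as a fallback,
and feeding the parser a PSL formula to exercise that branch:
#+BEGIN_SRC C++
#include <iostream>
#include "ltlparse/public.hh"
#include "ltlvisit/print.hh"
int main()
{
  // "{a;b}<>->c" uses a SERE, so it is PSL but not plain LTL.
  const spot::ltl::formula* f = spot::ltl::parse_formula("{a;b}<>->c");
  if (f->is_ltl_formula())
    print_spin_ltl(std::cout, f, false) << '\n';
  else
    // Fall back to a printer that accepts full PSL (assumed here
    // to be print_psl()).
    print_psl(std::cout, f) << '\n';
  f->destroy();
}
#+END_SRC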
Did you notice the calls to =f->destroy()= at the end?  The LTL
formula objects are implemented as a DAG with shared subformulas.
Each (sub)formula is therefore reference counted, and currently this
is done manually by calling =f->clone()= and =f->destroy()= (never
=delete= a formula, always call =f->destroy()=).
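Here is a minimal sketch of this discipline, assuming =clone()= hands
back a new reference to the same shared formula:
#+BEGIN_SRC C++
#include "ltlparse/public.hh"
int main()
{
  const spot::ltl::formula* f = spot::ltl::parse_formula("a U b");
  // clone() gives another reference to the same shared formula and
  // bumps its reference count.
  const spot::ltl::formula* g = f->clone();
  f->destroy();   // releases one reference; the formula is still alive
  g->destroy();   // releases the last reference; the formula is freed
}
#+END_SRC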
We do not recommend using this =parse_formula()= interface, because
some formulas (like =f= or =t=) have different meanings in the two
parsers that are tried.
Instead, depending on whether you want to parse formulas in infix
syntax or in prefix syntax, you should call the specific parser.
This also gives you control over how errors are printed.
** Calling the infix parser explicitly
#+BEGIN_SRC C++ :results verbatim :exports both
#include <string>
#include <iostream>
#include "ltlparse/public.hh"
#include "ltlvisit/print.hh"
int main()
{
  std::string input = "[]<>p0 || <>[]p1";
  spot::ltl::parse_error_list pel;
  const spot::ltl::formula* f = spot::ltl::parse_infix_psl(input, pel);
  if (spot::ltl::format_parse_errors(std::cerr, input, pel))
    {
      if (f)
        f->destroy();
      return 1;
    }
  print_lbt_ltl(std::cout, f) << '\n';
  print_spin_ltl(std::cout, f, true) << '\n';
  f->destroy();
}
#+END_SRC
#+RESULTS:
: | G F p0 F G p1
: ([](<>(p0))) || (<>([](p1)))
So =parse_infix_psl()= processes =input= and stores any diagnostics in
=pel=, a list of pairs associating each error with a location.  You
can iterate over that list and print the errors yourself however you
wish, or you can call =format_parse_errors()= to do it for you.  Note
that, as its name implies, this parser can read more than LTL formulas
(the fragment of PSL we support is basically LTL extended with regular
expressions).
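As a minimal sketch of the do-it-yourself route, assuming each element
of =pel= pairs the error location (printable with =<<=) with its
message:
#+BEGIN_SRC C++
#include <string>
#include <iostream>
#include "ltlparse/public.hh"
int main()
{
  std::string input = "(a U b";      // missing closing parenthesis
  spot::ltl::parse_error_list pel;
  const spot::ltl::formula* f = spot::ltl::parse_infix_psl(input, pel);
  // Report each diagnostic ourselves instead of calling
  // format_parse_errors().
  for (const auto& err: pel)
    std::cerr << input << '\n'
              << "error at " << err.first << ": " << err.second << '\n';
  if (f)
    f->destroy();
}
#+END_SRC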
If =pel= is empty, =format_parse_errors()= does nothing and returns
false.
If =pel= is non-empty, =format_parse_errors()= displays the error
messages and returns true.  In the above code, we have decided to
abort the execution in this case.
However, the parser usually tries to do some error recovery.  For
instance, if you input =(a U b))=, the parser will complain about the
extra parenthesis (=pel= is not empty), but it will still return an
=f= that is equivalent to =a U b=.  So you could decide to continue
with the "fixed" formula if you wish.  Here is an example:
#+BEGIN_SRC C++ :results verbatim :exports both
#include <string>
#include <iostream>
#include "ltlparse/public.hh"
#include "ltlvisit/print.hh"
int main()
{
  std::string input = "(a U b))";
  spot::ltl::parse_error_list pel;
  const spot::ltl::formula* f = spot::ltl::parse_infix_psl(input, pel);
  // Use std::cout instead of std::cerr because we can only
  // show the output of std::cout in this documentation.
  (void) spot::ltl::format_parse_errors(std::cout, input, pel);
  if (f == nullptr)
    return 1;
  print_lbt_ltl(std::cout, f) << '\n';
  print_spin_ltl(std::cout, f, true) << '\n';
  f->destroy();
}
#+END_SRC
#+RESULTS:
#+begin_example
>>> (a U b))
           ^
syntax error, unexpected closing parenthesis
>>> (a U b))
           ^
ignoring trailing garbage
U "a" "b"
(a) U (b)
#+end_example
The formula =f= is only returned as null when the parser really cannot
recover anything.
** Calling the prefix parser explicitly
The only difference here is the call to =parse_prefix_ltl()= instead of
=parse_infix_psl()=.
#+BEGIN_SRC C++ :results verbatim :exports both
#include <string>
#include <iostream>
#include "ltlparse/public.hh"
#include "ltlvisit/print.hh"
int main()
{
  std::string input = "& & G p0 p1 p2";
  spot::ltl::parse_error_list pel;
  const spot::ltl::formula* f = spot::ltl::parse_prefix_ltl(input, pel);
  if (spot::ltl::format_parse_errors(std::cerr, input, pel))
    {
      if (f)
        f->destroy();
      return 1;
    }
  print_latex_psl(std::cout, f) << '\n';
  f->destroy();
}
#+END_SRC
#+RESULTS:
: p_{1} \land p_{2} \land \G p_{0}