* src/ltlvisit/basicreduce.cc: Perform the following reductions:
(a R b) | Gb = a R b
(a M b) | Gb = a R b
(a U b) & Fb = a U b
(a W b) & Fb = a U b
* src/ltltest/reduccmp.test: Test them.
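A sketch of why these four rules are sound, assuming the usual LTL
semantics and the two standard identities
  \[
    a \mathbin{\mathrm{R}} b \equiv (a \mathbin{\mathrm{M}} b) \lor \mathrm{G}b,
    \qquad
    a \mathbin{\mathrm{U}} b \equiv (a \mathbin{\mathrm{W}} b) \land \mathrm{F}b.
  \]
Since $\mathrm{G}b \Rightarrow a \mathbin{\mathrm{R}} b$ and
$a \mathbin{\mathrm{U}} b \Rightarrow \mathrm{F}b$, the rules for
(a R b) | Gb and (a U b) & Fb are plain absorptions, while the rules for
(a M b) | Gb and (a W b) & Fb are these identities read right to left.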
* src/ltlvisit/basicreduce.cc: Perform the following reductions:
(a U b) & (c W b) = (a & c) U b
(a W b) & (c W b) = (a & c) W b
(a R b) | (c M b) = (a | c) R b
(a M b) | (c M b) = (a | c) M b
* src/ltltest/reduccmp.test: Test them.
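These four rules share one argument (a sketch, assuming the usual LTL
semantics): when the right operands are the same b, both conjuncts can be
evaluated at the first position where b holds, so
  \[
    (a \mathbin{\mathrm{U}} b) \land (c \mathbin{\mathrm{W}} b)
      \equiv (a \land c) \mathbin{\mathrm{U}} b,
    \qquad
    (a \mathbin{\mathrm{W}} b) \land (c \mathbin{\mathrm{W}} b)
      \equiv (a \land c) \mathbin{\mathrm{W}} b,
  \]
because a and c each hold at every position before that first b (or
forever, in the W case where b never occurs). The two disjunction rules
are the duals of these, via
$\lnot(a \mathbin{\mathrm{U}} b) \equiv \lnot a \mathbin{\mathrm{R}} \lnot b$
and
$\lnot(a \mathbin{\mathrm{W}} b) \equiv \lnot a \mathbin{\mathrm{M}} \lnot b$.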
* src/ltlvisit/basicreduce.cc: Perform the following reductions:
a R (b & F(a)) = a M b
a M (b & F(a)) = a M b
a R Fa = Fa
a M Fa = Fa
a R b & Fa = a M b
a R b & a M c = a M (b & c)
a M b & a M c = a M (b & c)
* src/ltltest/reduccmp.test: More tests.
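Most of these follow from the identities
$a \mathbin{\mathrm{M}} b \equiv (a \mathbin{\mathrm{R}} b) \land \mathrm{F}a$
and
$(a \mathbin{\mathrm{R}} b) \land (a \mathbin{\mathrm{R}} c) \equiv a \mathbin{\mathrm{R}} (b \land c)$.
For instance, a worked sketch of the a R b & a M c rule:
  \[
    (a \mathbin{\mathrm{R}} b) \land (a \mathbin{\mathrm{M}} c)
    \equiv (a \mathbin{\mathrm{R}} b) \land (a \mathbin{\mathrm{R}} c) \land \mathrm{F}a
    \equiv (a \mathbin{\mathrm{R}} (b \land c)) \land \mathrm{F}a
    \equiv a \mathbin{\mathrm{M}} (b \land c).
  \]
For the first two rules, the F(a) conjunct inside the right operand is
automatically satisfied up to the release point, since a holds there; in
the a R case, the branch with no release would require G(b & F(a)), which
again implies a M b.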
* src/ltlvisit/basicreduce.cc: Perform the following reductions:
a U (b | Ga) = a W b
a W (b | Ga) = a W b
a U b | Ga = a W b
a U b | a W c = a W (b | c)
a W b | a W c = a W (b | c)
a U Ga = Ga
a W Ga = Ga
* src/ltltest/reduccmp.test: More tests.
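This group mirrors the previous (a M / a R) group by negation; a sketch
using the standard dualities:
  \[
    \lnot(a \mathbin{\mathrm{U}} b) \equiv \lnot a \mathbin{\mathrm{R}} \lnot b,
    \quad
    \lnot(a \mathbin{\mathrm{W}} b) \equiv \lnot a \mathbin{\mathrm{M}} \lnot b,
    \quad
    \lnot \mathrm{G}a \equiv \mathrm{F}\lnot a.
  \]
For instance, negating
$a \mathbin{\mathrm{R}} (b \land \mathrm{F}a) \equiv a \mathbin{\mathrm{M}} b$
and renaming the negated propositions gives
$a \mathbin{\mathrm{U}} (b \lor \mathrm{G}a) \equiv a \mathbin{\mathrm{W}} b$.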
* src/ltlast/binop.cc, src/ltlast/binop.hh: Add support for
the new W and M operators.
* src/ltlparse/ltlparse.yy, src/ltlparse/ltlscan.ll: Parse them.
* src/ltltest/reduccmp.test: Add new tests for W and M.
* src/ltlvisit/basicreduce.cc, src/ltlvisit/contain.cc,
src/ltlvisit/lunabbrev.cc, src/ltlvisit/nenoform.cc,
src/ltlvisit/randomltl.cc, src/ltlvisit/randomltl.hh,
src/ltlvisit/reduce.cc, src/ltlvisit/simpfg.cc,
src/ltlvisit/simpfg.hh, src/ltlvisit/syntimpl.cc,
src/ltlvisit/tostring.cc, src/tgba/formula2bdd.cc,
src/tgbaalgos/eltl2tgba_lacim.cc, src/tgbaalgos/ltl2taa.cc,
src/tgbaalgos/ltl2tgba_fm.cc, src/tgbaalgos/ltl2tgba_lacim.cc:
Add support for W and M.
* src/tgbatest/ltl2neverclaim.test: Test never claim output
using LBTT; this is more thorough. Also, we can no longer use -N
in spotlbtt.test.
* src/tgbatest/ltl2tgba.cc: Define M and W for ELTL.
* src/tgbatest/ltl2neverclaim.test: Test W and M, and use
-DS instead of -N, because lbtt-translate does not want
to translate these operators for tools that masquerade as Spin.
* src/ltlparse/ltlparse.yy, src/tgbaparse/tgbaparse.yy,
src/evtgbaparse/evtgbaparse.yy, src/eltlparse/eltlparse.yy: Use
token types for %destructor and %printer. Remove the yylex hack,
since %name-prefix is now honored by Bison. Also remove the
useless <token> type. Suggested by Akim Demaille.
* src/ltlparse/fmterror.cc (format_parse_errors): Count columns
starting at 1. Older Bison versions started at 0, but newer ones
changed to match the GNU Coding Standards.
* src/ltltest/parseerr.test: Add a test case.
Tweak the order of acceptance conditions to help degeneralization.
* src/tgba/tgbatba.cc (tgba_sba_proxy::tgba_tba_proxy): Build the
list of acceptance conditions in reverse order. The order is
still arbitrary, but the bdd_satone() call seems to output the
most frequently used acceptance conditions first, and this helps the
degeneralization process.
* src/ltlparse/ltlparse.yy: Change the precedence of "->" and
"<->" so that "a & b -> c" is interpreted as "(a & b) -> c"
instead of "a & (b -> c)". The new interpretation is more
intuitive, and matches that of LBTT.
Keep acceptance conditions on transitions entering accepting SCCs
by default in scc_filter().
Doing so helps the degeneralization algorithm, because it will
have more opportunities to be in an accepting level when it reaches
the accepting SCCs.
* src/tgbaalgos/sccfilter.cc (filter_iter::filter_iter): Take a
remove_all_useless argument.
(filter_iter::process_link): Use the flag to decide whether to
filter acceptance conditions going to accepting SCCs.
(scc_filter): Take a remove_all_useless argument and pass it to
filter_iter.
* src/tgbaalgos/sccfilter.hh (filter_iter): Add the new argument
and document the function.
* src/tgbatest/ltl2tgba.cc (main): Use option -R3 for
remove_all_useless=false, and add -R3f for
remove_all_useless=true.
* src/tgbatest/ltl2tgba.test: Show one case where -R3f makes
the degeneralization worse than -R3.
* src/ltlvisit/basicreduce.cc (basic_reduce_visitor): Replace
the FG(a)|FG(b) = FG(a|b) rule by the more generic rule above.
Add the dual rule for G(a)&G(b), as we had none (this one won't
improve anything in the translation, but it is more symmetric
this way). Also simplify some pointer checks.
* src/tgba/tgbatba.cc (tgba_sba_proxy::tgba_sba_proxy): Set
cycle_start_ to start in the accepting layer of the degeneralized
automaton if the initial state has an accepting self-loop.
Otherwise, it starts at the level of the first acceptance
condition, as before.
(tgba_sba_proxy::get_init_state): Use cycle_start_.
* src/tgba/tgbatba.hh (tgba_tba_proxy::a_): Make it protected so
that we can use it in tgba_sba_proxy::tgba_sba_proxy.
(tgba_sba_proxy::cycle_start_, tgba_sba_proxy::get_init_state):
Declare.
* src/tgbatest/ltl2tgba.test: More tests.
* src/tgba/tgbatba.cc (tgba_tba_proxy_succ_iterator::sync_): Move
the optimization step added by the previous patch before the
bddtrue check, so that it also applies to accepting states
in SBAs.
During degeneralization, pretend an incoming transition carries an
acceptance condition when the destination state has that
acceptance condition on all outgoing transitions.
This was motivated by experiments from Rüdiger Ehlers, showing
that "ltl2ba -f 'a U (b U c)'" outperformed "ltl2tgba -f -N -R3 'a
U (b U c)'". With this change and the previous one, it is no
longer the case.
* src/tgba/tgbatba.cc (tgba_tba_proxy_succ_iterator::aut_): Store
a pointer to the source automaton and...
(tgba_tba_proxy_succ_iterator::sync_): ... use it in an extra
optimization step to gather the acceptance conditions common
to all outgoing transitions of the destination state, and pretend
they are on the current (incoming) transition.
(tgba_tba_proxy::succ_iter): Pass the
source automaton to the constructed iterator.
* src/tgbatest/spotlbtt.test: Test -f -N -R3 -r7.
* src/tgbatest/ltl2tgba.test: Add a test case for 'a U (b U c)'.
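The optimization above is easier to see with acceptance conditions
abstracted as plain sets; the sketch below uses hypothetical types and a
hypothetical helper name (the real sync_() works on Spot's BDD encoding
and succ_iterators):

  #include <algorithm>
  #include <iterator>
  #include <set>
  #include <vector>

  // Acceptance conditions abstracted as integers.
  typedef std::set<int> acc_set;

  struct transition
  {
    int dst;
    acc_set acc;
  };

  // Conditions present on *all* outgoing transitions of a state may be
  // assumed on any transition entering that state: whichever outgoing
  // transition the run takes next will carry them anyway.
  acc_set common_outgoing_acc(const std::vector<transition>& out)
  {
    if (out.empty())
      return acc_set();
    acc_set common = out.front().acc;
    for (const transition& t : out)
      {
        acc_set inter;
        std::set_intersection(common.begin(), common.end(),
                              t.acc.begin(), t.acc.end(),
                              std::inserter(inter, inter.begin()));
        common.swap(inter);
      }
    return common;
  }

  // During degeneralization, the incoming transition is then treated as
  // if it also carried common_outgoing_acc() of its destination's
  // outgoing transitions, which lets the level advance earlier.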
* src/tgbatest/ltl2tgba.cc (main): Call scc_filter() before the
degeneralization, because it might remove useless acceptance
conditions. I realized this while looking at experiments from
Rüdiger Ehlers.
* src/saba/sabacomplementtgba.hh (spot): Rewrite Büchi as B\"uchi
in the BibTeX entry used as a comment, because some versions of sed
will choke on non-ASCII characters and cause sanity/style.test to
fail.
This is actually the third time I fix random_graph(). On
2007-02-06 I changed the function not to generate dead states,
but in a way that made it non-deterministic. On 2010-01-20 I made
the function deterministic again, but it started to generate dead
states as a side effect. This time, I'm making sure that dead
states won't come back, with a test case that we should have had
from the beginning.
* src/tgbaalgos/randomgraph.cc (random_graph): Add an extra
indirection array, state_randomizer[], so that we can reorder
state indices after a random selection without actually changing
the value of the indices used by unreachable_states and
nodes_to_process.
* src/tgbatest/randtgba.test: New file.
* src/tgbatest/Makefile.am: Add randtgba.test.
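The indirection can be hard to picture from the text alone; here is a
minimal self-contained sketch of the pattern (names such as
pick_unconnected_state are illustrative, not the actual random_graph()
code):

  #include <cstddef>
  #include <cstdlib>
  #include <utility>
  #include <vector>

  // state_randomizer holds a permutation of the state indices.
  // Positions [0, n_connected) are already wired into the graph, the
  // rest are still candidates.  Picking a random candidate only swaps
  // entries of the permutation: the indices themselves never change, so
  // containers such as unreachable_states or nodes_to_process, which
  // store indices, remain valid.
  struct graph_builder
  {
    std::vector<std::size_t> state_randomizer;
    std::size_t n_connected;

    explicit graph_builder(std::size_t n)
      : n_connected(0)
    {
      for (std::size_t i = 0; i < n; ++i)
        state_randomizer.push_back(i);
    }

    // Return a state that is not yet connected, chosen at random,
    // and mark it as connected.  (Only call while candidates remain.)
    std::size_t pick_unconnected_state()
    {
      std::size_t left = state_randomizer.size() - n_connected;
      std::size_t pos = n_connected + std::rand() % left;
      std::swap(state_randomizer[pos], state_randomizer[n_connected]);
      return state_randomizer[n_connected++];
    }
  };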
* src/sabaalgos/sabareachiter.hh (process_link): Document argument SI.
* src/eltlparse/public.hh (format_parse_errors): Remove the
non-existing eltl_string argument from the description.
(parse_file): Fix name of parameters in documentation.
* wrap/python/cgi-bin/ltl2tgba.in: Add options to symbolically
prune unaccepting SCCs in LaCIM, and to explicitly prune
unaccepting SCCs in all algorithms.
* src/tgbaalgos/reductgba_sim.hh: Hide most of the file from
SWIG.
* wrap/python/spot.i: Include reductgba_sim.hh.
* src/tgbaalgos/ltl2tgba_fm.cc, src/tgbaalgos/ltl2tgba_fm.hh:
Remove the containment option.
* src/tgbafromfile.cc, src/tgbafromfile.hh: Remove the
containment_ member.
* src/tgbatest/ltl2tgba.cc (syntax): Remove the -c option for the
FM algorithm; use it exclusively for TAA.
* src/tgba/tgbacomplement.hh, src/tgba/tgbacomplement.cc: Rename
as...
* src/tgba/tgbakvcomplement.hh, src/tgba/tgbakvcomplement.cc:
... these. It makes more sense since we also have
tgba_safra_complement.
* src/tgba/Makefile.am, src/tgbatest/complement.cc, NEWS: Adjust.
Stop recognizing "*" as a synonym for "&", to leave room for an
implementation of rational operators in a future version.
* src/ltlparse/ltlscan.ll: Do not recognize "*".
* wrap/python/cgi-bin/ltl2tgba.in: Undocument it.
* NEWS: Mention this.
* src/tgbatest/kv.test, src/tgbatest/ltl2tgba.test,
src/tgbatest/reductgba.test: Replace "*" by "&".
* src/tgbatest/ltl2tgba.cc (syntax, main): Introduce -le to select
this algorithm, and -lo to add the default LTL operators. This
replaces the undocumented hack that added the LTL operators when
the formula was read from the command line, or when the automaton
was output for LBTT.
* src/tgbatest/eltl2tgba.test, src/tgbatest/spotlbtt.test: Update
call syntax.
* NEWS: Mention -le, -lo, and -taa.
* src/tgbatest/ltl2tgba.cc (syntax): Move -R3b with the other
LaCIM options.
(main): Speak of "symbolic SCC pruning" instead of "deleting
unaccepting SCC", and do that right after the translation, before
degeneralization. Also error out when -R3b is used on
non-symbolic automata.
* src/sanity/readme.test: For each AC_OUTPUT Makefile, check that
the directory is documented in README. Also skip non-distributed
directories in readme.test.
* README: Fit on 80 columns.
Kill a memory leak warning from swig/python.
* src/ltlparse/public.hh (parse_error_list): Declare
as an empty struct for Swig.
* wrap/python/tests/ltlparse.py: Fix copyright.
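For the parse_error_list change above, the usual shape of such a
work-around is sketched below (the exact types and guard in public.hh may
differ; parse_error is assumed here to be a location/message pair):

  #include <list>
  #include <string>
  #include <utility>

  // Assumed shape of the error type; see src/ltlparse/public.hh for
  // the real definition.
  typedef std::pair<int, std::string> parse_error;

  #ifndef SWIG
  // Full definition, for C++ compilers.
  typedef std::list<parse_error> parse_error_list;
  #else
  // SWIG only needs the name to exist; wrapping the raw std::list is
  // what triggered the memory leak warning on the Python side.
  struct parse_error_list {};
  #endif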
* src/ltlvisit/length.cc (visit(multop*)): New function. "a & b &
c" has length 5, not 4, even though it is stored as And(a,b,c).
This caused reduc.test to fail on some formulae.
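The counting rule, sketched on a generic n-ary node (illustrative code,
not the actual length visitor):

  #include <vector>

  // Minimal stand-in for a formula node: leaves are atomic propositions,
  // inner nodes are n-ary operators such as And(a, b, c).
  struct node
  {
    std::vector<node> children;
  };

  // "a & b & c" has length 5 (three atoms plus two "&"), even though it
  // is stored as a single And(a, b, c) node: n children need n - 1
  // operators.
  int length(const node& n)
  {
    if (n.children.empty())
      return 1; // an atomic proposition
    int len = static_cast<int>(n.children.size()) - 1;
    for (const node& c : n.children)
      len += length(c);
    return len;
  }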