* src/tgbaalgos/powerset.hh (power_map): New structure, allowing
the caller to retrieve the set of original states corresponding to
the set in the deterministic automaton.
(power_set): Adjust prototype to take a power_map instead of the
acc_list.
* src/tgbaalgos/powerset.cc (power_set): Strip all code using
acc_list, and update power_set.
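As a rough illustration of what power_map provides, here is a
minimal, self-contained sketch (hypothetical names, not Spot's
actual API) mapping each state of the determinized automaton back
to the set of original states it aggregates:

  #include <iostream>
  #include <map>
  #include <set>

  // Hypothetical stand-in: both kinds of states are plain integers
  // here, whereas the real power_map deals with actual automaton
  // states.
  struct power_map_sketch
  {
    typedef std::set<int> state_set;
    std::map<int, state_set> det_to_orig;

    const state_set& states_of(int det_state) const
    {
      return det_to_orig.find(det_state)->second;
    }
  };

  int main()
  {
    power_map_sketch pm;
    pm.det_to_orig[0].insert(1);
    pm.det_to_orig[0].insert(3);   // deterministic state 0 = {1, 3}

    const power_map_sketch::state_set& s = pm.states_of(0);
    for (power_map_sketch::state_set::const_iterator i = s.begin();
         i != s.end(); ++i)
      std::cout << *i << '\n';     // prints 1 then 3
  }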
* src/tgbaalgos/minimize.cc (minimize): Rewrite, using an
algorithm similar to the one in the Dax paper to check whether a
state of the minimized automaton should be accepting.
* src/tgbaalgos/minimize.hh,
src/tgbaalgos/minimize.cc (minimize_obligation): New function,
moving the correctness check from ltl2tgba to the library.
* src/tgbatest/ltl2tgba.cc (main): Fix constness of automata,
and call minimize_obligation() for -R3b.
* src/tgbatest/ltl2tgba.cc (-M): New option for building
deterministic monitors.
* src/tgbaalgos/minimize.cc (minimize): Take a monitor
argument and adjust the code.
* src/tgbaalgos/minimize.hh (minimize): Document it.
* src/tgbaalgos/minimize.cc (init_sets, minimize): Fix memory
leaks and a use of the wrong automaton.
* src/tgbatest/wdba.test: Try using -Rm with -R3 or -R3b, and with
valgrind. This caught all the bugs fixed above.
* src/tgbaalgos/minimize.cc (minimize): Don't add acceptance
conditions if the final set is empty.
* src/tgbaalgos/powerset.cc (tgba_powerset): Add the initial state
to acc_list if it is accepting. Also do not compute an SCC map if
we don't have to build acc_list.
* src/tgbatest/ltl2tgba.cc (main): Delete the minimized automaton.
* src/tgbaalgos/minimize.cc (minimize): Remove the call to
unregister_variable() at the end. It was both
wrong (unregistering only the first variable) and useless ("delete
del_a" will unregister all these variables). Use a map and a set
to keep track of free BDD variable and reuse them, otherwise the
algorithm would sometimes use more variables than allocated.
New files. Algorithm to minimize an automaton: the powerset
construction is first used to determinize the input automaton; the
result is then minimized using the standard algorithm, with BDDs
used to check whether states built from the initial automaton are
equivalent.
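To make the minimization step concrete, here is a self-contained
sketch of the classical partition-refinement idea on a tiny
deterministic automaton.  It uses std::map signatures where the
library uses BDDs, and none of the code below comes from Spot:

  #include <cstddef>
  #include <iostream>
  #include <map>
  #include <vector>

  int main()
  {
    // A small complete DFA over the two-letter alphabet {0, 1}:
    // next[s][l] is the successor of state s on letter l.
    const std::size_t n = 4;
    int next[n][2] = { {1, 2}, {3, 2}, {3, 1}, {3, 3} };
    bool accepting[n] = { false, false, false, true };

    // Initial partition: accepting vs. non-accepting states.
    std::vector<int> cls(n);
    for (std::size_t s = 0; s < n; ++s)
      cls[s] = accepting[s] ? 1 : 0;

    // Refine until stable: two states remain equivalent only if
    // they agree on their class and on the classes of their
    // successors.
    for (;;)
      {
        std::map<std::vector<int>, int> sig_to_class;
        std::vector<int> newcls(n);
        for (std::size_t s = 0; s < n; ++s)
          {
            std::vector<int> sig;
            sig.push_back(cls[s]);
            for (int l = 0; l < 2; ++l)
              sig.push_back(cls[next[s][l]]);
            std::map<std::vector<int>, int>::iterator i =
              sig_to_class.find(sig);
            if (i == sig_to_class.end())
              i = sig_to_class.insert
                (std::make_pair(sig, int(sig_to_class.size()))).first;
            newcls[s] = i->second;
          }
        if (newcls == cls)
          break;
        cls = newcls;
      }

    // Each class of equivalent states becomes one minimized state.
    for (std::size_t s = 0; s < n; ++s)
      std::cout << "state " << s << " -> class " << cls[s] << '\n';
  }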
* src/tgba/tgbaexplicit.cc, src/tgba/tgbaexplicit.hh:
Add class tgba_explicit_number, a tgba_explicit where states are
labelled by integers.
* src/tgbaalgos/powerset.hh, src/tgbaalgos/powerset.cc:
When building the deterministic automaton, keep track of which states
were created from an accepting state in the initial automaton.
The states are added to the new optional acc_list parameter (if it
is not 0).
Use tgba_explicit_number instead of tgba_explicit_string to build
the result.
* src/tgbaalgos/scc.cc (spot): Small fix.
Print everything to std::cout.
* src/tgbaalgos/reducerun.hh (tgba_run): Predeclare as a struct
since this is what it is.
* src/tgbatest/randtgba.cc (main): Avoid using "i" with two
different types in the same loop.
* src/ltlast/binop.cc, src/ltlast/binop.hh: Add support for
the new W and M operators.
* src/ltlparse/ltlparse.yy, src/ltlparse/ltlscan.ll: Parse them.
* src/ltltest/reduccmp.test: Add new tests for W and M.
* src/ltlvisit/basicreduce.cc, src/ltlvisit/contain.cc,
src/ltlvisit/lunabbrev.cc, src/ltlvisit/nenoform.cc,
src/ltlvisit/randomltl.cc, src/ltlvisit/randomltl.hh,
src/ltlvisit/reduce.cc, src/ltlvisit/simpfg.cc,
src/ltlvisit/simpfg.hh, src/ltlvisit/syntimpl.cc,
src/ltlvisit/tostring.cc, src/tgba/formula2bdd.cc,
src/tgbaalgos/eltl2tgba_lacim.cc, src/tgbaalgos/ltl2taa.cc,
src/tgbaalgos/ltl2tgba_fm.cc, src/tgbaalgos/ltl2tgba_lacim.cc:
Add support for W and M.
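For reference, the new operators reduce to the existing ones
through the usual identities f W g = (f U g) | G f and
f M g = (f R g) & F f.  A trivial, purely illustrative sketch
(string-based, not the ltl::formula classes):

  #include <iostream>
  #include <string>

  // Expand W and M using the classical identities above.
  std::string rewrite_W(const std::string& f, const std::string& g)
  {
    return "((" + f + ") U (" + g + ")) | G(" + f + ")";
  }

  std::string rewrite_M(const std::string& f, const std::string& g)
  {
    return "((" + f + ") R (" + g + ")) & F(" + f + ")";
  }

  int main()
  {
    std::cout << rewrite_W("a", "b") << '\n'
              << rewrite_M("a", "b") << '\n';
  }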
* src/tgbatest/ltl2neverclaim.test: Test never claim output
using LBTT; this is more thorough. Also, we can no longer use -N
in spotlbtt.test.
* src/tgbatest/ltl2tgba.cc: Define M and W for ELTL.
* src/tgbatest/ltl2neverclaim.test: Test W and M, and use
-DS instead of -N, because lbtt-translate does not want
to translate these operators for tools that masquerade as Spin.
Keep acceptance conditions on transitions entering accepting SCCs
by default in scc_filter().
Doing so helps the degeneralization algorithm, because it will
have more opportunities to be in an accepting level when it reaches
the accepting SCCs.
* src/tgbaalgos/sccfilter.cc (filter_iter::filter_iter): Take a
remove_all_useless argument.
(filter_iter::process_link): Use the flag to decide whether to
filter acceptance conditions going to accepting SCCs.
(scc_filter): Take a remove_all_useless argument and pass it to
filter_iter.
* src/tgbaalgos/sccfilter.hh (filter_iter): Add the new argument
and document the function.
* src/tgbatest/ltl2tgba.cc (main): Have -R3 use
remove_all_useless=false, and add -R3f for
remove_all_useless=true.
* src/tgbatest/ltl2tgba.test: Show one case where -R3f makes
the degeneralization worse than -R3.
This is actually the third time I fix random_graph(). On
2007-02-06 I changed the function not to generate dead states,
but in a way that made it non-deterministic. On 2010-01-20 I made
the function deterministic again, but it started to generate dead
states as a side effect. This time, I'm making sure that dead
states won't come back, with a test case that we should have had
from the beginning.
* src/tgbaalgos/randomgraph.cc (random_graph): Add an extra
indirection array, state_randomizer[], so that we can reorder
state indices after a random selection without actually changing
the values of the indices used by unreachable_states and
nodes_to_process.
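A sketch of the indirection trick with invented variable names:
successors are drawn from a reshuffled state_randomizer array, so
the stable indices 0..n-1 used by the rest of the generator never
move:

  #include <algorithm>
  #include <cstddef>
  #include <cstdlib>
  #include <iostream>
  #include <utility>
  #include <vector>

  int main()
  {
    const std::size_t n = 8;          // number of states
    const std::size_t out_degree = 3; // successors drawn per state

    // Only this indirection array is ever shuffled; structures such
    // as the list of still-unreachable states keep referring to the
    // stable indices 0..n-1.
    std::vector<std::size_t> state_randomizer(n);
    for (std::size_t i = 0; i < n; ++i)
      state_randomizer[i] = i;

    std::srand(42);
    for (std::size_t src = 0; src < n; ++src)
      {
        // Fisher-Yates shuffle of the indirection array only.
        for (std::size_t i = n - 1; i > 0; --i)
          std::swap(state_randomizer[i],
                    state_randomizer[std::rand() % (i + 1)]);

        std::cout << src << " ->";
        for (std::size_t i = 0; i < out_degree; ++i)
          std::cout << ' ' << state_randomizer[i];
        std::cout << '\n';
      }
  }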
* src/tgbatest/randtgba.test: New file.
* src/tgbatest/Makefile.am: Add randtgba.test.
* wrap/python/cgi-bin/ltl2tgba.in: Add options to symbolically
prune unaccepting SCCs in LaCIM, and to explicitly prune
unaccepting SCCs in all algorithms.
* src/tgbaalgos/reductgba_sim.hh: Conceal most of the file from
SWIG.
* wrap/python/spot.i: Include reductgba_sim.hh.
* src/tgbaalgos/ltl2tgba_fm.cc, src/tgbaalgos/ltl2tgba_fm.hh:
Remove the containment option.
* src/tgbafromfile.cc, src/tgbafromfile.hh: Remove the
containment_ member.
* src/tgbatest/ltl2tgba.cc (syntax): Remove the -c option for the
FM algorithm; use it exclusively for TAA.
* src/tgbaalgos/gtec/ce.cc (couvreur99_check_result::acss_states):
Delete the iterator after using it.
* src/tgbatest/emptchkr.test: Run 'randtgba -z' with valgrind too.
* src/tgbaalgos/gv04.cc (push): Fix the tracking of the accepting
link. This bug was discovered on a randomly generated graph with
a complex accepting cycle.
* src/tgbatest/emptchk.test: Add the troublesome graph as
test case.
* src/ltlast/automatop.cc, src/ltlast/automatop.hh: Rename nfa as
get_nfa to avoid a name clash with the `nfa' class.
* src/ltlvisit/clone.cc, src/ltlvisit/nenoform.cc,
src/ltlvisit/tostring.cc, src/tgbaalgos/eltl2tgba_lacim.cc: Use
get_nfa instead of nfa.
* src/tgba/tgbasafracomplement.cc: Don't use a
const_reverse_iterator.
* src/tgbaalgos/randomgraph.cc (random_graph): Revert the part of
the patch from 2007-02-06 which silently replaced the use of state
indices by state pointers. Storing state pointers in this map
causes some non-determinism because of the memory layout, making
it almost impossible to reproduce bugs found by tests based on
randtgba.
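A generic illustration of the problem (nothing Spot-specific): a
std::map keyed by pointers iterates in address order, which
depends on the memory layout of the current run, whereas a map
keyed by indices always iterates identically:

  #include <iostream>
  #include <map>

  struct state { int index; };

  int main()
  {
    state* a = new state();
    state* b = new state();
    state* c = new state();
    a->index = 0; b->index = 1; c->index = 2;

    // Keyed by address: iteration order depends on where the
    // allocator placed a, b and c, so any random choice driven by
    // this order is not reproducible from run to run.
    std::map<state*, char> by_ptr;
    by_ptr[b] = 'b'; by_ptr[a] = 'a'; by_ptr[c] = 'c';
    for (std::map<state*, char>::iterator i = by_ptr.begin();
         i != by_ptr.end(); ++i)
      std::cout << i->second;
    std::cout << '\n';

    // Keyed by index: iteration order is always 0, 1, 2.
    std::map<int, char> by_index;
    by_index[1] = 'b'; by_index[0] = 'a'; by_index[2] = 'c';
    for (std::map<int, char>::iterator i = by_index.begin();
         i != by_index.end(); ++i)
      std::cout << i->second;
    std::cout << '\n';

    delete a; delete b; delete c;
  }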
Use taa_tgba_formula in ltl2taa.cc to speed up the translation a
little.
* src/tgbaalgos/ltl2taa.cc: Adjust. Also fix a bug with
acceptance conditions in all_n_tuples.
* src/tgba/taatgba.cc, src/tgba/taatgba.hh: Adjust.
Allow taa_tgba instances to be labelled by objects other than
strings, in the same way Alexandre did for tgba_explicit.
* src/tgba/taatgba.cc, src/tgba/taatgba.hh: Split taa_tgba in two
levels: taa_tgba with no label and taa_tgba_labelled templated by
the type of the label. Define taa_tgba_string (with the interface
of the former taa_tgba class) and taa_tgba_formula for future use
in ltl2taa.cc.
* src/tgbaalgos/ltl2taa.cc, src/tgbatest/taatgba.cc: Adjust to use
taa_tgba_string.
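A sketch of the two-level layout with made-up class names (the
real interface lives in taatgba.hh): an unlabelled base class plus
a class template parameterized by the label type, of which the
string- and number-labelled variants are simple instantiations, in
the spirit of what was done for tgba_explicit:

  #include <iostream>
  #include <map>
  #include <string>
  #include <utility>

  // Untyped base: everything that does not depend on how states
  // are named (numbering, transitions, acceptance, ...).
  class automaton_base
  {
  public:
    automaton_base() : next_(0) {}
    virtual ~automaton_base() {}
  protected:
    int new_state() { return next_++; }
  private:
    int next_;
  };

  // Labelled layer: only the label->state dictionary is templated.
  template <class Label>
  class automaton_labelled : public automaton_base
  {
  public:
    int state_of(const Label& l)
    {
      typename std::map<Label, int>::iterator i = states_.find(l);
      if (i == states_.end())
        i = states_.insert(std::make_pair(l, new_state())).first;
      return i->second;
    }
    bool has_state(const Label& l) const
    {
      return states_.find(l) != states_.end();
    }
  private:
    std::map<Label, int> states_;
  };

  // A formula-labelled variant would instantiate the template with
  // the LTL formula type instead.
  typedef automaton_labelled<std::string> automaton_string;
  typedef automaton_labelled<int> automaton_number;

  int main()
  {
    automaton_string a;
    std::cout << a.state_of("init") << ' '
              << a.state_of("accept") << ' '
              << a.state_of("init") << '\n';   // prints "0 1 0"
  }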
If the input is a tgba_explicit_formula we can output a
tgba_explicit_formula too, and we want to do that because it is
more space efficient.
* src/tgba/tgbaexplicit.hh (get_label): New method.
* src/tgbaalgos/sccfilter.cc (create_transition): New function,
to handle tgba_explicit_formula and tgba_explicit_string output
differently.
(filter_iter): Template it on the output tgba type, and adjust
to call create_transition.
(scc_filter): Use filter_iter<tgba_explicit_formula> or
filter_iter<tgba_explicit_string> depending on the input tgba
type.
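A sketch of the dispatch with invented types: the traversal is
written once as a template over the output automaton, and an
overloaded helper (playing the role of create_transition, shown
here for state creation to keep things short) builds the label in
whatever form that output expects:

  #include <cstddef>
  #include <iostream>
  #include <sstream>
  #include <string>
  #include <vector>

  // Invented output flavours: one keeps states labelled by
  // strings, the other by integers (standing in for formulae).
  struct string_aut  { std::vector<std::string> states; };
  struct formula_aut { std::vector<int> states; };

  void create_state(string_aut& a, int label)
  {
    std::ostringstream name;
    name << "state " << label;
    a.states.push_back(name.str());
  }

  void create_state(formula_aut& a, int label)
  {
    a.states.push_back(label);   // keep the label as-is: cheaper
  }

  // The copying pass is written once, templated on the output
  // type; overload resolution picks the right helper.
  template <class Output>
  void copy_states(const std::vector<int>& input, Output& out)
  {
    for (std::size_t i = 0; i < input.size(); ++i)
      create_state(out, input[i]);
  }

  int main()
  {
    std::vector<int> input;
    input.push_back(3);
    input.push_back(7);

    string_aut s;
    formula_aut f;
    copy_states(input, s);
    copy_states(input, f);
    std::cout << s.states[0] << " / " << f.states[0] << '\n';
  }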
A useless acceptance condition is one that is always implied by
another.
* src/misc/bddop.hh, src/misc/bddop.cc
(compute_neg_acceptance_conditions): New function.
* src/tgba/tgbaexplicit.hh, src/tgba/tgbaexplicit.cc
(set_acceptance_conditions): New function.
* src/tgbaalgos/scc.cc (build_map, build_scc_stats, dump_scc_dot):
Keep track of useful acceptance conditions.
(useful_acc_of): New function.
* src/tgbaalgos/scc.hh (scc_stats, scc_map::scc::useful_scc): New
attributes.
* src/tgbaalgos/sccfilter.cc (filter_iter): Adjust to filter
useless acceptance conditions.
(scc_filter): Compute useful acceptance conditions and pass them
to filter_iter.
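To make the "always implied" criterion concrete, a generic sketch
with bitsets standing in for the BDD encoding: condition a is
useless if some other condition b occurs only on transitions that
also carry a, because seeing b infinitely often then guarantees
seeing a infinitely often:

  #include <bitset>
  #include <cstddef>
  #include <iostream>
  #include <string>
  #include <vector>

  int main()
  {
    // Acceptance conditions 0..3; each transition carries a subset
    // (rightmost bit is condition 0).
    typedef std::bitset<4> acc_set;
    std::vector<acc_set> transitions;
    transitions.push_back(acc_set(std::string("0011")));
    transitions.push_back(acc_set(std::string("0111")));
    transitions.push_back(acc_set(std::string("0001")));

    for (std::size_t a = 0; a < 4; ++a)
      for (std::size_t b = 0; b < 4; ++b)
        {
          if (a == b)
            continue;
          // a is useless if b is used somewhere and every
          // transition carrying b also carries a.
          bool b_used = false;
          bool implied = true;
          for (std::size_t t = 0; t < transitions.size(); ++t)
            if (transitions[t][b])
              {
                b_used = true;
                if (!transitions[t][a])
                  implied = false;
              }
          if (b_used && implied)
            {
              std::cout << "condition " << a
                        << " is useless (implied by " << b << ")\n";
              break;
            }
        }
  }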
prune_scc() leaked memory and failed to remove chains of useless SCCs.
* src/tgbaalgos/reductgba_sim.cc (reduc_tgba_sim): Call
scc_filter() instead of prune_scc(), and do it before running
any simulation-based reduction.
* src/tgbaalgos/reductgba_sim.hh (reduc_tgba_sim): Return a const
tgba*.
* src/tgbatest/ltl2tgba.cc: Call scc_filter() instead of
prune_scc().
* src/tgbatest/scc.test: Add two more tests that failed with
prune_scc().
Also compute useless SCCs.
* src/tgbaalgos/scc.cc (scc_map::scc::trivial): New field.
(scc_stats::useless_scc_map): New field.
* src/tgbaalgos/scc.cc (scc_map::build_map): Mark SCCs that are
not trivial.
(scc_map::accepting): Always return false for trivial SCCs.
(build_scc_stats): Fill in useless_scc_map.
* src/tgbaalgos/reachiter.hh (tgba_reachable_iterator::want_state):
New method.
* src/tgbaalgos/reachiter.cc (tgba_reachable_iterator::want_state):
Implement it.
(tgba_reachable_iterator::run): Call want_state before processing
a state.
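A sketch of the hook with invented names: a breadth-first walker
whose want_state() virtual accepts everything by default, so a
subclass can veto a state before it is processed or expanded:

  #include <cstddef>
  #include <iostream>
  #include <queue>
  #include <set>
  #include <vector>

  class reachable_iterator_sketch
  {
  public:
    reachable_iterator_sketch(const std::vector<std::vector<int> >& succ)
      : succ_(succ)
    {
    }
    virtual ~reachable_iterator_sketch() {}

    // Rejected states are neither processed nor expanded.
    virtual bool want_state(int) const { return true; }
    virtual void process_state(int s) = 0;

    void run(int init)
    {
      std::set<int> seen;
      std::queue<int> todo;
      todo.push(init);
      seen.insert(init);
      while (!todo.empty())
        {
          int s = todo.front();
          todo.pop();
          if (!want_state(s))
            continue;
          process_state(s);
          for (std::size_t i = 0; i < succ_[s].size(); ++i)
            if (seen.insert(succ_[s][i]).second)
              todo.push(succ_[s][i]);
        }
    }

  private:
    std::vector<std::vector<int> > succ_;
  };

  struct printer : reachable_iterator_sketch
  {
    printer(const std::vector<std::vector<int> >& succ)
      : reachable_iterator_sketch(succ)
    {
    }
    bool want_state(int s) const { return s != 2; } // prune state 2
    void process_state(int s) { std::cout << s << '\n'; }
  };

  int main()
  {
    std::vector<std::vector<int> > succ(4);
    succ[0].push_back(1);
    succ[0].push_back(2);
    succ[1].push_back(3);
    succ[2].push_back(3);

    printer p(succ);
    p.run(0);   // visits 0, 1, 3 and skips 2
  }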
Make state comments in never claims optional: computing them takes
much time when the formula is large, and it is useless when the
purpose is model-checking with Spin.
* src/tgbaalgos/neverclaim.hh (never_claim_reachable): Add the
comments option.
* src/tgbaalgos/neverclaim.cc (never_claim_bfs,
never_claim_reachable): Honor the comment option.
* src/tgbatest/ltl2tgba.cc (-N): Do not comment states.
(-NN): New option to output a commented never claim.
* src/tgbaalgos/ltl2taa.cc: Do NOT use the same bdd_dict for both
the translation and the language containment checker.
* src/tgbatest/spotlbtt.test: Update TAA related tests.
This gives a nice speedup (>1.4) in the ltlcounter benchmark,
because we no longer have to generate a copy of the string
representations of the LTL formulae.
* src/tgbaalgos/ltl2tgba_fm.cc: Adjust. Also get rid of the
formulae_seen map, since we can now ask the tgba_explicit_formula
if it knows the state.