notebooks: correction of typos
* tests/python/_partitioned_relabel.ipynb, tests/python/_product_weak.ipynb, tests/python/acc_cond.ipynb, tests/python/aliases.ipynb, tests/python/automata.ipynb, tests/python/cav22-figs.ipynb, tests/python/contains.ipynb, tests/python/decompose.ipynb, tests/python/formulas.ipynb, tests/python/games.ipynb, tests/python/highlighting.ipynb, tests/python/ltsmin-dve.ipynb, tests/python/parity.ipynb, tests/python/product.ipynb, tests/python/satmin.ipynb, tests/python/stutter-inv.ipynb, tests/python/synthesis.ipynb, tests/python/twagraph-internals.ipynb, tests/python/word.ipynb, tests/python/zlktree.ipynb: here
parent 858629dd3a
commit 6dc11b4715
20 changed files with 242 additions and 82 deletions
@@ -1047,7 +1047,7 @@
 "\n",
 "concerned_aps = a & b # concerned aps are given as a conjunction of positive aps\n",
 "# As partitioning can be exponentially costly,\n",
-"# one can limit the number of new letters generated before abadoning\n",
+"# one can limit the number of new letters generated before abandoning\n",
 "# This can be done either as a hard limit and/or as the number of current condition\n",
 "# times a factor\n",
 "relabel_dict = spot.partitioned_relabel_here(aut, False, 1000, 1000, concerned_aps)\n",
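For reference, a minimal sketch of how the relabeling call above can be used; the input formula and the way `a`/`b` are obtained here are assumptions made for the example, only the `partitioned_relabel_here()` call mirrors the notebook:

    import spot
    import buddy

    aut = spot.translate('GF(a & b) -> GFc')
    # The "concerned" atomic propositions are given as a conjunction of positive APs.
    a = buddy.bdd_ithvar(aut.register_ap('a'))
    b = buddy.bdd_ithvar(aut.register_ap('b'))
    concerned_aps = a & b
    # The two 1000 arguments bound how many new letters may be created
    # before the partitioning is abandoned.
    relabel_dict = spot.partitioned_relabel_here(aut, False, 1000, 1000, concerned_aps)
    print(relabel_dict)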
@@ -1819,7 +1819,7 @@
 "autslen = len(auts)\n",
 "# In a previous version we used to iterate over all possible left automata with \"for left in auts:\"\n",
 "# however we had trouble with Jupyter on i386, where running the full loop abort with some low-level \n",
-"# exeptions from Jupyter client. Halving the loop helped for some times, but then the timeout\n",
+"# exceptions from Jupyter client. Halving the loop helped for some times, but then the timeout\n",
 "# came back. So we do one left automaton at a time.\n",
 "left = auts[0]\n",
 "display(left)\n",
@@ -11,6 +11,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -32,9 +33,9 @@
 "\n",
 "Note that the number of sets given can be larger than what is actually needed by the acceptance formula.\n",
 "\n",
-"Transitions in automata can be tagged as being part of some member sets, and a path in the automaton is accepting if the set of acceptance sets visited along this path satify the acceptance condition.\n",
+"Transitions in automata can be tagged as being part of some member sets, and a path in the automaton is accepting if the set of acceptance sets visited along this path satisfy the acceptance condition.\n",
 "\n",
-"Definining acceptance conditions in Spot involves three different types of C++ objects:\n",
+"Defining acceptance conditions in Spot involves three different types of C++ objects:\n",
 "\n",
 "- `spot::acc_cond` is used to represent an acceptance condition, that is: a number of sets and a formula.\n",
 "- `spot::acc_cond::acc_code`, is used to represent Boolean formula for the acceptance condition using a kind of byte code (hence the name)\n",
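As an illustration of the three objects mentioned in this cell (not part of the commit; the particular formulas are arbitrary):

    import spot

    # An acceptance condition: a number of sets plus a formula over them.
    acc = spot.acc_cond(3, spot.acc_code('Inf(0) & Fin(1)'))
    print(acc)
    # The formula alone, as an acc_code; HOA acceptance names also work.
    print(spot.acc_code('Rabin 1'))
    # A mark_t is a set of acceptance-set numbers, here built from a list.
    print(spot.mark_t([0, 2]))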
@@ -99,10 +100,11 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As seen above, the sequence of set numbers can be specified using a list or a tuple. While from the Python language point of view, using a tuple is faster than using a list, the overhead to converting all the arguments from Python to C++ and then converting the resuslting back from C++ to Python makes this difference completely negligeable. In the following, we opted to use lists, because brackets are more readable than nested parentheses."
+"As seen above, the sequence of set numbers can be specified using a list or a tuple. While from the Python language point of view, using a tuple is faster than using a list, the overhead to converting all the arguments from Python to C++ and then converting the resulting back from C++ to Python makes this difference completely negligible. In the following, we opted to use lists, because brackets are more readable than nested parentheses."
 ]
 },
 {
@@ -129,6 +131,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -191,6 +194,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -216,6 +220,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -247,6 +252,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -274,6 +280,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -324,6 +331,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -351,12 +359,13 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## `acc_code`\n",
 "\n",
-"`acc_code` encodes the formula of the acceptance condition using a kind of bytecode that basically corresponds to an encoding in [reverse Polish notation](http://en.wikipedia.org/wiki/Reverse_Polish_notation) in which conjunctions of `Inf(n)` terms, and disjunctions of `Fin(n)` terms are grouped. In particular, the frequently-used genaralized-Büchi acceptance conditions (like `Inf(0)&Inf(1)&Inf(2)`) are always encoded as a single term (like `Inf({0,1,2})`).\n",
+"`acc_code` encodes the formula of the acceptance condition using a kind of bytecode that basically corresponds to an encoding in [reverse Polish notation](http://en.wikipedia.org/wiki/Reverse_Polish_notation) in which conjunctions of `Inf(n)` terms, and disjunctions of `Fin(n)` terms are grouped. In particular, the frequently-used generalized-Büchi acceptance conditions (like `Inf(0)&Inf(1)&Inf(2)`) are always encoded as a single term (like `Inf({0,1,2})`).\n",
 "\n",
 "The simplest way to construct an `acc_code` by passing a string that represent the formula to build."
 ]
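A small sketch of the string-based construction mentioned above (the exact strings are arbitrary examples):

    import spot

    # Internally the three Inf terms are grouped into a single Inf({0,1,2}) term.
    c = spot.acc_code('Inf(0)&Inf(1)&Inf(2)')
    print(c)
    # Acceptance names from the HOA format can be used as well.
    print(spot.acc_code('generalized-Buchi 3'))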
@@ -391,6 +400,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -418,6 +428,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -444,6 +455,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -471,6 +483,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -539,6 +552,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -589,6 +603,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -615,6 +630,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -727,6 +743,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -762,6 +779,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -791,10 +809,11 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The `used_inf_fin_sets()` returns a pair of marks instead, the first one with all sets occuring in `Inf`, and the second one with all sets appearing in `Fin`."
+"The `used_inf_fin_sets()` returns a pair of marks instead, the first one with all sets occurring in `Inf`, and the second one with all sets appearing in `Fin`."
 ]
 },
 {
@@ -818,6 +837,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -853,6 +873,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -888,6 +909,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -956,6 +978,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1026,6 +1049,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1054,6 +1078,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1083,6 +1108,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1112,6 +1138,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1143,6 +1170,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1174,6 +1202,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1206,6 +1235,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1234,6 +1264,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1261,6 +1292,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1288,10 +1320,11 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"For convencience, the `accepting()` method of `acc_cond` delegates to that of the `acc_code`. \n",
+"For convenience, the `accepting()` method of `acc_cond` delegates to that of the `acc_code`. \n",
 "Any set passed to `accepting()` that is not used by the acceptance formula has no influence."
 ]
 },
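A possible illustration of `accepting()` (the `Inf(0) & Fin(1)` condition is an arbitrary assumption):

    import spot

    acc = spot.acc_cond(2, spot.acc_code('Inf(0) & Fin(1)'))
    print(acc.accepting([0]))     # visiting only set 0 infinitely often is accepting
    print(acc.accepting([0, 1]))  # also visiting set 1 violates Fin(1)
    print(acc.accepting([0, 5]))  # set 5 is not used by the formula, so it has no influence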
@@ -1317,6 +1350,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1385,6 +1419,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1411,12 +1446,13 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "`fin_one()` return the number of one color `x` that appears as `Fin(x)` in the formula, or `-1` if the formula is Fin-less.\n",
 "\n",
-"The variant `fin_one_extract()` consider the acceptance condition as a disjunction (if the top-level operator is not a disjunction, we just assume the formula is a disjunction with only one disjunct), and return a pair `(x,c)` where `c` is the disjunction of all disjuncts of the original formula where `Fin(x)` used to appear but where `Fin(x)` have been replaced by `true`, and `Inf(x)` by `false`. Also this function tries to choose an `x` such that one of the disjunct has the form `...&Fin(x)&...` if possible: this is visible in the third example, where 5 is prefered to 2."
+"The variant `fin_one_extract()` consider the acceptance condition as a disjunction (if the top-level operator is not a disjunction, we just assume the formula is a disjunction with only one disjunct), and return a pair `(x,c)` where `c` is the disjunction of all disjuncts of the original formula where `Fin(x)` used to appear but where `Fin(x)` have been replaced by `true`, and `Inf(x)` by `false`. Also this function tries to choose an `x` such that one of the disjunct has the form `...&Fin(x)&...` if possible: this is visible in the third example, where 5 is preferred to 2."
 ]
 },
 {
@@ -18,7 +18,7 @@
 "id": "4dc12445",
 "metadata": {},
 "source": [
-"Aliases is a feature of the HOA format that allows Boolean formulas to be named and reused to label automata. This can be helpful to reduce the size of a file, but it can also be abused to \"fake\" arbritary alphabets by using an alphabet of $n$ aliases encoded over $\\log_2(n)$ atomic propositions. \n",
+"Aliases is a feature of the HOA format that allows Boolean formulas to be named and reused to label automata. This can be helpful to reduce the size of a file, but it can also be abused to \"fake\" arbitrary alphabets by using an alphabet of $n$ aliases encoded over $\\log_2(n)$ atomic propositions. \n",
 "\n",
 "Spot knows how to read HOA files containing aliases since version 2.0. However support for producing files with aliases was only added in version 2.11.\n",
 "\n",
@@ -653,7 +653,7 @@
 "source": [
 "Notice how `p0` and `!p0` were rewritten as disjunction of aliases because no direct aliases could be found for them. \n",
 "\n",
-"Generaly, the display code tries to format formulas as a sum of product. It wil recognize conjunctions and disjunctions of aliases, but if it fails, it will resort to printing the original atomic propositions (maybe mixed with aliases)."
+"Generally, the display code tries to format formulas as a sum of product. It will recognize conjunctions and disjunctions of aliases, but if it fails, it will resort to printing the original atomic propositions (maybe mixed with aliases)."
 ]
 },
 {
@@ -194,7 +194,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The call the `spot.setup()` in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the `show(style)` method explicitely. For instance here is a vertical layout with the default font of GraphViz."
+"The call the `spot.setup()` in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the `show(style)` method explicitly. For instance here is a vertical layout with the default font of GraphViz."
 ]
 },
 {
@@ -3013,7 +3013,7 @@
 }
 ],
 "source": [
-"# Using +1 in the display options is a convient way to shift the \n",
+"# Using +1 in the display options is a convenient way to shift the \n",
 "# set numbers in the output, as an aid in reading the product.\n",
 "a1 = spot.translate('GF(a <-> Xa)')\n",
 "print(a1.prop_weak())\n",
@@ -990,7 +990,7 @@
 "source": [
 "# Figure 3\n",
 "\n",
-"Fig. 3 shows an example of game generated by `ltlsynt` from the LTL specification of a reactive controler, and then how this game can be encoded into an And-Inverter-Graph.\n",
+"Fig. 3 shows an example of game generated by `ltlsynt` from the LTL specification of a reactive controller, and then how this game can be encoded into an And-Inverter-Graph.\n",
 "First we retrieve the game generated by `ltlsynt` (any argument passed to `spot.automaton` is interpreted as a command if it ends with a pipe), then we solve it to compute a possible winning strategy. \n",
 "\n",
 "Player 0 plays from round states and tries to violate the acceptance condition; Player 1 plays from diamond states and tries to satisfy the acceptance condition. Once a game has been solved, the `highlight_strategy` function will decorate the automaton with winning region and computed strategies for player 0 and 1 in red and green respectively. Therefore this game is winning for player 1 from the initial state.\n",
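A hedged sketch of that workflow; the exact `ltlsynt` command line below is an assumption, only the trailing-pipe convention of `spot.automaton()`, `solve_game()` and `highlight_strategy()` come from the text above:

    import spot

    # A trailing '|' makes spot.automaton() run the string as a command.
    game = spot.automaton("ltlsynt --ins=a --outs=b -f 'GFa <-> GFb' --print-game-hoa |")
    won = spot.solve_game(game)       # compute winning regions and strategies
    spot.highlight_strategy(game)     # decorate the game for display
    print(won)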
@@ -1295,7 +1295,7 @@
 "metadata": {},
 "source": [
 "The `solved_game_to_mealy()` shown in the paper does not always produce the same type of output, so it is\n",
-"better to explicitely call `solved_game_to_split_mealy()` or `solved_game_to_separated_mealy()` depending on the type of output one need. We also show how to use the `reduce_mealy()` method to simplify one."
+"better to explicitly call `solved_game_to_split_mealy()` or `solved_game_to_separated_mealy()` depending on the type of output one need. We also show how to use the `reduce_mealy()` method to simplify one."
 ]
 },
 {
@@ -252,7 +252,7 @@
 "source": [
 "# Containement checks between formulas with cache\n",
 "\n",
-"In the case of containement checks between formulas, `language_containement_checker` instances provide similar services, but they cache automata representing the formulas checked. This should be prefered when performing several containement checks using the same formulas."
+"In the case of containement checks between formulas, `language_containement_checker` instances provide similar services, but they cache automata representing the formulas checked. This should be preferred when performing several containement checks using the same formulas."
 ]
 },
 {
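For illustration (the class is actually spelled `language_containment_checker` in Spot's Python bindings; the formulas are arbitrary):

    import spot

    lcc = spot.language_containment_checker()
    f = spot.formula('GFa')
    g = spot.formula('FGa')
    # The automata built for f and g are cached, so repeated checks are cheap.
    print(lcc.contained(g, f))   # is L(g) contained in L(f)?
    print(lcc.equal(f, g))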
@@ -312,7 +312,7 @@
 "\n",
 "Assume you have computed two automata, that `are_equivalent(a1, a2)` returns `False`, and you want to know why. (This often occur when debugging some algorithm that produce an automaton that is not equivalent to which it should.) The automaton class has a method called `a1.exclusive_run(a2)` that can help with this task: it returns a run that recognizes a word is is accepted by one of the two automata but not by both. The method `a1.exclusive_word(a2)` will return just a word.\n",
 "\n",
-"For instance let's find a word that is exclusive between `aut_f` and `aut_g`. (The adjective *exlusive* is a reference to the *exclusive or* operator: the word belongs to L(aut_f) \"xor\" it belongs to L(aut_g).)"
+"For instance let's find a word that is exclusive between `aut_f` and `aut_g`. (The adjective *exclusive* is a reference to the *exclusive or* operator: the word belongs to L(aut_f) \"xor\" it belongs to L(aut_g).)"
 ]
 },
 {
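A minimal example of the two methods named above (the two formulas are arbitrary, chosen to be non-equivalent):

    import spot

    aut_f = spot.translate('GFa')
    aut_g = spot.translate('FGa')
    # A word accepted by exactly one of the two automata, then a run witnessing it.
    print(aut_f.exclusive_word(aut_g))
    print(aut_f.exclusive_run(aut_g))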
@@ -12,6 +12,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -28,9 +29,9 @@
 "- an accepting SCC is **strictly inherently weak** if it is *inherently weak* and not complete (in other words: *weak* but not *terminal*)\n",
 "- an accepting SCC is **strong** if it is not inherently weak.\n",
 "\n",
-"The strengths **strong**, **stricly inherently weak**, and **inherently terminal** define a partition of all accepting SCCs. The following Büchi automaton has 4 SCCs, and its 3 accepting SCCs show an example of each strength.\n",
+"The strengths **strong**, **strictly inherently weak**, and **inherently terminal** define a partition of all accepting SCCs. The following Büchi automaton has 4 SCCs, and its 3 accepting SCCs show an example of each strength.\n",
 "\n",
-"Note: the reason we use the word *inherently* is that the *weak* and *terminal* properties are usually defined syntactically: an accepting SCC would be weak if all its transitions belong to the same acceptance sets. This syntactic criterion is a sufficient condition for an accepting SCC to not have any rejecting cycle, but it is not necessary. Hence a *weak* SCC is *inherently weak*; but while an *inherently weak* SCC is not necessarily *weak*, it can be modified to be *weak* without alterning the langage."
+"Note: the reason we use the word *inherently* is that the *weak* and *terminal* properties are usually defined syntactically: an accepting SCC would be weak if all its transitions belong to the same acceptance sets. This syntactic criterion is a sufficient condition for an accepting SCC to not have any rejecting cycle, but it is not necessary. Hence a *weak* SCC is *inherently weak*; but while an *inherently weak* SCC is not necessarily *weak*, it can be modified to be *weak* without altering the language."
 ]
 },
 {
@@ -214,6 +215,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -225,7 +227,7 @@
 "- `w`: (strictly inherently) weak\n",
 "- `s`: strong\n",
 "\n",
-"For instance if we want to preserve only the **strictly inherently weak** part of this automaton, we should get only the SCC with the self-loop on $b$, and the SCC above it so that we can reach it. However the SCC above is not stricly weak, so it should not accept any word in the new automaton."
+"For instance if we want to preserve only the **strictly inherently weak** part of this automaton, we should get only the SCC with the self-loop on $b$, and the SCC above it so that we can reach it. However the SCC above is not strictly weak, so it should not accept any word in the new automaton."
 ]
 },
 {
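A sketch of such an extraction with `decompose_scc()` (the input formula is an arbitrary assumption; the letters follow the list above):

    import spot

    aut = spot.translate('(GFa & GFb) | Fc | FGd')
    weak_part = spot.decompose_scc(aut, 'w')   # keep only the strictly inherently weak part
    print(weak_part.to_str('hoa'))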
@@ -343,6 +345,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -502,6 +505,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -600,6 +604,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -682,6 +687,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1124,6 +1130,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -1536,6 +1543,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -2355,12 +2363,13 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Note how the two weak automata (i.e., stricly weak and terminal) are now using a Büchi acceptance condition (because that is sufficient for weak automata) while the strong automaton inherited the original acceptance condition.\n",
+"Note how the two weak automata (i.e., strictly weak and terminal) are now using a Büchi acceptance condition (because that is sufficient for weak automata) while the strong automaton inherited the original acceptance condition.\n",
 "\n",
-"When extracting multiple strengths and one of the strength is **strong**, we preserve the original acceptance. For instance extracting **strong** and **inherently terminal** gives the following automaton, where only **stricly inherently weak** SCCs have become rejecting."
+"When extracting multiple strengths and one of the strength is **strong**, we preserve the original acceptance. For instance extracting **strong** and **inherently terminal** gives the following automaton, where only **strictly inherently weak** SCCs have become rejecting."
 ]
 },
 {
@@ -2741,6 +2750,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -3188,6 +3198,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -4219,6 +4230,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -4566,6 +4578,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -4577,7 +4590,7 @@
 "\n",
 "### `Acceptance: 0 t`\n",
 "\n",
-"This occur frequently whant translating LTL formulas that are safety properties:"
+"This occur frequently when translating LTL formulas that are safety properties:"
 ]
 },
 {
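For instance (an arbitrary safety formula):

    import spot

    aut = spot.translate('G(request -> Xgrant)')
    # A safety property typically needs no acceptance set at all: "0 t" in HOA syntax.
    print(aut.acc())              # e.g. (0, t)
    print(aut.get_acceptance())   # t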
@@ -4863,6 +4876,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -4957,6 +4971,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -4964,6 +4979,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -5127,6 +5143,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -5509,6 +5526,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -5854,6 +5872,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -164,7 +164,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If you prefer to print the string in another syntax, you may use the `to_str()` method, with an argument that indicates the output format to use. The `latex` format assumes that you will the define macros such as `\\U`, `\\R` to render all operators as you wish. On the otherhand, the `sclatex` (with `sc` for self-contained) format hard-codes the rendering of each of those operators: this is almost the same output that is used to render formulas using MathJax in a notebook. `sclatex` and `mathjax` only differ in the rendering of double-quoted atomic propositions."
+"If you prefer to print the string in another syntax, you may use the `to_str()` method, with an argument that indicates the output format to use. The `latex` format assumes that you will the define macros such as `\\U`, `\\R` to render all operators as you wish. On the other hand, the `sclatex` (with `sc` for self-contained) format hard-codes the rendering of each of those operators: this is almost the same output that is used to render formulas using MathJax in a notebook. `sclatex` and `mathjax` only differ in the rendering of double-quoted atomic propositions."
 ]
 },
 {
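For example (formula chosen arbitrarily):

    import spot

    f = spot.formula('a U (b & GFc)')
    print(f.to_str('latex'))    # assumes you define \U, \G, ... yourself
    print(f.to_str('sclatex'))  # self-contained LaTeX
    print(f.to_str('mathjax'))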
@@ -342,7 +342,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Similarly, `is_syntactic_stutter_invariant()` tells wether the structure of the formula guarranties it to be stutter invariant. For LTL formula, this means the `X` operator should not be used. For PSL formula, this function capture all formulas built using the [siPSL grammar](http://www.daxc.de/eth/paper/09atva.pdf)."
+"Similarly, `is_syntactic_stutter_invariant()` tells whether the structure of the formula guaranties it to be stutter invariant. For LTL formula, this means the `X` operator should not be used. For PSL formula, this function capture all formulas built using the [siPSL grammar](http://www.daxc.de/eth/paper/09atva.pdf)."
 ]
 },
 {
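For example:

    import spot

    print(spot.formula('a U b').is_syntactic_stutter_invariant())   # True: X-free LTL
    print(spot.formula('a U Xb').is_syntactic_stutter_invariant())  # False: uses X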
@@ -228,7 +228,7 @@
 "metadata": {},
 "source": [
 "The `set_state_players()` function takes a list of owner for each of the states in the automaton. In the output,\n",
-"states from player 0 use circles, ellispes, or rectangle with rounded corners (mnemonic: 0 is round) while states from player 1 have a losanse shape (1 has only straight lines). \n",
+"states from player 0 use circles, ellipses, or rectangle with rounded corners (mnemonic: 0 is round) while states from player 1 have a losanse shape (1 has only straight lines). \n",
 "\n",
 "\n",
 "State ownership can also be manipulated by the following functions:"
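A small sketch of that function (the automaton and the owner assignment are arbitrary; `get_state_players()` is assumed to be the matching getter):

    import spot

    game = spot.translate('GFa')   # any twa_graph can be turned into an arena
    # One owner per state: False for player 0, True for player 1.
    spot.set_state_players(game, [i % 2 == 1 for i in range(game.num_states())])
    print(spot.get_state_players(game))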
@@ -914,7 +914,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"In the graphical output, player 0 is represented by circles (or ellipses or rounded rectangles depending on the situations), while player 1's states are diamond shaped. In the case of `ltlsynt`, player 0 plays the role of the environment, and player 1 plays the role of the controler.\n",
+"In the graphical output, player 0 is represented by circles (or ellipses or rounded rectangles depending on the situations), while player 1's states are diamond shaped. In the case of `ltlsynt`, player 0 plays the role of the environment, and player 1 plays the role of the controller.\n",
 "\n",
 "In the HOA output, a header `spot-state-player` (or `spot.state-player` in HOA 1.1) lists the owner of each state."
 ]
@@ -1670,7 +1670,7 @@
 "\n",
 "The parity game solver now supports \"local\" and global solutions.\n",
 "\n",
-"- \"Local\" solutions are the ones computed so far. A strategy is only computed for the part of the automaton that is rachable from the initial state\n",
+"- \"Local\" solutions are the ones computed so far. A strategy is only computed for the part of the automaton that is reachable from the initial state\n",
 "- Global solutions can now be obtained by setting the argument \"solve_globally\" to true. In this case a strategy will be computed even for states not reachable in the original automaton.\n"
 ]
 },
@@ -153,7 +153,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Using these numbers you can selectively hightlight some transitions. The second argument is a color number (from a list of predefined colors)."
+"Using these numbers you can selectively highlight some transitions. The second argument is a color number (from a list of predefined colors)."
 ]
 },
 {
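For instance (edge numbers and colors picked arbitrarily):

    import spot

    aut = spot.translate('a U b')
    aut.highlight_edges([1, 2], 0)   # give edges 1 and 2 (numbered from 1) color 0
    aut.highlight_states([0], 1)     # states can be highlighted the same way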
@@ -93,7 +93,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Compiling the model creates all several kinds of files. The `test1.dve` file is converted into a C++ source code `test1.dve.cpp` which is then compiled into a shared library `test1.dve2c`. Becauce `spot.ltsmin.load()` has already loaded this shared library, all those files can be erased. If you do not erase the files, `spot.ltsmin.load()` will use the timestamps to decide whether the library should be recompiled or not everytime you load the library.\n",
+"Compiling the model creates all several kinds of files. The `test1.dve` file is converted into a C++ source code `test1.dve.cpp` which is then compiled into a shared library `test1.dve2c`. Because `spot.ltsmin.load()` has already loaded this shared library, all those files can be erased. If you do not erase the files, `spot.ltsmin.load()` will use the timestamps to decide whether the library should be recompiled or not everytime you load the library.\n",
 "\n",
 "For editing and loading DVE file from a notebook, it is a better to use the `%%dve` as shown next."
 ]
@@ -1832,7 +1832,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"You can load this as a Kripke structure by passing the `want_kripke` option to `spot.automaton()`. The type `kripke_graph` stores the Kripke structure explicitely (like a `twa_graph` stores an automaton explicitely), so you may want to avoid it for very large modelsand use it only for development."
+"You can load this as a Kripke structure by passing the `want_kripke` option to `spot.automaton()`. The type `kripke_graph` stores the Kripke structure explicitly (like a `twa_graph` stores an automaton explicitly), so you may want to avoid it for very large modelsand use it only for development."
 ]
 },
 {
@@ -888,7 +888,7 @@
 "source": [
 "## Changing the **kind**\n",
 "\n",
-"Generaly to go from `parity max` to `parity min`, it suffices to reverse the order of vertices.\n",
+"Generally to go from `parity max` to `parity min`, it suffices to reverse the order of vertices.\n",
 "\n",
 "### max odd 5 → min odd 5:"
 ]
@@ -3446,7 +3446,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Here `Streett 1` is just a synonym for `parity max odd 2`. (Spot's automaton printer cannot guess which name should be prefered.)"
+"Here `Streett 1` is just a synonym for `parity max odd 2`. (Spot's automaton printer cannot guess which name should be preferred.)"
 ]
 },
 {
@@ -4231,7 +4231,7 @@
 "2. a Boolean indicating whether the output should be colored (`True`), or if transition with no color can be used (`False`).\n",
 "3. a Boolean indicating whether the output should be layered, i.e., in a max parity automaton, that means the color of a transition should be the maximal color visited by all cycles going through it.\n",
 "\n",
-"By default, the second argument is `False`, because acceptance sets is a scarse ressource in Spot. The third argument also defaults to `False`, but for empircal reason: adding more colors like this tends to hinder simulation-based reductions."
+"By default, the second argument is `False`, because acceptance sets is a scarce resource in Spot. The third argument also defaults to `False`, but for empirical reason: adding more colors like this tends to hinder simulation-based reductions."
 ]
 },
 {
@@ -798,7 +798,7 @@
 "\n",
 "First, we build a product without taking care of the acceptance sets. We just want to get the general shape of the algorithm.\n",
 "\n",
-"We will build an automaton of type `twa_graph`, i.e., an automaton represented explicitely using a graph. In those automata, states are numbered by integers, starting from `0`. (Those states can also be given a different name, which is why the the `product()` shows us something that appears to be labeled by pairs, but the real identifier of each state is an integer.)\n",
+"We will build an automaton of type `twa_graph`, i.e., an automaton represented explicitly using a graph. In those automata, states are numbered by integers, starting from `0`. (Those states can also be given a different name, which is why the the `product()` shows us something that appears to be labeled by pairs, but the real identifier of each state is an integer.)\n",
 "\n",
 "We will use a dictionary to keep track of the association between a pair `(ls,rs)` of input states, and its number in the output."
 ]
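A condensed sketch of that dictionary-based construction (not the notebook's exact code; acceptance sets are ignored, as in this first attempt):

    import spot
    import buddy

    def naive_product(left, right):
        # Both operands are assumed to share the same bdd_dict.
        prod = spot.make_twa_graph(left.get_dict())
        prod.copy_ap_of(left)
        prod.copy_ap_of(right)
        sdict = {}   # (ls, rs) -> state number in the product
        todo = []

        def dst(ls, rs):
            p = (ls, rs)
            if p not in sdict:
                sdict[p] = prod.new_state()
                todo.append(p)
            return sdict[p]

        prod.set_init_state(dst(left.get_init_state_number(),
                                right.get_init_state_number()))
        while todo:
            ls, rs = todo.pop()
            src = sdict[(ls, rs)]
            for lt in left.out(ls):
                for rt in right.out(rs):
                    cond = lt.cond & rt.cond
                    if cond != buddy.bddfalse:
                        prod.new_edge(src, dst(lt.dst, rt.dst), cond)
        return prod

    print(naive_product(spot.translate('GFa'), spot.translate('GFb')).num_states())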
@@ -1145,7 +1145,7 @@
 "source": [
 "def product1(left, right):\n",
 " # A bdd_dict object associates BDD variables (that are \n",
-" # used in BDDs labeleing the edges) to atomic propositions.\n",
+" # used in BDDs labeling the edges) to atomic propositions.\n",
 " bdict = left.get_dict()\n",
 " # If the two automata do not have the same BDD dict, then\n",
 " # we cannot easily detect compatible transitions.\n",
@@ -1235,7 +1235,7 @@
 "source": [
 "## Second attempt: a working product\n",
 "\n",
-"This fixes the list of atomtic propositions, as discussed above, and also sets the correct acceptance condition.\n",
+"This fixes the list of atomic propositions, as discussed above, and also sets the correct acceptance condition.\n",
 "The `set_acceptance` method takes two arguments: a number of sets, and an acceptance function. In our case, both of these arguments are readily computed from the number of states and acceptance functions of the input automata."
 ]
 },
@@ -1671,7 +1671,7 @@
 "\n",
 "The former point could be addressed by calling `set_state_names()` and passing an array of strings: if a state number is smaller than the size of that array, then the string at that position will be displayed instead of the state number in the dot output. However we can do even better by using `set_product_states()` and passing an array of pairs of states. Besides the output routines, some algorithms actually retrieve this vector of pair of states to work on the product.\n",
 "\n",
-"Regarding the latter point, consider for instance the deterministic nature of these automata. In Spot an automaton is deterministic if it is both existential (no universal branching) and universal (no non-deterministic branching). In our case we will restrict the algorithm to existantial input (by asserting `is_existential()` on both operands), so we can consider that the `prop_universal()` property is an indication of determinism:"
+"Regarding the latter point, consider for instance the deterministic nature of these automata. In Spot an automaton is deterministic if it is both existential (no universal branching) and universal (no non-deterministic branching). In our case we will restrict the algorithm to existential input (by asserting `is_existential()` on both operands), so we can consider that the `prop_universal()` property is an indication of determinism:"
 ]
 },
 {
@@ -1710,7 +1710,7 @@
 "\n",
 " result.prop_universal(left.prop_universal() and right.prop_universal())\n",
 "\n",
-"because the results the `prop_*()` family of functions take and return instances of `spot.trival` values. These `spot.trival`, can, as their name implies, take one amongst three values representing `yes`, `no`, and `maybe`. `yes` and `no` should be used when we actually know that the automaton is deterministic or not (not deterministic meaning that there actually exists some non determinitic state in the automaton), and `maybe` when we do not know. \n",
+"because the results the `prop_*()` family of functions take and return instances of `spot.trival` values. These `spot.trival`, can, as their name implies, take one amongst three values representing `yes`, `no`, and `maybe`. `yes` and `no` should be used when we actually know that the automaton is deterministic or not (not deterministic meaning that there actually exists some non deterministic state in the automaton), and `maybe` when we do not know. \n",
 "\n",
 "The one-liner above is wrong for two reasons:\n",
 "\n",
@@ -1250,7 +1250,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"But do we really need 2 rabin pairs? Let's ask if we can get an equivalent with only one pair. (Note that reducing the number of pairs might require more state, but the `sat_minimize()` function will never attempt to add state unless explicitely instructed to do so. In this case we are therefore looking for a state-based Rabin-1 automaton with at most 4 states.)"
+"But do we really need 2 Rabin pairs? Let's ask if we can get an equivalent with only one pair. (Note that reducing the number of pairs might require more state, but the `sat_minimize()` function will never attempt to add state unless explicitly instructed to do so. In this case we are therefore looking for a state-based Rabin-1 automaton with at most 4 states.)"
 ]
 },
 {
@@ -12,17 +12,19 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# Stutter-invariant languages\n",
 "\n",
-"A language $L$ is said to be _stutter-invariant_ iff $\\ell_0\\ldots\\ell_{i-1}\\ell_i\\ell_{i+1}\\ldots\\in L \\iff \\ell_0\\ldots\\ell_{i-1}\\ell_i\\ell_i\\ell_{i+1}\\ldots\\in L$, i.e., if duplicating a letter in a word or removing a duplicated letter does not change the membership of that word to $L$. These languages are also called _stutter-insensitive_. We use the adjective _sutter-sensitive_ to describe a language that is not stutter-invariant. Of course we can extend this vocabulary to LTL formulas or automata that represent stutter-invariant languages.\n",
+"A language $L$ is said to be _stutter-invariant_ iff $\\ell_0\\ldots\\ell_{i-1}\\ell_i\\ell_{i+1}\\ldots\\in L \\iff \\ell_0\\ldots\\ell_{i-1}\\ell_i\\ell_i\\ell_{i+1}\\ldots\\in L$, i.e., if duplicating a letter in a word or removing a duplicated letter does not change the membership of that word to $L$. These languages are also called _stutter-insensitive_. We use the adjective _stutter-sensitive_ to describe a language that is not stutter-invariant. Of course we can extend this vocabulary to LTL formulas or automata that represent stutter-invariant languages.\n",
 "\n",
 "Stutter-invariant languages play an important role in model checking. When verifying a stutter-invariant specification against a system, we know that we have some freedom in how we discretize the time in the model: as long as we do not hide changes of model variables that are observed by the specification, we can merge multiple steps of the model. This, combined by careful analysis of actions of the model that are independent, is the basis for a set of techniques known as _partial-order reductions_ (POR) that postpone the visit of some successors in the model, because we know we can always visit them later."
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -68,6 +70,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@ -95,13 +98,15 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Of course this `is_stutter_invariant()` function first checks whether the formula is `X`-free before wasting time building automata, so if you want to detect stutter-invariant formulas in your model checker, this is the only function to use. Also, if you hapen to already have an automaton `aut_g` for `g`, you should pass it as a second argument to avoid it being recomputed: `spot.is_stutter_invariant(g, aut_g)`."
|
"Of course this `is_stutter_invariant()` function first checks whether the formula is `X`-free before wasting time building automata, so if you want to detect stutter-invariant formulas in your model checker, this is the only function to use. Also, if you happen to already have an automaton `aut_g` for `g`, you should pass it as a second argument to avoid it being recomputed: `spot.is_stutter_invariant(g, aut_g)`."
|
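A minimal sketch of both uses (the formulas here are only illustrations; `GF(a & Xa)` is stutter-sensitive because duplicating letters in the word `(a;!a)^ω` changes its membership):

```python
import spot

g = spot.formula('GF(a & b)')          # X-free, detected syntactically
print(spot.is_stutter_invariant(g))    # True

f = spot.formula('GF(a & Xa)')         # uses X, so automata are built internally
aut_f = spot.translate(f)
# Pass the existing automaton as a second argument to avoid retranslating f.
print(spot.is_stutter_invariant(f, aut_f))   # False
```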
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -132,6 +137,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -139,10 +145,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Similarly to formulas, automata use a few bits to store some known properties about themselves, like whether they represent a stutter-invariant language. This property can be checked with the `prop_stutter_invariant()` method, but that returns a `trival` instance (i.e., yes, no, or maybe). Some algorithms will update that property whenever that is cheap or expliclitely asked for. For instance `spot.translate()` only sets the property if the translated formula is `X`-free."
|
"Similarly to formulas, automata use a few bits to store some known properties about themselves, like whether they represent a stutter-invariant language. This property can be checked with the `prop_stutter_invariant()` method, but that returns a `trival` instance (i.e., yes, no, or maybe). Some algorithms will update that property whenever that is cheap or explicitly asked for. For instance `spot.translate()` only sets the property if the translated formula is `X`-free."
|
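For example (a small sketch; `is_true()` and `is_maybe()` are the usual accessors of a `trival`):

```python
import spot

aut = spot.translate('GFa')            # X-free input, so translate() can set the bit
si = aut.prop_stutter_invariant()      # a trival: yes, no, or maybe
if si.is_true():
    print('known to be stutter-invariant')
elif si.is_maybe():
    # the bit was never set; an explicit is_stutter_invariant() call would decide
    print('unknown')
```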
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -164,6 +171,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -188,10 +196,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Note that `prop_stutter_invariant()` was updated as a side-effect so that any futher call to `is_stutter_invariant()` with this automaton will be instantaneous."
|
"Note that `prop_stutter_invariant()` was updated as a side-effect so that any further call to `is_stutter_invariant()` with this automaton will be instantaneous."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -212,10 +221,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"You have to be aware of this property being set in your back because if while playing with `is_stutter_invariant()` you the incorrect formula for an automaton by mistake, the automaton will have its property set incorrectly, and running `is_stutter_inariant()` with the correct formula will simply return the cached property.\n",
|
"You have to be aware of this property being set in your back because if while playing with `is_stutter_invariant()` you the incorrect formula for an automaton by mistake, the automaton will have its property set incorrectly, and running `is_stutter_invariant()` with the correct formula will simply return the cached property.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In doubt, you can always reset the property as follows:"
|
"In doubt, you can always reset the property as follows:"
|
||||||
]
|
]
|
||||||
|
|
@ -239,6 +249,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -331,17 +342,19 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Explaining why a formula is not sutter-invariant"
|
"## Explaining why a formula is not stutter-invariant"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"As explained in our [Spin'15 paper](https://www.lrde.epita.fr/~adl/dl/adl/michaud.15.spin.pdf) the sutter-invariant checks are implemented using simple operators suchs as `spot.closure(aut)`, that augment the language of L by adding words that can be obtained by removing duplicated letters, and `spot.sl(aut)` or `spot.sl2(aut)` that both augment the language that L by adding words that can be obtained by duplicating letters. The default `is_stutter_invariant()` function is implemented as `spot.product(spot.closure(aut), spot.closure(neg_aut)).is_empty()`, but that is just one possible implementation selected because it was more efficient.\n",
|
"As explained in our [Spin'15 paper](https://www.lrde.epita.fr/~adl/dl/adl/michaud.15.spin.pdf) the stutter-invariant checks are implemented using simple operators such as `spot.closure(aut)`, that augment the language of L by adding words that can be obtained by removing duplicated letters, and `spot.sl(aut)` or `spot.sl2(aut)` that both augment the language that L by adding words that can be obtained by duplicating letters. The default `is_stutter_invariant()` function is implemented as `spot.product(spot.closure(aut), spot.closure(neg_aut)).is_empty()`, but that is just one possible implementation selected because it was more efficient.\n",
|
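The same emptiness check can be reproduced by hand from these bricks (here the negated automaton is simply obtained by translating the negated formula string, just for illustration):

```python
import spot

f = spot.formula('GF(a & Xa)')
pos = spot.translate(f)
neg = spot.translate('!({})'.format(f))
# A word witnessing stutter-sensitivity exists iff some word belongs to the
# closure of L(pos) and also to the closure of L(neg).
print(spot.product(spot.closure(pos), spot.closure(neg)).is_empty())  # False here
print(spot.is_stutter_invariant(f))                                   # consistent answer
```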
||||||
"\n",
|
"\n",
|
||||||
"Using these bricks, we can modify the original algorithm so it uses a counterexample to explain why a formula is stutter-sensitive."
|
"Using these bricks, we can modify the original algorithm so it uses a counterexample to explain why a formula is stutter-sensitive."
|
||||||
]
|
]
|
||||||
|
|
@ -394,22 +407,25 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Note that a variant of the above explanation procedure is already integerated in our [on-line LTL translator tool](https://spot.lrde.epita.fr/app/) (use the <i>study</i> tab)."
|
"Note that a variant of the above explanation procedure is already integrated in our [on-line LTL translator tool](https://spot.lrde.epita.fr/app/) (use the <i>study</i> tab)."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Detecting stutter-invariant states\n",
|
"## Detecting stutter-invariant states\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Even if the language of an automaton is not sutter invariant, some of its states may recognize a stutter-invariant language. (We assume the language of a state is the language the automaton would have when starting from this state.)"
|
"Even if the language of an automaton is not stutter invariant, some of its states may recognize a stutter-invariant language. (We assume the language of a state is the language the automaton would have when starting from this state.)"
|
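A quick sketch of listing such states, assuming `stutter_invariant_states()` takes the automaton and its formula (mirroring the `highlight_stutter_invariant_states()` call used further down) and returns one Boolean per state:

```python
import spot

g = spot.formula('F(a & Xa)')          # the whole language is stutter-sensitive
aut = spot.translate(g)
sistates = spot.stutter_invariant_states(aut, g)
for s, si in enumerate(sistates):
    print('state', s, 'stutter-invariant:', bool(si))
```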
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -616,6 +632,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -645,10 +662,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"For convenience, the `highligh_...()` version colors the stutter-invariant states of the automaton for display.\n",
|
"For convenience, the `highlight_...()` version colors the stutter-invariant states of the automaton for display.\n",
|
||||||
"(That 5 is the color number for red in Spot's hard-coded palette.)"
|
"(That 5 is the color number for red in Spot's hard-coded palette.)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
|
@ -825,6 +843,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -832,6 +851,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -839,10 +859,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"This second example illustrates the fact that a state can be marked if it it not sutter-invariant but appear below a stutter-invariant state. We build our example automaton as the disjuction of the following two stutter-sensitive formulas, whose union is equivalent to the sutter-invariant formula `GF!a`."
|
"This second example illustrates the fact that a state can be marked if it is not stutter-invariant but appear below a stutter-invariant state. We build our example automaton as the disjuction of the following two stutter-sensitive formulas, whose union is equivalent to the stutter-invariant formula `GF!a`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -878,6 +899,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1082,6 +1104,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1286,11 +1309,12 @@
|
||||||
"spot.highlight_stutter_invariant_states(aut, g, 5)\n",
|
"spot.highlight_stutter_invariant_states(aut, g, 5)\n",
|
||||||
"display(aut)\n",
|
"display(aut)\n",
|
||||||
"# The stutter_invariant property is set on AUT as a side effect\n",
|
"# The stutter_invariant property is set on AUT as a side effect\n",
|
||||||
"# of calling sutter_invariant_states() or any variant of it.\n",
|
"# of calling stutter_invariant_states() or any variant of it.\n",
|
||||||
"assert(aut.prop_stutter_invariant().is_true())"
|
"assert(aut.prop_stutter_invariant().is_true())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1415,6 +1439,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1422,10 +1447,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Sutter-invariance at the letter level\n",
|
"## Stutter-invariance at the letter level\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Instead of marking each state as stuttering or not, we can list the letters that we can stutter in each state.\n",
|
"Instead of marking each state as stuttering or not, we can list the letters that we can stutter in each state.\n",
|
||||||
"More precisely, a state $q$ is _stutter-invariant for letter $a$_ if the membership to $L(q)$ of any word starting with $a$ is preserved by the operations that duplicate letters or remove duplicates. \n",
|
"More precisely, a state $q$ is _stutter-invariant for letter $a$_ if the membership to $L(q)$ of any word starting with $a$ is preserved by the operations that duplicate letters or remove duplicates. \n",
|
||||||
|
|
@ -1547,12 +1573,13 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The `stutter_invariant_letters()` functions returns a vector of BDDs indexed by state numbers. The BDD at index $q$ specifies all letters $\\ell$ for which state $q$ would be stuttering. Note that if $q$ is stutter-invariant or reachable from a stutter-invariant state, the associated BDD will be `bddtrue` (printed as `1` below).\n",
|
"The `stutter_invariant_letters()` functions returns a vector of BDDs indexed by state numbers. The BDD at index $q$ specifies all letters $\\ell$ for which state $q$ would be stuttering. Note that if $q$ is stutter-invariant or reachable from a stutter-invariant state, the associated BDD will be `bddtrue` (printed as `1` below).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This interface is a bit inconveniant to use interactively, due to the fact that we need a `spot.bdd_dict` object to print a BDD."
|
"This interface is a bit inconvenient to use interactively, due to the fact that we need a `spot.bdd_dict` object to print a BDD."
|
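For instance, the letters can be printed with `bdd_format_formula()` and the automaton's own `bdd_dict`; the exact `stutter_invariant_letters()` signature used here (automaton plus formula) is assumed by analogy with the other functions above:

```python
import spot

g = spot.formula('F(a & Xa)')
aut = spot.translate(g)
letters = spot.stutter_invariant_letters(aut, g)
d = aut.get_dict()
for s, b in enumerate(letters):
    # print each BDD as a Boolean formula over the atomic propositions
    print('state', s, '->', spot.bdd_format_formula(d, b))
```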
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -1578,6 +1605,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1585,7 +1613,7 @@
|
||||||
"\n",
|
"\n",
|
||||||
"Consider the following automaton, which is a variant of our second example above. \n",
|
"Consider the following automaton, which is a variant of our second example above. \n",
|
||||||
"\n",
|
"\n",
|
||||||
"The language accepted from state (2) is `!GF(a & Xa) & GF!a` (this can be simplified to `FG(!a | X!a)`), while the language accepted from state (0) is `GF(a & Xa) & GF!a`. Therefore. the language accepted from state (5) is `a & X(GF!a)`. Since this is equivalent to `a & GF(!a)` state (5) recognizes stutter-invariant language, but as we can see, it is not the case that all states below (5) are also marked. In fact, states (0) can also be reached via states (7) and (6), recognizing respectively `(a & X(a & GF!a)) | (!a & X(!a & GF(a & Xa) & GF!a))` and `!a & GF(a & Xa) & GF!a))`, i.e., two stutter-sentive languages."
|
"The language accepted from state (2) is `!GF(a & Xa) & GF!a` (this can be simplified to `FG(!a | X!a)`), while the language accepted from state (0) is `GF(a & Xa) & GF!a`. Therefore. the language accepted from state (5) is `a & X(GF!a)`. Since this is equivalent to `a & GF(!a)` state (5) recognizes stutter-invariant language, but as we can see, it is not the case that all states below (5) are also marked. In fact, states (0) can also be reached via states (7) and (6), recognizing respectively `(a & X(a & GF!a)) | (!a & X(!a & GF(a & Xa) & GF!a))` and `!a & GF(a & Xa) & GF!a))`, i.e., two stutter-sensitive languages."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -1817,6 +1845,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1845,11 +1874,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"In cases where we prefer to have a forward-closed set of stutter-invariant states, it is always possible to duplicate\n",
|
"In cases where we prefer to have a forward-closed set of stutter-invariant states, it is always possible to duplicate\n",
|
||||||
"the problematic states. The `make_stutter_invariant_foward_closed_inplace()` modifies the automaton in place, and also returns an updated copie of the vector of stutter-invariant states."
|
"the problematic states. The `make_stutter_invariant_forward_closed_inplace()` modifies the automaton in place, and also returns an updated copy of the vector of stutter-invariant states."
|
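A sketch of the call, under the assumption that it takes the automaton and the previously computed vector of stutter-invariant states and returns the updated vector (the exact signature should be checked against the bindings):

```python
import spot

g = spot.formula('F(a & Xa)')
aut = spot.translate(g)
si = spot.stutter_invariant_states(aut, g)
# Duplicate problematic states so that the set of stutter-invariant states
# becomes forward-closed; the automaton is modified in place.
si = spot.make_stutter_invariant_forward_closed_inplace(aut, si)
print(sum(si), 'stutter-invariant states after duplication')
```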
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -2128,10 +2158,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Now, state 0 is no longuer a problem."
|
"Now, state 0 is no longer a problem."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -2155,10 +2186,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Let's see how infrequently the set of stutter-invarant states is not closed."
|
"Let's see how infrequently the set of stutter-invariant states is not closed."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -2277,6 +2309,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2304,10 +2337,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Here is the percentage of stutter-invarant states."
|
"Here is the percentage of stutter-invariant states."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
|
||||||
|
|
@ -28,10 +28,10 @@
|
||||||
"The process is decomposed in three steps:\n",
|
"The process is decomposed in three steps:\n",
|
||||||
"- Creating the game\n",
|
"- Creating the game\n",
|
||||||
"- Solving the game\n",
|
"- Solving the game\n",
|
||||||
"- Simplifying the winnning strategy\n",
|
"- Simplifying the winning strategy\n",
|
||||||
"- Building the circuit from the strategy\n",
|
"- Building the circuit from the strategy\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Each of these steps is parametrized by a structure called `synthesis_info`. This structure stores some additional data needed to pass fine-tuning options or to store statistics.\n",
|
"Each of these steps is parameterized by a structure called `synthesis_info`. This structure stores some additional data needed to pass fine-tuning options or to store statistics.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The `ltl_to_game` function takes the LTL specification, and the list of controllable atomic propositions (or output signals). It returns a two-player game, where player 0 plays the input variables (and wants to invalidate the acceptance condition), and player 1 plays the output variables (and wants to satisfy the output condition). The conversion from LTL to parity automata can use one of many algorithms, and can be specified in the `synthesis_info` structure (this works like the `--algo=` option of `ltlsynt`)."
|
"The `ltl_to_game` function takes the LTL specification, and the list of controllable atomic propositions (or output signals). It returns a two-player game, where player 0 plays the input variables (and wants to invalidate the acceptance condition), and player 1 plays the output variables (and wants to satisfy the output condition). The conversion from LTL to parity automata can use one of many algorithms, and can be specified in the `synthesis_info` structure (this works like the `--algo=` option of `ltlsynt`)."
|
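A minimal end-to-end sketch, with the game solved right after construction; the calls to `ltl_to_game()` and `solve_game()` follow the description above, but their exact Python-level signatures should be double-checked against the current bindings:

```python
import spot

# 'o' is the only controllable (output) proposition; 'i' is an input.
game = spot.ltl_to_game('G(i -> F o)', ['o'])
# solve_game() reports whether the output player wins, i.e., whether the
# specification is realizable.
print(spot.solve_game(game))
```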
||||||
]
|
]
|
||||||
|
|
@ -2238,7 +2238,7 @@
|
||||||
"id": "9d8d52f6",
|
"id": "9d8d52f6",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"If needed, a separated Mealy machine can be turned into game shape using `split_sepearated_mealy()`, which is more efficient than `split_2step()`."
|
"If needed, a separated Mealy machine can be turned into game shape using `split_separated_mealy()`, which is more efficient than `split_2step()`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -2744,7 +2744,7 @@
|
||||||
"id": "5c2b0b78",
|
"id": "5c2b0b78",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"It can happen that propositions declared as output are ommited in the aig circuit (either because they are not part of the specification, or because they do not appear in the winning strategy). In that case those \n",
|
"It can happen that propositions declared as output are omitted in the aig circuit (either because they are not part of the specification, or because they do not appear in the winning strategy). In that case those \n",
|
||||||
"values can take arbitrary values.\n",
|
"values can take arbitrary values.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"For instance so following constraint mention `o1` and `i1`, but those atomic proposition are actually unconstrained (`F(... U x)` can be simplified to `Fx`). Without any indication, the circuit built will ignore those variables:"
|
"For instance so following constraint mention `o1` and `i1`, but those atomic proposition are actually unconstrained (`F(... U x)` can be simplified to `Fx`). Without any indication, the circuit built will ignore those variables:"
|
||||||
|
|
@ -3412,7 +3412,7 @@
|
||||||
"2. Combine the mealy machines into one before passing it to `mealy_machine_to aig(). This currently only supports input complete machines of the same type (mealy/separated mealy/split mealy)\n",
|
"2. Combine the mealy machines into one before passing it to `mealy_machine_to aig(). This currently only supports input complete machines of the same type (mealy/separated mealy/split mealy)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Note that the method version is usually preferable as it is faster.\n",
|
"Note that the method version is usually preferable as it is faster.\n",
|
||||||
"Also note that in order for this to work, all mealy machines need to share the same `bdd_dict`. This can be ensured by passing a common options strucuture."
|
"Also note that in order for this to work, all mealy machines need to share the same `bdd_dict`. This can be ensured by passing a common options structure."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
|
||||||
|
|
@ -1,6 +1,7 @@
|
||||||
{
|
{
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -20,6 +21,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -27,6 +29,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -164,10 +167,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The graphical representation above is just a convenient representation of that automaton and hides some details. Internally, this automaton is stored as two vectors plus some additional data. All of those can be displayed using the `show_storage()` method. The two vectors are the `states` and `edges` vectors. The additional data gives the initial state, number of acceptance sets, acceptance condition, list of atomic propositions, as well as a bunch of [property flags](https://spot.lrde.epita.fr/concepts.html#property-flags) on the automaton. All those properties default to `maybe`, but some algorithms will turn them to `yes` or `no` whenever that property can be decided at very low cost (usually a side effect of the algorithm). In this example we asked for a deterministic automaton, so the output of the construction is necessarily `universal` (this means no existantial branching, hence deterministic for our purpose), and this property implies `unambiguous` and `semi_deterministic`."
|
"The graphical representation above is just a convenient representation of that automaton and hides some details. Internally, this automaton is stored as two vectors plus some additional data. All of those can be displayed using the `show_storage()` method. The two vectors are the `states` and `edges` vectors. The additional data gives the initial state, number of acceptance sets, acceptance condition, list of atomic propositions, as well as a bunch of [property flags](https://spot.lrde.epita.fr/concepts.html#property-flags) on the automaton. All those properties default to `maybe`, but some algorithms will turn them to `yes` or `no` whenever that property can be decided at very low cost (usually a side effect of the algorithm). In this example we asked for a deterministic automaton, so the output of the construction is necessarily `universal` (this means no existential branching, hence deterministic for our purpose), and this property implies `unambiguous` and `semi_deterministic`."
|
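For example, the internal vectors of a small deterministic automaton can be inspected this way:

```python
import spot

aut = spot.translate('a U b', 'deterministic')
aut.show_storage()     # displays the states/edges vectors and the property bits
```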
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -417,6 +421,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -801,6 +806,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -832,6 +838,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1199,6 +1206,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1539,6 +1547,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1546,6 +1555,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -1952,10 +1962,11 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Such an inconsistency will cause many issues when the automaton is passed to algorithm with specialized handling of universal automata. When writing an algorithm that modify the automaton, it is your responsibility to update the property bits as well. In this case it could be fixed by calling `aut.prop_universal(False); aut.prop_unambiguous(spot.trival_maybe()); ...` for each property, or by reseting all properties to `maybe` with `prop_reset()`:"
|
"Such an inconsistency will cause many issues when the automaton is passed to algorithm with specialized handling of universal automata. When writing an algorithm that modify the automaton, it is your responsibility to update the property bits as well. In this case it could be fixed by calling `aut.prop_universal(False); aut.prop_unambiguous(spot.trival_maybe()); ...` for each property, or by resetting all properties to `maybe` with `prop_reset()`:"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -2019,6 +2030,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2290,6 +2302,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2319,6 +2332,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2351,6 +2365,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2378,6 +2393,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -2601,6 +2617,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -3069,6 +3086,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -3574,12 +3592,13 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Here we have created two universal transitions: `0->[0,2]` and `2->[0,1]`. The destination groups `[0,2]` and `[0,1]` are stored in a integer vector called `dests`. Each group is encoded by its size immediately followed by the state numbers of the destinations. So group `[0,2]` get encoded as `2,0,2` at position `0` of `dests`, and group `[0,1]` is encoded as `2,0,1` at position `3`. Each group is denoted by the index of its size in the `dests` vector. When an edge targets a destination group, the complement of that destination index is written in the `dst` field of the `edges` entry, hence that `~0` and `~3` that appear here. Using a complement like this allows us to quickly detect universal edges by looking at the sign bit if their `dst` entry.\n",
|
"Here we have created two universal transitions: `0->[0,2]` and `2->[0,1]`. The destination groups `[0,2]` and `[0,1]` are stored in a integer vector called `dests`. Each group is encoded by its size immediately followed by the state numbers of the destinations. So group `[0,2]` get encoded as `2,0,2` at position `0` of `dests`, and group `[0,1]` is encoded as `2,0,1` at position `3`. Each group is denoted by the index of its size in the `dests` vector. When an edge targets a destination group, the complement of that destination index is written in the `dst` field of the `edges` entry, hence that `~0` and `~3` that appear here. Using a complement like this allows us to quickly detect universal edges by looking at the sign bit if their `dst` entry.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"To work on alternating automata, one can no longuer just blindingly use the `dst` field of outgoing iterations:"
|
"To work on alternating automata, one can no longer just blindingly use the `dst` field of outgoing iterations:"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -3609,6 +3628,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -3642,6 +3662,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -3675,6 +3696,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -4238,6 +4260,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -5302,6 +5325,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -5309,6 +5333,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
@ -5611,6 +5636,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
|
|
||||||
|
|
@ -243,7 +243,7 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"To convert the run into a word, using `spot.twa_word()`. Note that our runs are labeled by Boolean formulas that are not necessarily a conjunction of all involved litterals. The word is just the projection of the run on its labels."
|
"To convert the run into a word, using `spot.twa_word()`. Note that our runs are labeled by Boolean formulas that are not necessarily a conjunction of all involved literals. The word is just the projection of the run on its labels."
|
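For example, an accepting run can be extracted and projected like this (a small sketch; `accepting_run()` produces a run of the automaton, and `twa_word()` performs the projection described above):

```python
import spot

aut = spot.translate('a U b')
run = aut.accepting_run()     # a twa_run, labeled by Boolean formulas
word = spot.twa_word(run)     # keep only the labels
print(word)
```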
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
|
||||||
|
|
@ -12,6 +12,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "fb656100",
|
"id": "fb656100",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -21,17 +22,18 @@
|
||||||
"\n",
|
"\n",
|
||||||
"These two structures are used to decompose an acceptance condition (or automaton) into trees that alternate accepting and rejecting elements in order to help converting an automaton to parity acceptance. Spot implements those structures, includes some display code to better explore them iteractively, and finally use them to implement a transformation to parity acceptance.\n",
|
"These two structures are used to decompose an acceptance condition (or automaton) into trees that alternate accepting and rejecting elements in order to help converting an automaton to parity acceptance. Spot implements those structures, includes some display code to better explore them iteractively, and finally use them to implement a transformation to parity acceptance.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"For a formal tratement of these, in a slightly different formalism, see [Optimal Transformations of Games and Automata Using Muller Conditions](https://arxiv.org/abs/2011.13041) by Casares, Colcombet, and Fijalkow. In Spot those definitions have been adapted to use Emerson-Lei acceptance, and support transitions labeled by multiple colors (the main differences are for the Zielonka Tree, the ACD is almost identical)."
|
"For a formal treatment of these, in a slightly different formalism, see [Optimal Transformations of Games and Automata Using Muller Conditions](https://arxiv.org/abs/2011.13041) by Casares, Colcombet, and Fijalkow. In Spot those definitions have been adapted to use Emerson-Lei acceptance, and support transitions labeled by multiple colors (the main differences are for the Zielonka Tree, the ACD is almost identical)."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "4e8b5d3f",
|
"id": "4e8b5d3f",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"# Zielonka Tree\n",
|
"# Zielonka Tree\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The Zielonka tree is built from an acceptance formula and is labeled by sets of colors. The root contains all colors used in the formula. If seing infinitely all colors of one node would satisfy the acceptance condition, we say that the node is accepting and draw it with an ellipse, otherwise is is rejecting and drawn with a rectangle. The children of an accepting (resp. rejecting) node, are the largest subsets of colors that are rejecting (resp. accepting).\n",
|
"The Zielonka tree is built from an acceptance formula and is labeled by sets of colors. The root contains all colors used in the formula. If seeing infinitely all colors of one node would satisfy the acceptance condition, we say that the node is accepting and draw it with an ellipse, otherwise is is rejecting and drawn with a rectangle. The children of an accepting (resp. rejecting) node, are the largest subsets of colors that are rejecting (resp. accepting).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Here is an example:"
|
"Here is an example:"
|
||||||
]
|
]
|
||||||
|
|
@ -231,6 +233,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "d7629725",
|
"id": "d7629725",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -241,7 +244,7 @@
|
||||||
"\n",
|
"\n",
|
||||||
"This tree is also layered: all nodes in each layers are alternatively rejecting and accepting. Layers are numbered incrementally from 0 at the root. In this example, leaves are in layer 3. Since it is conventional to put the root at the top, we will say that a node is high in the tree when it has a small level.\n",
|
"This tree is also layered: all nodes in each layers are alternatively rejecting and accepting. Layers are numbered incrementally from 0 at the root. In this example, leaves are in layer 3. Since it is conventional to put the root at the top, we will say that a node is high in the tree when it has a small level.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In this example, odd levels are accepting: we say the tree is odd. On another example, it could be the other way arround. The `is_even()` method tells us which way it is."
|
"In this example, odd levels are accepting: we say the tree is odd. On another example, it could be the other way around. The `is_even()` method tells us which way it is."
|
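A small sketch of that query, assuming the `zielonka_tree` constructor accepts an `acc_cond` such as the one returned by `aut.acc()`:

```python
import spot

aut = spot.translate('(GFa & GFb) | FGc', 'generic')
zt = spot.zielonka_tree(aut.acc())
print(zt.is_even())   # True if the even levels are the accepting ones
```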
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -266,6 +269,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "15fbd4e6",
|
"id": "15fbd4e6",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -295,6 +299,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "de4cdc45",
|
"id": "de4cdc45",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -329,6 +334,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "4c3bf70b",
|
"id": "4c3bf70b",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -358,6 +364,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "0d865f30",
|
"id": "0d865f30",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -387,6 +394,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "8b6b3928",
|
"id": "8b6b3928",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -430,16 +438,18 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "656e05f4",
|
"id": "656e05f4",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"If we imagine an run looping on tree transitions labeled by `[1]`, `[0]`, `[3]`, we know (from the original acceptance condition) that it should be accepting. Infinite repetition of the `step()` procedure will emit many levels, but the smallest level we see infinitely often is `1`. It corresponds to node 2, labeled by `{0,1,3}`: this is the highest node we visit infinitely often while steping through this tree in a loop.\n",
|
"If we imagine an run looping on tree transitions labeled by `[1]`, `[0]`, `[3]`, we know (from the original acceptance condition) that it should be accepting. Infinite repetition of the `step()` procedure will emit many levels, but the smallest level we see infinitely often is `1`. It corresponds to node 2, labeled by `{0,1,3}`: this is the highest node we visit infinitely often while stepping through this tree in a loop.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Similarly, a loop of two transitions labeled by `[1]` and `[3]` should be rejecting. Stepping through the tree will emit infinitely many 2, a rejecting level."
|
"Similarly, a loop of two transitions labeled by `[1]` and `[3]` should be rejecting. Stepping through the tree will emit infinitely many 2, a rejecting level."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "5c7014e9",
|
"id": "5c7014e9",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -450,6 +460,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "2750cb1d",
|
"id": "2750cb1d",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -1078,11 +1089,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "ea452913",
|
"id": "ea452913",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Here the parity automaton output has as many proprities as there are levels in the Zielonka tree.\n",
|
"Here the parity automaton output has as many priorities as there are levels in the Zielonka tree.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The call to `copy_state_names_from()` above causes the states to be labeled by strings of the form `orig#leaf` when `orig` is the original state number, and `leaf` is a leaf of the Zielonka tree.\n",
|
"The call to `copy_state_names_from()` above causes the states to be labeled by strings of the form `orig#leaf` when `orig` is the original state number, and `leaf` is a leaf of the Zielonka tree.\n",
|
||||||
"\n",
|
"\n",
|
||||||
|
|
@ -1111,6 +1123,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "da8d9e97",
|
"id": "da8d9e97",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -1716,6 +1729,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "9bf70138",
|
"id": "9bf70138",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -1756,6 +1770,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "147a71a6",
|
"id": "147a71a6",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -2718,6 +2733,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "77db26c3",
|
"id": "77db26c3",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -2727,7 +2743,7 @@
|
||||||
"The `zielonka_tree` class accepts a few options that can alter its behaviour.\n",
|
"The `zielonka_tree` class accepts a few options that can alter its behaviour.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Options `CHECK_RABIN`, `CHECK_STREETT`, `CHECK_PARITY` can be combined with\n",
|
"Options `CHECK_RABIN`, `CHECK_STREETT`, `CHECK_PARITY` can be combined with\n",
|
||||||
"`ABORT_WRONG_SHAPE` to abort the construction as soon as it is detected that the Zielonka tree has the wrong shape. When this happens, the number of branchs of the tree is set to 0.\n",
|
"`ABORT_WRONG_SHAPE` to abort the construction as soon as it is detected that the Zielonka tree has the wrong shape. When this happens, the number of branches of the tree is set to 0.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"For instance we can check that the original acceptance condition does not behaves like a Parity condition."
|
"For instance we can check that the original acceptance condition does not behaves like a Parity condition."
|
||||||
]
|
]
|
||||||
|
|
@ -2755,6 +2771,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "4786f64c",
|
"id": "4786f64c",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -2941,6 +2958,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "9d7688b3",
|
"id": "9d7688b3",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -2949,6 +2967,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "75838579",
|
"id": "75838579",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -2985,6 +3004,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "cb546bc2",
|
"id": "cb546bc2",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4229,6 +4249,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "0f2f00c4",
|
"id": "0f2f00c4",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4260,6 +4281,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "3a3db431",
|
"id": "3a3db431",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4270,6 +4292,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "6595333d",
|
"id": "6595333d",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4327,6 +4350,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "1c6d4fe9",
|
"id": "1c6d4fe9",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4421,6 +4445,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "ad201f45",
|
"id": "ad201f45",
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
|
@ -4439,6 +4464,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "04d7cc51",
|
"id": "04d7cc51",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4468,6 +4494,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "1015abb6",
|
"id": "1015abb6",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -4476,6 +4503,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "b89a6186",
|
"id": "b89a6186",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -5106,6 +5134,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "f039aeaa",
|
"id": "f039aeaa",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -5135,6 +5164,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "07aaab3a",
|
"id": "07aaab3a",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -5746,11 +5776,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "f14ee428",
|
"id": "f14ee428",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"This transformation can have substantiually fewer states than the one based on Zielonka tree, because the branches are actually restricted to only those that matter for a given state."
|
"This transformation can have substantially fewer states than the one based on Zielonka tree, because the branches are actually restricted to only those that matter for a given state."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -5796,6 +5827,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "ce62b966",
|
"id": "ce62b966",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -5804,6 +5836,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "e3d0ff64",
|
"id": "e3d0ff64",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -5842,19 +5875,21 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "09ec9887",
|
"id": "09ec9887",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Calling the above methods withtout passing the relevant option will raise an exception."
|
"Calling the above methods without passing the relevant option will raise an exception."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "816ee0eb",
|
"id": "816ee0eb",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Additonally, when the goal is only to check some typeness, the construction of the ACD can be aborted as soon as the typeness is found to be wrong. This can be enabled by passing the additional option `spot.acd_options_ABORT_WRONG_SHAPE`. In case the construction is aborted the ACD forest will be erased (to make sure it is not used), and `node_count()` will return 0."
|
"Additionally, when the goal is only to check some typeness, the construction of the ACD can be aborted as soon as the typeness is found to be wrong. This can be enabled by passing the additional option `spot.acd_options_ABORT_WRONG_SHAPE`. In case the construction is aborted the ACD forest will be erased (to make sure it is not used), and `node_count()` will return 0."
|
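A minimal sketch of building the ACD of an automaton and querying its size; the `spot.acd(aut)` constructor call is an assumption, while `acd_options_ABORT_WRONG_SHAPE` and `node_count()` come from the text above (with `ABORT_WRONG_SHAPE` combined with a typeness check, an aborted construction would report `node_count() == 0`):

```python
import spot

aut = spot.translate('(GFa & GFb) | FGc', 'generic')
theacd = spot.acd(aut)      # build the alternating cycle decomposition
print(theacd.node_count())  # number of nodes in the ACD forest
```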
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -5904,16 +5939,17 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "f380ca5f",
|
"id": "f380ca5f",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## State-based transformation\n",
|
"## State-based transformation\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The ACD usage can be modified slightly in order to produce a state-based automaton. The rules for stepping through the ACD are similar, except that when we detect that a cycle through all children of a node has been done, we return the current node without going to the leftmost leave of the next children. When stepping a transition from a node a child, we should act as if we were in the leftmost child of that node containing the source of tha transition. Stepping through a transition this way do not emit any color, instead the color of a state will be the level of its associated node.\n",
|
"The ACD usage can be modified slightly in order to produce a state-based automaton. The rules for stepping through the ACD are similar, except that when we detect that a cycle through all children of a node has been done, we return the current node without going to the leftmost leave of the next children. When stepping a transition from a node a child, we should act as if we were in the leftmost child of that node containing the source of that transition. Stepping through a transition this way do not emit any color, instead the color of a state will be the level of its associated node.\n",
|
||||||
"(This modified transformation do not claim to be optimal, unlike the transition-based version.)\n",
|
"(This modified transformation do not claim to be optimal, unlike the transition-based version.)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The `ORDER_HEURISTIC` used below will be explained in the next section, it justs alters ordering of children of the ACD in a way that the state-based transformation prefers."
|
"The `ORDER_HEURISTIC` used below will be explained in the next section, it just alters ordering of children of the ACD in a way that the state-based transformation prefers."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -7165,6 +7201,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "98a1474c",
|
"id": "98a1474c",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -7258,11 +7295,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "c9725681",
|
"id": "c9725681",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The state-based version of the ACD transformation is `acd_transform_sbacc()`. In the output, the cycle between `2#8` and `2#0` corresponds to the repeatitions of edges 9 and 10 we stepped through above."
|
"The state-based version of the ACD transformation is `acd_transform_sbacc()`. In the output, the cycle between `2#8` and `2#0` corresponds to the repetitions of edges 9 and 10 we stepped through above."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -8081,6 +8119,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "5b770f23",
|
"id": "5b770f23",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -8108,6 +8147,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "9f39b61d",
|
"id": "9f39b61d",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -8649,6 +8689,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "03f0eda7",
|
"id": "03f0eda7",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -8657,11 +8698,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "45596824",
|
"id": "45596824",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The `ORDER_HEURISTIC` option of the ACD construction, attemps to order the children of a node by decreasing number of number of successors that are out of the node. It is activated by default inside `spot.acd_transform_sbacc()`."
|
"The `ORDER_HEURISTIC` option of the ACD construction, attempts to order the children of a node by decreasing number of number of successors that are out of the node. It is activated by default inside `spot.acd_transform_sbacc()`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|
@ -9170,6 +9212,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "15f094c0",
|
"id": "15f094c0",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -10220,6 +10263,7 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "36629c32",
|
"id": "36629c32",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|
@ -11118,11 +11162,12 @@
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "7d638d20",
|
"id": "7d638d20",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"An issue in Spot is to always ensure that property bits of automata (cleaming that an automaton is weak, inherently weak, deterministic, etc.) are properly preserved or reset.\n",
|
"An issue in Spot is to always ensure that property bits of automata (claiming that an automaton is weak, inherently weak, deterministic, etc.) are properly preserved or reset.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Here if the input is inherently weak, the output should be weak. "
|
"Here if the input is inherently weak, the output should be weak. "
|
||||||
]
|
]
|
||||||
|
|
|
||||||