utwente-fmt / ltsmin

The LTSmin model checking toolset

Home Page: http://ltsmin.utwente.nl

License: BSD 3-Clause "New" or "Revised" License


LTSmin

What is LTSmin

LTSmin started out as a generic toolset for manipulating labelled transition systems. It has since been extended to a full (LTL/CTL/μ-calculus) model checker, while maintaining its language-independent characteristics.

To obtain its input, LTSmin connects to a sizeable number of existing (verification) tools: muCRL, mCRL2, DiVinE, SPIN (SpinS), UPPAAL (opaal), SCOOP, PNML, ProB and CADP. Moreover, it allows existing tools to be reused with new state-space generation techniques by exporting LTSs in various formats.

Implementing support for a new language module is in the order of 200--600 lines of C "glue" code, and automatically yields tools for standard reachability checking (e.g., for state space generation and verification of safety properties), reachability with symbolic state storage (vector set), fully symbolic (BDD-based) reachability, distributed reachability (MPI-based), and multi-core reachability (including multi-core compression and incremental hashing).

The synergy effects in the LTSmin implementation are enabled by a clean interface: all LTSmin modules work with a unifying state representation of integer vectors of fixed size, and the PINS dependency matrix which exploits the combinatorial nature of model checking problems. This splits model checking tools into three mostly independent parts: language modules, PINS optimizations, and model checking algorithms.
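To make this interface concrete, the following minimal C sketch (not LTSmin's actual API; all names and sizes here are hypothetical) shows the unifying representation: a state is a fixed-size integer vector, and a dependency matrix records which variables each transition group touches.

```c
#include <stdbool.h>

#define VECTOR_LEN 4   /* fixed state-vector size, chosen per model */
#define NUM_GROUPS 3   /* number of transition groups */

/* A state is simply a fixed-size vector of integers. */
typedef struct { int v[VECTOR_LEN]; } state_t;

/* Dependency matrix: dep[g][i] is true iff transition group g reads
 * or writes variable i. Rows are typically sparse in practice. */
static const bool dep[NUM_GROUPS][VECTOR_LEN] = {
    { true,  false, false, false },  /* group 0 touches only v[0]     */
    { false, true,  true,  false },  /* group 1 touches v[1] and v[2] */
    { false, false, false, true  },  /* group 2 touches only v[3]     */
};

/* Number of variables a group depends on (its "projection" width). */
int group_width(int g) {
    int n = 0;
    for (int i = 0; i < VECTOR_LEN; i++)
        if (dep[g][i]) n++;
    return n;
}
```

Because the rows are sparse, algorithms can restrict their work per group to a handful of variables instead of the whole vector.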

On the other hand, the implementation of a verification algorithm based on the PINS matrix automatically has access to muCRL, mCRL2, DVE, PROMELA, SCOOP, UPPAAL xml, PNML, ProB, and ETF language modules.

Finally, all tools benefit from PINS2PINS optimizations, like local transition caching (which speeds up slow state space generators), matrix regrouping (which can drastically reduce run-time and memory consumption of symbolic algorithms), partial order reduction and linear temporal logic.

Algorithmic Backends

LTSmin offers different analysis algorithms covering four disciplines (algorithmic backends):

  • Symbolic CTL/mu-calculus model checking using different BDD/MDD packages, including the parallel BDD package Sylvan,
  • Multi-core LTL model checking using tree compression,
  • Sequential LTL model checking, with partial-order reduction and optional BDD-based state storage, and
  • Distributed LTS instantiation, export, and minimization modulo branching/strong bisimulation.

These algorithms each have their own strengths, depending on the structure of the input model and the verification problem. For models with combinatorial state structure, the symbolic tools can process billions of states per second using little memory. Models that exhibit much concurrency can be reduced significantly using partial-order reduction. Models with more dependencies can be explored with LTSmin's multi-core algorithms, which employ aggressive lossless state compression using a concurrent tree data structure. Finally, our distributed minimization techniques can aid the verification of multiple properties on a single state space.

Language Front-ends

LTSmin supports language independence via its definition of a Partitioned Next-State Interface (PINS), which exposes enough internal structure of the input model to enable the highly effective algorithms above, while at the same time making it easy to connect your own language module. The interface is also simple enough to support very fast next-state functions such as SPIN's (performance comparison here). LTSmin already connects a sizeable number of existing verification tools as language modules, enabling the use of their modeling formalisms:

Moreover, LTSmin supports multiple export formats for state spaces, which allows interoperability with other tools like CADP.

The PINS Interface

The Partitioned Next-State Interface (PINS) splits up the next-state function into separate transition groups. For example, each transition group can represent a line of code in an imperative language module or a summand in a process-algebraic language module. Using the static dependency information between transition groups and state-vector variables (most groups depend on only a few variables), LTSmin's algorithms can exploit the combinatorial structure of the state space. This leads to exponential gains in performance for the symbolic algorithms, which can now learn the transition relation on the fly very effectively, because it is partitioned. Using the same principle, LTSmin provides transition storing and caching with negligible memory overhead.
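The projection idea behind this can be illustrated with a small sketch (not the actual dm.c code; names here are hypothetical): restricting a full state vector to just the variables a transition group depends on yields the short vectors that make partitioned learning and caching cheap.

```c
#include <stdbool.h>

#define VECTOR_LEN 4

/* Copy the entries of src selected by a dependency-matrix row into
 * tgt, returning how many entries were copied. tgt must have room
 * for the group's width. (A sketch of the projection idea only.) */
int project_vector(const bool row[VECTOR_LEN],
                   const int  src[VECTOR_LEN],
                   int        tgt[]) {
    int k = 0;
    for (int i = 0; i < VECTOR_LEN; i++)
        if (row[i])
            tgt[k++] = src[i];   /* keep only the selected variables */
    return k;
}
```

For a group that depends on two of four variables, every learned transition and every cache entry then concerns a length-2 vector rather than a length-4 one.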

To connect a new language module, one merely needs to implement the PINS next-state functions and provide some type information on the state vector contents, which should be encoded according to PINS's unifying integer-vector format. By providing additional transition/state dependency information via the PINS dependency matrices, the symbolic exploration algorithms and PINS2PINS modules (see below) can exploit their full potential. Finally, by providing a little additional information on transition guards, the partial-order reduction algorithms can be enabled.
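As a rough illustration of the amount of "glue" involved (the real PINS signatures differ and report successors through a callback; everything below is hypothetical), a next-state function for a toy two-counter model could look like:

```c
/* Sketch of a toy "language module": two counters, each of which can
 * be incremented up to a bound. The real PINS next-state functions
 * report successors via a callback; this simplified sketch writes
 * into dst directly. All names are illustrative, not LTSmin's API. */
#define VECTOR_LEN 2
#define BOUND 3

/* Next-state function for one transition group: group g increments
 * counter g if it is below BOUND. Returns the number of successors
 * written to dst (0 or 1 here). */
int next_state(int group, const int *src, int *dst) {
    if (src[group] >= BOUND)
        return 0;                      /* guard fails: no successor */
    for (int i = 0; i < VECTOR_LEN; i++)
        dst[i] = src[i];               /* copy unchanged variables  */
    dst[group]++;
    return 1;
}
```

Note how the guard (`src[group] >= BOUND`) and the update are per-group, which is exactly what the dependency matrices describe.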

PINS2PINS Wrappers

The PINS interface divides our model checking tools cleanly into the two independent parts discussed above: language modules and model checking algorithms. However, it also enables us to create PINS2PINS modules, which reside between the language module and the algorithm and modify or optimize the next-state function. These PINS2PINS modules can benefit all algorithmic backends and can be turned on and off on demand:

  • transition storing/caching speeds up slow language modules,
  • regrouping speeds up the symbolic algorithms by optimizing dependencies, and
  • partial order reduction reduces the state space by dropping irrelevant transitions.
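The first bullet can be sketched as follows (a toy illustration, not LTSmin's caching layer; real caches key on projected short vectors per group, hash collisions are handled, and all names here are hypothetical):

```c
/* Sketch of the transition-caching idea behind a PINS2PINS wrapper:
 * memoize the successor of each (group, state) pair so that a slow
 * language module is queried at most once per pair. This toy keys
 * on one integer and ignores hash collisions for brevity. */
#define CACHE_SLOTS 64

static int cache_val[CACHE_SLOTS];
static int cache_full[CACHE_SLOTS];

int slow_calls;                        /* how often the module ran */

/* Stand-in for an expensive language-module successor function. */
static int slow_successor(int group, int state) {
    slow_calls++;
    return state + group + 1;
}

/* The wrapper: same interface as the wrapped function, but cached. */
int cached_successor(int group, int state) {
    int slot = (group * 31 + state) % CACHE_SLOTS;
    if (!cache_full[slot]) {
        cache_val[slot] = slow_successor(group, state);  /* miss */
        cache_full[slot] = 1;
    }
    return cache_val[slot];            /* hit: module not queried */
}
```

Because the wrapper exposes the same next-state interface it consumes, any backend can sit on top of it unchanged, which is the essence of PINS2PINS.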

(Figure: the PINS interface)

Furthermore, we implement linear temporal logic (LTL) as a PINS2PINS module, which is automatically turned on when an LTL formula is supplied and transforms the state space on-the-fly by calculating the cross product with the formula.
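The cross-product construction can be sketched as follows (a toy illustration with a hypothetical two-state automaton; this is not LTSmin's LTL layer):

```c
/* Sketch of the on-the-fly cross product performed by an LTL layer:
 * a product state pairs a model state with a Büchi-automaton state,
 * and a product step advances both at once. All names hypothetical. */
typedef struct { int model; int buchi; } prod_state_t;

/* Toy model: state n steps to n+1. Toy two-state automaton: from
 * state q, reading a model state, move per the parity of the state. */
static int buchi_step(int q, int model_state) {
    return (model_state % 2 == 0) ? q : 1 - q;
}

prod_state_t prod_next(prod_state_t s) {
    prod_state_t t;
    t.model = s.model + 1;                   /* model successor      */
    t.buchi = buchi_step(s.buchi, t.model);  /* automaton reads it   */
    return t;
}
```

Since product states are produced only as they are reached, the full product never needs to be built up front, which is what "on-the-fly" buys.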

References

If you refer to the LTSmin toolset in your academic paper, we would appreciate it if you used one of the following references (depending on the part(s) of the toolset that you are referring to):

  • Gijs Kant, Alfons Laarman, Jeroen Meijer, Jaco van de Pol, Stefan Blom and Tom van Dijk: LTSmin: High-Performance Language-Independent Model Checking. TACAS 2015
  • Alfons Laarman, Jaco van de Pol and Michael Weber. Multi-Core LTSmin: Marrying Modularity and Scalability. NFM 2011
  • Stefan Blom, Jaco van de Pol and Michael Weber. LTSmin: Distributed and Symbolic Reachability. CAV 2010

Contributors

Ordered by the number of commits (January 2017), LTSmin's contributors are:

Selected Documentation

  • ltsmin (LTSmin's main man page)
  • lps2lts-sym (BDD-based reachability with mCRL2 frontend)
  • dve2lts-mc (multi-core reachability with DiVinE 2 frontend)
  • prom2lts-mc (multi-core reachability with Promela SpinS frontend)
  • lpo2lts-seq (sequential enumerative reachability with muCRL frontend)
  • etf2lts-dist (distributed reachability with ETF frontend)
  • lps2torx (TorX testing tool connector with mCRL2 frontend)
  • pbes2lts-sym (Symbolic reachability tool with PBES frontend: example)
  • pins2lts-sym (Symbolic reachability tool using the dlopen API: tutorial)
  • mapa2lts-sym (Symbolic reachability tool with MAPA frontend)
  • pnml2lts-sym (Symbolic reachability tool with PNML frontend)
  • prob2lts-sym (Symbolic reachability tool with ProB frontend)

More man pages can be found here.

Supported Systems

  • GNU/Linux (tested on Arch Linux, Ubuntu, Debian, OpenSuSE 11.2 and Red Hat Enterprise Linux 6)
  • MacOS X, version 10.10 "Yosemite"
  • MacOS X, version 10.7 "Lion"
  • MacOS X, version 10.6 "Snow Leopard" (no multi-core muCRL/mCRL2)
  • MacOS X, version 10.5 "Leopard" (no multi-core muCRL/mCRL2)
  • Cygwin/Windows (tested on Windows 7 with Cygwin 1.7)
  • Windows (native, with MinGW/MSYS)

For the use of the multi-core BDD package Sylvan and the multi-core reachability algorithms (*2lts-mc), we further recommend using a 64-bit OS.

Installation Instructions

If you are building the software from a Git repository rather than a release tarball, refer to Section Building from a Git Repository for additional set-up instructions. Then install the dependencies listed in Section Build Dependencies below.

# Unpack the tarball
$ tar xvzf ltsmin-<version>.tar.gz
$ cd ltsmin-<version>

# Configure
$ ./configure --disable-dependency-tracking --prefix /path/

Building a Windows target

If you are building a Windows target with MinGW you need to pass additional variables and flags to ./configure:

  1. You need to set the correct C compiler: CC=x86_64-w64-mingw32-gcc,
  2. You need to set the correct C++ compiler with the correct threading model: CXX=x86_64-w64-mingw32-g++-posix,
  3. You need to set the correct host triplet: --host=x86_64-w64-mingw32, and
  4. You need to add additional linker flags: LDFLAGS='-static-libgcc -static-libstdc++ -Wl,-Bstatic,--whole-archive -Wl,-lwinpthread -L/path/to/libmman.a -lmman -Wl,--no-whole-archive'.

So the whole command line becomes:

$ ./configure CC=x86_64-w64-mingw32-gcc CXX=x86_64-w64-mingw32-g++-posix --host=x86_64-w64-mingw32 LDFLAGS='-static-libgcc -static-libstdc++ -Wl,-Bstatic,--whole-archive -Wl,-lwinpthread -L/path/to/libmman.a -lmman -Wl,--no-whole-archive' PKG_CONFIG_PATH=/path/to/Windows/packages/lib/pkgconfig  --disable-dependency-tracking --prefix /path/ 

Note that this example assumes the MinGW compiler that is shipped by default with Debian/Ubuntu, hence the host triplet is x86_64-w64-mingw32. If you are using MXE, the host triplet to use is x86_64-w64-mingw32.static, and you may omit specifying the variables CC, and CXX. However, you must specify the location of pkgconf packages using PKG_CONFIG_PATH_x86_64_w64_mingw32_static, instead of PKG_CONFIG_PATH.

Some notes on configuring pkgconf

If you have installed pkgconf in a non-standard location, you may need to specify the location of the pkg-config utility, e.g.:

$ ./configure --disable-dependency-tracking --prefix /path/ PKG_CONFIG="/path/to/pkg-config"

If you have installed dependencies (e.g. Sylvan) in a non-standard location, you may need to tell pkgconf to search for *.pc files in this non standard location. E.g. if Sylvan is installed in /opt/local/ and Sylvan's sylvan.pc is located in /opt/local/lib/pkgconfig, you need to set PKG_CONFIG_PATH to /opt/local/lib/pkgconfig:

$ ./configure --disable-dependency-tracking --prefix /path/ PKG_CONFIG_PATH="/opt/local/lib/pkgconfig"

Note that as usual, you can separate multiple paths to *.pc files with :.

It is a good idea to check the output of ./configure, to see whether all dependencies were found.

# Build
$ make

# Install
$ make install

You can also run tests with

# Run tests
$ make check

Additional Build Options

configure options

For one-shot builds, the following option speeds up the build process by not recording dependencies:

./configure --disable-dependency-tracking ...

Non-standard compilers, etc., can be configured by using variables:

./configure CFLAGS='-O3 -m64' MPICC=/sw/openmpi/1.2.8/bin/mpicc ...

This would add some options to the standard CFLAGS settings used for building, to enable more optimizations and force a 64-bit build (for the GCC C compiler). Furthermore, the MPI compiler wrapper is set explicitly instead of searching it in the current shell PATH.

Note that libraries installed in non-standard places need special attention: to be picked up by the configure script, library and header search paths must be added, e.g.:

./configure LDFLAGS=-L/opt/local/lib CPPFLAGS=-I/opt/local/include

Additional setting of (DY)LD_LIBRARY_PATH might be needed for the dynamic linker/loader (see, e.g., "man ld.so" or "man dyld").

See ./configure --help for the list of available variables, and file INSTALL for further details.

Static Linking

If you want to configure LTSmin to statically link binaries, LTSmin needs to run pkg-config with the --static flag. This will resolve additional flags required for static linking, e.g.:

$ ./configure --enable-pkgconf-static

make targets

The following additional make targets are supported:

mostlyclean::
clean::
    Clean the source tree.

doxygen-doc::
    Builds Doxygen documentation for the source code.

Build Dependencies

We list the external libraries and tools which are required to build this software.

If you want to compile a Windows target using MinGW we recommend using MXE. MXE provides pre-built Windows libraries as Debian packages. This is useful if you don't want to compile LTSmin's dependencies manually.

popt

Download popt (>= 1.7) from http://rpm5.org/files/popt/. We tested with popt 1.16. If you are compiling popt for Windows with MinGW make sure to apply this patch.

zlib

Download zlib from http://www.zlib.net/.

GNU make

Download GNU make from http://www.gnu.org/software/make/.

Apache Ant

Download Apache Ant from http://ant.apache.org/. We tested with ant 1.7.1. Note that ant is not required for building from a distribution tarball (unless Java files were modified). Note that we require JavaCC task support for Ant.

pkgconf

Download pkgconf from https://github.com/pkgconf/pkgconf.

mman (Windows only)

If you are compiling LTSmin for Windows using MinGW, you need an additional wrapper library for mman, which can be downloaded from: https://github.com/witwall/mman-win32/.

Optional Dependencies

Sylvan

To build the parallel symbolic algorithms, Sylvan (>=1.4) is required. If Sylvan is installed in a non-standard location please refer to this note on aclocal and pkgconf, and this note on configuring pkgconf.

muCRL

Download muCRL (>= 2.18.5) from http://www.cwi.nl/~mcrl/mutool.html. We tested with muCRL-2.18.5. Without muCRL, the AtermDD decision diagram package will not be built.

Note that for 64-bit builds, you have to configure muCRL explicitly for this (otherwise, a faulty version is silently built):

./configure --with-64bit

For best performance, we advise configuring muCRL like this:

./configure CC='gcc -O2' --with-64bit

mCRL2

Download the latest version of mCRL2 from http://www.mcrl2.org/.

Build and install mCRL2:

  cmake . -DCMAKE_INSTALL_PREFIX=...
  make
  make install

The graphical tools of mCRL2 are not required for LTSmin to work, so you can also build mCRL2 without them:

cmake . -DMCRL2_ENABLE_GUI_TOOLS=OFF -DCMAKE_INSTALL_PREFIX=...

Note: to enable the PBES tools use the latest svn version of mCRL2.

CADP

See the CADP website http://www.inrialpes.fr/vasy/cadp/ on how to obtain a license and download the CADP toolkit.

TBB malloc

For multi-core reachability on timed automata with the opaal frontend, install TBB malloc. Scalability can be limited without a concurrent allocator, hence the opaal2lts-mc frontend is not compiled in the absence of TBB malloc. See http://threadingbuildingblocks.org/file.php?fid=77

DiVinE 2

Download version 2.4 of DiVinE from http://divine.fi.muni.cz/. We tested with version 2.4. Apply the patch from <contrib/divine-2.4.patch> to the DiVinE source tree:

cd divine-2.4
patch -p1 < /path/to/ltsmin/contrib/divine-2.4.patch

Alternatively, download an LTSmin-enabled version of DiVinE via git:

git clone https://github.com/utwente-fmt/divine2.git

Build with:

cd divine2
mkdir _build && cd _build
cmake .. -DGUI=OFF -DRX_PATH= -DCMAKE_INSTALL_PREFIX=... -DMURPHI=OFF
make
make install

On MacOS X, the option -DHOARD=OFF might have to be added to the cmake command line to make it compile without errors. Also, on MacOS X, DiVinE 2 only compiles with the GNU C++ standard library; thus one has to provide the option -DCMAKE_CXX_FLAGS="-stdlib=libstdc++".

The LTSmin configure script will find the DiVinE installation automatically, if the divine binary is in the search path.

With suitable options, DiVinE's divine compile DVE compiler produces LTSmin-compatible libraries:

divine compile -l model.dve

This produces a file model.dve2C, which can also be passed to LTSmin tools. (This step is done automatically by the LTSmin tools when passing in a .dve model; doing it manually is rarely needed.)
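Since the compiled model is an ordinary shared object, a tool can load it at run time with the standard dlopen API. The sketch below only illustrates that loading step; the exported symbol mentioned in the comment is hypothetical, not LTSmin's actual entry point.

```c
#include <dlfcn.h>
#include <stddef.h>

/* Sketch: a compiled model (e.g. model.dve2C) is a shared object
 * that a PINS tool can load at run time with dlopen and query for
 * its entry points via dlsym. Returns NULL if loading fails. */
void *load_model(const char *path) {
    void *handle = dlopen(path, RTLD_LAZY);
    if (handle == NULL)
        return NULL;                   /* not a loadable model */
    /* A tool would then look up the module's (hypothetical) entry
     * point, e.g.: dlsym(handle, "pins_model_init"); */
    return handle;
}
```

This is the same mechanism the dlopen-based pins2lts-* tools rely on for plugging in compiled models.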

Scoop

See the scoop/README file.

SpinS / Promela

Use SpinS (distributed as a submodule of LTSmin) to compile the PROMELA model leader.pm to 'leader.pm.spins':

    spins -o3 leader.pm

The optional flag -o3 turns off control flow optimizations.

The resulting compiled SpinS module can be loaded by all SpinS-related LTSmin tools (prom*).

opaal / UPPAAL XML

Download opaal from https://code.launchpad.net/~opaal-developers/opaal/opaal-ltsmin-succgen. Follow the included installation instructions.

UPPAAL xml models can be compiled to PINS binaries (.xml.so) using the included opaal_ltsmin script.

opaal_ltsmin --only-compile model.xml

The multi-core LTSmin tool (opaal2lts-mc) can explore the model. For good performance, provide the TBB allocator location at run time: LD_PRELOAD=/usr/lib/libjemalloc.so.1 opaal2lts-mc <model.xml.so>

libDDD

Download libDDD (>= 1.7) from http://move.lip6.fr/software/DDD/. We tested with libDDD 1.7.

MPI

In principle, any MPI library which supports MPI-IO should work. However, we tested only with Open MPI http://www.open-mpi.org/. Without MPI, the distributed tools (xxx2lts-mpi, ltsmin-mpi) will not be built.

AsciiDoc

Download AsciiDoc (>= 8.4.4) from http://www.methods.co.nz/asciidoc/. We tested with asciidoc-8.4.4. Without asciidoc, documentation cannot be rebuilt. For convenience, release tarballs are shipping with pre-built man pages and HTML documentation.

xmlto

Download xmlto from http://cyberelk.net/tim/software/xmlto/. We tested with xmlto-0.0.18. Without xmlto, man pages cannot be rebuilt. Note that xmlto in turn requires docbook-xsl to be installed. We tested with docbook-xsl-1.76.1.

Doxygen

Download Doxygen from http://www.doxygen.org/. We tested with doxygen-1.5.5. Without doxygen, internal source code documentation cannot be generated.

MacOS X

For cross-compilation builds on MacOS X, the Apple Developer SDKs must be installed. They are available from Apple http://developer.apple.com/tools/download/, or from the MacOS X installation CDs.

CZMQ

To build the ProB front-end, CZMQ3 is required.

libxml2

To build the Petri net front-end, libxml2 is required. Note that the libxml2 package that macOS provides is not properly configured. If you want to build the PNML front-end, please install a proper libxml2 package, e.g. libxml2 from Homebrew, and do not forget to set PKG_CONFIG_PATH, as suggested by Homebrew.

Spot

To use Büchi automata created by Spot, download Spot (>=2.3.1) from https://spot.lrde.epita.fr/index.html.

BuDDy

To use BuDDy's FDD implementation for symbolic state storage, you have two options:

  1. Install Spot, which distributes a BuDDy fork called BDDX (recommended), or
  2. Install BuDDy from https://github.com/utwente-fmt/buddy/releases, a fork originally included in LTSmin.

If both versions of BuDDy are installed, LTSmin will use the Spot version of BuDDy.

SDD

To use SDDs for symbolic state storage, the simplest way is as follows.

  1. Download SDD from http://reasoning.cs.ucla.edu/sdd/ (version 2.0)
  2. Build the SDD package. This produces the file /sdd/build/libsdd.a
  3. Move or copy the file libsdd.a to /usr/local/lib/libsdd.a
  4. Move or copy the file sddapi.h to /usr/local/include/sddapi.h

Alternatively, the files libsdd.a and sddapi.h may be put in a different directory dir, so long as dir/lib contains libsdd.a and dir/include contains sddapi.h. In this case, add the option --with-libsdd=/path/to/dir to the configure call, e.g.,

./configure --with-libsdd=/path/to/dir

Alternatively, use --with-libsdd=no to configure without SDD.

Building from a Git Repository

Before building the software as described above, the following commands have to be executed in the top-level source directory:

$ git submodule update --init
$ ./ltsminreconf

A note on aclocal and pkgconf

If you have installed pkgconf in a non-standard location, you may need to specify the location of pkg.m4 via the ACLOCAL_PATH variable manually, before running ltsminreconf. Assuming the path to pkg.m4 is ~/.local/share/aclocal/pkg.m4, run:

$ ACLOCAL_PATH=~/.local/share/aclocal ./ltsminreconf

Dependencies

Building from a source other than the release tarball requires some extra tools to be installed:

GNU automake

Download automake (>= 1.14) from http://www.gnu.org/software/automake/. We tested with automake-1.14.

GNU autoconf

Download autoconf (>= 2.65) from http://www.gnu.org/software/autoconf/. We tested with autoconf-2.69.

GNU libtool

Download libtool (>= 2.2.6) from http://www.gnu.org/software/libtool/. We tested with libtool-2.4.

Flex

Download Flex (>= 2.5.35) from http://flex.sourceforge.net/. We tested with flex 2.5.35.

Bison

Download Bison (>= 3.0.2) from https://www.gnu.org/software/bison/.

pkgconf

See above.

Contact

Related papers

License

Copyright (c) 2008 - 2018 Formal Methods and Tools, University of Twente All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the University of Twente nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

ltsmin's People

Contributors

alaarman, axel-b, bergfi, codermann63, gijskant, jacopol, jkeiren, mchro, meijuh, michaelw, patere, pimw, pkoerner, sccblom, ssoelvsten, thedobs, trolando, vbloemen, yanntm


ltsmin's Issues

MPI deadlock when MPI process is killed.

If one of the MPI processes is killed, then with the current recommended settings for Open MPI the program ends up in a deadlock. This is wrong: if this happens, the whole application should terminate.

This is of course a bug in MPI rather than in LTSmin, but by adding deadlock detection we can make certain that we terminate.

Alfons:
HREexit / HREabort facilities do not close down the comm channels in hre_pthread

Move from submodule to subtree

I suggest we move to use subtree instead of submodule.

We can add ".sh" files that assist updating a subtree from commit A to commit B.

Advantage

  • users no longer need to manually run submodule update/init/sync
  • no dependency on availability of other git repositories

Disadvantage

  • maintainers need to learn about subtree method

Refine PINS transitions as separate guards and actions

Currently, PINS is defined on the level of transitions.
The new POR layer already exposes the internals of these objects: guards and actions.
By redefining PINS on the level of guards and actions, we obtain several benefits:

  • It could support better symbolic algorithms that exploit the finer granularity of guards and actions, because currently the action reads cannot be distinguished from the guard reads.
  • Reevaluation of guards is avoided (currently the POR layer first queries guards and later emits transitions, which again has the effect of evaluating the guards); this also speeds up other greybox methods such as the symbolic tools.
  • It becomes easier to implement weaker versions of the POR reduction theory, because the read/write matrices become specialized towards the actions (currently reads include the guards).

The refinement can be done while maintaining backwards compatibility by adding new functions and matrices. However in the long run the transition-level functions should be removed from the LTSmin code.

Object copying during build process

We link libraries statically in LTSmin. What I see happening is that libtool copies a lot of object files around to avoid basename conflicts. See the build output below. Can this be avoided?

copying selected object files to avoid basename conflicts...
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-treedbs.o .libs/libalgorithm.lax/lt1-libutil_la-treedbs.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-treedbs.o .libs/libalgorithm.lax/lt1-libutil_la-treedbs.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-bitset.o .libs/libalgorithm.lax/lt2-libutil_la-bitset.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-bitset.o .libs/libalgorithm.lax/lt2-libutil_la-bitset.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-is-balloc.o .libs/libalgorithm.lax/lt3-libutil_la-is-balloc.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-is-balloc.o .libs/libalgorithm.lax/lt3-libutil_la-is-balloc.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-dfs-stack.o .libs/libalgorithm.lax/lt4-libutil_la-dfs-stack.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-dfs-stack.o .libs/libalgorithm.lax/lt4-libutil_la-dfs-stack.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-zobrist.o .libs/libalgorithm.lax/lt5-libutil_la-zobrist.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-zobrist.o .libs/libalgorithm.lax/lt5-libutil_la-zobrist.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-MurmurHash3.o .libs/libalgorithm.lax/lt6-libutil_la-MurmurHash3.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-MurmurHash3.o .libs/libalgorithm.lax/lt6-libutil_la-MurmurHash3.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-chunk_support.o .libs/libalgorithm.lax/lt7-libutil_la-chunk_support.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-chunk_support.o .libs/libalgorithm.lax/lt7-libutil_la-chunk_support.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-tables.o .libs/libalgorithm.lax/lt8-libutil_la-tables.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-tables.o .libs/libalgorithm.lax/lt8-libutil_la-tables.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-chunk_table_factory.o .libs/libalgorithm.lax/lt9-libutil_la-chunk_table_factory.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-chunk_table_factory.o .libs/libalgorithm.lax/lt9-libutil_la-chunk_table_factory.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-balloc.o .libs/libalgorithm.lax/lt10-libutil_la-balloc.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-balloc.o .libs/libalgorithm.lax/lt10-libutil_la-balloc.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-fast_set.o .libs/libalgorithm.lax/lt11-libutil_la-fast_set.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-fast_set.o .libs/libalgorithm.lax/lt11-libutil_la-fast_set.o
libtool: link: ln .libs/libalgorithm.lax/libhre.a/libutil_la-util.o .libs/libalgorithm.lax/lt12-libutil_la-util.o || cp .libs/libalgorithm.lax/libhre.a/libutil_la-util.o .libs/libalgorithm.lax/lt12-libutil_la-util.o

Remove traces of -grey from ltsmin

  • Split PINS in must-implement and may-implement functions
  • Rename GB** to pins____
  • Explain ltstype functions (which must be implemented)
  • Rename action matrix to update matrix

Create a separate document on how to implement PINS

Improve multi-process memory management

The current allocator in the multi-process environment (--procs=P) is a hack. It claims a large piece of anonymous pages using mmap at pre-fork time. Memory allocations are handled by giving out pointers continuously from this region (HREdefaultRegion(HREglobal()) is automatically set in the process environment).
For lack of administration, freeing or reallocating memory is not possible.

To limit fragmentation, two sets of functions were designed:
RTmalloc(...) - for local (non-scaling) allocation by the OS
HREmalloc(region, ...) - allocating from a designated region (warning: it may fall back on RTmalloc)

RTswitchAlloc(true/false) is used to switch the behavior of RTmalloc to HREmalloc(HREdefaultRegion(HREglobal()), ...) transparently. This enables the use of allocator-oblivious data structures.

TODO:
1. Integrate a true allocator that works with inter-process memory. A viable option seems to be the simple SSmalloc; it needs to be adapted to use the preallocated region, though.
2. Consolidate HREmalloc and RTmalloc (see #698). With a good allocator, HREmalloc with the default region can be used as the default.
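The bump-pointer scheme the issue describes can be sketched as follows (a simplified illustration with hypothetical names, not the actual HRE code):

```c
#include <stddef.h>

/* Sketch of a bump-pointer region: a pre-reserved block of memory
 * from which allocation just hands out successive pointers. Nothing
 * can be freed or reallocated individually, which is exactly the
 * limitation the issue describes. */
typedef struct {
    char  *base;   /* start of the pre-reserved block   */
    size_t used;   /* bytes handed out so far           */
    size_t size;   /* total size of the block           */
} region_t;

void *region_alloc(region_t *r, size_t n) {
    n = (n + 7) & ~(size_t)7;          /* round up to 8-byte alignment */
    if (r->used + n > r->size)
        return NULL;                   /* region exhausted             */
    void *p = r->base + r->used;
    r->used += n;                      /* bump the pointer             */
    return p;
}
```

Because there is no per-allocation bookkeeping, allocation is a single bump, but a real allocator (TODO item 1) would need free lists or size classes on top of the region.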

Improve testsuite

From the TODO file in testsuite/:

The test suite still needs to be extended with:

  • tests for the distributed tools using --mpi, --workers and --procs. The distributed backend is tested in alg_backends__, but without parallelism. See also the HAVE_POSIX flag in Makefile.am.
  • a test case which does (parallel) LTS construction and reduction comparison. See the lts_generation tests and the mini tutorial in the main LTSmin manual.
  • optional multi-core backend tests. See HAVE_ATOMICS in configure.ac.
  • more fine-grained dependencies. See Makefile.am; e.g., lts_generation can be executed partly without CADP or mCRL2.
  • integration of the src/tests/test-_ binaries.
  • POR tests.
  • DFS_FIFO tests.

Inequality operators for invariant expressions

It would be a nice addition if we could have the possibility to express also relations like ">" and "<" between expressions in an invariant. Currently, only equality "==" is available.

Support pretty printing of variables

Add a function to the PINS interface that pretty prints all types (also non-enum) in the model.
This allows printing state and transition labels according to the frontend's standards. For example, a signed char can be printed as -128, ..., 127.

Move regrouping layer

Subsumption can reduce the number of groups, which can be beneficial for partial order reduction.
However, the regrouping layer is only loaded after the POR layer. This should be reversed, and the regrouping layer should take care of correctly handling state and transition labels.

Add functions to dump internal data structures

Add functions that dump internal data structures for debugging purposes, for example as dotty graphs. Currently it is hard to locate the source of a corrupted data structure.

E.g., treedbs, vset/symbolic state spaces

[Arch Linux] mapa2lts-* segfault

Executing mapa2lts-* results in a segfault under Arch Linux. This is the gdb output:

[...]
mapa2lts-dist: No output, just counting the number of states
mapa2lts-dist: level 0 has 1 states, explored 0 states 0 transitions
mapa2lts-dist: reward label 0=

Program received signal SIGSEGV, Segmentation fault.
dm_project_vector (m=0x69b780 <dm_info>, row=4, src=src@entry=0x1, tgt=tgt@entry=0x7fffffff9960) at dm.c:1377
1377 tgt[k++] = src[i];
(gdb) bt
#0 dm_project_vector (m=0x69b780 <dm_info>, row=4, src=src@entry=0x1, tgt=tgt@entry=0x7fffffff9960) at dm.c:1377
#1 0x000000000042d6c2 in project_dest (context=0x7fffffffdbc0, ti=0x7fffffff99e0, dst=0x1, cpy=0x0) at pins.c:140
#2 0x0000000000407a25 in prcrl_callback () at mapa-pins.c:170
#3 0x00007ffff7a268da in cdjp_info () from /usr/local/lib/ltsmin-2.0/libscoop.so
#4 0x0000000000000004 in ?? ()
#5 0x00007ffff2416d30 in ?? ()
#6 0x00007ffff4723fc0 in ?? ()
#7 0x00007ffff6215672 in stg_newPinnedByteArrayzh () from /usr/lib/ghc-7.8.3/rts-1.0/libHSrts_thr-ghc7.8.3.so
#8 0x0000ff000000ff00 in ?? ()
#9 0x0000000000000000 in ?? ()


Mcrl tests fail in testsuite

Must be some Tcl string-escape weirdness:

starting /home/fmg/laarman/ltsmin-hre/testsuite/../src/pins2lts-mc/lpo2lts-mc -z6 --strategy=dfs --state=tree --procs=5 /home/fmg/laarman/ltsmin-hre/testsuite/../examples/hef_wrong.tbf
spawn /home/fmg/laarman/ltsmin-hre/testsuite/../src/pins2lts-mc/lpo2lts-mc -z6 --strategy=dfs --state=tree --procs=5 /home/fmg/laarman/ltsmin-hre/testsuite/../examples/hef_wrong.tbf
gmake2: *** [all] Broken pipe
gmake2: *** [all] Broken pipe
MCRL grey box: File /tmp/gmake2: Entering directory `/home/fmg/laarman/ltsmin-hre/testsuite/alg_backends_lpo'.c cannot be opened

Symbolic back-end cannot cope with more than ~10^380 states

Currently, many vset implementations use a double to count the number of states. A floating point exception is triggered when trying to count more than 10^380 states.
A solution might be to use long doubles, but these will not fit in the operations cache.

documentation issues

Online documentation issues

  1. I cannot find any link to the GCF page on the website. Maybe a
    link can be added to the main page, or it might even be better
    to provide a list of all available tools (e.g. similar to mCRL2:
    http://mcrl2.org/wiki/index.php/Tool_manual_pages)

  2. I couldn't find anywhere in the documentation what GCF stands for,
    or what a GCF archive actually is.

  3. In section 1 of the main page, "mCRL2" is misspelled as "mcrl2".

  4. In the manual page for lpo-reach, the links to mcrl, mcrl22mcrl,
    mcrl22lps and the ICTAC paper are incorrect.

  5. The manual page of lps2lts-grey does not state what the file format
    is of the output file.

  6. According to the documentation, lps-reach "performs a reachability
    analysis on a specification". I assume this means that it calculates
    all reachable states.

  7. The stats link on the bottom right of the main page is broken.

PRISM output is not interpreted correctly

In these files, state 0 is a final state, while the initial state is state 1048.

I then use ltsmin-convert to create the file SmithTest2.gcf.
Using the command:
src/ltsmin-convert/ltsmin-convert --rdwr --bfs SmithTest2.tra SmithTest2.gcf
it says: ** error **: only 1 out of 1086 states reachable, since it starts the search from a final state.
If I change state 0 to state 1048 and state 1048 to state 0 by hand (also making the change in the .tra file), the tool works fine, since the initial state is then state 0.

http://pastebin.com/uVsiMpLU
http://pastebin.com/sPgA4LjF

Move remaining sequential tool functionality to multi-core tool

pins2lts-mc.c contains rudimentary support for LTS storage (full vectors) and POR (only the stack proviso).

To eliminate the sequential tool, we have to:

  • add the color/queue proviso to the multi-core tool;
  • distinguish state-revisiting algorithms (NDFSs) and limit their combination with POR to single-threaded cases only;
  • write LTSs in index format (extend PBFS with locally resizing hash/tree tables and transition communication).

The SCC algorithm will be lost, but more efficient versions are available for the multi-core tool in a local branch.

Obtaining label information in the symbolic tools

Aim: we want to establish whether a transition with a certain label is possible in our state space.

In discussion with Jaco I came up with some (symbolic) algorithms for the above, and it seems that to start we need two extra pieces of functionality added to the PINS interface:

  • Given a label l, return the groups that potentially have a transition labeled l. I assume this should actually be split into two functions: (1) given a textual label, return the number assigned to it, and (2) given the number, return the groups. Note that this is an over-approximation: a transition from the group might not actually be possible in our state space.
  • Given a group and a set of states (projected down to that group), determine whether a transition labeled l is possible from these states.

Once we have the above, it becomes possible to implement the first of the algorithms we designed. For the other two, related ones, some more functionality is needed:

  • Backward expansion of groups, i.e. given some target states, expand the transition relation for the group (note that if some of the data is potentially infinite, this might be impossible).
  • Given a label l, obtain all the states from which a transition labeled l is possible.
  • Given a transition system, an upper bound for the maximum value that can be assigned to each entry of the state vector (this is needed to initialize a BDD in atermdd to, e.g., the full relation).


Alfons:
see also the new --invariant violation detection, which has not yet been implemented for the symbolic tools.

See also the EVAR variable that is defined in the mu-calculus language, but unused.

Rename HRE functions

Warning --> Print

Add multiple info levels -v and -vv and -vvv (the last one should also provide localized tourist information)

Consolidate HRE* and RT*?

ltl2ba update

We should add the best tools for translating LTL to Büchi automata: ltl3ba and ltl2tgba.
This should result in faster LTL model checking, since they produce better Büchi automata
(smaller, more deterministic).

This should be added as a flag to the LTL layer (--ltl), in the form --ltl2ba={ltl2ba,ltl3ba,ltl2tgba}.
ltl3ba is by Jan Strejček; ltl2tgba is by Alexander Duret-Lutz. I will add his information
on this ticket separately.


Information from Alexander Duret-Lutz:

Some pointers to follow up on tonight's dinner (SPIN 2013, Stony Brook):

ltl3ba is a better replacement for ltl2ba (faster processing, smaller automata).

Spot is a model-checking library that includes a command-line tool called
ltl2tgba, which can produce output compatible with ltl2ba/ltl3ba when given
the --spin option. It implements more simplifications than ltl3ba, but in
general it takes more time. It can also translate PSL formulae.


Note that the goal of the current implementation was to obtain state counts similar to those of competitive model checkers such as SPIN and DiVinE, in order to do experimental comparisons. Therefore, an optional interface to different Büchi translators would be useful.


Esparza and Křetínský have a new method which results in completely deterministic (and small) "Büchi" automata:
http://arxiv.org/abs/1402.3388
(deterministic results tend to reduce the size of the cross product)

http://www.model.in.tum.de/~kretinsk/LTLdeterminization.html

barrier is not cleaned up correctly

For correct barrier deallocation see http://locklessinc.com/articles/barriers/

~/ltsmin-hre/src/pins2lts-mc/dve2lts-mc ~/tests/dve/ltl-new/elevator2.3.prop4.dve2C --threads=48 --state=cleary-tree
dve2lts-mc: Precompiled divine module initialized
dve2lts-mc( 0/48), * error : Cleary tree not supported in combination with error trails or the MCNDFS algorithms.
segmentation fault ***
Please send information on how to reproduce this problem to:
[email protected]
along with all output preceding this message.
In addition, include the following information:
Package: ltsmin 1.8
Stack trace:

Document --pins-guards

Are guards used by default in the symbolic tool? Do they speed up computation or save memory?
The manual does not mention this yet: the --pins-guards option has to be documented.

Also, --pins-guards was initially used to speed up the next_all function by evaluating guards and calling get-actions (instead of next_long). Is it appropriate to use this option name for guards in the symbolic tool as well?

All I can find is the following (which seems to me a debugging option we had better hide):
--no-soundness-check
When using --pins-guards=assume-true disable the soundness check of the model
specification. The soundness check may be expensive and can be disabled when the model
specification is known to be sound. A guard may not evaluate to true or false but to maybe.
A guard which evaluates to maybe depends on the evaluation of another guard. For languages
such as Promela and DVE it checks whether if a guard evaluates to maybe there is another
guard on the left which evaluates to false. For languages such as mCRL2 it also checks
whether a guard on the right evaluates to false. If for all states this holds then the
model specification is sound for guard-splitting.

Confluence in symbolic generator

The idea is to use statically derived confluence information (as provided by [lps]confcheck) during symbolic state space generation.

Several scenarios are possible:

  1. ctau-groups are confluent and terminating (CR & WN)
  2. assume ctau-groups are confluent only (not necessarily SN)
  3. assume ctau-groups are weakly confluent

In case 1, every state is represented by its unique normal form. A symbolic algorithm is attached.
In case 2, every state is represented by an element of its terminal (bottom) SCC.
In case 3, every state is represented by its terminal (bottom) SCC.
This boils down to computing all the BSCCs of a graph encoded as a BDD.
An algorithm based on pivoting individual states can be constructed, but might be suboptimal.

Implementing 1 would be interesting, in order to see if confluence reduction
is beneficial in a symbolic context.

Unsolved problem: how to push confluence information through the PINS interface?

Option 1:

  • a separate array, indicating which groups are confluent
  • easy to provide for muCRL and mCRL2
  • careful: regrouping should only merge groups in the same category

Option 2:

  • provide another, boolean, label on each transition (besides the action label)
  • seems more flexible (the same group can generate both confluent and non-confluent steps)
  • it is less obvious how to enumerate all [non-]confluent transitions

Attached: a symbolic algorithm for scenario 1.

ETF breaks on negative numbers

(any) is represented using a reserved value that could collide with a real state-slot value.
Additionally, negative integers can be printed to an ETF file but not parsed back.

scoop submodule commit missing

$ git submodule update
fatal: reference is not a tree: 504867a82e2ff4f3ed4add1dc3d1faa6762143bc
Unable to checkout '504867a82e2ff4f3ed4add1dc3d1faa6762143bc' in submodule path 'scoop'

PINS state and label manipulation functionality

Add highlevel functions/types to PINS for:

StateVariable : Name, Type, GroupDependencies

addStateVariable (model, int index, variable)

addStateLabel (model, int index, ltsmin_expression)

These functions can greatly reduce the amount of code in the front-end glue code (e.g. prom-pins.c) and in pins2pins wrappers.

Note that an ltsmin_expression can function as a (boolean) state label. Once added, labels can be reused in other predicates (see the mark_predicate function for how labels are treated as state variables).

SpinS already exports guards and other labels (progress, accept and valid end) in the correct way.


Alfons:
My latest POR branch contains a pins-util.c/h file pair with some initial PINS helper functions.

floating point error in gcf tool?

~/ut/ltsmin-2.0/src/gcf-tool(master) $ ./gcf --list ../xyz.gcf
Archive ../xyz.gcf contains:
stream size compressed stream name (compression)
4 12 init (gzip)
316 45 SL-0-0 (rle32|gzip)
316 50 SL-0-1 (rle32|gzip)
316 55 SL-0-2 (rle32|gzip)
316 46 SL-0-3 (rle32|gzip)
316 31 SL-0-4 (rle32|gzip)
316 30 SL-0-5 (rle32|gzip)
316 48 SL-0-6 (rle32|gzip)
316 58 SL-0-7 (rle32|gzip)
316 58 SL-0-8 (rle32|gzip)
316 25 SL-0-9 (rle32|gzip)
316 25 SL-0-10 (rle32|gzip)
316 60 SL-0-11 (rle32|gzip)
316 56 SL-0-12 (rle32|gzip)
316 53 SL-0-13 (rle32|gzip)
316 28 SL-0-14 (rle32|gzip)
316 29 SL-0-15 (rle32|gzip)
316 55 SL-0-16 (rle32|gzip)
316 60 SL-0-17 (rle32|gzip)
316 50 SL-0-18 (rle32|gzip)
316 28 SL-0-19 (rle32|gzip)
316 25 SL-0-20 (rle32|gzip)
316 59 SL-0-21 (rle32|gzip)
316 60 SL-0-22 (rle32|gzip)
316 46 SL-0-23 (rle32|gzip)
316 25 SL-0-24 (rle32|gzip)
316 28 SL-0-25 (rle32|gzip)
316 49 SL-0-26 (rle32|gzip)
316 58 SL-0-27 (rle32|gzip)
316 31 SL-0-28 (rle32|gzip)
316 31 SL-0-29 (rle32|gzip)
432 48 ES-0-ofs (diff32|gzip)
432 126 ED-0-ofs (diff32|gzip)
11 19 CT-0 (gzip)
522 info
totals:
10881 2029 files: 35 (81.35%)
gcf file size 1048576 (51579.45% overhead, -9536.76% compression)

Documentation lacking

It is not so easy for mCRL2 users to find their way in the documentation: one must click on "sequential", "symbolic" or "distributed" to get at the page "lpo2lts-grey" etc., only to find out that one needs lps2lts-grey.

Just an index (sorted list) of all tools would help navigation, or even better, an extra page with an "example dialogue for mCRL2 users" (probably similar for Promela).

example:

Linearization
mcrl22lps -D onebit.mcrl2 onebit.lps
Symbolic generator:
lps-reach onebit.lps
Explicit generator/compressed file, with and without caching:
lps2lts-grey -out onebit1.gcf onebit.lps
lps2lts-grey -cache -out onebit1.gcf onebit.lps
Distributed generator, (2 processors), this time in (compressed) dir-format:
mkdir onebit2.dir
mpirun -np 2 -mca btl tcp,self lps2lts-mpi -out onebit2.dir/%s onebit.lps
Reduction (and conversion to aut):
mpirun -mca btl tcp,self ltsmin-mpi -b onebit1.gcf onebit-min1.gcf
mpirun -np 2 -mca btl tcp,self ltsmin-mpi -b onebit2.dir/%s onebit-min2.gcf
Conversion to aut or bcg:
ltsmin-convert onebit-min1.gcf onebit-min1.aut

ltsmin-convert onebit-min2.gcf onebit-min2.bcg

extend IO with DOT format

For easy inspection of state spaces it would be useful to have dot output.

Example code can be found in pins2lts-seq. If implemented in IO-lib, the output format will be supported by all tools.


The branch bergfi/dot-io at http://deathegg.student.utwente.nl/git/ltsmin.git does this. It uses dot's record output and shows the state vector, the state labels and a single edge label.
Some special labels:

  • custom_dot: allows the front-end to customize the dot output (both edge and state labels)
  • action: this edge label is printed as the edge label
  • error: 0: normal, 1: the state is coloured red (state label)
  • errormessage: a char[] chunk containing an error message; if non-empty, it is printed in the state (state label AND state-vector variable)

Support better LTL to Buchi solutions

LTL3Buchi can be integrated (as in DiVinE).
Spot also contains better algorithms.

LTL2Buchi should still stay available for easy comparison of state counts between SPIN and LTSmin.
Note also that the integration of LTL2Buchi is rather ad hoc and exploits LTL2Buchi's internal data structures. Hopefully DiVinE already demonstrates a cleaner use of LTL3Buchi.

ltsmin-mpi lacking documentation on segments?

When generating a gcf file with the following command (see issue 462 for the lps file)

lps2lts-grey -c --write-state CookingKettle_v3.min.lps CookingKettle_v3.gcf

and then invoking the attached pbs file on the cluster, the following output is generated:

ltsmin-mpi( 0/10), ** error **: Number of segments 1 does not equal number of workers 10

I'm guessing that the gcf file should also have been generated by the MPI tools to get multiple segments, but this does not seem to be documented.

Stefan:
All state space generators running with N workers produce an LTS with N segments.

The MPI reduction tools require an LTS with as many segments as there are workers.

The sequential tools (especially ltstrans) can load and store an LTS with an
arbitrary number of segments.

git submodule update

I obtain the following messages for a submodule update (I'm currently using the next branch of ltsmin):

fatal: reference is not a tree: a443e1a236163463a703cc81f2e614c5a68ff765
fatal: reference is not a tree: ae6fdce1a8dcf3a5f6d8f6f58bc73e4efcda702a
Unable to checkout 'a443e1a236163463a703cc81f2e614c5a68ff765' in submodule path 'buddy'
Unable to checkout 'ae6fdce1a8dcf3a5f6d8f6f58bc73e4efcda702a' in submodule path 'sylvan'

FSM format direct-write check missing

lps2lts-seq --strategy=bfs ieee-11073.lps model.fsm
lps2lts-seq: mCRL2 language module initialized
lps2lts-seq: loading model from /home/fmg/laarman/ltsmin-hre/testsuite/../examples/ieee-11073.lps
lps2lts-seq: mCRL2 rewriter: jittyc
lps2lts-seq: length is 15, there are 290 groups
lps2lts-seq: There are 0 state labels and 1 edge labels
lps2lts-seq: got initial state
lps2lts-seq: Writing output to model.fsm
lps2lts-seq: keeping action_labels
segmentation fault ***

Please send information on how to reproduce this problem to:
[email protected]
along with all output preceding this message.
In addition, include the following information:
Package: ltsmin 1.8
Stack trace:
0: /home/fmg/laarman/ltsmin-hre/testsuite/../src/pins2lts-seq/lps2lts-seq(HREprintStack+0x18) [0x4877d8]
1: /home/fmg/laarman/ltsmin-hre/testsuite/../src/pins2lts-seq/lps2lts-seq [0x487e26]
2: /lib64/libpthread.so.0 [0x2ad690163c00]
FAIL: test11_fsm: Program exited with LTSMIN_EXIT_ERROR
