
axiom-profiler-2's People

Contributors

cschmitter, dspil, jonasalaif, marcoeilers, oskari1, richardluo20, xswcdev1


axiom-profiler-2's Issues

Some comments

  • The popup about too many nodes and slow loading should only appear when I'm increasing the number of nodes (not when it's staying the same or even decreasing)
  • The number of edges is a better indicator of graphviz time imo; maybe change the popup to be based on that (fine to keep the default filters as-is)
  • Have different indications for whether a node has missing parents or missing children. E.g. node 6355 (open the log file from here and expand the first node 6342) only has missing parents
  • The display-by-cost is wrong: node 6342 in that graph has a cost of 2.5 and is shown, while node 597 (a parent of 6355) has a cost of 313 and is not shown
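The first two suggestions above could be combined into a single popup condition. A minimal sketch (function and parameter names are illustrative, not the profiler's actual API): warn only when the rendered graph would grow past an edge-count threshold.

```rust
// Sketch of the suggested popup condition: base the warning on the
// edge count, and fire it only when the count is increasing.
fn should_warn(prev_edges: usize, new_edges: usize, threshold: usize) -> bool {
    new_edges > threshold && new_edges > prev_edges
}

fn main() {
    assert!(should_warn(500, 2000, 1000));   // growing past the threshold: warn
    assert!(!should_warn(2000, 2000, 1000)); // staying the same: no popup
    assert!(!should_warn(2000, 1500, 1000)); // reducing: no popup
}
```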

Documentation on how to view Quantifier Instantiations

Hihi! Sorry if this isn't the right place to ask; feel free to close if so!

I'm running the axiom profiler on some logs, and I'd like to view the concrete instantiations of a quantified axiom that z3 tried in its proof search.

Is there a place in the UI to view this? I tried looking at the documentation, but I couldn't find anything.

Term index toggle missing in canary version

The stable version of the latest deployment (commit 1dcb604) has a toggle "Ignore term IDs" in the upper right corner, as seen in the first screenshot below. When it is not selected, the IDs of the terms are shown, as seen in the second screenshot below.

The canary version of the latest deployment currently has a bug: this toggle is not present.

Stable version with selected toggle:
image

Stable version with deselected toggle:
image

Canary version without toggle:
image

Compose Expressions?

Hello everyone!

This project looks amazing. However, I loaded a simple trace from the Caesar verifier and got a bunch of objects that look like internal formatting expressions everywhere. For example, there's stuff like this:

Body: ∀ exponent!9: Int, base!8: Real. {string} compose(string, string) ∨ ¬compose(string, string) ∨ indent(compose(string, compose(indent(compose(string, string)), indent(compose(string, string, compose(string, string), string))), compose(string, string, string, string), string))

These function applications do seem to show up in the original z3 trace as well, so it's probably not an issue with the axiom profiler. But maybe you have some advice on where this might come from?

Edit: It looks like there was a previous issue of Z3 where this happened (Z3Prover/z3#4571). Caesar is using Z3 4.12.1.0, which should include the fix. But I still see this issue.

Thanks a lot!

Unsanitized variable and function names being converted to html

Function and variable names are not sanitized before being formatted into HTML, leading to some names not being displayed correctly.

Currently this bug is particularly notable in log files generated by silicon, which annotates Viper variable and function names with their types; e.g., silicon takes the function append(p, q) and rewrites it as append<Sequence<Int>>(p, q). When these names are formatted into HTML to be displayed in the selected nodes info section, the <Sequence<Int> is interpreted as an HTML tag, leaving append>(p, q) to be displayed.

image

This issue is not present in the matching loop graphs as the <> are not interpreted as tags.

image
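The usual fix is to escape names before embedding them in HTML. A minimal sketch, assuming a plain string-replacement approach (the helper name is illustrative; the actual code may use an escaping crate instead):

```rust
// Escape the characters that HTML treats specially, so that names like
// append<Sequence<Int>>(p, q) render literally instead of being parsed
// as tags. The '&' must be replaced first to avoid double-escaping.
fn escape_html(name: &str) -> String {
    name.replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
}

fn main() {
    let raw = "append<Sequence<Int>>(p, q)";
    let escaped = escape_html(raw);
    // The angle brackets are no longer interpreted as a tag.
    assert_eq!(escaped, "append&lt;Sequence&lt;Int&gt;&gt;(p, q)");
    println!("{escaped}");
}
```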

Reconnect stuck in sequences-18.log

If I open sequences-18.log and then apply the filter "Show quant k!354", the tool gets stuck in the VisibleInstGraph::reconnect function, see here.

I've tried to limit the maximum number of edges to 1000 or so to inspect the intermediate graph of quant k!354 and saw that it adds a lot of edges, see screenshot below.
image

I'll also link two SVG files to inspect the subgraph of k!354 (dark purple nodes), once without and once with other quants:
sequences-18-k!354_2
sequences-18-k!354

My suspicion is that it's not a correctness issue but a performance issue, due to how the equality nodes are represented in the edges of the reconnected graph.

Wrong Blame Dependencies and Matched Terms

Sometimes the Axiom Profiler gives unexpected blame dependencies and incorrectly lists the matched terms for each trigger in the Selected Nodes Info section. It is not clear to me why this occurs sometimes and not others.

In the example below, trigger #0 matches j(i(i(i(a)))) against j(X), binding X to i(i(i(a))). This is the expected behavior. However, trigger #1 matches i(i(i(i(a)))) against i(i(i(X))). This is incorrect: for a consistent binding of X we should expect the matched term for trigger #1 to be i(i(i(i(i(i(a)))))).

Furthermore, note that although there are only two triggers, four enodes are blamed. i302->i303 and e432->i303 correctly indicate blame for triggers #0 and #1 (although e432->i303 is incorrectly labelled #3). The other two blame dependencies list subterms of e432. I have observed this behavior, where subterms of the actual matched terms are listed as dependencies, in other examples as well.

image
Note how there are four dependencies whereas there should only be two.
Screenshot from 2024-07-11 11-24-49
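The consistency expectation above can be sketched textually: instantiating both trigger patterns with one binding for X must reproduce the listed matched terms. This is only an illustrative substitution, not the profiler's matcher; the binding X = i(i(i(a))) is the one implied by the expected matched terms in the example.

```rust
// Illustrative sketch: instantiate a trigger pattern by textually
// substituting the bound term for the variable X.
fn instantiate(pattern: &str, binding: &str) -> String {
    pattern.replace('X', binding)
}

fn main() {
    let x = "i(i(i(a)))";
    // One consistent binding must explain both triggers' matched terms.
    assert_eq!(instantiate("j(X)", x), "j(i(i(i(a))))");
    assert_eq!(instantiate("i(i(i(X)))", x), "i(i(i(i(i(i(a))))))");
}
```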

This bug appears to present a bit differently in the next example; however, I expect they have the same cause. Here, the matched terms listed for the quantifier are correct, but the matched terms listed for triggers #0 and #1 are swapped: it erroneously claims that cons(empty, head(b@3@01)) matches against Eq(p, q) and that Eq(e!1, empty) matches against cons(q, a).

image

Attached are z3 log files exhibiting this bug:
BasicSequenceAxiomatization.forupload.z3.log
CrissCross.z3.log
Disjointed.z3.log

Request: tabular data for quantifier instantiations

Hi,

I have been using the Axiom Profiler 2 to troubleshoot performance & completeness issues for a Viper-related project, and could benefit from a feature that would allow viewing tabular data about the instantiations of a quantifier made during a run of Z3.

For example, I may be comparing two axiomatizations and notice that one run produces 500 instances of a particular quantifier, while the other run produces 1500 instances of the same quantifier. This is too much data to explore graphically, so it would be helpful if I could generate tables (either within the tool's UI, or as output files) for these instantiations that contain the costs, number of parents/children, trigger matches, bound and yield terms, etc.

In particular, grouping the instances of a quantifier with "identical" trigger or yield terms could help build some understanding of Z3 runs that involve a lot of potential branching/case-splitting.
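The kind of table described could be emitted as CSV. A hypothetical sketch, one row per instantiation; the struct fields (cost, parents, children) are illustrative assumptions, not the profiler's actual data model:

```rust
// Hypothetical record of one quantifier instantiation.
struct Instantiation {
    quantifier: &'static str,
    cost: f64,
    parents: usize,
    children: usize,
}

// Render the instantiations as a CSV table with a header row.
fn to_csv(rows: &[Instantiation]) -> String {
    let mut out = String::from("quantifier,cost,parents,children\n");
    for r in rows {
        out.push_str(&format!(
            "{},{},{},{}\n",
            r.quantifier, r.cost, r.parents, r.children
        ));
    }
    out
}

fn main() {
    let rows = [Instantiation { quantifier: "k!354", cost: 2.5, parents: 1, children: 3 }];
    print!("{}", to_csv(&rows));
}
```

Such a file could then be sorted or grouped in any spreadsheet tool, which would also cover the grouping-by-identical-terms use case.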

Thanks!

Reusing equality explanations

When opening cg_sort.log with the AP and filtering out all descendants of node i12, we can see in the screenshot below that the equality node f(i) = f(inc(inc(inc(i)))) blames three nodes, namely f(i) = f(inc(i)), f(inc(i)) = f(inc(inc(i))), and f(inc(inc(i))) = f(inc(inc(inc(i)))); so it does not correctly compute the "shortest path" in the e-graph.

image
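The "shortest path" expectation can be sketched as a BFS over equality edges: assuming the e-graph also contains a direct justification for f(i) = f(inc(inc(inc(i)))), the explanation should blame one equality, not three. This is an illustrative model, not the profiler's actual representation.

```rust
use std::collections::{HashMap, VecDeque};

// BFS over undirected equality edges: the minimum number of equalities
// that must be blamed to explain `from` = `to`.
fn shortest_eq_path<'a>(edges: &[(&'a str, &'a str)], from: &'a str, to: &'a str) -> Option<usize> {
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in edges {
        adj.entry(a).or_default().push(b);
        adj.entry(b).or_default().push(a);
    }
    let mut dist: HashMap<&str, usize> = HashMap::from([(from, 0)]);
    let mut queue = VecDeque::from([from]);
    while let Some(t) = queue.pop_front() {
        let d = dist[t];
        if t == to {
            return Some(d);
        }
        if let Some(neighbours) = adj.get(t) {
            for &n in neighbours {
                if !dist.contains_key(n) {
                    dist.insert(n, d + 1);
                    queue.push_back(n);
                }
            }
        }
    }
    None
}

fn main() {
    // The three blamed equalities from the screenshot, plus an assumed
    // direct equality between the endpoints.
    let edges = [
        ("f(i)", "f(inc(i))"),
        ("f(inc(i))", "f(inc(inc(i)))"),
        ("f(inc(inc(i)))", "f(inc(inc(inc(i))))"),
        ("f(i)", "f(inc(inc(inc(i))))"),
    ];
    assert_eq!(shortest_eq_path(&edges, "f(i)", "f(inc(inc(inc(i))))"), Some(1));
}
```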
