
compiler's Introduction

Getting Started with Boa Development

This document describes how to set up a development environment to view, edit, and test the Boa compiler's source code on your local computer. The Boa infrastructure hosted at http://boa.cs.iastate.edu uses the same code, but executes it on a Hadoop cluster.

Using Eclipse

  1. Set up your development environment by following the instructions on the Development Setup page.
    After following those instructions, you should have the project imported into your Eclipse IDE. You should also be able to trigger Ant builds and edit project source files within Eclipse.

  2. To run a Boa program locally, the Boa compiler compiles it to a Java program and runs the generated program using reflection and Hadoop libraries. To enable this within the Eclipse IDE:

    1. Create a directory named "compile" in the project's root directory.
    2. From the project's "Properties > Java Build Path", select the "Libraries" tab.
    3. Use "Add External Class Folder" to add the newly created compile directory to the classpath.
    4. After adding the "compile" directory, the new folder should appear in the "Libraries" tab.
    5. From "Run > Run Configurations > Java Application", select the "Main" tab and create a run configuration with BoaEvaluator as the main class.
    6. Select "Arguments" tab in same window to provide program arguments separated by single space.
      Program arguments include
      1. Path to Boa program
      2. Path of the local dataset
      3. Path of the output directory.
        Your "Arguments" tab should be look like:
    7. Hit apply and Run, this will run your Boa program on local data. Once the program execution completes, your "Console" will look similar to this: Note that depending on your Boa program, the data set, and the capabilities of your local computer the execution may take some time.
    8. Problems with the Boa compiler and questions about Boa programming can be posted to the Boa user forum.
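
For example, assuming a Boa program saved at test/MyProgram.boa (a hypothetical path) and the sample dataset shipped with the repository (see "Sample Data Set" below), the three arguments might be:

test/MyProgram.boa dataset output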

Using IntelliJ IDEA

  1. Set up your development environment by following the instructions on the Development Setup page.
    After following those instructions, you should have the project imported into IntelliJ IDEA. You should also be able to trigger Ant builds and edit project source files within IntelliJ.

  2. To run a Boa program locally, the Boa compiler compiles it to a Java program and runs the generated program using reflection and Hadoop libraries. To enable this within IntelliJ IDEA:

    1. Create a directory named "compile" in the project's root directory.

    2. From the project's "Properties > Compiler", select the "Dependencies" tab.

    3. Use "+" (available at bottom) to add newly created directory (compile) in classpath.

    4. After adding the "compile" directory, the new folder should appear in the "Dependencies" tab.

    5. From "Run > Edit Configuration > Application" select "Configuration" tab to create a "Run Configuration" for the BoaEvaluator class.

    6. In the "Arguments" field in same window, provide program arguments separated by single space.
      Program arguments include

      1. Path to Boa program
      2. Path of the local dataset
      3. Path of the output directory.
        After this step your "Configuration" tab should look something like this: .
    7. Click Apply and then Run; this runs your Boa program on the local data. Once the program execution completes, the results appear in the "Console" view. Note that depending on your Boa program, the dataset, and the capabilities of your local computer, execution may take some time.

    8. Problems with the Boa compiler and questions about Boa programming can be posted to the Boa user forum.

Sample Data Set

A small dataset is provided within the Boa repository to test the compiler and your modifications. It lives in the "dataset" directory in the project's root directory, and its organization is identical to that used by the Boa infrastructure. A complete Boa dataset consists of three files:

  1. index: this is a map file that stores a mapping from project index to the location of that project's data in the AST sequence file (see below). For more on the map file format, see its documentation.
  2. data: this file stores the abstract syntax tree (AST) of each project as a sequence file. See the documentation for more information on the sequence file format.
  3. projects.seq: this file stores the metadata for each project (e.g., commit logs, authors) as a sequence file.

The sample dataset contains only three projects, to keep the download size small: Boa, PaniniJ, and Panini.
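
To verify your local setup, you could run a minimal Boa program over this sample dataset (a trivial sketch, not part of the repository); it emits 1 per project, so the reported sum should be 3:

counts: output sum of int;
counts << 1;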

compiler's People

Contributors

adamhammes, ankuraga1508, cheshianhung, gupadhyaya, hoanisu, hridesh, hyjorc1, jingyisu, johirbuet, nbhide, nguyenhoan, psybers, ramiowastateuniversity, roberthschmidt, sarahysh12, sayemimtiaz, sumonbis, swflint


compiler's Issues

No error is given for return in after visit statements

If you give a return inside a before visit statement, it properly provides an error message:

foo.boa: compilation failed: Encountered typecheck error at line 8, column 3. return statement not allowed inside visitors
            return;
            ^^^^^^^

However, if you use a return inside an after visit statement, no error is given and the code actually compiles just fine. This is not the correct behavior, and it breaks optimizations/transformations that rely on the fact that a return never appears in visit statements.
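
A minimal reproduction might look like the following (a hypothetical example; the return in the after clause should be rejected, but currently compiles):

o: output sum of int;
visit(input, visitor {
    after n: Method -> return;
});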

Type checker considers a proto list and array different types

repos: array of CodeRepository = input.code_repositories;

leads to the compile error:

test.boa: compilation failed: Encountered typecheck error at line 1, column 33. incorrect type 'protolist of CodeRepository' for assignment to 'repos: array of CodeRepository'
repos: array of CodeRepository = input.code_repositories;
                                 ^^^^^^^^^^^^^^^^^^^^^^^

In terms of documentation, we consider a 'protolist' and an 'array' the same, but the compiler still considers them separate types. We need to fix the mismatch.

Another example of where this fails is passing protolists to user-defined functions:

id := function(fields: array of Variable) ...
id(node.fields);

Codegen fails when if/else both contain stop statement

If all paths of a visit clause terminate in a stop statement, the codegen still places a default 'return true;' at the end and this return is now unreachable. E.g.:

o: output sum of int;
visit(input, visitor {
    before s: Statement ->
        if (s.kind == StatementKind.IF) {
            o << 1;
            stop;
        } else {
            stop;
        }
});

leads to this error:

Bug.java:144: error: unreachable statement
                        return true;
                        ^

Lift function calls out of quantifier conditions for improved performance

Consider the following code:

foreach (i: int; someMethod(...)[i] != "foo")
   ...

If we can assume the method call will always return the same set of values, it would be best to lift the call out of the loop:

temp := someMethod(...);
foreach (i: int; temp[i] != "foo")
   ...

I propose that we assume any such method call always returns the same set of values and make this optimization occur.

I think this is a safe assumption, because the semantics are very strange if the set of values returned differs with each call. We would start iterating and take the first value, then make a second call and take the second value. But perhaps that call has no second value, and we would actually hit a runtime error. So it seems we implicitly assume this anyway, and we can make the assumption a bit more explicit to enable this optimization.

Code generation error for using a protobuf value as map key

The code generation strategy for storing into a map is currently broken: if the key expression is complex and reads values from a protobuf field, it breaks. This is not a problem if the value being stored comes from a protobuf field, only if the key does.

Here is a simple test case to trigger the error:

o: output sum of int;
o << 1;

committers: map[string] of bool;

foreach (i: int; def(input.code_repositories[i]))
    committers[input.code_repositories[i].url] = true;

And here is the output:

Bug.java:140: error: illegal start of expression
___committers.put(_input.getCodeRepositoriesList().get(___i)).getUrl(, true);}
                                                                     ^

Using array access as indexer for map breaks codegen

m: map[string] of bool;
a: array of string = {"a", "b"};

foreach (i: int; a[i])
    m[a[i]] = true;

generates a codegen error:

Test.java:145: error: ']' expected
                            ___m.put(___a[___i, true)];}
                                              ^
Test.java:145: error: ';' expected
                            ___m.put(___a[___i, true)];}
                                                     ^

Notice the right square bracket is placed in the wrong position.

add functions to convert stacks and sets into arrays

Similar to how keys() and values() work on maps, add a function to convert a stack or set into an array. E.g., if we name the new function values():

s: stack of int;
push(s, 3);
push(s, 4);
a: array of int = values(s);
# a[0] = 3, a[1] = 4

And similarly if 's' was defined as 'set of int'.

Data generation stores types as int, language still treats them as a Type object

VoidMethodsTotal: output sum of int;
VoidMethodsMax: output maximum(1) of string weight int;
VoidMethodsMin: output minimum(1) of string weight int;
VoidMethodsMean: output mean of int;

p: Project = input;

void_meth_cur_val := 0;
void_meth_s: stack of int;

q15 := visitor {
    before node: CodeRepository -> {
        snapshot := getsnapshot(node, "SOURCE_JAVA_JLS");
        foreach (i: int; def(snapshot[i]))
            visit(snapshot[i]);
        stop;
    }
    before node: Declaration ->
        if (node.kind == TypeKind.CLASS || node.kind == TypeKind.ANONYMOUS) {
            push(void_meth_s, void_meth_cur_val);
            void_meth_cur_val = 0;
        } else
            stop;
    after node: Declaration -> {
        VoidMethodsTotal << void_meth_cur_val;
        if (void_meth_cur_val > 0) {
            VoidMethodsMax << p.id weight void_meth_cur_val;
            VoidMethodsMin << p.id weight void_meth_cur_val;
            VoidMethodsMean << void_meth_cur_val;
        }
        void_meth_cur_val = pop(void_meth_s);
    }
    before node: Method ->
        if (node.return_type.name == "void")
            void_meth_cur_val++;
};

visit(p, q15);

error: int cannot be dereferenced
    if (___node.getReturnType().getName().equals("void"))

The return type of getName for the Type class is int here.

Support iterating through a collection

Currently, there is no way to iterate through all elements in a set/map/stack.

It is not clear what the syntax should look like. Something like foreach (e: coll) ... would not be allowed in Boa; maybe foreach (e in coll) ...

Fully implement support for tuple types

Tuple types (e.g., writing {3, "foo", false} is a 3-tuple of type {int, string, bool}) are only partially supported in Boa.

I believe the parsing support is in place, but most/all of the type checking and code generation is not.
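
For example, full support should allow programs like this sketch (based on the literal described above):

t := {3, "foo", false};  # a 3-tuple of type {int, string, bool}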

Add sampling aggregators

Output aggregators for computing a sample would be very useful.

For example, Sawzall has three different aggregators: sample, distinctsample (a uniform sample), and weightedsample (biased toward values with higher weights).

I can also envision a sample that is representative of the dataset.

Excerpts from http://szl.googlecode.com/svn/doc/sawzall-language.html:

DistinctSample

The table distinctsample takes a uniform sample of a given size from the set of all values seen. Conceptually that means first removing duplicate copies of all values that occur more than once in the input, and then taking a sample without replacement from the resulting duplicate-free data set. For example,

my_sample: table distinctsample(100) of phrase: string weight int;
emit my_sample <- logrecord.phrase weight 1;

picks a sample of 100 song phrases, each phrase being chosen with equal probability, regardless of its multiplicity. In addition, the distinctsample aggregator keeps track of the number of times each sampled phrase appears in the data set.

WeightedSample

The table weightedsample takes a sample of a given size from the set of all values seen. The sample is biased towards the values with higher weights. For example,

x: table weightedsample(2) of x_i: string weight w_i: float;
emit x <- "a" weight -1.0;  # definitely not chosen
emit x <- "b" weight 0.0;  # definitely not chosen
emit x <- "c" weight 1.0;  # unlikely to be chosen
emit x <- "d" weight 100.0;  # likely to be chosen
emit x <- "e" weight inf;  # definitely chosen, unless more than 2 (table parameter) inputs have weight = inf

picks a sample of 2 strings. The input "d" is more likely to be chosen than "c" because its weight is much higher. The probability to be chosen is generally not proportional to the weight. When all weights are the same, the table is a rigorously uniform random sample, regardless of how the inputs are sharded. The table does not remove duplicated inputs or sum the weights for each distinct input. If you emit "x" twice, with weights 1 and 2, respectively, both of them may get into the table, so it is different from emitting a single "x" with weight 3.

Code generation strategy does not work with recursion

If code is recursive (including visitors), the code generation strategy of storing all variables as fields in the generated class will fail: the recursive call(s) overwrite prior values, so upon unwinding back up the stack the field contains only the value from the innermost call. For example:

names: output set of string;

visit(input, visitor {
  before n: Method -> cur_name := n.name;
  after n: Method -> names << cur_name;
});

For a tree that has nested methods, this code will not generate the correct set of method names.

Compiler does not warn about redeclared quantifier variables

foreach (i: int; a[i])
    foreach (i : int; b[i])
        ...

should signal a semantic error for redefining i; instead we get a codegen error:

Bug.java:138: error: variable ___i is already defined in method map(Project,Mapper<Text,BytesWritable,EmitKey,EmitValue>.Context)
                            for (long ___i = 0; ___i < ___a.length; ___i++)
                                      ^

Codegen error if 2 vars declared in different scopes with same name

foreach (i: int; a[i]) {
  v := a[i];
  ..
}
..
foreach (i: int; a[i]) {
  v := a[i];
  ..
}

this code has a codegen problem: the variable 'v' is declared inside the scope of each quantifier (two separate declarations in different scopes, which is perfectly valid), but each declaration generates code where 'v' becomes a field. So the class winds up with two fields named 'v':

Test.java:127: error: variable ___v is already defined in class Job0
            String ___v;
                   ^

Code gen error for getName in boa.types.Ast.Type

USES: output collection[string][string][time] of int;
p: Project = input;

project_url := p.project_url;
file_name: string;
commit_date: time;

diamond := visitor {
    before node: ChangedFile -> {
        if (!iskind("SOURCE_JAVA_JLS", node.kind))
            stop;
        file_name = node.name;
    }
    before node: Revision -> commit_date = node.commit_date;
    before node: Type ->
        if (strfind("<>", node.name) > -1)
            USES[project_url][file_name][commit_date] << 1;
};

visit(p, diamond);
error: method indexOf in class BoaStringIntrinsics cannot be applied to given types;
                        if (boa.functions.BoaStringIntrinsics.indexOf("<>", ___node.getName()) > -1l)
                                                             ^
  required: String,String
  found: String,int

The return type of ___node.getName() is int but should be String. Similar issue to #48.

Can't assign to parameters inside a function

Parameters to a function are not assignable.

f := function(i: int) {
    i = 3;
};

leads to a compiler error:

AssignFuncParams.java:144: error: final parameter ___i may not be assigned
                        ___i = 3l;
                        ^

This is an artifact of the codegen strategy where we mark all function parameters as final.
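
A possible workaround (a sketch, not taken from the issue) is to copy the parameter into a local variable, since only parameters are marked final in the generated code:

f := function(i: int) {
    j := i;  # j is a local copy and may be assigned
    j = 3;
};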

tuple generation fails when the type isn't explicitly declared

If the tuple type is not explicitly declared, such as in the following code:

a := { {0, 1.0}, {1, 0.5} };

then the compiler will fail to generate the class for the tuple type. A temporary workaround is to explicitly declare the tuple type:

type t = { int, float };
a: array of t = { {0, 1.0}, {1, 0.5} };

parser bug with '-1' in for loops

o: output sum of int;

for (i := 0; i < 0 - 5; i++)
    o << 1;
for (i := 0; i < 0 -5; i++)
    o << 1;

Notice the parser accepts the first for loop but gives an error on the second (which has no space between the minus and the 5):

test.boa: compilation failed: Encountered parser error "[@37,80:81='-5',<70>,5:19]" at line 5, column 19. extraneous input '-5' expecting {';', '.', '(', '[', 'or', '|', '||', 'and', '&', '&&', '+', '-', '^', '*', '/', '%', '>>', '<<'}
for (i := 0; i < 0 -5; i++)
                   ^^

Provide a uniform frontend for all Boa-related tools

This concerns the code base for three components related to Boa that currently reside in three different repositories but share critical pieces: the compiler, the backend, and the evaluator.

The compiler and backend share data representations, e.g., the protocol buffer definitions. The compiler and evaluator share grammar, parsing, and built-in function code.

This organization has four problems.

  1. It makes calculating the impact of a proposed change difficult for a newcomer.
  2. It increases the effort required to make an update (due to duplicated artifacts).
  3. It increases the perceived complexity of the system.
  4. It would (in the future) make distribution of Boa difficult because there are too many parts.

It is for all of these reasons, and to exploit opportunities presented by a shared codebase, that I propose the following organization of the Boa-related components.

  1. There should be exactly one jar file containing all three components: the compiler, the evaluator, and the backend (to produce an example dataset).
  2. All three functionalities can be selected via command-line options:
    -c,--compile compile a Boa program
    -e,--execute execute a Boa program
    -g,--generate generate a Boa dataset
    -p,--parse check a Boa program (parse & semantic check)

Each of these functionalities can have its own set of command-line options.

  3. Last but not least, they should all be maintained in one Git repository on GitHub so that we do not have to chase down multiple repositories to assemble a Boa system.
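
Hypothetical invocations of such a unified jar (the jar and file names here are placeholders, not an existing CLI):

java -jar boa.jar --parse MyProgram.boa
java -jar boa.jar --compile MyProgram.boa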

Complex quantifier conditions can lead to poor performance

When generating code for a complex quantifier such as:

foreach (i: int; n.files[i].SOMETHING && n.files[i].OTHER)

each expression indexed by 'i' is used to compute the length of the generated for-loop. In this case however, both expressions are identical. So the upper bound on the loop should just be len(n.files). Instead we get:

java.lang.Math.min(n.getFilesList().size(), n.getFilesList().size())

What we need to do is compute the set of expressions indexed by 'i' and then use that set to generate the bounds.

format() is possibly truncating data

From the user list: http://boa.cs.iastate.edu/forum.php?place=msg%2Fboa-user%2F2URws5oQKnk%2FnLkWQnp9EwAJ

"I've been having an issue with output from format() in this script (line 36). For 1320366/1323627 lines of output, a double quote is inserted at the end using this format. However, for 3261 lines (0.2% of the output), this final quote is omitted; this isn't simply a newline appearing in the line, and thus the output continues on the next." To demonstrate, one line of surrounding output is given. Note the end of the middle line (it is truncated):

analyst1001/OpenRefine-Hbase,f0ba42f355c8229df4300076926ab9fcfb432831,1270772594000000,"make sure to check the tests as well\n\n\ngit-svn-id: http://google-refine.googlecode.com/svn/trunk@431 7d457c2a-affb-35e4-300a-418c747d4874\n"
analyst1001/OpenRefine-Hbase,16a2600a49493a5befa2d86cd8c6f897238efc1e,1270774844000000,"now it's jslint time to be happier: (!
analyst1001/OpenRefine-Hbase,cc2209074b75892e6cc6d747e97c986d9f793bfa,1270775673000000,"more jslint goodness\n\n\ngit-svn id: http://google-refine.googlecode.com/svn/trunk@433 7d457c2a-affb-35e4-300a-418c747d4874\n" 

The commit being truncated: analyst1001/OpenRefine-Hbase@16a2600

map/set/stack/array do not allow complex value types

Right now, 'map', 'set', 'stack', and 'array' only accept basic_type as their value types. This leads to code generation errors if you attempt something like:

m: map[string] of stack of int;

Such types are legal and pass type check. The error is in the code generation strategy.

Built-in functions sometimes return java int instead of long, causing codegen errors

If you use a function that returns a Java int (not long), you cannot store that value into maps that expect Boa integers (which are longs in Java). This would actually need an explicit cast in the generated Java code. For example:

paths: map[string] of int;
paths["first"] = len(paths);

gives the error:

Bug.java:135: error: no suitable method found for put(String,int)
                ___paths.put("first", ___paths.keySet().size());
                        ^
    method HashMap.put(String,Long) is not applicable
      (actual argument int cannot be converted to Long by method invocation conversion)
    method AbstractMap.put(String,Long) is not applicable
      (actual argument int cannot be converted to Long by method invocation conversion)

While this exact bug (with the len() function) can probably be fixed easily, we need to ensure all built-in functions properly cast back to long.

Support '+=' operator

i := 0;
i += 5;

gives parser errors:

test.boa: compilation failed: Encountered parser error "[@4,8:8='i',<76>,3:0]" at line 3, column 0. error: ';' expected
i += 5;
^
    at unknown stack
test.boa: compilation failed: Encountered parser error "[@5,10:10='+',<55>,3:2]" at line 3, column 2. error: ';' expected
i += 5;
  ^
    at unknown stack
test.boa: compilation failed: Encountered parser error "[@6,11:11='=',<67>,3:3]" at line 3, column 3. no viable alternative at input '+='
i += 5;
   ^

Using maps often leads to NPE

Using maps typically leads to a null pointer exception (NPE), as many users fail to check whether a value exists before updating it. For example, many would instinctively write this code:

m[k] = m[k] + 1;

which will most likely lead to an NPE at runtime: the first time through, the 'm[k]' value read on the RHS does not yet exist for the given k.

There are two proper ways of writing this code:

if (!haskey(m, k)) m[k] = 0;
m[k] = m[k] + 1;

or using the lookup() function:

m[k] = lookup(m, k, 0) + 1;

Perhaps this just needs to generate a warning, and users can then decide what to do from there.

Support times as weights

It would be nice to be able to do a top/bottom based on times. In this scenario, each value would probably appear exactly once and we are essentially just sorting and taking the first/last N.

times: output top(100) of string weight time;

times << input.project_url weight input.created_date;

gives an error:

Encountered typecheck error at line 1, column 40. invalid weight type, found: time expected: float
times: output top(100) of string weight time;
                                        ^^^^

Resolve types in data generation

Currently we only parse source files and provide the parse tree (as a custom AST). We should also provide resolved types, where possible, and a list of probable resolved types in other cases where we are not certain.

Support nested complex types

Right now, 'map', 'set', 'stack', and 'array' only accept basic_type as their value types. This needs to be changed so that code like the following works:

m: map[string] of map[string] of bool;
s: stack of map[string] of bool;

Code generation error as map key

Counts: output sum of int;
Projects: output sum[string] of int;
p: Project = input;

types: map[string] of int;

visit(p, visitor {
    before n: Declaration -> {
        exists (i: int; n.parents[i].name == "Runnable"
                || n.parents[i].name == "Thread"
                || n.parents[i].name == "TimerTask"
                || n.parents[i].name == "Executor"
                || n.parents[i].name == "ExecutorService"
                || n.parents[i].name == "ScheduledExecutorService"
                || n.parents[i].name == "AbstractExecutorService"
                || n.parents[i].name == "ThreadPoolExecutor"
                || n.parents[i].name == "ScheduledThreadPoolExecutor"
                || match(`^Callable($|<.*>$)`, n.parents[i].name))
            types[n.parents[i].name] = 1;
        exists (i: int; match(`^Callable($|<.*>$)`, n.parents[i].name))
            types["Callable"] = 1;
    }
});

if (len(types) > 0) {
    haslog := false;
    visit(p, visitor {
        before n: Revision -> {
            if (match(`\b(race|deadlock|violation|deadlocking)\b`, n.log))
                haslog = true;
            stop;
        }
    });
    if (haslog) {
        Counts << 1;
        Projects[p.id] << 1;
    }
}
error: illegal start of expression
___types.put(___n.getParentsList().get(___i)).getName(, 1l);}

Similar issue to #1.

Quantifying over arrays of ints fails

o: output sum of int;

a: array of int;

foreach (i: int; a[i])
    o << a[i];

This breaks code generation: the condition in the foreach becomes def(a[i]), which generates Java code of the form a[i] != null, where a[i] is of type long (and thus cannot be compared to null).

We need to change the code generation for the def() macro to be aware of the type of its argument.

Type checking error with functions that take BoaTypeVar

s: stack of int;
push(s, s);

This code should give an error, as push would expect a stack and an int as parameters, but is given two stacks.

The compiler gives no error, instead leading to a javac compilation error:

Test.java:152: error: method push in class Stack<E> cannot be applied to given types;
                ___s.push(___s);
                    ^
  required: Long
  found: Stack<Long>
  reason: actual argument Stack<Long> cannot be converted to Long by method invocation conversion
  where E is a type-variable:
    E extends Object declared in class Stack

keys()/values() fail with non-scalar maps

If you call keys() on a map with a non-scalar key type, or values() on a map with a non-scalar value type, it fails. The problem is that these functions return arrays, so a non-scalar element type would yield a non-scalar array, which is not currently supported in Boa.
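
For example (a hypothetical case; the non-scalar value type means values() would have to return a non-scalar array):

m: map[string] of set of int;
a := values(m);  # would need to be an array of set of int, which Boa does not support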

Support '==' and '!=' on complex types

For sets, stacks, maps, and arrays, the '==' and '!=' operators do not work.

These operators should pairwise compare the two arguments (of the same type) and return true if they hold the same values. Element comparison would use '==' as well.
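
For example, a sketch of the intended behavior (not working code today):

a: array of int = {1, 2};
b: array of int = {1, 2};
# a == b should be true, and a != b should be false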

Detect and warn about infinite recursion (where possible)

visitor {
   before n: Node -> visit(n);
}

f := function() { f(); };

Clearly these cases (and many more) can be detected statically and we should warn on such cases.

The basic analysis is to find a recursive call that is not control-dependent on any other statement and mark it as an error. This is sound, but nowhere near complete.

Java 8 type annotations not supported

The datagen Java 8 visitor does not currently support annotations on types. This is complicated by Boa's current data model, so the following is not easily supported:

int @A[] @B[] @C[];

The visitor currently has several FIXME comments indicating places where this feature needs to be supported.

Can't chain indexers

Code like the following:

m: map[string] of map[int] of bool;
m["s"][0] = true;

does not work. This gives a codegen error because it generates something like ____m.put("s").get(0, true); due to the put() workaround we currently have in place.

Compilation error when using selector on a function call

If you try to use a selector immediately after calling a function, you get an unimplemented exception. Example input:

o: output collection of string;
s: stack of Project;
push(s, input);
o << peek(s).id;

generates:

bug.boa: compilation failed: java.lang.RuntimeException: unimplemented
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:786)
    at boa.compiler.ast.Selector.accept(Selector.java:57)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:676)
    at boa.compiler.ast.Factor.accept(Factor.java:78)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:797)
    at boa.compiler.ast.Term.accept(Term.java:94)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:1378)
    at boa.compiler.ast.expressions.SimpleExpr.accept(SimpleExpr.java:96)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:570)
    at boa.compiler.ast.Comparison.accept(Comparison.java:84)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:638)
    at boa.compiler.ast.Conjunction.accept(Conjunction.java:104)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:1292)
    at boa.compiler.ast.expressions.Expression.accept(Expression.java:88)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:928)
    at boa.compiler.ast.statements.EmitStatement.accept(EmitStatement.java:117)
    at boa.compiler.visitors.CodeGeneratingVisitor.visit(CodeGeneratingVisitor.java:428)
    at boa.compiler.ast.Program.accept(Program.java:48)
    at boa.compiler.visitors.AbstractVisitorNoArg.visit(AbstractVisitorNoArg.java:39)
    at boa.compiler.ast.Start.accept(Start.java:55)
    at boa.compiler.visitors.AbstractVisitorNoArg.start(AbstractVisitorNoArg.java:35)
    at boa.compiler.BoaCompiler.main(BoaCompiler.java:162)
    at boa.BoaMain.main(BoaMain.java:50)
Exception in thread "main" java.lang.RuntimeException: no files compiled without error
    at boa.compiler.BoaCompiler.main(BoaCompiler.java:215)
    at boa.BoaMain.main(BoaMain.java:50)

See: https://git.io/vwg39

Weight type 'int' is cast to 'float' in output

o: output top(10) of string weight int;
o << "Foo" weight 1;

In this program, we expect output with the single string 'Foo' and a weight equal to the number of projects processed. In the output text, however, that weight is shown as a float (with a ".0" on the end) instead of an int.

Can't run compiler inside of Eclipse

The compiler assumes it is running inside a Jar file and thus you can't directly call BoaMain or BoaCompiler inside Eclipse without eventually hitting an exception.

support quantification over sets, stacks, maps

Currently quantification (foreach/exists/ifall) only supports arrays. Add the ability to quantify over sets, stacks, and maps.

For sets and stacks, this could easily use the values() function proposed in #4 and transform into code like:

v := values(s);
foreach (i: int; v[i])

For maps, if we decide to support them, do we quantify over an array of key/value tuples, or just the values? Languages like Java iterate over key/value entries. One possibility is sketched below.
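
A possible sketch for maps (assuming keys(), which already exists for maps) quantifies over the keys:

m: map[string] of int;
ks := keys(m);
foreach (i: int; def(ks[i]))
    ...  # ks[i] is the key, m[ks[i]] the value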
