
rune's Introduction



Visit the site 🌐 — Read the book 📖

rune


The Rune Language, an embeddable dynamic programming language for Rust.


Contributing

If you want to help out, please have a look at Open Issues.


Highlights of Rune


Rune scripts

You can run Rune programs with the bundled CLI:

cargo run --bin rune -- run scripts/hello_world.rn

If you want to see detailed diagnostics of your program while it's running, you can use:

cargo run --bin rune -- run scripts/hello_world.rn --dump-unit --trace --dump-vm

See --help for more information.
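
For reference, the bundled script is along these lines (a sketch; the exact contents of scripts/hello_world.rn may differ):

pub fn main() {
    println("Hello World");
}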


Running scripts from Rust

You can find more examples in the examples folder.

The following is a complete example, including rich diagnostics using termcolor. It can be made much simpler if this is not needed.

use rune::{Context, Diagnostics, Source, Sources, Vm};
use rune::termcolor::{ColorChoice, StandardStream};
use std::sync::Arc;

let context = Context::with_default_modules()?;
let runtime = Arc::new(context.runtime()?);

let mut sources = Sources::new();
sources.insert(Source::memory("pub fn add(a, b) { a + b }")?);

let mut diagnostics = Diagnostics::new();

let result = rune::prepare(&mut sources)
    .with_context(&context)
    .with_diagnostics(&mut diagnostics)
    .build();

if !diagnostics.is_empty() {
    let mut writer = StandardStream::stderr(ColorChoice::Always);
    diagnostics.emit(&mut writer, &sources)?;
}

let unit = result?;
let mut vm = Vm::new(runtime, Arc::new(unit));

let output = vm.call(["add"], (10i64, 20i64))?;
let output: i64 = rune::from_value(output)?;

println!("{}", output);

rune's People

Contributors

bernsteining, blurgyy, campeis, cocuh, dependabot[bot], dillonhicks, dranikpg, dstoza, endisnull, genusistimelord, jbdutton, jdomantas, jrobsonchase, killercup, lnicola, maxmcd, minusgix, modprog, nleguen, patchmixolydic, pkolaczk, roba1993, robojumper, seanchen1991, stoically, sww13, tgolsson, theleonsver1, udoprog, xemiru


rune's Issues

Use rune tests for testing rune

Currently, a lot of tests are Rust tests that execute a single script to validate the language. In order to dogfood the language capabilities we could migrate some of these tests to actual Rune tests instead.

[Tooling] IDE Syntax highlighting

Rune syntax is almost the same as Rust syntax (without types), so we could reuse the existing Rust syntax highlighting for VSCode here.

I could start working on that after finishing what I missed at #35

Related to #44

Support moving values into closures

This is a spin-off of #149, which currently unconditionally moves the value into the closure. This behavior should instead be configurable with the move modifier.

let value = /* .. */;

// This should clone the value if converted to a sync function.
let a = || {
    dbg(value);
};

let value = /* .. */;

// This should move value.
let b = move || {
    dbg(value);
};

Using `?` can lead to stack corruption in non-linear code.

This is an interesting case:

struct Foo {
	x,
	y,
}

struct Bar {
	x, 
	y,
}

impl Bar {
	fn from_foo(foo) {
		Some(Self {
			x: foo.x?,
			y: foo.y?,
		})
	}
}

pub fn main() {
	let f = Foo { x: Some(1), y: None };
	Bar::from_foo(f)
}

If the first element is Some, any following None will cause the VM to error out:

➜ cargo run --bin rune -- --dump-instructions -t miscomp.rn
    Finished dev [unoptimized + debuginfo] target(s) in 0.08s
     Running `target/debug/rune --dump-instructions -t miscomp.rn`
# instructions
fn main() (0xde7d86e18013c5c8):
  0000 = push 1
  0001 = call 0xf39f074fddf64c6b, 1 // variant ::std::option::Option::Some
  0002 = call 0x68988acb6b2de244, 0 // variant ::std::option::Option::None
  0003 = struct 0x30d77e445d4bc057, 0
  0004 = copy 0 // var `f`
  0005 = call 0x94fbe6278aba2ca5, 1 // fn Bar::from_foo
  0006 = clean 1
  0007 = return

fn Bar::from_foo(foo) (0x94fbe6278aba2ca5):
  0008 = copy 0 // var `foo`
  0009 = object-index-get 0
  0010 = dup
  0011 = is-value
  0012 = jump-if 2 // label:try_not_error_2
  0013 = clean 1
  0014 = return
try_not_error_2:
  0015 = unwrap
  0016 = copy 0 // var `foo`
  0017 = object-index-get 1
  0018 = dup
  0019 = is-value
  0020 = jump-if 2 // label:try_not_error_3
  0021 = clean 1
  0022 = return
try_not_error_3:
  0023 = unwrap
  0024 = struct 0x9a4ef349214d9ab1, 0
  0025 = call 0xf39f074fddf64c6b, 1 // variant ::std::option::Option::Some
  0026 = clean 1
  0027 = return
# strings
0xcd01b01a74097893 = "x"
0xef5992decfa6104f = "y"
# object keys
0 = ["x", "y"]

The instructions actually executed before the failure (with `-t`) are:

fn main() (0xde7d86e18013c5c8):
  0000 = push 1
  0001 = call 0xf39f074fddf64c6b, 1 // variant ::std::option::Option::Some
  0002 = call 0x68988acb6b2de244, 0 // variant ::std::option::Option::None
  0003 = struct 0x30d77e445d4bc057, 0
  0004 = copy 0 // var `f`
  0005 = call 0x94fbe6278aba2ca5, 1 // fn Bar::from_foo
fn Bar::from_foo(foo) (0x94fbe6278aba2ca5):
  0008 = copy 0 // var `foo`
  0009 = object-index-get 0
  0010 = dup
  0011 = is-value
  0012 = jump-if 2 // label:try_not_error_2
try_not_error_2:
  0015 = unwrap
  0016 = copy 0 // var `foo`
  0017 = object-index-get 1
  0018 = dup
  0019 = is-value
  0020 = jump-if 2 // label:try_not_error_3
  0021 = clean 1
  0022 = return
== ! (stack error: tried to access out-of-bounds stack entry (at inst 22)) (231.437µs)
error: virtual machine error
   ┌─ miscomp.rn:15:20
   │
15 │                 y: foo.y?
   │                    ^^^^^^ stack error: tried to access out-of-bounds stack entry

As far as I can tell this comes from the fact that the original foo is still on the stack when returning:

[2020-12-06T20:05:05Z TRACE runestick::vm] 21: clean 1
    1+0 = Foo { x: Some(1), y: None }
    1+1 = None
  miscomp.rn:15  -                 y: foo.y?
  0022 = return

and it should likely be cleaned up here as well.

Fix error message when trying to downcast references of `Any` types mutably

This was reported by @tgolsson on Discord.

Currently, our diagnostics for when we try to get a mutable reference out of a plain (shared) reference are quite poor:

use rune::{Errors, Options, Sources, Warnings};
use runestick::{Any, Context, FromValue, Module, Source, Vm};
use std::sync::Arc;

#[derive(Any)]
struct Foo {
}

impl Foo {
    pub fn test(&mut self) {
        println!("hello");
    }
}

#[tokio::main]
async fn main() -> runestick::Result<()> {
    let mut my_module = Module::with_item(&["mymodule"]);
    my_module.ty::<Foo>()?;
    my_module.inst_fn("test", Foo::test)?;

    // snip

    let foo = Foo {
    };

    let vm = Vm::new(Arc::new(context.runtime()), Arc::new(unit));
    let output = vm.call(&["main"], (&foo,))?;
    Ok(())
}

This results in:

Error: bad argument #0: expected data of type `mut_instance_fn::Foo`, but found `Foo` (at inst 2)

Why this happens

The underlying issue is that the error is raised in a function called Shared::downcast_borrow_mut, which simply makes the wrong assumption if the typecast fails:

return Err(AccessError::UnexpectedType {

This diagnostic is correct if they are two distinct types, but incorrect if we're trying to downcast a shared to an exclusive reference.

The impl for raw_as_mut for wrapped references always returns None to signal that it's not possible.

_ => return None,

So we need a way to signal with more granularity why the downcast failed (better errors) in order to provide relevant diagnostics.
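
A hedged sketch of the kind of distinction the error could carry; the names below are hypothetical and not the actual runestick API:

use std::fmt;

// Hypothetical error shape; the real runestick access error differs.
#[derive(Debug)]
enum DowncastError {
    // The stored value really is of a different type than requested.
    UnexpectedType { expected: &'static str, actual: &'static str },
    // The type matches, but the value was passed in by shared reference,
    // so an exclusive (mutable) borrow cannot be produced.
    NotAccessibleMut { name: &'static str },
}

impl fmt::Display for DowncastError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DowncastError::UnexpectedType { expected, actual } => {
                write!(f, "expected data of type `{}`, but found `{}`", expected, actual)
            }
            DowncastError::NotAccessibleMut { name } => {
                write!(f, "cannot get mutable access to `{}`: passed in by shared reference", name)
            }
        }
    }
}

fn main() {
    // With this split, the diagnostics could print the second, more accurate message.
    let err = DowncastError::NotAccessibleMut { name: "Foo" };
    println!("{}", err);
}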

Runtime traits to represent protocols

We currently have "protocols" to implement various functions. These are really just free instance functions with custom name hashes that can't be named dynamically.

This means that protocol impls currently can't be defined dynamically. So there is no way to make this run (which requires the STRING_DISPLAY protocol):

struct Foo;

fn main() {
    let foo = Foo;
    println(`{foo}`);
}

It should be possible to add "runtime traits" to Rune to represent these protocols, allowing for the following:

struct Foo;

impl std::fmt::Display for Foo {
    fn fmt(self, f) {
        write(f, "Foo as a string")
    }
}

fn main() {
    let foo = Foo;
    println(`{foo}`); // -> "Foo as a string"
}

Support modules

Most of the infrastructure for this was added today. We now need the indexing stage to queue up modules to be parsed and indexed. A sketch of the intended script-side usage follows the checklist below.

  • Basic implementation loading from the filesystem.
  • Implement visibility rules.
  • [ ] Make filesystem loading optional (through a setting in Options).
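
As a sketch of the script-side usage this is aiming for (assuming in-file mod declarations; filesystem-backed modules would use a mod name; item plus a separate file):

mod math {
    pub fn add(a, b) {
        a + b
    }
}

pub fn main() {
    math::add(1, 2)
}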

Unreferenced public imports are stripped

e.g.,

pub use other_file::do_something;

pub fn main() {
     do_something; // this line does nothing but if you remove it `do_something` won't be externally visible.
}

This requires one to create bogus references to all public functions, as in https://github.com/tgolsson/aoc-2020/blob/main/scripts/main.rn#L11.

I've noticed similar issues with module-private functions being marked as unused if the main function isn't called main; e.g., in https://github.com/tgolsson/aoc-2020/blob/main/scripts/day1.rn#L3, filter_inner is marked as unused by the LSP.

Fix tracing when sub-vms are being spawned

Currently the CLI is unable to provide either diagnostics or traces when it spawns a sub-vm, because only the instruction pointer of the topmost virtual machine is available.

So if an error happens in e.g. an async function or closure it will only be able to point to the call site, rather than where the error happened.

There are several possible solutions:

  • We can propagate instruction pointers on errors. Sort of like unwinding the stack, but walking up the various VMs and accumulating the various states they were in when the error occurred.
  • Since the VM is single-threaded, it strictly speaking can only do one thing at a time. This should allow for separating runtime state (like the instruction pointer). But the reason we use a sub-vm right now is because futures can be suspended, and it's a convenient way to suspend the state of an execution.

REPL and IDE support for Rune

Do you have any plans to implement a REPL for Rune, or a language server?

What I'd like for my personal projects is to be able to ship a binary that contains a REPL for the project (in this instance a MIDI programming environment for livecoding), and a language server that can be used with VSCode/Emacs/VIM, etc. Maybe something like a Rune-Repl crate and a Rune-RLS crate (naming is hard).

More than happy to experiment with this on my own and see what I end up with, but wasn't sure if you had something in mind, or if there was code that I had missed, or if you had different ideas about this?

Awesome project btw. Very impressed with what you've done so far, and I really like the focus of this project.

Fix evaluation order of index set / get operations to be in line with Rust

Currently we evaluate index set and index get operations from left-to-right, so the following would result in 1:

fn main() { (return 1)[return 2] = return 3; }

But Rust evaluates the operands in the order value, target, index, as you can see if you play around with the following test program (Playground):

fn foo() -> u32 {
    let m = std::collections::HashMap::<u32, u32>::new();

    ({
        if true {
            return 1
        }

        m
    })[return 2] = return 3;

    return 0;
}

fn main() {
    println!("{}", foo());
}

We need to fix rune so that it evaluates expressions in the same order.

Add executeCommand support to LSP

For example, the LSP could expose commands to run, check, or test the current code by sharing some code with rune-cli.

Potentially has some overlap with DAP, for debugging capabilities.

Reference cycles memory leak

This code will slowly eat all available memory:

loop {
    let a = #{};
    a.a = a;  // if you comment this line, everything will be fine
}

It happens every time an object contains a reference to itself.

Memory leak can also be reproduced in Rust without using any objects:

pub struct Leaky {
    leak: Option<Shared<Leaky>>, // Shared can be replaced with Rc<RefCell<T>>
}

impl std::ops::Drop for Leaky {
    fn drop(&mut self) {
        println!("didn't leak");
    }
}

fn main() {
    let foo = Shared::new(Leaky { leak: None });
    let bar = foo.clone();
    foo.borrow_mut().unwrap().leak = Some(bar);
    // Nothing is printed, unless you comment the line above
}

I see a few solutions:

  • Disallow such cycles entirely
  • Add weak references (a plain-Rust Rc/Weak sketch follows this list for comparison)
  • Run a tracing garbage collector infrequently to find cycles (hard to integrate with Rust though)
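
For comparison, this is how plain Rust breaks the equivalent cycle with std::rc::Weak; a weak handle for Shared (the second option above) would follow the same shape:

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Leaky {
    // A weak reference does not keep the value alive, so the cycle is broken.
    leak: Option<Weak<RefCell<Leaky>>>,
}

impl Drop for Leaky {
    fn drop(&mut self) {
        println!("didn't leak");
    }
}

fn main() {
    let foo = Rc::new(RefCell::new(Leaky { leak: None }));
    foo.borrow_mut().leak = Some(Rc::downgrade(&foo));
    // "didn't leak" is printed when `foo` goes out of scope.
}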

Support macros

With 49af732 we now have a functional macro subsystem. With this it should be possible to use and parse dynamic macro fn's like these:

macro fn hello(context, ast) {
    // parse ast and return a modified stream
    stream
}

From a compiler perspective, macro functions aren't super special. We would have to implement an intermediate assembly state for functions so a macro vm can be assembled on-the-fly out of this. The same query system can be used to cache the compilation of functions, while marking the things used as used_by_macro so we know we shouldn't include them in the main compilation phase unless they're pulled in by something else.

The one caveat is that macros can't depend on themselves, so detection for circular queries has to be added.

Todo

  • Support macro expansions in declaration positions.
  • Support procedural macros as attributes (depends on #43).
  • Add a limit to the number of macro expansions supported.
  • Support macro expansions in pattern positions.
  • Support macro use in any item position before it's been imported (issue #525).
  • Support macro imports without crate specification, i.e. the :: prefix (also issue #525).

Auto-formatter

It'd be very nice to have an auto-formatter for Rune which can take a source-snippet and format it properly.

Define a preliminary bytecode format for units

Units have been designed so that they can be serialized. This can be used in the future to cache a compilation or to distribute units in bytecode format for later execution.

This task is to design a preliminary format which will probably be subject to change before becoming stable.

A unit contains:

  • A sequence of instructions, starting at 0x0.
  • A hash-to-function map.
  • A hash-to-type information map, which answers the question "what is this type hash?".
  • Zero-based collection of static strings, byte arrays, and object keys.

To allow for checking that a context is compatible to execute a unit, the format should also include:

Expanding Any derive to provide more features automatically

The Any derive should be expanded to provide the following glue code automatically:

Getters and setters for any fields which are pub, and not explicitly marked as #[any(skip)].

Constructor glue so that the type can be constructed in Rune with syntax appropriate for the type.

For example:

#[derive(Any)]
struct Npc {
    x: i64,
    y: i64,
}

#[derive(Any)]
struct Point(i64, i64);

Can be constructed like this in Rune (unless any field is marked as non-public or #[any(skip)]):

let npc = Npc { x: 10, y: 10 };
let p = Point(20, 40);

Note: this would be implemented by using a generated constructor and mapping the appropriate constructor syntax in Rune to this constructor using CompileMeta.

Checklist

  • Generating getters and setters for fields not marked with #[any(skip)].
  • Generating constructors and mapping them in the Rune compiler through CompileMeta.
    • For tuple types.
    • For struct types.
    • For enum types (tuple and struct variants).

Compilation of rune 0.7.0 fails due to codespan_reporting

I tried the following code:

src/main.rs

fn main() {}

Cargo.toml

[package]
name = "rune_issue"
version = "0.1.0"
authors = ["Ruby Lazuli"]
edition = "2018"

[dependencies]
rune = "0.7.0"

I expected the code to compile with no errors and do nothing.

Instead, this happened:

$ cargo run
...
   Compiling runestick v0.7.0
   Compiling rune v0.7.0
error[E0432]: unresolved import `codespan_reporting::diagnostic`
  --> /home/sparkpin/.cargo/registry/src/github.com-1ecc6299db9ec823/rune-0.7.0/src/diagnostics.rs:15:25
   |
15 | use codespan_reporting::diagnostic::{Diagnostic, Label};
   |                         ^^^^^^^^^^ could not find `diagnostic` in `codespan_reporting`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0432`.
error: could not compile `rune`

Tested rustc versions:

rustc 1.48.0 (7eac88abb 2020-11-16)
binary: rustc
commit-hash: 7eac88abb2e57e752f3302f02be5f3ce3d7adfb4
commit-date: 2020-11-16
host: x86_64-unknown-linux-gnu
release: 1.48.0
LLVM version: 11.0
rustc 1.51.0-nightly (257becbfe 2020-12-27)
binary: rustc
commit-hash: 257becbfe4987d1f7b12af5a8dd5ed96697cd2e8
commit-date: 2020-12-27
host: x86_64-unknown-linux-gnu
release: 1.51.0-nightly

Runtime monomorphization/JIT-optimization

This is a topic I've been thinking about for a while based on the following observations:

  • Value conversions take an observable amount of time in flamegraphs
  • Function lookup and calling takes an observable amount of time
  • Untyped ISA leads to an interpreter with heavy branching and in a mixed workload there's a low likelihood that the branch predictor will work well

Thus, I'd like to propose a secondary compilation stage (or maybe deferring primary compilation). In this new mode of operation, every function call becomes a call to a generic function over the input types. Invocation leads to monomorphization over all input value types, including the receiver.

Thus, code along the lines of

fn add(a, b) {
    a + b
}

fn main() {
    let value1 = add(2, 3); // case A
    let value2 = add(2.0, 3); // case B
    let value3 = add(4, 5); // case A, reuse compilation artifacts
}

would compile the following code for Case A:

fn add<Integer, Integer>(a: Integer, b: Integer) -> Integer {
    Inst::IntOpPlus();
} 

and the function call would reach this function, so the interpreter execution would continue branch-free. For the second case, we'd transform the current runtime error into a compilation/monomorphization error, and the final call would reuse the first call's compilation artifacts. This'd apply recursively, so monomorphized functions would end up calling other monomorphized, direct-jump functions. This'd work for both native and Rune function calls.

There are some challenges here that I foresee, which share some overlap/complexities/incompatibilities with SSA IR. The biggest challenge by far is that not all code paths are created equal during invocation. Code that doesn't execute cannot be monomorphized, and we'll thus need to incrementally compile it as more code paths are explored. A function can therefore exist in three states: raw, partially monomorphized, and fully monomorphized, and no matter whether these states are explicit or implicit we need to deal with them.

I still think this'd open up the door for a lot of other optimizations when no type information/function hashes/... need to be looked up at runtime. Like SSA, it will likely be more impactful for a register machine than a stack machine, but either approach should benefit. I think that straight out of the box we'd see improvements for points 2 and 3 above, while point 1 likely requires a bit more thought.

I tried implementing this as part of the current instruction set and tracing during execution but it became very complex to manage modification in-place. I think a more tractable approach would be to implement a streaming compiler from IR to a modified set of runtime instructions, or streaming to a lower level instruction set. This'd require some funky tricks to switch to interpretation when one hits an unknown branch and then inserting those branches back but it seems tractable.

This is mostly just a brain dump at the moment to get some initial thoughts while I'm reading papers on JIT-compilation.

Support assign-based operators for objects and tuples

Related to #38

We currently don't support these:

let a = #{};
a.foo = 0; // so far so good
a.foo += 1; // nope

There are currently no instructions which have an addressing mode of "operate on a field" or "operate on a tuple field", and to support this they would need to be added. Possibly these could be added to the existing operators (AddAssign etc...).

Note that desugaring to the following is not feasible, since it would create a copy of types which are copy:

let a = #{};
a.foo = 0; // so far so good
let temporary = a.foo; // copy created here.
temporary += 1; // modified copy
assert(a.foo == 1); // would fail

Implement a item-based compile cache

For everything that can be queried in the compiler, it's feasible to cache the results of the query in case they are unchanged.

This would require structural hashing of anything in the ast which might have an effect on the result of the compilation, but would be a big boon in the future if this cache can be stored somewhere.

Support attributes

Support Rust-style attributes:

#[hello]
fn test() {
}

#[hello]
struct Foo {
}

These will be invoked as macros with the AST of the item they annotate as input, and anything produced by the macro will be appended to the source (to work the same way as Rust procedural macros).

  • Implement parsing (#83).
  • Add compiler support to process attributes.

Implementation note: If you need inspiration for how to structure the AST, look into Attribute in the syn project.

Support const evaluation

Support constant expressions, like these:

const VALUE = #{hello: "World"};

They can reference other constant expressions, but will fail to compile if there are cycles:

const VALUE = #{hello: OTHER};
const OTHER = "World";

Fail:

const VALUE = #{hello: OTHER};
const OTHER = #{hello: VALUE};

Plumbing is being built in #93. As it stands, constant value resolving is hooked into the query phase through a component called the ConstCompiler, which will probably be renamed to something involving the word eval, since that is really all it does. Essentially it's an eval function which does AST walking.

When CompileMeta::Const is resolved for a query, it includes the calculated ConstValue. All const resolving is memoized.

Checklist
  • constant fns (which will always be constant evaled if used in a non-const context).
  • const blocks const { <block> }.
    • This will need yet another re-implementation of scopes. This time simpler, since we don't need to keep track of Vm state.
  • if expressions (started in #93).
  • while expressions (started in #93).
  • String operations.
  • Template strings (started in #93).
  • integer operations (started in #93).
  • float operations (started in #93).
  • Literals (started in #93, #94).

Current state

The current state of the constant compiler is a very naive ast walker. With the existing design it's not possible to implement break or returns, because each expression is evaluated recursively and that would require returning early everywhere in a manner which would be very cumbersome.

Instead, the constant compiler should be redesigned to execute a stack, where returning simply pops the execution context of each scope until it has reached the thing it should break out of or return from. The root of the execution simply executes the AST that is at the top of the stack until it receives instructions to behave differently.

But the current constant compiler is fully capable of understanding cycles, and can do string templating, like this:

const LIMIT = 10;
const URL = `https://api.example.com/query?limit={LIMIT}`;

fn main() {
    let _ = http::get(URL);
}

Add more detail to DebugInst and Diagnostics

In order to provide good diagnostics it'd be helpful if the DebugInst not only pointed to the span of the instruction but also any related parts that are useful for diagnostics. For example, in a let binding it would make sense to have spans for all lhs elements as well as rhs. In a binary op, having the two operands would make sense. This can then be used to generate more detailed diagnostics with more exact spans for codespan-reporting.

Separate compile-time and runtime metadata

This will make the latter more sparse. Runtime metadata is, for example, the number of arguments a dynamic function takes (so it can be checked), or which type hash an instance function is associated with.

Closures currently have a set of captured variables in there, which is actually only needed at compile time.

See the Meta enum.

Conflicts in type hashes

This issue is open to document a potential soundness issue in Rune that might arise in case multiple Any types share the same type hashes.

This is documented in detail in the book. This issue is kept open to highlight the issue and leave it open to discussion.

Unsupported Assign Expression

When running:

fn main() {
            let alpha = #{
                beta: #{}
            };
            alpha.beta.zeta = 4
}

Behavior: it produces an UnsupportedAssignExpr compile error.
Expected behavior: the value 4 is assigned to a new property zeta on alpha.beta.

Allow using external objects containing references

For example:

#[derive(runestick::Any, Default)]
struct MyObject<'a> {
    something: &'a u32,
}

Currently, runestick::Any requires std::any::Any which requires 'static, and also UnsafeFromValue and UnsafeToValue require 'static, so this is impossible. Also the derive macro doesn't handle the lifetime correctly.

It would be nice if this were allowed when not using async.

Closure Type

How does one receive a closure as a value? Ex:

use rune_testing::*;
fn main() -> runestick::Result<()> {
    let function: Function = rune! {
        Function => r#"fn main() {
            |a, b| a + b
        }"#
    };
    println!("{}", function.call::<(i64, i64), i64>((1, 3))?);
    println!("{}", function.call::<(i64, i64), i64>((2, 6))?);
    Ok(())
}

Would panic with: thread 'main' panicked at 'program to run successfully: VmError { kind: Expected { expected: StaticType(StaticType { name: "Function", hash: Hash(0x45b788b02e7f231c) }), actual: Hash(Hash(0x9aa62663879132fb)) } }', src/main.rs:4:30
Expected behavior: Being able to receive the closure and call it.
(My main use case for storing closures would be callbacks, e.g. a button being pressed, so somewhat similar to the example.)

doesn't doesn't

[via hackernews... miscompile, wrong closure] ...the one that doesn't doesn't perform...

Implement Rust Enum to Rune Support

Most likely this means setting it up so we can take an enum like

pub enum Kind {
    Npc,
    Player,
}

and add it to Rune like:

let kind = module.en::<Kind>().unwrap();
kind.add_types(&[("Npc", Kind::Npc), ("Player", Kind::Player)])?;

or like

module.en::<Kind>()?;
module.en_type(&["Kind", "Npc"], Kind::Npc)?;
module.en_type(&["Kind", "Player"], Kind::Player)?;

Basic compiler optimizations

This lowers the complicated AST to an intermediate representation, on which more optimizations can be performed.

Constant folding

With the advent of the new IR, if the AST is fed through the IrCompiler it would be possible to mark expressions which are fully constant. If this is the case, they can be constant folded, so:

fn foo() {
    1 + 2
}

Which currently generates:

fn foo() (0x27dc4241305d08c5):
  0000 = push 1
  0001 = push 2
  0002 = op +
  0003 = return

Could instead generate:

fn foo() (0x27dc4241305d08c5):
  0000 = push 3
  0003 = return

Inlining

Trivial function calls can either be fully inlined, or at least they can be reduced to avoid copies. This currently generates more instructions than necessary:

fn foo(a, b) {
    a + b
}

Unit:

fn foo(a, b) (0x27dc4241305d08c5):
  0000 = copy 0 // var `a`
  0001 = copy 1 // var `b`
  0002 = op +
  0003 = clean 2
  0004 = return

But it should be possible to reduce it to:

fn foo(a, b) (0x27dc4241305d08c5):
  0002 = op +
  0004 = return

Because the incoming parameters (a and b) are already pushed on the stack in the correct order.

Support of global changeable variables

Hi all,

I'm switching from rhai to rune because of the async support of rune.
One feature I'm really missing is global changeable variables.

In my case it's needed because I have up to 30 different inputs for the engine which all can change within the engine (by rhai code from the user) and need to be read out at the end again.

My actual workaround is to have a struct which is the input of each function and needs to be the output. But this is not really user friendly, and it prevents additional use cases where I want to return function-specific data in addition.
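
A sketch of that workaround in Rune (field and function names are made up for illustration): every function takes the state object and hands it back, so the host can read the final values out at the end.

fn step(state) {
    state.counter = state.counter + 1;
    state
}

pub fn main() {
    let state = #{ counter: 0 };
    let state = step(state);
    state.counter
}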

Macros producing statements are evaluated out-of-order

Macros in statement positions might produce expressions or statements. This causes issues if the macro produces an item, because items need to be indexed before the macros that use them are expanded.

As an example, println! can use a constant value as the first argument, but this would currently cause issues:

pub fn main() {
    println!(FORMAT, "World");
    declare_format!(); // produces: `const FORMAT = "Hello {}";`
}

This could be solved in following ways:

  • Identify with metadata that a macro needs to use constant evaluation, in which case it is not permitted to expand to an item.
  • Not allowing constant evaluation in macros, which would be a shame.
  • Live with the fact that combining const values and macros has an inherent evaluation order.

Related: #153

Use dynamically dispatched generics to reduce namespace contamination

Come up with a generics scheme that works at runtime and allows for:

  • Dynamically dispatching a function call to the correct implementation depending on the type of the arguments.
  • Directly addressing the exact generic implementation at compile-time using generic function-call syntax (e.g. std::parse::<i64>(string)).

Suggested implementation details

The idea is that native generic functions are monomorphized at registration time. Take:

str::parse::<int>("42");
"1,2,3,4".split::<char>(',');
"1,2,3,4".split::<str>("2,3"); // Note: different argument type, different function impl is needed because its a different kind of pattern.

It would actually look up a function named "parse", whose hash includes the int type constructor.
Programmatically this would be something like:

// NB: this is the regular parse function hash.
let parse_fn_hash = Hash::type_hash(&["parse"]);
Hash::type_hash_generic(parse_fn_hash, (i64::type_hash(),))

So what happens if we call the function without specifying the generic arguments?

This will error, since there's no generic type information available. We don't know which str::parse impl to pick:

str::parse("42");

The above case has to be specified at compile time, because the type of the argument doesn't determine the implementation to use:

str::parse::<i64>("42");

For cases where we can look at the type of the argument (like String::split), the generic function can be resolved by first looking up the split instance fn, then using the metadata to resolve all generic parameters and noticing that they are input arguments, so the necessary type hash can be generated:

"1,2,3,4".split(',');

Or it can be called more explicitly to avoid the dynamic dispatch:

"1,2,3,4".split::<char>(',');

Add support for bitwise operations

This includes (a sketch of the intended usage follows the list):

  • Shl <<
  • ShlAssign <<=
  • Shr >>
  • ShrAssign >>=
  • BitOr |
  • BitOrAssign |=
  • BitXor ^
  • BitXorAssign ^=
  • BitAnd &
  • BitAndAssign &=
  • ! (bitwise not) for numbers.
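
Once supported, code along these lines should work (a sketch of the intended surface; the *Assign variants would apply analogously to variables and fields):

pub fn main() {
    let a = 1 << 4;   // Shl
    let b = a >> 2;   // Shr
    let c = a | b;    // BitOr
    let d = c ^ a;    // BitXor
    d & !b            // BitAnd and bitwise not
}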

Stateful Hot-Reloading.

First of all, thanks for this great project 😄
Could we at least have hot-reloading? I would like to see it built into the engine (VM) itself.

An embeddable language for Rust would usually be used in one of these cases (at least):

  1. GameDev 🎮
  2. User scripting (plugins). 🔌
  3. Extending the program logic at runtime without recompiling the whole project.

All of these cases would benefit from hot-reloading, especially if we could get a stateful hot reload.

I would like to contribute and push the language and the VM to see this feature happen. 😄

Figure out how to permit delayed iterator conversion for values

It would be great if certain iterator functions could make use of the INTO_ITER implementation which is registered for iterators. For example, once we implement flat_map it would be nice if we could change this:

let values = [1, 3, 5, 7].iter().flat_map(|n| [n, n + 1].iter());

To:

let values = [1, 3, 5, 7].iter().flat_map(|n| [n, n + 1]);

This would use the INTO_ITER implementation that is already available for Vec to convert it into an iterator.

This is currently difficult because these implementations require access to the current unit and context. These are specifically not available in values during ToValue and FromValue conversion because these traits need to be useful outside of the Vm.

An alternative is to introduce a new type ValueWithVm (temporary name) and optionally provide the Vm through thread-local storage. If the Vm is not available during conversion, a VmError would be raised.

This would mean that the FromValue impl for ValueWithVm would error if called outside of a Vm.

Lines starting with an object literal are hidden by mdbook

Lines in a code block starting with a hash (#) are hidden by mdbook, which is generally good for hiding lines that are irrelevant while still allowing doctests/playground runs to build. However, this clashes with Rune object literals, which also start with a hash. This can be seen in the last example of the Pattern Matching chapter.

The example displays as:

fn describe_car(car) {
    match car {
        _ => "Can't tell ๐Ÿ˜ž",
    }
}

fn main() {
    println(describe_car(#{"model": "Ford", "make": 2000}));
    println(describe_car(#{"model": "Honda", "make": 1980}));
    println(describe_car(#{"model": "Volvo", "make": 1910}));
}

with the output

$> cargo run -- scripts/book/pattern_matching/fast_cars.rn
Pretty fast!
Can't tell 😞
What, where did you get that?
== () (5.3533ms)

...which doesn't make very much sense, until you view the source file:

fn describe_car(car) {
    match car {
        #{"make": year, ..} if year < 1950 => "What, where did you get that?",
        #{"model": "Ford", "make": year, ..} if year >= 2000 => "Pretty fast!",
        _ => "Can't tell ๐Ÿ˜ž",
    }
}

fn main() {
    println(describe_car(#{"model": "Ford", "make": 2000}));
    println(describe_car(#{"model": "Honda", "make": 1980}));
    println(describe_car(#{"model": "Volvo", "make": 1910}));
}

Define a format for external definitions

This will be required for native items (functions, types, macros, ...) to be known by the language server, so that scripts can be compiled without having to load the native modules in.

Preliminarily the definition format can be like Rune, but without declaration bodies:

#![definition(mod = game)]

/// Always opaque, body is not known.
struct Player {
    /// The health of the player.
    #[getter]
    health,
}

impl Npc {
    /// The documentation for an instance function.
    fn instance_fn();
}

/// The documentation for a free function.
fn free_fn();

Furthermore, we can load the definition into the module, which will then fail to load in case definitions are missing or wrong:

fn module() -> Result<runestick::Module, runestick::ContextError> {
    let module = runestick::Module::from_definition(include_str!("path/to/game.d.rn"))?;
    // install things into the module.
    Ok(module)
}
