
luajit-test-cleanup's Introduction

********************************************
** THIS IS NOT THE TEST SUITE FOR LUAJIT! **
********************************************

In fact it doesn't even have the steps to build it or run it,
so please don't complain.

This repo is a place to collect and cleanup tests for LuaJIT.
They should eventually be merged into the main LuaJIT repo.

It's definitely not in the best state and needs a serious
cleanup effort. Sorry.


Many issues need to be resolved before the merge can be performed:

- Choose a portable test runner
  Requirement: very few dependencies, possibly Lua/Shell only

- Minimal test runner library, wherever assert() is not enough

- Debugging test failures is a lot simpler when individual tests can still
  be run from the LuaJIT command line without any big dependencies

- Define consistent grouping of all tests

- Define consistent naming of all tests

- Split everything into a lot of tiny tests

- Reduce time taken to run the test suite
  Separate tiers, parallelized testing

- Some tests can only run under certain configurations (e.g. FFI)

- Some tests need a clean slate to give reproducible results
  Most others should be run from the same state for performance reasons

- Hard to check that the JIT compiler actually generates the intended code
  Maybe use a test-matching variant of the jit.dump module

- Portability concerns

- Avoiding undefined behavior in tests or ignoring it

- Matrix of architectures + configuration options that need testing

- Merge tests from other sources, e.g. the various Lua test suites.

- Tests should go into the LuaJIT git repo, but in separate tarballs
  for the releases
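To illustrate the "minimal test runner library" point above: wherever plain assert() is not enough, a small helper along these lines would do. This is a hypothetical sketch, not code from this repo; the check() name and signature are my own invention.

```lua
-- Hypothetical helper sketch (not code from this repo): a check() that
-- raises a readable error message when actual ~= expected.
local function check(actual, expected, what)
  if actual ~= expected then
    error(string.format("%s: expected %s, got %s",
      what or "check", tostring(expected), tostring(actual)), 2)
  end
end

-- Tests built on such a helper stay runnable from the plain LuaJIT
-- command line, with no dependencies beyond the interpreter itself:
check(string.format("%d", 42), "42", "string.format %d")
check(2^10, 1024, "power of two")
print("all checks passed")
```

Keeping the helper this small preserves the property mentioned above: any individual test file remains debuggable by running it directly under luajit.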


There are some benchmarks, too:

- Some of the benchmarks can be used as tests (with low scaling)
  by checksumming their output and comparing against known good results

- Most benchmarks need different scalings to be useful for comparison
  on all architectures
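The checksumming idea above could be sketched like this. The hash and the benchmark kernel here are placeholders of my own, not code from this repo; a real setup would record the known-good value once from a trusted run.

```lua
-- Sketch: turn a benchmark run into a test by checksumming its output
-- and comparing against a known-good value recorded on a trusted run.
local function checksum(s)
  local h = 0
  for i = 1, #s do
    h = (h * 31 + s:byte(i)) % 2^32  -- simple rolling hash, fits in a double
  end
  return h
end

-- Hypothetical usage: collect the benchmark's output, then compare.
local out = {}
local function bench(n)            -- stand-in for a real benchmark kernel
  local acc = 0
  for i = 1, n do acc = acc + i * i end
  out[#out + 1] = tostring(acc)
end
bench(100)                         -- "low scaling" run, as suggested above
local got = checksum(table.concat(out, "\n"))
print("checksum:", got)
-- assert(got == KNOWN_GOOD)       -- value recorded from a known-good run
```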


Note from Mike Pall:

I've removed all tests of undeterminable origin or that weren't explicitly
contributed with the intention of being part of a public test suite.

I hereby put all Lua/LuaJIT tests and benchmarks that I wrote under the
public domain. I've removed any copyright headers.

If I've forgotten an attribution or you want your contributed test to be
removed, please open an issue.

There are some benchmarks that bear other copyrights, probably public
domain, BSD or MIT licensed. If the status cannot be determined, they
need to be replaced or removed before merging with the LuaJIT repo.

luajit-test-cleanup's People

Contributors

corsix, dibyendumajumdar, ladc, wiladams


luajit-test-cleanup's Issues

Test 220 failing on luajit v2.1

ubuntu@arm:~$ ./sandbox/orig/LuaJIT/src/luajit LuaJIT-test-cleanup/test/test.lua 220
[220/508] lib/string/format/num.lua --- ExploringBinary.com/print-precision-of-dyadic-fractions-varies-by-language/
LuaJIT-test-cleanup/test/lib/string/format/num.lua:13: expected string.format("%.99e", "0") == "4.940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359924e-324", but got "0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e+00"
stack traceback:
[C]: in function 'error'
LuaJIT-test-cleanup/test/lib/string/format/num.lua:13: in function 'check'
LuaJIT-test-cleanup/test/lib/string/format/num.lua:163: in function <LuaJIT-test-cleanup/test/lib/string/format/num.lua:160>
[C]: in function 'xpcall'
LuaJIT-test-cleanup/test/test.lua:378: in function 'execute_plan'
LuaJIT-test-cleanup/test/test.lua:413: in main chunk
[C]: at 0x00469c01
0 passed, 1 failed

ARM system, luajit revision: 1d7b5029c5ba36870d25c67524034d452b761d27

Testing framework

Agree on a testing library and a test runner with minimal dependencies, preferably Lua/shell only.
Test cases should still be easy to run without any (big) dependencies.

Test 366 failing on LuaJIT v2.1

Test 366 is failing for me with the v2.1 branch (commit LuaJIT/LuaJIT@d3e36e7):

$ ~/git/luajit/src/luajit test.lua 366
[366/508] lib/contents.lua --- pre-5.2 table
lib/contents.lua:80: got: "concat:foreach:foreachi:getn:insert:maxn:move:remove:sort"
expected: "concat:foreach:foreachi:getn:insert:maxn:remove:sort"
stack traceback:
        [C]: in function 'error'
        lib/contents.lua:17: in function 'check'
        lib/contents.lua:80: in function <lib/contents.lua:79>
        [C]: in function 'xpcall'
        test.lua:378: in function 'execute_plan'
        test.lua:413: in main chunk
        [C]: at 0x00404a70
0 passed, 1 failed

Any tips on how to resolve this?
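For context, the check in lib/contents.lua enumerates the keys of the table library, so the extra "move" entry in the "got" string comes from the runtime itself. A sketch of the equivalent code (the conclusion about table.move is my reading of the output, not something stated in this repo):

```lua
-- Sketch of what the failing check effectively does: enumerate the
-- table library's keys in sorted order and join them with ":".
local keys = {}
for k in pairs(table) do keys[#keys + 1] = k end
table.sort(keys)
print(table.concat(keys, ":"))
-- On LuaJIT v2.1 the output includes "move": table.move is a Lua 5.3
-- function that v2.1 provides, so the test's expected string for the
-- "pre-5.2 table" case likely needs "move" added, rather than this
-- being a runtime bug.
```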

Add Lua 5.1/5.2 test suites

This issue will track progress for adding the test suites from Lua (http://www.lua.org/tests/) to this repository. Modifications will be made to the test suites where necessary to make them work with LuaJIT.

It may be possible to backport additional tests from the 5.2.x and 5.3.x series where the language and libraries are compatible.

I will do the work on a branch initially and merge it to master once it is working.

License: The Lua Test suites are under MIT license.

Using a declarative data-driven approach to organize the test suite (like TestML)

I suggest we use a declarative data-driven format to organize the test cases in each individual test file so that it is independent of the actual test runner and framework. A good test specification syntax I've been using myself for many years is TestML:

http://testml.org/specification/language//index.html

This way we are free to choose how, and in what combinations of ways, to run the tests without touching the test files themselves.

This will also make it easier to add unit tests for LuaJIT's internal pipeline (like the JIT compiler's individual optimization passes and the intermediate IR form) in the future.

I hope we can draw a clean line between test case representation and test running strategies.

Test 219 failing on luajit trunk

Hello.

$ luajit test.lua 219
[219/492] lib/string/format/num.lua --- ExploringBinary.com/print-precision-of-dyadic-fractions-varies-by-language/
lib/string/format/num.lua:13: expected string.format("%.99e", "0") == "4.940656458412465441765687928682213723650598026143247644255856825006755072702087518652998363616359924e-324", but got "0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e+00"
stack traceback:
[C]: in function 'error'
lib/string/format/num.lua:13: in function 'check'
lib/string/format/num.lua:163: in function <lib/string/format/num.lua:160>
[C]: in function 'xpcall'
test.lua:378: in function 'execute_plan'
test.lua:413: in main chunk
[C]: at 0x55fc2171a080
0 passed, 1 failed

x86_64 system, luajit revision: e296f56b825c688c3530a981dc6b495d972f3d01
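Both this report and the test 220 report above point at the same check. A minimal reproduction from the LuaJIT command line might look like this; note that the exact operand of the failing check is an assumption on my part, inferred from the expected string in the error message.

```lua
-- Reproduction sketch (my guess at the value under test, based on the
-- expected string in the error message): 2^-1074 is the smallest
-- positive denormal double, approximately 4.9406564584124654e-324.
local denormal = 2^-1074
print(string.format("%.99e", denormal))
-- Getting 0.000...e+00 here would suggest the value was flushed to
-- zero, or that the platform's printf lost the denormal, before
-- formatting.
```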

CI for benchmarks online

This repo is cool! I am really happy to have a test suite. This seems great for people who want to maintain their own branches and keep track of how they compare with everybody else's. Like, have I broken something? Have my optimizations worked? Has somebody else made some optimizations that I should merge? etc. Just now I would like to maintain a branch called lowlevel to soak up things like intrinsics and DynASM Lua-mode so this is right on target for me.

I whipped up a Continuous Integration job to help. The CI downloads the latest code for some well-known branches, runs the benchmark suite 100 times for each branch, and reports the results. This updates automatically when any of the branches change (including the benchmark definitions).

The reason I run the benchmarks 100 times is to support tests that use randomness to exercise non-determinism in the JIT, like roulette (#9). Repeated tests mean that we can quantify how consistent the benchmark results are between runs, and once we have a metric for consistency then it is more straightforward to optimize (see LuaJIT/LuaJIT#218).

The branches I am testing now are master, v2.1, agentzh-v2.1, corsix/x64, and lukego/lowlevel. If anybody would like a branch added (or removed) just drop me a comment here. Currently the benchmark definitions are coming from my fork because I wanted to include roulette to check that variation is measured correctly.

[Screenshot of the first benchmark graph]

Hope somebody else finds this useful, too! Feedback & pull requests welcome. I plan to keep this operational.

Convert, split up and reorganize tests

Split up the tests into many small tests, convert them into the format required by the testing framework where applicable, and reorganize them into the new directory structure.

Commit access

Anyone who wants to help with the cleanup effort and feels qualified, please apply here for commit access to this repo.

Note: this does not automatically grant you commit access to the LuaJIT main repo, which is a tad more sensitive.

Contributor PR workflow

Howdy! I am a first-time contributor with PR #9 and I have some questions:

  • Who is responsible for deciding when to merge my code?
  • What are this person's requirements?
  • What actions do I need to take to satisfy them?
