
marioai's People

Contributors

karakovskiy, nikolaysohryakov, melink14

Watchers

James Cloos

marioai's Issues

Level Generation is not consistent between certain runs using the same environment.

I spent all day looking into this one, and I'm not sure of the cause but I have 
an isolated test case and some fixes.

What steps will reproduce the problem?
1. Download and run LevelGenBugTest.java.
2. Note the fitness of the 2nd run.
3. Run the replay of the 2nd run.

What is the expected output? What do you see instead?
Expect the fitnesses to be the same. Instead, the fitnesses are different.

What version of the product are you using? On what operating system?
Newest from Subversion. Windows 7 etc.

Please provide any additional information below.
This is a bug that happens specifically when doing multiple runs in the same 
instance with different arguments.  Additionally, the specific trigger seems to 
be the '-lb on' argument.  If this argument is set then any following run which 
does not have that set will generate a different level than it should.

You can test that by switching to '-lb off' and noticing that the problem fixes 
itself.

I'm not sure exactly why the problem happens, but it can be fixed by adding one 
line to LevelGenerator.createLevel(), right after line 110 and before line 111:
counters = new Level.objCounters();

It seems like old data from the previous runs is messing up subsequent runs but 
I can't see how it's happening exactly.
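
For context, the fix amounts to re-initialising the counters at the start of level 
creation so nothing carries over from an earlier run. A minimal sketch of where the 
line would sit, assuming the surrounding method looks roughly like the current 
createLevel() (the signature and the neighbouring statements are placeholders, not 
copied from the source):

    public static Level createLevel(...)   // actual signature differs; placeholder only
    {
        // ... existing setup around lines 100-110 ...

        // reset the static per-level object counters so state from a previous
        // run in the same JVM (e.g. one started with '-lb on') cannot leak in
        counters = new Level.objCounters();

        // ... level generation continues as before (old line 111 onward) ...
    }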

It's not a problem with the Recorder because I have my own replay system which 
shows the same problem and is fixed similarly. (However, if the Recorder were 
working a bit better it wouldn't show the problem; see issue 10.)

Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 9:30

Attachments:

evaluateSubmission in LearningEvaluation doesn't use learningAgent to run simulations

What steps will reproduce the problem?
1.  In LearningEvaluation.java set your agent at line 106 in the "main" method.
2.  Run LearningEvaluation.
3.  Wait for the first level to finish.

What is the expected output? What do you see instead?
The expected output is to see your learned agent playing the level it 
learned.  Instead, the visualization shows the human keyboard agent.

What version of the product are you using? On what operating system?
Latest from Subversion. IntelliJ 9.0.3.  Windows 7 x64

Please provide any additional information below.

We pass in both a CmdLineOptions argument and a LearningAgent argument to 
evaluateSubmission, but what evaluateSubmission does with those arguments is a 
bit strange.  The key bit to notice is that cmdLineOptions does not have an 
agent set.

In evaluateSubmission, a ProgressTask task is made using the cmdLineOptions.  
This means the task has no agent except the default agent!  A similar problem 
occurs later with basicTask.

I think the best solution is to add the agent to the cmdLineOptions in 'main'; 
then, in evaluateSubmission, you can make the task as normal and use 
cmdLineOptions.getAgent() for the agent, though the unsafe cast is annoying.
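
A rough sketch of that suggestion; the setAgent name is an assumption paired with 
the getAgent() call mentioned above, and everything around it is only illustrative:

    // in 'main', before calling evaluateSubmission:
    cmdLineOptions.setAgent(learningAgent);   // hypothetical setter matching getAgent()

    // in evaluateSubmission, read the agent back from the same options so the
    // tasks see the learned agent instead of the default keyboard agent:
    LearningAgent agent = (LearningAgent) cmdLineOptions.getAgent();   // the unsafe cast noted above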

Original issue reported on code.google.com by [email protected] on 27 Oct 2010 at 3:37

Replays cause a null pointer exception

What steps will reproduce the problem?
1. Record a run by calling cmdLineOptions.setRecordFile("name") before an episode.
2. Replay the file by running Main with "-rep name" (see the sketch below).
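
A minimal sketch of those two steps, using only the calls and flags named in this 
report (everything around them is a placeholder):

    // step 1: record a run
    cmdLineOptions.setRecordFile("name");
    // ... run one episode as usual ...

    // step 2: replay the recording
    //   java ch.idsia.scenarios.Main -rep name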

What is the expected output? What do you see instead?
Expect to see the replay.  Instead, Java prints out a null pointer exception:

java.lang.NullPointerException
    at ch.idsia.benchmark.mario.engine.LevelScene.reset(LevelScene.java:992)
    at ch.idsia.benchmark.mario.environments.MarioEnvironment.reset(MarioEnvironment.java:92)
    at ch.idsia.benchmark.tasks.BasicTask.reset(BasicTask.java:56)
    at ch.idsia.scenarios.Main.main(Main.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
Exception in thread "main" java.lang.NullPointerException
    at ch.idsia.benchmark.mario.engine.LevelScene.reset(LevelScene.java:1020)
    at ch.idsia.benchmark.mario.environments.MarioEnvironment.reset(MarioEnvironment.java:92)
    at ch.idsia.benchmark.tasks.BasicTask.reset(BasicTask.java:56)
    at ch.idsia.scenarios.Main.main(Main.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)

What version of the product are you using? On what operating system?
Latest from subversion. IntelliJ 9.0.3. Windows latest x64

Original issue reported on code.google.com by [email protected] on 27 Oct 2010 at 5:03

  • Merged into: #5

LearningEvaluation usage of setArgs ambiguous

There is no bug here but I wanted to raise this issue since I (personally) 
can't tell if the behavior was intentional.

In the 'main' function of LearningEvaluation there are 5 levels, each of which 
starts with a cmdLineOptions.setArgs("...") call to set the options for the 
level.

This is fine, but because options persist between runs (they're all stored in 
optionsHashMap) all previous options are kept unless they are replaced by a new 
value.

This might be a problem if someone thinks that each run is separate and adds an 
argument that later runs don't use.  Those runs will still have that argument 
set!

For instance, in the current LearningEvaluation.main(), 'level 1' has '-le off' 
set.  However, 'level 2' doesn't mention '-le' at all.  Did the original author 
mean to have default enemies?  Or did he leave it out knowing that '-le off' 
would propagate from 'level 1'?  I don't know, but it seems like such a 
situation would be bad.
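
A compressed sketch of the pattern using only flags already mentioned in these 
reports; the constructor call and the exact argument strings are assumptions, the 
point is just the persistence:

    CmdLineOptions cmdLineOptions = new CmdLineOptions(args);

    // 'level 1': enemies explicitly disabled
    cmdLineOptions.setArgs("-le off");
    // ... run level 1 ...

    // 'level 2': '-le' is not mentioned here, but because every option is kept
    // in optionsHashMap, the '-le off' from level 1 is still in effect.
    cmdLineOptions.setArgs("-lb on");
    // ... run level 2 ...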

What if someone else were shuffling levels around in the function?  They might 
miss such a subtle detail.

Again, not a real bug.  Just something to consider.

Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 10:22

LevelSceneTest: testGetSerializedLevelSceneObservationZ fails

What steps will reproduce the problem?
1. Checkout MarioAI from source.
2. Run Unit Tests.

What is the expected output? What do you see instead?
I expected to see no output, but instead I see an assertion error:

***********************************************************************
junit.framework.AssertionFailedError: expected:<0> but was:<-60>
    at ch.idsia.unittests.LevelSceneTest.testGetSerializedLevelSceneObservationZ(LevelSceneTest.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:24)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:94)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:192)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
************************************************************************

What version of the product are you using? On what operating system?
I'm using the latest from Subversion.  Windows 7 64 bit. IntelliJ IDEA 9.0.3.

Original issue reported on code.google.com by [email protected] on 26 Oct 2010 at 7:35

Current Unit Test Failures

There are 4 tests which fail.  These would be easy to fix by changing the 
expected value to the actual value, but I wanted to make sure nobody else 
thought they shouldn't have changed.

Test:
testTotalNumberOfOptions -- expected:<53> but was:<54>
testForwardJumpingAgentFitnessWithoutCreatures -- expected:<7320> but was:<7288>
testForwardJumpingAgentFitnessWithDefaultCreatures -- expected:<8166> but was:<8134>
testForwardAgentFitnessWithoutCreatures -- expected:<7432> but was:<7400>

Original issue reported on code.google.com by [email protected] on 30 Nov 2010 at 9:49

build.xml is not portable

What steps will reproduce the problem?
1. check out the source
2. run ant

What is the expected output? 
The project builds.

What do you see instead?
The libraries in lib/ aren't included, and the testng lib cannot be found, so 
the compiler is unable to resolve a bunch of symbols. In addition, the 
build.properties file contains machine-specific absolute paths.

What version or revision from SVN of the product are you using? On what
operating system?
trunk r@767

Please provide any additional information below.
Attached is a patch that provides a working, hand-written build file. It 
includes tasks for compiling, running the human keyboard play target, and 
testing.

One thing that is not in the patch file is that testng-6.3.jar needs to be 
downloaded and placed in the lib/ directory. It can be downloaded here: 
http://testng.org/testng-6.3.zip

This build file is easy to maintain and does not need to be updated when new 
source files, tests, or libraries are added to the project. Tests are found 
anywhere in the source, and all jars placed in the lib directory are 
automatically added to the classpath.

Original issue reported on code.google.com by [email protected] on 30 Oct 2011 at 10:46

Attachments:

MarioAIBenchmarkTest fails due to conversion to ints

A lot of tests fail here due to having incorrect values for their final weighted 
fitnesses.

They are:
testForwardJumpingAgentFitnessWithDefaultCreatures
testReceptiveField_1x2
testReceptiveField_3x1
testReceptiveField_1x1
testForwardAgentFitnessWithDefaultCreatures
testForwardAgentFitnessWithDefaultCreaturesVisual

The problem seems to be that the expected fitnesses are floats, but because 
distance is (recently?) stored as an int, the value returned by the 
EvaluationInfo object is always an int and is thus off by a fraction each time.
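
A tiny illustration of the mismatch described above (the numbers are made up):

    float expectedFitness = 7320.5f;        // a unit-test expectation carrying a fraction
    int   distancePassed  = (int) 288.5f;   // distance stored as an int -> truncated to 288
    // a weighted fitness assembled from int fields is always a whole number,
    // so it can never equal a float expectation that has a fractional part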

The solution here is probably to just update the unit tests.

Original issue reported on code.google.com by [email protected] on 26 Oct 2010 at 7:44

Running a replay on a file with visibility turned off causes a bug.

What steps will reproduce the problem?
1. Run the unit tests.
2. Run "java Replay recorderTest.zip".

What is the expected output? What do you see instead?
You expect to see a replay of the level played with '-vis off' during the 
tests.  Instead you get the following error:

java.lang.NullPointerException
    at ch.idsia.benchmark.mario.environments.MarioEnvironment.tick(MarioEnvironment.java:137)
    at ch.idsia.benchmark.tasks.ReplayTask.playOneFile(ReplayTask.java:48)
    at ch.idsia.benchmark.tasks.ReplayTask.startReplay(ReplayTask.java:97)
    at ch.idsia.scenarios.Replay.main(Replay.java:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)

What version of the product are you using? On what operating system?
Same as always.

Please provide any additional information below.
The patch I provided in issue 5 fixes this issue, but it fixes other non-broken 
things as well. :)

The problem is that the environment is reset using the options from the replay, 
but because visualization was turned off, the VisualComponent is not 
initialized and thus the error happens.  I've included a small patch which 
fixes just this problem.

Original issue reported on code.google.com by [email protected] on 27 Oct 2010 at 11:40

Attachments:

getintermediateReward

Hi, I have a question about this method. How does it work?

I've only found it in the BasicTask class, but it has never been updated.

Original issue reported on code.google.com by [email protected] on 2 May 2012 at 12:01

-server option missing for TCP

"$ java ch.idsia.scenarios.Main -server on" returns Error: Undefined parameter 
'-server on', also src\ch\idsia\utils\ParameterContainer.java does not define 
-server option either. it seems this option is defined in the source file of 
the 2009 competition.

I'd like to use Python to program the agnet, How should start server in this 
case?

Original issue reported on code.google.com by [email protected] on 9 Nov 2010 at 3:35

Documentation

Hi, I have a question: Where is the documentation? Thanks, Jorge.

Original issue reported on code.google.com by [email protected] on 11 Sep 2013 at 5:07

Incorrect comparison in LevelScene.getEnemyFloatPos()

LevelScene.java, line 458:
if (sprite.kind >= Sprite.KIND_GOOMBA && sprite.kind <= Sprite.KIND_MUSHROOM)
// if( int >= 80 && int <= 2)      
// This will always evaluate to false!
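
Assuming the constants really are 80 and 2 as the comment above suggests, the 
bounds were presumably just written the wrong way round; a possible intended form 
(an assumption, not a confirmed upstream fix):

    if (sprite.kind <= Sprite.KIND_GOOMBA && sprite.kind >= Sprite.KIND_MUSHROOM)
    // i.e. 2 <= sprite.kind <= 80, which can actually be true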

I'm pretty sure my version is up to date. Note that this means NO data can be 
retrieved using environment.getEnemyFloatPos(), which is very, very bad!

Hope I'm not mistaken!

- Isaiah Hines
- [email protected]

Original issue reported on code.google.com by [email protected] on 5 Oct 2010 at 4:42

Replays cause a null pointer exception

What steps will reproduce the problem?
1. Record a run by calling cmdLineOptions.setRecordFile("name") before an episode.
2. Replay the file by running Main with "-rep name".

What is the expected output? What do you see instead?
Expect to see the replay.  Instead, Java prints out a null pointer exception:

java.lang.NullPointerException
    at ch.idsia.benchmark.mario.engine.LevelScene.reset(LevelScene.java:992)
    at ch.idsia.benchmark.mario.environments.MarioEnvironment.reset(MarioEnvironment.java:92)
    at ch.idsia.benchmark.tasks.BasicTask.reset(BasicTask.java:56)
    at ch.idsia.scenarios.Main.main(Main.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
Exception in thread "main" java.lang.NullPointerException
    at ch.idsia.benchmark.mario.engine.LevelScene.reset(LevelScene.java:1020)
    at ch.idsia.benchmark.mario.environments.MarioEnvironment.reset(MarioEnvironment.java:92)
    at ch.idsia.benchmark.tasks.BasicTask.reset(BasicTask.java:56)
    at ch.idsia.scenarios.Main.main(Main.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)

What version of the product are you using? On what operating system?
Latest from subversion. IntelliJ 9.0.3. Windows latest x64

Original issue reported on code.google.com by [email protected] on 27 Oct 2010 at 5:01

getLevelSceneZ returns 19 x 19 grid

I expect to get a 22 x 22 grid, but what I get is a 19 x 19 grid. Now I do not 
know if I can trust that Mario is at 11,11 as stated in all the tutorials.

I am using the newest version of the code and running it without errors or 
warnings in Eclipse Galileo.

I hope to have the issue resolved by getting verified information about Mario's 
coordinates (are they 11,11 or something else?) and by having the 19 vs. 22 
difference explained.

-Erik D. Johnson

Original issue reported on code.google.com by [email protected] on 9 Nov 2010 at 9:08

Recorder doesn't use level.lvl when run the normal way

What steps will reproduce the problem?
1. Follow the steps in issue 9

What is the expected output? What do you see instead?
You expect that, even though the level generator messed up, the recording would 
still work because it stores a copy of the level that was generated.  Instead, 
the level is generated freshly, level.lvl is never used, and the bug in issue 9 
occurs.

What version of the product are you using? On what operating system?
Same as always.

Please provide any additional information below.
This bug happens because, currently, in LevelScene.reset() at line 986 the call to 
cmdLineOptions.getReplayOptions() checks for a -rep argument to get the replay 
filename.  However, when using Replay.java no -rep argument is passed, so no 
replayFileName exists and the level is never read from the replay object.
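
Paraphrasing the behaviour described above in code; apart from getReplayOptions() 
and the -rep flag, every name here is an illustrative assumption:

    String replayFileName = cmdLineOptions.getReplayOptions();   // only set when '-rep <file>' was given
    if (replayFileName != null && !replayFileName.equals(""))
        level = replay.getLevel();        // load the stored level.lvl (hypothetical call)
    else
        level = generateFreshLevel();     // Replay.java ends up here: level.lvl is never used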

This is easily fixable since there are many ways to get the filename to the 
correct classes.  Also, since levels are supposed to be the same between plays 
maybe this doesn't matter if issue 9 is fixed.

Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 9:37

Max values in evaluation info are nonsensical

The values printed out after a run give a (0 out of MAX) rating for all items 
which can be enumerated; however, the MAX is always just a really big number 
which is not representative of the actual number of enemies, coins, blocks, or 
whatever.

It seems, upon further investigation, that the number is just Integer.MAX_VALUE, 
which is used in LevelGenerator.createLevel in order to set the counters.totalX 
variables.
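
For illustration only; the field name below is a stand-in for the counters.totalX 
family mentioned above:

    counters.totalCoins = Integer.MAX_VALUE;   // every summary line then reads "N out of 2147483647"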

This might be by design but I thought I'd mention it.

Original issue reported on code.google.com by [email protected] on 28 Oct 2010 at 7:31

  • Merged into: #9
