
Comments (24)

DMonitor commented on August 28, 2024

In story_manager.py:
self.memory = 20
It's not set to eight, but I believe this could be the parameter that determines how many pairs are sent to the model.

WAUthethird commented on August 28, 2024

To confirm, you are running with CUDA on your GTX 970? If you are running via CPU, 16 GB should be plenty.

kik4444 commented on August 28, 2024

To confirm, you are running with CUDA on your GTX 970? If you are running via CPU, 16 GB should be plenty.

I don't know how to check which it is, but I don't think I have CUDA installed, since I remember removing a CUDA package in the past because it was 4 GB alone, so it's probably CPU.

WAUthethird commented on August 28, 2024

Do you get a bunch of warnings when you start the script?

kik4444 commented on August 28, 2024

Do you get a bunch of warnings when you start the script?

No, everything is normal.

kik4444 commented on August 28, 2024

Well, just in case, I installed CUDA from the official repos and rebooted my PC, but it still crashed on the 15th reply with this log:

Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[{{node sample_sequence/while/model/GatherV2_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 180, in play_aidungeon_2
    result = "\n" + story_manager.act(action)
  File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 181, in act
    result = self.generate_result(action_choice)
  File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 186, in generate_result
    block = self.generator.generate(self.story_context() + action)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 108, in generate
    text = self.generate_raw(prompt)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 91, in generate_raw
    self.context: [context_tokens for _ in range(self.batch_size)]
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[node sample_sequence/while/model/GatherV2_1 (defined at /usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'sample_sequence/while/model/GatherV2_1':
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 74, in play_aidungeon_2
    generator = GPT2Generator()
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 44, in __init__
    temperature=temperature, top_k=top_k, top_p=top_p
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 112, in sample_sequence
    back_prop=False,
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
    return_same_structure)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 82, in body
    next_outputs = step(hparams, prev, past=past)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 70, in step
    lm_output = model.model(hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/model.py", line 157, in model
    h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 3956, in gather
    params, indices, axis, name=name)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 4082, in gather_v2
    batch_dims=batch_dims, name=name)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

[deleted user] commented on August 28, 2024

I'm having the same issue, except I'm using the official online version on Google Colaboratory.

summerstay commented on August 28, 2024

It looks like I submitted the same bug (#75). I'm pretty sure it's caused by too long an input to GPT-2: once a single input goes beyond 1024 tokens, it crashes. The relevant line is here:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[{{node sample_sequence/while/model/GatherV2_1}}]]

kik4444 commented on August 28, 2024

It looks like I submitted the same bug (#75). I'm pretty sure it's caused by too long an input to GPT-2: once a single input goes beyond 1024 tokens, it crashes. The relevant line is here:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[{{node sample_sequence/while/model/GatherV2_1}}]]

But then how do longer stories exist? Because I have seen stories much longer than my own.

Also, I tried to make another story just to see how far I would get by writing nothing. Here's the entire terminal output since starting the game:

AI Dungeon 2 will save and use your actions and game to continually improve AI
 Dungeon. If you would like to disable this enter 'nosaving' for any action.
 This will also turn off the ability to save games.

Initializing AI Dungeon! (This might take a few minutes)




 ▄▄▄       ██▓   ▓█████▄  █    ██  ███▄    █   ▄████ ▓█████  ▒█████   ███▄    █    
▒████▄    ▓██▒   ▒██▀ ██▌ ██  ▓██▒ ██ ▀█   █  ██▒ ▀█▒▓█   ▀ ▒██▒  ██▒ ██ ▀█   █    
▒██  ▀█▄  ▒██▒   ░██   █▌▓██  ▒██░▓██  ▀█ ██▒▒██░▄▄▄░▒███   ▒██░  ██▒▓██  ▀█ ██▒   
░██▄▄▄▄██ ░██░   ░▓█▄   ▌▓▓█  ░██░▓██▒  ▐▌██▒░▓█  ██▓▒▓█  ▄ ▒██   ██░▓██▒  ▐▌██▒   
 ▓█   ▓██▒░██░   ░▒████▓ ▒▒█████▓ ▒██░   ▓██░░▒▓███▀▒░▒████▒░ ████▓▒░▒██░   ▓██░   
 ▒▒   ▓▒█░░▓      ▒▒▓  ▒ ░▒▓▒ ▒ ▒ ░ ▒░   ▒ ▒  ░▒   ▒ ░░ ▒░ ░░ ▒░▒░▒░ ░ ▒░   ▒ ▒    
  ▒   ▒▒ ░ ▒ ░    ░ ▒  ▒ ░░▒░ ░ ░ ░ ░░   ░ ▒░  ░   ░  ░ ░  ░  ░ ▒ ▒░ ░ ░░   ░ ▒░   
  ░   ▒    ▒ ░    ░ ░  ░  ░░░ ░ ░    ░   ░ ░ ░ ░   ░    ░   ░ ░ ░ ▒     ░   ░ ░    
      ░  ░ ░        ░       ░              ░       ░    ░  ░    ░ ░           ░    
                  ░                                                                




Pick a setting.
0) fantasy (recommended)
1) mystery
2) apocalyptic
3) zombies
4) custom
Enter the number of your choice: 4

Enter a prompt that describes who you are and the first couple sentences of where
 you start out ex:
 'You are a knight in the kingdom of Larion. You are hunting the evil dragon
 who has been terrorizing the kingdom. You enter the forest searching for the
 dragon and see' 
Starting Prompt: nothingness

AI Dungeon 2 Instructions:
 Enter actions starting with a verb ex. "go to the tavern" or "attack the orc."
 To speak enter 'say "(thing you want to say)"' or just "(thing you want to say)"
 

The following commands can be entered for any action: 
  "revert"   Reverts the last action allowing you to pick a different action.
  "quit"     Quits the game and saves
  "restart"  Starts a new game and saves your current one
  "save"     Makes a new save of your game and gives you the save ID
  "load"     Asks for a save ID and loads the game if the ID is valid
  "print"    Prints a transcript of your adventure (without extra newline formatting)
  "help"     Prints these instructions again

Generating story...


nothingness.
> nothingness

You feel the air in your lungs begin to constrict, and you cough violently as
 you try to force the air out of your body. The pain is excruciating, but it's
 not enough to stop you from trying to breathe.

"What are you doing?" a voice says
> nothingness

The voice speaks again, this time with a cold tone. "Nothingness".
You don't know what that means, but you're going to find out soon.
 
> nothingness

You hear the sound of a door opening up behind you, and turn around to see a
 man standing there. He wears a black suit and has short brown hair. His eyes
 seem to be staring right through you, though. "I'm sorry about earlier," he
 begins
> nothingness

"No need for apologies", you reply. "It was an accident".
> nothingness

"Well, I suppose it was an accident. But I still think you should apologize".
> nothingness

"Nonsense! You can't just go apologizing to me all the time!"
> nothingness

"Okay, okay, I get it. I'll apologize to you later".
> nothingness

"Yeah, yeah, I get it. Goodbye".
> nothingness

With those words, the man turns away from you, leaving you alone in the room.
 You sit down on one of the chairs, feeling drained. You close your eyes, and
 fall asleep..
> nothingness

You wake up, lying on a bed. It looks like the same one you were sleeping in,
 except now it's covered in blood. There's also a knife stuck into your chest.

"Huh? What...what happened?" you ask.


> nothingness

You look at yourself in the mirror, which seems to have changed its appearance.
 Your face appears much older than when you first woke up. Your hair is also
 different, and now hangs down past your shoulders. Your skin is pale, and your
 teeth are white
> nothingness

Your head starts to hurt, and you start to shake. You scream out in pain, before
 falling back onto the mattress. A nurse comes running over, and takes off her
 gloves. She examines you, then leaves.

A few hours pass, and you realize that you haven't moved
> nothingness

Suddenly, you open your eyes. You're in a hospital room, with a woman sitting
 across from you. Her eyes are red, and she's crying. "I'm so sorry..". she says.
"Who are you?", you ask.

> nothingness

She doesn't answer, instead continuing to cry. "I'm so sorry..".
> nothingness

She continues to cry, and you take a deep breath. "Why did you do it?".
> nothingness

"Because I wanted to".
> nothingness

"Yes, because of my desire to kill you".
> nothingness
Woops that action caused the model to start looping. Try a different action to
 prevent that.
> nothingness

"So why didn't you tell anyone?"
> nothingness

"Because I thought no-one would believe me".
> nothingness

"How could you do such a thing?"
> you attempt to say something different for the 21st reply

"I don't understand".
> "you wouldn't understand"

P.S. It froze on the last line, but that's probably because I used direct speech.
Anyway, here I managed to get to around 21 successful replies, so it's definitely not an arbitrary bug that happens around 15-20 replies. The problem, as @summerstay mentioned, is probably related to the length of the replies. My replies are normally long, so the game always crashes around the same reply number. I don't think it's because of the length of a single reply, though. In my experience, once I get blocked by crashing like this, it will always happen around the nth reply, even if I don't input anything on the last reply before the crash. However, if I revert and alter my choices before the nth reply, I might make it another reply or two forward before the crashing begins again.
In other words, I think this error, tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024), comes from the cumulative length of all your choices, not the individual length of each choice.
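One quick way to test that hypothesis from outside the game is to count how many GPT-2 tokens the accumulated context occupies. A minimal sketch, using the Hugging Face tokenizer as a stand-in for the game's own BPE encoder (transformers is not part of AI Dungeon, and transcript.txt is a hypothetical dump of the story context):

```python
# Count GPT-2 tokens in a saved story context to see whether it exceeds
# the model's 1024-token window.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

with open("transcript.txt") as f:  # hypothetical dump of context + recent pairs
    context = f.read()

n_tokens = len(tokenizer.encode(context))
print(n_tokens, "tokens;", "over" if n_tokens >= 1024 else "within", "the 1024-token window")
```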

[deleted user] commented on August 28, 2024

On Colab I'm on a seemingly infinite story; I'm at 50+ responses now.

DMonitor commented on August 28, 2024

For each action you type the model is fed the context sentence as well as the past N action-result pairs in its memory to generate the result. We found N=8 to be a good amount of memory to feed the model.

So the model only processes the last 8 action-result pairs. It would make sense that a bunch of short responses wouldn't trigger the bug, since the model never sees them all at once. But if the last 8 together are over the limit, that would trigger a crash.

[deleted user] commented on August 28, 2024

Seems like you could measure the length and cut the number of action-result pairs to fit?
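A minimal sketch of that idea, assuming a BPE encode function and a list of (action, result) strings; the names are illustrative, not the game's actual API:

```python
# Trim the oldest action-result pairs until the assembled prompt fits inside
# GPT-2's 1024-token window, keeping some room for the generated reply.
def build_prompt(context, pairs, encode, window=1024, reserve=96):
    budget = window - reserve - len(encode(context))
    kept = []
    for action, result in reversed(pairs):      # walk newest pairs first
        cost = len(encode(action)) + len(encode(result))
        if cost > budget:
            break
        kept.append((action, result))
        budget -= cost
    kept.reverse()                              # restore story order
    return context + "".join(a + r for a, r in kept)
```

Dropping whole pairs rather than raw tokens also keeps the prompt from starting in the middle of a sentence.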

summerstay commented on August 28, 2024

Where is the value 8 set? I'd like to experiment with changing it to see if that fixes the problem.

kik4444 commented on August 28, 2024

In story_manager.py:
self.memory = 20
It's not set to eight, but I believe this could be the parameter that determines how many pairs are sent to the model.

Wouldn't lowering this number reduce the quality of the AI's responses, though?
Is it possible to instead raise the token limit above 1024 so you can't reasonably exceed it within the AI's memory?

EDIT: Anyway, I reduced it to self.memory = 15 and so far this has fixed my crashing, but I wonder if it has impacted the quality of my stories by reducing the AI's memory by 5 choices.

summerstay commented on August 28, 2024

You would have to retrain GPT-2 Large in order to have more than 1024 tokens. That's not something anyone but a large company could do. I also reduced the memory from 20 and it seemed to help.

MrKrzYch00 commented on August 28, 2024

@JamesHutchison I think cutting would be a good way to avoid crashes. I'm doing a similar thing in my GPT-2 mariadb version with parameters: an output length set to a negative value like -96 produces exactly that many output tokens and reads 1024 - 96 input tokens, cutting the beginning of the input to fit that amount so the total never exceeds 1024 tokens (-128 processes the input as 1024 - 128, and so on). However, I don't use any intelligent sentence detection for that, which sometimes leaves a sentence cut in half at the beginning of the truncated input; ideally the leftover partial sentence would also be dropped.

The best approach would be for the algorithm to simply drop the actions/responses that are most meaningless for the story, but anything could turn into something significant later.
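A rough sketch of the head-truncation approach described above, with encode and decode standing in for a BPE tokenizer (illustrative names, not any fork's real API):

```python
# Keep only the most recent (window - output_len) tokens of the input, then
# drop the half sentence left dangling at the new beginning, if any.
def truncate_head(text, encode, decode, output_len=96, window=1024):
    keep = window - output_len
    tokens = encode(text)
    if len(tokens) <= keep:
        return text
    tail = decode(tokens[-keep:])
    dot = tail.find(". ")                       # crude sentence-boundary check
    return tail[dot + 2:] if dot != -1 else tail
```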

TheGlader2 commented on August 28, 2024

In story_manager.py:
self.memory = 20
It's not set to eight, but I believe this could be the parameter that determines how many pairs are sent to the model.

Is there a way to do this in the browser, for me and others who come to this post looking for an answer to this problem in the future?

kik4444 commented on August 28, 2024

In story_manager.py:
self.memory = 20
It's not set to eight, but I believe this could be the parameter that determines how many pairs are sent to the model.

Is there a way to do this in the browser, for me and others who come to this post looking for an answer to this problem in the future?

Don't you need to have the game's files in your Google Drive to play through the browser? I think you can just download the story manager file, edit it, and upload it again.

AZaitzeff commented on August 28, 2024

In story_manager.py:
self.memory = 20
It's not set to eight, but I believe this could be the parameter that determines how many pairs are sent to the model.

Is there a way to do this in the browser, for me and others who come to this post looking for an answer to this problem in the future?

Don't you need to have the game's files in your Google Drive to play through the browser? I think you can just download the story manager file, edit it, and upload it again.

To learn how to do that, see the link below:

https://colab.research.google.com/notebooks/io.ipynb#scrollTo=BaCkyg5CV5jF

DMonitor commented on August 28, 2024

You don't need to download to Google Drive just to edit the code. The git clone command clones all of the project's code into Colab; it's placed in /content/AIDungeon. You only have to clone to Google Drive if you want the changes to persist between instances.
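As an example, a small Colab cell along these lines could lower the memory setting in the cloned copy; this assumes the default self.memory = 20 is still present in story/story_manager.py:

```python
# Patch the cloned copy in place: lower self.memory from 20 to 15.
# The path assumes the default clone location mentioned above.
path = "/content/AIDungeon/story/story_manager.py"
with open(path) as f:
    src = f.read()
with open(path, "w") as f:
    f.write(src.replace("self.memory = 20", "self.memory = 15"))
```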

kylemiller3 commented on August 28, 2024

I was having the same issue and setting self.memory to 14 fixed it for me.

laclcia commented on August 28, 2024

Edit: I'm running the current version from this codebase, not from a fork, locally on my machine; no Google Drive or such. I forgot to mention it in the original post.
End of edit

I have also encountered the same problem.
Windows 10
32 GB of RAM
The game's Python instance never uses more than 8 GB of RAM (it isn't limited, and total system use never reaches 50% or more during play)
Using TensorFlow CPU 1.15
I get the same error, with the relevant section being:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,1024] = 1024 is not in [0, 1024)
I did reduce the self.memory setting all the way down to 9.
I don't have the option to use CUDA, as I don't have the necessary hardware, and no OpenCL version of TF is available on Windows, so GPU is not an option, although I don't mind waiting for calculations.

I may not be well versed in Python, but may I throw out a possible solution? If it is unfeasible, naive, or simply wrong, please do tell me.

From what I can gather, the problem is a token sequence that is too long for the model. So would it be possible to check, before running through TensorFlow, whether the token count would go over 1024? If it can be checked, the system could try the current self.memory value minus one, retest whether that is still over the limit, and repeat until it is under the limit, so the value effectively auto-adjusts and the game doesn't crash. If the new value isn't kept between runs, this check could also run after each user input, before generation, giving a dynamically adjusted value that never overflows while keeping the largest memory the user's inputs allow.
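A small sketch of that auto-adjustment idea, assuming a stand-in encode function and a list of (action, result) pairs (illustrative names, not the game's actual API):

```python
# Before each generation, shrink the memory window until the prompt the model
# would see fits inside GPT-2's 1024-token limit, leaving room for the reply.
def fit_memory(context, pairs, encode, memory, window=1024, reserve=96):
    while memory > 0:
        prompt = context + "".join(a + r for a, r in pairs[-memory:])
        if len(encode(prompt)) <= window - reserve:
            break
        memory -= 1
    return memory
```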

If this approach is impossible for any reason, then please excuse my ignorance.

RyushoYosei commented on August 28, 2024

I haven't hit this myself, well, not quite; I might have. But I'm currently exactly on command 20, and it's starting to do other weirdness. For example:

>Check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.
You check the closet.

>back away
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.
You check the.

...Yeah, I think the AI just hemorrhaged hard.

A legit one that just happened:
You noticeIn fact you'res.
You noticeThe A
You are taller

than the Ape. You are taller than the Ape. You are taller than the Ape. You
are the Ape. You are the Ape. You are the Ape. You are the Ape. You are the
Ape. You are the Ape

. You are the Ape. You are the Ape. You are the Ape. You are the Ape. You are
the Ape. You are the Ape. You are the Ape. You are the Ape. You are the Ape
Woops that action caused the model to start looping. Try a different action to
prevent that.

Sporking commented on August 28, 2024

This could be a code correctness problem (buffer overflow or improper bounds checking), not an out-of-memory problem. See bug #251.
