
aidungeon's Introduction

AIDungeon2

Read more about AIDungeon2 and how it was built here.

Play the mobile app version of the game by following the links here

Play the game online by following this link here

Play the game in Colab here.

To play the game locally, an NVIDIA GPU with 12 GB or more of memory, and CUDA installed, is recommended. Without such a GPU, each turn can take a couple of minutes or more for the game to compose its response. To install and play locally:

git clone --branch master https://github.com/AIDungeon/AIDungeon/
cd AIDungeon
./install.sh # Installs system packages and creates python3 virtual environment
./download_model.sh
source ./venv/bin/activate
./play.py

Finetune the model yourself

Formatting the data. After scraping the data, I formatted the text adventures into a JSON dict structure that looked like the following:

{
    "tree_id": <some id>,
    "story_start": <start text of the story>,
    "action_results": [
        {"action": <action1>, "result": <result1>, "action_results": [<entries shaped like the above action_results>]},
        {"action": <action2>, "result": <result2>, "action_results": [<entries shaped like the above action_results>]}
    ]
}

Essentially it's a tree that captures all of the action-result nodes. Then I used this to transform the data into one giant txt file. The txt file looks something like:

<|startoftext|>
You are a survivor living in some place...
> You search for food
You search for food but are unable to find any
> Do another thing
You do another thing...
<|endoftext|>
(above repeated many times)
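The flattening step can be sketched in Python. This is a minimal, hypothetical reconstruction (the function name and exact traversal order are mine, not the original script's): each root-to-leaf path through the action-result tree becomes one `<|startoftext|>`...`<|endoftext|>` sample.

```python
# Hypothetical sketch of flattening the action-result tree into GPT-2
# training text. The dict layout and the <|startoftext|>/<|endoftext|>
# delimiters come from the README; everything else is illustrative.

def flatten_tree(story_start, action_results):
    """Yield one training sample per root-to-leaf path through the tree."""
    def walk(node, prefix):
        # Append this node's "> action" line and its result to the path so far.
        path = prefix + "> " + node["action"] + "\n" + node["result"] + "\n"
        children = node.get("action_results") or []
        if not children:  # leaf: emit the whole path as one sample
            yield "<|startoftext|>\n" + story_start + "\n" + path + "<|endoftext|>\n"
        for child in children:
            yield from walk(child, path)

    for root in action_results:
        yield from walk(root, "")
```

Writing every yielded sample to a single file then reproduces the giant txt file described above.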

Once you have that file, you can use the finetuning script to fine-tune the model, provided you have the hardware.

Fine-tuning the largest GPT-2 model is difficult due to the immense hardware required. I no longer have access to the same hardware, so there are two ways I would suggest doing it. I originally fine-tuned the model on eight 32 GB V100 GPUs (an NVIDIA DGX-1). This allowed me to use a batch size of 32, which I found helpful in improving quality. The only cloud resource I could find that matches those specs is an AWS p3dn.24xlarge instance, so you'd want to spin that up on EC2 and fine-tune there (you might also have to request a limit increase). Another option is a SageMaker notebook (similar to a Colab notebook) with the p3.24xlarge instance type, which is equivalent to eight 16 GB V100 GPUs. Because each GPU has only 16 GB of memory, you would probably need to reduce the batch size to around 8.

Community

AIDungeon is an open source project. Questions, discussion, and contributions are welcome. Contributions can be anything from new packages to bugfixes, documentation, or even new core features.

Resources:

Contributing

Contributing to AIDungeon is easy! Just send us a pull request from your fork. Before you send it, summarize your change in the [Unreleased] section of the CHANGELOG and make sure develop is the destination branch.

AIDungeon uses a rough approximation of the Git Flow branching model. The develop branch contains the latest contributions, and master is always tagged and points to the latest stable release.

If you're a contributor, make sure you're testing and playing on develop. That's where all the magic is happening (and where we hope bugs stop).

aidungeon's People

Contributors

akababa, allisoncl8, applenick, ben-bay, cmazzullo, dependabot[bot], dlavati, dmonitor, dyc3, gautamkrishnar, godofgrunts, hirohito1, interfect, jushbjj, kevinmoonglow, latueur, ledcoyote, louisgv, maxrobinsonthegreat, natis1, nickwalton, niko-dunixi, rainrat, samcamwilliams, scottshingler, spenserblack, stylemistake, ts-co, w1r3w0lf, wwboyer


aidungeon's Issues

text has started to run off the screen

I'm not sure exactly what I changed, but no matter what setting I change, the text goes off the screen, and the game box now has a scrollbar on the bottom, making it annoying to play. Is there a way to make the text wrap?

[FEAT] Downloading model should be separated from install.

From my observation, it seems the majority of failures come from the torrent download not working for some reason, at least in the Colab notebook. I propose that model downloading be separated from the install script into its own shell script, invoked optionally by the user.

In my case, I downloaded the model using my desktop torrent client so I can seed it, and I also uploaded a copy of the model to my Google Drive. Then I have a code block on Colab (will make a PR for it soon) that mounts the drive and copies the model into AIDungeon. This has worked very well so far.

Tokenization

More specifically, named entity tokenization. If every named entity in the fine-tuning data was tokenized to something like <NE-PERSON> <NE-PLACE>, it could open the possibility of adding in name generator sub-modules.

This probably falls outside the scope of this GPT-2 project for now, but you have to admit it's a fun idea...
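As a toy illustration of the idea (assuming a hand-built entity table rather than a real NER model such as spaCy, and reusing names from elsewhere in this page purely as examples), placeholder substitution could look like:

```python
# Toy sketch of named-entity tokenization for the fine-tuning data:
# known entity names are replaced with placeholder tokens, so a
# name-generator sub-module could later fill them back in.
# A real pipeline would use an NER model instead of a hand-built table.
import re

ENTITIES = {"Larion": "<NE-PLACE>", "ljx": "<NE-PERSON>"}  # illustrative table

def tokenize_entities(text, entities=ENTITIES):
    for name, placeholder in entities.items():
        # \b keeps "Larion" from matching inside a longer word.
        text = re.sub(r"\b{}\b".format(re.escape(name)), placeholder, text)
    return text
```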

What is winning?

Once I had the game declare "YOU WIN!" (the context was something like "you live happily ever after") and automatically save state for me. And once I had it declare

You win!
You win!
You win!

like 10 times, but it didn't save the game state. (I was assuming the first was a win, and the second was just something it said.)

But more interestingly, what is the condition that leads the game to detect a win? I noticed after the second kind of win (where it repeated itself) that everything I did thereafter, it would just echo back to me:

> Invent penicillin.
You invent penicillin.

> Knit the Statue of Liberty in Spanish.
You knit the Statue of Liberty in Spanish.

(I'm making that up -- I don't remember what the actual input was, but that's the process that was happening.)

[Feature] Translate the game!

Hey,
This project is really amazing and could be enjoyed by a lot of people, which is why I think it would be incredible to translate the model. Would it be possible? And if so, how?
For information, there are a lot of RP players in France, so it would be interesting to try it in French.

From someone who is somewhat tech illiterate

How do I open this up to play? I downloaded an app that recognizes the Python files, but all it does is open for not even a second before closing itself. I've tried this multiple times, and even checked to make sure it wasn't just running in the background, but nothing has come up.

ValueError: Can't load save_path when it is None.

https://colab.research.google.com/github/nickwalton/AIDungeon/blob/master/AIDungeon_2.ipynb

Download Complete!

but then

AI Dungeon 2 will save and use your actions and game to continually improve AI
 Dungeon. If you would like to disable this enter 'nosaving' for any action.
 This will also turn off the ability to save games.

Initializing AI Dungeon! (This might take a few minutes)

ERROR:tensorflow:Couldn't match files for checkpoint generator/gpt2/models/model_v5/model-550
Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 74, in play_aidungeon_2
    generator = GPT2Generator()
  File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 49, in __init__
    saver.restore(self.sess, ckpt)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/saver.py", line 1277, in restore
    raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.

[Improve] Receives the same message for the same keyword even though the story has gone further

He smiles at you, and says "Hello," then he points to the candle flame. "This
is your first step towards controlling fire".

concentration

You concentrate hard, and the flame begins to grow brighter. Soon, it fills the
entire room. Then, the flame becomes bright enough to burn through paper. As
the flame grows larger, the man continues to smile at you.

You continue to watch as the flames consume everything in their path. Eventually,
they burn themselves out, leaving nothing but ash behind

meet the old man again

feel the little fire inside

You reach down, and you feel something inside you. It's hot, and it's burning.
You immediately grab hold of it, and you start to run outside. You rush out
the front door, and you find yourself in a field

set fire to the rain

You run forward, and you put your hand out in front of you. The flame catches
the air, and it ignites the grass. You watch as the grass turns red, and soon,
the entire field is ablaze. You stare at the blaze, and you can't help but laugh

You let the flame guide you, and you let it guide the flame. Soon, you find yourself
soaring through the sky, and you let the flame guide you. Soon, you find yourself
floating through the air, and you let the flame guide you

concentration

You concentrate hard, and the flame begins to grow brighter. Soon, it fills the
entire room. Then, the flame becomes bright enough to burn through paper. As
the flame grows larger, the man continues to smile at you.

As you can see, it repeats the message.

High hosting cost - 5 GB model

Making an issue for this. I'm busy at the moment, fighting with getting the Python dependencies installed on the Ubuntu subsystem on Windows, but I can spare a few minutes to create this issue.

Some thoughts -

  1. Is the notebook downloading the 5 GB model every time it runs, or does it cache the download for the user? I.e., is the download a one-time thing or an every-time thing? Sorry, I'm not familiar with how those work. If it's doing it every run, I'd take that down until a better solution is presented for mainstream users.
  2. One option might be to shard the file, split the shards onto their own GitHub repos, then combine them back after downloading. Most other hosting solutions charge for egress, but GitHub does not. I'm sure there are other options, but this is certainly something I could throw together. GitHub has a size cap of 1 GB per repo and will complain after your repo hits 100 MB, so it'd be quite a few shards, but manageable with some scripting. Not sure if it violates any TOS.
  3. Could also host using BitTorrent.

Error on starting it.

It worked yesterday, but trying to start it today gives me this:

Initializing AI Dungeon! (This might take a few minutes)

Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 74, in play_aidungeon_2
    generator = GPT2Generator()
  File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 27, in __init__
    self.enc = encoder.get_encoder(self.model_name, models_dir)
  File "/content/AIDungeon/generator/gpt2/src/encoder.py", line 109, in get_encoder
    with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/encoder.json'

Notebook loading error

TensorRT?

I'm trying to get tensorRT working but I need the saved_model.pb file. Can you upload it?

Cloud Deployment Specs

Hey,

Awesome project here! I have been wanting to get better with my ML, and this is right up my alley.

I am curious what the specs for deploying this on the cloud would be, such as AWS. Obviously instances without GPU are considerably cheaper.

Is there work being done on hosted cloud infrastructure for this, and if so, what is being used?

Thanks again!

"No such file or directory: 'generator/gpt2/models/model_v5/vocab.bpe'"

When I go to this page and select "Run all", this is what I get:

AI Dungeon 2 will save and use your actions and game to continually improve AI
 Dungeon. If you would like to disable this enter 'nosaving' for any action.
 This will also turn off the ability to save games.

Initializing AI Dungeon! (This might take a few minutes)

Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 74, in play_aidungeon_2
    generator = GPT2Generator()
  File "/content/AIDungeon/AIDungeon/AIDungeon/AIDungeon/AIDungeon/generator/gpt2/gpt2_generator.py", line 27, in __init__
    self.enc = encoder.get_encoder(self.model_name, models_dir)
  File "/content/AIDungeon/AIDungeon/AIDungeon/AIDungeon/AIDungeon/generator/gpt2/src/encoder.py", line 111, in get_encoder
    with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/vocab.bpe'

KeyboardInterrupt

The game won't start, and I can't do anything.

Browser: Chrome, without any extension

[BUG] SyntaxError: invalid syntax

Describe the bug
After running python play.py

% python play.py
Traceback (most recent call last):
  File "play.py", line 2, in <module>
    from generator.gpt2.gpt2_generator import *
  File "/Users/travismyrick/rando/AIDungeon/generator/gpt2/gpt2_generator.py", line 7, in <module>
    from generator.gpt2.src import sample, encoder, model
  File "/Users/travismyrick/rando/AIDungeon/generator/gpt2/src/sample.py", line 62
    def sample_sequence(*, hparams, length, start_token=None, batch_size=None, context=None, temperature=1, top_k=0, top_p=1):
                         ^
SyntaxError: invalid syntax

To Reproduce
Steps to reproduce the behavior:

  1. git clone
  2. install
  3. run it - get the error

Expected behavior
For the program to load properly without a syntax error?

Screenshots

Additional context

  • osx catalina

Better multi-core CPU performance?

The game runs best on a (very parallel) GPU, but when run on CPU, it only manages to keep about 1.5 cores busy and takes a couple of minutes to generate each response. If it could use all cores effectively it would be much more playable.

I've tried adding some profiling code to ask TensorFlow what it's doing, as described here, and as far as I can tell it's mostly doing one long serial chain of MatMul operations as it goes one by one through each layer of the network, for each word it has to generate:

[screenshot: TensorFlow profile showing a serial chain of MatMul ops]

(This is with inter- and intra-op parallelism both set to 8.)

I think that the problem might be that these are matrix-vector multiplies, which TensorFlow doesn't parallelize on CPU (tensorflow/tensorflow#6752), but which a GPU can churn through in parallel no problem.

It might not be possible to make CPU performance any better until TensorFlow learns to do these operations more like a GPU does. Alternatively, turning up self.batch_size in GPT2Generator might hack around the problem by making all the multiplies actually be matrix-matrix multiplies, but changing that variable makes things start crashing because some of the code in GPT2 (like penalize_used) is written to expect only one sample coming through at a time.
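The matrix-vector vs. matrix-matrix point can be illustrated with NumPy: stacking a batch of activation vectors turns many GEMV calls into one GEMM, which BLAS parallelizes across cores, while producing the same numbers. This only illustrates the linear algebra; it is not a patch for the batch-size crash described above.

```python
# Why raising batch_size could help on CPU: n separate matrix-vector
# products (one per sample) compute the same numbers as a single
# matrix-matrix product over the stacked batch, but the batched form
# is a GEMM that BLAS can spread across cores.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1600, 1600))   # one layer's weight matrix
xs = rng.standard_normal((8, 1600))     # a batch of 8 activation vectors

one_by_one = np.stack([W @ x for x in xs])  # eight GEMV calls
batched = xs @ W.T                          # one GEMM call

assert np.allclose(one_by_one, batched)     # identical results
```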

Reproducible out of memory crash

Describe the current behavior:
Out of memory crash. To reproduce: load 95c2fb00-1906-11ea-a04f-0242ac1c0002 and type "Try to cast the spell written on the wall"

Describe the expected behavior:
No crash.

The web browser you are using (Chrome, Firefox, Safari, etc.):
Desktop Chrome Version 79.0.3945.74 (Official Build) beta (64-bit)

Training scripts?

Can you share the training scripts?
Contributing to the models would be much easier then!

No Save Game on Quit for Loaded Games

From help:
"quit"     Quits the game and saves

Game saves on "quit" for new stories as expected.
If you load a game and "quit" then no save is generated.

[BUG] gsutil not found

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:
Not sure; I was a few hours into my game, I issued sit on throne, and the game crashed.

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
	 [[{{node sample_sequence/while/model/GatherV2_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./play.py", line 261, in <module>
    play_aidungeon_2()
  File "./play.py", line 224, in play_aidungeon_2
    result = "\n" + story_manager.act(action)
  File "/mnt/data/AIDungeon/story/story_manager.py", line 206, in act
    result = self.generate_result(action_choice)
  File "/mnt/data/AIDungeon/story/story_manager.py", line 211, in generate_result
    block = self.generator.generate(self.story_context() + action)
  File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 116, in generate
    text = self.generate_raw(prompt)
  File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 99, in generate_raw
    self.context: [context_tokens for _ in range(self.batch_size)]
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
	 [[node sample_sequence/while/model/GatherV2_1 (defined at /usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'sample_sequence/while/model/GatherV2_1':
  File "./play.py", line 261, in <module>
    play_aidungeon_2()
  File "./play.py", line 102, in play_aidungeon_2
    generator = GPT2Generator()
  File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 49, in __init__
    top_p=top_p,
  File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 121, in sample_sequence
    back_prop=False,
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
    return_same_structure)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 90, in body
    next_outputs = step(hparams, prev, past=past)
  File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 76, in step
    hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE
  File "/mnt/data/AIDungeon/generator/gpt2/src/model.py", line 185, in model
    h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py", line 3956, in gather
    params, indices, axis, name=name)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 4082, in gather_v2
    batch_dims=batch_dims, name=name)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

Exception ignored in: <bound method Story.__del__ of <story.story_manager.Story object at 0x7f3b70f93dd8>>
Traceback (most recent call last):
  File "/mnt/data/AIDungeon/story/story_manager.py", line 37, in __del__
    self.save_to_storage()
  File "/mnt/data/AIDungeon/story/story_manager.py", line 135, in save_to_storage
    stderr=subprocess.STDOUT,
  File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gsutil': 'gsutil'

Expected behavior
The game doesn't crash.

I think you just need to add gsutil to requirements.txt

Additional context

  • OS, environment
  • Arch Linux, python3.6
  • Game settings
  • Fantasy, Wizard

[BUG] Always crashes after around the 15th reply

Describe the bug
The game always crashes around reply number 15-20, no matter what I write, even if I enter a blank reply (technically if I leave a blank reply the game comes up with a reply for me and THEN it crashes)

To Reproduce
Steps to reproduce the behavior:

  1. Play normally until you get anywhere between reply number 15 and 20
  2. Enter anything, even blank
  3. Crash

Expected behavior
No crashing

Additional context

  • I'm running the uncensored fork from https://github.com/WAUthethird/AIDungeon-Uncensored
  • I'm running the game locally on my Arch Linux - Linux Kiko-LM 5.3.12-1-MANJARO #1 SMP PREEMPT Thu Nov 21 10:55:53 UTC 2019 x86_64 GNU/Linux. I have a GTX 970, Ryzen 1700, 16GB RAM
  • This has happened on 2 different stories so far. In both cases I've tried restarting my PC, attempting to go down several routes before the crashing reply, but it always crashes when I get to the nth reply where the crash was happening.
  • On the first story it always crashed on the 18th reply. After a lot of trial and error I managed to reach my 19th reply before it started crashing and have not made any more progress.
  • On the 2nd story the crash happens on the 15th reply
  • In all cases the game takes up between 10 and 12 GB of RAM, but I don't think it's a RAM issue because I always have a few GB free when it crashes.
  • Here's the python error log it generates
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[{{node sample_sequence/while/model/GatherV2_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 180, in play_aidungeon_2
    result = "\n" + story_manager.act(action)
  File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 181, in act
    result = self.generate_result(action_choice)
  File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 186, in generate_result
    block = self.generator.generate(self.story_context() + action)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 108, in generate
    text = self.generate_raw(prompt)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 91, in generate_raw
    self.context: [context_tokens for _ in range(self.batch_size)]
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[node sample_sequence/while/model/GatherV2_1 (defined at /usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]

Original stack trace for 'sample_sequence/while/model/GatherV2_1':
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 74, in play_aidungeon_2
    generator = GPT2Generator()
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 44, in __init__
    temperature=temperature, top_k=top_k, top_p=top_p
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 112, in sample_sequence
    back_prop=False,
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
    return_same_structure)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 82, in body
    next_outputs = step(hparams, prev, past=past)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 70, in step
    lm_output = model.model(hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE)
  File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/model.py", line 157, in model
    h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 3956, in gather
    params, indices, axis, name=name)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 4082, in gather_v2
    batch_dims=batch_dims, name=name)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

[BUG] Error during game.

Generating story...

You are ljx, a knight living in the kingdom of Larion. You have a steel longsword
and a wooden shield. You are on a quest to defeat the evil dragon of Larion.
You've heard he lives up at the north of the kingdom. You set on the path to
defeat him and walk into a dark forest. As you enter the forest you see a large
black dragon flying over the trees. He looks down upon you as he flies overhead

.

"Ah, another one of your kind". The dragon says. "I'm sure they're all like that".

The dragon turns his head slightly and sees you. His eyes narrow as he stares
at you

.

attack

You swing your sword at the dragon's face. It swings its tail at you, but you
dodge it by jumping over it. You then land on the dragon's back and begin to
stab with your sword.

attack

You continue to stab the dragon. He lets out a roar and smashes his claw into
your chest. You fall off of him and roll away from the blow. You quickly get
up and try to run, but the dragon catches you again

hide in the wood

You crawl through the undergrowth. Suddenly, you hear a noise behind you. You
turn around and see a large group of orcs come out of the woods. They look angry
and ready for battle.

ask for help to kill the dragon

You stand up and raise your hands in surrender. The orcs stop their fighting
and stare at you. One orc raises his hand and points at you. He says:
"This is our king! We want no more trouble!"

I am a knight, not a king.

You look down at yourself and say:
"I don't know what you mean, but I'm not a king".

let them kill the dragon

You walk towards the dragon and start stabbing it repeatedly with your sword.
You keep attacking until you feel the dragon's life force drain away. You then
grab the dragon's body and throw it onto the ground.

eat the dragon

You pick up the dragon corpse and put it in your mouth. You chew on the dragon
for several minutes before swallowing it whole

Resurrect the dragon

You take the dragon's blood and pour some into your wounds. You immediately feel
better

find another dragon

Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node sample_sequence/while/concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[sample_sequence/while/Exit_3/_1387]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node sample_sequence/while/concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 234, in play_aidungeon_2
result = "\n" + story_manager.act(action)
File "/content/AIDungeon/story/story_manager.py", line 207, in act
result = self.generate_result(action_choice)
File "/content/AIDungeon/story/story_manager.py", line 212, in generate_result
block = self.generator.generate(self.story_context() + action)
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 119, in generate
text = self.generate_raw(prompt)
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 102, in generate_raw
self.context: [context_tokens for _ in range(self.batch_size)]
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node sample_sequence/while/concat (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[sample_sequence/while/Exit_3/_1387]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node sample_sequence/while/concat (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

Original stack trace for 'sample_sequence/while/concat':
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 106, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 51, in __init__
top_p=top_p,
File "/content/AIDungeon/generator/gpt2/src/sample.py", line 120, in sample_sequence
back_prop=False,
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
return_same_structure)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
body = lambda i, lv: (i + 1, orig_body(*lv))
File "/content/AIDungeon/generator/gpt2/src/sample.py", line 98, in body
else tf.concat([past, next_outputs["presents"]], axis=-2),
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/array_ops.py", line 1420, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_array_ops.py", line 1257, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()

Game saved.
To load the game, type 'load' and enter the following ID: 745e2362-1bbe-11ea-ada8-0242ac1c0002

[BUG] How to avoid re-downloading model after timeout?

Prior to the torrent solution for downloading the NN weights, if a session timed out you could reconnect and "run all" (or simply select "restart and run all") and it would not have to re-download the model, instead displaying "model already installed". But now, using either method, it takes a long time because (I think) it's downloading the model again.

Am I wrong?

[FEAT] Save the model on Google Colab to Google Drive

Google Colab has a four-hour uptime limit per instance. After playing for four hours, the instance gets shut down. It would also be great to save the model and reuse it.

Solution:

  • Mount GDrive in the Colab notebook
  • Add code snippets for loading and saving the game to GDrive as two separate code blocks. This lets users trigger those blocks whenever needed.

Cons:

  • The README will need to tell users to run the code blocks manually instead of using "run all"

I have experience with GDrive + Colab, so I can definitely prepare a PR by the weekend to add these features to the Colab notebook.

Making this issue to gauge interest and feedback!
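
A minimal sketch of the save/restore idea (the paths are assumptions based on the repo layout, and in Colab you would first run `from google.colab import drive; drive.mount("/content/drive")`):

```python
import os
import shutil

def sync_model(model_dir, backup_dir):
    """Restore the model from Drive if it's missing locally; otherwise
    back it up to Drive. Returns which action was taken."""
    if not os.path.isdir(model_dir) and os.path.isdir(backup_dir):
        shutil.copytree(backup_dir, model_dir)   # restore from Drive
        return "restored"
    if os.path.isdir(model_dir) and not os.path.isdir(backup_dir):
        shutil.copytree(model_dir, backup_dir)   # first-time backup to Drive
        return "backed_up"
    return "no_op"

# In the notebook (paths assumed, not the repo's exact layout):
# sync_model("AIDungeon/generator/gpt2/models/model_v5",
#            "/content/drive/My Drive/aidungeon/model_v5")
```

Running this once at the top of the notebook would skip the multi-gigabyte download on every reconnect.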

[BUG] Loading from splash doesn’t use console_print()

The loaded data isn’t output with proper formatting because it uses print() instead of console_print(result)

This was an oversight on my part, but I don’t think it’s major enough to make a formal pull request, since it’s just 7 characters.

Just change line 137 in play.py from `print(result)` to `console_print(result)`.

[BUG] Notebook Loading Issue

Describe the bug
Colab fails to load the notebook. Looks like a syntax error.

To Reproduce
Try to load via Colab

Expected behavior
Colab loads the notebook without a syntax error.

Screenshots
Screenshot_20191209-212137


Add a license

Add a license! Otherwise people can't contribute...

[BUG] Dialog items not shown

Describe the bug
Notes or readable items are not shown.

To Reproduce
Try to reach a prompt where the AI shows a note. It will always ask for input instead of showing the note.

Expected behavior
It should show the rest of the prompt.

Screenshots

image

The note is cut off right after ":"

Additional context

  • Colab
  • 2 - 0

Permissions error [Errno13]

When I try to run the "python3 play.py" command, I get: PermissionError: [Errno 13] Permission denied: 'generator/gpt2/models/model_v5/encoder.json'

python3 play.py
(A bunch of "Future Warning"s)
AI Dungeon 2 will save and use your actions and game to continually improve AI
 Dungeon. If you would like to disable this enter 'nosaving' for any action.
 This will also turn off the ability to save games.

Initializing AI Dungeon! (This might take a few minutes)

Traceback (most recent call last):
  File "play.py", line 271, in <module>
    play_aidungeon_2()
  File "play.py", line 106, in play_aidungeon_2
    generator = GPT2Generator()
  File "/home/user/AIDungeon/generator/gpt2/gpt2_generator.py", line 31, in __init__
    self.enc = encoder.get_encoder(self.model_name, models_dir)
  File "/home/user/AIDungeon/generator/gpt2/src/encoder.py", line 124, in get_encoder
    with open(os.path.join(models_dir, model_name, "encoder.json"), "r") as f:
PermissionError: [Errno 13] Permission denied: 'generator/gpt2/models/model_v5/encoder.json'

How can I fix this?
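
This usually means the model files were downloaded as a different user (e.g. with sudo), so your user can't read them. A possible fix, with the path taken from the error message (assumption: the files are root-owned):

```shell
# Re-own the model directory to the current user, then make sure the
# files are readable and the directories traversable.
sudo chown -R "$USER" generator/gpt2/models
chmod -R u+rX generator/gpt2/models
```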

Error handling for story_manager

Error handling needs some beefing up in this class. Loading a non-existing file causes a crash (in load_from_local)

Another suggestion: Allow loading a save game on the "pick a setting" screen, instead of waiting for the user to boot into a new game first.

Also it seems to default to doing a cloud save? Perhaps some sort of fail-over to local storage?

Crash output:
Traceback (most recent call last):
  File ".\play.py", line 222, in <module>
    play_aidungeon_2()
  File ".\play.py", line 131, in play_aidungeon_2
    id = story_manager.story.save_to_storage()
  File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 131, in save_to_storage
    p = Popen(['gsutil', 'cp', file_name, 'gs://aidungeonstories'], stdout=FNULL, stderr=subprocess.STDOUT)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Exception ignored in: <function Story.__del__ at 0x000001E61E3BEAF8>
Traceback (most recent call last):
  File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 35, in __del__
    self.save_to_storage()
  File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 131, in save_to_storage
    p = Popen(['gsutil', 'cp', file_name, 'gs://aidungeonstories'], stdout=FNULL, stderr=subprocess.STDOUT)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
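
A sketch of the kind of defensive loading suggested above (the function name mirrors the report, but the JSON save format and error handling are assumptions, not the repo's actual code):

```python
import json
import os

def load_from_local(save_path):
    """Return the parsed save, or None instead of crashing when the
    save file is missing or unreadable."""
    if not os.path.isfile(save_path):
        print("No local save found at:", save_path)
        return None
    try:
        with open(save_path, "r", encoding="utf-8") as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError) as err:
        print("Could not load save:", err)
        return None
```

The same pattern (catch the error, report it, fall back) would cover the gsutil FileNotFoundError above: if cloud save fails, write the file locally instead of raising.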

403 Forbidden when downloading model_v5

--2019-12-07 20:25:19--  http://130.211.31.182/model_v5/checkpoint
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

--2019-12-07 20:25:20--  http://130.211.31.182/model_v5/encoder.json
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

--2019-12-07 20:25:20--  http://130.211.31.182/model_v5/hparams.json
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

--2019-12-07 20:25:20--  http://130.211.31.182/model_v5/model-550.index
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

--2019-12-07 20:25:20--  http://130.211.31.182/model_v5/model-550.meta
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

--2019-12-07 20:25:20--  http://130.211.31.182/model_v5/vocab.bpe
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.

Play on CPU?

Hi! So, I have a pretty beefy computer in terms of everything but the GPU, which is an AMD card. I have 24 GB of RAM and a pretty powerful CPU, though.

Would it be possible to modify the code to run on CPU? (and possibly modify scripts to run on Windows without needing Bash?)

I find it annoying whenever I get an OOM error in Colab, so it'd be great if I could run it locally.

Edit: Time is not an issue for me, I just want to run it locally.
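
One common way to force TensorFlow onto the CPU is to hide the GPUs before TensorFlow is imported (untested against this repo; generation will be slow, but time reportedly isn't an issue here):

```python
import os

# Hide all CUDA devices; TensorFlow then falls back to the CPU.
# This must run before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf  # import only after the variable is set
```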

Local installation isn't documented

When running the game locally, the instructions are currently incorrect:

  • Both instructions and install.sh assume that python and pip refer to Python 3. Usually those will be python3 and pip3, however.
  • You need to run sudo pip3 install gsutil before you can run install.sh
  • The tensorflow module is an additional requirement not listed in requirements.txt.
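
Putting the points above together, a manual setup would look roughly like this (the tensorflow version pin is an assumption based on the TF 1.x tracebacks elsewhere in this tracker):

```shell
git clone --branch master https://github.com/AIDungeon/AIDungeon/
cd AIDungeon
python3 -m venv venv                   # use python3/pip3 explicitly
source venv/bin/activate
pip3 install gsutil tensorflow==1.15   # tensorflow is missing from requirements.txt
pip3 install -r requirements.txt
./download_model.sh
python3 play.py
```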

How do I launch it?

I have no idea on how to start this, and I was directed by the creator to come here to ask how.

Discord, torrent availability

Hey,

So I just saw a post on Reddit about sharing AIDungeon via torrents because of your high hosting costs.

Firstly, please make the torrent available here; that's surely the easiest option.
Secondly, you could host your website via IPFS so it never gets overloaded, or I can help you host it with the 10 Gb/s node that I used for seeding the torrent.

And please open a Discord! Maybe I just didn't see it, but I'm sure it would be really useful for everyone. Thanks for this project! ^^

[BUG] Game crashes after too many inputs

I get the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)

This happens when the input to GPT-2 is more than 1024 tokens, I think.
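
That matches GPT-2's fixed 1024-token context window. A typical workaround is to feed the model only the most recent tokens; this sketch is illustrative (the function name and the reserve size are hypothetical, not the repo's code):

```python
MAX_CONTEXT = 1024  # GPT-2's positional-embedding limit

def truncate_context(tokens, reserve=60):
    """Keep only the newest tokens, leaving `reserve` slots for generation."""
    limit = MAX_CONTEXT - reserve
    return tokens[-limit:] if len(tokens) > limit else tokens
```

Applying this to the story context before each generation call keeps long games from ever exceeding the index range in the error.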

Running on Multiple GPU?

I have two GTX 1070s, non-SLI. Has anyone tried running AIDungeon on two GPUs? What were some of the issues or challenges? The program calls for a beefy amount of GPU RAM. Would a configuration with two GPUs work well?

Unicode Decode Error when reading opening.txt

On the machine I'm using I get a UnicodeDecodeError when I read the opening.txt file.

Traceback (most recent call last):
  File "play.py", line 211, in <module>
    play_aidungeon_2()
  File "play.py", line 79, in play_aidungeon_2
    starter = file.read()
  File "/global/software/sl-7.x86_64/modules/langs/python/3.6/lib/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2: ordinal not in range(128)

Replacing the file with some ASCII text fixed it, though.
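
Rather than ASCII-fying the file, the cleaner fix is to pass an explicit encoding when opening it, so the read no longer depends on the machine's default locale (the helper name here is illustrative):

```python
def read_text(path):
    # encoding="utf-8" prevents UnicodeDecodeError on machines whose
    # default locale encoding is ASCII.
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```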

[BUG] found Map quest(?) location sentence incorrect

Describe the bug
I found a map in a mansion, but the sentence describing what the map shows (I guess it's kind of a quest?) doesn't look right.

You order the bandits to come out with their hands raised. They comply and you
 order them to surrender. They obey and slowly emerge from hiding places. You
 tie them up and put them in cages. You lead them to a mansion
> search the mansion

You search the mansion and find many documents and books. One of them is a map
 of the entire continent. It shows the way to the location ofYou find a great
 city ofThe map leads to completeThe map showsYou find the city and its:

[BUG]

Whenever I try to run it, it shows me this message
(copied from the AI Dungeon):

Traceback (most recent call last):
  File "play.py", line 271, in <module>
    play_aidungeon_2()
  File "play.py", line 106, in play_aidungeon_2
    generator = GPT2Generator()
  File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 31, in __init__
    self.enc = encoder.get_encoder(self.model_name, models_dir)
  File "/content/AIDungeon/generator/gpt2/src/encoder.py", line 124, in get_encoder
    with open(os.path.join(models_dir, model_name, "encoder.json"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/encoder.json'

and then it doesn't run. If I'm doing something wrong, please contact me at [email protected]

Please, I love this game and I need it back up and running soon.
Thank you for your time.

P.S. It worked earlier today.
