latitudegames / aidungeon
Infinite adventures await!
Home Page: http://www.aidungeon.io/
License: MIT License
The 'here' link in the readme is broken.
It leads to the following URL.
https://github.com/nickwalton/AIDungeon/blob/master/www.aidungeon.io
Adding some kind of start.sh script to handle the task would probably be the simplest solution.
Hi! So, I have a pretty beefy computer in terms of everything but the GPU, which is an AMD card. I have 24 GB of RAM and a pretty powerful CPU, though.
Would it be possible to modify the code to run on CPU? (and possibly modify scripts to run on Windows without needing Bash?)
I find it annoying whenever I get an OOM error in Colab, so it'd be great if I could run it locally.
Edit: Time is not an issue for me, I just want to run it locally.
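For what it's worth, TensorFlow normally falls back to CPU on its own when no CUDA device is visible; a minimal sketch of forcing that (the environment variable is standard CUDA behavior, not anything specific to this repo):

```python
import os

# Hide all CUDA devices so TensorFlow builds and runs its graph on CPU only.
# This must be set before TensorFlow is imported anywhere in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```

As noted elsewhere in this thread, expect generation on CPU to take on the order of minutes per response.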
More specifically, named entity tokenization. If every named entity in the fine-tuning data was tokenized to something like <NE-PERSON> or <NE-PLACE>, it could open the possibility of adding in name-generator sub-modules.
This probably falls outside the scope of this GPT-2 project for now, but you have to admit it's a fun idea...
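As a toy illustration of the idea (the entity table and tag names here are made up; a real pipeline would run an NER model over the fine-tuning data to find the entities):

```python
# Hypothetical entity table; in practice this would come from an NER pass.
ENTITY_TAGS = {
    "Larion": "<NE-PLACE>",
    "Alice": "<NE-PERSON>",
}

def mask_entities(text):
    """Replace each known named entity with its placeholder token."""
    for name, tag in ENTITY_TAGS.items():
        text = text.replace(name, tag)
    return text

print(mask_entities("Alice sets out for Larion."))
# -> <NE-PERSON> sets out for <NE-PLACE>.
```

At inference time, a name-generator sub-module would substitute fresh names back in for the placeholder tokens.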
How do I open this up to play? I downloaded an app that recognizes the Python files, but all it does is open for not even a second before closing itself. I've tried this multiple times, and even checked to make sure it wasn't just running in the background, but nothing has come up.
Whenever I try to run it, it shows me this message
(copied from the AI Dungeon)
Traceback (most recent call last):
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 106, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 31, in __init__
self.enc = encoder.get_encoder(self.model_name, models_dir)
File "/content/AIDungeon/generator/gpt2/src/encoder.py", line 124, in get_encoder
with open(os.path.join(models_dir, model_name, "encoder.json"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/encoder.json'
and then it doesn't run. If I'm doing something wrong, please contact me at [email protected]
Please, I love this game and I need it back up and running soon.
Thank you for your time.
P.S. It worked earlier today.
From help:
"quit" Quits the game and saves
Game saves on "quit" for new stories as expected.
If you load a game and "quit" then no save is generated.
Describe the bug
Notes or readable items are not shown
To Reproduce
Try to reach a prompt where the AI shows a note. It will always ask for input instead of showing the note.
Expected behavior
It should show the rest of the prompt.
Screenshots
The notes are cut off right after ":"
Additional context
The 'termios' package is not supported on Windows:
https://docs.python.org/3/library/termios.html
https://github.com/AIDungeon/AIDungeon/blob/d7b77553d21a171268d4d59e822b2c1635875d26/play.py#L4
I haven't tried via Windows Subsystem for Linux, can someone chime in if they've gotten it working on there?
Removing the imports and tcflush lines allows me to play.
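A less destructive option than deleting the lines is to guard the import; a sketch (the flush_input wrapper name is mine, not from the repo):

```python
import sys

try:
    from termios import tcflush, TCIFLUSH  # POSIX-only module
except ImportError:  # native Windows: termios does not exist
    tcflush = None

def flush_input():
    """Discard queued keyboard input, where the platform supports it."""
    if tcflush is not None and sys.stdin.isatty():
        tcflush(sys.stdin, TCIFLUSH)
```

Call sites in play.py would then call flush_input() instead of tcflush directly, making the flush a no-op on Windows.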
https://colab.research.google.com/github/nickwalton/AIDungeon/blob/master/AIDungeon_2.ipynb
Download Complete!
but then
AI Dungeon 2 will save and use your actions and game to continually improve AI
Dungeon. If you would like to disable this enter 'nosaving' for any action.
This will also turn off the ability to save games.
Initializing AI Dungeon! (This might take a few minutes)
ERROR:tensorflow:Couldn't match files for checkpoint generator/gpt2/models/model_v5/model-550
Traceback (most recent call last):
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 74, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 49, in __init__
saver.restore(self.sess, ckpt)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/saver.py", line 1277, in restore
raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.
I have no idea on how to start this, and I was directed by the creator to come here to ask how.
When running the game locally, the instructions are currently incorrect:
install.sh assumes that python and pip refer to Python 3. Usually those will be python3 and pip3, however.
You need to run sudo pip3 install gsutil before you can run install.sh.
The tensorflow module is an additional requirement not listed in requirements.txt.
It worked yesterday, but trying to start it today gives me this:
Initializing AI Dungeon! (This might take a few minutes)
Traceback (most recent call last):
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 74, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 27, in __init__
self.enc = encoder.get_encoder(self.model_name, models_dir)
File "/content/AIDungeon/generator/gpt2/src/encoder.py", line 109, in get_encoder
with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/encoder.json'
It seems that the apostrophe (') is not processed right. Cells go into an infinite loop or something with inputs like "What's your name?"
The in check is not lowercased. You could run lower() on both strings, perhaps?
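The suggested fix in sketch form (the helper name is illustrative, not from the repo):

```python
def contains_ci(haystack, needle):
    """Case-insensitive substring check: lowercase both sides before `in`."""
    return needle.lower() in haystack.lower()

print(contains_ci("What's your name?", "WHAT'S"))  # True
```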
Making an issue for this. I'm busy at the moment, fighting with getting the Python dependencies installed on the Ubuntu subsystem on Windows, but I can spare a few minutes to create this issue.
Some thoughts -
I have two GTX 1070s, non-SLI. Has anyone tried running AI Dungeon on two GPUs? What were some of the issues or challenges? The program suggests a beefy amount of GPU RAM. Would a configuration with two GPUs work well?
Prior to the torrent solution for downloading the NN weights, if a session timed out, you could reconnect and "run all" (or simply select "restart and run all") and it would not have to re-download the model, displaying instead "model already installed". But now, using either method, it takes a long time to do that, because (I think) it's downloading the model again.
Am I wrong?
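A guard along these lines (paths taken from error messages elsewhere in this thread; the actual download command is omitted) would restore the old skip-if-present behavior:

```shell
MODEL_DIR=generator/gpt2/models/model_v5
if [ -f "$MODEL_DIR/model-550.index" ]; then
    echo "model already installed"
else
    echo "model missing, downloading"
    # torrent / gsutil download step would go here
fi
```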
Once I had the game declare "YOU WIN!" (the context was something like "you live happily ever after") and automatically save state for me. And once I had it declare
You win!
You win!
You win!
like 10 times, but it didn't save the game state. (I was assuming the first was a win, and the second was just something it said.)
But more interestingly, what is the condition that leads the game to detect a win? I noticed after the second kind of win (where it repeated itself) that everything I did thereafter, it would just echo back to me:
> Invent penicillin.
You invent penicillin.
> Knit the Statue of Liberty in Spanish.
You knit the Statue of Liberty in Spanish.
(I'm making that up -- I don't remember what the actual input was, but that's the process that was happening.)
Can you share the training scripts?
Contributing on the models would be much easier then!
When I go to this page and select "Run all", this is what I get:
AI Dungeon 2 will save and use your actions and game to continually improve AI
Dungeon. If you would like to disable this enter 'nosaving' for any action.
This will also turn off the ability to save games.
Initializing AI Dungeon! (This might take a few minutes)
Traceback (most recent call last):
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 74, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/AIDungeon/AIDungeon/AIDungeon/AIDungeon/generator/gpt2/gpt2_generator.py", line 27, in __init__
self.enc = encoder.get_encoder(self.model_name, models_dir)
File "/content/AIDungeon/AIDungeon/AIDungeon/AIDungeon/AIDungeon/generator/gpt2/src/encoder.py", line 111, in get_encoder
with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'generator/gpt2/models/model_v5/vocab.bpe'
Hey,
This project is really amazing and could be enjoyed by a lot of people, which is why I think it would be incredible to translate the model. Would it be possible? And if yes, how?
For information, there are a lot of RP players in France, so it would be interesting to try it in French.
Unexpected string in JSON at position 2706
SyntaxError: Unexpected string in JSON at position 2706
at JSON.parse ()
at za.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20191206-082401-RC00_284190142:1583:377)
at Ba (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20191206-082401-RC00_284190142:12:336)
at za.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20191206-082401-RC00_284190142:10:453)
at Da.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20191206-082401-RC00_284190142:13:206)
at b (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20191206-082401-RC00_284190142:22:43)
Describe the bug
Colab fails to load the notebook. Looks like a syntax error.
To Reproduce
Try to load via Colab
Expected behavior
A clear and concise description of what you expected to happen.
Additional context
Describe the bug
I found a map in a mansion, but the sentence describing what the map shows (I guess it's kind of a quest?) doesn't look right.
You order the bandits to come out with their hands raised. They comply and you
order them to surrender. They obey and slowly emerge from hiding places. You
tie them up and put them in cages. You lead them to a mansion
> search the mansion
You search the mansion and find many documents and books. One of them is a map
of the entire continent. It shows the way to the location ofYou find a great
city ofThe map leads to completeThe map showsYou find the city and its:
We nabbed the PyPI domain https://pypi.org/project/aidungeon/
Assigning to myself:
Add a license! Otherwise people can't contribute...
I get the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
This happens when the input to GPT-2 is more than 1024 tokens, I think.
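A sketch of the usual workaround: trim the token list before it is fed to the model (the constant and function are mine; exactly where this would hook into the generator is an assumption):

```python
MAX_CONTEXT = 1024  # GPT-2's positional-embedding limit

def trim_context(tokens, reserve=100):
    """Keep only the most recent tokens, leaving room for generated output."""
    limit = MAX_CONTEXT - reserve
    return tokens[-limit:] if len(tokens) > limit else tokens

print(len(trim_context(list(range(2000)))))  # 924
```

Dropping the oldest tokens loses early story context, but keeps the feed within the [0, 1024) index range the error complains about.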
Describe the bug
The game always crashes around reply number 15-20, no matter what I write, even if I enter a blank reply. (Technically, if I leave a blank reply, the game comes up with a reply for me and THEN it crashes.)
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No crashing
Additional context
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[{{node sample_sequence/while/model/GatherV2_1}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 180, in play_aidungeon_2
result = "\n" + story_manager.act(action)
File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 181, in act
result = self.generate_result(action_choice)
File "/Fast-Games/AI Dungeon 2/story/story_manager.py", line 186, in generate_result
block = self.generator.generate(self.story_context() + action)
File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 108, in generate
text = self.generate_raw(prompt)
File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 91, in generate_raw
self.context: [context_tokens for _ in range(self.batch_size)]
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[node sample_sequence/while/model/GatherV2_1 (defined at /usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Original stack trace for 'sample_sequence/while/model/GatherV2_1':
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 74, in play_aidungeon_2
generator = GPT2Generator()
File "/Fast-Games/AI Dungeon 2/generator/gpt2/gpt2_generator.py", line 44, in __init__
temperature=temperature, top_k=top_k, top_p=top_p
File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 112, in sample_sequence
back_prop=False,
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
return_same_structure)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
body = lambda i, lv: (i + 1, orig_body(*lv))
File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 82, in body
next_outputs = step(hparams, prev, past=past)
File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/sample.py", line 70, in step
lm_output = model.model(hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE)
File "/Fast-Games/AI Dungeon 2/generator/gpt2/src/model.py", line 157, in model
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 3956, in gather
params, indices, axis, name=name)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 4082, in gather_v2
batch_dims=batch_dims, name=name)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
The game runs best on a (very parallel) GPU, but when run on CPU, it only manages to keep about 1.5 cores busy and takes a couple of minutes to generate each response. If it could use all cores effectively it would be much more playable.
I've tried adding some profiling code to ask TensorFlow what it's doing, as described here, and as far as I can tell it's mostly doing one long serial chain of MatMul operations as it goes one by one through each layer of the network, for each word it has to generate. (This is with inter- and intra-op parallelism both set to 8.)
I think the problem might be that these are matrix-vector multiplies, which TensorFlow doesn't parallelize on CPU (tensorflow/tensorflow#6752), but which a GPU can churn through in parallel no problem.
It might not be possible to make CPU performance any better until TensorFlow learns to do these operations more like a GPU does. Alternatively, turning up self.batch_size in GPT2Generator might hack around the problem by making all the multiplies actually be matrix-matrix multiplies, but changing that variable makes things start crashing, because some of the code in GPT-2 (like penalize_used) is written to expect only one sample coming through at a time.
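For reference, the thread-count knobs mentioned above are set like this in the TF1-style API the project uses (a config fragment, not a tested change to this repo):

```python
import tensorflow as tf

# Session config fragment: both thread pools set to 8, as in the experiment above.
config = tf.ConfigProto(
    inter_op_parallelism_threads=8,  # how many independent ops may run concurrently
    intra_op_parallelism_threads=8,  # threads available inside a single op (e.g. MatMul)
)
sess = tf.Session(config=config)
```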
Describe the bug
After running python play.py
% python play.py
Traceback (most recent call last):
File "play.py", line 2, in <module>
from generator.gpt2.gpt2_generator import *
File "/Users/travismyrick/rando/AIDungeon/generator/gpt2/gpt2_generator.py", line 7, in <module>
from generator.gpt2.src import sample, encoder, model
File "/Users/travismyrick/rando/AIDungeon/generator/gpt2/src/sample.py", line 62
def sample_sequence(*, hparams, length, start_token=None, batch_size=None, context=None, temperature=1, top_k=0, top_p=1):
^
SyntaxError: invalid syntax
To Reproduce
Steps to reproduce the behavior:
Expected behavior
For the program to load properly without a syntax error?
Screenshots
Additional context
Hey,
Awesome project here! I have been wanting to get better with my ML, and this is right up my alley.
I am curious what the specs for deploying this on the cloud would be, such as on AWS. Obviously instances without a GPU are considerably cheaper.
Is there work being done on hosted cloud infrastructure for this, and if so, what is being used?
Thanks again!
Hey,
So I just saw a post on Reddit about sharing AI Dungeon via torrents due to your high bandwidth costs.
Firstly, make the torrent available here; that's surely the easiest option.
Secondly, you could host your website on an IPFS page so you never get overage charges, or I can help you host it with the 10 Gb/s node that I used for sharing the torrent.
And open a Discord! Maybe I just didn't see it, but I'm sure it would be really useful for everyone. Thanks for this project!
When I try to run the "python3 play.py" command, I get "PermissionError: [Errno 13] Permission denied: 'generator/gpt2/models/model_v5/encoder.json'"
python3 play.py
(A bunch of "Future Warning"s)
AI Dungeon 2 will save and use your actions and game to continually improve AI
Dungeon. If you would like to disable this enter 'nosaving' for any action.
This will also turn off the ability to save games.
Initializing AI Dungeon! (This might take a few minutes)
Traceback (most recent call last):
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 106, in play_aidungeon_2
generator = GPT2Generator()
File "/home/user/AIDungeon/generator/gpt2/gpt2_generator.py", line 31, in __init__
self.enc = encoder.get_encoder(self.model_name, models_dir)
File "/home/user/AIDungeon/generator/gpt2/src/encoder.py", line 124, in get_encoder
with open(os.path.join(models_dir, model_name, "encoder.json"), "r") as f:
PermissionError: [Errno 13] Permission denied: 'generator/gpt2/models/model_v5/encoder.json'
How can I fix this?
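Assuming the model files were downloaded with restrictive permissions (e.g. via sudo or a torrent client running as another user), a likely fix is simply to make them readable; the path is taken from the traceback:

```shell
MODEL_DIR=generator/gpt2/models/model_v5   # path from the error message
mkdir -p "$MODEL_DIR"                      # no-op if the directory already exists
chmod -R a+r "$MODEL_DIR"                  # grant read access to all users
```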
Generating story...
You are ljx, a knight living in the kingdom of Larion. You have a steel longsword
and a wooden shield. You are on a quest to defeat the evil dragon of Larion.
You've heard he lives up at the north of the kingdom. You set on the path to
defeat him and walk into a dark forest. As you enter the forest you see a large
black dragon flying over the trees. He looks down upon you as he flies overhead.
"Ah, another one of your kind". The dragon says. "I'm sure they're all like that".
The dragon turns his head slightly and sees you. His eyes narrow as he stares
at you.
> attack
You swing your sword at the dragon's face. It swings its tail at you, but you
dodge it by jumping over it. You then land on the dragon's back and begin to
stab with your sword.
> attack
You continue to stab the dragon. He lets out a roar and smashes his claw into
your chest. You fall off of him and roll away from the blow. You quickly get
up and try to run, but the dragon catches you again
> hide in the wood
You crawl through the undergrowth. Suddenly, you hear a noise behind you. You
turn around and see a large group of orcs come out of the woods. They look angry
and ready for battle.
> ask for help to kill the dragon
You stand up and raise your hands in surrender. The orcs stop their fighting
and stare at you. One orc raises his hand and points at you. He says:
"This is our king! We want no more trouble!"
> I am a knight, not a king.
You look down at yourself and say:
"I don't know what you mean, but I'm not a king".
> let them kill the dragon
You walk towards the dragon and start stabbing it repeatedly with your sword.
You keep attacking until you feel the dragon's life force drain away. You then
grab the dragon's body and throw it onto the ground.
> eat the dragon
You pick up the dragon corpse and put it in your mouth. You chew on the dragon
for several minutes before swallowing it whole
> Resurrect the dragon
You take the dragon's blood and pour some into your wounds. You immediately feel
better
> find another dragon
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node sample_sequence/while/concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[sample_sequence/while/Exit_3/_1387]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node sample_sequence/while/concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 234, in play_aidungeon_2
result = "\n" + story_manager.act(action)
File "/content/AIDungeon/story/story_manager.py", line 207, in act
result = self.generate_result(action_choice)
File "/content/AIDungeon/story/story_manager.py", line 212, in generate_result
block = self.generator.generate(self.story_context() + action)
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 119, in generate
text = self.generate_raw(prompt)
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 102, in generate_raw
self.context: [context_tokens for _ in range(self.batch_size)]
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node sample_sequence/while/concat (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[sample_sequence/while/Exit_3/_1387]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[1,48,2,25,498,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node sample_sequence/while/concat (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Original stack trace for 'sample_sequence/while/concat':
File "play.py", line 271, in <module>
play_aidungeon_2()
File "play.py", line 106, in play_aidungeon_2
generator = GPT2Generator()
File "/content/AIDungeon/generator/gpt2/gpt2_generator.py", line 51, in __init__
top_p=top_p,
File "/content/AIDungeon/generator/gpt2/src/sample.py", line 120, in sample_sequence
back_prop=False,
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
return_same_structure)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
body = lambda i, lv: (i + 1, orig_body(*lv))
File "/content/AIDungeon/generator/gpt2/src/sample.py", line 98, in body
else tf.concat([past, next_outputs["presents"]], axis=-2),
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/array_ops.py", line 1420, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_array_ops.py", line 1257, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
Game saved.
To load the game, type 'load' and enter the following ID: 745e2362-1bbe-11ea-ada8-0242ac1c0002
--2019-12-07 20:25:19-- http://130.211.31.182/model_v5/checkpoint
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
--2019-12-07 20:25:20-- http://130.211.31.182/model_v5/encoder.json
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
--2019-12-07 20:25:20-- http://130.211.31.182/model_v5/hparams.json
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
--2019-12-07 20:25:20-- http://130.211.31.182/model_v5/model-550.index
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
--2019-12-07 20:25:20-- http://130.211.31.182/model_v5/model-550.meta
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
--2019-12-07 20:25:20-- http://130.211.31.182/model_v5/vocab.bpe
Connecting to 130.211.31.182:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-12-07 20:25:20 ERROR 403: Forbidden.
The loaded data isn't output with proper formatting because it uses print(result) instead of console_print(result).
This was an oversight on my part, but I don't think it's major enough to make a formal pull request, since it's just a 7-character change.
Just change line 137 in play.py from print(result) to console_print(result).
On the machine I'm using I get a UnicodeDecodeError when I read the opening.txt file.
Traceback (most recent call last):
File "play.py", line 211, in <module>
play_aidungeon_2()
File "play.py", line 79, in play_aidungeon_2
starter = file.read()
File "/global/software/sl-7.x86_64/modules/langs/python/3.6/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2: ordinal not in range(128)
Replacing the file with some ASCII text fixed it, though.
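The portable fix is to pass an explicit encoding instead of relying on the locale default; a sketch (the helper name is mine, not from play.py):

```python
def read_opening(path):
    """Read a text file as UTF-8 regardless of the system locale."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```

That keeps the curly quotes in opening.txt (byte 0xe2 starts a UTF-8 punctuation sequence) without having to strip the file down to ASCII.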
He smiles at you, and says "Hello," then he points to the candle flame. "This
is your first step towards controlling fire".
concentration
You concentrate hard, and the flame begins to grow brighter. Soon, it fills the
entire room. Then, the flame becomes bright enough to burn through paper. As
the flame grows larger, the man continues to smile at you.
You continue to watch as the flames consume everything in their path. Eventually,
they burn themselves out, leaving nothing but ash behind
meet the old man again
feel the little fire inside
You reach down, and you feel something inside you. It's hot, and it's burning.
You immediately grab hold of it, and you start to run outside. You rush out
the front door, and you find yourself in a field
set fire to the rain
You run forward, and you put your hand out in front of you. The flame catches
the air, and it ignites the grass. You watch as the grass turns red, and soon,
the entire field is ablaze. You stare at the blaze, and you can't help but laugh
You let the flame guide you, and you let it guide the flame. Soon, you find yourself
soaring through the sky, and you let the flame guide you. Soon, you find yourself
floating through the air, and you let the flame guide you
concentration
As you can see, it repeats the message.
Due to the recent additions to install.sh, it is now impossible to use the program on Windows. Please update this program for universal compatibility.
I'm trying to get tensorRT working but I need the saved_model.pb file. Can you upload it?
From my observation, it seems the majority of failures come from the torrent download not working for some reason, at least in the Colab notebook. I propose that the model download be separated from the install script, put in its own shell script, and invoked optionally by the user.
In my case, I downloaded the model using my desktop torrent client so I can seed it, and I also uploaded a copy of the model to my Google Drive. Then I have a code block on Colab (will make a PR for it soon) that mounts the drive and copies the model into AIDungeon. This works very well so far.
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Not sure; I was a few hours into my game, issued "sit on throne", and the game crashed.
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[{{node sample_sequence/while/model/GatherV2_1}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./play.py", line 261, in <module>
play_aidungeon_2()
File "./play.py", line 224, in play_aidungeon_2
result = "\n" + story_manager.act(action)
File "/mnt/data/AIDungeon/story/story_manager.py", line 206, in act
result = self.generate_result(action_choice)
File "/mnt/data/AIDungeon/story/story_manager.py", line 211, in generate_result
block = self.generator.generate(self.story_context() + action)
File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 116, in generate
text = self.generate_raw(prompt)
File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 99, in generate_raw
self.context: [context_tokens for _ in range(self.batch_size)]
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
[[node sample_sequence/while/model/GatherV2_1 (defined at /usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Original stack trace for 'sample_sequence/while/model/GatherV2_1':
File "./play.py", line 261, in <module>
play_aidungeon_2()
File "./play.py", line 102, in play_aidungeon_2
generator = GPT2Generator()
File "/mnt/data/AIDungeon/generator/gpt2/gpt2_generator.py", line 49, in __init__
top_p=top_p,
File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 121, in sample_sequence
back_prop=False,
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2753, in while_loop
return_same_structure)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2245, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2170, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 2705, in <lambda>
body = lambda i, lv: (i + 1, orig_body(*lv))
File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 90, in body
next_outputs = step(hparams, prev, past=past)
File "/mnt/data/AIDungeon/generator/gpt2/src/sample.py", line 76, in step
hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE
File "/mnt/data/AIDungeon/generator/gpt2/src/model.py", line 185, in model
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py", line 3956, in gather
params, indices, axis, name=name)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 4082, in gather_v2
batch_dims=batch_dims, name=name)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
Exception ignored in: <bound method Story.__del__ of <story.story_manager.Story object at 0x7f3b70f93dd8>>
Traceback (most recent call last):
File "/mnt/data/AIDungeon/story/story_manager.py", line 37, in __del__
self.save_to_storage()
File "/mnt/data/AIDungeon/story/story_manager.py", line 135, in save_to_storage
stderr=subprocess.STDOUT,
File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gsutil': 'gsutil'
Expected behavior
The game doesn't crash.
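On the underlying `indices[0,0] = 1024 is not in [0, 1024)` error: GPT-2's position-embedding table (`wpe`) has exactly 1024 rows, so the `tf.gather(wpe, ...)` in `model.py` fails the moment the context plus the tokens sampled so far reaches position 1024. Trimming the context before sampling would avoid the crash. A minimal sketch (the function name and the `reserve` size are my own assumptions, not the repo's API):

```python
MAX_POSITIONS = 1024  # GPT-2's wpe table has exactly 1024 position rows

def clamp_context(context_tokens, reserve=60, max_positions=MAX_POSITIONS):
    """Drop the oldest tokens so that the prompt plus up to `reserve`
    newly sampled tokens never indexes past position 1023."""
    budget = max_positions - reserve
    if len(context_tokens) > budget:
        return context_tokens[-budget:]  # keep only the most recent tokens
    return context_tokens
```

Applying something like this in `generate_raw` before building the feed dict would keep long sessions inside the model's window instead of crashing once the story context grows past the limit.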
I think you just need to add gsutil
to requirements.txt
Additional context
Error handling needs some beefing up in this class. Loading a non-existing file causes a crash (in load_from_local).
Another suggestion: Allow loading a save game on the "pick a setting" screen, instead of waiting for the user to boot into a new game first.
Also it seems to default to doing a cloud save? Perhaps some sort of fail-over to local storage?
Crash output:
Traceback (most recent call last):
File ".\play.py", line 222, in <module>
play_aidungeon_2()
File ".\play.py", line 131, in play_aidungeon_2
id = story_manager.story.save_to_storage()
File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 131, in save_to_storage
p = Popen(['gsutil', 'cp', file_name, 'gs://aidungeonstories'], stdout=FNULL, stderr=subprocess.STDOUT)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1207, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Exception ignored in: <function Story.__del__ at 0x000001E61E3BEAF8>
Traceback (most recent call last):
File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 35, in __del__
self.save_to_storage()
File "C:\Users\zcanann\source\repos\AIDungeon\story\story_manager.py", line 131, in save_to_storage
p = Popen(['gsutil', 'cp', file_name, 'gs://aidungeonstories'], stdout=FNULL, stderr=subprocess.STDOUT)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1520.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1207, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
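A possible fail-over: wrap the gsutil call so a missing binary degrades to a local-only save instead of crashing the game. This is only a sketch; the `cmd` parameter exists purely to make the fallback testable (the repo hard-codes `gsutil`):

```python
import subprocess

def save_to_storage(file_name, bucket="gs://aidungeonstories", cmd="gsutil"):
    """Attempt a cloud save; fall back to keeping the local file when the
    gsutil binary is not on PATH (e.g. Windows without the Cloud SDK)."""
    try:
        subprocess.run(
            [cmd, "cp", file_name, bucket],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.STDOUT,
            check=False,
        )
        return "cloud"
    except FileNotFoundError:
        # No gsutil binary: the save file stays on disk, game keeps running
        return "local"
```

The same `try`/`except FileNotFoundError` guard would also fix the crash in `__del__`, since that path goes through the identical `Popen` call.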
Google Colab has a 4-hour uptime limit for an instance: after playing for 4 hours, the instance gets shut down. It would also be awesome to save the model and reuse it.
I have experience with Google Drive + Colab, so I can definitely prepare a PR by the weekend to add these features to the Colab notebook.
Making this issue to gauge interest and feedback!
Describe the current behavior:
Out of memory crash. To reproduce: load 95c2fb00-1906-11ea-a04f-0242ac1c0002 and type "Try to cast the spell written on the wall"
Describe the expected behavior:
No crash.
The web browser you are using (Chrome, Firefox, Safari, etc.):
Desktop Chrome Version 79.0.3945.74 (Official Build) beta (64-bit)