hendrikstrobelt / lstmvis
Visualization Toolbox for Long Short Term Memory networks (LSTMs)
License: BSD 3-Clause "New" or "Revised" License
Can we center the whole tool?
@sebastianGehrmann: we should distribute the hdf5 and a torch lstm model.
Please include the versions of the packages this has been tested with and works under, particularly Torch.
I suggest adding a "reload data" function. If I start the server and afterwards change the data, I have to restart the server to see the changes. It would be great if this could be done without restarting the entire server.
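A hypothetical sketch of such a reload endpoint, assuming a Flask app and a `create_data_handlers(dir)` loader like the one in lstm_server.py (the endpoint name and loader body here are illustrative, not the repo's actual API):

```python
# Hypothetical "reload data" endpoint; create_data_handlers stands in for
# the real loader that scans a directory for datasets.
from flask import Flask, jsonify

app = Flask(__name__)
data_handlers = {}

def create_data_handlers(directory):
    # placeholder loader: the real one builds LSTMDataHandler objects
    data_handlers[directory] = "loaded"

create_data_handlers("data")

@app.route("/reload", methods=["POST"])
def reload_data():
    # drop cached handlers and rebuild them from disk
    data_handlers.clear()
    create_data_handlers("data")
    return jsonify(status="reloaded", datasets=list(data_handlers))
```

With this in place, a `POST /reload` would pick up changed data without bouncing the whole server process.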
Great tool btw :)
Any plans to port this to Python 3?
From the comments
It is good to see that the hidden state dynamics can reflect patterns corresponding to ranges of inputs. But IMO, a more important thing is to understand the advantages of LSTMs over simple RNNs. For example, we could first compare the hidden dynamics of an LSTM and a simple RNN, to test the hypothesis that an LSTM can remember longer dependencies than a simple RNN. Then we could watch the behavior of the input/output/forget gates to help us understand why this happens.
Sure, this is a good suggestion. Maybe we can add one shared dataset with a simple RNN, LSTM (with gates), and a GRU. Would this let you look for the examples you are interested in?
When I run this execution:
python lstm_server.py -dir data
Getting this error below. Ideas on how to fix?
Traceback (most recent call last):
File "lstm_server.py", line 8, in <module>
from lstmdata.data_handler import LSTMDataHandler
File "/home/user/Documents/GitHub/LSTMVis/lstmdata/data_handler.py", line 11, in <module>
import helper_functions as hf
ModuleNotFoundError: No module named 'helper_functions'
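For what it's worth, this looks like Python 3's removal of implicit relative imports (PEP 328): under Python 2, `import helper_functions` inside the `lstmdata` package found the sibling module, but under Python 3 it does not. A minimal sketch of the fix, using a throwaway package on disk so it runs anywhere (the `greet` helper is made up for the demo):

```python
# Demo: the Python-3-safe spelling is an explicit relative import.
# We build a throwaway lstmdata-like package to keep this self-contained.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "lstmdata")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "helper_functions.py"), "w") as f:
    f.write("def greet():\n    return 'hf loaded'\n")
with open(os.path.join(pkg, "data_handler.py"), "w") as f:
    # instead of the failing `import helper_functions as hf`:
    f.write("from . import helper_functions as hf\n")

sys.path.insert(0, root)
from lstmdata import data_handler
print(data_handler.hf.greet())  # -> hf loaded
```

Changing line 11 of lstmdata/data_handler.py to the explicit form should make the same traceback go away.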
Upon running python lstm_server.py -dir data
I get the error below. I'm running Python 2.7.15 in a virtualenv and I haven't altered any files or directories save for creating /data
and downloading the two corpus packs. Any help would be greatly appreciated, thank you :)
File "lstm_server.py", line 169, in <module>
app.add_api('lstm_server.yaml')
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/connexion/app.py", line 168, in add_api
validator_map=self.validator_map)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/connexion/api.py", line 108, in __init__
validate_spec(spec)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 97, in validate_spec
validate_apis(apis, bound_deref)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 310, in validate_apis
idx=idx,
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 243, in validate_parameter
validate_default_in_parameter(param, deref)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 172, in validate_default_in_parameter
deref=deref,
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/common.py", line 29, in wrapper
sys.exc_info()[2])
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/common.py", line 24, in wrapper
return method(*args, **kwargs)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 155, in validate_value_type
validate_schema_value(schema=deref(schema), value=value, swagger_resolver=swagger_resolver)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/ref_validators.py", line 101, in validate_schema_value
create_dereffing_validator(swagger_resolver)(schema, resolver=swagger_resolver).validate(value)
File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/jsonschema/validators.py", line 130, in validate
raise error
swagger_spec_validator.common.SwaggerValidationError: ('\'states,words\' is not of type \'array\'\n\nFailed validating \'type\' in schema:\n {\'collectionFormat\': \'csv\',\n \'default\': \'states,words\',\n \'description\': "list of required data dimensions, like state values (\'states\'), token values (\'words\'),\\n or meta information (\'meta_XYZ\')\\n",\n \'in\': \'query\',\n \'items\': {\'type\': \'string\'},\n \'name\': \'dims\',\n \'required\': False,\n \'type\': \'array\'}\n\nOn instance:\n \'states,words\'', <ValidationError: "'states,words' is not of type 'array'">)
Any ideas what I'm missing? @HendrikStrobelt Thx!
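In Swagger/OpenAPI 2.0 a parameter's `default` must validate against its declared `type`, so for the `dims` parameter (type `array`, `collectionFormat: csv`) the default `states,words` has to be written as a YAML list rather than a comma-joined string. A hedged guess at the corresponding fix in lstm_server.yaml (only the `default` line changes; the rest mirrors the schema quoted in the error):

```yaml
# dims parameter: default must be an array, not the csv string
- name: dims
  in: query
  type: array
  collectionFormat: csv
  items:
    type: string
  required: false
  default: [states, words]   # was: "states,words"
```

Newer swagger-spec-validator releases enforce this check, which would explain why an unchanged checkout suddenly fails to start.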
The script takes a config file (threshold, selected cells) and a binary ground truth, and evaluates precision and recall.
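A hedged sketch of what that evaluation could look like (function name, shapes, and the "all selected cells above threshold" rule are illustrative assumptions, not the script's actual interface):

```python
import numpy as np

def precision_recall(states, cells, threshold, ground_truth):
    """states: (tokens, cells) activations; ground_truth: binary per token."""
    # predict "pattern present" where every selected cell clears the threshold
    pred = (states[:, cells] >= threshold).all(axis=1)
    tp = np.sum(pred & (ground_truth == 1))
    precision = tp / max(pred.sum(), 1)
    recall = tp / max((ground_truth == 1).sum(), 1)
    return precision, recall

states = np.array([[0.9], [0.1], [0.8], [0.2]])
gt = np.array([1, 0, 0, 1])
print(precision_recall(states, [0], 0.5, gt))  # -> (0.5, 0.5)
```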
Just get the size from the loaded model.
Traceback (most recent call last):
File "server.py", line 321, in <module>
create_data_handlers(args.dir)
File "server.py", line 307, in create_data_handlers
data_handlers[p_dir] = LSTMDataHandler(directory=p_dir, config=config)
File "/home/ubuntu/Projects/LSTMVis/lstmdata/data_handler.py", line 79, in __init__
if self.config['meta']:
KeyError: 'meta'
Wrong:
cs[cs >= activation_threshold_corrected] = 1
cs[cs < activation_threshold_corrected] = -1
Right:
all_below = cs < activation_threshold_corrected
cs[:, :] = 1
cs[all_below] = -1
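The difference matters whenever the threshold is greater than 1: the first version overwrites entries with 1, and those 1s are then caught by the second comparison. A quick numpy demonstration:

```python
import numpy as np

t = 1.5  # an activation_threshold_corrected > 1 triggers the bug
cs = np.array([[0.5, 2.0, 3.0]])

wrong = cs.copy()
wrong[wrong >= t] = 1   # 2.0 and 3.0 become 1 ...
wrong[wrong < t] = -1   # ... and 1 < 1.5, so everything becomes -1

right = cs.copy()
all_below = right < t   # compute the mask before overwriting anything
right[:, :] = 1
right[all_below] = -1

print(wrong)  # [[-1. -1. -1.]]
print(right)  # [[-1.  1.  1.]]
```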
Hey guys,
Fantastic project!
As part of my research, I am looking into extending your platform outside of the NLP domain. In other words, I would like to be able to explore the activation states:
Would you be interested in collaborating on something of the kind?
Currently the regex search uses one character per position -> introduce a text-to-position mapping in data_handler.py to fix this.
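One possible sketch of such a mapping (an assumption about the fix, not the repo's code): join the tokens into one string, run the regex on the joined text, then translate character offsets back to token positions.

```python
# Map regex matches on joined text back to token indices.
import re
from bisect import bisect_right

tokens = ["the", "cat", "sat"]

# record the start offset of each token in the joined string
starts, text, pos = [], "", 0
for tok in tokens:
    starts.append(pos)
    text += tok + " "
    pos += len(tok) + 1

m = re.search(r"cat s", text)          # match spans two tokens
start_tok = bisect_right(starts, m.start()) - 1
end_tok = bisect_right(starts, m.end() - 1) - 1
print(start_tok, end_tok)  # -> 1 2
```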
I get an error when I don't give the unsigned keyword
Make a new option that stores the model as :float() t7
This makes it fail with weird errors.
In the Keras instructions:
# Reshape y_train:
y_train_tiled = numpy.tile(y_train, (num_time_steps,1))
y_train_tiled = y_train_tiled.reshape(len(y_train), num_time_steps , 1)
y_train_tiled should be transposed between tiling and reshaping if we want to assign the same label to the time-distributed outputs of a single example.
Not sure if this is addressed in the docs or elsewhere; I'll close this as soon as I find out...
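A small numpy check of the transpose claim (three hypothetical examples, two time steps):

```python
import numpy as np

y_train = np.array([10, 20, 30])   # one label per example
num_time_steps = 2

# as written in the instructions: labels get scrambled across examples
wrong = np.tile(y_train, (num_time_steps, 1)).reshape(len(y_train), num_time_steps, 1)

# with the transpose: every time step of example i carries y_train[i]
right = np.tile(y_train, (num_time_steps, 1)).T.reshape(len(y_train), num_time_steps, 1)

print(wrong[:, :, 0].tolist())  # [[10, 20], [30, 10], [20, 30]]
print(right[:, :, 0].tolist())  # [[10, 10], [20, 20], [30, 30]]
```

Example 0 in the untransposed version gets labels 10 and 20 across its time steps, which confirms the issue.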
I did some development recently with OpenNMT. Is there a way to use this visualization tool for OpenNMT models? Especially since both OpenNMT and LSTMVis are developed by the same lab?
Thanks in advance!
In the simple example of an lstm.yml:

files: # assign files to reference name
  states: cbt_epoch10.h5 # HDF5 files have to end with .h5 or .hdf5 !!!
  word_ids: train.h5 # maybe you mean "train: train.h5"?
  words: words.dict # dict files have to end with .dict !!

the following word_sequence cannot find the train file:

word_sequence: # defines the word sequence
  file: train # HDF5 file

I am not sure about this, please confirm.
Hi All,
Came across this project and it looks really helpful. I'm looking to use it to explore LSTM in a current project.
I'm a bit confused about the states.h5 input file. I imagine that the state vector from each timestep is the h_t output of the LSTM, for each t (as in the notation of this LSTM description). Is this correct?
If so, the states are dependent on the input sequence. So I can put any sequence into the LSTM and then get these hidden states, but wouldn't this tool also need the states from all of my other input sequences for some of the analysis? Or does it only look at one input at a time? Is there an easy way to swap between or compare activations across input sequences?
Thanks for the clarification, and for contributing this tool!
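For anyone else wondering about the layout: the docs suggest one row of state values per input token, concatenated over the whole corpus, which is why the tool can range over every sequence you fed in rather than one input at a time. A hedged sketch of producing such a file (the dataset name `states1` follows the tutorial configs and may differ in your setup; the sizes are illustrative):

```python
import h5py
import numpy as np

num_tokens, num_cells = 1000, 200  # illustrative corpus and layer sizes
# one h_t row per token position of the concatenated corpus
states = np.random.rand(num_tokens, num_cells).astype("float32")

with h5py.File("states.h5", "w") as f:
    f.create_dataset("states1", data=states)

with h5py.File("states.h5", "r") as f:
    print(f["states1"].shape)  # (1000, 200)
```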
I find this work very interesting. I'm wondering if the authors could release more test datasets, such as the parentheses dataset.
Currently, the only public test data (apart from the live service) is the children's book.
Dear developers,
I ran the "install" instructions from the main page and downloaded the parens data as a test.
However, I get the following error, indicating a missing module:
Traceback (most recent call last):
File "lstm_server.py", line 8, in <module>
from lstmdata.data_handler import LSTMDataHandler
File "/home/dnieuwenhuijse/Software/LSTMVis/lstmdata/data_handler.py", line 11, in <module>
import helper_functions as hf
ModuleNotFoundError: No module named 'helper_functions'
I couldn't find the module on your github repo either, so I am guessing that you have removed the module from your repo by accident?
Kind regards,
David
Jeffrey is going to get me some data for a CNN over text to add.
The match view is not working (both fast and precise). Even with a small subset of my dataset, it does not seem to work!
Using the training documentation I was able to generate the required HDF5 files (states, topk, saliency, ...). But I am not able to visualize this dataset on my localhost after starting the server (python lstm_server.py). Is there any documentation for viewing my own dataset (HDF5 files) in the visualization tab?
Currently datasets are referenced by index, so adding a dataset changes static URLs.
I downloaded the sample dataset (zipped) from Drive, but when I extract it (on Ubuntu 16.04), it throws the above-mentioned error.
We should link to training.md, but it should not be in the main README. The main README should point to a data example (hosted on Dropbox or Google Drive).
I have a tagging task where each token has a bunch of annotations associated with it (the most important being the predicted and actual class). It would be great if the annotations could be shown below the main visualization. This way, I could more easily see, e.g., which neurons lead to a wrong decision.
Getting the above error when I attempt to execute get_states.lua. This is on my own data. Here is what I do before executing get_states.lua
First, I download the tiny-Shakespeare dataset from jcjohnson's GitHub repo, torch-rnn, and split it into 2 datasets: train_tiny-shakespeare.txt and validation_tiny-shakespeare.txt.
Then, I run preprocess like so:
python model/preprocess.py data/tinyshakespeare/train_tiny-shakespeare.txt \
  data/tinyshakespeare/validation_tiny-shakespeare.txt 50 64 \
  data/tinyshakespeare/convert/tiny-shakespeare
Then, I run main.lua to train on this data, like so:
th model/main.lua -rnn_size 128 -word_vec_size 64 -num_layers 2 \
  -epochs 50 -data_file data/tinyshakespeare/convert/tiny-shakespeare.hdf5 \
  -val_data_file data/tinyshakespeare/convert/tiny-shakespeareval.hdf5 \
  -gpuid 0 -savefile cv/tinyshakespeare \
  | tee train-tinyshakespeare.log
Then, I run get_states.lua like so:
th model/get_states.lua \
  -data_file data/tinyshakespeare/convert/tiny-shakespeare.hdf5 \
  -checkpoint_file cv/tinyshakespeare_epoch20.00_420.16.t7 \
  -output_file data/reads/tinyshakespeare_states.h5
This is where I get the error.