DoCAI (Document Conversational AI) is an AI-powered document analysis tool and question-answering chatbot. Given a URL, a document, or free text, DoCAI analyzes the content, answers questions in the context of the document, and uses Natural Language Processing to provide insights such as the document's topic distribution. Anyone who works through a large set of documents on a daily basis can leverage DoCAI to quickly go through and assimilate large amounts of information.
The core component of DoCAI is the question-answering chatbot: given a document or context, the bot answers the user's questions from that document.
For the question-answering chatbot, I use the Stanford Question Answering Dataset (SQuAD).
It is important to understand the dataset and visualize insights in the data before building the model. Please run the Dataset Analysis notebook to pre-process the data and analyze insights from the dataset.
*** open Dataset Analysis.ipynb ***
The initial step is to convert the SQuAD JSON data into a Pandas DataFrame and to convert tokens into word embeddings (pretrained GloVe embeddings) and character embeddings.
*** python pre-process.py ***
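A minimal sketch of the SQuAD-to-DataFrame flattening step described above. The function name and column choices are my own illustration, not the exact code in pre-process.py; it assumes the SQuAD v1.1 JSON layout (articles containing paragraphs containing question-answer pairs).

```python
import json
import pandas as pd

def squad_to_dataframe(path):
    """Flatten SQuAD v1.1 JSON into one row per question (illustrative sketch)."""
    with open(path) as f:
        squad = json.load(f)["data"]
    rows = []
    for article in squad:
        for para in article["paragraphs"]:
            context = para["context"]
            for qa in para["qas"]:
                # SQuAD stores one or more gold answers per question;
                # keep the first one for training
                ans = qa["answers"][0] if qa["answers"] else {"text": "", "answer_start": -1}
                rows.append({
                    "id": qa["id"],
                    "question": qa["question"],
                    "context": context,
                    "answer_text": ans["text"],
                    "answer_start": ans["answer_start"],
                })
    return pd.DataFrame(rows)
```

Each row then pairs a question with its context and answer span, which is the shape the embedding and training steps below consume.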
I chose the Bidirectional Attention Flow (BiDAF) architecture for the baseline question-answering model. BiDAF applies an attention mechanism to the encoded question and context sequences, attending in both directions: context-to-query and query-to-context.
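The attention layer can be sketched in NumPy as follows. This is an illustrative sketch of BiDAF's bidirectional attention, not the repo's actual model code; shapes and weight names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(C, Q, w):
    """C: (T, d) encoded context, Q: (J, d) encoded query, w: (3d,) weights.

    Returns the (T, 4d) query-aware context representation.
    """
    T, d = C.shape
    J = Q.shape[0]
    # Similarity matrix: S[t, j] = w . [c_t ; q_j ; c_t * q_j]
    S = np.empty((T, J))
    for t in range(T):
        for j in range(J):
            S[t, j] = w @ np.concatenate([C[t], Q[j], C[t] * Q[j]])
    # Context-to-query: each context word attends over all query words
    A = softmax(S, axis=1)        # (T, J)
    U = A @ Q                     # (T, d) attended query vectors
    # Query-to-context: which context words matter most to the query
    b = softmax(S.max(axis=1))    # (T,)
    h = b @ C                     # (d,) attended context vector
    H = np.tile(h, (T, 1))        # broadcast to every context position
    # Concatenate [c ; u ; c*u ; c*h] per position
    return np.concatenate([C, U, C * U, C * H], axis=1)
```

The resulting query-aware context vectors are then fed through the modeling and output layers to predict the answer span.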
To train the model with the baseline hyperparameters:
*** python3 train.py -n baseline ***
*** tensorboard --logdir save --port 5678 *** # runs TensorBoard
The final step is to run test.py to get the answers for new questions.
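At test time the model outputs per-position start and end probabilities, and the predicted answer is the highest-scoring valid span. A hypothetical sketch of that decoding step (test.py's actual decoding may differ; the max-length cap is an assumption):

```python
def best_span(p_start, p_end, max_len=15):
    """Pick the span (i, j) with i <= j maximizing p_start[i] * p_end[j]."""
    best_i, best_j, best_score = 0, 0, -1.0
    for i, ps in enumerate(p_start):
        # only consider ends at or after the start, within max_len tokens
        for j in range(i, min(i + max_len, len(p_end))):
            score = ps * p_end[j]
            if score > best_score:
                best_i, best_j, best_score = i, j, score
    return best_i, best_j
```

The returned indices are then mapped back to the context tokens to produce the answer text.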
EM (Exact Match) and F1 are the standard evaluation metrics for SQuAD. EM is stricter than F1: for example, if the actual answer is "Elon Musk" and the predicted answer is "Elon", the EM score is 0. However, we get 50% recall (half of the actual answer is covered) and 100% precision (all of the predicted answer is correct). Hence, the F1 score is 2 × precision × recall / (precision + recall) = 66.66%.
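The two metrics can be computed at the token level as below. This is a simplified sketch: the official SQuAD evaluation script additionally lowercases and strips punctuation and articles before comparing, which is omitted here.

```python
from collections import Counter

def exact_match(pred, gold):
    """1 if prediction equals the gold answer exactly, else 0."""
    return int(pred == gold)

def f1_score(pred, gold):
    """Token-overlap F1 between predicted and gold answer strings."""
    pred_toks, gold_toks = pred.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

On the example above, `exact_match("Elon", "Elon Musk")` is 0 while `f1_score("Elon", "Elon Musk")` is 2/3, matching the 66.66% worked out in the text.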
After 30 epochs of training, the baseline gave me 58% F1 and 51% EM.
I am currently planning to modify the baseline and develop new architectures. I am developing code to use ELMo and BERT in place of GloVe embeddings. In the baseline I used bidirectional LSTMs; I want to see whether GRUs perform better. I also want to implement Transformers with attention, and to see whether an ensemble of multiple models performs well.
Insights given by DoCAI include the topic distribution of documents (https://github.com/kaushikData/Document-Classification-Using-Wikipedia-Data) and the sentiment of documents.