
detext's Introduction

"Relax like a sloth, let DeText do the understanding for you"


DeText: A Deep Neural Text Understanding Framework

DeText is a deep text understanding framework for NLP-related ranking, classification, and language generation tasks. It leverages semantic matching using deep neural networks to understand member intents in search and recommender systems.

As a general NLP framework, DeText can be applied to many tasks, including search & recommendation ranking, multi-class classification and query understanding tasks.

More details can be found in the LinkedIn Engineering blog post.

Highlight

  • Natural language understanding powered by state-of-the-art deep neural networks
    • automatic feature extraction with deep models
    • end-to-end training
    • interaction modeling between ranking sources and targets
  • A general framework with great flexibility
    • customizable model architectures
    • multiple text encoder support
    • multiple data input types support
    • various optimization choices
    • standard training flow control
  • Easy-to-use
    • Configuration based modeling (e.g., all configurations through command line)

General Model Architecture

DeText supports a general model architecture that contains the following components:

  • Word embedding layer. It converts the sequence of words into a d by n matrix.

  • Text encoding layer (CNN/BERT/LSTM). It takes the word embedding matrix as input and maps the text into a fixed-length embedding.

  • Interaction layer. It generates deep features based on the text embeddings. Options include concatenation, cosine similarity, etc.

  • Wide & Deep feature processing. The traditional (wide) features are combined with the interaction (deep) features in a wide & deep fashion.

  • MLP layer. The MLP layer combines the wide and deep features to produce the final score.

All parameters are jointly updated to optimize the training objective.
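The flow through these components can be sketched end to end. The following is an illustrative numpy sketch only — toy dimensions, a mean-pooling stand-in for the CNN/BERT/LSTM encoder, and random untrained weights — not DeText's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, D = 100, 8  # vocabulary size and embedding dimension (illustrative)
embedding = rng.normal(size=(VOCAB, D))  # word embedding layer

def encode(token_ids):
    """Stand-in text encoder: embed tokens, then mean-pool to a
    fixed-length vector (DeText would use CNN/BERT/LSTM here)."""
    return embedding[token_ids].mean(axis=0)

def interact(q, d):
    """Interaction layer: cosine similarity plus concatenation
    of the two text embeddings."""
    cos = q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-8)
    return np.concatenate([[cos], q, d])

def mlp(deep, wide):
    """Combine wide (traditional) features with deep interaction
    features; weights here are random (training would learn them)."""
    h = np.concatenate([deep, wide])
    W = rng.normal(size=(h.size, 1))
    return float(h @ W)

query_ids = np.array([3, 17, 42])
doc_ids = np.array([5, 42, 9, 61])
wide_features = np.array([0.2, 1.5])  # e.g. hand-crafted relevance features

score = mlp(interact(encode(query_ids), encode(doc_ids)), wide_features)
print(score)  # a single ranking score for this (query, document) pair
```

In the real framework all of these parameters are trained jointly, as the sentence above notes.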

Model Configurables

DeText offers great flexibility for clients to build customized networks for their own use cases:

  • LTR/classification layer: in-house LTR loss implementations, tf-ranking LTR losses, and multi-class classification support.

  • MLP layer: customizable number of layers and number of dimensions.

  • Interaction layer: support Cosine Similarity, Hadamard Product, and Concatenation.

  • Text embedding layer: support CNN, BERT, LSTM with customized parameters on filters, layers, dimensions, etc.

  • Continuous feature normalization: element-wise rescaling, value normalization.

  • Categorical feature processing: modeled as entity embedding.

All these can be customized via hyper-parameters in the DeText template. Note that tf-ranking is supported in the DeText framework, i.e., users can choose the LTR losses and metrics defined in tf-ranking.
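The three interaction options differ in the shape of the deep features they produce. A minimal numpy illustration with toy vectors (not DeText code):

```python
import numpy as np

q = np.array([1.0, 2.0, 3.0])  # query embedding (illustrative)
d = np.array([3.0, 2.0, 1.0])  # document embedding (illustrative)

# Cosine similarity: a single scalar deep feature
cosine = q @ d / (np.linalg.norm(q) * np.linalg.norm(d))

# Hadamard product: element-wise, keeps the embedding dimension
hadamard = q * d

# Concatenation: stacks both embeddings into one longer vector
concat = np.concatenate([q, d])

print(cosine)    # 0.714...
print(hadamard)  # [3. 4. 3.]
print(concat)    # [1. 2. 3. 3. 2. 1.]
```

Cosine similarity is the most compact choice; Hadamard product and concatenation hand the MLP more degrees of freedom at the cost of more parameters.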

User Guide

Dev environment set up

  1. Create your virtualenv (Python version >= 3.7)
    VENV_DIR=<your venv dir>
    python3 -m venv $VENV_DIR  # Make sure your python version >= 3.7
    source $VENV_DIR/bin/activate  # Enter the virtual environment
  2. Upgrade pip and setuptools
    pip3 install -U pip
    pip3 install -U setuptools
  3. Install DeText in editable mode:
    pip install -e .
  4. Verify the environment setup through pytest. If all tests pass, the environment is correctly set up
    pytest
  5. Refer to the training manual (TRAINING.md) to find information about customizing the model:
    • Training data format and preparation
    • Key parameters to customize and train DeText models
    • Detailed information about all DeText training parameters for full customization
  6. Train a model using DeText (e.g., run_detext.sh)

Tutorial

If you would like to quickly try out the library, refer to the following tutorial notebooks:

  • text_classification_demo.ipynb

    This notebook shows how to use DeText to train a multi-class text classification model on a public query intent classification dataset. Detailed instructions on data preparation, model training, and model inference are included.

  • autocompletion.ipynb

    This notebook shows how to use DeText to train a text ranking model on a public query auto-completion dataset. Detailed steps on data preparation, model training, and model inference are included.

Citation

Please cite DeText in your publications if it helps your research:

@manual{guo-liu20-blog,
  author    = {Weiwei Guo and
               Xiaowei Liu and
               Sida Wang and 
               Huiji Gao and
               Bo Long},
  title     = {DeText: A Deep NLP Framework for Intelligent Text Understanding},
  url       = {https://engineering.linkedin.com/blog/2020/open-sourcing-detext},
  year      = {2020}
}

@inproceedings{guo-gao19,
  author    = {Weiwei Guo and
               Huiji Gao and
               Jun Shi and 
               Bo Long},
  title     = {Deep Natural Language Processing for Search Systems},
  booktitle = {ACM SIGIR 2019},
  year      = {2019}
}

@inproceedings{guo-gao19-kdd,
  author    = {Weiwei Guo and
               Huiji Gao and
               Jun Shi and 
               Bo Long and 
               Liang Zhang and
               Bee-Chung Chen and
               Deepak Agarwal},
  title     = {Deep Natural Language Processing for Search and Recommender Systems},
  booktitle = {ACM SIGKDD 2019},
  year      = {2019}
}

@inproceedings{guo-liu20,
  author    = {Weiwei Guo and
               Xiaowei Liu and
               Sida Wang and 
               Huiji Gao and
               Ananth Sankar and 
               Zimeng Yang and 
               Qi Guo and 
               Liang Zhang and
               Bo Long and 
               Bee-Chung Chen and 
               Deepak Agarwal},
  title     = {DeText: A Deep Text Ranking Framework with BERT},
  booktitle = {ACM CIKM 2020},
  year      = {2020}
}

@inproceedings{jia-long20,
  author    = {Jun Jia and
               Bo Long and
               Huiji Gao and 
               Weiwei Guo and 
               Jun Shi and
               Xiaowei Liu and
               Mingzhou Zhou and
               Zhoutong Fu and
               Sida Wang and
               Sandeep Kumar Jha},
  title     = {Deep Learning for Search and Recommender Systems in Practice},
  booktitle = {ACM SIGKDD 2020},
  year      = {2020}
}

@inproceedings{wang-guo20,
  author    = {Sida Wang and
               Weiwei Guo and
               Huiji Gao and
               Bo Long},
  title     = {Efficient Neural Query Auto Completion},
  booktitle = {ACM CIKM 2020},
  year      = {2020}
}

@article{liu-guo20,
  author    = {Xiaowei Liu and
               Weiwei Guo and
               Huiji Gao and
               Bo Long},
  title     = {Deep Search Query Intent Understanding},
  journal   = {arXiv preprint arXiv:2008.06759},
  year      = {2020}
}

detext's People

Contributors

amberkaur1, anukaal, cyzhao2013, guoweiwei, jakiejj, nini2yoyo, starwang, wang-jia-rui, xwli-chelsea, yazhigao, zhoutong-fu


detext's Issues

how to generate wide sparse features

Hi, I'm confused about how to generate the wide sparse features. Here is my understanding: combine the multi-field categorical features together to form a multi-hot sparse feature, with the indices generated by hashing or a label-encoding-like approach?
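For reference, one common recipe along the lines this question describes — a general sketch of multi-hot encoding with a label-encoder-style global index, not necessarily DeText's exact pipeline:

```python
# Hypothetical field vocabularies; in practice these come from the data
fields = {"country": ["us", "uk", "de"], "device": ["mobile", "desktop"]}

# Build one global index over all (field, value) pairs, label-encoder style
index = {}
for field, values in fields.items():
    for v in values:
        index[(field, v)] = len(index)

def multi_hot(example):
    """Encode one example's categorical values as a multi-hot vector."""
    vec = [0] * len(index)
    for field, v in example.items():
        vec[index[(field, v)]] = 1
    return vec

print(multi_hot({"country": "uk", "device": "mobile"}))
# -> [0, 1, 0, 1, 0]
```

Hashing the (field, value) pair into a fixed-size bucket space is the usual alternative when the vocabulary is too large to enumerate.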

Dependency issue while testing

Following the installation instructions, I set up a new venv and successfully ran the setup step python setup.py develop.
Unfortunately, while running a sample test using pytest, there is a ModuleNotFoundError for the tensorflow dependency.
Checking in the same venv, I see the exact version of TF installed that setup.py requires.
(error trace: screenshot attached in the original issue)

Although the model training toy example using bash run_detext.sh at test/resources seems to work fine.

LIBERT model

Hi, is your LIBERT model available through Huggingface or some other platform to download and use? Thanks

[TO DO] ranking demo notebook

Can I know when the ranking demo will be released?

If it is not coming soon, is there any guideline on how we can use DeText for document ranking?

Pre-trained models

hi

Will you be releasing any pre-trained models, say on MS MARCO passage/document ranking?

Can it rank the documents semantically without pretraining

Hello Team,

Say I don't have any queries mapped to documents, only the documents themselves: will it rank the documents semantically based on ad hoc questions? I have gone through your documentation but didn't find such a feature listed.

Thanks in advance.

Newest Post is Not Shown on Hashtags!

The issue is that whenever I try to see the daily posts for a hashtag like #dailycoding or #100daysprogrammingchallenge, the newest post is not shown under that hashtag, so I have to go to people's profiles to see it.

Sequence Completion

Hi,
In the Linkedin engineering blog, it's said that "Currently, DeText can support ranking, classification, and sequence completion—3 of the 6 representative tasks". I couldn't see any instructions or training examples for sequence completion(query auto completion). Can you help me with this?

Randomness in the demo

Sometimes running the demo would result in precision@1 = 0.5 instead of the expected precision@1 = 1

e.g. https://github.com/linkedin/detext/runs/1001643393

def test_demo():
    from subprocess import run, PIPE
    from pathlib import Path
    completed_process = run(['sh', 'run_detext.sh'], stderr=PIPE, cwd=f'{Path(__file__).parent}/resources')
    assert completed_process.returncode == 0
>       assert completed_process.stderr.endswith(b'metric/precision@1 = 1.0\n')
E       assert False
E        +  where False = <built-in method endswith of bytes object at 0x558411bbd750>(b'metric/precision@1 = 1.0\n')
E        +    where <built-in method endswith of bytes object at 0x558411bbd750> = b"WARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, pl...FO:tensorflow:metric/precision@1 = 0.5\nI0819 06:13:35.714258 140186665367360 logger.py:27] metric/precision@1 = 0.5\n".endswith
E        +      where b"WARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, pl...FO:tensorflow:metric/precision@1 = 0.5\nI0819 06:13:35.714258 140186665367360 logger.py:27] metric/precision@1 = 0.5\n" = CompletedProcess(args=['sh', 'run_detext.sh'], returncode=0, stderr=b"WARNING:tensorflow:\nThe TensorFlow contrib modu...O:tensorflow:metric/precision@1 = 0.5\nI0819 06:13:35.714258 140186665367360 logger.py:27] metric/precision@1 = 0.5\n").stderr
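For anyone hitting this, pinning the usual sources of randomness narrows down such flakiness. A general sketch only — GPU ops and data shuffling can remain nondeterministic, and for the TF 1.x graph one would also call tf.set_random_seed:

```python
import os
import random
import numpy as np

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)  # only effective if set before interpreter start
random.seed(SEED)
np.random.seed(SEED)
# For TF 1.x, additionally: tf.set_random_seed(SEED) inside the graph.

a = np.random.rand(3)
np.random.seed(SEED)  # re-seed to show determinism
b = np.random.rand(3)
print(np.array_equal(a, b))  # True: same seed, same draws
```

With unpinned seeds, small-batch metrics like precision@1 on a two-example demo can easily flip between runs.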

Inclusive language

Can you address a couple of instances of language in the project where the wording can be made more inclusive? (example: whitelist -> allowlist). We are trying to make the code in all LinkedIn projects more inclusive. Could you please examine whether or not these need to be updated, and make the changes? For suggested replacements see go/inclusivelanguage or google. THANK YOU!

Term   URL
Master https://github.com/linkedin/detext/blob/38e7b74879debd8ae5f2685367c81cc3a8aa003b/.github/workflows/python-app-py3.yml
Master https://github.com/linkedin/detext/blob/1f91764f4adaea3655cabe6ce4c4f1ae23d2ac67/NOTICE
Master https://github.com/linkedin/detext/blob/faa9957888a4dfbf1c15f82b2d4c8e8942c18776/notebooks/text_classification_demo.ipynb
Master https://github.com/linkedin/detext/blob/180433fdcaf25baa4dcd7ba7212db23254565d5d/src/detext/run_detext.py
Master https://github.com/linkedin/detext/blob/38e7b74879debd8ae5f2685367c81cc3a8aa003b/RELEASING.md
Master https://github.com/linkedin/detext/blob/0ce996f846dc6998d1512a4160d52504bb15aff8/src/detext/model/bert/modeling.py
Master https://github.com/linkedin/detext/blob/1f91764f4adaea3655cabe6ce4c4f1ae23d2ac67/test/resources/vocab.txt
Slave  https://github.com/linkedin/detext/blob/1f91764f4adaea3655cabe6ce4c4f1ae23d2ac67/test/resources/vocab.txt
Ghetto https://github.com/linkedin/detext/blob/1f91764f4adaea3655cabe6ce4c4f1ae23d2ac67/test/resources/vocab.txt

TypeError: __init__() got an unexpected keyword argument 'feature_names'

I am trying to run query autocomplete. Unable to run the line

args = DetextArg(
    ftr_ext="cnn",
    num_filters=50,
    num_units=64,
    emb_sim_func=["inner"],  # cosine matching function
    ltr_loss_fn="softmax",   # learning-to-rank loss
    optimizer="bert_adam",   # same AdamWeightDecay optimizer as in BERT training
    learning_rate=0.002,
    max_len=16,
    min_len=3,
    use_deep=True,
    num_train_steps=300,
    steps_per_stats=30,
    steps_per_eval=30,
    train_batch_size=64,
    test_batch_size=64,
    pmetric="mrr@5",
    vocab_file=vocab_file,
    feature_names=["label", "usr_prefix", "doc_suffix"],
    train_file="train.tfrecord",
    dev_file="dev.tfrecord",
    test_file="test.tfrecord",
    out_dir="output")

Query the data.

Hey!
Once I have trained the model with custom data, is there a method/function to query the ingested data?
Thanks!
