
multirc's People

Contributors

heglertissot · in2uitive


multirc's Issues

"sentences_used" field is inaccurate or wrong?

I think some of the "sentences_used" fields are inaccurate or wrong. For example:

The article is:
"paragraph": {"text": "<b>Sent 0: </b>timothy likes to play sports .<br><b>Sent 1: </b>he spends his time after school playing basketball and baseball .<br><b>Sent 2: </b>sometimes timothy pretends he is a famous baseball pitcher for his favorite team with his friends .<br><b>Sent 3: </b>he plays with his friends mandy and andrew .<br><b>Sent 4: </b>timothy also plays pretend when he is alone .<br><b>Sent 5: </b>he has an imaginary friend named sean .<br><b>Sent 6: </b>sean is an elephant who watches television with timothy .<br><b>Sent 7: </b>mandy likes playing baseball but she also likes to paint .<br><b>Sent 8: </b>mandy 's favorite class at school is art .<br><b>Sent 9: </b>she likes making pictures of flowers .<br><b>Sent 10: </b>her teacher says she is a good artist .<br><b>Sent 11: </b>she painted a picture of a tree for her teacher .<br><b>Sent 12: </b>there were red and yellow leaves on it .<br><b>Sent 13: </b>it had apples on it .<br><b>Sent 14: </b>when andrew goes home after baseball , he likes to eat a snack .<br><b>Sent 15: </b>he eats carrots and bananas .<br><b>Sent 16: </b>if he is a good boy his mom , mrs. smith , sometimes gives him milk and cookies .<br><b>Sent 17: </b>afterwards , andrew finishes his homework .<br><b>Sent 18: </b>"

The question and its answer options are:

{"multisent": false, "question": "who does timothy play with ?", "sentences_used": [0, 1], "answers": [{"text": "timothy plays with mandy and andrew . timothy also plays with his imaginary friend sean", "isAnswer": true, "scores": {}}, {"text": "the famous baseball pitcher", "isAnswer": false, "scores": {}}, {"text": "mrs smith", "isAnswer": false, "scores": {}}, {"text": "timothy likes to play sports", "isAnswer": false, "scores": {}}, {"text": "basketball and baseball", "isAnswer": false, "scores": {}}], "idx": "18"}

We can see clearly that we cannot answer this question, or even ask it, based on sentence 0 and sentence 1 alone.
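For anyone who wants to scan the data for more cases like this, a rough word-overlap check can flag questions whose correct answer mentions words that never occur in the sentences listed under "sentences_used". This is only a sketch: the "Sent N:" splitting and the field names follow the example above, and the heuristic will produce some false positives.

import re

def split_sentences(paragraph_text):
    # Split the HTML-marked paragraph into an {index: sentence} dict.
    parts = re.split(r"<b>Sent (\d+): </b>", paragraph_text)
    # re.split yields ["", "0", "sent 0 text<br>", "1", "sent 1 text<br>", ...]
    return {int(parts[i]): parts[i + 1].replace("<br>", " ").strip()
            for i in range(1, len(parts) - 1, 2)}

def flag_suspicious(question, sentences):
    # Report correct-answer words that do not occur in any "sentences_used" sentence.
    used_text = " ".join(sentences[i] for i in question["sentences_used"]
                         if i in sentences).lower()
    for ans in question["answers"]:
        if not ans["isAnswer"]:
            continue
        missing = [w for w in ans["text"].lower().split()
                   if len(w) > 3 and w not in used_text]
        if missing:
            print("question", question["idx"],
                  "- answer words outside sentences_used:", missing)

Applied to the example above, this flags words such as "mandy", "andrew" and "sean", which only appear in later sentences of the paragraph.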

CodaLab output file format (mentioned in the leaderboard) is not the same as the output in MultiRC's human output file

{"pid":"News/CNN/cnn-3b5bbf3ba31e4775140f05a8b59db55b22ee3e63.txt","qid":"0","scores":[1,0,0,1,0]},

This is different from the format below:

{
"WH_dev_1639": "best",
"WH_dev_3121": "united kingdom",
"WH_dev_844": "horror",
"WH_dev_612": "canada",
"WH_dev_2211": "13",
"WH_dev_1956": "kuwait",
"WH_dev_2779": "rome",
"WH_dev_3432": "edwards air force base",
"WH_dev_2655": "rock",
"WH_dev_4437": "santa cruz",

Meaning of "multisent" property unclear

What does the multisent property of a question mean exactly?

According to the paper, it has been verified that every question is multi-sentence, so shouldn't multisent always be true? In fact, the following question from the dev set clearly relies on multiple sentences, but is annotated as "multisent": false.

          {
            "question": "How many years after he entered the army did Cavour become prime minister?",
            "sentences_used": [0, 7],
            "multisent": false
          },

Context:
Cavour was a younger son of a noble Piedmontese family, and entered the army in 1826, serving in the engineers.
[...]
In 1850 he became minister of commerce; in 1852, prime minister. After that, his history is the history of Italy itself.
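A quick tally over the dev file makes it easy to see how often "multisent" disagrees with the length of "sentences_used". The question fields follow the snippets above; the top-level "data" list and the per-item "paragraph" key are assumptions about the file layout.

import json
from collections import Counter

def audit_multisent(path):
    with open(path) as f:
        data = json.load(f)["data"]          # assumed top-level key
    counts = Counter()
    for item in data:
        for q in item["paragraph"]["questions"]:
            counts[(q.get("multisent"), len(q.get("sentences_used", [])) > 1)] += 1
    for (multisent, uses_many), n in counts.items():
        print("multisent=%s, uses >1 sentence=%s: %d" % (multisent, uses_many, n))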

Being able to run the pre-trained model of surface-lr on a given input file

@microth
It would be great if we could change the script a bit so that it is more compatible with the setup we have on CodaLab.

I think we need the following settings:

  • training script: take the local train file and train on it.
  • evaluation script: given the pre-trained model, load it and evaluate it on a given input file; a rough skeleton is sketched after this list. The following invocation format would be ideal:
> script_name  <input-data-json-file> <output-prediction-json-path>
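A minimal sketch of such an evaluation entry point, under the same layout assumptions as the snippets above (a top-level "data" list and an item-level "id" field). The script name is hypothetical, and the always-zero scorer stands in for the real pre-trained surface-lr model, so only the command-line interface and the output format are illustrated.

#!/usr/bin/env python
# Usage: python evaluate_surface_lr.py <input-data-json-file> <output-prediction-json-path>
import json
import sys

def score_answer(question_text, answer_text):
    # Placeholder: the real script would load the pre-trained surface-lr model
    # and return 1 when it predicts the answer option is correct.
    return 0

def main(input_json, output_json):
    with open(input_json) as f:
        data = json.load(f)["data"]          # assumed top-level key
    predictions = []
    for item in data:
        pid = item["id"]                     # e.g. "News/CNN/cnn-....txt"
        for q in item["paragraph"]["questions"]:
            scores = [score_answer(q["question"], a["text"]) for a in q["answers"]]
            predictions.append({"pid": pid, "qid": str(q["idx"]), "scores": scores})
    with open(output_json, "w") as f:
        json.dump(predictions, f)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])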
