
Comments (11)

christineaa commented on May 29, 2024

Thanks for your attention to this work!
We have no plan to release the training code.

robinsongh381 commented on May 29, 2024

@christineaa I have one more question regarding the experiments in the QKConv paper.
When training and evaluating on the QReCC task, did you use the 14K conversations or the 80K question-answer pairs as the training dataset?

And similarly for the test dataset: did you evaluate at the conversation level or the question-answer level?

Thank you!

christineaa commented on May 29, 2024

We used the question-answer pairs as the training/dev/test dataset, with 60.4K, 3.1K, and 16.4K samples respectively.
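
For reference, a minimal sketch of how the per-pair counts relate to conversation counts, assuming the public turn-level QReCC JSON layout (the file name and the "Conversation_no" field are assumptions, not something stated in this thread):

```python
import json

# Load the turn-level QReCC training file (hypothetical local path).
with open("qrecc_train.json") as f:
    samples = json.load(f)  # a list of dicts, one per question-answer pair

# Each entry is one question-answer pair; a conversation is the group of turns
# sharing the same "Conversation_no" (field name assumed from the public release).
num_pairs = len(samples)
num_conversations = len({s["Conversation_no"] for s in samples})
print(f"{num_pairs} question-answer pairs across {num_conversations} conversations")
```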

dhx20150812 commented on May 29, 2024

Hi @christineaa, thanks for your nice work.

I have one more question: how should I build the BM25 index for the QReCC task? I noticed you posted a link to the ml-qrecc repo. Should I download the web pages from both the Common Crawl and the Wayback Machine and build the BM25 index from them?

christineaa commented on May 29, 2024

Thanks for your attention to this work!
Yes, you should download the web pages from both sources and follow the instructions in the ml-qrecc repo.
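
For anyone following along, a minimal retrieval-side sketch using Pyserini (using Pyserini is an assumption on my part; the ml-qrecc repo has the authoritative indexing instructions, and the paths, index name, and parameters below are placeholders):

```python
# Index the extracted passages first (shell), e.g.:
#   python -m pyserini.index.lucene --collection JsonCollection \
#       --input passages_jsonl/ --index indexes/qrecc-bm25 \
#       --generator DefaultLuceneDocumentGenerator --threads 8 --storeRaw
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("indexes/qrecc-bm25")  # hypothetical index directory
searcher.set_bm25(k1=0.82, b=0.68)               # BM25 parameters; tune as needed

hits = searcher.search("Where was Albert Einstein born?", k=10)
for hit in hits:
    print(hit.docid, round(hit.score, 2))
```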

dhx20150812 commented on May 29, 2024

Thanks for your reply.

robinsongh381 commented on May 29, 2024

@christineaa I have further questions regarding training and evaluation of the QKConv model on the QReCC dataset.

  1. In your paper, the third footnote states "We remove conversations without truth responses." What does this mean exactly? Did you apply it to the training set, the test set, or both? Please share the code for this processing if available, because I cannot find anything relevant in the QKConv inference or dataset code.

  2. When evaluating for Table 2 of the QKConv paper on QReCC, did you use the above "removed" version of qrecc-test.json, or the plain qrecc-test.json? I mean the qrecc-test.json from here.

robinsongh381 commented on May 29, 2024

Also, when you report Table 2, did you exclude test examples which do not have gold knowledge?

christineaa commented on May 29, 2024

@robinsongh381 Thanks for your attention to this work!

  1. We remove samples whose "Truth_answer"/"Answer" is an empty string, for both the training set (57,946 samples left) and the test set (15,024 samples left); a minimal filtering sketch is shown after this list.
  2. We use the "removed" version of the test set, as in the evaluation code.
  3. We include samples without golden knowledge in Table 2, since the absence of golden knowledge does not affect the response generation evaluation; we only exclude them from the knowledge selection evaluation.
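
A hedged sketch of that filtering step (the file names and the exact answer field per split are assumptions based on the public QReCC release, not code from this repo):

```python
import json

def filter_empty_answers(path, answer_key):
    """Drop samples whose ground-truth answer is an empty string."""
    with open(path) as f:
        samples = json.load(f)
    kept = [s for s in samples if s.get(answer_key, "").strip()]
    print(f"{path}: kept {len(kept)} of {len(samples)} samples")
    return kept

# Field names are assumptions; the reply above mentions both "Truth_answer" and "Answer".
train = filter_empty_answers("qrecc_train.json", "Answer")      # ~57,946 kept per the reply
test = filter_empty_answers("qrecc_test.json", "Truth_answer")  # ~15,024 kept per the reply
```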

robinsongh381 commented on May 29, 2024

@christineaa Thank you for kind response.

I have a further question for Question 3.

The absence of gold knowledge indicates that the essential piece of information does not exist within the knowledge pool, and hence a factually correct, knowledge-grounded response cannot be obtained.

For this reason, I have found that previous works on QReCC evaluation, such as DPR-IHN [1] and CONQRR [2], have excluded such cases (i.e., examples without gold-knowledge annotations) from their evaluation.

What is your opinion on this?

Thank you

[1] Saving Dense Retriever from Shortcut Dependency in Conversational Search

[2] CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

christineaa commented on May 29, 2024

@robinsongh381
The knowledge base for QReCC contains 54M passages, so there is more or less always some knowledge relevant to the questions. We demonstrate how the model utilizes incorrectly retrieved knowledge in Table 5 and Table 6.

However, DPR-IHN and CONQRR excluding samples without golden knowledge is a different case: they report knowledge selection Recall metrics as their main results, and Recall cannot be computed without golden knowledge.
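
A small sketch of that last point (purely illustrative; the sample layout with "retrieved"/"gold" passage-id lists is an assumption): per-sample Recall is undefined when the gold passage set is empty, so such samples have to be skipped.

```python
def recall_at_k(retrieved_ids, gold_ids, k=10):
    """Fraction of gold passages found in the top-k retrieved passages."""
    if not gold_ids:  # no golden knowledge: Recall is undefined, skip this sample
        return None
    return len(set(retrieved_ids[:k]) & set(gold_ids)) / len(gold_ids)

def corpus_recall(samples, k=10):
    # samples: list of dicts with "retrieved" and "gold" passage-id lists (assumed layout)
    scores = [r for s in samples
              if (r := recall_at_k(s["retrieved"], s["gold"], k)) is not None]
    return sum(scores) / len(scores) if scores else 0.0
```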
