
wikiedits's Introduction

Wiki Edits 2.0

A collection of scripts for the automatic extraction of edited sentences from text revision histories, such as Wikipedia revisions. It was used to create the WikEd Error Corpus, a corpus of corrective Wikipedia edits, published in:

@inproceedings{wiked2014,
    author = {Roman Grundkiewicz and Marcin Junczys-Dowmunt},
    title = {The WikEd Error Corpus: A Corpus of Corrective Wikipedia Edits and its Application to Grammatical Error Correction},
    booktitle = {Advances in Natural Language Processing -- Lecture Notes in Computer Science},
    editor = {Adam Przepiórkowski and Maciej Ogrodniczuk},
    publisher = {Springer},
    year = {2014},
    volume = {8686},
    pages = {478--490},
    url = {http://emjotde.github.io/publications/pdf/mjd.poltal2014.draft.pdf}
}

WikEd Error Corpus

The corpus has been prepared for two languages:

The repository also includes format conversion scripts for WikEd. The scripts work independently from Wiki Edits and can be found in the bin directory.

Requirements

This is a new version of Wiki Edits and it is not compatible with the old version! Go back to commit 163d771 if you need the old scripts.

This package is tested on Ubuntu with Python 2.7.

Required Python packages:

Optional packages:

Run the tests by typing nosetests from the main directory.

Installation

All requirements can be installed via the Makefile if you already have pip installed:

sudo apt-get install python-pip
sudo make all

Usage

To extract edits from parallel texts:

./bin/txt_edits.py tests/data/lorem_ipsum.old.txt tests/data/lorem_ipsum.new.txt

And from a Wikipedia dump file:

zcat tests/data/enwiki-20140102.tiny.xml.gz | ./bin/wiki_edits.py

A third script extracts edits from a list of dump files or URLs:

./bin/collect_wiki_edits.py -w /path/to/work/dir dumplist.txt
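
The list file presumably contains one dump file path or URL per line. A hypothetical dumplist.txt (the file names and dump date are illustrative only, not taken from the repository) might look like:

/path/to/enwiki-20140102-pages-meta-history1.xml.gz
https://dumps.wikimedia.org/enwiki/20140102/enwiki-20140102-pages-meta-history2.xml.gz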

Language-specific options

All scripts are mostly language-independent. A few components need to be updated to run the scripts for non-English languages:

Currently supported languages: English, Polish, German.
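
One such language-dependent component is typically the sentence tokenizer. A minimal sketch, assuming NLTK's punkt models are used (this mirrors the general approach rather than the repository's exact code, and requires nltk.download('punkt') beforehand):

import nltk.data

# Punkt sentence tokenizer models shipped with NLTK for the three
# supported languages.
TOKENIZERS = {
    "english": "tokenizers/punkt/english.pickle",
    "german":  "tokenizers/punkt/german.pickle",
    "polish":  "tokenizers/punkt/polish.pickle",
}

def sentence_tokenize(text, lang="english"):
    tokenizer = nltk.data.load(TOKENIZERS[lang])
    return tokenizer.tokenize(text)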

wikiedits's People

Contributors

gurunathparasaram, snukky


wikiedits's Issues

Typo in the readme ;)

Here's an error to be corrected (and noted in the edit history):

The scripts work independently form Wiki Edits

It should be from.

Clarity on Levenshtein ratio

Hello @snukky, I wanted some clarity on this piece of code:

import math  # required by the formula below

def __levenshtein_ratio(self, old_sent, new_sent):
    old_words = old_sent.split()
    new_words = new_sent.split()

    min_words = min(len(old_words), len(new_words))
    dist = self.__levenshtein_on_words(old_words, new_words)
    # Word-level edit distance, normalised by the length of the shorter
    # sentence and scaled by the logarithm of that length.
    ratio = dist / float(min_words) * math.log(min_words,
                                               self.LEVENSHTEIN_RATIO_LOG_BASE)
    return ratio

particularly how the ratio is calculated and how it can be interpreted.
Is it a scaled measure of how much of the shorter sentence, proportionally, needs to be edited given the distance?
Could you point me to relevant literature if available? Thanks
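
For what it's worth, here is a self-contained sketch that reproduces the formula outside the class, using a plain dynamic-programming word-level Levenshtein distance; LOG_BASE = 20 is an assumed value, not necessarily the repository's LEVENSHTEIN_RATIO_LOG_BASE:

import math

def levenshtein_on_words(a, b):
    # Standard dynamic-programming edit distance over word lists.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

LOG_BASE = 20  # assumed value, for illustration only

def levenshtein_ratio(old_sent, new_sent):
    old_words = old_sent.split()
    new_words = new_sent.split()
    min_words = min(len(old_words), len(new_words))
    dist = levenshtein_on_words(old_words, new_words)
    # Distance relative to the shorter sentence, weighted by
    # log(min_words): the same relative distance counts for more
    # when the sentences are longer.
    return dist / float(min_words) * math.log(min_words, LOG_BASE)

# One substitution in a 5-word sentence: 1/5 * log(5, 20) ~ 0.107
print(levenshtein_ratio("He go to school .", "He goes to school ."))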

Can't download the datasets

Hi,
I am having problems downloading the prepared datasets, wiked-v1.0.en.tgz and wiked-v1.0.en.prepro.tgz. Do the links still work?
If I were to process the dump from scratch, which dump should I download? I am looking at the page of the current dump, https://dumps.wikimedia.org/enwiki/20220920/. Is it the pages-articles file?
Thanks!

Anna

NUCLE-like edits

The paper mentions a +Select corpus, which keeps only edits similar to those that appear in the NUCLE corpus. Is there a script available to do that?

txt_edits.py can't handle large text files

  1. In bin/txt_edits.py, the input and output text files are read into memory whole in the lines
    old_text = " ".join(line.rstrip() for line in file.readlines()) and
    new_text = " ".join(line.rstrip() for line in file.readlines()),
    which doesn't work for large files (such as datasets whose sizes are in the GBs); see the streaming sketch after this list.

  2. Also in setup.py, the line has to be corrected from
    long_description=open('README.txt').read(), to
    long_description=open('README.md').read(),
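
For the first point, a minimal sketch of a streaming alternative, assuming the downstream edit extraction can consume bounded chunks of text (the chunk size is arbitrary, and chunk boundaries may split sentences, so a real fix would also need to align chunks between the old and new files):

def iter_text_chunks(path, max_chars=1 << 20):
    # Accumulate stripped lines into bounded chunks instead of reading
    # the whole file into one string.
    buf, size = [], 0
    with open(path) as f:
        for line in f:
            line = line.rstrip()
            buf.append(line)
            size += len(line) + 1
            if size >= max_chars:
                yield " ".join(buf)
                buf, size = [], 0
    if buf:
        yield " ".join(buf)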

Filter edits by English language proficiency/activity on wiki

Hi!
What would be the best way to consider/filter only edits performed by users with:

and possibly:

  • consider only edits in the main namespace
  • consider only edits labeled as grammar/spelling fixes in the tag

?

I'm asking because, looking at wiked-v1.0.en.prepro.tgz, spelling error corrections, grammatical error corrections and stylistic changes seem to be outnumbered by edits that are less useful for grammatical error correction tasks (some of the edits are non-vandalizing regressions, generally poor, or otherwise fail to correct all problems in the sentences). Also, my gut feeling is that "meta" discussions/talk pages might have lower language quality.

What would be the best way to implement this in the code? I was thinking of labeling the unedited/edited snippets with username + comment, by extending wiki_dump_parser.py or edit_filter.py to print out namespace/username/comment-filtered edits (but I am not sure what the easiest approach would be); a rough sketch follows.
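
As a starting point, here is a rough, hypothetical sketch of such a filter; the function, its parameters, and the keyword list are assumptions for illustration, not the actual interface of wiki_dump_parser.py or edit_filter.py:

import re

# Comments that suggest a grammar/spelling fix; the keyword list is a guess.
GRAMMAR_COMMENT = re.compile(r"\b(grammar|spelling|typo|punctuation)\b", re.I)

def keep_edit(namespace, comment):
    # Keep only main-namespace (ns 0) edits whose revision comment
    # suggests a grammar or spelling fix.
    if namespace != 0:
        return False
    if not comment or not GRAMMAR_COMMENT.search(comment):
        return False
    return True

print(keep_edit(0, "fixed grammar and typos"))  # True
print(keep_edit(1, "reply to discussion"))      # False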
