A Korean news dataset generator for building NLP datasets. The generated files can be used with the pointer-generator project by Abigail See; for details, please refer to the paper Get To The Point: Summarization with Pointer-Generator Networks.
This is a sub-project of skku-coop-project.
To run this, first install the prerequisites. You can install them by running the command below:
pip install requests koalanlp hanja beautifulsoup4 textrankr tensorflow
You also need to install khaiii. Please refer to its installation instructions; note that khaiii supports Linux environments only.
First, gather raw news data from the BigKinds service. You can adjust the parameters in data to gather news from other categories (default: 'society news').
python crawler.py <article_size> <save_path>
Note: BigKinds strictly limits client requests and might block your IP after this job. Please use this tool carefully.
Next, process the raw news data into regularized text and generate summaries. To do this, run the command below:
python preprocessor.py <load_path> <save_path>
This may take a long time. Afterwards, you can find the processed articles in .story format under save_path. The files are plain text, so you can open them with any text editor.
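For reference, the .story layout below follows the convention used by the pointer-generator project's CNN/DailyMail data (article body first, then each summary sentence after an @highlight marker). The sample text is illustrative, not actual output; real files here would contain Korean news.

```python
from pathlib import Path

# Write a minimal .story file: article body, then summary
# sentences, each preceded by an "@highlight" marker.
story = "\n".join([
    "First sentence of the article body.",
    "Second sentence of the article body.",
    "",
    "@highlight",
    "",
    "One-sentence summary of the article.",
])
Path("sample.story").write_text(story, encoding="utf-8")

# Split the file back into its article and summary parts.
parts = Path("sample.story").read_text(encoding="utf-8").split("@highlight")
article = parts[0].strip()
highlights = [p.strip() for p in parts[1:]]
print(highlights)  # ['One-sentence summary of the article.']
```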
To generate binary datasets that work with the pointer-generator project, run the command below:
python dataset-processor.py <load_path>
Note: dataset-processor.py was not originally written by me; please refer to the original file. load_path is the location of the preprocessed files, i.e., the output of preprocessor.py.
After this, you can find the finished_files directory under your working directory. It contains the files required by the pointer-generator project. You can test the dataset by following that project's instructions.
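If you want to inspect the generated binary files, note that the cnn-dailymail preprocessing scripts (which dataset-processor.py is presumably based on) frame each record as a serialized tf.train.Example preceded by an 8-byte length prefix. A minimal sketch of that framing, using only the standard library and dummy payloads instead of real protos:

```python
import io
import struct

def write_record(f, payload: bytes) -> None:
    # 8-byte length prefix followed by the payload itself,
    # matching the framing used by the cnn-dailymail scripts.
    f.write(struct.pack("q", len(payload)))
    f.write(payload)

def read_records(f):
    # Yield payloads until the stream is exhausted.
    while True:
        header = f.read(8)
        if not header:
            break
        (length,) = struct.unpack("q", header)
        yield f.read(length)

# Round-trip two dummy payloads. In the real files each payload
# is a serialized tf.train.Example holding the article and abstract.
buf = io.BytesIO()
for p in [b"first example", b"second example"]:
    write_record(buf, p)
buf.seek(0)
records = list(read_records(buf))
print(records)  # [b'first example', b'second example']
```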