This repository contains the scripts, data, and results for our paper Research Artifacts in Software Engineering Publications: Status and Trends.
Remarks:
- If you are only interested in the data we provide, "archive/processed_papers.json" is all you need. Our analysis scripts are based on this data.
- See our website, CS-Artifacts, which provides extended information based on this artifact.
Repository structure:
- archive: our data, including the paper lists ("archive/papers.json") and paper metadata ("archive/processed_papers.json")
- collection: scripts for data collection
- pipeline: scripts for data processing
- analysis: scripts for data analysis
- data: data files
- images: images used in our paper
- utils and configs: utility scripts
- config.py: configuration file for the data processing scripts
- Our environment: Python 3.10+, Ubuntu 22.04
- Clone this repository with git
- Create your Python virtual environment and install the required packages
cd SE-artifact
conda create -n se-artifact python=3.10
conda activate se-artifact
pip install -r requirements.txt
We conduct our study in three stages: data collection, data processing, and data analysis.
python3 -m collection.dblp_crawler
This script retrieves the paper lists of ICSE, ASE, ISSTA, and FSE from 2017 to 2022, excluding entries with fewer than 6 pages. The results are saved in data/dblp.xlsx, data/dblp.json, and data/dblp.csv.
While running the script, you may see errors like the following:
Invalid page range: xxix-xxx
Error: ICSE-2022-0
Invalid page range: xxxi-xxxii
Error: ICSE-2022-1
After manual verification, the errors are associated with two entries: "Message from the ICSE 2022 General Chair" and "Message from the ICSE 2022 Program Chairs." Since these entries are not submitted papers for ICSE 2022, they can be disregarded.
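For reference, here is a minimal sketch of the kind of page-range check that produces these messages; the helper below is hypothetical and is not the crawler's actual code.

```python
# Hypothetical illustration of the page-count filter; not the actual crawler code.
def page_count(page_range: str) -> int | None:
    """Return the number of pages for a DBLP range like '1-12', or None if unparsable."""
    try:
        start, end = page_range.split("-")
        return int(end) - int(start) + 1
    except ValueError:
        # Front-matter entries use roman numerals (e.g. 'xxix-xxx'),
        # which cannot be parsed as integers and are reported as invalid.
        print(f"Invalid page range: {page_range}")
        return None

assert page_count("1-12") == 12        # kept (>= 6 pages)
assert page_count("1-4") == 4          # excluded (< 6 pages)
assert page_count("xxix-xxx") is None  # triggers the warning shown above
```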
We manually downloaded the papers according to the paper lists and annotated each of them with the following fields:
- paper_id
- title
- artifact_url
- artifact_url_valid
- URL_location
- URL_format
URL_location codes: a (abstract), i (introduction), c (conclusion), o (others), t (title).
URL_format codes: f (footnote), r (reference), t (in-text), h (hyperlink), s (section).
We annotated the papers in paper.xlsx. For ease of parsing, we convert it into "data/papers.json" and "data/papers.csv" by running
python3 -m collection.transfer_format
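A minimal sketch of what this conversion amounts to, assuming pandas (the actual logic lives in collection/transfer_format.py):

```python
# Illustrative only; the actual conversion is implemented in collection/transfer_format.py.
import pandas as pd

df = pd.read_excel("paper.xlsx")  # the manually annotated sheet
df.to_json("data/papers.json", orient="records", indent=2)
df.to_csv("data/papers.csv", index=False)
```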
Based on "data/papers.json", we add the github_info(repo_name, repo_url, stars, forks, watches, issue_count, open_issue_count, update_date, programming_language), storage_website_type(Github, Artifact service, Personal homepage, Temporary drive and others), and code smells (Python, Java) of artifacts, and save to "data/processed_papers.json" .
In "pipeline/pipeline.py", there are three processing stages:
add_github_info(input_file, output_file)
add_storage_website_type(input_file, output_file)
add_code_smell(input_file, output_file)
Remarks:
- The "add_github_info" script obtain github_info by github APIs. Before running the script, you need to add your github token in "config.py".
- The "add_code_smell" script obtain code smells by Pylint(Python code smell detector) and PMD(Java code smell detector). Before running the script, you need to make sure you can run "pylint" in your command line, and provide the path of PMD in "config.py".
- The "add_code_smell" script must be run after the "add_github_info" script.
For example, you may complete "config.py" as follows:
PMD_PATH = "/the/path/to/your/bin/pmd"
GITHUB_TOKEN = "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # your GitHub token
Then, you can add these fields by running the following command:
python3 -m pipeline.pipeline
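For reference, here is a minimal sketch of the kind of GitHub REST API call the "add_github_info" stage relies on. It uses the standard /repos endpoint with the requests library; the import path and field mapping are assumptions, not the script's actual code.

```python
# Illustrative only; the actual logic lives in pipeline/pipeline.py.
import requests

from config import GITHUB_TOKEN  # assumed import of the token set in config.py

def fetch_repo_info(owner: str, repo: str) -> dict:
    """Fetch basic repository metadata from the GitHub REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "repo_name": data["full_name"],
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issue_count": data["open_issues_count"],
        "update_date": data["pushed_at"],
        "programming_language": data["language"],
    }
```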
If you want to know more about the messages provided by Pylint and PMD, please refer to the following links:
- https://pylint.pycqa.org/en/latest/user_guide/messages/messages_overview.html
- https://pmd.github.io/pmd/pmd_rules_java.html
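As a rough illustration of how Pylint messages can be collected programmatically (the actual detection logic is part of the "add_code_smell" stage; the snippet below is an assumption, not that code):

```python
# Illustrative only; not the actual add_code_smell implementation.
import json
import subprocess

def pylint_messages(path: str) -> list[dict]:
    """Run Pylint on a file or package and return its messages as a list of dicts."""
    result = subprocess.run(
        ["pylint", "--output-format=json", path],
        capture_output=True,
        text=True,
    )
    # Pylint exits with a non-zero status whenever it reports messages,
    # so we parse stdout regardless of the return code.
    return json.loads(result.stdout) if result.stdout else []

for msg in pylint_messages("some_artifact_repo/"):
    print(msg["message-id"], msg["symbol"], msg["path"], msg["line"])
```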
We first run the following command to generate the documentary template ("data/documentary.xlsx").
python3 -m analysis.documentary
Then, we manually check and fill in the documentation status of the GitHub artifacts from ICSE, and save the results in "data/documentary_fills.xlsx".
We then convert "data/documentary_fills.xlsx" into "data/documentary_fills.json" for ease of parsing by running
python3 -m analysis.documentary_convert
We use the following command to generate the images and tables used in our paper.
python3 -m analysis.draw
This command will save the figures and tables in "images/*".
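As a rough illustration of the kind of figure this step produces (this is not the code in "analysis/draw.py"), here is a minimal sketch that assumes matplotlib is installed, that "archive/processed_papers.json" is a list of paper records, and that each record carries the storage_website_type field described above; the output filename is hypothetical.

```python
# Illustrative only; the real figures and tables are generated by analysis/draw.py.
import json
from collections import Counter

import matplotlib.pyplot as plt

with open("archive/processed_papers.json") as f:
    papers = json.load(f)

# Count artifacts per storage website type (field described in the processing stage above).
counts = Counter(p.get("storage_website_type") for p in papers if p.get("storage_website_type"))

plt.bar(list(counts.keys()), list(counts.values()))
plt.ylabel("Number of artifacts")
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.savefig("images/storage_website_type.png")  # hypothetical output path
```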
We use the following command to generate the star ranking list ("data/github_star_ranking.csv").
python3 -m analysis.star
This script will generate "data/top_starred.xlsx" as a template. We manually fill in the characteristic information of top-starred artifacts (those with more than 100 stars), and save the results in "data/top_starred_fills.xlsx".
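For illustration, a minimal sketch of the kind of selection involved, assuming the github_info structure shown earlier (this is not the code in "analysis/star.py"):

```python
# Illustrative only; the actual ranking is produced by analysis/star.py.
import json

with open("archive/processed_papers.json") as f:
    papers = json.load(f)

# Keep GitHub-hosted artifacts with more than 100 stars, sorted by star count.
top_starred = sorted(
    (p for p in papers if (p.get("github_info") or {}).get("stars", 0) > 100),
    key=lambda p: p["github_info"]["stars"],
    reverse=True,
)
for p in top_starred[:10]:
    print(p["github_info"]["stars"], p["title"])
```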
We convert the "data/top_starred_fills.xlsx" into "data/top_starred_fills.csv" by running the following command:
python3 -m analysis.star_convert
You can find other analyses in "analysis/statistics.py", which can be run with the following command:
python3 -m analysis.statistics
If you have any questions, please submit an issue or email us at [email protected].