artifact-evaluation's Issues
"Artifact Available" badge guidelines
Currently, the publisher applies the ACM "Artifact Available" badge to the camera-ready version of a paper if the authors submit a DOI for their artifact. This is in contrast to the other badges, which are awarded only after the artifact evaluation committee has evaluated the artifact. The publisher might check that the DOI is valid, but certainly will not check the contents of the artifact (e.g., it might be empty). Should there be a review process at all for "Artifact Available"? ICSE 2020 has one; ISSTA 2019 did not.
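For reference, below is a minimal sketch of the kind of automated check a publisher or chair could run, assuming the doi.org handle lookup endpoint; it only confirms that the DOI is registered, and says nothing about whether the artifact behind it has any useful content.

```python
# Minimal sketch: check that a submitted DOI is registered, via the doi.org
# handle lookup endpoint.  This does not inspect the artifact itself, which
# could still be empty.
import urllib.request
import urllib.error

def doi_is_registered(doi: str) -> bool:
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False   # unregistered DOIs come back as HTTP 404
    except urllib.error.URLError:
        return False   # network failure, not a verdict on the DOI

# "10.9999/example.artifact" is a placeholder, not a real artifact DOI.
print(doi_is_registered("10.9999/example.artifact"))
```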
Data analysis
For empirical artifacts: must an artifact include both the raw, unprocessed data and the post-processed (as-analyzed) data?
Artifact Extendability
To what extent should reviewers try to extend an artifact (build on a tool, analyze a dataset in a new way, or provide new inputs/configurations to that tool or analysis), rather than simply running it?
Anonymization of artifacts
How can we support researchers struggling to publish (often qualitative) data due to potential privacy violations?
- What degree of anonymity is required, especially for demographic data (k-anonymity, differential privacy, etc.)? See the sketch after this list.
- How should interview transcripts or responses to open-ended survey questions be handled?
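To make the first question concrete, below is a minimal sketch of a k-anonymity check over demographic quasi-identifiers; the column names (age_band, gender, role) and the threshold are hypothetical, and real policies would need to pick their own quasi-identifiers and minimum k.

```python
# Minimal sketch: compute the k-anonymity of a demographic table, i.e. the
# size of the smallest group of participants sharing the same combination
# of quasi-identifier values.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values()) if groups else 0

# Hypothetical survey demographics.
survey = [
    {"age_band": "25-34", "gender": "F", "role": "developer"},
    {"age_band": "25-34", "gender": "F", "role": "developer"},
    {"age_band": "35-44", "gender": "M", "role": "tester"},
]

# k = 1 here: the single tester is uniquely identifiable, so a committee
# requiring, say, k >= 5 would ask for coarser categories or suppression.
print(k_anonymity(survey, ["age_band", "gender", "role"]))
```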
Replicated, Reproduced, and Functional?
Currently, there is some confusion about the "Results Replicated" and "Results Reproduced" badges. The process for reviewing, reporting, and retroactively awarding badges needs to be clarified. FSE 2018 and 2019 and ICSE 2019 and 2020 do not award the "Functional" badge. Is this good practice? Should we generally move away from this badge?
Artifact Evaluation Process: Communication
How much communication should be allowed between evaluators and authors during the review period?
Examples range from:
- No communication: the authors submit an artifact and eventually receive reviews.
- A "kick-the-tires" phase, in which evaluators can do some sanity checking, ask questions, and receive a one-time response.
- Open discussion, in which evaluators and authors can communicate freely (anonymously or not).
Implications of artifact evaluation for paper acceptance
What happens when reviewers find major flaws during artifact evaluation?
Can artifact evaluation impact paper acceptance?
Artifact completeness and size
Must the artifact provide the data and tools needed to replicate ALL experiments in a paper, or is it allowable to scope an artifact to only part of the claims?
Who decides (authors, reviewers, or chairs) which claims in a given paper should be supported by the artifact?
What counts as "too much data" or "too long an experiment" to submit in full for artifact evaluation? For instance, one researcher might consider a 2 GB dataset too large to submit in full, while another might submit a 2 TB dataset.
Whatever the criterion for "too big" is, what process should authors follow to submit a subset of their artifact for evaluation when the full artifact is too big?
Artifact Automation
Should an artifact automatically generate the tables and graphs that appear in a paper, or is it OK (or perhaps even better, given the potential for mistakes in such scripts) for this to be a somewhat manual process?
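For illustration, below is a minimal sketch of what "one command regenerates a paper asset" could look like; the file name results.csv and its columns (technique, runtime_s) are hypothetical placeholders for an artifact's raw measurements.

```python
# Minimal sketch: regenerate the LaTeX source of a results table from raw
# measurements, so reviewers can diff it against the table in the paper.
import csv
from collections import defaultdict
from statistics import mean

runtimes = defaultdict(list)
with open("results.csv") as f:          # hypothetical raw results file
    for row in csv.DictReader(f):
        runtimes[row["technique"]].append(float(row["runtime_s"]))

print(r"\begin{tabular}{lr}")
print(r"Technique & Mean runtime (s) \\ \hline")
for technique, values in sorted(runtimes.items()):
    print(f"{technique} & {mean(values):.2f} \\\\")
print(r"\end{tabular}")
```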