
fse16's Introduction

FSE'16 Artifacts Track

This site is an experiment in open reviewing.

  • All reviews are public domain.
  • All final decisions are public domain.
  • All software for managing that process (*) is open source and readily usable.

(*) Github issues and Google Sheets.
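Because the review records live in ordinary GitHub issues, they can also be pulled programmatically. As a minimal sketch (not part of the actual FSE'16 tooling), listing a repository's open issues via the public GitHub REST API might look like the following; the owner/repository names passed in would be the caller's choice:

```python
import json
import urllib.request

API = "https://api.github.com"

def issues_url(owner, repo, state="open"):
    # Build the REST endpoint for a repository's issue list.
    return f"{API}/repos/{owner}/{repo}/issues?state={state}"

def list_issues(owner, repo):
    # Fetch and decode the JSON issue list (requires network access).
    req = urllib.request.Request(
        issues_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A caller could then iterate over `list_issues(owner, repo)` and print each issue's `number` and `title` fields to mirror the issue list shown below.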


For information about particular submissions, please visit the master control sheet at http://tiny.cc/fse16art.

fse16's People

Contributors

obaysal, ai4se, timm, andrew659, monikadhok, mikefhsu, hjjandy, mreif, zhaoguixu, danilofes, mdangl, nirhasabnis, qhanam, floatfeather, stahlbauer


fse16's Issues

register users

  • weiwei16
  • danilofes
  • mreif
  • stahlbauer
  • monikadhok
  • sch8906
  • mikefhsu
  • adarshyoga
  • andrew659
  • floatfeather
  • hjjandy
  • mdangl
  • nirhasabnis
  • obaysal
  • rockysui
  • qhanam
  • timm
  • zhaoguixu

References of the artifacts papers

Should references be excluded from the one-page limit for artifact submissions (proceedings page)? Since authors will append their artifact papers (if accepted) to the FSE papers, references are likely to overlap between the two.

need text for letter to reviewers

olga,
what special instructions for reviewers?

  1. Need your GitHub ids.
  2. Note that this is an open reviewing process, so be polite and constructive.
  3. What else?

mail to:

David Lo [email protected], Arie Gurfinkel [email protected], Latifa Guerrouj [email protected], Mark Van Den Brand [email protected], Barbara Russo [email protected], Christoph Treude [email protected], Felienne Hermans [email protected], Collin McMillan [email protected], Hongyu Zhang [email protected], Emily Hill [email protected], Yasutaka Kamei [email protected]

processing dirk's comments on artifact predicates

Conference Publishing - Info
2:01 PM (9 hours ago)

to me, Tom, Thomas, Tim, Olga

do you have any more info on the ACM predicates? i can't grok "Artifacts Evaluated - Functional" and i can't find info on-line

Please see the below descriptions.

I would like to ask you to use this only internally, for FSE purposes, because it is not yet published.

Best,
Dirk

We recommend that three separate brands related to artifact review be associated with research articles in ACM publications: Artifacts Evaluated, Artifacts Available, and Results Validated. These brands are considered independent, and any one, two, or all three can be applied to any given paper, depending on the review procedures developed by the journal or conference.

Artifacts Evaluated

This brand is applied to papers whose associated artifacts have successfully completed an independent audit. Artifacts need not be made publicly available to be considered for this brand. However, they do need to be made available to reviewers. Two levels are distinguished, only one of which should be applied in any instance:

  • Artifacts Evaluated – Functional

    The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.

    Notes

    Documented: At minimum, an inventory of artifacts is included, and sufficient description is provided to enable the artifacts to be exercised.
    Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
    Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
    Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

  • Artifacts Evaluated – Reusable

    The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. They are very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.

Artifacts Available

This brand is applied to papers in which associated artifacts have been made permanently available for retrieval.

  • Artifacts Available

    Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository, along with a unique identifier for the object, is provided.

    Notes

    We do not mandate the use of specific repositories. Publisher repositories, institutional repositories, or open commercial repositories (e.g., figshare or Dryad) are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility. Personal web pages are not acceptable for this purpose.
    Artifacts do not need to have been formally evaluated in order for an article to receive this brand. In addition, they need not be complete in the sense described above. They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.

Results Validated

This brand is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the authors. Two levels are distinguished:

  • Results Replicated

    The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the authors.

  • Results Reproduced

    The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

In each case, exact replication or reproduction of results is not required, or even expected. Instead, the results must be in agreement to within a tolerance deemed acceptable for experiments of the given type. In particular, differences in the results should not change the main claims made in the paper.

process dirk's comments on artifact badges

Conference Publishing - Info
2:21 PM (9 hours ago)

to me, Tom, Thomas, Tim, Olga

also, regarding artifact badges, i can't find info on this (the PDF link is dead, the ACM portal reports only 2 papers for "faddegon", and faddegon's web site does not contain these papers)

Sorry for writing every piece separately, but each required some different look-up.

For the artifact badges, there is no standard process established yet.
Thus, you can for now use whatever stamps/badges you like.

For example, the programming-languages community uses a badge that looks like the one on the PLDI paper.

As you can see on this record:
http://dx.doi.org/10.1145/2858788.2688501
(click on the "Source Materials" tab)
we can provide some text with those papers.

For example, you can provide us with the badge and we would
(a) put the badge on every selected paper and
(b) give the picture to ACM for inclusion in the DL and hopefully they display it.

Furthermore, you can provide some wording like for the above paper and a link to a
description of your evaluation process, and we would submit to the DL for display.

Best,
Dirk
(for Conference Publishing Consulting)

ask tom: how to use github for reviewing

tom
please check tiny.cc/fse16art

this is our proposed master sheet for reviewing fse artifacts

any suggestions? (i ask due to your experience with the superdog book)

t

for olga: next steps

  • olga: please check http://tiny.cc/fse16art {master}. note that i did not have FSE paper ids, so i added our own
  • olga confirm: re http://tiny.cc/fse16art {registrations}. my next step is to email reviewers and say: (1) please register yourself at {registrations}; (2) mark your COIs on {conflicts, column E}; and (3) optionally, if you want to, volunteer to review a paper by adding your name to columns F, G, H
