Comments (38)
/ cc @openjournals/joss-reviewers - would anyone be willing to review this submission?
If you would like to review this submission then please comment on this thread so that others know you're doing a review (so as not to duplicate effort). Something as simple as ":hand: I am reviewing this" will suffice.
Reviewer instructions
- Please work through the checklist at the start of this issue.
- If you need any further guidance/clarification take a look at the reviewer guidelines here http://joss.theoj.org/about#reviewer_guidelines
- Please make a publication recommendation at the end of your review
Any questions, please ask for help by commenting on this issue!
from joss-reviews.
I am reviewing this, surge ahead in case of latency
from joss-reviews.
So far:
@meyera
Repository is up and running, resolving to the mentioned URL.
Authors Affiliation and Credit: Checks out
Software license: MIT
References are clear enough.
Minor amendments:
You only misspelt "therefore". Based on the text, I take it your audience is in medical or clinical coding; this needs an explicit statement to that effect in the abstract.
So there's a need for an easily accessible risk calculation structure, engine or kernel to verify the math with appropriate methods; you probably need to have formal verification and peer review embedded in the process?
The second paragraph of the abstract is enough to work with for those familiar with clinical or mission-critical environments, but you need to offer more detail on the context of extensibility if you want contributors, without scaring anyone off yet. What features do you want to add beyond score methods when citing "extensibility"?
By "simple programming interface", do you mean a command line interface, or a graphical user interface with everyday WIMP or touch-based icons? Feel free to elaborate on whether this will operate exclusively as a headless service, or in any other incarnation you deem appropriate for your evaluation and operating environments.
Fleshing out the structure before under- or over-committing to needs will do enough.
Test data parses fine on a dry-run reading of https://github.com/meyera/riskscorer/blob/master/README.md
Setting up my cloud environment for R is taking a while.
from joss-reviews.
Authors Affiliation and Credit: Checks out
✅
@Spencerx - feel free to check off items at the top of the issue as you go...
from joss-reviews.
@Spencerx @arfon
Thank you for reviewing my submission. I am not totally sure how the process works, so I would just like to ask: should I start revising the points mentioned by @Spencerx right now and commit, or wait until the review is finished and I get the official decision to revise?
Thank you very much for your effort.
Alex
from joss-reviews.
should I start revising the points mentioned by @Spencerx right now and commit, or wait until the review is finished and I get the official decision to revise?
@meyera - feel free to make some changes as you go. Just let @Spencerx know that you've made them.
from joss-reviews.
@meyera feel free to take your time and get it right the way you need it
from joss-reviews.
@Spencerx
Please excuse my late reply.
Here are my comments and my revision based on your remarks:
You only misspelt "therefore". Based on the text, I take it your audience is in medical or clinical coding; this needs an explicit statement to that effect in the abstract.
The spelling error is corrected.
So there's a need for an easily accessible risk calculation structure, engine or kernel to verify the math with appropriate methods; you probably need to have formal verification and peer review embedded in the process?
The audience is scientists and statisticians working in clinical research. The usual risk scores in clinical medicine are simple in structure, but because of the large number of patients, manual calculation is a hassle. Moreover, there are no tools that allow batch processing based on a database. The riskscorer package aims to simplify automatic calculation on demand, for instance as part of a patient's admission process, as well as batch processing of patient records.
Verification for the two implemented scores (EuroScore I and the STS score) is accomplished as described below:
EuroScore I:
The score is a simple logistic regression model. I have written several test cases to compare the calculated score against a known score for the specific case.
STS:
The STS score is more complicated than the EuroScore I. The STS provides a web API for risk score processing, for which, however, no convenient wrapper exists. The risk calculation function is actually a wrapper around the STS risk calculation web service. Test cases were performed similarly to those for EuroScore I.
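For context (my summary, not the package's documentation): a logistic-regression-based score such as EuroScore I maps patient covariates $x_i$ with published coefficients $\beta_i$ to a predicted mortality via the standard logistic form, which is why a test case can simply compare the computed probability against a published reference value for a fixed covariate pattern:

```latex
\text{predicted mortality} = \frac{e^{\beta_0 + \sum_i \beta_i x_i}}{1 + e^{\beta_0 + \sum_i \beta_i x_i}}
```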
The second paragraph of the abstract is enough to work with for those familiar with clinical or mission-critical environments, but you need to offer more detail on the context of extensibility if you want contributors, without scaring anyone off yet. What features do you want to add beyond score methods when citing "extensibility"?
You are right, extensibility is an exaggeration. In the case of the riskscorer package, extensibility would just mean the addition of another calculation function for a dedicated score. I have therefore deleted the extensibility point in the Paper.md file.
When you refer to "the abstract", do you mean the Paper.md file?
By "simple programming interface", do you mean a command line interface, or a graphical user interface with everyday WIMP or touch-based icons? Feel free to elaborate on whether this will operate exclusively as a headless service, or in any other incarnation you deem appropriate for your evaluation and operating environments.
By "simple programming interface" I mean: each score -> one function. Nothing more. The coding of the various score parameters is flexibly recognized, as elaborated in the package's vignette.
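To illustrate the one-score-one-function idea, here is a toy sketch only (riskscorer itself is R; the function name, arguments, and coefficients below are invented for illustration and are not the package's actual API):

```python
import math

def calc_toy_score(age, lvef):
    """Toy 'one score, one function' interface: plain clinical arguments in,
    a single predicted-risk probability out. All coefficients are made up."""
    # Linear predictor with an invented intercept and coefficients
    lp = -5.0 + 0.06 * age - 0.03 * lvef
    # Logistic transform to a probability
    return math.exp(lp) / (1 + math.exp(lp))

# One call per score; batch use is then just applying it over patient records
risk = calc_toy_score(age=70, lvef=45)
print(round(risk, 4))  # prints 0.1043
```

The design point is only the calling convention: one well-named function per score, so that scripted or batch use needs no web form.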
from joss-reviews.
This software consists of 350+ lines of code. I would like to ask the author to confirm here that he thinks this software is a valuable contribution to research. I want to point out that once this work has been published it will be visible for a long time. Adding an example of having used this tool in research would be a good idea.
from joss-reviews.
@pjotrp @Spencerx @arfon
Thank you for your comment. I agree that one should ask whether software submitted to JOSS is a valuable contribution to research.
In regard to the number of lines of code: The riskscorer package consists of several source files.
- sts.R ==> 863 lines of code
- test_sts.R ==> 383 lines of code
- utils.R ==> 581 lines of code
- test_utils.R ==> 221 lines of code
- es_II.R ==> 452 lines of code
- test_esII.R ==> 241 lines of code
It remains unclear to me how you arrived at a count of 350 lines of code. In general, however, I do not think that the number of lines of code should be used as an indicator of anything.
Regarding the added value of riskscorer to science: calculating risk scores is a tedious task that takes up a lot of time which could be spent on the core research instead. Currently, there are no tools available that automate risk calculation for the prominent ES II or STS score. The riskscorer package provides a convenient and time-saving way to batch-calculate these risk scores. Moreover, the web-service-ready design enables integration into clinical IT infrastructure and workflows, and makes it possible to calculate a risk score automatically right at a patient's admission. Because risk calculation is such tedious work in clinical reality, these scores are rarely calculated manually. Automating this process therefore enables the use of powerful tools such as the STS score as a clinical decision support instrument.
We already use the package in our clinical research; papers in which this tool was used will soon be ready for submission to clinical journals. Currently, the EurValve EU research project is looking at the source code to re-implement it in another language for their own project (http://cordis.europa.eu/project/rcn/199897_en.html).
I hope the value to clinical research that riskscorer gives is now a little clearer.
Yours,
Alex
from joss-reviews.
@Spencerx @meyera - how are we doing on this review?
from joss-reviews.
@whedon assign @Spencerx as reviewer
from joss-reviews.
OK, the reviewer is @Spencerx
from joss-reviews.
@whedon assign @arfon as editor
from joss-reviews.
OK, the editor is @arfon
from joss-reviews.
@Spencerx - could you please give us an update on this review?
from joss-reviews.
@whedon list reviewers
from joss-reviews.
Here's the current list of JOSS reviewers: https://github.com/openjournals/joss/blob/master/docs/reviewers.csv
from joss-reviews.
@meyera - it seems like @Spencerx isn't going to be able to complete this review. Could you suggest any alternative reviewers who you think might be qualified to complete this review?
from joss-reviews.
@arfon - thanks for the pointer. I suggest @masalmon
from joss-reviews.
Ok! I have just read the thread. I will try to do it soon.
from joss-reviews.
I see the repository has three issues that look like bug reports here, here and here
What is the status of these issues, @meyera?
from joss-reviews.
Congratulations on creating this package. I can see how it can be useful to have the risk scorer as R functions instead of web interfaces, and your package could have a larger list of such scorers in the future. Here is my review, and I've opened several issues in the software repository itself.
Reviewer questions

General checks

- [x] Repository: Is the source code for this software available at the repository url?
- [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  Yes, but it gets an R CMD check error, because apparently the license file needs to be mentioned in the DESCRIPTION.
- [x] Version: Does the release version given match the GitHub release (v0.2.0)?
- [x] Archive: Does the software archive resolve?

Functionality

- [ ] Installation: Does installation proceed as outlined in the documentation?
  The reason I don't check this box yet is that not all the dependencies used in the vignette are listed in the DESCRIPTION, so if one tried to install the package and build the vignette at the same time, there would be an error.
- [ ] Functionality: Have the functional claims of the software been confirmed?
- [ ] Performance: Have the performance claims of the software been confirmed?
  The reasons I leave this unchecked are 1) the examples in the vignette are not commented at all, so it's hard to see whether this is the expected behaviour of the functions, and 2) as mentioned in the issue about the vignette, it would be nice to mention the existence of tests comparing values obtained with the package against known values.

Documentation

- [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- [ ] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
- [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
- [x] Functionality documentation: Is the functionality of the software documented to a satisfactory level (e.g. API method documentation)?
  There's a special character issue in at least one Rd file, though, so please check the lists of arguments.
- [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
- [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?
  No, and this would be really important, e.g. if someone wishes to add a score calculator to your package.

Software paper

Compiled paper PDF: 10.21105.joss.00019.pdf

- [x] Authors: Does the paper.md file include a list of authors with their affiliations?
- [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- [x] References: Do all archival references that should have a DOI list one (e.g. papers, datasets, software)?
from joss-reviews.
Also @arfon I'm sorry for doing the review before being officially assigned as a reviewer but I figured out I'd better do it when I had time + I guess the review can still be useful.
from joss-reviews.
@whedon assign @masalmon as reviewer
from joss-reviews.
OK, the reviewer is @masalmon
from joss-reviews.
@meyera - please reply here when you've had a chance to address/respond to @masalmon's review.
from joss-reviews.
@masalmon thank you very much for the fast review. I will start working on it and inform you once everything's done.
from joss-reviews.
Cool, feel free to tag me here or in the repo if I can help!
from joss-reviews.
@meyera cc @arfon as mentioned I can clarify my feedback and help if needed
from joss-reviews.
I just emailed @meyera directly. If there's no response from him in two weeks I will reject this submission.
from joss-reviews.
Makes sense
from joss-reviews.
@maelle thanks for all of your help here. This submission has been rejected due to the author going AWOL.
from joss-reviews.
No problem!
from joss-reviews.