
Comments (22)

MGousseff commented on June 25, 2024

The DOIs have been fixed.

MGousseff commented on June 25, 2024

Thank you @matthiasdemuzere for your review.

I will answer the points you have raised and propose some modifications.

About the Need:
This is my main concern about your review. The fact that defining a "ground truth" in LCZ studies is difficult is precisely why one needs a tool to compare LCZ classifications. As you noticed, the tool was developed for the analysis in the Bernard et al. paper, to compare vector datasets, but it was designed from the start as a generic method to compare any LCZ classification maps (raster included). Maybe this was not clear in the manuscript; this part will be detailed in the next version of the paper. For example, a function is available to load and process the WUDAPT Europe tiff map.

Concerning the need for the tool, we are a bit surprised by the comments "Yet I am not convinced the community is really interested in a tool that can compare two LCZ maps" and "I am not convinced by the statement of need." Indeed, even if WUDAPT is the main standard method and endorsed by the community, there are other LCZ sources (ground truth, GIS approaches, machine learning...). So there is a crucial need to objectively quantify the differences between two LCZ maps. In our opinion, LczExplore finds its place there.
For instance, in the Bernard et al. (2023) manuscript, LczExplore was useful to compare maps built from OSM and BDTopo input data. We saw that the main discrepancies reflect the way building heights are taken into account, but also the differences in input vegetation data between OSM and BDTopo. This is relevant information that was highlighted using the LczExplore package, and it can be used the same way to compare any LCZ maps (for example GeoClimate to WUDAPT) for a given territory.

Another useful feature of the package is how easy it makes grouping LCZ types into broader categories: if one wants to compare the urban envelopes of an area, it is very straightforward to group, for instance, all the urban LCZ types, all the vegetation LCZ types and so on, and to compare the resulting maps, all in one go.

About the State of the field:
In our understanding, the JOSS paper had to focus on similar tools, i.e. tools that compare LCZ classifications and not tools that produce LCZ classifications. This is why we did not describe any LCZ classification tool in detail. We do not know of any free and open-source tool for comparing two LCZ maps, while it seems interesting to offer such a service. If you know of such a tool, please let us know.
The package is specifically designed for LCZ in the sense that when you specify repr='standard', the colors on the maps are the standard LCZ map-style colors. However, with repr='grouped', maps of any pair of qualitative variables can be compared, as you can specify the names of the levels and the colors you want them to be associated with. A particular effort has been made to make this choice of grouping and colors robust for the user, with quite explicit error messages, so it may help geomaticians compare several outputs of their processing chains without tedious repetitive work.
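To make this concrete, here is a minimal sketch of what such a grouped comparison could look like. The function name compareLCZ and the repr argument come from the package description above, but the grouping/color argument names (urban, vegetation, water, colors) and the input objects are illustrative assumptions, not the documented lczexplore interface:

```r
# Illustrative sketch only: the grouping/color argument names below are hypothetical;
# check the lczexplore help pages for the exact signature of compareLCZ().
library(lczexplore)

comparison <- compareLCZ(
  sf1 = lczOsm, sf2 = lczBdTopo,            # two sf layers holding the LCZ classifications (assumed inputs)
  column1 = "LCZ_PRIMARY", column2 = "LCZ_PRIMARY",
  repr = "grouped",                         # compare broad groups instead of the 17 standard types
  urban = c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10"),
  vegetation = c("101", "102", "103", "104"),
  water = c("107"),
  colors = c("red", "darkgreen", "blue")    # one color per declared group
)
```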

Representation:
In our opinion, the neighborhood scale averages information that might be quite heterogeneous at the block scale. The way to compare WUDAPT to GeoClimate is not settled yet, since GeoClimate also allows aggregating the LCZ available at block scale to the WUDAPT grid cell, keeping at that scale an indication of the degree of heterogeneity found at block scale. In any case (whether the block information is aggregated at cell scale or not), LczExplore can still be used in the near future to investigate the differences between maps resulting from the GeoClimate and WUDAPT approaches.

About coding standards:
You are right about the age of the tool: it was developed for the needs of the Bernard et al. (2023) manuscript and has not been tested extensively since then. But it has been tested and shows interesting results when comparing WUDAPT to GeoClimate maps. The package produces no warning and no error when one runs R CMD check (only notes, which is allowed by CRAN standards). Each function has unit tests (one can explore the /inst/tinytest folder of the package), all functions are documented, and a vignette shows the most probable use case.
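For reviewers who want to reproduce these checks, a short sketch (assuming the repository has been cloned and the rcmdcheck and tinytest packages are installed; neither call is specific to lczexplore):

```r
# Run the CRAN-style check on the package sources (expect only notes, no warnings or errors)
rcmdcheck::rcmdcheck("lczexplore")

# Run the unit tests shipped in inst/tinytest of the installed package
tinytest::test_package("lczexplore")
```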

Concerning the documentation and the README information:
We agree that some concrete examples must be added to facilitate the use of LczExplore. We will extend and describe it a bit more and specify how to sequence the main functions to produce the comparison of two maps. Concretely, we propose to describe how to obtain a comparison of two vector layer maps (such as in the Bernard et al. (2023) manuscript), plus a new example describing how to compare the European WUDAPT tiff LCZ map to a GeoClimate OSM output.
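As a preview of such an example, a sketch of how the main functions might be sequenced; the function names are those listed in the package README, but the argument names and file paths are illustrative assumptions rather than the exact documented interface:

```r
# Hypothetical workflow sketch: argument names and file paths are placeholders.
library(lczexplore)

# Import two vector LCZ layers (e.g. GeoClimate outputs built from OSM and BDTopo data)
lczOsm    <- importLCZgen(dirPath = "geoclimate_osm/",    file = "rsu_lcz.geojson", column = "LCZ_PRIMARY")
lczBdTopo <- importLCZgen(dirPath = "geoclimate_bdtopo/", file = "rsu_lcz.geojson", column = "LCZ_PRIMARY")

# Visual check of each classification with the standard LCZ colors
showLCZ(lczOsm)
showLCZ(lczBdTopo)

# Compare the two classifications and produce the agreement plots and data
compareLCZ(lczOsm, lczBdTopo, repr = "standard")

# For the raster case, the WUDAPT Europe tiff would be loaded with importLCZwudapt,
# using a bounding box derived from a previously imported vector layer (see the README).
```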

Whatever the choice of JOSS regarding this paper, we are looking forward to deepening the comparison of several approaches to LCZ generation, including, of course, the WUDAPT and GeoClimate approaches.


editorialbot commented on June 25, 2024

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf


editorialbot commented on June 25, 2024
Software report:

github.com/AlDanial/cloc v 1.88  T=0.35 s (107.8 files/s, 12201.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               29            538            794           1804
Markdown                         3            164              0            413
Rmd                              2            160            197            104
TeX                              1              9              0             91
YAML                             1              2              4             19
JSON                             2              0              0              2
-------------------------------------------------------------------------------
SUM:                            38            873            995           2433
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository


editorialbot commented on June 25, 2024

Wordcount for paper.md is 2359


editorialbot commented on June 25, 2024
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.eiar.2015.10.004 is OK
- 10.1080/17512549.2015.1043643 is OK
- 10.1175/bams-d-11-00019.1 is OK
- 10.1016/j.buildenv.2021.107791 is OK
- 10.3390/land11050747 is OK
- 10.1111/j.1467-8306.1965.tb00529.x is OK

MISSING DOIs

- 10.1016/0013-9351(72)90023-0 may be a valid DOI for title: Some effects of the urban structure on heat mortality
- 10.21105/joss.03541 may be a valid DOI for title: GeoClimate: a Geospatial processing toolbox for environmental and climate studies
- 10.1016/j.landurbplan.2017.08.009 may be a valid DOI for title: Evaluating urban heat island in the critical local climate zones of an Indian city

INVALID DOIs

- None


editorialbot commented on June 25, 2024

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


martinfleis commented on June 25, 2024

πŸ‘‹πŸΌ @MGousseff, @matthiasdemuzere, @wcjochem this is the review thread for the paper. All of our communications will happen here from now on.

All reviewers should create checklists with the JOSS requirements using the command @editorialbot generate my checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues (and small pull requests if needed) on the software repository. When doing so, please mention https://github.com/openjournals/joss-reviews/issues/5445 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks, feel free to start whenever it works for you. Please let me know if any of you require significantly more time. We can also use editorialbot to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@martinfleis) if you have any questions/concerns.

Thanks!


martinfleis commented on June 25, 2024

@MGousseff please check the DOI suggestions above. If they are correct, include the DOIs in the paper. Thanks!


MGousseff commented on June 25, 2024

I'm sorry, I think the DOIs were in the BibTeX file but flagged with url = instead of doi =. I'll fix it right away and send the pull request ASAP.

martinfleis commented on June 25, 2024

@editorialbot check references


editorialbot commented on June 25, 2024
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.eiar.2015.10.004 is OK
- 10.1016/0013-9351(72)90023-0 is OK
- 10.1080/17512549.2015.1043643 is OK
- 10.1175/bams-d-11-00019.1 is OK
- 10.1016/j.buildenv.2021.107791 is OK
- 10.21105/joss.03541 is OK
- 10.3390/land11050747 is OK
- 10.1016/j.landurbplan.2017.08.009 is OK
- 10.1111/j.1467-8306.1965.tb00529.x is OK

MISSING DOIs

- None

INVALID DOIs

- None


martinfleis commented on June 25, 2024

@editorialbot generate pdf


editorialbot commented on June 25, 2024

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


wcjochem commented on June 25, 2024

Review checklist for @wcjochem

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/orbisgis/lczexplore?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MGousseff) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research data on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


matthiasdemuzere commented on June 25, 2024

Review checklist for @matthiasdemuzere

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/orbisgis/lczexplore?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MGousseff) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research data on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


matthiasdemuzere commented on June 25, 2024
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?

@MGousseff: the repo contains a license file in markdown, whilst this checklist asks for a plain-text file? I am new to these JOSS requirements, so I am not sure if this is fine @martinfleis?

MGousseff commented on June 25, 2024

Hello @matthiasdemuzere, thank you for getting involved. I followed the recommendations of Hadley Wickham in https://r-pkgs.org/, but for common licenses the LICENSE file is ignored (it is listed in .Rbuildignore), since CRAN considers it redundant with the license specified in the DESCRIPTION file. So if a copy in plain-text format is needed, I think I can add it and add its path to the .Rbuildignore; just let me know if it is needed.
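If a plain-text copy is wanted, one possible way to add it without upsetting the CRAN checks is sketched below, using the usethis helper; the file names follow common conventions and are not prescribed by JOSS:

```r
# Sketch: keep a plain-text LICENSE copy in the repository for JOSS/GitHub,
# but exclude it from the built package so CRAN does not flag it as redundant
# with the license declared in DESCRIPTION.
file.copy("LICENSE.md", "LICENSE")        # assumes the markdown license file already exists
usethis::use_build_ignore("LICENSE")      # adds the file to .Rbuildignore
```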


matthiasdemuzere commented on June 25, 2024
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines

The lczexplore R package presents a tool "to compare different LCZ classifications", or, more generally, "any type of classifications on geographical units".

Personally, I am not convinced that the current state of the package represents a substantial scholarly effort. I assume the code has been developed in the context of Bernard et al. (2023), a paper that is currently under review? Serving a specific purpose within this paper? Data from this paper is therefore also used as a sample dataset within this package.

But, I am not convinced this package will be useful for a broader audience, who might have very different use cases or types of spatial classifications? Other reasons for this concern:

Need: I've been working with LCZ maps for many years now. Yet I am not convinced the community is really interested in a tool that can compare two LCZ maps? Once you know about the differences / agreements between two maps, what then? How will you use this information? Without a proper reference (ground truth), I don't really see what next steps one can take to put this to use? Bottom line: I am not convinced by the statement of need.

State of the field: I am a bit surprised by the shallow description here? The interest in LCZs is immense, yet this is not reflected here? E.g. the LCZ Generator (Demuzere et al., 2021) that has received 5000+ LCZ submissions in the past two years, the global LCZ map (Demuzere et al., 2022), or the many LCZ review papers, with Huang et al. (2023) likely being the most recent and comprehensive one?

Representation: From a LCZ content point of view, I wonder whether it really makes sense to compare maps that are developed from earth observation information and VGI/GIS layers? Typically their spatial units are very different, one being more representative for the coarser neighborhood-scale (as intended by Stewart and Oke (2012)), and one more on the block level? Please note that this comment might be more about semantics, and not a key reason for me to doubt the applicability of the package as such.

Coding standards / tests: the code seems relatively young, with most commits in the past 2 months only. As far as I can see, it has also not been tested outside the scope of the Bernard et al. (2023) paper, using e.g. different types of data? Even testing with raster LCZ maps from the most widely used sources such as the LCZ Generator or other continental or global maps seems limited?
In any case, it does not seem very straightforward to me to do so, as exemplified by the Main Functions description in the README.md: "The following functions are the core of this package:

  • importLCZgen: imports the LCZ layer from a GIS (tested with geojson and shapefile files)
  • importLCZwudapt: imports LCZ from the WUDAPT Europe tiff. You'll have to use importLCZgen first to create the bounding box of your zone of interest
  • showLCZ: plots the map of your LCZ
  • compareLCZ: compares two LCZ classifications of the same areas, outputs plots and data of this comparison
  • confidSensib: explores how the agreement between two LCZ classifications varies according to a confidence indicator associated with the LCZ value of each geom (sensitivity analysis)"

Somehow the functionality seems to be there, but rather inaccessible, and definitely not in a shape I would expect from a package? To add, I e.g. also do not see any tests to verify the functionality of this software?

In summary, I support the fact that the authors want to make this code publicly available. Yet in its current state, it feels more like a personal R library serving a specific goal than a tool that will be sufficiently useful for a broader community and more general applications? As such, I don't think it is ready to be published in JOSS @martinfleis


matthiasdemuzere commented on June 25, 2024

@MGousseff Thanks for providing additional information regarding my concerns.

Unfortunately I am still not convinced by the statement of need.

There are nowadays hundreds of papers using LCZs for a certain purpose. Yet, typically, people are not interested in comparing one LCZ map to another (often not even in a proper accuracy assessment of a single map). They just use the LCZ map they (or someone else) created (using whatever algorithm / input data) in any application of interest.

There are some that do an intercomparison, like the Muhammad et al. (2022) paper you mention in your manuscript, in which they conclude that the GIS-based LCZ method shows a strong improvement over the previous WUDAPT L0 result, based on accuracy metrics (confusion matrix). We've checked this map in detail with local experts, and believe it is missing important features, e.g. large LCZ 8 zones. So even though accuracy metrics can be higher, that does not necessarily mean that map is better.

Continuing with this example, I could use your lczexplore tool to visualize differences between the Muhammad et al. (2022) LCZ map and e.g. the LCZ map from Fenner et al. (2017). But then what? You mention the identification of how different input datasets treat vegetation, building heights, ... differently. Ok. Good. But again, what then?

So basically I am still looking for a clearer statement of need that outlines the potential of this tool to a broad community. How can it contribute to an improved understanding of the uncertainties, strengths and weaknesses of the LCZ framework / method(s)? Identifying differences between maps is one thing, but the crucial step is what comes after that: what to do with this information?

I hope I am not sounding too cruel here. I just genuinely would like to understand how this will benefit the community, in a broader sense and in the context of continuously advancing the field of LCZ-related work.


matthiasdemuzere commented on June 25, 2024

plus a new example describing how to compare the European WUDAPT tiff LCZ map to a GeoClimate OSM output.

This would be much appreciated. I started using the tool, but I stopped as it was not 100% clear to me how to do so with a raster file. There was some very high-level info there, but in my opinion it required too much time investment. Given that it is positioned as a tool, it should in my opinion be much more automated and self-explanatory?

On the side: how does the tool deal with different labels? LCZ labels can vary widely, for example for the natural classes: 11-17, A-G, or 101-107, ...
More generally: a large proportion of the LCZ Generator and W2W code is dedicated to checking inputs, to make sure they are in line with expected formats. Does lczexplore have something similar in place?
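For what it's worth, a small illustration of the kind of label harmonisation this question refers to; this helper is not part of lczexplore and is purely illustrative of recoding the letter convention for the natural classes to the 101-107 convention before comparing maps:

```r
# Purely illustrative, not an lczexplore function: map natural-class letters (A-G)
# to the 101-107 numeric convention so two maps use the same labels.
recodeNaturalLcz <- function(x) {
  lookup <- c(A = "101", B = "102", C = "103", D = "104",
              E = "105", F = "106", G = "107")
  recoded <- unname(lookup[x])         # NA where x is not a letter class
  ifelse(is.na(recoded), x, recoded)   # keep the original label for the built classes
}

recodeNaturalLcz(c("2", "6", "A", "D", "G"))   # "2" "6" "101" "104" "107"
```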


wcjochem commented on June 25, 2024

@MGousseff and @martinfleis, I've raised several issues in the repo (and linked in this thread) for your consideration and have completed my review at this stage. I can see that the package does support its primary aim, although it seems to be a very narrow application. I can't verify the claim in the paper that the package's functionality would be useful more generally in other types of spatial comparisons. I'm happy to continue my review if these comments are addressed.

