jgcri / basd
Bias adjustment and statistical downscaling capabilities
Home Page: https://jgcri.github.io/basd/
License: BSD 2-Clause "Simplified" License
Currently, warnings are printed either to the console or to the SLURM files created by sbatch. There should be a better logging solution, ideally one that separates warnings from general info, especially since these messages are often numerous and quickly overwhelm the console/SLURM file.
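A minimal sketch of what such a setup could look like using the stdlib logging module (the `get_logger` helper is hypothetical, not part of basd):

```python
import io
import logging

def get_logger(name="basd", stream=None, level=logging.INFO):
    """Route messages through logging instead of bare print()/warnings,
    so warnings and general info can later be filtered independently or
    sent to separate files."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    logger.handlers = [handler]  # replace, don't stack, on repeated calls
    logger.propagate = False
    return logger

buf = io.StringIO()
log = get_logger(stream=buf)
log.warning("invalid value encountered in time series")
log.info("finished grid cell (12, 34)")
```

Separate handlers with different level filters would then give the warnings/info split described above.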
Trying to index a None object, it seems. Unclear where, as it happens in the parallel process and the traceback only refers to joblib. We have to find why/where we would be returning None.
In Stefan Lange's paper, there is no mention of fixing certain parameters when fitting distributions. However, the code does fix parameters for non-normal distributions. We should understand why this happens, especially if we want to introduce additional distribution options in the future.
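For orientation, this is what fixed-parameter fitting looks like in scipy.stats (an illustrative sketch with synthetic data, not the actual basd fitting code): keyword arguments such as `floc` pin a parameter during MLE instead of estimating it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(2.0, 1.5, size=500)  # synthetic, strictly positive data

# Free fit: shape, loc, and scale are all estimated by MLE.
a_free, loc_free, scale_free = stats.gamma.fit(data)

# Fixed-loc fit: floc=0 pins the location parameter, a common way to
# stabilise shape/scale estimates for strictly positive variables
# such as precipitation.
a_fix, loc_fix, scale_fix = stats.gamma.fit(data, floc=0)
```

Whether basd fixes parameters for this stability reason is exactly the open question above.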
This seems to be acting as two things when it should be one (though maybe not; I'm not sure). This parameter should only be the value used when all points in the time series are invalid. It should not be used to replace invalid values when valid ones are present. Also, we don't need to throw an error when it is missing, as long as it isn't needed.
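A minimal sketch of the intended semantics (the function and parameter names here are hypothetical, not the actual basd API):

```python
import numpy as np

def apply_all_invalid_fallback(series, if_all_invalid_use=None):
    """Use the fallback value only when *every* point is invalid (NaN).
    When any valid values are present, return the series unchanged and
    leave NaNs to the normal invalid-value handling. A missing fallback
    is only an error if it is actually needed."""
    series = np.asarray(series, dtype=float)
    if np.isnan(series).all():
        if if_all_invalid_use is None:
            raise ValueError("all values invalid and no fallback provided")
        return np.full_like(series, if_all_invalid_use)
    return series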
Currently, output datasets are returned using the same calendar as the input simulated future data. However, the bias adjustment process always converts to a proleptic Gregorian calendar internally and converts back if necessary. We should allow the final conversion to target any user-specified calendar available in xarray.
If requested, it would be nice to provide the option of returning the distribution parameters found during the fitting process. These could be an interesting piece for analysis over the grid.
Automated tests on push/pull are failing because the xesmf package cannot be installed via conda during the GitHub workflow. This is not indicative of a real failure in the branch and will cause confusion. We should see how to get all dependencies properly installed during workflow runs.
A key concept in the bias adjustment algorithm is that we force the observational data, which is usually at a finer resolution, to take the same shape as the simulated data. Of course, users can (and perhaps will be recommended to) reshape their data to match their needs before inputting it into the algorithm. However, we should provide a default process that does this in a general, consistent, and accurate way.
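One candidate default, sketched here in plain NumPy under the assumption that the simulated resolution is a clean integer multiple of the observational one (a real default may instead need a proper regridder such as xesmf):

```python
import numpy as np

def block_average(obs, factor_lat, factor_lon):
    """Average the fine observational grid over blocks of size
    factor_lat x factor_lon so it takes the shape of the coarser
    simulated grid."""
    nlat, nlon = obs.shape
    return obs.reshape(nlat // factor_lat, factor_lat,
                       nlon // factor_lon, factor_lon).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)   # 4x4 obs grid
coarse = block_average(fine, 2, 2)     # -> 2x2, matching the sim grid
```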
Some work needs to be done on profiling the code. Though I already suspect there are serious time losses in the get_upper_bound_climatology function, it would be great to know exactly where we are being slowed down. Right now, precipitation runs quite rapidly. Shortwave radiation (rsds), however, is a variable which gets scaled to the interval [0, 1] before running bias adjustment. This scaling uses a running window and is probably a choke point in the code. It would be great to get this to run fast because it is surely not optimized. It currently runs nearly 10x slower (!!) than precipitation, when that should easily be under 2x.
Sometimes writing to NetCDF fails because of calendar conversion issues and incompatible representations of time in Python. Right now, a pair of try/except blocks attempts to write out or fail informatively. We should see how to make the calendar conversion more robust.
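One way to make the write step more systematic than a single try/except pair (a sketch; `write_with_fallback` is a hypothetical helper, and `ds` stands for any object with xarray's `to_netcdf` signature):

```python
def write_with_fallback(ds, path, candidate_units):
    """Try each candidate time encoding in order; return the one that
    worked, or raise with the last error once all have failed."""
    last_err = None
    for units in candidate_units:
        try:
            ds.to_netcdf(path, encoding={"time": {"units": units}})
            return units
        except (ValueError, OverflowError) as err:
            last_err = err
    raise RuntimeError("no time encoding could be written") from last_err
```

This keeps the "fail informatively" behaviour while letting us rank encodings from preferred to safest.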
When observational resolution is not a clean integer factor of the simulated resolution, we need to implement a strategy for reprojection onto the "closest" grid that satisfies this requirement.
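A first-pass sketch of the "closest grid" choice (the helper is hypothetical; actual reprojection of the obs data onto that grid would follow):

```python
def snap_obs_resolution(obs_res, sim_res):
    """Choose the obs resolution nearest to the native one such that
    the simulated resolution is a clean integer multiple of it."""
    factor = max(1, round(sim_res / obs_res))
    return sim_res / factor
```

For example, 0.3-degree obs under a 1.0-degree simulation would snap to 1/3 degree (factor 3).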
Add functionality to utilize cluster computing to enable bias adjustment of finer grids at reasonable speeds.
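Since grid cells are adjusted independently, the core pattern is a map over cells; a minimal sketch with stdlib executors (`adjust_cell` is a placeholder, and threads stand in here for whatever cluster backend is chosen):

```python
from concurrent.futures import ThreadPoolExecutor

def adjust_cell(cell):
    i, j = cell
    return (i, j, i * j)  # placeholder for per-cell bias adjustment

def adjust_grid(cells, max_workers=4):
    """Fan independent grid cells out across workers. The same
    map-over-cells pattern scales out to a cluster via e.g.
    dask.distributed or one joblib worker per SLURM task."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(adjust_cell, cells))
```

For the real numpy-heavy per-cell work, processes (or a distributed scheduler) rather than threads would be needed to get past the GIL.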
Non-parametric quantile mapping for the case where fitting via MLE or MoM fails has yet to be implemented. Need to implement the function map_quantiles_non_parametric_with_constant_extrapolation in utils.py.
Lange briefly mentions the inclusion of the MBCn algorithm for adjusting the inter-variable copula, and it appears to be implemented in his version of the code. This algorithm addresses the fact that we have so far treated model variables as independent, when that is very much not the case. When we have multiple model variables, we may want to use this algorithm.
There appears to be a good paper on the topic by Alex J. Cannon here.
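For orientation, the core of Cannon's MBCn idea is iterating (random rotation, univariate quantile mapping, rotation back). A heavily simplified single-step sketch, not Lange's or Cannon's actual implementation:

```python
import numpy as np

def mbcn_step(sim, obs, rng):
    """One simplified N-dimensional pdf transfer step on
    (n_samples, n_vars) arrays: rotate both datasets by a random
    orthogonal matrix, quantile-map each rotated simulated coordinate
    onto the rotated observed one, then rotate back. MBCn repeats
    this until the multivariate distributions converge."""
    k = sim.shape[1]
    q, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random rotation
    sim_r, obs_r = sim @ q, obs @ q
    mapped = np.empty_like(sim_r)
    for j in range(k):
        mapped[:, j] = np.interp(
            sim_r[:, j], np.sort(sim_r[:, j]), np.sort(obs_r[:, j])
        )
    return mapped @ q.T
```

A sanity property of the sketch: if sim already equals obs, one step returns it unchanged.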