Comments (11)
Improving upon Eve's version, I've implemented a cleaner overview for the readiness test. See an example here
In the table on the right-hand side, numbers in red indicate values that are outside the accepted ranges, and numbers in grey mean the accepted ranges are not specified. The accepted range is shown when the mouse hovers over a number.
The test also detects potentially identical quantities, as you can see in one of the examples (there the duplication is due to the inclusion of both native and derived quantities; it is just used as an example).
Corresponding plots are displayed on the left as usual, but sorted in the same order as in the table.
The accepted ranges are specified in the yaml config file. This is still a manual step that we have to complete.
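For concreteness, the per-quantity ranges in the yaml config could look something like this (a sketch only; the key names are made up and not the actual descqa config schema):

```yaml
# Hypothetical layout for the accepted ranges; key names are
# illustrative, not the real descqa config schema.
quantities_to_check:
  ellipticity_1:
    accepted_range: [-1.0, 1.0]
  size_true:
    accepted_range: [0.0, 100.0]
  Mag_true_r_lsst_z0:
    # no accepted_range given -> shown in grey in the table
```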
Comments are welcome!
from descqa.
I am thinking that we need a generic test that takes a list of input quantities from the yaml config file, plots the histogram for each quantity, and reports the max, min, mean, and std. dev. I think I can code this up pretty quickly. This would be flexible, since quantities can be added to or removed from the list.
Other ideas?
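The statistics part of such a generic test could be as simple as the sketch below (the function name and the data layout, a dict mapping quantity name to an array of values, are assumptions on my part; the real test would pull the arrays from the catalog and also make the histograms):

```python
import numpy as np

def summarize_quantities(data):
    """Report min, max, mean, and std for each quantity.

    `data` is assumed to be a mapping from quantity name to a
    1-d array-like of values (e.g. as returned by a catalog reader).
    """
    stats = {}
    for name, values in data.items():
        v = np.asarray(values, dtype=float)
        stats[name] = dict(min=v.min(), max=v.max(),
                           mean=v.mean(), std=v.std())
    return stats
```

The returned dict can then be printed into the test summary and compared against the accepted ranges from the config.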
That's a good idea. A few other ideas that should be quick:
- Report the median in addition to the mean (the median is more stable to outliers, and a big difference between median and mean can sometimes be concerning, though it's OK for intrinsically skewed distributions).
- Check for NaN/inf values using np.isnan and np.isinf, and report any problems.
- You might want to have some means of outlier rejection, and report the outlier fraction. See for example the options on https://stackoverflow.com/questions/11686720/is-there-a-numpy-builtin-to-reject-outliers-from-a-list . You can then use the non-rejected values for setting the histogram range. (Otherwise a quantity with a tiny outlier fraction might result in a fairly useless histogram. If you use outlier rejection, you learn about the existence of outliers but also still get a useful histogram of everything else.)
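One of the simpler recipes from that Stack Overflow thread is a median/MAD clip; a minimal sketch (the function name and the 3-sigma default are my choices, not descqa's):

```python
import numpy as np

def reject_outliers(values, n_sigma=3.0):
    """Return (kept_values, outlier_fraction) using a median/MAD clip.

    The 1.4826 factor scales the median absolute deviation to an
    equivalent Gaussian sigma; if the MAD is zero we fall back to std.
    """
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    sigma = 1.4826 * mad if mad > 0 else np.std(values)
    mask = np.abs(values - median) <= n_sigma * sigma
    return values[mask], 1.0 - mask.mean()
```

The kept values' min/max can then set the histogram range, while the outlier fraction goes into the report.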
@rmandelb good suggestions, thanks!
@evevkovacs given the time scale, we should collaborate on this one. We can work together on the readiness-test
branch (i.e., https://github.com/LSSTDESC/descqa/tree/readiness-test; you have push permission to it), and we should commit frequently to avoid conflicts.
@rmandelb thanks for the outlier suggestion. I was worried about this issue. @yymao Yes, collaborating will be the fastest. I'll get started tomorrow morning and make a skeleton that we can flesh out. I was planning to put the acceptable ranges/tests for each quantity in a big dict (of dicts) with keys labelled by the quantity name. We can just keep adding as many checks as we want, depending on the quantity.
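A minimal sketch of what such a dict-of-dicts and its check loop might look like (the quantity names, bounds, and check keys here are made up for illustration, not the actual protoDC2 values):

```python
# Hypothetical per-quantity checks, keyed by quantity name; each
# inner dict can carry as many check criteria as we want.
quantity_checks = {
    'size_bulge_true': {
        'range': (0.0, 50.0),       # acceptable min/max of the values
        'mean_range': (0.1, 5.0),   # acceptable range for the mean
    },
    'ellipticity_1': {
        'range': (-1.0, 1.0),
    },
}

def passes_checks(name, values):
    """Apply whichever checks are defined for `name`; quantities
    without an entry pass trivially."""
    checks = quantity_checks.get(name, {})
    ok = True
    if 'range' in checks:
        lo, hi = checks['range']
        ok &= (min(values) >= lo) and (max(values) <= hi)
    if 'mean_range' in checks:
        lo, hi = checks['mean_range']
        mean = sum(values) / len(values)
        ok &= lo <= mean <= hi
    return ok
```

Adding a new check type then just means adding a key to the inner dicts and a corresponding branch in the loop.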
@jchiang87 @rmandelb @yymao @katrinheitmann Here is an example of the readiness test page for checking ellipticities: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2018-01-17_10, with details at https://portal.nersc.gov/project/lsst/descqa/v2/?run=2018-01-17_10&test=Check_ellipticity. If you click on a graph in the left panel, an enlarged version is displayed in the right panel. If you click on Check_protoDC2_test_summary.txt, the output is shown in the right panel. I mocked up some values for tests on the means and ranges of the histogrammed quantities. You can see that if any of the individual tests fail, then the test fails.
The printout indicates if a quantity is out of range.
I'll post some other examples in a few minutes; I wanted to get your feedback on what I have so far. The test is written in a flexible way: the config file specifies groups of quantities that appear on the same plot and are subjected to the same test criteria. So we should be able to make lists of whatever we want tested and just feed them to the test. More soon!
@jchiang87 @rmandelb @yymao @katrinheitmann I just ran the test out of the box for ellipticities, sizes, and magnitudes. Here is the result: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2018-01-17_11&test=Check_quantities Magnitudes worked. The plot for sizes needs some adjustment because I haven't done any outlier rejection as suggested by Rachel, and I seem to have a bug in the reporting, but this is so you can get an idea.
I see. So for example the mean ellipticity of the bulge was set to be in the range 0.2-0.8 for this test, and that's why it failed, but everything else related to ellipticity passed. (And I assume this test value will be changed.) This looks like a promising start, modulo the range problem on the size plot.
So one more question: I was just thinking about failure modes, and I thought there was an error at some point where all e2 values were equal to e1. Can we generalize the test to catch things like that? (I could also imagine that happening with, e.g., magnitudes in different bands, if there is a band-indexing error when creating the catalog.) For example, could we make it part of the validation test for a given column (like e1) that the values in that column should not be numerically identical to the values in any other column? Is this worthwhile? It seems like it could be useful, but I realize it's adding some complexity, since currently you are just testing specific quantities rather than comparing them to others.
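A column-identity check along these lines could be a straightforward pairwise comparison; a hedged sketch using numpy (the function name and the dict-of-arrays layout are assumptions):

```python
import numpy as np

def find_identical_columns(data):
    """Return pairs of quantity names whose values are numerically
    identical, e.g. to catch e2 accidentally copied from e1.

    `data` is assumed to map quantity name to a 1-d array of values.
    """
    names = sorted(data)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if np.array_equal(np.asarray(data[a]), np.asarray(data[b])):
                pairs.append((a, b))
    return pairs
```

The pairwise loop is quadratic in the number of columns, but for a few dozen quantities that is negligible next to reading the catalog.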
@rmandelb Yes, that's correct on the ellipticities. The test values will need to be vetted. I put in obvious ones, but will need guidance on others. Regarding generalizing the tests, those kinds of checks are possible. Right now, one could catch that kind of problem by looking at the histogram and seeing that 2 curves are on top of one another. That's one bonus of grouping like quantities together. In terms of what will be ready soon, we need to prioritize our effort into covering the most important checks first. It would be good to assemble a list of all the quantities that we want to check, so we can try out how that works in the present scheme and assemble the acceptable bounds and so on.
Posting the latest readiness test example in case people come here to check the status of the readiness test.
Further discussion should move to #60