In case you guys didn't know: this README.md is GitHub Flavored Markdown (GFM) and can be used just like our Google Docs document, but it lets us keep everything in one place (and, more importantly, in our local git repo on our own machines).
Markdown is pretty awesome
Jonathan wrote on Google Docs:
### Approach
We need to map out an approach for developing our model.
I suggest the following:

#### Set a benchmark
- Build a simple model, expand it until it overfits, go one step back and call that "fit", go two steps back and call that "underfit".
- This should give us 3 models: one that overfits, one that fits, and one that underfits.
- Each benchmark model should be cross-validated for 100 (underfit), 200 (fit), and 500 (overfit) epochs.
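The benchmark protocol above can be sketched as a small driver loop. This is just a sketch: `train_and_cv` is a placeholder for whatever training/cross-validation routine we end up writing, and the name/epoch mapping mirrors the list above.

```python
# Hypothetical sketch of the benchmark protocol: one epoch budget per
# benchmark model, as listed above.
BENCHMARKS = {
    "underfit": 100,
    "fit": 200,
    "overfit": 500,
}

def run_benchmarks(train_and_cv):
    """Cross-validate each benchmark model at its epoch budget.

    train_and_cv is assumed to be a callable taking (model_name, epochs)
    and returning a CV score; it stands in for our real training code.
    """
    results = {}
    for name, epochs in BENCHMARKS.items():
        results[name] = train_and_cv(model_name=name, epochs=epochs)
    return results
```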
#### When trying new changes (one at a time, or in some other structured order)
- Test the new change with the 3 models at the 3 different epoch counts.
- This gives us a "sanity check": was the change implemented correctly?
- And a performance check: does it actually gain us anything?
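Evaluating a new change across all 3 models and all 3 epoch budgets is a 3×3 grid, which could look roughly like this. Again a sketch: `train_and_cv` and the `change` argument are hypothetical placeholders.

```python
def sanity_check_change(train_and_cv, change):
    """Run one proposed change over the 3 benchmark models x 3 epoch budgets.

    Returns a dict keyed by (model_name, epochs) so both the sanity check
    (did the change break anything?) and the performance check (did it
    help?) can be read off the grid.
    """
    models = ["underfit", "fit", "overfit"]
    epoch_grid = [100, 200, 500]
    scores = {}
    for m in models:
        for e in epoch_grid:
            scores[(m, e)] = train_and_cv(model_name=m, epochs=e, change=change)
    return scores
```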
#### Documentation
- For every test, build up a performance log.
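A minimal performance log could just be an append-only CSV file in the repo. The file name and column layout below are assumptions, not decided yet:

```python
import csv
import datetime
import os

LOG_PATH = "performance_log.csv"  # hypothetical location, to be agreed on

def log_result(model_name, epochs, change, cv_score, path=LOG_PATH):
    """Append one benchmark result to a shared CSV log.

    Writes a header row the first time the file is created.
    """
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "model", "epochs", "change", "cv_score"])
        writer.writerow([
            datetime.datetime.now().isoformat(),
            model_name, epochs, change, cv_score,
        ])
```

Keeping the log as a plain CSV in git means every result is versioned alongside the code that produced it.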
### Model Architecture
> "Going to 512x512 straight away is a pretty big jump. Would maybe try 256x256 first, with a large stride at the bottom of the image, and maybe go all-convolutional with some kind of global max pooling at the top, i.e. replace the fully connected layers with 1x1 convolutions in the architecture. Will need plenty of experimentation to get right, I reckon."
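To get a feel for the suggestion, we can trace feature-map sizes through a hypothetical all-convolutional stack on 256x256 input: a large stride in the first ("bottom") layer, 1x1 convolutions in place of fully connected layers, and global max pooling at the top. The kernel/stride choices below are made up for illustration, not a proposed architecture.

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution (standard floor formula)."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical layer stack: large-stride conv at the bottom, strided
# 3x3 convs, then 1x1 convs replacing the fully connected layers.
size = 256
for kernel, stride in [(7, 3), (3, 2), (3, 2), (3, 2), (3, 1), (1, 1), (1, 1)]:
    size = conv_out(size, kernel, stride)

print(size)  # spatial size before global max pooling collapses it to 1x1
```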
### Loss functions (in order of difficulty)
- MSE
- Clip MSE
- QW Kappa (at first glance very similar to Clip MSE)
- Ordinal classification (when it's an MSE cost but a classification task)
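The two simplest losses on the list can be sketched in NumPy. Clipping bounds of 0..4 are an assumption here (they would match a 5-level ordinal label scale); adjust to whatever range our labels actually use.

```python
import numpy as np

def mse(pred, target):
    """Plain mean squared error."""
    return np.mean((pred - target) ** 2)

def clip_mse(pred, target, lo=0.0, hi=4.0):
    """MSE with predictions clipped to the valid label range.

    lo/hi are hypothetical bounds; predictions outside the label range
    are not penalised beyond the nearest valid label.
    """
    return mse(np.clip(pred, lo, hi), target)
```

Compared to plain MSE, Clip MSE stops the loss from punishing a prediction of e.g. 5.0 when the highest possible label is 4.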