Comments (2)
The makefiles are written for GNU Make, and I make heavy use of its automatic variables. To describe it quickly, a makefile is simply a list of recipes of the form:
    recipe_target: recipe_dependencies
        actions to
        perform to
        create the recipe
When you tell make to create a recipe `a` that depends on `b`, it will first run the recipe for `b`, and then the one for `a`. If another recipe `c` also depends on `b`, make will build it directly without rebuilding `b`, since it already exists. So I am building a chain of dependencies: the plots depend on the trained networks (and their metrics), the training depends on the sliced data, and the sliced data depends on the extracted data. This is a really good way to automate the dependencies between the sub-parts of the experiments. Take the time to decompose it and you should be able to understand how the different parts interact with each other.
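As a toy illustration of such a chain (the target names here are placeholders, not the ones from the actual makefiles): asking make for `plots` walks down to `extract` and back up, rebuilding only the targets that are missing or out of date.

```makefile
# Toy dependency chain: plots <- train <- slices <- extract.
# Recipe lines must be indented with a tab character.
plots: train
	touch plots

train: slices
	touch train

slices: extract
	touch slices

extract:
	touch extract
```

Running `make plots` twice shows the point: the first run executes all four recipes, the second does nothing, because every target is already up to date.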
I usually do not write them from scratch anymore, but start from an existing one, search and replace any reference to the previous dataset, and then modify the training options if needed. To give you an overview of what you need to change (taking wmh.make as an example):
Change the list of targets that interest you, for instance:
    TRN = results/colon/gdl results/colon/gdl_surface_steal
    GRAPH = results/colon/val_dice.png results/colon/tra_dice.png \
        results/colon/tra_loss.png \
        results/colon/val_haussdorf.png
    HIST =
Change the archive name:
    PACK = $(PBASE)/$(REPO)-$(DATE)-$(HASH)-$(HOSTNAME)-colon.tar.gz
Extraction and slicing: write recipes to extract the data from the archive, and create the png files and the training and validation sets. This is what will take you the most time, and it is really task specific. I usually create a target `data/colon` for the extracted data, and `data/COLON` for the sliced data.
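A minimal sketch of what such recipes could look like. The archive name and the slicing script (`slice_colon.py`) are hypothetical; the real recipes depend entirely on how the dataset is distributed.

```makefile
# Hypothetical extraction and slicing recipes (archive name and
# slicing script are placeholders, not the repository's actual ones).
# Recipe lines must be indented with a tab character.

# Extracted data: unpack the raw archive into data/colon
data/colon: data/colon_dataset.tar.gz
	mkdir -p $@
	tar -xf $< -C $@

# Sliced data: turn the extracted volumes into 2D png slices,
# split into training and validation sets
data/COLON: data/colon
	python slice_colon.py --source $< --dest $@
```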
The options for the training come in two parts:
- the individual ones, where you define the loss to use and its parameters (plus which training data to use)
- the common ones, which are mostly for the network architecture, number of classes, batch size, etc. Note that those options can be overridden by the individual options if you want to
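To illustrate the two-part split, here is a sketch using make's target-specific variables. The variable and flag names are illustrative only, not the actual ones from wmh.make.

```makefile
# Illustrative option variables (names and flags are hypothetical).
# Recipe lines must be indented with a tab character.

# Common options: architecture, number of classes, batch size, ...
OPT_COMMON = --network ENet --n_class 2 --batch_size 8

# Individual options: the loss and its parameters, and the training data.
# Being listed after OPT_COMMON, they can also override common flags.
results/colon/gdl: OPT = --losses gdl

results/colon/gdl: data/COLON
	python main.py $(OPT_COMMON) $(OPT) --dataset $< --workdir $@
```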
Apart from that, as long as you replace `wmh` with `colon` and `WMH` with `COLON`, the rest should work pretty well right away. Make has a few very useful options to debug it:
-d     Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied---everything interesting about how make decides what to do.

-n, --just-print, --dry-run, --recon
       Print the commands that would be executed, but do not execute them (except in certain circumstances).

--trace
       Information about the disposition of each target is printed (why the target is being rebuilt and what commands are run to rebuild it).
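A quick demo of those flags on a tiny stand-in makefile (the two-target makefile and the temp directory are made up for the demo; with a real makefile such as wmh.make you would pass the actual target, e.g. `make -n results/colon/gdl`). Note that `--trace` requires GNU Make 4.0 or later.

```shell
# Build a tiny two-target makefile to demo the debugging flags on.
mkdir -p /tmp/make_debug_demo && cd /tmp/make_debug_demo
printf 'a: b\n\ttouch a\nb:\n\ttouch b\n' > Makefile

# -n / --dry-run: print the commands without executing them
make -n a

# --trace: explain why each target is rebuilt, and what runs
make --trace a

# -d: the full decision log (very verbose, best piped to a file)
make -d a > debug.log 2>&1
```

The first two invocations build nothing; only the last one actually creates `a` and `b`, since `-d` still performs normal processing.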
from boundary-loss.
Hi @HKervadec,
Thank you very much for your reply. I will try to do it.