
unetplusplus's People

Contributors

mahfuzmohammad, mrgiovanni, sbajpai2


unetplusplus's Issues

How can I modify the network code for the gray-scale image patches?

Hello, Zongwei,
Thank you for your wonderful UNetPlusPlus project. I understand that the network expects 3 input channels, but if I want to feed gray-scale image patches into the network, how should I modify the network code? Could you give me some advice?
Thanks again and Merry Christmas!
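
A minimal sketch of two common workarounds (an editorial note, not the author's official answer; the array shapes below are hypothetical):

    import numpy as np

    # Dummy gray-scale batch standing in for real patches.
    x_gray = np.zeros((4, 96, 96, 1), dtype="float32")

    # Option 1: replicate the single channel three times so a 3-channel network
    # (and any pretrained weights) can be used unchanged.
    x_rgb_like = np.repeat(x_gray, 3, axis=-1)   # shape (4, 96, 96, 3)
    print(x_rgb_like.shape)

    # Option 2: build the model with input_shape=(H, W, 1) instead; this only
    # works when pretrained encoder weights are not required.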

Problems on Xnet

I have some problems building Xnet.

In segmentation_models/xnet/blocks.py, line 38:
x = Concatenate(name=merge_name)([x, skip])
With merge_name="merge2-2", the variable skip is a list of tensors that we want to concatenate, while x is a single tensor.
This produces the following error (my input shape is (512, 512, 3)):

Traceback (most recent call last):
File "/home/xjw/pycharm/helpers/pydev/pydevd.py", line 1664, in
main()
File "/home/xjw/pycharm/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/xjw/pycharm/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/xjw/pycharm/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/xjw/Projects/image-segmentation-keras/train.py", line 56, in
input_shape=(input_height, input_width, 3))}
File "/home/xjw/Projects/image-segmentation-keras/Models/segmentation_models/xnet/model.py", line 100, in Xnet
use_batchnorm=decoder_use_batchnorm)
File "/home/xjw/Projects/image-segmentation-keras/Models/segmentation_models/xnet/builder.py", line 83, in build_xnet
use_batchnorm=use_batchnorm)(interm[(n_upsample_blocks+1)*(i+1)+j])
File "/home/xjw/Projects/image-segmentation-keras/Models/segmentation_models/xnet/blocks.py", line 38, in layer
x = Concatenate(name=merge_name)([x, skip]) # Problems on it!
File "/home/xjw/anaconda3/lib/python3.5/site-packages/keras/engine/base_layer.py", line 414, in call
self.assert_input_compatibility(inputs)
File "/home/xjw/anaconda3/lib/python3.5/site-packages/keras/engine/base_layer.py", line 285, in assert_input_compatibility
str(inputs) + '. All inputs to the layer '
ValueError: Layer merge_2-2 was called with an input that isn't a symbolic tensor. Received type: <class 'list'>. Full input: [<tf.Tensor 'decoder_stage2-2_upsample/ResizeNearestNeighbor:0' shape=(?, 256, 256, 128) dtype=float32>, [<tf.Tensor 'relu0/Relu:0' shape=(?, 256, 256, 64) dtype=float32>, <tf.Tensor 'decoder_stage2-1_relu2/Relu:0' shape=(?, 256, 256, 64) dtype=float32>]]. All inputs to the layer should be tensors.
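
One way to make that line tolerate a list-valued skip is to flatten the inputs before concatenating. This is only a sketch of the idea, not the repository's actual fix:

    from keras.layers import Concatenate

    def merge_with_skips(x, skip, merge_name):
        """Concatenate a decoder tensor with one skip tensor or a list of them."""
        skips = skip if isinstance(skip, list) else [skip]
        return Concatenate(name=merge_name)([x] + skips)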

data preparation for training

I want to train the model with my own dataset, so I added these lines to prepare the data for training:

import glob
import numpy as np
from PIL import Image

orgs = glob.glob("/home/selka/src/python/DL/UNetPlusPlus/data/image/*.png")
masks = glob.glob("/home/selka/src/python/DL/UNetPlusPlus/data/label/*.png")
orgs.sort()
masks.sort()

imgs_list = []
masks_list = []
for image, mask in zip(orgs, masks):
print("orgs",orgs)
print("masks",masks)
imgs_list.append(np.array(Image.open(image).convert("L").resize((512,512))))
masks_list.append(np.array(Image.open(mask).convert("L").resize((512,512))))

imgs_np = np.asarray(imgs_list)
masks_np = np.asarray(masks_list)

x = np.asarray(imgs_np, dtype=np.float32)/255
y = np.asarray(masks_np, dtype=np.float32)/255

x = x.reshape(x.shape[0], x.shape[1], x.shape[2], 1)
print(x.shape, y.shape)

y = y.reshape(y.shape[0], y.shape[1], y.shape[2], 1)
print(x.shape, y.shape)

from sklearn.model_selection import train_test_split

x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.4, random_state=0)

However, I get this error:

Traceback (most recent call last):
File "unet_plus.py", line 239, in
model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose')
File "/home/selka/src/python/DL/UNetPlusPlus/segmentation_models/xnet/model.py", line 86, in Xnet
include_top=False)
File "/home/selka/src/python/DL/UNetPlusPlus/segmentation_models/backbones/backbones.py", line 32, in get_backbone
return backbones[name](*args, **kwargs)
File "/home/selka/src/python/DL/UNetPlusPlus/segmentation_models/backbones/classification_models/classification_models/resnet/models.py", line 39, in ResNet50
include_top=include_top)
File "/home/selka/src/python/DL/UNetPlusPlus/segmentation_models/backbones/classification_models/classification_models/resnet/builder.py", line 69, in build_resnet
x = BatchNormalization(name='bn_data', **no_scale_bn_params)(img_input)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/keras/engine/base_layer.py", line 431, in call
self.build(unpack_singleton(input_shapes))
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/keras/layers/normalization.py", line 115, in build
constraint=self.beta_constraint)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/keras/engine/base_layer.py", line 252, in add_weight
constraint=constraint)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/keras/backend/theano_backend.py", line 154, in variable
value = value.eval()
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/graph.py", line 516, in eval
self.fn_cache[inputs] = theano.function(inputs, self)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/compile/function.py", line 326, in function
output_keys=output_keys)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/compile/pfunc.py", line 486, in pfunc
output_keys=output_keys)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/compile/function_module.py", line 1795, in orig_function
defaults)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/compile/function_module.py", line 1661, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/vm.py", line 1047, in make_all
impl=impl))
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/op.py", line 935, in make_thunk
no_recycling)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/op.py", line 839, in make_c_thunk
output_storage=node_output_storage)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/cc.py", line 1190, in make_thunk
keep_lock=keep_lock)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/cc.py", line 1131, in compile
keep_lock=keep_lock)
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/cc.py", line 1575, in cthunk_factory
key = self.cmodule_key()
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/cc.py", line 1271, in cmodule_key
c_compiler=self.c_compiler(),
File "/home/selka/anaconda3/envs/unetplus/lib/python3.6/site-packages/theano/gof/cc.py", line 1350, in cmodule_key

np.core.multiarray._get_ndarray_c_version())
AttributeError: ('The following error happened while compiling the node', DeepCopyOp(TensorConstant{(3,) of 0.0}), '\n', "module 'numpy.core.multiarray' has no attribute '_get_ndarray_c_version'")

Thanks for helping
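
The traceback comes from the Theano backend compiling against an incompatible NumPy release (the missing _get_ndarray_c_version attribute). One workaround, an assumption rather than a verified fix for this exact setup, is to force the TensorFlow backend before Keras is imported, since the repo targets TensorFlow anyway:

    import os

    # Select the TensorFlow backend before Keras is imported; the failing
    # _get_ndarray_c_version lookup only occurs in the Theano backend.
    os.environ["KERAS_BACKEND"] = "tensorflow"

    import keras  # imported after setting the backend on purpose
    print(keras.backend.backend())   # should report "tensorflow"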

About Data Science Bowl 2018.

[screenshot of the reported Data Science Bowl 2018 result]

Is the reported 92.37% IoU an instance-level result, or just a binary segmentation result?
Do you follow the Data Science Bowl 2018 metric, or is 92.37% just the binary mask IoU?
How do you handle the instance-level dataset? It is not very clear in your paper.
Hoping for a reply. Thanks.

How to get "BRATS2013_Syn_Flair_Train_X.npy" and "BRATS2013_Syn_Flair_Train_S.npy" from the BRATS2013 dataset?

I downloaded the BRATS2013 dataset from https://www.smir.ch/BRATS/Start2013; however, there are two big folders inside (Image_Data & Synthetic_Data). How can I produce the "BRATS2013_Syn_Flair_Train_X.npy" and "BRATS2013_Syn_Flair_Train_S.npy" files used in the code?
Thanks.
Moreover, I can't find DSB2018_application.py on this website. Is that file similar to BRATS2013_application.py? Are there any differences?
:)

How to get "Data/BRATS/*.npy"

I don't know how to get “BRATS2013_Syn_Flair_Train_X.npy” and “BRATS2013_Syn_Flair_Train_S.npy”. Please give me a brief explanation when you are free.
Thank you!

Deep supervision

Hi @MrGiovanni, thank you for sharing your code.
I have a question about the implementation of the deep supervision structure. In the paper, you say that the final output of the model is the average of the four branch outputs. I calculate the average output feature map this way but get the error below:

    nestnet_output_1 = Conv2D(num_class, (1, 1), activation='sigmoid', name='output_1', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_2)
    nestnet_output_2 = Conv2D(num_class, (1, 1), activation='sigmoid', name='output_2', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_3)
    nestnet_output_3 = Conv2D(num_class, (1, 1), activation='sigmoid', name='output_3', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_4)
    nestnet_output_4 = Conv2D(num_class, (1, 1), activation='sigmoid', name='output_4', kernel_initializer='he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_5)
    nestnet_output_all = (nestnet_output_1 + nestnet_output_2 + nestnet_output_3 + nestnet_output_4) / 4

    if deep_supervision:
        # model = Model(input=img_input, output=[nestnet_output_1,
        #                                        nestnet_output_2,
        #                                        nestnet_output_3,
        #                                        nestnet_output_4])
        model = Model(input=img_input, output=[nestnet_output_all])
    else:
        model = Model(input=img_input, output=[nestnet_output_4])

Using TensorFlow backend.
D:\code\unet-master\revisedModel.py:116: UserWarning: Update your Model call to the Keras 2 API: Model(outputs=[<tf.Tenso..., inputs=Tensor("ma...)
model = Model(input=img_input, output=[nestnet_output_all])
Traceback (most recent call last):
File "D:/code/unet-master/revisedModelTrain.py", line 16, in
model = Nest_Net(256,256,1)
File "D:\code\unet-master\revisedModel.py", line 116, in Nest_Net
model = Model(input=img_input, output=[nestnet_output_all])
File "C:\Users\Administrator\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Administrator\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\engine\network.py", line 93, in init
self._init_graph_network(*args, **kwargs)
File "C:\Users\Administrator\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\engine\network.py", line 188, in _init_graph_network
'Found: ' + str(x))
ValueError: Output tensors to a Model must be the output of a Keras Layer (thus holding past layer metadata). Found: Tensor("truediv:0", shape=(?, 256, 256, 1), dtype=float32)

Can you help with this? Thanks!
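
The ValueError appears because plain tensor arithmetic (+ and /) yields a raw TensorFlow tensor without Keras layer metadata, so Model() rejects it as an output. A sketch of averaging through a Keras layer instead, reusing the tensors from the snippet above (assuming Keras 2.x):

    from keras.layers import Average

    # Averaging through a layer keeps the metadata that Model() needs.
    nestnet_output_all = Average(name='output_all')([nestnet_output_1,
                                                     nestnet_output_2,
                                                     nestnet_output_3,
                                                     nestnet_output_4])
    model = Model(inputs=img_input, outputs=[nestnet_output_all])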

Train own data

Thanks for your source code, but could you describe more clearly how I can train on my own data? I'm a student and want to try it with my data, which consists of images and the corresponding object masks.

backbone dimension error?

When I use resnet34 or vgg16 as the backbone, the middle layer shapes do not seem right:

Layer (type) Output Shape Param # Connected to

data (InputLayer) (None, 224, 224, 3) 0


bn_data (BatchNormalization) (None, 224, 224, 3) 9 data[0][0]


zero_padding2d_1 (ZeroPadding2D (None, 224, 230, 9) 0 bn_data[0][0]


conv0 (Conv2D) (None, 64, 112, 2) 702464 zero_padding2d_1[0][0]


bn0 (BatchNormalization) (None, 64, 112, 2) 8 conv0[0][0]


relu0 (Activation) (None, 64, 112, 2) 0 bn0[0][0]


zero_padding2d_2 (ZeroPadding2D (None, 64, 114, 4) 0 relu0[0][0]


pooling0 (MaxPooling2D) (None, 64, 56, 1) 0 zero_padding2d_2[0][0]


stage1_unit1_bn1 (BatchNormaliz (None, 64, 56, 1) 4 pooling0[0][0]


stage1_unit1_relu1 (Activation) (None, 64, 56, 1) 0 stage1_unit1_bn1[0][0]


zero_padding2d_3 (ZeroPadding2D (None, 64, 58, 3) 0 stage1_unit1_relu1[0][0]


stage1_unit1_conv1 (Conv2D) (None, 64, 56, 1) 36864 zero_padding2d_3[0][0]


stage1_unit1_bn2 (BatchNormaliz (None, 64, 56, 1) 4 stage1_unit1_conv1[0][0]


stage1_unit1_relu2 (Activation) (None, 64, 56, 1) 0 stage1_unit1_bn2[0][0]


zero_padding2d_4 (ZeroPadding2D (None, 64, 58, 3) 0 stage1_unit1_relu2[0][0]


stage1_unit1_conv2 (Conv2D) (None, 64, 56, 1) 36864 zero_padding2d_4[0][0]


stage1_unit1_sc (Conv2D) (None, 64, 56, 1) 4096 stage1_unit1_relu1[0][0]


add_1 (Add) (None, 64, 56, 1) 0 stage1_unit1_conv2[0][0]
stage1_unit1_sc[0][0]

My Anaconda configuration: Python 3.6, TensorFlow 1.6.0, Keras 2.2.2.

Seems weird?

Simple backbone used by U-Net

How can I build the U-Net++ model with the simple backbone used in the original U-Net paper?
Among the backbone options, it seems one can only choose architectures such as VGG, ResNet, etc.
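
For reference, the "simple backbone" of the original U-Net paper is just repeated plain double-convolution blocks with no pretrained encoder. A minimal, self-contained sketch of such a block in Keras (not the repository's own standard_unit, which other issues here reference):

    from keras.layers import Conv2D, MaxPooling2D, Input
    from keras.models import Model

    def plain_unit(x, filters):
        """Plain U-Net-style block: two 3x3 convolutions with ReLU, no pretrained backbone."""
        x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
        x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
        return x

    # Tiny usage example: one encoder stage of a plain backbone.
    inputs = Input(shape=(256, 256, 1))
    c1 = plain_unit(inputs, 32)
    p1 = MaxPooling2D((2, 2))(c1)
    encoder_stage = Model(inputs, p1)   # just to show the block wires up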

When training the resnet34+Xnet configuration, the default decoder_filters parameters do not seem to work

I changed the main training script configuration to:

model = Xnet(backbone_name=config.backbone, input_shape=(config.input_deps, config.input_rows, config.input_cols), n_upsample_blocks=4, decoder_filters=(64, 64, 128, 256, 512), encoder_weights=config.weights, decoder_block_type=config.decoder_block_type, classes=config.nb_class, activation=config.activation)

and builder.py in the Xnet model to:

    if downterm[i+1] is not None:
        #interm[(n_upsample_blocks+1)*i+j+1] = up_block(decoder_filters[n_upsample_blocks-i-2],
        interm[(n_upsample_blocks+1)*i+j+1] = up_block(decoder_filters[i], i+1, j+1,
                                                       upsample_rate=upsample_rate,
                                                       skip=interm[(n_upsample_blocks+1)*i+j],
                                                       use_batchnorm=use_batchnorm)(downterm[i+1])
    else:
        interm[(n_upsample_blocks+1)*i+j+1] = None
else:
    #interm[(n_upsample_blocks+1)*i+j+1] = up_block(decoder_filters[n_upsample_blocks-i-2],
    interm[(n_upsample_blocks+1)*i+j+1] = up_block(decoder_filters[i], i+1, j+1,
                                                   upsample_rate=upsample_rate,
                                                   skip=interm[(n_upsample_blocks+1)*i : (n_upsample_blocks+1)*i+j+1],
                                                   use_batchnorm=use_batchnorm)(interm[(n_upsample_blocks+1)*(i+1)+j])

When I use resnet34 as the backbone and try to train the Xnet model on my own data (512x512x3), after checking the downsampling layers, the skip-connection layers, and the up-block (transpose, currently) in detail, it seems the default decoder_filters parameters don't work: the concatenate operation in the up-block requires its inputs to have the same dimensions. So after checking the network and skip-connection configuration, I changed the decoder filters. Did anyone meet the same situation?
Just want to confirm, thanks!

dice_coef_loss

Hi there,

Thanks for the code.

I get an error that 'dice_coef_loss' is not defined.
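
If that name is missing from your script, a standard Keras formulation (matching the dice_coef shown in the bce_dice_loss issue below, though not necessarily byte-identical to the author's helper) looks like this:

    import keras.backend as K

    def dice_coef(y_true, y_pred, smooth=1.0):
        # Soft Dice coefficient over a batch of binary masks.
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

    def dice_coef_loss(y_true, y_pred):
        return 1.0 - dice_coef(y_true, y_pred)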

Error while creating Xnet model object.

My input shape is (256 x 256 x 1). When I create an Xnet model object, I get this error. I also changed the input_shape in Xnet's model.py from (None, None, 3) to (256, 256, 1):

ValueError: Dimension 0 in both shapes must be equal, but are 1 and 3. Shapes are [1] and [3]. for 'Assign' (op: 'Assign') with input shapes: [1], [3].

Complete Error:

InvalidArgumentError Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1658 try:
-> 1659 c_op = c_api.TF_FinishOperation(op_desc)
1660 except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 1 and 3. Shapes are [1] and [3]. for 'Assign' (op: 'Assign') with input shapes: [1], [3].

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
in
----> 1 model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose')

~/research/segmentation/segmentation_models/xnet/model.py in Xnet(backbone_name, input_shape, input_tensor, encoder_weights, freeze_encoder, skip_connections, decoder_block_type, decoder_filters, decoder_use_batchnorm, n_upsample_blocks, upsample_rates, classes, activation)
84 input_tensor=input_tensor,
85 weights=encoder_weights,
---> 86 include_top=False)
87
88 if skip_connections == 'default':

~/research/segmentation/segmentation_models/backbones/backbones.py in get_backbone(name, *args, **kwargs)
30
31 def get_backbone(name, *args, **kwargs):
---> 32 return backbones[name](*args, **kwargs)

~/research/segmentation/segmentation_models/backbones/classification_models/classification_models/resnet/models.py in ResNet50(input_shape, input_tensor, weights, classes, include_top)
41
42 if weights:
---> 43 load_model_weights(weights_collection, model, weights, classes, include_top)
44 return model
45

~/research/segmentation/segmentation_models/backbones/classification_models/classification_models/utils.py in load_model_weights(weights_collection, model, dataset, classes, include_top)
24 md5_hash=weights['md5'])
25
---> 26 model.load_weights(weights_path)
27
28 else:

~/.local/lib/python3.6/site-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
1164 else:
1165 saving.load_weights_from_hdf5_group(
-> 1166 f, self.layers, reshape=reshape)
1167
1168 def _updated_config(self):

~/.local/lib/python3.6/site-packages/keras/engine/saving.py in load_weights_from_hdf5_group(f, layers, reshape)
1056 ' elements.')
1057 weight_value_tuples += zip(symbolic_weights, weight_values)
-> 1058 K.batch_set_value(weight_value_tuples)
1059
1060

~/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in batch_set_value(tuples)
2463 assign_placeholder = tf.placeholder(tf_dtype,
2464 shape=value.shape)
-> 2465 assign_op = x.assign(assign_placeholder)
2466 x._assign_placeholder = assign_placeholder
2467 x._assign_op = assign_op

~/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py in assign(self, value, use_locking, name, read_value)
1760 """
1761 assign = state_ops.assign(self._variable, value, use_locking=use_locking,
-> 1762 name=name)
1763 if read_value:
1764 return assign

~/.local/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py in assign(ref, value, validate_shape, use_locking, name)
221 return gen_state_ops.assign(
222 ref, value, use_locking=use_locking, name=name,
--> 223 validate_shape=validate_shape)
224 return ref.assign(value, name=name)
225

~/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py in assign(ref, value, validate_shape, use_locking, name)
62 _, _, _op = _op_def_lib._apply_op_helper(
63 "Assign", ref=ref, value=value, validate_shape=validate_shape,
---> 64 use_locking=use_locking, name=name)
65 _result = _op.outputs[:]
66 _inputs_flat = _op.inputs

~/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
786 op = g.create_op(op_type_name, inputs, output_types, name=scope,
787 input_types=input_types, attrs=attr_protos,
--> 788 op_def=op_def)
789 return output_structure, op_def.is_stateful, op
790

~/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
505 'in a future version' if date is None else ('after %s' % date),
506 instructions)
--> 507 return func(*args, **kwargs)
508
509 doc = _add_deprecated_arg_notice_to_docstring(

~/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in create_op(failed resolving arguments)
3298 input_types=input_types,
3299 original_op=self._default_original_op,
-> 3300 op_def=op_def)
3301 self._create_op_helper(ret, compute_device=compute_device)
3302 return ret

~/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in init(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
1821 op_def, inputs, node_def.attr)
1822 self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1823 control_input_ops)
1824
1825 # Initialize self._outputs.

~/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1660 except errors.InvalidArgumentError as e:
1661 # Convert to ValueError for backwards compatibility.
-> 1662 raise ValueError(str(e))
1663
1664 return c_op

ValueError: Dimension 0 in both shapes must be equal, but are 1 and 3. Shapes are [1] and [3]. for 'Assign' (op: 'Assign') with input shapes: [1], [3].
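
The Assign error is the 3-channel ImageNet weights being loaded into a first layer built for 1 channel. Two hedged workarounds (assumptions, not the repository's official guidance):

    import numpy as np

    # (a) Keep the 3-channel model and replicate the gray channel in the data.
    x = np.zeros((2, 256, 256, 1), dtype="float32")   # hypothetical gray batch
    x3 = np.repeat(x, 3, axis=-1)                     # now matches (256, 256, 3)

    # (b) Or build the model for 1 channel and skip the pretrained weights,
    #     since the ImageNet weights only exist for 3-channel inputs:
    # model = Xnet(backbone_name='resnet50', input_shape=(256, 256, 1),
    #              encoder_weights=None, decoder_block_type='transpose')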

about unet++ for nodule segmentation

Many thanks for the authors' contributions. However, I can't find the code for UNet++ on lung nodule segmentation described in your paper. Could you give me some suggestions? Many thanks.

Does this code have implementations for weighted loss?

It's my understanding that one of the keys to the first U-Net's success was its weighted loss. This is something I've always struggled to implement myself (my holy grail), and I'm curious whether this repo or anyone else has managed to do it.

Here's a screenshot from the original paper showing what I mean:

[screenshot: the pixel-wise weighted loss from the original U-Net paper]

Thanks so much for this repo!
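
As a much simpler stand-in (not the distance-based per-pixel border weight map from the U-Net paper), a class-weighted binary cross-entropy can be passed straight to model.compile. A sketch, with pos_weight as a free choice:

    import keras.backend as K

    def class_weighted_bce(pos_weight=5.0):
        """Binary cross-entropy that up-weights foreground pixels.

        Only a class-weighting scheme, not the full per-pixel weight map
        of the original U-Net paper.
        """
        def loss(y_true, y_pred):
            y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
            bce = -(pos_weight * y_true * K.log(y_pred)
                    + (1.0 - y_true) * K.log(1.0 - y_pred))
            return K.mean(bce)
        return loss

    # usage: model.compile(optimizer='adam', loss=class_weighted_bce(pos_weight=5.0))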

Code does not match the paper on UNetPlusPlus

Hi @MrGiovanni, in your build_xnet function you create the model as follows:
x = Conv2D(classes, (3,3), padding='same', name='final_conv')(interm[n_upsample_blocks])
x = Activation(activation, name=activation)(x)
model = Model(input, x)

My problem:

  1. Is interm[n_upsample_blocks] the combination of X(0,1), X(0,2), X(0,3), X(0,4) as specified in deep supervision? Having studied your algorithm, I noticed that interm[n_upsample_blocks] is just up_block(X(0,4)), which does not seem right.
  2. Creating the model at this level, how do you benefit from the MODEL PRUNING described in the paper?

Thanks

Small question about Xnet vs Nestnet difference

The only architectural difference between the two seems to be this:

  • nestnet
    skip=interm[(n_upsample_blocks+1)*i+j]
  • xnet
    skip=interm[(n_upsample_blocks+1)*i: (n_upsample_blocks+1)*i+j+1]

in the up_block parameters.

Can someone confirm this and maybe quickly explain how it affects the model?
Would they both count as the UNet++ architecture according to the paper, just with different skip connections?

How to add more regularization methods?

This project has really helped me a lot.
However, when training on our own data I find that the model overfits. I therefore want to add more regularization, like L1/L2 or dropout. How can I add those features?
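
A sketch of a conv block with L2 weight decay plus dropout, in the spirit of the standard_unit block other issues here quote (the specific weight_decay and drop_rate values are placeholders, not recommendations):

    from keras.layers import Conv2D, Dropout
    from keras.regularizers import l2

    def regularized_unit(x, filters, weight_decay=1e-4, drop_rate=0.25):
        """Two 3x3 convolutions with L2 weight decay and dropout between them."""
        x = Conv2D(filters, (3, 3), activation='relu', padding='same',
                   kernel_regularizer=l2(weight_decay))(x)
        x = Dropout(drop_rate)(x)
        x = Conv2D(filters, (3, 3), activation='relu', padding='same',
                   kernel_regularizer=l2(weight_decay))(x)
        return x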

Suggestion: replace dropout with batch norm

I'm testing on a small dataset (training on 10 CT liver segmentations, testing on 4), but I get much better results when I replace dropout with batch norm in standard_unit(). This may be data-size dependent, but it is worth noting.
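
A sketch of the suggested variant, replacing Dropout with BatchNormalization in a standard_unit-style block (an illustration of the idea, not the repository's code):

    from keras.layers import Conv2D, BatchNormalization, Activation

    def bn_unit(x, filters):
        """standard_unit-style block with batch norm instead of dropout."""
        for _ in range(2):
            x = Conv2D(filters, (3, 3), padding='same')(x)
            x = BatchNormalization()(x)
            x = Activation('relu')(x)
        return x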

getting an error

After running the training with model.fit ..., I got this error:
ValueError: Error when checking target: expected sigmoid to have shape (None, None, 1) but got array with shape (256, 256, 3)
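
The sigmoid head produces a single channel, so the target masks must also be single-channel. A sketch of collapsing an RGB-stored binary mask to the expected shape (the array below is a hypothetical stand-in for your labels):

    import numpy as np

    # Hypothetical RGB mask batch of shape (N, 256, 256, 3) with 0/255 values.
    masks_rgb = np.zeros((8, 256, 256, 3), dtype="float32")

    # Collapse to one binary channel so targets match the (..., 1) sigmoid output.
    masks_bin = (masks_rgb.max(axis=-1, keepdims=True) > 0).astype("float32")
    print(masks_bin.shape)   # (8, 256, 256, 1)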

import issue in collab

I have been trying to run the code in Google Colab, but while importing the segmentation model I get the following error:
Using TensorFlow backend.


ImportError Traceback (most recent call last)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in ()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import version

16 frames

ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

ImportError Traceback (most recent call last)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py in ()
70 for some common reasons and solutions. Include the entire stack trace
71 above this error message when asking for help.""" % traceback.format_exc()
---> 72 raise ImportError(msg)
73
74 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long

ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.

Is there any update to the requirements file?

bce_dice_loss negative loss

Anyone else getting a negative loss value when using bce_dice_loss?

def bce_dice_loss(y_true, y_pred):
    return 0.5*binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

def dice_coef_loss(y_true, y_pred):
    return 1. - dice_coef(y_true, y_pred)

def dice_coef(y_true, y_pred):
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
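
Since dice_coef lies in (0, 1], subtracting it lets the sum drop below zero even when training is behaving normally. A non-negative variant, which differs from the original only by a constant of 1 (so gradients are unchanged), uses the dice_coef_loss defined above:

    from keras.losses import binary_crossentropy

    def bce_dice_loss_nonneg(y_true, y_pred):
        # 0.5*BCE + (1 - Dice): same gradients as 0.5*BCE - Dice up to a
        # constant, but the value stays non-negative and is easier to read.
        return 0.5 * binary_crossentropy(y_true, y_pred) + dice_coef_loss(y_true, y_pred)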

MaskRCNN++

Hello, people who thought of UNet++. Thank you for leveraging your brilliance to contribute to society. I don't really have an issue; I just wanted to ask whether there is going to be an implementation of MaskRCNN++, the Mask R-CNN + UNet++ model you mention in the paper, for advanced instance segmentation.

Best regards from a computer vision intern.

Attempting to use uninitialized value Adam/lr

Environment
keras 2.2.5
tensorflow 1.14.0

When I run this code on my own dataset with the Adam optimizer, an error occurs: "Attempting to use uninitialized value Adam/lr". I can't figure it out, since explicit initialization is not normally necessary in Keras. So I added this code before model.fit(), and the error goes away:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

But this line would overwrite the weights loaded from ImageNet. Does anyone have ideas about this issue?
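
One workaround seen in similar TF1/Keras reports (an assumption, not verified against this repo) is to initialize only the variables that are still uninitialized, such as the freshly created Adam slots, which leaves the loaded ImageNet weights untouched:

    import tensorflow as tf
    import keras.backend as K

    sess = K.get_session()
    # Names of variables that have no value yet (e.g. Adam/lr and the slot variables).
    uninit_names = {n.decode() if isinstance(n, bytes) else n
                    for n in sess.run(tf.report_uninitialized_variables())}
    to_init = [v for v in tf.global_variables()
               if v.name.split(':')[0] in uninit_names]
    sess.run(tf.variables_initializer(to_init))   # does not touch loaded weights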

How to input 6-channel data into Xnet?

I set the model parameters as below:

model = Xnet(backbone_name='resnet152', input_shape=(None, None, 6), encoder_weights='imagenet11k', decoder_block_type='transpose')

And the error:

/device:GPU:0 with 10407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
image pairs number: 18
/home/universe/miniconda3/lib/python3.6/site-packages/keras_applications/imagenet_utils.py:279: UserWarning: This model usually expects 1 or 3 input channels. However, it was passed an input_shape with 6 input channels.
  str(input_shape[-1]) + ' input channels.')
Traceback (most recent call last):
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 6 and 3. Shapes are [6] and [3]. for 'Assign' (op: 'Assign') with input shapes: [6], [3].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_clothe.py", line 22, in <module>
    model = Xnet(backbone_name='resnet152', input_shape=(None, None, 6), encoder_weights='imagenet11k', decoder_block_type='transpose') # build UNet++
  File "/home/universe/jupyter/gxl/project/house_wall/models/Nested-UNet/segmentation_models/xnet/model.py", line 86, in Xnet
    include_top=False)
  File "/home/universe/jupyter/gxl/project/house_wall/models/Nested-UNet/segmentation_models/backbones/backbones.py", line 32, in get_backbone
    return backbones[name](*args, **kwargs)
  File "/home/universe/jupyter/gxl/project/house_wall/models/Nested-UNet/segmentation_models/backbones/classification_models/classification_models/resnet/models.py", line 69, in ResNet152
    load_model_weights(weights_collection, model, weights, classes, include_top)
  File "/home/universe/jupyter/gxl/project/house_wall/models/Nested-UNet/segmentation_models/backbones/classification_models/classification_models/utils.py", line 26, in load_model_weights
    model.load_weights(weights_path)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/keras/engine/network.py", line 1166, in load_weights
    f, self.layers, reshape=reshape)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/keras/engine/saving.py", line 1058, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2465, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 609, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 281, in assign
    validate_shape=validate_shape)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
    use_locking=use_locking, name=name)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3292, in create_op
    compute_device=compute_device)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3332, in _create_op_helper
    set_shapes_for_outputs(op)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2496, in set_shapes_for_outputs
    return _set_shapes_for_outputs(op)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2469, in _set_shapes_for_outputs
    shapes = shape_func(op)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2399, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/home/universe/miniconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 0 in both shapes must be equal, but are 6 and 3. Shapes are [6] and [3]. for 'Assign' (op: 'Assign') with input shapes: [6], [3].

compute iou function confusion

def compute_iou(img1, img2):

    img1 = np.array(img1)
    img2 = np.array(img2)

    if img1.shape[0] != img2.shape[0]:
        raise ValueError("Shape mismatch: the number of images mismatch.")
    IoU = np.zeros((img1.shape[0],), dtype=np.float32)
    for i in range(img1.shape[0]):
        im1 = np.squeeze(img1[i] > 0.5)
        im2 = np.squeeze(img2[i] > 0.5)

        if im1.shape != im2.shape:
            raise ValueError("Shape mismatch: im1 and im2 must have the same shape.")

        # Compute Dice coefficient
        intersection = np.logical_and(im1, im2)

        if im1.sum() + im2.sum() == 0:
            IoU[i] = 100
        else:
            IoU[i] = 2. * intersection.sum() * 100.0 / (im1.sum() + im2.sum())   # <-- the line in question
        # database.display_image_mask_pairs(im1, im2)

    return IoU

Is the marked line correct? Should it have the factor of 2 or not?
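
The marked line is the Dice coefficient, 2*|A n B| / (|A| + |B|). IoU drops the factor of 2 and divides by the union instead. A sketch of the per-image IoU under that reading:

    import numpy as np

    def iou_single(im1, im2):
        """IoU = intersection / union (no factor of 2); the marked line computes Dice."""
        im1 = np.asarray(im1, dtype=bool)
        im2 = np.asarray(im2, dtype=bool)
        intersection = np.logical_and(im1, im2).sum()
        union = np.logical_or(im1, im2).sum()
        return 100.0 if union == 0 else 100.0 * intersection / union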

code question

Hello, I have a problem when using your code: ValueError: The model expects 4 target arrays, but only received one array. Found: array with shape (8, 256, 256, 3). Specifically, the problem appears when I use " model = Model(input=img_input, output=[nestnet_output_1, nestnet_output_2, nestnet_output_3, nestnet_output_4]) ". The Keras version I use is 2.1.0, on top of TensorFlow 1.10.
Looking forward to your reply.
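
With deep supervision enabled the model has four outputs, so Keras expects four target arrays. A sketch of passing the same ground-truth mask for each head (model, x_train, and y_train are hypothetical names for your own objects):

    # Repeat the ground-truth masks once per deep-supervision head;
    # x_train: (N, 256, 256, 3), y_train: (N, 256, 256, num_class).
    model.fit(x_train, [y_train, y_train, y_train, y_train],
              batch_size=8, epochs=10)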

EfficientNet backbone

Hello,
Thank you for sharing your project. I was wondering why you didn't keep the EfficientNet backbone from the qubvel/segmentation_models implementation. Is there a reason, or is it possible to use your code with an EfficientNet backbone?

Thank you :)

Why is there no activation for Conv2DTranspose?

up1_2 = Conv2DTranspose(nb_filter[0], (2, 2), strides=(2, 2), name='up12', padding='same')(conv2_1)
conv1_2 = concatenate([up1_2, conv1_1], name='merge12', axis=bn_axis)
conv1_2 = standard_unit(conv1_2, stage='12', nb_filter=nb_filter[0])
All Conv2DTranspose layers are used without an activation, yet the Keras documentation says: "activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. 'linear' activation: a(x) = x)."
Why do you not use an activation in Conv2DTranspose?
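
Note that in the quoted snippet the transpose convolution is immediately followed by concatenation and standard_unit, which applies its own convolutions with activations, so the upsampling layer being linear appears to be by design in this code. If one wanted an activation there anyway, it is a one-argument change to the quoted line (a sketch, not the author's stated intent):

    up1_2 = Conv2DTranspose(nb_filter[0], (2, 2), strides=(2, 2), activation='relu',
                            name='up12', padding='same')(conv2_1)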

loading weight problem

Traceback (most recent call last):
File "BRATS2013_application.py", line 287, in
activation=config.activation)
File "D:\depressed\UNetPlusPlus-master\segmentation_models\xnet\model.py", line 86, in Xnet
include_top=False)
File "D:\depressed\UNetPlusPlus-master\segmentation_models\backbones\backbones.py", line 32, in get_backbone
return backbones[name](*args, **kwargs)
File "D:\depressed\UNetPlusPlus-master\segmentation_models\backbones\inception_v3.py", line 390, in InceptionV3
model.load_weights(weights)
File "D:\miniconda\lib\site-packages\keras\engine\network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "D:\miniconda\lib\site-packages\keras\engine\saving.py", line 1030, in load_weights_from_hdf5_group
str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 0 layers into a model with 188 layers.

Please guide me.
Thank you.

Implementing in colab

I have been trying to implement this code in Colab, but there are a lot of version issues.
The requirements file has tensorflow==1.14.1, which is supported by CUDA 10.0.
My Colab has tensorflow==2.2.0, which is supported by CUDA 10.1, and I'm getting the following error:
AttributeError Traceback (most recent call last)

in ()
----> 1 model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose') # build UNet++

7 frames

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
513 x = tf.sparse_placeholder(dtype, shape=shape, name=name)
514 else:
--> 515 x = tf.placeholder(dtype, shape=shape, name=name)
516 x._keras_shape = shape
517 x._uses_learning_phase = False

AttributeError: module 'tensorflow' has no attribute 'placeholder'

Is there any way this version problem can be fixed?
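
The code targets TF 1.x with standalone Keras, while current Colab ships TF 2.x. Two hedged ways to get back to a 1.x-compatible setup, run in a fresh cell before any imports (the exact pins are assumptions based on versions mentioned elsewhere in these issues):

    # Option 1: pin old releases explicitly.
    #   !pip install "tensorflow==1.14.0" "keras==2.2.5"
    #
    # Option 2: while Colab still offered it, switch the pre-installed runtime:
    #   %tensorflow_version 1.x

    import tensorflow as tf
    print(tf.__version__)   # the repo's code expects a 1.x release here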

Where is helper_functions ?

import helper_functions as H
...
IoU = H.compute_iou(y_test, p_test)
print(">> Testing dataset mIoU = {:.2f}%".format(np.mean(IoU)))

Some question in training

The following is the training output.
Has anyone met the same situation? The val_binary_accuracy is always 1.
25/25 [==============================] - 42s 2s/step - loss: 0.3984 - binary_accuracy: 0.8856 - val_loss: 0.0573 - val_binary_accuracy: 0.9995

Epoch 00001: saving model to D:/wcs/UNetPlusPlus-master/weights/road_001.h5
Epoch 2/50
25/25 [==============================] - 25s 991ms/step - loss: 0.0611 - binary_accuracy: 1.0000 - val_loss: 0.0041 - val_binary_accuracy: 1.0000

Epoch 00002: saving model to D:/wcs/UNetPlusPlus-master/weights/road_002.h5
Epoch 3/50
25/25 [==============================] - 24s 965ms/step - loss: 0.0200 - binary_accuracy: 1.0000 - val_loss: 0.0364 - val_binary_accuracy: 1.0000

Epoch 00003: saving model to D:/wcs/UNetPlusPlus-master/weights/road_003.h5
Epoch 4/50
25/25 [==============================] - 24s 978ms/step - loss: 0.0106 - binary_accuracy: 1.0000 - val_loss: 0.0312 - val_binary_accuracy: 1.0000

Epoch 00004: saving model to D:/wcs/UNetPlusPlus-master/weights/road_004.h5
Epoch 5/50
25/25 [==============================] - 25s 988ms/step - loss: 0.0071 - binary_accuracy: 1.0000 - val_loss: 0.0221 - val_binary_accuracy: 1.0000

Epoch 00005: saving model to D:/wcs/UNetPlusPlus-master/weights/road_005.h5
Epoch 6/50
25/25 [==============================] - 25s 984ms/step - loss: 0.0052 - binary_accuracy: 1.0000 - val_loss: 0.0155 - val_binary_accuracy: 1.0000

Epoch 00006: saving model to D:/wcs/UNetPlusPlus-master/weights/road_006.h5
Epoch 7/50
25/25 [==============================] - 25s 982ms/step - loss: 0.0040 - binary_accuracy: 1.0000 - val_loss: 0.0108 - val_binary_accuracy: 1.0000

Epoch 00007: saving model to D:/wcs/UNetPlusPlus-master/weights/road_007.h5
Epoch 8/50
25/25 [==============================] - 24s 974ms/step - loss: 0.0033 - binary_accuracy: 1.0000 - val_loss: 0.0079 - val_binary_accuracy: 1.0000

Epoch 00008: saving model to D:/wcs/UNetPlusPlus-master/weights/road_008.h5
Epoch 9/50
25/25 [==============================] - 25s 988ms/step - loss: 0.0028 - binary_accuracy: 1.0000 - val_loss: 0.0060 - val_binary_accuracy: 1.0000

Epoch 00009: saving model to D:/wcs/UNetPlusPlus-master/weights/road_009.h5
Epoch 10/50
25/25 [==============================] - 25s 982ms/step - loss: 0.0025 - binary_accuracy: 1.0000 - val_loss: 0.0047 - val_binary_accuracy: 1.0000

Epoch 00010: saving model to D:/wcs/UNetPlusPlus-master/weights/road_010.h5
Epoch 11/50
25/25 [==============================] - 25s 981ms/step - loss: 0.0022 - binary_accuracy: 1.0000 - val_loss: 0.0038 - val_binary_accuracy: 1.0000

Epoch 00011: saving model to D:/wcs/UNetPlusPlus-master/weights/road_011.h5
Epoch 12/50
25/25 [==============================] - 25s 983ms/step - loss: 0.0019 - binary_accuracy: 1.0000 - val_loss: 0.0030 - val_binary_accuracy: 1.0000

Epoch 00012: saving model to D:/wcs/UNetPlusPlus-master/weights/road_012.h5
Epoch 13/50
25/25 [==============================] - 25s 988ms/step - loss: 0.0017 - binary_accuracy: 1.0000 - val_loss: 0.0025 - val_binary_accuracy: 1.0000

Epoch 00013: saving model to D:/wcs/UNetPlusPlus-master/weights/road_013.h5
Epoch 14/50
25/25 [==============================] - 25s 983ms/step - loss: 0.0015 - binary_accuracy: 1.0000 - val_loss: 0.0022 - val_binary_accuracy: 1.0000

Epoch 00014: saving model to D:/wcs/UNetPlusPlus-master/weights/road_014.h5
Epoch 15/50
25/25 [==============================] - 25s 981ms/step - loss: 0.0014 - binary_accuracy: 1.0000 - val_loss: 0.0018 - val_binary_accuracy: 1.0000

Epoch 00015: saving model to D:/wcs/UNetPlusPlus-master/weights/road_015.h5
Epoch 16/50
25/25 [==============================] - 25s 981ms/step - loss: 0.0013 - binary_accuracy: 1.0000 - val_loss: 0.0016 - val_binary_accuracy: 1.0000

Epoch 00016: saving model to D:/wcs/UNetPlusPlus-master/weights/road_016.h5
Epoch 17/50
25/25 [==============================] - 25s 988ms/step - loss: 0.0011 - binary_accuracy: 1.0000 - val_loss: 0.0014 - val_binary_accuracy: 1.0000

Epoch 00017: saving model to D:/wcs/UNetPlusPlus-master/weights/road_017.h5
Epoch 18/50
25/25 [==============================] - 25s 982ms/step - loss: 0.0010 - binary_accuracy: 1.0000 - val_loss: 0.0012 - val_binary_accuracy: 1.0000

Epoch 00018: saving model to D:/wcs/UNetPlusPlus-master/weights/road_018.h5
Epoch 19/50
25/25 [==============================] - 25s 982ms/step - loss: 9.5877e-04 - binary_accuracy: 1.0000 - val_loss: 0.0011 - val_binary_accuracy: 1.0000

Epoch 00019: saving model to D:/wcs/UNetPlusPlus-master/weights/road_019.h5
Epoch 20/50
25/25 [==============================] - 25s 980ms/step - loss: 8.8322e-04 - binary_accuracy: 1.0000 - val_loss: 0.0010 - val_binary_accuracy: 1.0000

Epoch 00020: saving model to D:/wcs/UNetPlusPlus-master/weights/road_020.h5
Epoch 21/50
25/25 [==============================] - 25s 989ms/step - loss: 8.1642e-04 - binary_accuracy: 1.0000 - val_loss: 9.1734e-04 - val_binary_accuracy: 1.0000

Epoch 00021: saving model to D:/wcs/UNetPlusPlus-master/weights/road_021.h5
Epoch 22/50
25/25 [==============================] - 25s 983ms/step - loss: 7.5695e-04 - binary_accuracy: 1.0000 - val_loss: 8.3885e-04 - val_binary_accuracy: 1.0000

Epoch 00022: saving model to D:/wcs/UNetPlusPlus-master/weights/road_022.h5
Epoch 23/50
25/25 [==============================] - 25s 982ms/step - loss: 7.0407e-04 - binary_accuracy: 1.0000 - val_loss: 7.6486e-04 - val_binary_accuracy: 1.0000

Epoch 00023: saving model to D:/wcs/UNetPlusPlus-master/weights/road_023.h5
Epoch 24/50
25/25 [==============================] - 25s 980ms/step - loss: 6.5629e-04 - binary_accuracy: 1.0000 - val_loss: 7.1081e-04 - val_binary_accuracy: 1.0000

Epoch 00024: saving model to D:/wcs/UNetPlusPlus-master/weights/road_024.h5
Epoch 25/50
25/25 [==============================] - 25s 986ms/step - loss: 6.1347e-04 - binary_accuracy: 1.0000 - val_loss: 6.5125e-04 - val_binary_accuracy: 1.0000

Epoch 00025: saving model to D:/wcs/UNetPlusPlus-master/weights/road_025.h5
Epoch 26/50
25/25 [==============================] - 25s 983ms/step - loss: 5.7442e-04 - binary_accuracy: 1.0000 - val_loss: 6.0789e-04 - val_binary_accuracy: 1.0000

Epoch 00026: saving model to D:/wcs/UNetPlusPlus-master/weights/road_026.h5
Epoch 27/50
25/25 [==============================] - 25s 982ms/step - loss: 5.3929e-04 - binary_accuracy: 1.0000 - val_loss: 5.6109e-04 - val_binary_accuracy: 1.0000

Epoch 00027: saving model to D:/wcs/UNetPlusPlus-master/weights/road_027.h5
Epoch 28/50
25/25 [==============================] - 25s 980ms/step - loss: 5.0707e-04 - binary_accuracy: 1.0000 - val_loss: 5.2836e-04 - val_binary_accuracy: 1.0000

Epoch 00028: saving model to D:/wcs/UNetPlusPlus-master/weights/road_028.h5
Epoch 29/50
25/25 [==============================] - 25s 990ms/step - loss: 4.7771e-04 - binary_accuracy: 1.0000 - val_loss: 4.9583e-04 - val_binary_accuracy: 1.0000

Epoch 00029: saving model to D:/wcs/UNetPlusPlus-master/weights/road_029.h5
Epoch 30/50
25/25 [==============================] - 25s 984ms/step - loss: 4.5094e-04 - binary_accuracy: 1.0000 - val_loss: 4.6588e-04 - val_binary_accuracy: 1.0000

Epoch 00030: saving model to D:/wcs/UNetPlusPlus-master/weights/road_030.h5
Epoch 31/50
25/25 [==============================] - 24s 980ms/step - loss: 4.2627e-04 - binary_accuracy: 1.0000 - val_loss: 4.3289e-04 - val_binary_accuracy: 1.0000

Epoch 00031: saving model to D:/wcs/UNetPlusPlus-master/weights/road_031.h5
Epoch 32/50
25/25 [==============================] - 25s 983ms/step - loss: 4.0358e-04 - binary_accuracy: 1.0000 - val_loss: 4.1187e-04 - val_binary_accuracy: 1.0000

Epoch 00032: saving model to D:/wcs/UNetPlusPlus-master/weights/road_032.h5
Epoch 33/50
25/25 [==============================] - 25s 985ms/step - loss: 3.8265e-04 - binary_accuracy: 1.0000 - val_loss: 3.8633e-04 - val_binary_accuracy: 1.0000

Epoch 00033: saving model to D:/wcs/UNetPlusPlus-master/weights/road_033.h5
Epoch 34/50
25/25 [==============================] - 25s 984ms/step - loss: 3.6333e-04 - binary_accuracy: 1.0000 - val_loss: 3.7233e-04 - val_binary_accuracy: 1.0000

Epoch 00034: saving model to D:/wcs/UNetPlusPlus-master/weights/road_034.h5
Epoch 35/50
25/25 [==============================] - 25s 982ms/step - loss: 3.4537e-04 - binary_accuracy: 1.0000 - val_loss: 3.4674e-04 - val_binary_accuracy: 1.0000

Epoch 00035: saving model to D:/wcs/UNetPlusPlus-master/weights/road_035.h5
Epoch 36/50
25/25 [==============================] - 25s 982ms/step - loss: 3.2872e-04 - binary_accuracy: 1.0000 - val_loss: 3.3457e-04 - val_binary_accuracy: 1.0000

Epoch 00036: saving model to D:/wcs/UNetPlusPlus-master/weights/road_036.h5
Epoch 37/50
25/25 [==============================] - 25s 984ms/step - loss: 3.1336e-04 - binary_accuracy: 1.0000 - val_loss: 3.1528e-04 - val_binary_accuracy: 1.0000

Epoch 00037: saving model to D:/wcs/UNetPlusPlus-master/weights/road_037.h5
Epoch 38/50
25/25 [==============================] - 25s 984ms/step - loss: 2.9885e-04 - binary_accuracy: 1.0000 - val_loss: 3.0221e-04 - val_binary_accuracy: 1.0000

Epoch 00038: saving model to D:/wcs/UNetPlusPlus-master/weights/road_038.h5
Epoch 39/50
25/25 [==============================] - 25s 982ms/step - loss: 2.8693e-04 - binary_accuracy: 1.0000 - val_loss: 2.8710e-04 - val_binary_accuracy: 1.0000

Epoch 00039: saving model to D:/wcs/UNetPlusPlus-master/weights/road_039.h5
Epoch 40/50
25/25 [==============================] - 24s 980ms/step - loss: 2.7742e-04 - binary_accuracy: 1.0000 - val_loss: 2.8069e-04 - val_binary_accuracy: 1.0000

Epoch 00040: saving model to D:/wcs/UNetPlusPlus-master/weights/road_040.h5
Epoch 41/50
25/25 [==============================] - 25s 987ms/step - loss: 2.6816e-04 - binary_accuracy: 1.0000 - val_loss: 2.7068e-04 - val_binary_accuracy: 1.0000

Epoch 00041: saving model to D:/wcs/UNetPlusPlus-master/weights/road_041.h5
Epoch 42/50
25/25 [==============================] - 24s 980ms/step - loss: 2.5939e-04 - binary_accuracy: 1.0000 - val_loss: 2.5856e-04 - val_binary_accuracy: 1.0000

Epoch 00042: saving model to D:/wcs/UNetPlusPlus-master/weights/road_042.h5
Epoch 43/50
25/25 [==============================] - 25s 985ms/step - loss: 2.5100e-04 - binary_accuracy: 1.0000 - val_loss: 2.5420e-04 - val_binary_accuracy: 1.0000

Epoch 00043: saving model to D:/wcs/UNetPlusPlus-master/weights/road_043.h5
Epoch 44/50
25/25 [==============================] - 25s 981ms/step - loss: 2.4295e-04 - binary_accuracy: 1.0000 - val_loss: 2.4528e-04 - val_binary_accuracy: 1.0000

Epoch 00044: saving model to D:/wcs/UNetPlusPlus-master/weights/road_044.h5
Epoch 45/50
25/25 [==============================] - 25s 985ms/step - loss: 2.3524e-04 - binary_accuracy: 1.0000 - val_loss: 2.3748e-04 - val_binary_accuracy: 1.0000

Epoch 00045: saving model to D:/wcs/UNetPlusPlus-master/weights/road_045.h5
Epoch 46/50
25/25 [==============================] - 25s 980ms/step - loss: 2.2876e-04 - binary_accuracy: 1.0000 - val_loss: 2.3000e-04 - val_binary_accuracy: 1.0000

Epoch 00046: saving model to D:/wcs/UNetPlusPlus-master/weights/road_046.h5
Epoch 47/50
25/25 [==============================] - 25s 986ms/step - loss: 2.2333e-04 - binary_accuracy: 1.0000 - val_loss: 2.2572e-04 - val_binary_accuracy: 1.0000

Epoch 00047: saving model to D:/wcs/UNetPlusPlus-master/weights/road_047.h5
Epoch 48/50
25/25 [==============================] - 24s 979ms/step - loss: 2.1811e-04 - binary_accuracy: 1.0000 - val_loss: 2.1949e-04 - val_binary_accuracy: 1.0000

Epoch 00048: saving model to D:/wcs/UNetPlusPlus-master/weights/road_048.h5
Epoch 49/50
25/25 [==============================] - 25s 985ms/step - loss: 2.1302e-04 - binary_accuracy: 1.0000 - val_loss: 2.1420e-04 - val_binary_accuracy: 1.0000

Epoch 00049: saving model to D:/wcs/UNetPlusPlus-master/weights/road_049.h5
Epoch 50/50
25/25 [==============================] - 25s 984ms/step - loss: 2.0808e-04 - binary_accuracy: 1.0000 - val_loss: 2.1072e-04 - val_binary_accuracy: 1.0000

Epoch 00050: saving model to D:/wcs/UNetPlusPlus-master/weights/road_050.h5

My training code is:
from utils2 import *
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, ReduceLROnPlateau, TensorBoard
from segmentation_models import Unet, Nestnet, Xnet

# prepare model

model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose') # build UNet++

model.summary()
model.compile(optimizer=Adam(lr=1.0e-3), loss='binary_crossentropy', metrics=['binary_accuracy'])

# train model

batch_size = 4
img_size = 512
epochs = 50
train_im_path,train_mask_path = 'D:/xxx/unet_plus/road_test/train/imgs/','D:/xxx/unet_plus/road_test/train/labels/'
val_im_path,val_mask_path = 'D:/xxx/unet_plus/road_test/val/imgs/','D:/xxx/unet_plus/road_test/val/labels/'
train_set = get_train_val(train_im_path)
val_set = get_train_val(val_im_path)
train_number = len(train_set)
val_number = len(val_set)

training_generator = DataGenerator(train_im_path = train_im_path, train_mask_path=train_mask_path, img_size=img_size)
validation_generator = DataGenerator(train_im_path = val_im_path, train_mask_path=val_mask_path, img_size=img_size)

model_path = 'D:/xxx/UNetPlusPlus-master/weights/'
model_name = 'road_{epoch:03d}.h5'
model_file = os.path.join(model_path, model_name)
model_checkpoint = ModelCheckpoint(model_file, monitor='val_loss', verbose=1, save_best_only=False, mode='max')
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.5625), cooldown=0, patience=5, min_lr=0.5e-6)
callable = [model_checkpoint, lr_reducer, TensorBoard(log_dir='./log')]

history = model.fit_generator(generator=training_generator,
                              validation_data=validation_generator,
                              steps_per_epoch=train_number//batch_size,
                              validation_steps=val_number//batch_size,
                              use_multiprocessing=False,
                              epochs=epochs, verbose=1,
                              callbacks=callable)
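
binary_accuracy saturating at 1.0 usually just means the masks are overwhelmingly background, so predicting "all zeros" already scores close to 100%. A quick diagnostic sketch: check the foreground fraction of the labels, and prefer a Dice or IoU metric over binary_accuracy when it is tiny.

    import numpy as np

    def foreground_fraction(mask_batch):
        """Fraction of positive pixels in a batch of binary masks (values in {0, 1})."""
        mask_batch = np.asarray(mask_batch)
        return float(mask_batch.sum()) / mask_batch.size

    # If this returns something tiny (e.g. < 0.01), accuracy near 1.0 is expected
    # and Dice/IoU is far more informative than binary_accuracy.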
