vanvalenlab / deepcell-tf
Deep Learning Library for Single Cell Analysis
Home Page: https://deepcell.readthedocs.io
License: Other
The default "accuracy" metric does not properly calculate when using "channels_first", as the axis it calculates is -1, and does not vary with image_data_formats.
Currently, when a new training notebook for sample-based segmentation is generated following the template in scripts/misc/miscellaneous, the full path to the data is not provided. Specifically, this line from Interior-Edge Segmentation 2D Sample Based.ipynb
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
is missing. Instead, DATA_FILE and DATA_DIR are passed separately to train_model_sample.
However, DATA_FILE has not yet been modified and represents only the name of the directory, not the full path, which results in a directory-not-found error.
run_models_on_directory takes an abstract model_fn from the model_zoo. However, the first argument it passes to the model_fn is batch_shape, which only a few of the models in model_zoo accept as an argument. This model_fn needs to be called in a flexible way that can pass this argument without crashing due to "unexpected keyword argument 'batch_shape'".
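A sketch of one way to call the model_fn flexibly; call_model_fn is a hypothetical helper, not part of deepcell-tf:

```python
import inspect

def call_model_fn(model_fn, batch_shape, **model_kwargs):
    # Only forward batch_shape if the model function actually declares it,
    # so models that do not accept the argument no longer crash.
    if 'batch_shape' in inspect.signature(model_fn).parameters:
        model_kwargs['batch_shape'] = batch_shape
    return model_fn(**model_kwargs)
```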
The tracking data generator should log and raise a custom error when data is poorly or incorrectly formatted. Ideally it should be fault tolerant; at a minimum it should raise an exception and fail deliberately.
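A minimal sketch of what this could look like; TrackingDataError and validate_tracking_data are hypothetical names, and the 5D (batch, frames, x, y, channels) layout is an assumption:

```python
class TrackingDataError(ValueError):
    """Raised when tracking data is poorly or incorrectly formatted."""


def validate_tracking_data(X, y):
    # The 5D (batch, frames, x, y, channels) layout is an assumption here;
    # adjust the checks to whatever shape the generator actually expects.
    if X.ndim != 5 or y.ndim != 5:
        raise TrackingDataError(
            'Expected 5D arrays, got X.ndim={} and y.ndim={}'.format(
                X.ndim, y.ndim))
    if X.shape[:-1] != y.shape[:-1]:
        raise TrackingDataError(
            'X and y must share batch/frame/spatial dimensions, got '
            '{} vs {}'.format(X.shape, y.shape))
```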
tensorflow gets most of its functions in keras.preprocessing.image from another repository, keras_preprocessing. In newer versions, tensorflow has removed several functions required to build new Iterator classes, such as apply_transform and flip_axis.
These functions should be imported directly from keras_preprocessing. Additionally, there are several functions in tensorflow that are imported directly from keras_preprocessing without any additional modification or documentation. I am worried these may also become "deprecated", and perhaps they should be imported directly from keras_preprocessing as well. This library will also give us direct access to transform_matrix_offset_center.
This is a direct cause of #102, but only for the tensorflow imports.
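A sketch of the proposed import change, assuming the standalone keras_preprocessing package is installed and that the installed version still exposes these helpers (newer releases have renamed or removed some of them):

```python
# Assumes keras_preprocessing is installed and still provides these helpers;
# newer releases may have renamed or removed apply_transform and flip_axis.
from keras_preprocessing.image import (
    apply_transform,
    flip_axis,
    transform_matrix_offset_center,
)
```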
Pylint has an extension that checks for docstring coverage and reports errors and mismatches between function arguments and documented arguments.
In the model_zoo there are many models that take a "permute" flag when instantiating. This was originally a workaround for channels_first vs channels_last, but if the activation is applied along the channel axis, this flag should not be required.
All notebooks (deepcell, watershed, disc, 2D, 3D, etc.) should download data and then be run on that downloaded data.
Love everything here and would really like to get it running on my system (perhaps I should just use the docker or go back to an older version of tensorflow / keras.preprocessing). Alas, I'm new to this.
I'm just getting errors with importing specific packages that have recently been changed. e.g.
from tensorflow.python.keras.preprocessing.image import apply_transform
from tensorflow.python.keras.preprocessing.image import flip_axis
from keras_maskrcnn.preprocessing.generator import Generator as _MaskRCNNGenerator
I'm trying to make a workaround for them in my clone of deepcell-tf, but am new to tf. Perhaps you are already aware of these issues with newer versions of tf / keras? Or could you offer some advice on how to get it running?
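Not an official fix, but one possible stopgap (a sketch, assuming the code only needs the old flip_axis behavior) is to define the removed helper locally:

```python
import numpy as np

def flip_axis(x, axis):
    # Reverse the array along the given axis, mirroring the behavior of the
    # removed keras helper; this is a local workaround, not a deepcell-tf API.
    x = np.asarray(x).swapaxes(axis, 0)
    x = x[::-1, ...]
    x = x.swapaxes(0, axis)
    return x
```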
This is the stack trace seen when running the bn_dense_feature_net_3D model with image_data_format = channels_first. So far, this does not appear when using channels_last, but that may be because we error out beforehand.
The root cause comes from the loss function discriminative_instance_loss_3D:
Traceback (most recent call last):
File "deepcell_scripts/mousebrain_train.py", line 94, in <module>
train_model_on_training_data()
File "deepcell_scripts/mousebrain_train.py", line 89, in train_model_on_training_data
shear=False
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_training_functions.py", line 399, in train_model_movie
LearningRateScheduler(lr_sched)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 1598, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/training_generator.py", line 191, in fit_generator
x, y, sample_weight=sample_weight, class_weight=class_weight)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 1390, in train_on_batch
outputs = self.train_function(ins)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/backend.py", line 2824, in __call__
fetches=fetches, feed_dict=feed_dict, **self.session_kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [256,2560], In[1]: [7680,256]
[[Node: loss/softmax_1_loss/Tensordot/MatMul = MatMul[T=DT_FLOAT, _class=["loc:@training/SGD/gradients/loss/softmax_1_loss/Tensordot/MatMul_grad/MatMul_1"], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](loss/softmax_1_loss/Tensordot/Reshape, loss/softmax_1_loss/Tensordot/Reshape_1)]]
[[Node: loss/add_8/_279 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3384_loss/add_8", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'loss/softmax_1_loss/Tensordot/MatMul', defined at:
File "deepcell_scripts/mousebrain_train.py", line 94, in <module>
train_model_on_training_data()
File "deepcell_scripts/mousebrain_train.py", line 89, in train_model_on_training_data
shear=False
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_training_functions.py", line 368, in train_model_movie
model.compile(loss=loss_function, optimizer=optimizer)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 428, in compile
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/training_utils.py", line 438, in weighted
score_array = fn(y_true, y_pred)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_training_functions.py", line 366, in loss_function
return discriminative_instance_loss_3D(y_true, y_pred)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_helper_functions.py", line 355, in discriminative_instance_loss_3D
cells_summed = tf.tensordot(y_true, y_pred, axes=[[0, 1, 2, 3], [0, 1, 2, 3]])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 3004, in tensordot
ab_matmul = matmul(a_reshape, b_reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 2122, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 4279, in mat_mul
name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Matrix size-incompatible: In[0]: [256,2560], In[1]: [7680,256]
[[Node: loss/softmax_1_loss/Tensordot/MatMul = MatMul[T=DT_FLOAT, _class=["loc:@training/SGD/gradients/loss/softmax_1_loss/Tensordot/MatMul_grad/MatMul_1"], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](loss/softmax_1_loss/Tensordot/Reshape, loss/softmax_1_loss/Tensordot/Reshape_1)]]
[[Node: loss/add_8/_279 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3384_loss/add_8", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
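One possible direction (a sketch, not the project's fix; the 5D channels_last layout (batch, frames, x, y, channels) is an assumption) is to choose the tensordot axes from image_data_format rather than hard-coding [0, 1, 2, 3]:

```python
import tensorflow as tf
from tensorflow.python.keras import backend as K

def _non_channel_axes(ndim=5):
    # Contract over every axis except the channel axis, wherever
    # image_data_format puts it.
    channel_axis = 1 if K.image_data_format() == 'channels_first' else ndim - 1
    return [a for a in range(ndim) if a != channel_axis]

def cells_summed(y_true, y_pred):
    axes = _non_channel_axes(ndim=5)
    return tf.tensordot(y_true, y_pred, axes=[axes, axes])
```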
_reduce_median cannot reshape with a variable batch dimension (i.e. Dimension(None)).
Currently using tf.contrib.percentile(X, 50), but this finds a median across all channels, not just across row/col data.
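A sketch of one alternative, assuming TF 1.x with tf.contrib available and channels_last input: compute the median over the row/col axes only.

```python
from tensorflow.contrib.distributions import percentile  # TF 1.x only

def spatial_median(images):
    # images assumed to be (batch, rows, cols, channels) for channels_last.
    # keep_dims preserves rank so the result broadcasts against `images`.
    return percentile(images, 50.0, axis=[1, 2], keep_dims=True)
```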
Old versions of keras also called this border_mode, but since the project is attempting to stay up to date with new versions of keras, the border_mode flag should be removed in favor of the name-consistent "padding" flag.
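For reference, the keras 2 / tf.keras spelling only changes the keyword, not the behavior (a generic illustration, not deepcell-tf code):

```python
from tensorflow.python.keras.layers import Conv2D

# Preferred, keras 2 / tf.keras spelling:
conv = Conv2D(64, (3, 3), padding='same')
# Legacy keras 1.x spelling that should be removed:
# conv = Convolution2D(64, 3, 3, border_mode='same')
```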
A number of images contain cells that are out of the focal plane and overlap with cells in the focal plane. These "stacked" cells could be segmented and tracked if the data structure allowed for it.
Edit: This will be fixed by #140 which includes an edit to the README with an example tracked image.
On both TensorFlow versions 1.11.0 and 1.12.0, the FilterDetections test case test_mini_batch will fail most of the time inside TravisCI. It will pass eventually if re-run enough.
On TensorFlow version 1.11.0, the test sometimes fails its self.assertAllEqual(actual_scores, expected_scores) assertion due to a correctness problem (the wrong entry is suppressed).
On TensorFlow version 1.12.0, the test sometimes fails its self.assertAllEqual(actual_scores, expected_scores) assertion due to a NonMaxSuppression shape issue.
Greetings,
I have deployed the Deepcell notebook example Interior-Edge Segmentation 2D Fully Convolutional.ipynb on Google Colab. When I try to load the dataset HeLa_S3.npz from Deepcell's AWS example bucket, the computing environment runs out of memory and crashes. Can you please advise on the recommended hardware requirements on which Deepcell has been proven to work for this notebook, so that I can set up a computing environment that meets these requirements and run Deepcell?
Thank you very much,
Greetings,
When running the notebook Interior-Edge Segmentation 2D Fully Convolutional, on cell [7] I obtain the following error:
X_train shape: (6480, 216, 256, 1)
y_train shape: (6480, 216, 256, 1)
X_test shape: (720, 216, 256, 1)
y_test shape: (720, 216, 256, 1)
Output Shape: (None, 216, 256, 2)
Number of Classes: 2
Training on 1 GPUs
MemoryErrorTraceback (most recent call last)
<ipython-input-7-00e3831b7992> in <module>
16 flip=True,
17 shear=False,
---> 18 zoom_range=(0.8, 1.2))
/usr/local/lib/python3.5/dist-packages/deepcell/training.py in train_model_conv(model, dataset, expt, test_size, n_epoch, batch_size, num_gpus, frames_per_batch, transform, optimizer, log_dir, model_dir, model_name, focal, gamma, lr_sched, rotation_range, flip, shear, zoom_range, seed, **kwargs)
309 batch_size=batch_size,
310 transform=transform,
--> 311 transform_kwargs=kwargs)
312
313 val_data = datagen_val.flow(
/usr/local/lib/python3.5/dist-packages/deepcell/image_generators.py in flow(self, train_dict, batch_size, skip, transform, transform_kwargs, shuffle, seed, save_to_dir, save_prefix, save_format)
694 save_to_dir=save_to_dir,
695 save_prefix=save_prefix,
--> 696 save_format=save_format)
697
698 def random_transform(self, x, y=None, seed=None):
/usr/local/lib/python3.5/dist-packages/deepcell/image_generators.py in __init__(self, train_dict, image_data_generator, batch_size, skip, shuffle, transform, transform_kwargs, seed, data_format, save_to_dir, save_prefix, save_format)
518 'with shape', self.x.shape)
519
--> 520 self.y = _transform_masks(y, transform, data_format=data_format, **transform_kwargs)
521 self.channel_axis = 3 if data_format == 'channels_last' else 1
522 self.skip = skip
/usr/local/lib/python3.5/dist-packages/deepcell/image_generators.py in _transform_masks(y, transform, data_format, **kwargs)
154 if data_format == 'channels_first':
155 y_transform = np.rollaxis(y_transform, 1, y.ndim)
--> 156 y_transform = to_categorical(y_transform)
157 if data_format == 'channels_first':
158 y_transform = np.rollaxis(y_transform, y.ndim - 1, 1)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/utils/np_utils.py in to_categorical(y, num_classes)
44 num_classes = np.max(y) + 1
45 n = y.shape[0]
---> 46 categorical = np.zeros((n, num_classes), dtype=np.float32)
47 categorical[np.arange(n), y] = 1
48 output_shape = input_shape + (num_classes,)
MemoryError:
I am running Deepcell on a GeForce GTX 1050 Ti with 4 GB of memory; the CPU has 32 GB of RAM. Could you please let me know the reason for this error?
Thank you,
If seg=True in metrics.Metrics(), an error is triggered during report generation.
Currently, in the scripts directory it's difficult to understand what each of the sub-directories is for and what to expect inside. A README file that describes the contents of each sub-directory could alleviate this problem without needing to abandon the organization of sub-directories.
Greetings,
Could you please let me know if it is possible to train DeepCell using images of a specific size (e.g., 256x256 pixels) and use it to predict segmentation on images of a different size (e.g., 2048x2048 pixels)?
Thank you
When calling run_models_on_directory with split=True, we receive the following error traceback:
Traceback (most recent call last):
File "deepcell_scripts/running_scripts/ecoli_kc_run.py", line 45, in <module>
split=True)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_running_functions.py", line 108, in run_models_on_directory
process=process)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_running_functions.py", line 74, in run_model_on_directory
std=std, split=split, process=process)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_running_functions.py", line 44, in run_model
model_output = np.zeros((n_features, 2*image_size_x-win_x*2, 2*image_size_y-win_y*2), dtype='float32')
TypeError: 'float' object cannot be interpreted as an integer
It seems like a Python 2 vs. Python 3 division mismatch.
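A sketch of the likely fix (the helper name make_output is illustrative): make sure the sizes fed to np.zeros are integers.

```python
import numpy as np

def make_output(n_features, image_size_x, image_size_y, win_x, win_y):
    # Force the sizes to ints; under Python 3, `/` always returns a float, so
    # any upstream `size / 2` style computation needs `//` or an explicit cast.
    shape = (int(n_features),
             2 * int(image_size_x) - 2 * int(win_x),
             2 * int(image_size_y) - 2 * int(win_y))
    return np.zeros(shape, dtype='float32')
```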
Model with Watershed post-processing fails on smaller images (160x135 pixels)
We're (@manugarciaquismondo and @cornhundred) getting the following error in the notebook "DeepCell Transform 2D Fully Convolutional" (cell 4)
FileNotFoundError: [Errno 2] No such file or directory: '/data/data/cells/ecoli/kc/set1/processed'
We were able to successfully build the docker container and run Jupyter notebook.
During process_image when std=False and remove_zeros=False, the image is divided by the median pixel value. If this value is 0, we encounter:
RuntimeWarning: invalid value encountered in true_divide channel_img /= p50
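One possible guard (a sketch, not the library's chosen fix) is to skip the normalization when the median is zero:

```python
import numpy as np

def safe_median_normalize(channel_img):
    # Skip the division when the median pixel value is zero so we do not
    # produce NaNs or inf from a divide-by-zero.
    p50 = np.percentile(channel_img, 50)
    if p50 == 0:
        return channel_img
    return channel_img / p50
```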
The links in the README under "Cell Edge and Cell Interior Segmentation" seem to be broken (I get a Github 404 page).
Consider changing this to simply report the number of empty frames after calculating object metrics.
Many users will not be able to authenticate with NGC in order to pull the image. By changing the base image, we can help all users with this issue.
README should be updated as well to indicate that this is no longer an issue.
Currently, data sampling occurs during the get_data function. This limits the amount of data that we can load simultaneously, as each pixel is duplicated many times in each sampled window. A much more efficient approach would sample data on the fly in every batch of the data generators.
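A sketch of what on-the-fly sampling could look like (names are hypothetical, and the sampled centers are assumed to sit at least win_x/win_y pixels from the image edge):

```python
import numpy as np

def sample_window_batch(X, pixel_coords, batch_size, win_x, win_y, seed=None):
    """Sample a batch of windows on the fly instead of pre-expanding them.

    X: (n_images, rows, cols, channels) array.
    pixel_coords: (n_pixels, 3) array of (image_index, row, col) centers.
    """
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(pixel_coords), size=batch_size, replace=False)
    windows = []
    for b, r, c in pixel_coords[idx]:
        windows.append(X[b, r - win_x:r + win_x + 1, c - win_y:c + win_y + 1, :])
    return np.stack(windows)
```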
Enable models to be trained with multiple GPUs. tf.keras has its own multi_gpu_model function that can be leveraged; however, after some preliminary testing it seems there are a couple of issues:
If the validation data is not distributed evenly across all GPUs, we will receive an error like InvalidArgumentError: paddings must be less than the dimension size: 0, 0 not less than 0.
Saving a model (with ModelCheckpoint) will not work with the GPU model. fchollet says we can use the model that is INPUT to multi_gpu_model to call model.save(); however, this will not work for the ModelCheckpoint callback. A possible workaround is sketched below.
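A minimal sketch of that workaround, under the assumption that temporarily swapping in the single-GPU template model during checkpointing is acceptable; TemplateModelCheckpoint is a hypothetical name, not part of deepcell-tf:

```python
from tensorflow.python.keras.callbacks import ModelCheckpoint

class TemplateModelCheckpoint(ModelCheckpoint):
    """Checkpoint callback that saves the single-GPU template model."""

    def __init__(self, template_model, filepath, **kwargs):
        super(TemplateModelCheckpoint, self).__init__(filepath, **kwargs)
        self.template_model = template_model

    def on_epoch_end(self, epoch, logs=None):
        # Temporarily swap in the template model so the parent class saves it
        # instead of the multi_gpu_model wrapper.
        original_model = self.model
        self.model = self.template_model
        super(TemplateModelCheckpoint, self).on_epoch_end(epoch, logs)
        self.model = original_model
```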
An alternative approach is to investigate Uber's open-source distributed training solution, Horovod. The nvidia base images appear to come with Horovod pre-installed as well, so it is a potentially viable alternative.
Hi,
I'm trying to run deepcell-tf using the "Watershed Distance Transform for 2D Data" notebook as a python script on some of our images. For now I'm only using 2 sets of nuclear stain images; each set has 1 image. However, unfortunately it does not seem to work as expected.
After training the foreground/background separation model it crashes at step [5] and produces the following traceback:
Traceback (most recent call last):
File "runDeepCellWatershedSeq.py", line 158, in
shear=False)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/DeepCell-0.1-py3.6.egg/deepcell/training.py", line 169, in train_model_sample
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1761, in fit_generator
initial_epoch=initial_epoch)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 190, in fit_generator
x, y, sample_weight=sample_weight, class_weight=class_weight)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1537, in train_on_batch
outputs = self.train_function(ins)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 2897, in call
fetched = self._callable_fn(*array_vals)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1454, in call
self._session._session, self._handle, args, status, None)
File "/home/foo/miniconda3/envs/deepCell-tf/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [10,2] vs. [10,4]
[[Node: loss_1/softmax_1_loss/mul_1 = Mul[T=DT_FLOAT, _class=["loc:@training_1/SGD/gradients/loss_1/softmax_1_loss/mul_1_grad/Reshape_1"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_softmax_1_target_0_1/_1259, loss_1/softmax_1_loss/Log)]]
[[Node: training_1/SGD/gradients/batch_normalization_15/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad/_1481 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1300_...chNormGrad", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
The problem seems to be: Incompatible shapes: [10,2] vs. [10,4]
How can this be fixed?
Tensorflow is moving code out of the tensorflow.python.keras._impl folder and into simply tensorflow.python.keras. All imports from the _impl folder must be changed for future releases.
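For example, an import like the following needs updating (Conv2D is just an illustrative symbol; the rest of the module path is assumed unchanged after the move):

```python
# Old (pre-move) import path:
# from tensorflow.python.keras._impl.keras.layers import Conv2D
# New import path:
from tensorflow.python.keras.layers import Conv2D
```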
Notebooks should all have clean runs and each should:
- use deepcell.datasets
- use the fgbg transform for fgbg models
- use the model_name parameter
Need to add a Permute final layer for channel_axis != 1.
All training modes should be supported by the training notebook. Related to vanvalenlab/kiosk-console#88
Siamese data generator tests should include data correctness tests for both channels_first and channels_last.
The bn_dense_multires_feature_net_3D model hits an OOM error (no matter how many GPUs are being used) during its multires_block() call. This OOM issue happens for both image_data_formats (channels_first or channels_last).
The line number this occurs on varies, due to resource variation (I assume), but an example stack trace is below:
Caused by op 'concatenate_22/concat', defined at:
File "deepcell_scripts/mousebrain_train.py", line 94, in <module>
train_model_on_training_data()
File "deepcell_scripts/mousebrain_train.py", line 74, in train_model_on_training_data
model = the_model(batch_shape=batch_shape, permute=False)
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_model_zoo.py", line 1289, in bn_dense_multires_feature_net_3D
list_of_blocks.append(multires_block(list_of_blocks[-1], init = init, reg = reg))
File "/usr/local/lib/python3.5/dist-packages/deepcell/dc_model_zoo.py", line 1268, in multires_block
merge5 = Concatenate(axis = channel_axis)([merge4, act5])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py", line 314, in __call__
output = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/base.py", line 717, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/layers/merge.py", line 182, in call
return self._merge_function(inputs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/layers/merge.py", line 393, in _merge_function
return K.concatenate(inputs, axis=self.axis)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/_impl/keras/backend.py", line 2190, in concatenate
return array_ops.concat([to_dense(x) for x in tensors], axis)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 1189, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 953, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,369,10,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: concatenate_22/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _class=["loc:@training/SGD/gradients/AddN_163"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](concatenate_21/concat, activation_22/Relu, loss/activation_63_loss/ExpandDims_2/dim, ^swap_out_training/SGD/gradients/activation_22/Relu_grad/ReluGrad_1)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Just as tensorflow has the images tensorflow/tensorflow and tensorflow/tensorflow-gpu, deepcell should also support these options for users that may not have access to GPU machines.
The two files model_zoo.py and training.py are not well documented (thanks @msschwartz21).
These two files should both be documented with sphinx-compatible docstrings.
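A sketch of the target style, using a Google/Napoleon-style docstring that Sphinx can parse; the signature shown is illustrative rather than the exact model_zoo.py signature:

```python
def bn_feature_net_2D(receptive_field=61, n_features=3, norm_method='std'):
    """Create a 2D feature net with batch normalization.

    Args:
        receptive_field (int): Edge length of the receptive field, in pixels.
        n_features (int): Number of output feature classes.
        norm_method (str): Normalization method used by the normalization layer.

    Returns:
        tensorflow.keras.Model: The constructed model.
    """
    # Model construction is omitted in this documentation sketch.
```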
Greetings,
We managed to run the notebook Interior-Edge Segmentation 2D Fully Convolutional and trained a DeepCell model by using the dataset https://deepcell-data.s3.amazonaws.com/nuclei/HeLa_S3.npz.
Our aim is to use DeepCell to train a model with MIBI images. We have found that, on the webpage http://www.deepcell.org/predict, a model called mibi is available for training. Can you please provide such a model (in the form of a matrix of model weights in .h5 format) and the image suite that was used to train this model (in the form of a .npz file)?
Thank you very much,
The current base image is pointed to the vvlab organization. Change this organization to nvidia, so other users that have the DGX station can pull the base image and authenticate with nvidia. Additionally, the base image version (18.04) is out of date, and can be updated to 18.08.
This will upgrade the tensorflow version from 1.07 to 1.09.
Tracking.py (as well as its associated data generator) currently only supports grayscale images. Training and tracking should be robust to images with multiple channels (i.e. RGB).
What is it and how does it work?
The loss function seems to work but the results are linear and not well clustered in the embedded space.
All of the deepcell-tf layers should be fully tested:
I've used older versions of deepcell-tf in the past with some degree of success to segment HeLa cells from brightfield images. Back then, the structure of the training data (2 features, cell edge and cell interior) for sampling was as follows:
HeLa
│
└───set1
│ │ feature_0.png
│ │ feature_1.png
│ │ phase.png
│
└───set2
│ │ feature_0.png
│ │ feature_1.png
│ │ phase.png
...
I've understood from the source code that this structure has been updated. I've tried running the provided Jupyter notebooks to get a feel for how the training data .npz files are constructed, but as I don't have access to the original raw data, I cannot replicate this training data structure with my own data. What's the preferred structure of raw images/annotated images to properly generate training data?