
segments-ai / fast-labeling-workflow


Building large-scale datasets is a time-consuming endeavour, especially for tasks like image segmentation where the labels need to be very precise. This tutorial shows how you can speed up your labeling workflow for image segmentation with Segments.ai, using model training in the loop.


fast-labeling-workflow's Introduction




Segments.ai is the training data platform for computer vision engineers and labeling teams. Our powerful labeling interfaces, easy-to-use management features, and extensive API integrations help you iterate quickly between data labeling, model training and failure case discovery.

Quickstart

Walk through the Python SDK quickstart.
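For a flavor of what the quickstart covers, here is a minimal sketch of creating a client and uploading an image by URL; the API key, dataset identifier, and image URL below are placeholders, not values from this repo:

from segments import SegmentsClient

client = SegmentsClient('YOUR_API_KEY')

# Add an image to an existing dataset; the sample is then ready for labeling.
attributes = {'image': {'url': 'https://example.com/image.jpg'}}
client.add_sample('jane/flowers', 'image.jpg', attributes=attributes)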

Documentation

Please refer to the documentation for usage instructions.

Blog

Read our blog posts to learn more about the platform.

Changelog

The most notable changes in v1.0 of the Python SDK compared to v0.73 include:

  • Added Python type hints and better auto-generated docs.
  • Improved error handling: functions now raise proper exceptions.
  • New functions for managing issues and collaborators.

You can upgrade to v1.0 with pip install --upgrade segments-ai. Please be mindful of the following breaking changes:

  • The client functions now return classes instead of dicts, so you should access properties using dot notation (e.g. dataset.description) instead of dict-based indexing (e.g. dataset['description']); see the migration sketch after this list.
  • Functions now consistently raise exceptions, instead of sometimes silently failing with a print statement. You might want to handle these exceptions with a try-except block.
  • Some legacy fields are no longer returned: dataset.tasks, dataset.task_readme, dataset.data_type.
  • The default value of the id_increment argument in utils.export_dataset() and utils.get_semantic_bitmap() is changed from 1 to 0.
  • Python 3.6 and lower are no longer supported.
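As an illustration of the first two changes, here is a minimal before/after migration sketch; the API key and dataset identifier are placeholders:

from segments import SegmentsClient

client = SegmentsClient('YOUR_API_KEY')

try:
    dataset = client.get_dataset('jane/flowers')  # hypothetical dataset identifier
    print(dataset.description)       # v1.0: dot notation on the returned class
    # print(dataset['description'])  # v0.x dict access no longer works
except Exception as e:               # v1.0 raises exceptions instead of printing and continuing
    print(f'Could not fetch dataset: {e}')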

fast-labeling-workflow's People

Contributors

dbbert, ermaconomist


fast-labeling-workflow's Issues

model = train_model(dataset) produces an error

I've been trying to run this notebook in Colab: https://github.com/segments-ai/fast-labeling-workflow/blob/master/demo.ipynb

First of all, the notebook has a few mistakes that should be corrected. For example, in this line:
dataset = SegmentsDataset(release, labelset='ground-truth', filter_by='labeled'),
the filter_by='labeled' argument should be removed; otherwise the resulting dataset will be empty. Obviously, the data should not be filtered by 'labeled' before anything has been labeled.
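For reference, the corrected line would then read (a sketch of the fix described above):

from segments import SegmentsDataset

dataset = SegmentsDataset(release, labelset='ground-truth')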

I fixed this myself, but when I run the cell that has this code snippet:

from utils import train_model
model = train_model(dataset)

I get the error message below. How could I resolve this?

Exporting dataset. This may take a while...
100%|██████████| 96/96 [00:00<00:00, 3081.17it/s]
Exported to ./export_coco-instance_payman21_tomatoes_v0.1.json. Images in segments/payman21_tomatoes/v0.1
Dataset was already registered
[02/25 00:41:02 d2.data.datasets.coco]: Loaded 0 images in COCO format from ./export_coco-instance_payman21_tomatoes_v0.1.json
Metadata(evaluator_type='coco', image_root='segments/payman21_tomatoes/v0.1', json_file='./export_coco-instance_payman21_tomatoes_v0.1.json', name='my_dataset', thing_classes=['object'], thing_dataset_id_to_contiguous_id={1: 0})

GeneralizedRCNN(
(backbone): FPN(
(fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
(fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(top_block): LastLevelMaxPool()
(bottom_up): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
)
(res2): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv1): Conv2d(
64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
)
(conv3): Conv2d(
64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
)
)
(res3): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv1): Conv2d(
256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
)
(conv3): Conv2d(
128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
)
)
(res4): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
(conv1): Conv2d(
512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(3): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(4): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
(5): BottleneckBlock(
(conv1): Conv2d(
1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
)
(conv3): Conv2d(
256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
)
)
)
(res5): Sequential(
(0): BottleneckBlock(
(shortcut): Conv2d(
1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
(conv1): Conv2d(
1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(1): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
(2): BottleneckBlock(
(conv1): Conv2d(
2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
)
(conv3): Conv2d(
512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
)
)
)
)
)
(proposal_generator): RPN(
(rpn_head): StandardRPNHead(
(conv): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation): ReLU()
)
(objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
(anchor_generator): DefaultAnchorGenerator(
(cell_anchors): BufferList()
)
)
(roi_heads): StandardROIHeads(
(box_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(box_head): FastRCNNConvFCHead(
(flatten): Flatten(start_dim=1, end_dim=-1)
(fc1): Linear(in_features=12544, out_features=1024, bias=True)
(fc_relu1): ReLU()
(fc2): Linear(in_features=1024, out_features=1024, bias=True)
(fc_relu2): ReLU()
)
(box_predictor): FastRCNNOutputLayers(
(cls_score): Linear(in_features=1024, out_features=2, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=4, bias=True)
)
(mask_pooler): ROIPooler(
(level_poolers): ModuleList(
(0): ROIAlign(output_size=(14, 14), spatial_scale=0.25, sampling_ratio=0, aligned=True)
(1): ROIAlign(output_size=(14, 14), spatial_scale=0.125, sampling_ratio=0, aligned=True)
(2): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
(3): ROIAlign(output_size=(14, 14), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
)
)
(mask_head): MaskRCNNConvUpsampleHead(
(mask_fcn1): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation): ReLU()
)
(mask_fcn2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation): ReLU()
)
(mask_fcn3): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation): ReLU()
)
(mask_fcn4): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation): ReLU()
)
(deconv): ConvTranspose2d(256, 256, kernel_size=(2, 2), stride=(2, 2))
(deconv_relu): ReLU()
(predictor): Conv2d(256, 1, kernel_size=(1, 1), stride=(1, 1))
)
)
)
[02/25 00:41:03 d2.data.datasets.coco]: Loaded 0 images in COCO format from ./export_coco-instance_payman21_tomatoes_v0.1.json

AssertionError Traceback (most recent call last)
<ipython-input> in <module>
1 # Train an instance segmentation model on the dataset
2 from utils import train_model
----> 3 model = train_model(dataset)

/content/fast-labeling-workflow/utils.py in train_model(dataset)
77 # Start the training
78 os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
---> 79 trainer = DefaultTrainer(cfg)
80 trainer.resume_or_load(resume=False)
81 trainer.train()

/usr/local/lib/python3.8/dist-packages/detectron2/engine/defaults.py in __init__(self, cfg)
376 model = self.build_model(cfg)
377 optimizer = self.build_optimizer(cfg, model)
--> 378 data_loader = self.build_train_loader(cfg)
379
380 model = create_ddp_model(model, broadcast_buffers=False)

/usr/local/lib/python3.8/dist-packages/detectron2/engine/defaults.py in build_train_loader(cls, cfg)
545 Overwrite it if you'd like a different data loader.
546 """
--> 547 return build_detection_train_loader(cfg)
548
549 @classmethod

/usr/local/lib/python3.8/dist-packages/detectron2/config/config.py in wrapped(*args, **kwargs)
205 def wrapped(*args, **kwargs):
206 if _called_with_cfg(*args, **kwargs):
--> 207 explicit_args = _get_args_from_config(from_config, *args, **kwargs)
208 return orig_func(**explicit_args)
209 else:

/usr/local/lib/python3.8/dist-packages/detectron2/config/config.py in _get_args_from_config(from_config_func, *args, **kwargs)
243 if name not in supported_arg_names:
244 extra_kwargs[name] = kwargs.pop(name)
--> 245 ret = from_config_func(*args, **kwargs)
246 # forward the other arguments to init
247 ret.update(extra_kwargs)

/usr/local/lib/python3.8/dist-packages/detectron2/data/build.py in _train_loader_from_config(cfg, mapper, dataset, sampler)
342 def _train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None):
343 if dataset is None:
--> 344 dataset = get_detection_dataset_dicts(
345 cfg.DATASETS.TRAIN,
346 filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,

/usr/local/lib/python3.8/dist-packages/detectron2/data/build.py in get_detection_dataset_dicts(names, filter_empty, min_keypoints, proposal_files, check_consistency)
250
251 for dataset_name, dicts in zip(names, dataset_dicts):
--> 252 assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
253
254 if proposal_files is not None:

AssertionError: Dataset 'my_dataset' is empty!
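A quick sanity check before training would have surfaced this earlier. A sketch, assuming dataset is the SegmentsDataset from the notebook (len() support is an assumption based on its PyTorch-style dataset interface):

print(len(dataset))  # 0 here reproduces the "Dataset 'my_dataset' is empty!" assertion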

Couple of issues here; main problem right now: expected str, bytes or os.PathLike object, not GeneralizedRCNN

Line 64 in utils.py had to be changed to: cfg.merge_from_file(model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))

I'm following the steps in your article: https://segments.ai/blog/speed-up-image-segmentation-with-model-assisted-labeling
I currently get the following error at the line model = train_model(dataset):

Metadata(evaluator_type='coco', image_root='segments\Dragos_Mycars\v0.1', json_file='041ae42f-25aa-434e-a303-1afbcfa3ce90_coco.json', name='my_dataset', thing_classes=['car'])
Config 'c:\users<>\detectron2\detectron2\model_zoo\configs\COCO-InstanceSegmentation\mask_rcnn_R_50_FPN_3x.yaml' has no VERSION. Assuming it to be compatible with latest v2.
Traceback (most recent call last):
File "c:\Users<>\2-1.py", line 28, in
model = train_model(dataset)
File "C:/Users/<>/fast-labeling-workflow\utils.py", line 64, in train_model
cfg.merge_from_file(model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
File "c:\users<>\detectron2\detectron2\config\config.py", line 23, in merge_from_file
loaded_cfg = _CfgNode.load_yaml_with_base(cfg_filename, allow_unsafe=allow_unsafe)
File "C:\ProgramData\Anaconda3\lib\site-packages\fvcore\common\config.py", line 49, in load_yaml_with_base
with PathManager.open(filename, "r") as f:
File "C:\ProgramData\Anaconda3\lib\site-packages\fvcore\common\file_io.py", line 647, in open
return self.__get_path_handler(path)._open( # type: ignore
File "C:\ProgramData\Anaconda3\lib\site-packages\fvcore\common\file_io.py", line 622, in __get_path_handler
path = os.fspath(path) # pyre-ignore
TypeError: expected str, bytes or os.PathLike object, not GeneralizedRCNN

Environment:

  • Windows 10, CUDA 10.2, Anaconda, Python 3.8.3
  • Tried running under different conda environments (base and a new env)
  • Reinstalled fvcore
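The TypeError itself points at the cause: model_zoo.get() builds and returns a model (hence the GeneralizedRCNN in the message), while merge_from_file() expects a path to a yaml file. A sketch of the usual detectron2 pattern, using the config name from the traceback (get_checkpoint_url exists in recent detectron2 releases, which also bears on the note about utils line 69 below):

from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file('COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml')
)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml'
)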


On a separate note, can you explain what is needed at this line?

# Get a list of image URLs
image_urls = get_image_urls('tomatoes')

I've tried comma-delimited URLs, a OneDrive4B-accessible URL containing images, a OneDrive4B-accessible URL to a JSON file containing image URLs, a local JSON file, and a txt file containing comma-delimited URLs.
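For context, get_image_urls() is a helper in this repo's utils.py that appears to return a plain list of demo image URLs; a Python list of direct, publicly reachable image URLs of your own should work the same way (the URLs below are hypothetical):

image_urls = [
    'https://example.com/images/0001.jpg',
    'https://example.com/images/0002.jpg',
]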


Also, utils.py line 69: model_zoo has no such function get_checkpoint_url.

thank you

OOM in Colab

I think I am getting an out-of-memory error in Colab when running the demo notebook. When training the model, the Colab runtime crashes without anything in app.log; only a line is added to /var/colab/ooms.

Here is the error I get when training the model:

WARNING [07/04 18:25:00 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[07/04 18:25:00 d2.data.datasets.coco]: Loaded 91 images in COCO format from ./export_bdavis_leaf_segmentation_v0.1.json
[07/04 18:25:00 d2.data.build]: Removed 0 images with no usable annotations. 91 images left.
[07/04 18:25:00 d2.data.build]: Distribution of instances among all 1 categories:
|  category  | #instances   |
|:----------:|:-------------|
|    leaf    | 303          |
|            |              |
[07/04 18:25:00 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[07/04 18:25:00 d2.data.build]: Using training sampler TrainingSampler
[07/04 18:25:00 d2.data.common]: Serializing 91 elements to byte tensors and concatenating them all ...
[07/04 18:25:00 d2.data.common]: Serialized dataset takes 0.15 MiB
WARNING [07/04 18:25:00 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.

and here is what is added to /var/colab/app.log:

Jul 4, 2021, 2:25:02 PM | WARNING | WARNING:root:kernel 2561a98b-6797-477e-b0bb-e56ee9e1744b restarted
Jul 4, 2021, 2:25:02 PM | INFO | KernelRestarter: restarting kernel (1/5), keep random ports

From the research I have done, these two warnings should not cause the Colab runtime to crash.

In case it is meaningful, here is the /var/colab/ooms file:

1625421114,6,519,1809323373,-;python3[14457]: segfault at 7f27ce09d1e0 ip 00007f27ce09d1e0 sp 00007ffc7e9e7a08 error 15 in libtensorflow_framework.so.2[7f27ce099000+7000]
1625422118,6,521,2812796452,-;python3[14954]: segfault at 1ffffffff ip 00007f94af574794 sp 00007fffebda61f0 error 4 in _message.cpython-37m-x86_64-linux-gnu.so[7f94af3ad000+23e000]
1625422534,6,523,3228497055,-;python3[21106]: segfault at 7f6ba3fe11e0 ip 00007f6ba3fe11e0 sp 00007fff80151078 error 15 in libtensorflow_framework.so.2[7f6ba3fdd000+7000]
1625423101,6,525,3795572467,-;python3[23759]: segfault at 1ffffffff ip 00007f4c7303e794 sp 00007ffd96d807f0 error 4 in _message.cpython-37m-x86_64-linux-gnu.so[7f4c72e77000+23e000]

It is running with a GPU and a high-RAM runtime type. I have run out of things to try. If you think it is something else, I can post this issue in another project.
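If it really is memory pressure, the standard detectron2 knobs shrink the training footprint. A sketch, assuming you can edit the cfg set up inside this repo's train_model() (the config keys are standard detectron2; the values are guesses):

cfg.SOLVER.IMS_PER_BATCH = 2       # fewer images per training batch
cfg.DATALOADER.NUM_WORKERS = 2     # fewer dataloader worker processes
cfg.INPUT.MIN_SIZE_TRAIN = (640,)  # train at a single, smaller input resolution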

Problem with getting release - Autolabel example

Hey,

I have tried out the API and connected successfully, but after calling client.get_release() I get a 404 error response from the API. It seems that I am somehow constructing the wrong URL; I have added the code I use to call it.

dataset_identifier = "Side_Lanes" 
# or dataset_identifier = "user_name/Side_Lanes" 
# Both give same output
name = "v0.48" # Tried with "v0.1"

release = client.get_release(dataset_identifier, name)
print(release)

Error output

/datasets/Side_Lanes/releases/v0.48/
{'Authorization': 'APIKey xxxxx'}
<Response [404]>
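Note that the logged path /datasets/Side_Lanes/releases/v0.48/ is missing the owner segment, so the client seems to have ended up with the bare dataset name; it may be worth confirming which identifier actually produced this log, and that release v0.48 exists and has finished building. A sketch of the expected call shape (the owner name is hypothetical):

from segments import SegmentsClient

client = SegmentsClient('YOUR_API_KEY')
release = client.get_release('user_name/Side_Lanes', 'v0.48')
print(release)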


Modify prelabeled segmentation

Hello Bert,
I am using the fast labeling process on segments.ai and it's working really great!! As you know, I am doing semantic segmentation, and the time it takes is pretty significant. Hence I have been using the model automation to get prelabeled datasets. Very satisfied thus far; it has helped reduce my efforts a lot! :)

One question on that -

  1. Once the prelabeled datasets are created and I am annotating them, if I make an error and need to erase a portion of an annotation, the GUI does provide an eraser, but it does not erase the annotation. So it seems that once a prelabeled image is annotated, you may not be able to selectively erase parts of it? There is an option to remove all objects, but that removes the pre-labeling along with the objects. Am I doing something wrong in using the eraser? Appreciate your help!

Thanks,
Shailesh

Detectron 2 not Windows compatible

Hey, I want to try Segments.ai via the demo and even uploaded and labeled some sample files, but I can't move past from utils import get_image_urls because detectron2 isn't compatible with Windows, it seems. Or, at a minimum, it doesn't offer prebuilt binaries. This is too high a barrier to entry just to test your product.

pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html

fails on Win10 with the error:

ERROR: Could not find a version that satisfies the requirement detectron2 (from versions: none)
ERROR: No matching distribution found for detectron2
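Since there are no official Windows wheels at that index, pip has nothing to resolve against. Building from source is the commonly suggested workaround (untested here, and it requires a local C++ build toolchain):

pip install git+https://github.com/facebookresearch/detectron2.git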
