
monk_v1's Introduction



Why use Monk

  • Issue: Want to begin learning computer vision

    • Solution: Start with Monk's hands-on study roadmap tutorials
  • Issue: Multiple libraries, hence multiple syntaxes to learn

    • Solution: Monk's one syntax to rule them all - pytorch, keras, mxnet, etc.
  • Issue: Tough to keep track of all the trial projects while participating in a deep learning competition

    • Solution: Use Monk's project management and work on multiple prototyping experiments
  • Issue: Tough to set hyper-parameters while training a classifier

    • Solution: Try out the hyper-parameter analyser to find the right fit
  • Issue: Looking for a library to build quick solutions for your customers

    • Solution: Train, infer, and deploy with Monk's low-code syntax


Create real-world Image Classification applications

Domains: Medical, Fashion, Autonomous Vehicles, Agriculture, Wildlife, Retail, Satellite, Healthcare, and Activity Analysis.

For more, check out the Application Model Zoo!



How does Monk make image classification easy

  • Write less code and create end-to-end applications.
  • Learn only one syntax and create applications using any deep learning library - pytorch, mxnet, keras, tensorflow, etc. (see the sketch below)
  • Manage your entire project easily with multiple experiments
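
For example, switching deep learning libraries only changes one import; everything downstream keeps the same syntax. A minimal sketch (repo-clone import style, as used in the tutorials and issues further down this page):

# One syntax across backends: only this import changes
from pytorch_prototype import prototype     # pytorch backend
# from gluon_prototype import prototype     # mxnet-gluon backend
# from keras_prototype import prototype     # keras backend

gtf = prototype(verbose=1)    # the rest of the pipeline is identical across backends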


For whom this library is built

  • Students
    • Seamlessly learn computer vision using our comprehensive study roadmaps
  • Researchers and Developers
    • Create and Manage multiple deep learning projects
  • Competition participants (Kaggle, Codalab, Hackerearth, AiCrowd, etc.)
    • Expedite the prototyping process and jumpstart with a higher rank






Sample Showcase - Quick Mode

Create an image classifier.

# Import and initialise a backend prototype (pytorch backend shown;
# assumes the cloned monk_v1/monk directory is on the Python path)
from pytorch_prototype import prototype
ptf = prototype(verbose=1)

# Create an experiment
ptf.Prototype("sample-project-1", "sample-experiment-1")

# Load data and pick a model
ptf.Default(dataset_path="sample_dataset/",
            model_name="resnet18",
            num_epochs=2)

# Train
ptf.Train()

Inference

predictions = ptf.Infer(img_name="sample.png", return_raw=True);

Compare Experiments

# Import and initialise the comparison class
# (repo-clone import style; the pip package exposes it as monk.compare_prototype)
from compare_prototype import compare
ctf = compare(verbose=1)

# Create a comparison project
ctf.Comparison("Sample-Comparison-1");

# Add all your experiments
ctf.Add_Experiment("sample-project-1", "sample-experiment-1");
ctf.Add_Experiment("sample-project-1", "sample-experiment-2");

# Generate statistics
ctf.Generate_Statistics();



Installation

  • CUDA 9.0      : pip install -U monk-cuda90
  • CUDA 9.2      : pip install -U monk-cuda92
  • CUDA 10.0     : pip install -U monk-cuda100
  • CUDA 10.1     : pip install -U monk-cuda101
  • CUDA 10.2     : pip install -U monk-cuda102
  • CPU (+Mac-OS) : pip install -U monk-cpu
  • Google Colab  : pip install -U monk-colab
  • Kaggle        : pip install -U monk-kaggle

For more installation instructions visit: Link
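
A quick post-install sanity check (a sketch; the monk. package prefix follows the pip-package import style that appears in the issues below):

# Smoke test after e.g. `pip install -U monk-cpu`
from monk.pytorch_prototype import prototype

gtf = prototype(verbose=1)
print("monk imported OK")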




Study Roadmaps




Documentation




TODO-2020

Features

  • Model Visualization
  • Pre-processed data visualization
  • Learned feature visualization
  • N-dimensional data input - npy, hdf5, dicom, tiff
  • Multi-label Image Classification
  • Custom model development

General

  • Functional Documentation
  • Tackle Multiple versions of libraries
  • Add unit-testing
  • Contribution guidelines
  • Python pip packaging support

Backend Support

  • Tensorflow 2.0 provisional support with v1
  • Tensorflow 2.0 complete
  • Chainer

External Libraries

  • TensorRT Acceleration
  • Intel Acceleration
  • Echo AI - for Activation functions


Connect with the project contributors



Copyright

Copyright 2019 onwards, Tessellate Imaging Private Limited. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project's files except in compliance with the License. A copy of the License is provided in the LICENSE file in this repository.

monk_v1's People

Contributors

aanisha, aayush-fadia, abhi-kumar, arijitgupta42, avishreekh, jayeshk7, kshitij12345, li8bot, piyushm1, rohit0906, sanskar329, shreyashwaghe, shubham7169, take2rohit, thefashiongeek, vidyap-xgboost


monk_v1's Issues

Issue Training with Pytorch

Check out this notebook -- https://drive.google.com/file/d/1JbBauWIKDR_CjInB2uFSVy6cBzS4VX0i/view?usp=sharing

The training proceeds fine with the Keras and Gluon backends.

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in ()
----> 1 ptf.Train()

8 frames
/usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self)
367 # (https://bugs.python.org/issue2651), so we work around it.
368 msg = KeyErrorMessage(msg)
--> 369 raise self.exc_type(msg)

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 256, in getitem
return self.dataset[self.indices[idx]]
File "./monk_v1/monk/pytorch/datasets/csv_dataset.py", line 25, in getitem
image = self.transform(image);
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 61, in call
img = t(img)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 501, in call
return F.hflip(img)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 414, in hflip
raise TypeError('img should be PIL Image. Got {}'.format(type(img)))
TypeError: img should be PIL Image. Got <class 'torch.Tensor'>
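
A note on what the trace suggests (an assumption, not a confirmed fix): F.hflip is receiving a torch.Tensor, and in this torchvision version it only accepts PIL images, so a flip transform appears to run after the image has already been converted to a tensor. A sketch of the ordering constraint, using the transform parameters that appear elsewhere on this page:

from torchvision import transforms

# RandomHorizontalFlip (in this torchvision era) needs a PIL image,
# so it must come before ToTensor/Normalize in the composed pipeline.
tfms = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.8),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])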

ZeroDivisionError when validating the trained model

This is with respect to this Colab file
After training the model, when it is to be validated using

accuracy, class_based_accuracy = gtf.Evaluate();

a ZeroDivisionError was observed. Following is the exact log:

---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-281-e9c57d0b7fc4> in <module>()
      1 # Run validation
----> 2 accuracy, class_based_accuracy = gtf.Evaluate();

monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
monk_v1/monk/gluon/finetune/level_14_master_main.py in Evaluate(self)
monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
monk_v1/monk/gluon/finetune/level_4_evaluation_base.py in set_evaluation_final(self)

ZeroDivisionError: division by zero

How can this be resolved?

Error when resuming training

(Screenshots of the exceptions attached.)

After training for 5 epochs, I decided to update the learning rate and resume training for another 5 epochs. This threw some exceptions. I resumed exactly as shown in the documentation, but if I am making a mistake, please let me know.

ZeroDivisionError: division by zero after running gtf.Evaluate()

I ran:

# Run validation
accuracy, class_based_accuracy = gtf.Evaluate();

and got the below result + ZeroDivisionError.

    Result
        class based accuracies
            0. hoodies - 6.952869808679421 %
            1. hoodies-female - 1.8196856906534327 %
            2. longsleeve - 34.255444379046494 %
            3. shirt - 95.48966381290457 %
            4. sweatshirt - 85.657104736491 %
            5. sweatshirt-female - 63.951120162932796 %
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-15-e9c57d0b7fc4> in <module>()
      1 # Run validation
----> 2 accuracy, class_based_accuracy = gtf.Evaluate();

3 frames
/content/monk_v1/monk/gluon/finetune/level_4_evaluation_base.py in set_evaluation_final(self)
     69         for i in range(len(self.system_dict["dataset"]["params"]["classes"])):
     70             self.custom_print("            {}. {} - {} %".format(i, self.system_dict["dataset"]["params"]["classes"][i], 
---> 71                 class_dict[self.system_dict["dataset"]["params"]["classes"][i]]["num_correct"]/class_dict[self.system_dict["dataset"]["params"]["classes"][i]]["num_images"]*100));
     72             class_dict[self.system_dict["dataset"]["params"]["classes"][i]]["accuracy(%)"] = class_dict[self.system_dict["dataset"]["params"]["classes"][i]]["num_correct"]/class_dict[self.system_dict["dataset"]["params"]["classes"][i]]["num_images"]*100;
     73         self.custom_print("        total images:            {}".format(len(self.system_dict["local"]["data_loaders"]["test"])));

ZeroDivisionError: division by zero

What does this error suggest?
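
What the quoted source shows: the per-class accuracy divides num_correct by num_images, so any class that ends up with zero images in the evaluation split triggers the division by zero. A minimal sketch of the failing computation and a guard, using the class_dict structure visible in the traceback above:

# class_dict maps class name -> counters, per the quoted level_4_evaluation_base.py
class_dict = {"hoodies": {"num_correct": 0, "num_images": 0}}  # hypothetical class absent from the split

for name, stats in class_dict.items():
    if stats["num_images"] == 0:
        print("    {} - no images in evaluation split".format(name))  # guard avoids ZeroDivisionError
    else:
        print("    {} - {} %".format(name, stats["num_correct"] / stats["num_images"] * 100))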

Issue with inference when model is trained on a different system

I've trained a densenet model on an Ubuntu server with a GPU. I've copied the experiment over and am trying to run inference on macOS. With the Pytorch and Gluon backends I receive this error.

Mxnet Version: 1.5.0

Model Details
Loading model - ./workspace/fashion/exp6/output/models/final-symbol.json
/usr/local/lib/python3.7/site-packages/pylg/pylg.py:425: RuntimeWarning: MXNetError RAISED
warnings.warn(core_msg, RuntimeWarning)
Traceback (most recent call last):
File "infer_gluon.py", line 9, in <module>
gtf.Prototype("fashion", "exp6", eval_infer=True);
File "../monk_v1/monk/system/imports.py", line 525, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in __call__
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/gluon_prototype.py", line 25, in Prototype
self.set_system_experiment(experiment_name, eval_infer=eval_infer, resume_train=resume_train, copy_from=copy_from, summary=summary);
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in __call__
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/system/base_class.py", line 82, in set_system_experiment
self.set_system_state_eval_infer();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in __call__
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/gluon/finetune/level_5_state_base.py", line 31, in set_system_state_eval_infer
self.set_model_final();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in __call__
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/gluon/finetune/level_2_model_base.py", line 50, in set_model_final
self.system_dict = model_to_device(self.system_dict);
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in __call__
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/gluon/models/common.py", line 82, in model_to_device
system_dict["local"]["model"].collect_params().reset_ctx(system_dict["local"]["ctx"])
File "/usr/local/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 879, in reset_ctx
i.reset_ctx(ctx)
File "/usr/local/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 458, in reset_ctx
self._init_impl(data, ctx)
File "/usr/local/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 346, in _init_impl
self._data = [data.copyto(ctx) for ctx in self._ctx_list]
File "/usr/local/lib/python3.7/site-packages/mxnet/gluon/parameter.py", line 346, in <listcomp>
self._data = [data.copyto(ctx) for ctx in self._ctx_list]
File "/usr/local/lib/python3.7/site-packages/mxnet/ndarray/ndarray.py", line 2093, in copyto
return _internal._copyto(self, out=hret)
File "<string>", line 25, in _copyto
File "/usr/local/lib/python3.7/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 253, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [21:23:39] src/ndarray/ndarray.cc:1285: GPU is not enabled
Stack trace:
[bt] (0)-(8): native libmxnet.so frames (mxnet::CopyFromTo -> mxnet::imperative::PushFComputeEx -> mxnet::Imperative::InvokeOp -> mxnet::Imperative::Invoke -> SetNDInputsOutputs -> MXImperativeInvokeEx -> ffi_call_unix64); C++ template noise trimmed.

Traceback (most recent call last):
File "", line 1, in
File "../monk_v1/monk/system/imports.py", line 525, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/pytorch_prototype.py", line 24, in Prototype
self.set_system_experiment(experiment_name, eval_infer=eval_infer, resume_train=resume_train, copy_from=copy_from, summary=summary);
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/system/base_class.py", line 82, in set_system_experiment
self.set_system_state_eval_infer();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/pytorch/finetune/level_5_state_base.py", line 27, in set_system_state_eval_infer
self.set_model_final();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/pytorch/finetune/level_2_model_base.py", line 44, in set_model_final
self.system_dict["local"]["model"] = load_model(self.system_dict, final=True);
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/pytorch/models/return_model.py", line 15, in load_model
finetune_net = torch.load(system_dict["model_dir"] + "final");
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 386, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 573, in _load
result = unpickler.load()
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 536, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 119, in default_restore_location
result = fn(storage, location)
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
device = validate_cuda_device(location)
File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 79, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Inference works fine with the Keras backend.
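
The last RuntimeError in the trace spells out the workaround for the Pytorch backend: map the CUDA-saved storages to CPU at load time. A sketch (the checkpoint path is hypothetical, mirroring the torch.load call in return_model.py above):

import torch

# Load a CUDA-trained checkpoint on a CPU-only machine
model = torch.load("workspace/fashion/exp6/output/models/final",
                   map_location=torch.device("cpu"))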

Index out of range when using GPU on Kaggle

I am running Monk in a Kaggle notebook with the mxnet backend. When I run my model on the GPU, after the first two iterations I get a list index out of range error.

/kaggle/working/monk_v1/monk/gluon/finetune/level_3_training_base.py in set_training_final(self)
    332                 if(self.system_dict["model"]["params"]["use_gpu"]):
    333                     GPUs = GPUtil.getGPUs()
--> 334                     gpuMemoryUsed = GPUs[0].memoryUsed
    335                     if(self.system_dict["training"]["outputs"]["max_gpu_memory_usage"] < int(gpuMemoryUsed)):
    336                         self.system_dict["training"]["outputs"]["max_gpu_memory_usage"] = int(gpuMemoryUsed);

IndexError: list index out of range
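
The quoted code assumes GPUs[0] exists. A defensive sketch of the same check (GPUtil.getGPUs() can return an empty list when the NVIDIA driver isn't visible to the process, which would explain the IndexError):

import GPUtil

gpus = GPUtil.getGPUs()
gpu_memory_used = gpus[0].memoryUsed if gpus else 0  # avoid IndexError when no GPU is listed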

Issue with updating train/val split

Updating the train/val split with the mxnet backend throws the following error.

---------------------------------------------------------------------------
NameError Traceback (most recent call last)
in ()
----> 1 gtf.update_trainval_split(0.8);
2 gtf.Reload()

4 frames
/content/monk_v1/monk/gluon/finetune/level_13_updates_main.py in update_trainval_split(self, value)
85 msg = "Dataset Type invalid.\n";
86 msg += "Cannot update split"
---> 87 ConstraintsWarning(msg)
88
89 self.system_dict = set_dataset_train_path(self.system_dict, dataset_path, value, path_to_csv, self.system_dict["dataset"]["params"]["delimiter"]);

NameError: name 'ConstraintsWarning' is not defined

GPU not being recognized by mxnet backend in Colab

When running the gluon_prototype on Kaggle with default parameters,

gtf.Default(dataset_path=train_path,
           model_name="densenet121",
           freeze_base_network=False,
           num_epochs=5)

Use Gpu is automatically being set to False:

Model Params
    Model name:           densenet121
    Use Gpu:              False
    Use pretrained:       True
    Freeze base network:  False

Model Details
    Loading pretrained model

and the following warning is raised in the Kaggle Notebook:

/kaggle/working/monk_v1/monk/system/imports.py:160: UserWarning: GPU not accessible yet requested.
  warnings.warn(msg)

FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/workspace/Project-Zolando-store/gluon-resnet34_v2//experiment_state.json'

I'm working on Kaggle notebooks, and executed the following code:

gtf.Prototype("Project-Zolando-store", "gluon-resnet34_v2", eval_infer=True)

I get the below error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-15-7b24d4577b66> in <module>
----> 1 gtf.Prototype("Project-Zolando-store", "gluon-resnet34_v2", eval_infer=True)

/kaggle/working/monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
    490 
    491 
--> 492             return validate_function(*function_args, **function_args_dicts)
    493         return decorator_wrapper
    494     return accept_decorator

/kaggle/working/monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
    140                                                       False)
    141 
--> 142             return validate_function(*function_args, **function_args_dicts)
    143         return decorator_wrapper
    144     return accept_decorator

/kaggle/working/monk_v1/monk/gluon_prototype.py in Prototype(self, project_name, experiment_name, eval_infer, resume_train, copy_from, pseudo_copy_from, summary)
     48         self.set_system_project(project_name);
     49         self.set_system_experiment(experiment_name, eval_infer=eval_infer, resume_train=resume_train, copy_from=copy_from, 
---> 50             pseudo_copy_from=pseudo_copy_from, summary=summary);
     51         self.custom_print("Experiment Details");
     52         self.custom_print("    Project: {}".format(self.system_dict["project_name"]));

/kaggle/working/monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
    140                                                       False)
    141 
--> 142             return validate_function(*function_args, **function_args_dicts)
    143         return decorator_wrapper
    144     return accept_decorator

/kaggle/working/monk_v1/monk/system/base_class.py in set_system_experiment(self, experiment_name, eval_infer, copy_from, pseudo_copy_from, resume_train, summary)
    129 
    130             if(eval_infer):
--> 131                 self.set_system_state_eval_infer();
    132             elif(resume_train):
    133                 self.set_system_state_resume_train();

/kaggle/working/monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
    140                                                       False)
    141 
--> 142             return validate_function(*function_args, **function_args_dicts)
    143         return decorator_wrapper
    144     return accept_decorator

/kaggle/working/monk_v1/monk/gluon/finetune/level_5_state_base.py in set_system_state_eval_infer(self)
     36             None
     37         '''
---> 38         self.system_dict = read_json(self.system_dict["fname"], verbose=self.system_dict["verbose"]);
     39         self.system_dict["states"]["eval_infer"] = True;
     40 

/kaggle/working/monk_v1/monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)
    140                                                       False)
    141 
--> 142             return validate_function(*function_args, **function_args_dicts)
    143         return decorator_wrapper
    144     return accept_decorator

/kaggle/working/monk_v1/monk/system/common.py in read_json(fname, verbose)
     17         dict: loaded system dict
     18     '''
---> 19     with open(fname) as json_file:
     20         system_dict = json.load(json_file);
     21         system_dict["verbose"] = verbose;

FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/workspace/Project-Zolando-store/gluon-resnet34_v2//experiment_state.json'

I've installed monk_v1 and changed the pillow version to 7.2.0.

Did I do something wrong? Any solutions for this error?
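
For what it's worth, the traceback shows that Prototype(..., eval_infer=True) simply read_json()s workspace/<project>/<experiment>/experiment_state.json, so that file must already exist from a previous training run (or a copied workspace). A quick existence check before switching to eval mode (sketch):

import os

fname = "/kaggle/working/workspace/Project-Zolando-store/gluon-resnet34_v2/experiment_state.json"
print(os.path.isfile(fname))  # False means the experiment was never created/copied here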

Error Pytorch backend training - Expected object of scalar type

I used the pytorch backend to train two models, with 49 and 16 classes respectively. When I started training a third model under similar conditions, but with 174 classes, I received this error.

Attaching the screenshots. I believe there's nothing I can do from my side because of the abstraction monk provides. Please suggest a fix.

(Screenshots of the error output attached.)

Errors while using pytorch backend

https://www.kaggle.com/sinchubhat/covid19-analysis-using-monk

I am working on this dataset.

  1. When I use the inception_v3 model:

     gtf.Default(dataset_path="/kaggle/input/covid19-image-dataset/Covid19-dataset/train",
                 model_name="inception_v3",
                 num_epochs=10);

     it does not load the pretrained model, even after restarting the notebook and running the cells again. (Screenshot attached.)

  2. When I tried the resnet152 model, it did load the pretrained model, but while running gtf.Train() it showed an AttributeError: 'JpegImageFile' object has no attribute 'getexif' error. (Screenshot attached.)

Issue with transferring the workspace onto another system

After training a model on one server, if I copy the workspace onto another deployment server, it looks for the training server's absolute paths when searching for model files. All 3 backends give the same issue.

Keras Version: 2.2.5
Tensorflow Version: 1.14.0

Model Details
Traceback (most recent call last):
File "infer_keras.py", line 9, in
ktf.Prototype("fashion", "exp9", eval_infer=True);
File "../monk_v1/monk/system/imports.py", line 525, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/keras_prototype.py", line 26, in Prototype
self.set_system_experiment(experiment_name, eval_infer=eval_infer, resume_train=resume_train, copy_from=copy_from, summary=summary);
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/system/base_class.py", line 82, in set_system_experiment
self.set_system_state_eval_infer();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/tf_keras/finetune/level_5_state_base.py", line 45, in set_system_state_eval_infer
self.set_model_final();
File "../monk_v1/monk/system/imports.py", line 173, in decorator_wrapper
return validate_function(*function_args, **function_args_dicts)
File "/usr/local/lib/python3.7/site-packages/pylg/pylg.py", line 287, in call
rv = self.function.function(*args, **kwargs)
File "../monk_v1/monk/tf_keras/finetune/level_2_model_base.py", line 46, in set_model_final
raise ConstraintError(msg);
system.imports.ConstraintError: Model not found - /home/abhi/Desktop/monk_exp/workspace/fashion/exp9/output/models/final.h5
Previous Training Incomplete.

Unit_test failed for optimizers for gluon cpu windows

Running 1/
###################################################

  1. Exp: test_layer_max_pooling1d
    Status: Pass
    ###################################################

Total Tests - 80
Time Taken - 96.55078792572021 sec
Num Successful Tests - 69
Num Failed Tests - 11
Num Skipped Tests - 0

Failed tests 1-10 (test_optimizer_sgd, test_optimizer_nesterov_sgd, test_optimizer_rmsprop, test_optimizer_momentum_rmsprop, test_optimizer_adam, test_optimizer_adamax, test_optimizer_adadelta (listed twice), test_optimizer_nadam, test_optimizer_signum) all report the same error:

    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

  11. Failed Test:
    Name - test_layer_transposed_convolution3d
    Error - Error in operator conv183_deconvolution0: [00:05:17] C:\Jenkins\workspace\mxnet-tag\mxnet\src\operator\nn\deconvolution.cc:44: If not using CUDNN, only 1D or 2D Deconvolution is supported

Skipped Tests List - []
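
The repeated failure is the standard Windows multiprocessing error, and its message already names the fix: guard the script's entry point. A sketch of the idiom for a test script (backend import as in the gluon tests; the runner function is hypothetical):

from gluon_prototype import prototype

def run_optimizer_tests():
    gtf = prototype(verbose=1)
    # ... create experiment, set optimizer, train ...

if __name__ == "__main__":
    # Required on Windows, where worker processes are spawned rather than forked
    run_optimizer_tests()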

Issue with Signum optimizer in Gluon backend

/kaggle/working/monk_v1/monk/gluon/optimizers/return_optimizer.py in load_optimizer(system_dict)
91 system_dict["local"]["optimizer"] = mx.optimizer.Signum(
92 learning_rate=learning_rate,
---> 93 wd_lh=weight_decay,
94 lr_scheduler=learning_rate_scheduler,
95 momentum=momentum);

NameError: name 'weight_decay' is not defined

KeyError: 'train' for a dataset which doesn't have train folder, how to split?

I've gone through the tutorials for updating hyper-parameters, especially this one: https://github.com/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/4)%20Play%20around%20with%20train-val%20splits.ipynb, to split my dataset into train and validation. But I couldn't get anything like that to work, so I took this Classifier to start training my classifier from scratch.

When I run the below code:

gtf.Default(dataset_path="/content/drive/My Drive/Data/zalando/zalando",
            model_name="resnet152_v2", 
            freeze_base_network=True,
            num_epochs=2)

I keep getting this error.

Dataset Details
    Train path:     /content/drive/My Drive/Data/zalando/zalando
    Val path:       None
    CSV train path: None
    CSV val path:   None
    Label Type:     single

Dataset Params
    Input Size:   224
    Batch Size:   4
    Data Shuffle: True
    Processors:   2
    Train-val split:   0.7

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-10-ca95f2c1dd79> in <module>()
      2             model_name="resnet152_v2",
      3             freeze_base_network=True,
----> 4             num_epochs=2)

9 frames
/content/monk_v1/monk/gluon/finetune/level_1_dataset_base.py in set_dataset_dataloader(self, test)
    116 
    117 
--> 118                 self.system_dict["dataset"]["params"]["num_train_images"] = len(image_datasets["train"]);
    119                 self.system_dict["dataset"]["params"]["num_val_images"] = len(image_datasets["val"]);
    120 

KeyError: 'train'

Where did I go wrong?

Environment: Google Colab

ModuleNotFoundError: No module named 'gluon_prototype'

Even after changing into the working directory and re-cloning monk before the import, the following error is observed.

ModuleNotFoundError Traceback (most recent call last)
in ()
      1 # Using mxnet-gluon backend
----> 2 from gluon_prototype import prototype
      3
      4 # For pytorch backend
      5 # from pytorch_prototype import prototype

ModuleNotFoundError: No module named 'gluon_prototype'


NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.

Cats Dogs training

Hi, I'm trying to do the Cats and Dogs training.
Unfortunately it is not downloading the dataset and I don't get any error output. So I have no idea what is wrong.
This is the step that doesn't respond:

# Download dataset
import os
if not os.path.isfile("datasets.zip"):
    os.system("! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1rG-U1mS8hDU7_wM56a1kc-li_zHLtbq2' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1rG-U1mS8hDU7_wM56a1kc-li_zHLtbq2\" -O datasets.zip && rm -rf /tmp/cookies.txt")
if not os.path.isdir("datasets"):
    os.system("! unzip -qq datasets.zip")

This is the notebook I'm running:
3) Dog Vs Cat Classifier Using Keras Backend
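
One way to see why the step stalls silently: os.system() output may be hidden in some notebook setups, but it does return the shell's exit status, which the cell above discards. A sketch (reusing the same command string as the cell above):

import os

# Non-zero means the download/unzip failed; print it instead of discarding it
rc = os.system("wget -O datasets.zip '<the Google Drive URL from the cell above>'")
print("wget exit code:", rc)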

No module named monk while trying to invoke monk.compare_prototype

My code:

# Invoke the comparison class
from monk.compare_prototype import compare

Error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-11-bc5eaa030529> in <module>()
      1 # Invoke the comparison class
      2 import monk_v1
----> 3 from monk.compare_prototype import compare

ModuleNotFoundError: No module named 'monk'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

I've run the following import and it worked smoothly, but it isn't working for compare_prototype. Any solution? Am I doing something wrong?

#Using keras backend 
from keras_prototype import prototype

Output:

Keras Version: 2.3.1
Tensorflow Version: 2.2.0

Experiment Details
    Project: Project-Zalando
    Experiment: mobilenet_v2_exp1
    Dir: /content/drive/My Drive/Monk_v1/workspace/Project-Zalando/mobilenet_v2_exp1/

mxneterror: gpu is not enabled in windows_cpu

when running

gtf.Default(dataset_path="..\..\..\monk\system_check_tests\datasets\dataset_cats_dogs_train", 
            model_name="resnet18_v1", 
            num_epochs=5);

got this error

mxneterror: [01:28:37] c:\jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1285: gpu is not enabled.

After changing line 251 in gluon/finetune/level_14_master.py to Use_gpu=False

got this error
(Screenshot of the error attached.)

This works on the Keras and Pytorch backends.

KeyError: 'test'

Input:

gtf.Dataset_Params(dataset_path="/content/cars_train", path_to_csv="cars_train.csv");
gtf.Dataset();
accuracy, class_based_accuracy = gtf.Evaluate();

Output:

(Screenshots attached.)
Dataset Details
Train path: /content/cars_train
Val path: None
CSV train path: cars_train.csv
CSV val path: None
Label Type: single

Dataset Params
Input Size: 224
Batch Size: 16
Data Shuffle: True
Processors: 2
Train-val split: 0.9
Delimiter: ,

Pre-Composed Train Transforms
[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}, {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}, {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]

Pre-Composed Val Transforms
[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}, {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}, {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]

Dataset Numbers
Num train images: 7329
Num val images: 815
Num classes: 8144

Testing

KeyError Traceback (most recent call last)
in ()
1 gtf.Dataset_Params(dataset_path="/content/cars_train" ,path_to_csv="cars_train.csv" );
2 gtf.Dataset();
----> 3 accuracy, class_based_accuracy = gtf.Evaluate( );

3 frames
/content/monk_v1/monk/pytorch/finetune/level_4_evaluation_base.py in set_evaluation_final(self)
37 self.system_dict["testing"]["status"] = False;
38 if(self.system_dict["training"]["settings"]["display_progress_realtime"] and self.system_dict["verbose"]):
---> 39 pbar=tqdm(total=len(self.system_dict["local"]["data_loaders"]["test"]));
40
41 running_corrects = 0

KeyError: 'test'
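
A guess based on the printed output above: Dataset() here built only the train/val loaders (hence "Num train images" / "Num val images"), while Evaluate() looks up data_loaders["test"]; per the other issues on this page, the test loader exists when the experiment is opened with Prototype(..., eval_infer=True) before setting the dataset params.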

Can't install requirements_cu10.txt

I got this error:
ERROR: Could not find a version that satisfies the requirement tensorflow-gpu==1.12.0 (from -r .\installation\requirements_cu10.txt (line 12)) (from versions: 1.13.1, 1.13.2, 1.14.0, 1.15.0rc0, 1.15.0rc1, 1.15.0rc2, 1.15.0rc3, 1.15.0, 1.15.2, 2.0.0a0, 2.0.0b0, 2.0.0b1, 2.0.0rc0, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1, 2.1.0rc0, 2.1.0rc1, 2.1.0rc2,
2.1.0, 2.2.0rc0)
ERROR: No matching distribution found for tensorflow-gpu==1.12.0 (from -r .\installation\requirements_cu10.txt (line 12))

How do I install this?
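
A likely cause (an assumption, based on the "from versions:" list pip prints): tensorflow-gpu 1.12.0 only shipped wheels for older Python versions, so on a newer interpreter pip cannot find a matching distribution. Installing the requirements under Python 3.6, or relaxing the pin to one of the versions in the printed list, should get past this error.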

experiment_state.json file undetected

Steps to reproduce:

  • Running a model from the model zoo
  • And performing self-training in real time
  • Using Keras Backend for prototyping
  • Create project and experiment

Syntax:

gtf.Default(dataset_path="the-simpsons-characters-dataset/train",
            model_name="alexnet", 
            freeze_base_network=False,
            num_epochs=2);

Expected behaviour:
The experiment should be created and run without errors, with the experiment_state.json file written and detected.

Error:

Dataset Details
    Train path:     the-simpsons-characters-dataset/train
    Val path:       None
    CSV train path: None
    CSV val path:   None
    Label Type:     single

Dataset Params
    Input Size:   224
    Batch Size:   4
    Data Shuffle: True
    Processors:   8
    Train-val split:   0.7

Found 29327 images belonging to 43 classes.
Found 12539 images belonging to 43 classes.
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-46-d99f83683376> in <module>
      2             model_name="alexnet",
      3             freeze_base_network=False,
----> 4             num_epochs=2);

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/keras_prototype.py in Default(self, dataset_path, path_to_csv, delimiter, model_name, freeze_base_network, num_epochs)

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/tf_keras_1/finetune/level_14_master_main.py in Dataset(self)

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/system/common.py in save(system_dict)

./monk/system/imports.py in decorator_wrapper(*function_args, **function_args_dicts)

./monk/system/common.py in write_json(system_dict)

FileNotFoundError: [Errno 2] No such file or directory: 'workspace/Project-Anime/Mxnet-alexnet//experiment_state.json'

Installation error for pytorch_prototype

When using

from pytorch_prototype import prototype

I get a ModuleNotFoundError:

ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 from pytorch_prototype import prototype

1 frames
/content/monk_v1/monk/pytorch/finetune/imports.py in ()
import psutil
import numpy as np
import GPUtil
def isnotebook():

ModuleNotFoundError: No module named 'GPUtil'
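
A likely fix (an assumption: the missing dependency is the GPUtil package on PyPI, which monk's pytorch imports pull in):

pip install GPUtil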

Doubt in transfer learning parameters tutorial.

(Screenshots from the tutorial attached.)

  1. Why is there a difference between potentially trainable layers and actually trainable layers even after we have unfrozen the base network (densenet121) using mxnet, as shown in the first image?
  2. Why are there different numbers of layers in PyTorch and mxnet for the same densenet121?
  3. If the first image is correct, why does the unfrozen base network in PyTorch have the same number of potentially trainable and actually trainable layers?

Dependency | Don't include all supported DL Frameworks in requirements.txt

I don't think it is a good idea to install all three of MXNet, Pytorch, and Tensorflow just to use monk; I assume most people would only be using one of the frameworks at a given time.

The dependencies become especially heavy for the GPU versions, as each library is large.

I believe it would be better for users to be able to opt in to the library or libraries of their choice.

results have deteriorated since this warning

The same dataset used to give up to 94% validation accuracy, but for the last 15 days I have been getting the following warning with:

gtf.Train();
Training Start
WARNING:tensorflow:period argument is deprecated. Please use save_freq to specify the frequency in number of batches seen.

Please advise.
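
The warning itself comes from Keras's ModelCheckpoint callback, whose period argument was deprecated in favour of save_freq. A minimal sketch of the non-deprecated form (illustrative only, not monk's internal code):

from tensorflow.keras.callbacks import ModelCheckpoint

# save_freq="epoch" replaces the deprecated period=1
checkpoint = ModelCheckpoint("weights.h5", save_freq="epoch")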
