donsetpg / narya

The Narya API allows you to track soccer players from camera inputs and evaluate them with an Expected Discounted Goal (EDG) agent. This repository contains the implementation of the following paper: https://arxiv.org/abs/2101.05388. We also make all of our pretrained agents available, as well as the datasets we used.

License: MIT License

Languages: Python 4.25%, Jupyter Notebook 78.55%, HTML 17.20%

narya's People

Contributors: donsetpg, karlosos, kkoripl, larsmaurath

narya's Issues

Error while Loading Homography weights

The homography model code was working fine until last week. Then I executed the blocks of code below to work around the TensorFlow issue. Now I am unable to load the model weights due to AttributeError: 'str' object has no attribute 'decode'.

import tensorflow as tf

from narya.narya.models.keras_models import DeepHomoModel

deep_homo_model = DeepHomoModel()

WEIGHTS_PATH = (
    "https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model.h5"
)
WEIGHTS_NAME = "deep_homo_model.h5"
WEIGHTS_TOTAR = False

checkpoints = tf.keras.utils.get_file(
    WEIGHTS_NAME, WEIGHTS_PATH, WEIGHTS_TOTAR,
)

deep_homo_model.load_weights(checkpoints)

corners = deep_homo_model(image)  # image: an input frame loaded beforehand

I also installed h5py 2.10.0 (!pip install -q h5py==2.10.0) to work around the AttributeError: 'str' object has no attribute 'decode'.
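A minimal sanity check (plain Python, nothing narya-specific, offered as an assumption rather than a confirmed fix): this particular AttributeError is typically tied to h5py 3.x, so after the pin above it is worth confirming that the running kernel actually picked up the older version (restart the runtime if it still reports 3.x):

import h5py

print(h5py.__version__)  # should report 2.10.0 after the pin above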

Application of DeepHomoModel to vertical pitches

Hi,

I have been trying to train the DeepHomoModel to identify vertical pitches, which was pretty straightforward with your excellent documentation.

What I have noticed is that the training on vertical pitch data seems to be slower than for the horizontal pitch data you trained on in your paper. I have adapted "https://github.com/DonsetPG/narya/blob/master/narya/trainer/homography_train.py" to my new training data and also played around with the default pitch specs in "get_default_corners" to use a larger share of the image.

I have currently trained on around 400 images, but results are significantly worse than when I train the model on 400 horizontal images from your data set.

Do you have any sense of why the model may not work as well for vertical pitches? Of course there may be many reasons why it doesn't train as well in my case, but I wanted to see if you have any no-brainer explanations before I generate more training data.

Below is one of the best results I could generate:

(image: narya_vert_pitch)

Thanks a lot already!

Version incompatibility problems

I'm trying to run the notebook in Colab and I keep running into version incompatibility problems.

Even after several attempts to change the requirements, I still can't complete a full run. Has anyone managed to run it successfully lately?

I also cannot finish running the remaining code on my personal computer... Any news about the correct versions needed to run this code?
It would be extremely useful to me.

Build_Mask function - missing image size in Keypoints dataset

Hi,

Another one about shapes: it took me a while to figure out why the losses were throwing an incompatible shapes error:

InvalidArgumentError:  Incompatible shapes: [2,320,320,30] vs. [2,512,512,30]
	 [[node gradients_1/loss_2/softmax_loss/dice_loss_plus_1focal_loss/mul_grad/BroadcastGradientArgs (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_156811]

when all my pictures are 512x512 in size. The losses receive two arguments, ground truth and predictions, which are tensors of shape (batch_size, image_height, image_width, classes), so the change needs to be made beforehand, and I think it belongs in the preprocessing step.

In the keypoints dataset, the general Dataset class has a __getitem__ method, and inside it:

        mask = _build_mask(keypoints)

        # extract certain classes from mask (e.g. cars)
        masks = [(mask == v) for v in self.class_values]
        mask = np.stack(masks, axis=-1).astype("float")

        # add background if mask is not binary
        if mask.shape[-1] != 1:
            background = 1 - mask.sum(axis=-1, keepdims=True)
            mask = np.concatenate((mask, background), axis=-1)

        # apply augmentations
        if self.augmentation:
            sample = self.augmentation(image=image, mask=mask)
            image, mask = sample["image"], sample["mask"]

Everything looks fine, but _build_mask (in the masks utils) has some default values:

def _build_mask(keypoints, mask_shape=(320, 320), nb_of_mask=29):

I don't know exactly what the augmentation module does, but it looks like the mask ends up with a different shape than the validation image, so when the two are compared, the incompatible shapes error is thrown.

My idea is to invoke _build_mask with the image's actual spatial size, for example:

mask = _build_mask(keypoints, mask_shape=image.shape[:2])  # pass (height, width), not the full (H, W, C) tuple
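For illustration only (toy arrays, not narya code), the mismatch described above in one place: the default mask shape stays at (320, 320) regardless of the input image, while the loss compares against the image's own spatial size.

import numpy as np

image = np.zeros((512, 512, 3))                     # a 512x512 training image
mask_default = np.zeros((320, 320, 30))             # shape produced with the (320, 320) default
mask_expected = np.zeros(image.shape[:2] + (30,))   # shape the loss actually compares against
print(mask_default.shape, mask_expected.shape)      # (320, 320, 30) vs (512, 512, 30)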

Pitch keypoint IDs description

Hi,

I'd like to train the homography model with new keypoint data, so I looked into your scripts for that and into the dataset. There are XML files with keypoint IDs in there (29 of them or so). Are they the same for every picture in the dataset? For example, is id = 0 always the middle of the pitch? Could you maybe make a graphic showing which IDs correspond to which points on the pitch?

add_nan_trajectories removes all data except the data with id '1'

def add_nan_trajectories(trajectories, max_frame):
    """Add np.nan to frame where the x,y coordinates are missing
    Arguments:
        trajectories: Dict mapping each id to a list of tuple (x,y,frame)
        max_frame: Max number of frame
    Returns:
        trajectories: Dict mapping each id to a list of tuple (x,y,frame)
    Raises:
    """
    frame_range = [i for i in range(1, max_frame)]
    full_trajectories = {}
    for ids in trajectories.keys():
        traj = trajectories[ids]
        full_trajectories[ids] = []
        cnt = 0
        for x_, y_, frame_ in traj:
            if cnt == 0:
                full_trajectories[ids].append([x_, y_, frame_])
                last_x, last_y, last_frame = x_, y_, frame_
                cnt += 1
            else:
                nb_fake_data_to_add = frame_ - last_frame
                for i in range(nb_fake_data_to_add - 1):
                    full_trajectories[ids].append([np.nan, np.nan, last_frame + i + 1])
                full_trajectories[ids].append([x_, y_, frame_])
                last_x, last_y, last_frame = x_, y_, frame_
                cnt += 1
        last_frame = frame_
        if last_frame < frame_range[-1]:
            for i in range(last_frame + 1, frame_range[-1] + 1):
                full_trajectories[ids].append([np.nan, np.nan, i])
        return full_trajectories   ## <--------- return right after the first loop

It should be like this:

def add_nan_trajectories(trajectories, max_frame):
    """Add np.nan to frame where the x,y coordinates are missing
    Arguments:
        trajectories: Dict mapping each id to a list of tuple (x,y,frame)
        max_frame: Max number of frame
    Returns:
        trajectories: Dict mapping each id to a list of tuple (x,y,frame)
    Raises:
    """
    frame_range = [i for i in range(1, max_frame)]
    full_trajectories = {}
    for ids in trajectories.keys():
        traj = trajectories[ids]
        full_trajectories[ids] = []
        cnt = 0
        for x_, y_, frame_ in traj:
            if cnt == 0:
                full_trajectories[ids].append([x_, y_, frame_])
                last_x, last_y, last_frame = x_, y_, frame_
                cnt += 1
            else:
                nb_fake_data_to_add = frame_ - last_frame
                for i in range(nb_fake_data_to_add - 1):
                    full_trajectories[ids].append([np.nan, np.nan, last_frame + i + 1])
                full_trajectories[ids].append([x_, y_, frame_])
                last_x, last_y, last_frame = x_, y_, frame_
                cnt += 1
        last_frame = frame_
        if last_frame < frame_range[-1]:
            for i in range(last_frame + 1, frame_range[-1] + 1):
                full_trajectories[ids].append([np.nan, np.nan, i])
    return full_trajectories   ## <--------- finish all the loops and return
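A quick sanity check of the corrected version above (toy trajectories, not from the repository): with the early return, only the first id comes back; with the return moved outside the loop, every id is padded with np.nan up to max_frame - 1.

import numpy as np

trajectories = {
    1: [(10.0, 20.0, 1), (11.0, 21.0, 4)],  # frames 2-3 missing for id 1
    2: [(50.0, 60.0, 2)],
}
full = add_nan_trajectories(trajectories, max_frame=6)

print(sorted(full.keys()))  # [1, 2] with the fix; only [1] with the early return
print(full[1])              # frames 2-3 filled with np.nan, then padded up to frame 5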

About performance

Hello,

First of all, thank you for this amazing work. I am interested in tracking players and projecting their coordinates onto a 2D plane from a live camera. I am not interested in the analysis part, just the projection. Could your detection, homography and re-ID models be used with a live video stream and achieve good FPS?

deephomo so slow

Hello, why is DeepHomo so slow? It takes about 30 ms, yet the backbone network it uses is ResNet-18, which should be very fast.

Keras Models for Homography and Key Points Estimation not using GPU

I am a PyTorch user and don't know much about TensorFlow. I am trying to run the Keras models (homography, keypoints) on GPU. I have an RTX 3090 with CUDA 11.2. I installed tensorflow-gpu==2.2.0 instead of plain tensorflow and also tried adding this line before loading the models:
with tf.device('/GPU:0'):
Can you please guide me on how to load these models on the GPU instead of the CPU?
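A minimal check (plain TensorFlow calls, nothing narya-specific): if the device list below comes back empty, TensorFlow cannot see the GPU at all, and with tf.device('/GPU:0') will silently fall back to the CPU. As a side note beyond what the issue states, TF 2.2 was built against CUDA 10.x, while an RTX 3090 (Ampere) requires CUDA 11+, so a newer TensorFlow release may be needed.

import tensorflow as tf

print(tf.__version__)
# An empty list here means the Keras models will run on CPU regardless of tf.device().
print(tf.config.list_physical_devices('GPU'))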

Module 'albumentations' has no attribute 'Lambda'

Hi - this time no questions...

Using the script in keypoints_train.py, I tried to get an overview of the training time for the KeyPointModel on Google Colab, but... ran into trouble.

I copied your whole code from keypoints_train.py, with just a small change to the argument loading, but when creating a KeyPointDataset object there is an error from the albumentations module:
(screenshot: module 'albumentations' has no attribute 'Lambda')

I don't know if this is an issue with Colab itself (maybe some library version is not suitable), but I cloned this whole repository into Colab and installed all the requirements.

Google API 403 error: API delinquent?

I found that the code that downloads the pre-trained models through googleapis.com stopped working today.
The error is a 403, and I wasn't able to access the data by visiting each URL directly either.

I'm afraid it is caused by the GCP billing account. Does anybody have any ideas?
I haven't downloaded the pre-trained models yet, so I would be glad to have the weights available not only through the API but also via GitHub or something similar.

program error code

Exception: URL fetch failure on https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model_1.h5: 403 -- Forbidden
Exception: URL fetch failure on https://storage.googleapis.com/narya-bucket-1/models/player_tracker.params: 403 -- Forbidden

HTML page shown once I access each URL listed in https://donsetpg.github.io/naryaDoc/models/index.html:

This XML file does not appear to have any style information associated with it. The document tree is shown below.

UserProjectAccountProblem
The project to be billed is associated with a delinquent billing account.

The billing account for the owning project is disabled in state delinquent

ValueError: bad marshal data (unknown type code)

When I try to run models_examples.ipynb on my local machine, I get the error mentioned above.
It usually happens on the line
deep_homo_model = DeepHomoModel()

I installed all the requirements.
What could be the possible source of the error here?

Sorry for the long error trace.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-3f60b79e4c6b> in <module>
      1 from narya.models.keras_models import DeepHomoModel
      2 
----> 3 deep_homo_model = DeepHomoModel()
      4 
      5 # WEIGHTS_PATH = (

~/Documents/narya/narya/models/keras_models.py in __init__(self, pretrained, input_shape)
     59         self.pretrained = pretrained
     60 
---> 61         self.resnet_18 = _build_resnet18()
     62 
     63         inputs = tf.keras.layers.Input((self.input_shape[0], self.input_shape[1], 3))

~/Documents/narya/narya/models/keras_models.py in _build_resnet18()
     34     )
     35 
---> 36     resnet18 = tf.keras.models.load_model(resnet18_path_to_file)
     37     resnet18.compile()
     38 

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    182     if (h5py is not None and (
    183         isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 184       return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    185 
    186     if sys.version_info >= (3, 4) and isinstance(filepath, pathlib.Path):

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    175       raise ValueError('No model found in config file.')
    176     model_config = json.loads(model_config.decode('utf-8'))
--> 177     model = model_config_lib.model_from_config(model_config,
    178                                                custom_objects=custom_objects)
    179 

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
     53                     '`Sequential.from_config(config)`?')
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)
     56 
     57 

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
    103     config['class_name'] = _DESERIALIZATION_TABLE[layer_class_name]
    104 
--> 105   return deserialize_keras_object(
    106       config,
    107       module_objects=globs,

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    367 
    368       if 'custom_objects' in arg_spec.args:
--> 369         return cls.from_config(
    370             cls_config,
    371             custom_objects=dict(

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in from_config(cls, config, custom_objects)
    984         ValueError: In case of improperly formatted config dict.
    985     """
--> 986     input_tensors, output_tensors, created_layers = reconstruct_from_config(
    987         config, custom_objects)
    988     model = cls(inputs=input_tensors, outputs=output_tensors,

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in reconstruct_from_config(config, custom_objects, created_layers)
   2017   # First, we create all layers and enqueue nodes to be processed
   2018   for layer_data in config['layers']:
-> 2019     process_layer(layer_data)
   2020   # Then we process nodes in order of layer depth.
   2021   # Nodes that cannot yet be processed (if the inbound node

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in process_layer(layer_data)
   1999       from tensorflow.python.keras.layers import deserialize as deserialize_layer  # pylint: disable=g-import-not-at-top
   2000 
-> 2001       layer = deserialize_layer(layer_data, custom_objects=custom_objects)
   2002       created_layers[layer_name] = layer
   2003 

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
    103     config['class_name'] = _DESERIALIZATION_TABLE[layer_class_name]
    104 
--> 105   return deserialize_keras_object(
    106       config,
    107       module_objects=globs,

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    367 
    368       if 'custom_objects' in arg_spec.args:
--> 369         return cls.from_config(
    370             cls_config,
    371             custom_objects=dict(

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py in from_config(cls, config, custom_objects)
    988   def from_config(cls, config, custom_objects=None):
    989     config = config.copy()
--> 990     function = cls._parse_function_from_config(
    991         config, custom_objects, 'function', 'module', 'function_type')
    992 

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py in _parse_function_from_config(cls, config, custom_objects, func_attr_name, module_attr_name, func_type_attr_name)
   1040     elif function_type == 'lambda':
   1041       # Unsafe deserialization from bytecode
-> 1042       function = generic_utils.func_load(
   1043           config[func_attr_name], globs=globs)
   1044     elif function_type == 'raw':

~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in func_load(code, defaults, closure, globs)
    469   except (UnicodeEncodeError, binascii.Error):
    470     raw_code = code.encode('raw_unicode_escape')
--> 471   code = marshal.loads(raw_code)
    472   if globs is None:
    473     globs = globals()

ValueError: bad marshal data (unknown type code)

Keypoints Dataset - flip_keypoint missing size issue

Hi,

During the training phase for keypoints, just after reading them from the .xml files, a method is invoked on every loaded keypoint to flip it (utils/masks.py):

def _flip_keypoint(id_kp, x_kp, y_kp, input_shape=(320, 320, 3))

In general it takes 4 arguments: the keypoint id, x, y and... the size of the image. But the keypoints_dataset call to this method is missing that last argument, which is why the images used to train the keypoints model always have to be in 320x320x3 format:

for id_kp, v in six.iteritems(keypoints):
    new_id_kp, x_kp, y_kp = _flip_keypoint(id_kp, min(v[0],image.shape[0]-1), min(v[1],image.shape[1]-1))
    new_keypoints[new_id_kp] = (x_kp, y_kp)

I think this call should be made as follows, or there will be no way to train the model on differently sized pictures:

for id_kp, v in six.iteritems(keypoints):
    new_id_kp, x_kp, y_kp = _flip_keypoint(id_kp, min(v[0],image.shape[0]-1), min(v[1],image.shape[1]-1), image.shape)
    new_keypoints[new_id_kp] = (x_kp, y_kp)

Issue in from narya.narya.models.keras_models import KeypointDetectorModel


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      2
      3 kp_model = KeypointDetectorModel(
      4     backbone='efficientnetb3', num_classes=29, input_shape=(320, 320),
      5 )
      6

6 frames
/usr/local/lib/python3.7/dist-packages/efficientnet/model.py in EfficientNet(width_coefficient, depth_coefficient, default_resolution, dropout_rate, drop_connect_rate, depth_divisor, blocks_args, model_name, include_top, weights, input_tensor, input_shape, pooling, classes, **kwargs)
    468             file_name = model_name + '_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5'
    469             file_hash = WEIGHTS_HASHES[model_name][1]
    470             weights_path = keras_utils.get_file(file_name,
    471                                                 BASE_WEIGHTS_PATH + file_name,
    472                                                 cache_subdir='models',

AttributeError: module 'keras.utils' has no attribute 'get_file'

Training KeypointDetectorModel on Google Colab

I have a problem with training KeypointDetectorModel on Google Colab. Something mysterious happens and the cell ends with ^C as its only output.

... [omitted logs for readability]
Total params: 13,945,158
Trainable params: 13,855,558
Non-trainable params: 89,600
__________________________________________________________________________________________________
----------
Building dataset
----------
----------
Launching the training
----------
Epoch 1/100
2021-02-11 19:22:11.500514: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
^C

I have no idea where the problem is, as there are no logs. Here is a Colab notebook with minimal code to reproduce this error.

full-tracking.ipynb colab crashes

Hi,
I'm trying to run full-tracking.ipynb, and every time I run this cell:
tracker = FootballTracker(frame_rate=24.7, track_buffer=60)
the notebook just crashes and restarts. The only error I can find is in app.log:
{"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-02-23T20:01:27.113Z","v":0} {"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.12:9000/","time":"2021-02-23T20:01:27.114Z","v":0} {"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-02-23T20:01:27.114Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"google.colab serverextension initialized.","time":"2021-02-23T20:01:27.111Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Serving notebooks from local directory: /","time":"2021-02-23T20:01:27.115Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"0 active kernels","time":"2021-02-23T20:01:27.115Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-02-23T20:01:27.115Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.2:9000/","time":"2021-02-23T20:01:27.115Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-02-23T20:01:27.115Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Kernel started: f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:01:30.178Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Adapting to protocol v5.1 for kernel f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:01:31.374Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Kernel restarted: f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:08:30.051Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"[20:09:16] src/imperative/./imperative_utils.h:92: GPU support is disabled. 
Compile MXNet with USE_CUDA=1 to enable GPU support.","time":"2021-02-23T20:09:16.738Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"terminate called after throwing an instance of 'dmlc::Error'","time":"2021-02-23T20:09:16.739Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" what(): [20:09:16] src/imperative/imperative.cc:81: Operator _zeros is not implemented for GPU.","time":"2021-02-23T20:09:16.739Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"Stack trace:","time":"2021-02-23T20:09:16.740Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" [bt] (0) /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so(+0x307d3b) [0x7f6cb840cd3b]","time":"2021-02-23T20:09:16.740Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" [bt] (1) /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so(mxnet::Imperative::InvokeOp(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode, mxnet::OpStatePtr)+0x6bb) [0x7f6cbb5aec5b]","time":"2021-02-23T20:09:16.740Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"KernelRestarter: restarting kernel (1/5), keep random ports","time":"2021-02-23T20:09:18.050Z","v":0} {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"**WARNING:root:kernel f1e57aae-7e38-0d90a92821e0 restarted**","time":"2021-02-23T20:09:18.050Z","v":0}

How to project image coordinate points on template?

Dear @DonsetPG,
I was able to play with your models_examples.ipynb notebook.
I even used the homographies from the homography_dataset.zip dataset to observe the warped template and merge it with the corresponding image.
However, when I use the inverse of the homography matrix to project a point from the image onto the template, it does not work.
Basically, I am trying to project the 4 control points from the image onto the template, as shown below.
(screenshot: Screenshot from 2021-04-13 12-58-20)
Can you suggest a solution?
Thank you in anticipation!
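A minimal sketch (plain OpenCV/NumPy, not narya's API) of how a pixel can be mapped between image and template with a 3x3 homography; the matrix values, point coordinates, and the image-vs-template direction below are placeholder assumptions:

import cv2
import numpy as np

# H: a 3x3 homography from the dataset (placeholder values here; load your own,
# e.g. one of the matrices shipped in homography_dataset.zip).
H = np.array([[1.2, 0.1, -30.0],
              [0.0, 1.1,  10.0],
              [0.0, 0.0,   1.0]])

# Points in image coordinates, shaped (N, 1, 2) as cv2.perspectiveTransform expects.
pts_image = np.array([[[100.0, 200.0]], [[640.0, 360.0]]], dtype=np.float32)

# Whether H or inv(H) maps image -> template depends on the direction the dataset's
# matrices were defined in (and on any coordinate normalization), so try both.
pts_template = cv2.perspectiveTransform(pts_image, np.linalg.inv(H))
print(pts_template)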

Training time for KeypointDetectorModel

Is there a reason why training KeypointDetectorModel is so much slower now?
I used to train 100 epochs in about 3 hours on Google Colab a few months ago, but now a single epoch takes about 40 minutes (67 hours for all 100 epochs).
Does anyone have a similar problem?

How to create Homography dataset?

I want to do this for a field hockey game but can't understand how the homography dataset is prepared. Are there any tools available for this? Any help is appreciated. Thanks!
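There is no annotation tool described in this issue, but as a sketch of one possible workflow (assumptions: hand-picked correspondences, OpenCV available, placeholder coordinates and file names): annotate at least four corresponding points between a broadcast frame and a pitch template, fit the homography with OpenCV, and save it alongside the frame, repeating per image to build a dataset.

import cv2
import numpy as np

# Four hand-annotated correspondences (placeholder values): pixel coordinates in
# the camera frame and the matching coordinates on the pitch template image.
pts_frame = np.array([[120, 80], [910, 95], [980, 560], [60, 540]], dtype=np.float32)
pts_template = np.array([[0, 0], [525, 0], [525, 340], [0, 340]], dtype=np.float32)

# Fit the 3x3 homography mapping frame points onto the template.
H, _ = cv2.findHomography(pts_frame, pts_template)

np.save("frame_0001_homography.npy", H)  # hypothetical naming scheme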
