
predict-cluster's Introduction

Predict & Cluster

Introduction

This repository contains the code for the paper "PREDICT & CLUSTER: Unsupervised Skeleton Based Action Recognition" (CVPR 2020), which is available here. The unsupervised approach implements the two strategies discussed in the paper, Fixed-state (FS) and Fixed-weight (FW), and is tested on the NW-UCLA and NTU-RGB+D (60) datasets.


Abstract

We propose a novel system for unsupervised skeleton-based action recognition. Given inputs of body-keypoint sequences obtained during various movements, our system associates the sequences with actions. The system is based on an encoder-decoder recurrent neural network, where the encoder learns a separable feature representation within its hidden states by being trained on a prediction task. We show that under such unsupervised training, the encoder and decoder self-organize their hidden states into a feature space that groups similar movements into the same cluster and places distinct movements in distant clusters.
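For a concrete picture of this setup, below is a minimal PyTorch sketch of the encoder-decoder prediction idea. The single-layer GRUs, layer sizes, and zero-input decoder (which mirrors the Fixed-Weight/Fixed-Zero strategy clarified in the issues below) are illustrative assumptions, not the exact configuration of this repository.

import torch
import torch.nn as nn

class PredictCluster(nn.Module):
    """Minimal encoder-decoder predictor: the encoder's final hidden
    state is the feature later used for clustering (illustrative only)."""
    def __init__(self, input_size=60, hidden_size=2048):
        super().__init__()
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, input_size)  # map hidden states back to keypoints

    def forward(self, x):
        # x: (batch, seq_len, input_size) body-keypoint sequences
        _, h = self.encoder(x)                  # h: (1, batch, hidden_size)
        dec_in = torch.zeros_like(x)            # decoder conditioned on h only
        dec_out, _ = self.decoder(dec_in, h)    # (batch, seq_len, hidden_size)
        return self.out(dec_out), h.squeeze(0)  # reconstruction and feature

model = PredictCluster()
seq = torch.randn(4, 50, 60)               # e.g. NW-UCLA: 50 frames x 60 coordinates
recon, feat = model(seq)
loss = nn.functional.l1_loss(recon, seq)   # regression loss on the predicted keypoints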

Examples

The Python notebook ucla_demo.ipynb demonstrates the FS and FW strategies on the NW-UCLA dataset, which is available at http://wangjiangb.github.io/my_data.html. The preprocessed UCLA data is included in the ucla_data directory; please refer to ucla_demo.ipynb for more information. The data-preprocessing step for the UCLA dataset is also incorporated within the notebook and is ready to run. The results shown in the notebook are not quite as strong as those reported in the paper, but they are comparable.

The other Python scripts are extracted from the notebooks we wrote for the NTU-RGB+D 60 dataset, but they have not been tested yet.

Requirements

  1. TensorFlow 1.14.0
  2. Python 3
  3. scikit-learn 0.21.2
  4. matplotlib 3.1.0
  5. numpy 1.16.4

NEWS

A PyTorch implementation (by Jingyuan Li @christincha) is available. It is part of an extension of Predict-Cluster, Iterate-Cluster, which considers self-supervised active learning leveraging Predict-Cluster. You can find more details about it here: https://arxiv.org/abs/2006.06911. The Iterate-Cluster code will be available soon in a new linked GitHub repository.

If you are using PyTorch, you may only need the ucla_github_pytorch folder. Please check UCLA_Pytorch_Demo.ipynb for more details. The data for the PyTorch implementation are preprocessed and saved as HDF5 (h5py) files.
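If you want to peek at the preprocessed data before running the demo, a minimal h5py sketch is below; the file name and dataset keys are hypothetical placeholders, so check UCLA_Pytorch_Demo.ipynb for the actual ones.

import h5py

# "ucla_train.h5", "data", and "labels" are placeholder names --
# see UCLA_Pytorch_Demo.ipynb for the real file names and keys.
with h5py.File("ucla_train.h5", "r") as f:
    print(list(f.keys()))       # inspect what the file actually contains
    x = f["data"][:]            # e.g. (num_samples, frames, coords)
    y = f["labels"][:]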

NTU data

Please download the NTU data from their website. Then follow the example to clean the data, removing noisy samples and multi-actor cases, to obtain raw_train/test_data.pkl, and apply the view-invariant transform to get trans_train/test_data.pkl.
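As a rough illustration of what a view-invariant transform does, here is a NumPy sketch: translate the skeleton to a body-centred origin and rotate it into a body-defined frame. The joint indices, axis choices, and ordering are assumptions for illustration; the actual construction lives in NTU_View_Invariant_Transform.ipynb.

import numpy as np

def view_invariant(seq, hip=0, spine=1, l_sh=4, r_sh=8):
    """Rotate a skeleton sequence into a body-centred frame.
    seq: (frames, joints, 3). Joint indices here are placeholders."""
    first = seq[0]
    origin = first[hip]
    v1 = first[spine] - first[hip]      # "up" axis from hip to spine
    v1 = v1 / np.linalg.norm(v1)
    v2 = first[r_sh] - first[l_sh]      # shoulder axis
    v2 = v2 - v1 * np.dot(v2, v1)       # orthogonalize against v1
    v2 = v2 / np.linalg.norm(v2)
    v3 = np.cross(v1, v2)               # third axis completes the frame
    R = np.stack([v1, v2, v3], axis=1)  # columns are the new basis
    return (seq - origin) @ R           # translate, then rotate every frame

print(view_invariant(np.random.randn(50, 20, 3)).shape)  # (50, 20, 3)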

UWA3D data

Please follow the UWA3D Multiview Activity II dataset page to download the original dataset.
The view-invariant transformed data is available on Google Drive. This data is licensed under a Creative Commons Attribution 4.0 International License.

Citation

Please cite "PREDICT & CLUSTER: Unsupervised Skeleton Based Action Recognition" (CVPR 2020) when you use this code:

@inproceedings{su2020predict,
  title={Predict \& cluster: Unsupervised skeleton based action recognition},
  author={Su, Kun and Liu, Xiulong and Shlizerman, Eli},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9631--9640},
  year={2020}
}

predict-cluster's People

Contributors

dragonliu1995 · nikwl · shlizee · sukun1045


predict-cluster's Issues

Dimension of the hidden layer

Hello,

First of all I would like to thank you for releasing your implementation of your paper.

I have a question about the hidden size of the main autoencoder.

It seems to me that the UCLA data is made of sequences of shape (50, 60), so the autoencoder inputs have a dimension equal to 3000.

But the hidden layer of the autoencoder is 2048 (very close to 3000), so my question is: how does the autoencoder learn a useful representation in this case? It seems strange to me that the hidden layer size is not much lower.

Is it that the fixed-weight or fixed-state training strategies manage to generate better representations of the input, but with roughly the same dimensionality as the input?

I understand you then train another 6-layer autoencoder on the output features of the first AE to reduce the dimension to 256, and then use these final representations for clustering.

I would be very grateful if you could help me answer these questions.

Have a great day!
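For context on the second stage mentioned in this question, here is a minimal sketch of a feature-level autoencoder that compresses the 2048-d encoder states to a 256-d code; the intermediate widths are illustrative guesses, not the notebook's exact architecture.

import torch
import torch.nn as nn

# Intermediate widths are illustrative, not the notebook's exact ones.
enc = nn.Sequential(nn.Linear(2048, 1024), nn.ReLU(),
                    nn.Linear(1024, 512), nn.ReLU(),
                    nn.Linear(512, 256))
dec = nn.Sequential(nn.Linear(256, 512), nn.ReLU(),
                    nn.Linear(512, 1024), nn.ReLU(),
                    nn.Linear(1024, 2048))

feat = torch.randn(8, 2048)          # encoder final hidden states
code = enc(feat)                     # 256-d representation used for clustering
loss = nn.functional.mse_loss(dec(code), feat)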

UWA3D handling

Hi,
Could you please share the piece of the code showing how you handled the UWA3D dataset? Since the original website is not working and there is not much information on how to interpret the data values, it would be helpful if you could tell me how you manage it.
Please share the piece of code where you read the .mat files and feed the required data points as input to your network.

feature level auto-encoder does not work

Hi, thanks for your excellent work! But I ran into an issue.
When I trained the feature-level autoencoder (AEC) for the Fixed Weights decoder (FW), the KNN score looks worse than before training, as shown below:
[screenshots: KNN scores before and after training the feature-level autoencoder]
Is it normal?

eval of each person for ntu dataset

Hi, thanks for your great code base!

I saw issue #9 and learned that each training sample is one person.

In this case, is the accuracy comparable with other papers?
Could you please tell me how the accuracy is calculated?

About UWA3DII dataset

Thanks for your great work!
I am a student working on skeleton-based action recognition. Recently, I wanted to run experiments on the UWA3DII dataset with my model, but I cannot find the download link on the website. Since you performed experiments on the UWA3DII dataset, could you please provide a link to it?
It would be very helpful for me. Thanks in advance!

About NTU datasets

There are tr_path and te_path in the NTU preprocessing; how can we get these files? I downloaded the NTU datasets, but I found that the files have the .skeleton extension. After processing these data, I get the files train_data_joint.npy, train_label.pkl, val_data_joint.npy, and val_label.pkl. I wonder which one is needed?

Clarification re. Fixed Weight (FW) implementation

The intent of Fixed Weight (FW) and Fixed State (FS) strategies is to weaken the decoder such that the encoder final hidden state can be used as a robust representation for unsupervised skeleton-based action recognition.

In contrast to the FW name, in the original implementation the weights of the decoder are not kept fixed. The main feature of this strategy is that the decoder receives zero inputs, so that it reconstructs sequences based on the encoder's final state only. A more appropriate name for this strategy would probably be Fixed Zero (FZ). Thanks to hyk1996 and Yutasq for pointing this out and performing a detailed analysis.
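The distinction can be seen in a few lines of PyTorch: with zero inputs the reconstruction depends on the encoder's final state alone, yet the decoder's recurrent weights still receive gradients and are trained. This is a sketch of the mechanism, not the repository's code.

import torch
import torch.nn as nn

decoder = nn.GRU(60, 2048, batch_first=True)
h = torch.randn(1, 4, 2048, requires_grad=True)   # encoder final state
zeros = torch.zeros(4, 50, 60)                    # decoder input is all zeros...
out, _ = decoder(zeros, h)                        # ...so output depends on h alone
out.sum().backward()
# The recurrent weights still get nonzero gradients, i.e. "FZ", not truly fixed:
print(decoder.weight_hh_l0.grad.abs().sum() > 0)  # tensor(True)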

Rotation Matrix R

Thank you for your great project. I have two questions:

  1. In your paper, in Section 3 (Pre-processing of body keypoints), R = [v1|v2|v3], but in the file NTU_View_Invariant_Transform.ipynb, R = np.hstack([v2,v3,v1]). Why is the order of v1, v2, v3 different between your paper and your code? (See the snippet after this list.)
  2. I want to use this pre-processing method for the 2D-keypoint case. Do we just need to compute v1 and v2? Or do we need to pad zeros on the z-axis and compute the rotation matrix according to the formula in your paper? The order of v1, v2, v3 in question 1 makes me confused about the 2D-keypoint case.
    Thank you!
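For what it's worth, the two constructions in question 1 differ only by a column permutation, i.e. a relabeling of the output axes; the snippet below uses placeholder column vectors to show this.

import numpy as np

# Placeholder column vectors, just to compare the two orderings.
v1, v2, v3 = (np.random.randn(3, 1) for _ in range(3))
R_paper = np.hstack([v1, v2, v3])   # R = [v1|v2|v3] as written in the paper
R_code  = np.hstack([v2, v3, v1])   # ordering in NTU_View_Invariant_Transform.ipynb
print(np.allclose(R_code, R_paper[:, [1, 2, 0]]))   # True: a column permutation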

Action to predict

Hello, after dimensionality reduction through the encoder, how do you connect the reduced features with actions?

some question on KNN

As stated in the paper, the method is unsupervised, but the labels of the training data are used during the KNN evaluation. Is it OK to do it like this?
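For reference, this is the standard evaluation protocol for unsupervised representations: no labels are used to learn the features, and labels enter only in the downstream classifier. A sketch with scikit-learn and placeholder features:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features; in practice these come from the trained encoder.
train_feat, train_y = np.random.randn(200, 256), np.random.randint(0, 10, 200)
test_feat, test_y = np.random.randn(50, 256), np.random.randint(0, 10, 50)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(train_feat, train_y)     # labels used only for this evaluation step
print("KNN accuracy:", knn.score(test_feat, test_y))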

What's the shape of data at every stage

Could you tell me the shape of the data at every stage, such as before training, at the output of the encoder, and at the output of the decoder?
(I mean the dimensions of the action data. For example, (3, 300, 25) means 3 features, 300 frames, and 25 nodes.)
Good luck!

error with get_feature() when run the train.py

Hello, when I run train.py I run into a problem. I didn't change the code; can you help me?
The error shown is: ValueError: setting an array element with a sequence.
Something seems wrong at step == 200, where knn_feature = [] when get_feature() runs in Seq2Seq.py.
I printed the shapes of input_feed and output_feed; their formats are ([[,,,,,,]]) and ([]). Could it be that they don't match?

Any plan for Pytorch Version

Hey, this is really nice work. Do you have any plans to release a PyTorch version? I personally think the project would get more attention if there were one.

Best.

Train on 2D skeleton dataset

Hello, thank you for your excellent work. The code releases the preprocessed data for UCLA, UWA3D, and NTU RGB+D, and it seems they are all 3D skeleton data. Can this code work on a 2D skeleton dataset like COCO? If so, could you tell me how?
Thank you again.

Your method cannot be called unsupervised!

The method is not completely unsupervised. Only the feature-representation learning is unsupervised; you use labels for the KNN fitting. So the paper title is misleading, and so is the abstract, for example: "We propose a novel system for unsupervised skeleton-based action recognition" or "In contrast, our system is fully unsupervised, does not require action labels at any stage".
Have you tried to replace KNN with an unsupervised clustering approach?
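One fully label-free alternative the question hints at is to cluster the features (e.g. with k-means) and score with the usual Hungarian-matching cluster accuracy. A sketch with placeholder features is below; this is the standard recipe, not code from the repository.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def cluster_accuracy(y_true, y_pred):
    """Best one-to-one mapping from cluster ids to labels (Hungarian)."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)   # maximize matched pairs
    return cost[row, col].sum() / len(y_true)

feats = np.random.randn(300, 256)             # placeholder features
labels = np.random.randint(0, 10, 300)        # labels used only for scoring
pred = KMeans(n_clusters=10, n_init=10).fit_predict(feats)
print("cluster accuracy:", cluster_accuracy(labels, pred))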

Is something wrong in pytorch implementation

When training the system, the code should use the MSE or MAE loss, and in the autoencoder the L1 loss should be used, according to the paper. However, in the PyTorch implementation, I found that you used the L1 loss during training and MSE for the autoencoder. Did something go wrong?

L1 loss is used for the training:
[screenshot: training-loss code]
and MSE loss is used for clustering_knn_acc.
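For readers following along, the two losses being compared are PyTorch's nn.L1Loss (mean absolute error) and nn.MSELoss (mean squared error); which stage should use which is exactly what this issue asks.

import torch
import torch.nn as nn

pred, target = torch.randn(4, 50, 60), torch.randn(4, 50, 60)
l1 = nn.L1Loss()(pred, target)     # mean absolute error (MAE)
mse = nn.MSELoss()(pred, target)   # mean squared error
print(float(l1), float(mse))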

UCLA Data

Hi, do you have joint position information for the UCLA data? I want to visualize the skeletons but can't interpret the 60 points in the data.

About problem in running ucla_demo

I am a student who wants to reproduce the code. When I tried to run the code in ucla_demo at the FW training loop, I got the following error message:

---------------------------------------------------------------------------
InternalError                             Traceback (most recent call last)
<ipython-input-21-fb710833f5b5> in <module>
      5 for i in range(0,iterations+1):
      6     encoder_inputs_xyz,decoder_inputs_xyz,seq_len_enc = mini_batch(dsamp_train, seq_len=50, input_size=60, batch_size=64)
----> 7     _,train_loss = model.step(sess,encoder_inputs_xyz,decoder_inputs_xyz,seq_len_enc, False)

InternalError: 2 root error(s) found.
  (0) Internal: Blas GEMM launch failed : a.shape=(3200, 60), b.shape=(2048, 60), m=3200, n=2048, k=60
	 [[node gradients/prediction/pred_xyz/pred_skel/Tensordot/MatMul_grad/MatMul (defined at <ipython-input-12-adcec0763b40>:49) ]]
	 [[Adam/update/_154]]
  (1) Internal: Blas GEMM launch failed : a.shape=(3200, 60), b.shape=(2048, 60), m=3200, n=2048, k=60
	 [[node gradients/prediction/pred_xyz/pred_skel/Tensordot/MatMul_grad/MatMul (defined at <ipython-input-12-adcec0763b40>:49) ]]
0 successful operations.
0 derived errors ignored.

[full TensorFlow session and autograph stack trace elided; the failing op is
'prediction/pred_xyz/pred_skel/Tensordot/MatMul', created in Seq2SeqModelFW.__init__]
I tried to find the cause of the issue, but I'm still not sure what kind of problem it is. I checked the environment and it is correct, and the GPU memory should be enough.
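"Blas GEMM launch failed" in TF 1.x is frequently a GPU memory-allocation problem rather than a code bug. A common workaround (offered here as a suggestion, not part of the original thread) is to let the session grow its GPU memory on demand:

import tensorflow as tf   # TensorFlow 1.x, as required by the demo

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory on demand
sess = tf.Session(config=config)         # pass this session to the training loop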

UWA dataset

Hi, thank you, I found your code really useful. Since the original download link for the UWA dataset has expired, would it be possible for you to share this dataset, along with its pre-processing steps, in order to reproduce the results?

Time for NTU dataset.

Hey,

I am trying to reproduce the results on the NTU dataset with your PyTorch code, but it seems very slow. Could I ask how long it took you to get results on the NTU dataset (either in PyTorch or TensorFlow)?

Best.

Pretrained model on NTU-CS

Hello! I ran into a problem when trying to do cross-dataset validation. As we don't have enough GPUs, it's hard for us to train a converged PyTorch model. Would you mind sharing a pretrained PyTorch model on NTU-CS?

About encoder states trajectories visualization

Hi, thanks for your work on unsupervised skeleton-based action recognition.
I'm curious about the encoder-state trajectory visualization in your paper (Figure 4), and I would like to produce a similar visualization in my own work.
Some details of the PCA visualization are unclear to me. If I understand correctly, each trajectory represents one sample. Do you use all samples to run PCA and obtain the trajectories? If possible, could you share the visualization code?
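One plausible way to produce such a plot (a sketch under assumptions, not the authors' code): fit PCA on the per-frame encoder hidden states of all samples jointly, then draw each sample's projected sequence as a trajectory. hidden_states below is a random placeholder.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder: per-frame encoder hidden states, (num_samples, frames, hidden)
hidden_states = np.random.randn(20, 50, 2048)
n, t, d = hidden_states.shape

pca = PCA(n_components=2).fit(hidden_states.reshape(n * t, d))  # fit on all frames jointly
for seq in hidden_states:
    xy = pca.transform(seq)          # (frames, 2): one trajectory per sample
    plt.plot(xy[:, 0], xy[:, 1], alpha=0.6)
plt.xlabel("PC 1"); plt.ylabel("PC 2")
plt.title("Encoder hidden-state trajectories (PCA)")
plt.show()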
