
temporalconvolutionalnetworks's People

Contributors

colincsl


temporalconvolutionalnetworks's Issues

Conv layers used in ED-TCN

Hi Colin, thanks a lot for sharing this code; it is really helpful. May I ask a question about the implementation of your ED-TCN? In your CVPR 2017 paper, Figure 1 shows two conv layers in the encoder and two in the decoder, yet the text says "(e.g., 3 in the encoder)", and in the code I can only see one conv layer in the encoder ("Convolution1D", line 81 of tf_models.py). I might be missing something; which one is correct? Thanks.
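
For reference, a minimal Keras sketch of an encoder-decoder TCN of the kind the question describes; the layer widths, filter length, and sequence length below are placeholder assumptions rather than the repository's actual settings, and each encoder stage here uses a single Conv1D followed by pooling:

```python
# Minimal ED-TCN-style sketch (assumed hyperparameters, not the authors' values).
from keras.models import Model
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D, TimeDistributed, Dense

def ed_tcn_sketch(n_feat, n_classes, n_nodes=(64, 96), conv_len=25, max_len=512):
    inp = Input(shape=(max_len, n_feat))
    x = inp
    # Encoder: one temporal convolution plus pooling per stage.
    for n in n_nodes:
        x = Conv1D(n, conv_len, padding='same', activation='relu')(x)
        x = MaxPooling1D(pool_size=2)(x)
    # Decoder: mirror of the encoder with upsampling.
    for n in reversed(n_nodes):
        x = UpSampling1D(size=2)(x)
        x = Conv1D(n, conv_len, padding='same', activation='relu')(x)
    # Per-frame class probabilities.
    out = TimeDistributed(Dense(n_classes, activation='softmax'))(x)
    return Model(inp, out)
```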

Accelerometer features from 50 salads dataset

Hi

  1. Can you tell me whether the accelerometer features in the 50 Salads mid-level dataset (which I see form a 30x1 vector at each timestep) are raw acceleration values, or whether they have been processed?
  2. Assuming the 30x1 vector contains 3 consecutive values (acc_x, acc_y, acc_z) from each of the IMUs (which I believe numbered 8), are there 10 sensors, or are some values expected to be empty?

I just want to make sure whether I can use this with the IMU data I have for action detection.
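
As a purely illustrative sketch, if the 30-dimensional vector really were 10 sensors times 3 axes (an assumption raised by the question, not confirmed here), it could be viewed per sensor like this:

```python
import numpy as np

frame = np.arange(30.0)            # placeholder for one 30-d accelerometer feature vector
per_sensor = frame.reshape(10, 3)  # rows: sensors, columns: acc_x, acc_y, acc_z
print(per_sensor.shape)            # (10, 3)
```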

Can't download 50 Salads features

Hello, I would like to download the features for the 50 Salads dataset in order to train this model for academic research. I submitted the request but have not received any feedback. Can these pre-computed features still be downloaded? Thank you for your reply.

Version?

Could you share your pip list? I'm having trouble running your code because of module version issues.
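
As a hedged sketch, one way to collect the relevant versions from inside Python (the package list below is a guess at this repository's dependencies, not a verified requirements file):

```python
import importlib

# Assumed dependency names; adjust to match the actual imports in the repo.
for name in ["tensorflow", "keras", "numba", "numpy", "sklearn"]:
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "not installed")
```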

L and B values

I would like to thank you for sharing the code; it is really great work. I would appreciate it if you could tell me what L and B values you used for the Dilated TCN. I would also like to know whether the trained models are available.
Thank you!
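
For orientation only, a minimal Keras sketch of how B (number of blocks) and L (layers per block, with the dilation doubling per layer) typically parameterize a Dilated TCN as described in the paper; the filter count and the B/L values below are placeholders, not the authors' settings:

```python
# Dilated TCN sketch with assumed hyperparameters (B blocks, L layers per block).
from keras.models import Model
from keras.layers import Input, Conv1D, Add, TimeDistributed, Dense

def dilated_tcn_sketch(n_feat, n_classes, n_filters=128, B=4, L=5, max_len=512):
    inp = Input(shape=(max_len, n_feat))
    x = Conv1D(n_filters, 1, padding='same')(inp)    # project input to n_filters channels
    for b in range(B):                               # B blocks
        for l in range(L):                           # L layers per block
            dilation = 2 ** l                        # dilation doubles within a block
            h = Conv1D(n_filters, 2, padding='causal',
                       dilation_rate=dilation, activation='relu')(x)
            h = Conv1D(n_filters, 1, padding='same')(h)
            x = Add()([x, h])                        # residual connection
    out = TimeDistributed(Dense(n_classes, activation='softmax'))(x)
    return Model(inp, out)
```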

ImportError: No module named LCTM

I ran TCN_main.py using IPython and got the following error message:
/TCN/code/metrics.py in <module>()
3 from numba import jit, int64, boolean
4
----> 5 from LCTM import utils
6 import sklearn.metrics as sm
7
ImportError: No module named LCTM

I can't find an LCTM module that provides this utils import.
Why does this happen?
I would appreciate any suggestions.
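
A hypothetical workaround, assuming the missing package is the author's separate LCTM repository (https://github.com/colincsl/LCTM): install it from a clone, or point sys.path at a local clone before the repository's imports; the clone path below is a placeholder.

```python
import sys

# Placeholder path: directory that contains the cloned LCTM/ package.
sys.path.insert(0, "/path/to/LCTM")

from LCTM import utils  # the import from code/metrics.py should now resolve
```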

No module named nets

Hi,
I performed all the steps before running demo.py. The following is the error log:

File "main/demo.py", line 12, in
from nets import model_train as model
ImportError: No module named nets
Please provide pointers to run the code.

How to interpret the output.

id@ipadress:# python3 code/TCN_main.py
Using TensorFlow backend.
2020-06-22 02:48:46.974129: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/workspace/code/utils.py:124: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "partition_latent_labels" failed type inference due to: Untyped global name 'segment_intervals': cannot determine Numba type of <class 'function'>

File "code/utils.py", line 130:
def partition_latent_labels(Yi, n_latent):

Zi = Yi.copy()
intervals = segment_intervals(Yi)
^

@jit("int64[:](int64[:], int64)")
/workspace/code/utils.py:124: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "partition_latent_labels" failed type inference due to: Untyped global name 'segment_intervals': cannot determine Numba type of <class 'function'>

File "code/utils.py", line 130:
def partition_latent_labels(Yi, n_latent):

Zi = Yi.copy()
intervals = segment_intervals(Yi)
^

@jit("int64[:](int64[:], int64)")
/usr/local/lib/python3.6/dist-packages/numba/core/object_mode_passes.py:178: NumbaWarning: Function "partition_latent_labels" was compiled in object mode without forceobj=True, but has lifted loops.

File "code/utils.py", line 126:
def partition_latent_labels(Yi, n_latent):
if n_latent == 1:
^

state.func_ir.loc))
/usr/local/lib/python3.6/dist-packages/numba/core/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "code/utils.py", line 126:
def partition_latent_labels(Yi, n_latent):
if n_latent == 1:
^

state.func_ir.loc))
/workspace/code/metrics.py:143: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "levenstein_" failed type inference due to: No implementation of function Function() found for signature:

zeros(list(int64), Function(<class 'float'>))

There are 2 candidate implementations:

  • Of which 2 did not match due to:
    Overload in function 'zeros': File: : Line <N/A>.
    With argument(s): '(list(int64), Function(<class 'float'>))':
    No match.

During: resolving callee type: Function()
During: typing of call at /workspace/code/metrics.py (147)

File "code/metrics.py", line 147:
def levenstein_(p,y, norm=False):

n_col = len(y)
D = np.zeros([m_row+1, n_col+1], np.float)
^

@jit("float64(int64[:], int64[:], boolean)")
/workspace/code/metrics.py:143: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "levenstein_" failed type inference due to: cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'>

File "code/metrics.py", line 148:
def levenstein_(p,y, norm=False):

D = np.zeros([m_row+1, n_col+1], np.float)
for i in range(m_row+1):
^

@jit("float64(int64[:], int64[:], boolean)")
/usr/local/lib/python3.6/dist-packages/numba/core/object_mode_passes.py:178: NumbaWarning: Function "levenstein_" was compiled in object mode without forceobj=True, but has lifted loops.

File "code/metrics.py", line 145:
def levenstein_(p,y, norm=False):
m_row = len(p)
^

state.func_ir.loc))
/usr/local/lib/python3.6/dist-packages/numba/core/object_mode_passes.py:188: NumbaDeprecationWarning:
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit

File "code/metrics.py", line 145:
def levenstein_(p,y, norm=False):
m_row = len(p)
^

state.func_ir.loc))

Feat: 18

mid_video_SVM
/usr/local/lib/python3.6/dist-packages/numba/core/ir_utils.py:2031: NumbaPendingDeprecationWarning:
Encountered the use of a type that is scheduled for deprecation: type 'reflected list' found for argument 'p' of function 'levenstein_'.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types

File "code/metrics.py", line 153:
def levenstein_(p,y, norm=False):

for j in range(1, n_col+1):
^

warnings.warn(NumbaPendingDeprecationWarning(msg, loc=loc))
/usr/local/lib/python3.6/dist-packages/numba/core/ir_utils.py:2031: NumbaPendingDeprecationWarning:
Encountered the use of a type that is scheduled for deprecation: type 'reflected list' found for argument 'y' of function 'levenstein_'.

For more information visit http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types

File "code/metrics.py", line 153:
def levenstein_(p,y, norm=False):

for j in range(1, n_col+1):
^

warnings.warn(NumbaPendingDeprecationWarning(msg, loc=loc))
Trial Split_1: accuracy:61.51, edit_score:21.64, overlap_f1:30.84

/usr/local/lib/python3.6/dist-packages/numpy/core/asarray.py:136: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return array(a, dtype, copy=False, order=order, subok=True)
code/TCN_main.py:229: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
P_test_ = np.array(P_test)/float(n_classes-1)
code/TCN_main.py:230: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
y_test_ = np.array(y_test)/float(n_classes-1)

[The same VisibleDeprecationWarning block and the growing list of per-split results are reprinted between splits; the repeated output for Split_2 through Split_5 is omitted. The run ends with:]
All: accuracy:54.94 edit_score:21.43 overlap_f1:28.69
Trial Split_1: accuracy:61.51, edit_score:21.64, overlap_f1:30.84
Trial Split_2: accuracy:53.48, edit_score:20.79, overlap_f1:27.2
Trial Split_3: accuracy:50.36, edit_score:21.9, overlap_f1:28.03
Trial Split_4: accuracy:51.03, edit_score:20.17, overlap_f1:27.71
Trial Split_5: accuracy:58.34, edit_score:22.66, overlap_f1:29.69

I ran TCN_main.py on 50 Salads with your provided features.
Is this running correctly, or how should I fix it?
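
For what it's worth, the object-mode fallback reported above appears to come from levenstein_ allocating with the deprecated np.float alias and being called with Python lists. A hedged sketch of one possible fix, with the function body reconstructed from the log for illustration only (the decorator signature is the one shown in the warnings; nopython=True and the explicit float64 dtype are the changes):

```python
import numpy as np
from numba import jit

@jit("float64(int64[:], int64[:], boolean)", nopython=True)
def levenstein_(p, y, norm=False):
    # Segment-level edit distance; reconstructed sketch, not the repository's exact code.
    m_row, n_col = len(p), len(y)
    D = np.zeros((m_row + 1, n_col + 1), dtype=np.float64)  # was np.float
    for i in range(m_row + 1):
        D[i, 0] = i
    for j in range(n_col + 1):
        D[0, j] = j
    for i in range(1, m_row + 1):
        for j in range(1, n_col + 1):
            cost = 0.0 if p[i - 1] == y[j - 1] else 1.0
            D[i, j] = min(D[i - 1, j] + 1.0, D[i, j - 1] + 1.0, D[i - 1, j - 1] + cost)
    if norm:
        return (1.0 - D[m_row, n_col] / max(m_row, n_col)) * 100.0
    return D[m_row, n_col]

# Passing int64 arrays instead of Python lists also avoids the
# "reflected list" NumbaPendingDeprecationWarning seen in the log, e.g.:
# levenstein_(np.asarray(pred, dtype=np.int64), np.asarray(true, dtype=np.int64), True)
```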

Sensor data

Hi,

I believe that in output_features.py the sensor data from the 50 Salads accelerometers actually goes into "X", not "S"; is my understanding correct?

ImportError: No module named 'LCTM.IoU_metrics'

Can anybody tell me why this error occurs?
When I run TCN_main.py, the following error arises:
Traceback (most recent call last):
  File "TCN_main.py", line 40, in <module>
    import tf_models, datasets, utils, metrics
  File "/home/yangchihyuan/Research/TCN/code/metrics.py", line 8, in <module>
    from LCTM.IoU_metrics import *
ImportError: No module named 'LCTM.IoU_metrics'

I downloaded the LCTM module from GitHub and installed it with sudo -H python3 setup.py install.

I checked the installation with:

import LCTM
dir(LCTM)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

Where is the IoU_metrics?
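
A small diagnostic sketch may help here: dir(LCTM) only lists what the package's __init__ imports, so submodules can exist without appearing in it. The snippet below (names taken from the traceback above, otherwise an assumption) checks whether the installed copy actually ships an IoU_metrics submodule:

```python
import importlib
import pkgutil

import LCTM

print(LCTM.__file__)                                           # where the installed package lives
print([m.name for m in pkgutil.iter_modules(LCTM.__path__)])   # submodules actually installed
try:
    importlib.import_module("LCTM.IoU_metrics")
    print("LCTM.IoU_metrics imports fine")
except ImportError as err:
    print("IoU_metrics is missing from this installed copy:", err)
```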
